REST API Status codes (400 vs. 500)

In a project that we're working on, we're consuming REST APIs delivered by services built by another team working for the same customer. We had a discussion about what were appropriate error codes to return for various situations. The discussion boiled down to: should a service return a 500 error code or a 400 error code when a request cannot be processed?

I took a quick look at the documentation for a couple of the larger REST API providers and they are using the 500 code only for catastrophic failure and using the 400 code for anything related to query-input validation errors.

Microsoft Azure Common REST API Error Codes

Code 400:

  • The requested URI does not represent any resource on the server.
  • One of the request inputs is out of range.
  • One of the request inputs is not valid.
  • A required query parameter was not specified for this request.
  • One of the query parameters specified in the request URI is not supported.
  • An invalid value was specified for one of the query parameters in the request URI.

Code 500:

  • The server encountered an internal error. Please retry the request.
  • The operation could not be completed within the permitted time.
  • The server is currently unable to receive requests. Please retry your request.

Twitter Error Codes & Responses

Code 400:

The request was invalid or cannot be otherwise served. An accompanying error message will explain further.

Code 500:

Something is broken. Please post to the group so the Twitter team can investigate.

REST API Tutorial HTTP Status Codes

Code 400:

General error when fulfilling the request would cause an invalid state. Domain validation errors, missing data, etc. are some examples.

Code 500:

A generic error message, given when no more specific message is suitable. This is the general catch-all error for when the server side throws an exception. Use this only for errors that the consumer cannot address from their end -- never return this intentionally.

REST HTTP status codes

For input validation failure: 400 Bad Request + your optional description. This is suggested in the book "RESTful Web Services".
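To make the rule concrete, here is a minimal, untested sketch of how a controller might apply it in ASP.NET Web API; the PersonDto and IPersonRepository types are hypothetical placeholders.

using System.Web.Http;

public class PeopleController : ApiController
{
  private readonly IPersonRepository _repository;

  public PeopleController(IPersonRepository repository)
  {
    _repository = repository;
  }

  // POST api/people
  public IHttpActionResult Post(PersonDto person)
  {
    // Problems with the request itself are the caller's fault: answer with a
    // 400 and a description of what was wrong.
    if (person == null || string.IsNullOrEmpty(person.LastName))
    {
      return BadRequest("The 'lastName' field is required.");
    }

    // Anything that fails beyond this point is the server's fault. Letting an
    // exception propagate (or mapping it explicitly) yields a 500 for the caller.
    var id = _repository.Save(person);

    return Created("api/people/" + id, person);
  }
}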

Dealing with improper disposal in WCF clients

There's an old problem in generated WCF clients in which the Dispose() method calls Close() on the client irrespective of whether there was a fault. If there was a fault, then the method should call Abort() instead. Failure to do so causes another exception, which masks the original exception. Client code will see the subsequent fault rather than the original one. A developer running the code in debug mode will be misled as to what really happened.

See the article WCF Clients and the "Broken" IDisposable Implementation by David Barrett for a more in-depth analysis, but that's the gist of it.

This issue is still present in the ClientBase implementation in .NET 4.5.1. The linked article shows how you can add your own implementation of the Dispose() method in each generated client. An alternative is to use a generic adaptor if you don't feel like adding a custom dispose to every client you create.1

**public class** SafeClient<T> : IDisposable
  **where** T : ICommunicationObject, IDisposable
{
  **public** SafeClient(T client)
  {
    **if** (client == **null**) { **throw new** ArgumentNullException("client"); }

    Client = client;
  }
  
  **public** T Client { **get**; **private set**; }

  **public void** Dispose()
  {
    Dispose(**true**);
    GC.SuppressFinalize(**this**);
  }

  **protected virtual void** Dispose(**bool** disposing)
  {
    **if** (disposing)
    {
      **if** (Client != **null**)
      {
        **if** (Client.State == CommunicationState.Faulted) 
        {
          Client.Abort();
        }
        **else**
        {
          Client.Close();
        }

        Client = **default**(T);
      }
    }
  }  
}

To use your WCF client safely, you wrap it in the class defined above, as shown below.

**using** (**var** safeClient = **new** SafeClient<SystemLoginServiceClient>(**new** SystemLoginServiceClient(...)))
{
  **var** client = safeClient.Client;
  // Work with "client"
}

If you can figure out how to initialize your clients without passing parameters to the constructor, you could slim it down by adding a "new" generic constraint to the parameter T in SafeClient and then using the SafeClient as follows:

**using** (**var** safeClient = **new** SafeClient<SystemLoginServiceClient>())
{
  **var** client = safeClient.Client;
  // Work with "client"
}
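For reference, here is a sketch of what that slimmed-down variant might look like; the full Dispose(bool) pattern from above is elided for brevity and, like the rest of the code here, it hasn't been tested.

**public class** SafeClient<T> : IDisposable
  **where** T : ICommunicationObject, IDisposable, **new**()
{
  **public** SafeClient()
  {
    Client = **new** T();
  }

  **public** T Client { **get**; **private set**; }

  **public void** Dispose()
  {
    **if** (Client == **null**) { **return**; }

    // A faulted channel must be aborted; calling Close() on it would throw
    // and mask the original exception.
    **if** (Client.State == CommunicationState.Faulted)
    {
      Client.Abort();
    }
    **else**
    {
      Client.Close();
    }

    Client = **default**(T);
  }
}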


  1. The code included in this article is a sketch of a solution and has not been tested. It does compile, though.

OpenBSD takes on OpenSSL

Much of the Internet has been affected by the Heartbleed vulnerability in the widely used OpenSSL server-side software. The bug effectively allows anyone to collect random data from the memory of machines running the affected software, which was installed on about 60% of encrypted sites worldwide. A massive cleanup effort ensued, but the vulnerability has been in the software for two years, so there's no telling how much information was stolen in the interim.

The OpenSSL software is used not only to encrypt HTTPS connections to web servers but also to generate the certificates that undergird those connections as well as many PKIs. Since data could have been stolen over a period of two years, it should be assumed that certificates, usernames and passwords have been stolen as well. Pessimism is the only way to be sure.1

In fact, any data that was loaded into memory on a server running a pre-Heartbleed version of the OpenSSL software is potentially compromised.

How to respond

We should all generate new certificates, ensuring that the root certificate from which we generate has also been re-generated and is clean. We should also choose new passwords for all affected sites. I use LastPass to manage my passwords, which makes it much easier to use long, complicated and most importantly unique passwords. If you're not already using a password manager, now would be a good time to start.

And this goes especially for those who tend to reuse their password on different sites. If one of those sites is cracked, then the hacker can use that same username/password combination on other popular sites and get into your stuff everywhere instead of just on the compromised site.

Forking OpenSSL

Though there are those who are blaming open-source software, we should instead blame ourselves for using software of unknown quality to run our most trusted connections. That the software was designed and built without the required quality controls is a different issue. People are going to write bad software. If you use their free software and it ends up not being as secure as advertised, you have to take at least some of the blame on yourself.

Instead, the security experts and professionals who've written so many articles and done so many reviews over the years touting the benefits of OpenSSL should take more of the blame. They are the ones who misused their reputations by touting poorly written software to which they had source-code access, but were too lazy to perform a serious evaluation.

An advantage of open-source software is that we can at least pinpoint exactly when a bug appeared. Another is that the entire codebase is available to all, so others can jump in and try to fix it. Sure, it would have been nice if the expert security programmers of the world had jumped in earlier, but better late than never.

The site OpenSSL Rampage follows the efforts of the OpenBSD team to refactor and modernize the OpenSSL codebase. They are documenting their progress live on Tumblr, which collects commit messages, tweets, blog posts and official security warnings that result from their investigations and fixes.

They are working on a fork and are making radical changes, so it's unlikely that the changes will be taken up in the official OpenSSL project, but perhaps a new TLS/SSL tool will be available soon.2

VMS and custom memory managers

The messages tell tales of support for extinct operating systems like VMS, whose continued support makes for much more complicated code to support current OSs. This complexity, in turn, hides further misuses of malloc as well as misuses of custom buffer-allocation schemes that the OpenSSL team came up with because "malloc is too slow". Sometimes memory is freed twice for good measure.

The article Today's bugs have BRANDS? Be still my bleeding heart [logo] by Verity Stob has a (partially) humorous take on the most recent software errors that have reared their ugly heads. As also mentioned in that article, the Heartbleed Explained by Randall Munroe cartoon shows the Heartbleed issue well, even for non-technical people.

Lots o' cruft

This all sounds horrible and one wonders how the software runs at all. Don't worry: the code base contains a tremendous amount of cruft that is never used. It is compiled and still included, but it acts as a cozy nest of code that is wrapped around the actual code.

There are vast swaths of script files that haven't been used for years, that can build versions of the software under compilers and with options that haven't been seen on this planet since before... well, since before Tumblr or Facebook. For example, there's no need to retain a forest of macros at the top of many header files for the Metrowerks compiler for PowerPC on OS9. No reason at all.

There are also incompatibly licensed components in regular use as well as those associated with components that don't seem to be used anymore.

Modes and options and platforms: oh my!

There are compiler options for increasing resiliency that seem to work. Turning these off, however, yields an application that crashes immediately. There are clearly no tests for any of these modes. OpenSSL sounds like a classically grown system that has little in the way of code conventions, patterns or architecture. There seems to be no one who regularly cleans out and decides which code to keep and which to make obsolete. And, even when code is deemed obsolete, it remains in the code base over a decade later.

Security professionals wrote this?

This is to say nothing of how their encryption algorithm actually works. There are tales on that web site of the OpenSSL developers desperately having tried to keep entropy high by mixing in the current time every once in a while. Or even mixing in bits of the private key for good measure.

A lack of discipline (or skill)

The current OpenSSL codebase seems to be a minefield for security reviewers or for reviewers of any kind. A codebase like this is also terrible for new developers, the onboarding of which you want to encourage in such a widely used, distributed, open-source project.

Instead, the current state of the code says: don't touch, you don't know what to change or remove because clearly the main developers don't know either. The last person who knew may have died or left the project years ago.

It's clear that the code has not been reviewed in the way that it should be. Code on this level and for this purpose needs good developers/reviewers who constantly consider most of the following points during each review:

  • Correctness (does the code do what it should? Does it do it in an acceptable way?)
  • Patterns (does this code invent its own way of doing things?)
  • Architecture (is this feature in the right module?)
  • Security implications
  • Performance
  • Memory leaks/management (as long as they're still using C, which they honestly shouldn't be)
  • Supported modes/options/platforms
  • Third-party library usage/licensing
  • Automated tests (are there tests for the new feature or fix? Do existing tests still run?)
  • Comments/documentation (is the new code clear in what it does? Any tips for those who come after?)
  • Syntax (using braces can be important)

Living with OpenSSL (for now)

It sounds like it is high time that someone does what the BSD team is doing. A spring cleaning can be very healthy for software, especially once it's reached a certain age. That goes double for software that was blindly used by 60% of the encrypted web sites in the world.

It's wonderful that OpenSSL exists. Without it, we wouldn't be as encrypted as we are. But the apparent state of this code bespeaks a failure of management at all levels. The developers of software this important must be of higher quality. They must be the best of the best, not just anyone who read about encryption on Wikipedia and "wants to help". Wanting to help is nice, but you have to know what you're doing.

OpenSSL will be with us for a while. It may be crap code and it may lack automated tests, but it has been manually (and possibly regression-) tested and used a lot, so it has earned a certain badge of reliability and predictability. The state of the code means only that future changes are riskier, not necessarily that the current software is not usable.

Knowing that the code is badly written should make everyone suspicious of patches -- which we now know are likely to break something in that vast pile of C code -- but not suspicious of the officially supported versions from Debian and Ubuntu (for example). Even if the developer team of OpenSSL doesn't test a lot (or not automatically for all options, at any rate -- they may just be testing the "happy path"), the major Linux distros do. So there's that comfort, at least.



  1. As Ripley so famously put it in the movie Aliens: "I say we take off and nuke the entire site from orbit. It's the only way to be sure."

  2. It will, however, be quite a while before the new fork is as battle-tested as OpenSSL.

The Internet of Things

This article originally appeared on earthli News and has been cross-posted here.


The article Smart TVs, smart fridges, smart washing machines? Disaster waiting to happen by Peter Bright discusses the potential downsides to having a smart home1: namely our inability to create smart software for our mediocre hardware. And once that software is written and spread throughout dozens of devices in your home, it will function poorly and quickly be taken over by hackers because "[h]ardware companies are generally bad at writing software -- and bad at updating it."

And, should hackers fail to crack your stove's firmware immediately, for the year or two where the software works as designed, it will, in all likelihood, "[...] be funneling sweet, sweet, consumer analytics back to the mothership as fast as it can", as one commentator on that article put it.

Manufacturers aren't in business to make you happy

Making you happy isn't even incidental to their business model now that monopolies have ensured that there is nowhere you can turn to get better service. Citing from the article above:

These devices will inevitably be abandoned by their manufacturers, and the result will be lots of "smart" functionality -- fridges that know what we buy and when, TVs that know what shows we watch -- all connected to the Internet 24/7, all completely insecure.

Manufacturers almost exclusively design hardware with extremely short lifetimes, hewing to planned obsolescence. While this is a great capitalist strategy, it is morally repugnant to waste so many resources and so much energy to create gadgets that will break in order to force consumers to buy new gadgets. Let's put that awful aspect of our civilization to the side for a moment and focus on other consequences.

These same manufacturers are going to take this bulletproof strategy to appliances that have historically had much longer lifetimes. They will also presumably take their extremely lackluster reputation for updating firmware and software into this market. The software will be terrible to begin with, it will be full of security holes and it will receive patches for only about 10% of its expected lifetime. What could possibly go wrong?

Either the consumer will throw away a perfectly good appliance in order to upgrade the software or the appliance will be an upstanding citizen of one, if not several, botnets. Or perhaps other, more malicious services will be funneling information about you and your household to others, all unbeknownst to you.

People are the problem2

These are not scare tactics; this is an inevitability. People have proven themselves to be wildly incapable of comprehending the devices that they already have. They have no idea how they work and have only vague ideas of what they're giving up. It might as well be magic to them. To paraphrase the classic Arthur C. Clarke quotation: "Any sufficiently advanced technology is indistinguishable from magic" -- especially for a sufficiently technically oblivious audience.

Start up a new smart phone and try to create your account on it. Try to do so without accidentally giving away the keys to your data-kingdom. It is extremely difficult to do, even if you are technically savvy and vigilant.

Most people just accept any conditions, store everything everywhere, use the same terribly insecure password for everything and don't bother locking down privacy options, even if available. Their data is spread around the world in dozens of places and they've implicitly given away perpetual licenses to anything they've ever written or shot or created to all of the big providers.

They are sheep ready to be sheared by not only the companies they thought they could trust, but also by national spy agencies and technically adept hackers who've created an entire underground economy fueled by what can only be called deliberate ignorance, shocking gullibility and a surfeit of free time and disposable income.

The Internet of Things

The Internet of Things is a catch-phrase that describes a utopia where everything is connected to everything else via the Internet and a whole universe of new possibilities explode out of this singularity that will benefit not only mankind but the underlying effervescent glory that forms the strata of existence.

The article Ars readers react to Smart fridges and the sketchy geography of normals follows up the previous article and includes the following comment:

What I do want, is the ability to check what's in my fridge from my phone while I'm out in the grocery store to see if there's something I need.

That sounds so intriguing, doesn't it? How great would that be? The one time a year that you actually can't remember what you put in your refrigerator. On the other hand, how the hell can your fridge tell what you have? What are the odds that this technology will even come close to functioning as advertised? Would it not be more reasonable for your grocery purchases to go to a database and for you to tell that database when you've actually used or thrown out ingredients? Even if your fridge was smart, you'd have to wire up your dry-goods pantry in a similar way and commit to only storing food in areas that are under surveillance.

The commentator went on to write,

I do agree that security is a huge, huge issue, and one that needs to be addressed. But I really don't see how resisting the "Internet of things" is the longterm solution. The way technology seems to be trending, this is an inevitability, not a could be.

Resisting the "Internet of things" is not being proposed as the long-term solution. It is being proposed as a short- to medium-term solution because the purveyors of this shining vision of nirvana have proven themselves time and again to be utterly incapable of actually delivering the panaceas that they promise in a stream of consumption-inducing fraud. Instead, they consistently end up lining their own pockets while we all fritter away even more precious waking time ministering to the retarded digital children that they've birthed from their poisoned loins and foisted upon us.

Stay out of it, for now

Hand-waving away the almost-certain security catastrophe as if it can be easily solved is extremely disingenuous. This is not a world that anyone really wants to take part in until the security problems are solved. You do not want to be an early adopter here. And you most especially do not want to do so by buying the cheapest, most-discounted model available as people are also wont to do. Stay out of the fight until the later rounds: remove the SIM card, shut off Internet connectivity where it's not needed and shut down Bluetooth.

The best-case scenario is that early adopters will have their time wasted. Early rounds of software promise to be a tremendous time-suck for all involved. Managing a further herd of purportedly more efficient and optimized devices is a sucker's game. The more you buy, the less likely you are to be in charge of what you do with your free time.

As it stands, we already fight with our phones, begging them to connect to inadequate data networks and balky WLANs. We spend inordinate amounts of time trying to trick their garbage software into actually performing any of its core services. Failing that -- which is an inevitability -- we simply live with the mediocrity, wasting our time every day babysitting gadgets and devices and software that are supposed to be working for us.

Instead, it is we who end up performing the same monotonous and repetitive tasks dozens of times every day because the manufacturers have -- usually in a purely self-interested and quarterly revenue-report driven rush to market -- utterly failed to test the basic functions of their devices. Subsequent software updates do little to improve this situation, generally avoiding fixes for glaring issues in favor of adding social-network integration or some other marketing-driven hogwash.

Avoiding this almost-certain clusterf*#k does not make you a Luddite. It makes you a realist, an astute observer of reality. There has never been a time in history when so much content and games and media has been at the fingertips of anyone with a certain standard of living. At the same time, though, we seem to be so bedazzled by this wonder that we ignore the glaring and wholly incongruous dreadfulness of the tools that we are offered to navigate, watch and curate it.

If you just use what you're given without complaint, then things will never get better. Stay on the sidelines and demand better -- and be prepared to wait for it.



  1. Or a smart car or anything smart that works perfectly well without being smart.

  2. To be clear: the author is not necessarily excluding himself here. It's not easy to turn on, tune in and drop out, especially when your career is firmly in the tech world. It's also not easy to be absolutely aware of what you're giving up as you make use of the myriad of interlinked services offered to you every day.

Mixing your own SQL into Quino queries: part 2 of 2

In the first installment, we covered the basics of mixing custom SQL with ORM-generated queries. We also took a look at a solution that uses direct ADO database access to perform arbitrarily complex queries.

In this installment, we will see more elegant techniques that make use of the CustomCommandText property of Quino queries. We'll approach the desired solution in steps, proceeding from attempt #1 through attempt #5.

tl;dr: Skip to attempt #5 to see the final result without learning why it's correct.

Attempt #1: Replacing the entire query with custom SQL

An application can assign the CustomCommandText property of any Quino query to override some of the generated SQL. In the example below, we override all of the text, so that Quino doesn't generate any SQL at all. Instead, Quino is only responsible for sending the request to the database and materializing the objects based on the results.

[Test]
public void TestExecuteCustomCommand()
{
  var people = Session.GetList<Person>();

  people.Query.CustomCommandText = new CustomCommandText
  {
    Text = @"
SELECT ALL 
""punchclock__person"".""id"", 
""punchclock__person"".""companyid"", 
""punchclock__person"".""contactid"", 
""punchclock__person"".""customerid"", 
""punchclock__person"".""initials"", 
""punchclock__person"".""firstname"", 
""punchclock__person"".""lastname"", 
""punchclock__person"".""genderid"", 
""punchclock__person"".""telephone"", 
""punchclock__person"".""active"", 
""punchclock__person"".""isemployee"", 
""punchclock__person"".""birthdate"", 
""punchclock__person"".""salary"" 
FROM punchclock__person WHERE lastname = 'Rogers'"
  };

  Assert.That(people.Count, Is.EqualTo(9));
}

This example solves two of the three problems outlined above:

  • It uses only a single query.
  • It will work with a remote application server (although it makes assumptions about the kind of SQL expected by the backing database on that server).
  • But it is even more fragile than the previous example as far as hard-coded SQL goes. You'll note that the fields expected by the object-materializer have to be explicitly included in the correct order.

Let's see if we can address the third issue by getting Quino to format the SELECT clause for us.

Attempt #2: Generating the SELECT clause

The following example uses the AccessToolkit of the IQueryableDatabase to format the list of properties obtained from the metadata for a Person. The application no longer makes assumptions about which properties are included in the select statement, what order they should be in or how to format them for the SQL expected by the database.

[Test]
public virtual void TestExecuteCustomCommandWithStandardSelect()
{
  var people = Session.GetList<Person>();

  var accessToolkit = DefaultDatabase.AccessToolkit;
  var properties = Person.Metadata.DefaultLoadGroup.Properties;
  var fields = properties.Select(accessToolkit.GetField);

  people.Query.CustomCommandText = new CustomCommandText
  {
    Text = string.Format(
      @"SELECT ALL {0} FROM punchclock__person WHERE lastname = 'Rogers'",
      fields.FlattenToString()
    )
  };

  Assert.That(people.Count, Is.EqualTo(9));
}

This example fixes the problem with the previous one but introduces a new problem: it no longer works with a remote application server because it assumes that the client-side driver is a database with an AccessToolkit. The next example addresses this problem.

Attempt #3: Using a hard-coded AccessToolkit

The version below uses a hard-coded AccessToolkit so that it doesn't rely on the external data driver being a direct ADO database. It still makes an assumption about the database on the server but that is usually quite acceptable because the backing database for most applications rarely changes.1

[Test]
public void TestCustomCommandWithPostgreSqlSelect()
{
  var people = Session.GetList<Person>();

  var accessToolkit = new PostgreSqlMetaDatabase().AccessToolkit;
  var properties = Person.Metadata.DefaultLoadGroup.Properties;
  var fields = properties.Select(accessToolkit.GetField);

  people.Query.CustomCommandText = new CustomCommandText
  {
    Text = string.Format(
      @"SELECT ALL {0} FROM punchclock__person WHERE lastname = 'Rogers'",
      fields.FlattenToString()
    )
  };

  Assert.That(people.Count, Is.EqualTo(9));
}

We now have a version that satisfies all three conditions to a large degree. The application uses only a single query and the query works with both local databases and remoting servers. It still makes some assumptions about database-schema names (e.g. "punchclock__person" and "lastname"). Let's see if we can clean up some of these as well.

Attempt #4: Replacing only the where clause

Instead of replacing the entire query text, an application can replace individual sections of the query, letting Quino fill in the rest of the query with its standard generated SQL. An application can append or prepend text to the generated SQL or replace it entirely. Because the condition for our query is so simple, the example below replaces the entire WHERE clause instead of adding to it.

[Test]
public void TestCustomWhereExecution()
{
  var people = Session.GetList<Person>();

  people.Query.CustomCommandText = new CustomCommandText();
  people.Query.CustomCommandText.SetSection(
    CommandTextSections.Where, 
    CommandTextAction.Replace, 
    "lastname = 'Rogers'"
  );

  Assert.That(people.Count, Is.EqualTo(9));
}

That's much nicer -- still not perfect, but nice. The only remaining quibble is that the identifier lastname is still hard-coded. If the model changes in a way where that property is renamed or removed, this code will continue to compile but will fail at run-time. This is a not insignificant problem if your application ends up using these kinds of queries throughout its business logic.

Attempt #5: Replacing the where clause with generated field names

In order to fix this query and have a completely generic query that fails to compile should anything at all change in the model, we can mix in the technique that we used in attempts #2 and #3: using the AccessToolkit to format fields for SQL. To make the query 100% statically checked, we'll also use the generated metadata -- LastName -- to indicate which property we want to format as SQL.

[Test]
public void TestCustomWhereExecution()
{
  var people = Session.GetList<Person>();

  var accessToolkit = new PostgreSqlMetaDatabase().AccessToolkit;
  var lastNameField = accessToolkit.GetField(Person.MetaProperties.LastName);

  people.Query.CustomCommandText = new CustomCommandText();
  people.Query.CustomCommandText.SetSection(
    CommandTextSections.Where, 
    CommandTextAction.Replace, 
    string.Format("{0} = 'Rogers'", lastNameField)
  );

  Assert.That(people.Count, Is.EqualTo(9));
}

The query above satisfies all of the conditions we outlined above. Of course, the condition here is quite simple and real-world business logic will likely be much more complex. For those situations, the best approach is to fall back to the direct ADO approach, mixed with Quino facilities like the AccessToolkit as much as possible, to create a fully customized SQL text.

Many thanks to Urs for proofreading and suggestions on overall structure.



  1. If an application needs to be totally database-agnostic, then it will need to do some extra legwork that we won't cover in this post.

Mixing your own SQL into Quino queries: part 1 of 2

The Quino ORM1 manages all CrUD -- Create, Update, Delete -- operations for your application. This basic behavior is generally more than enough for standard user interfaces. When a user works with a single object in a window and saves it, there really isn't that much to optimize.

Modeled methods

A more complex editing process may include several objects at once and perhaps trigger events that create additional auditing objects. Even in these cases, there are still only a handful of save operations to execute. To keep the architecture clean, an application is encouraged to model these higher-level operations with methods in the metadata (modeled methods).

The advantage to using modeled methods is that they can be executed in an application server as well as locally in the client. When an application uses a remote application server rather than a direct connection to a database, modeled methods are executed in the service layer and therefore have much less latency to the database.

When Quino's query language isn't enough

If an application needs even more optimization, then it may be necessary to write custom SQL -- or even to use stored procedures to move the query into the database. Mixing SQL with an ORM can be a tricky business. It's even more of a challenge with an ORM like that in Quino, which generates the database schema and shields the user from tables, fields and SQL syntax almost entirely.

What are the potential pitfalls when using custom query text (e.g. SQL) with Quino?

  • Schema element names: An application needs to figure out the names of database objects like tables and columns. It would be best not to hard-code them, so that the custom code stays in sync when the model changes.

    • If the query is in a stored procedure, then the database may ensure that the code is updated or at least checked when the schema changes.2
    • If the query is in application code, then care can be taken to keep that query in sync with the model.
  • Materialization: In particular, the selected fields in a projection must match the expectations of the ORM exactly so that it can materialize the objects properly. We'll see how to ensure this in examples below.

There are two approaches to executing custom code:

  • ADO: Get a reference to the underlying ADO infrastructure to execute queries directly without using Quino at all. With this approach, Quino can still help an application retrieve properly configured connections and commands.
  • CustomCommandText: An application commonly adds restrictions and sorts to the IQuery object using expressions, but can also add text directly to enhance or replace sections of the generated query.

All of the examples below are taken directly from the Quino test suite. Some variables -- like DefaultDatabase -- are provided by the Quino base testing classes but their purpose, types and implementation should be relatively obvious.

Using ADO directly

You can use the AdoDataConnectionTools to get the underlying ADO connection for a given Session so that any commands you execute are guaranteed to be executed in the same transactions as are already active on that session. If you use these tools, your ADO code will also automatically use the same connection parameters as the rest of your application without having to use hard-coded connection strings.

The first example is a test from the Quino framework that shows how easy it is to combine results returned from another method into a standard Quino query.

[Test]
public virtual void TestExecuteAdoDirectly()
{
  var ids = GetIds().ToList();
  var people = Session.GetList<Person>();

  people.Query.Where(Person.MetaProperties.Id, ExpressionOperator.In, ids);

  Assert.That(people.Count, Is.EqualTo(9));
}

The ADO-access code is hidden inside the call to GetIds(), the implementation for which is shown below. Your application can get the connection for a session as described above and then create commands using the same helper class. If you call CreateCommand() directly on the ADO connection, you'll have a problem when running inside a transaction on SQL Server. The SQL Server ADO implementation requires that you assign the active transaction object to each command. Quino takes care of this bookkeeping for you if you use the helper method.

private IEnumerable<int> GetIds()
{
  using (var helper = AdoDataConnectionTools.GetAdoConnection(Session, "Name"))
  {
    using (var command = helper.CreateCommand())
    {
      command.AdoCommand.CommandText = 
        @"SELECT id FROM punchclock__person WHERE lastname = 'Rogers'";

      using (var reader = command.AdoCommand.ExecuteReader())
      {
        while (reader.Read())
        {
          yield return reader.GetInt32(0);
        }
      }
    }
  }
}
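As an aside, the sketch below shows roughly the bookkeeping that the helper takes care of for you. It uses plain ADO.NET types (System.Data.SqlClient) rather than the Quino helpers, is untested and assumes a connectionString defined elsewhere.

using (var connection = new SqlConnection(connectionString))
{
  connection.Open();

  using (var transaction = connection.BeginTransaction())
  using (var command = connection.CreateCommand())
  {
    // SQL Server requires that a command created while a transaction is active
    // on the connection be explicitly assigned that transaction; otherwise
    // execution fails.
    command.Transaction = transaction;
    command.CommandText =
      @"SELECT id FROM punchclock__person WHERE lastname = 'Rogers'";

    using (var reader = command.ExecuteReader())
    {
      while (reader.Read())
      {
        Console.WriteLine(reader.GetInt32(0));
      }
    }

    transaction.Commit();
  }
}

With the Quino helper, the transaction assignment above happens automatically.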

There are a few drawbacks to this approach:

  • Your application will make two queries instead of one.
  • The hard-coded SQL will break if you make model changes that affect those tables and fields.
  • The ADO approach only works if the application has a direct connection to the database. An application that uses ADO will not be able to switch to an application-server driver without modification.

In the second part, we will improve on this approach by using the CustomCommandText property of a Quino query. This will allow us to use only a single query. We will also improve maintainability by reducing the amount of code that isn't checked by the compiler (e.g. the SQL text above).

Stay tuned for part 2, coming soon!

Many thanks to Urs for proofreading and suggestions on overall structure.



  1. This article uses features of Quino that will only become available in version 1.12. Almost all of the examples will also work in earlier versions but the AdoDataConnectionTools is not available until 1.12. The functionality of this class can, however, be back-ported if necessary.

  2. More likely, though, is that the Quino schema migration will be prevented from applying updates if there are custom stored procedures that use tables and columns that need to be changed.

v1.11.0: Improvements to local evaluation & remoting

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

  • Local evaluation: improved support for local evaluation in combination with remoting, sorting, limits and offsets (QNO-4330, QNO-3655, QNO-4224)
  • Many other bug fixes and minor improvements

Breaking changes

  • No known breaking changes.

Java 8

This article discusses the initial version of Java 8 and compares it with C# under .NET 4.5.1. I have not used Java 8 and I have not tested that any of the examples -- Java or C# -- even compile, but they should be pretty close to valid.

Java 8 has finally been released and -- drum roll, please -- it has closures/lambdas, as promised! I would be greeting this as champagne-cork-popping news if I were still a Java programmer.1 As an ex-Java developer, I greet this news more with an ambivalent shrug than with any overarching joy. It's a sunny morning and I'm in a good mood, so I'm able to suppress what would be a more than appropriate comment: "it's about time".

Since I'm a C# programmer, I'm more interested in peering over the fence at the pile of goodies that Java just received for its eighth birthday and seeing if it got something "what I ain't got". I found a concise list of new features in the article Will Java 8 Kill Scala? by Ahmed Soliman and was distraught/pleased2 to discover that Java had in fact gotten two presents that C# doesn't already have.

As you'll see, these two features aren't huge and the lack of them doesn't significantly impact design or expressiveness, but you know how jealousy works:

Jealousy doesn't care.

Jealousy is.

I'm sure I'll get over it, but it will take time.3

Default methods and static interface methods

Java 8 introduces support for static methods on interfaces as well as default methods that, taken together, amount to functionality that is more or less what extension methods bring to C#.

In Java 8, you can define static methods on an interface, which is nice, but it becomes especially useful when combined with the keyword default on those methods. As defined in Default Methods:

Default methods enable you to add new functionality to the interfaces of your libraries and ensure binary compatibility with code written for older versions of those interfaces.

In Java, you no longer have to worry that adding a method to an interface will break implementations of that interface in other jar files that have not yet been recompiled against the new version of the interface. You can avoid that by adding a default implementation for your method. This applies only to those methods where a default implementation is possible, of course.

The page includes an example but it's relatively obvious what it looks like:

**public interface** ITransformer
{
  **string** Adjust(**string** value);
  **string** NewAdjust(**string** value)
  {
    **return** value.Replace(' ', '\t');
  }
}

How do these compare with extension methods in C#?

Extension methods are nice because they allow you to quasi-add methods to an interface without requiring an implementor to actually implement them. My rule of thumb is that any method that can be defined purely in terms of the public API of an interface should be defined as an extension method rather than added to the interface.

Java's default methods are a twist on this concept that addresses a limitation of extension methods. What is that limitation? That the method definition in the extension method can't be overridden by the actual implementation behind the interface. That is, the default implementation can be expressed purely in terms of the public interface, but perhaps a specific implementor of the interface would like to do that plus something more. Or would perhaps like to execute the extension method in a different way, but only for a specific implementation. There is no way to do this with extension methods.

Interface default methods in Java 8 allow you to provide a fallback implementation but also allow any class to actually implement that method and override the fallback.
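A short C# example illustrates the difference. The helper below is hypothetical and exists only for illustration; it is defined purely in terms of the public API of ITransformer, exactly as the rule of thumb above suggests.

**public static class** TransformerExtensions
{
  // Defined purely in terms of the public API of ITransformer.
  **public static string** AdjustTwice(**this** ITransformer transformer, **string** value)
  {
    **return** transformer.Adjust(transformer.Adjust(value));
  }
}

A class implementing ITransformer can declare its own AdjustTwice(), but any code that works against the ITransformer interface still binds statically to the extension method. A Java 8 default method with the same body, by contrast, can simply be overridden by the implementing class.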

Functional Interfaces

Functional interfaces are a nice addition, too, and something I've wanted in C# for some time. Erik Meijer of Microsoft doesn't miss an opportunity to point out that this is a must for functional languages (he's exaggerating, but the point is taken).

Saying that a language supports functional interfaces simply means that a lambda defined in that language can be assigned to any interface with a single method that has the same signature as that lambda.

An example in C# should make things clearer:

**public interface** ITransformer
{
  **string** Adjust(**string** value);
}

**public static class** Utility
{
  **public static void** WorkOnText(**string** text, ITransformer transformer)
  {
    // Do work
  }
}

In order to call WorkOnText() in C#, I am required to define a class that implements ITransformer. There is no other way around it. However, in a language that allows functional interfaces, I could call the method with a lambda directly. The following code looks like C# but won't actually compile.

Utility.WorkOnText(
  "Hello world",
  s => s.Replace("Hello", "Goodbye cruel")
);

For completeness, let's also see how much extra code it takes to do this in C#, which has no functional interfaces.

**public class** PessimisticTransformer : ITransformer
{
  **public string** Adjust(**string** value)
  {
    **return** value.Replace("Hello", "Goodbye cruel");
  }
}

Utility.WorkOnText(
  "Hello world",
  **new** PessimisticTransformer()
);

That's quite a huge difference. It's surprising that C# hasn't gotten this functionality yet. It's hard to see what the downside is for this feature -- it doesn't seem to alter semantics.

While it is supported in Java, there are other restrictions. The signature has to match exactly. What happens if we add an optional parameter to the interface-method definition?

**public interface** ITransformer
{
  **string** Adjust(**string** value, ITransformer additional = **null**);
}

In the C# example, the class implementing the interface would have to be updated, of course, but the code at calling location remains unchanged. The functional interface's definition is the calling location, so the change would be closer to the implementation instead of more abstracted from it.

**public class** PessimisticTransformer : ITransformer
{
  **public string** Adjust(**string** value, ITransformer additional = **null**)
  {
    **return** value.Replace("Hello", "Goodbye cruel");
  }
}

// Using a class
Utility.WorkOnText(
  "Hello world",
  **new** PessimisticTransformer()
);

// Using a functional interface
Utility.WorkOnText(
  "Hello world",
  (s, a) => s.Replace("Hello", "Goodbye cruel")
);

I would take the functional interface any day.

Java Closures

As a final note, Java 8 has finally acquired closures/lambdas4 but there is a limitation on which functions can be passed as lambdas. It turns out that the inclusion of functional interfaces is a workaround for not having first-class functions in the language.

Citing the article,

[...] you cannot pass any function as first-class to other functions, the function must be explicitly defined as lambda or using Functional Interfaces

While in C# you can assign any method with a matching signature to a lambda variable or parameter, Java requires that the method first be wrapped in a lambda or otherwise "explicitly defined as lambda" before it can be passed. This isn't a limitation on expressiveness but may lead to clutter.

In C# I can write the following:

// These methods live in a static class so that Alter can be an extension method.
**public static string** Twist(**string** value)
{
  **return new string**(value.Reverse().ToArray());
}

**public static string** Alter(**this string** value, Func<**string**, **string**> func)
{
  **return** func(value);
}

**public static string** ApplyTransformations(**string** value)
{
  **return** value.Alter(Twist).Alter(s => **new string**(s.Reverse().ToArray()));
}

This example shows how you can declare a Func to indicate that the parameter is a first-class function. I can pass the Twist function or I can pass an inline lambda, as shown in ApplyTransformations. However, in Java, I can't declare a Func: only functional interfaces. In order to replicate the C# example above in Java, I would do the following:

**public** String twist(String value)
{ 
  **return** new StringBuilder(value).reverse().toString();
}

**public** String alter(String value, ITransformer transformer)
{
  **return** transformer.adjust(value);
}

**public** String applyTransformations(String value)
{
  **return** alter(alter(value, s -> twist(s)), s -> new StringBuilder(s).reverse().toString());
}

Note that the Java example cannot pass Twist directly; instead, it wraps it in a lambda so that it can be passed as a functional interface. Also, the C# example uses an extension method, which allows me to "add" methods to class string, which is not really possible in Java.

Overall, though, while these things feel like deal-breakers to a programming-language snob5 -- especially those who have a choice as to which language to use -- Java developers can rejoice that their language has finally acquired features that both increase expressiveness and reduce clutter.6

As a bonus, as a C# developer, I find that I don't have to be so jealous after all.

Though I'd still really like me some functional interfaces.



  1. Even if I were still a Java programmer, the champagne might still stay in the bottle because adoption of the latest runtime in the Java world is extremely slow-paced. Many projects and products require a specific, older version of the JVM and preclude updating to take advantage of newer features. The .NET world naturally has similar limitations but the problem seems to be less extreme.

  2. Distraught because the features look quite interesting and useful and C# doesn't have them and pleased because (A) I am not so immature that I can't be happy for others and (B) I know that innovation in other languages is an important driver in your own language.

  3. Totally kidding here. I'm not insane. Take my self-diagnosis with a grain of salt.

  4. I know that lambdas and closures are not by definition the same and I'm not supposed to use them interchangeably. I'm trying to make sure that a C# developer who reads this article doesn't read "closure" (which is technically what a lambda in C# is, because it's capable of "closing over" or capturing variables) and fail to understand that it means "lambda".

  5. Like yours truly.

  6. Even if most of those developers won't be able to use those features for quite some time because they work on projects or products that are reluctant to upgrade.

Setting up the Lenovo T440p Laptop

This article originally appeared on earthli News and has been cross-posted here.


I recently got a new laptop and ran into a few issues while setting it up for work. There's a tl;dr at the end for the impatient.

Lenovo has finally spruced up their lineup of laptops with a series that features:

  • An actually usable and large touchpad
  • A decent and relatively sensibly laid-out keyboard
  • Very long battery life (between 6-9 hours, depending on use)
  • Low-power Haswell processor
  • 14-inch full-HD (1920x1080)
  • Dual graphics cards
  • Relatively light at 2.1kg
  • Relatively small/thin form-factor
  • Solid-feeling, functional design w/latchless lid
  • Almost no stickers

I recently got one of these. Let's get it set up so that we can work.

Pop in the old SSD

Instead of setting up the hard drive that I ordered with the laptop, I'm going to transplant the SSD I have in my current laptop to the new machine. Though this maneuver no longer guarantees anguish as it would have in the old days, we'll see below that it doesn't work 100% smoothly.

As mentioned above, the case is well-designed and quite elegant. All I need is a Phillips screwdriver to take out two screws from the back and then a downward slide on the backing plate pulls off the entire bottom of the laptop.1

At any rate, I was able to easily remove the new/unwanted drive and replace it with my fully configured SSD. I replaced the backing plate, but didn't put the screws back in yet. I wasn't that confident that it would work.

My pessimism turns out to have been well-founded. I booted up the machine and was greeted by the BIOS showing me a list of all of the various places that it had checked in order to find a bootable volume.

It failed to find a bootable volume anywhere.

Try again. Still nothing.

UEFI and BIOS usability

From dim memory, I recalled that there's something called UEFI for newer machines and that Windows 8 likes it and that it may have been enabled on the drive that shipped with the laptop but almost certainly isn't on my SSD.

Snooping about in the BIOS settings -- who doesn't like to do that? -- I find that UEFI is indeed enabled. I disable that setting as well as something called UEFI secure-boot and try again. I am rewarded within seconds with my Windows 8 lock screen.

I was happy to have been able to fix the problem, but was disappointed that the error messages thrown up by a very modern BIOS are still so useless. To be more precise, the utter lack of error messages or warnings or hints was disappointing.

I already have access to the BIOS, so it's not a security issue. There is nothing to be gained by hiding from me the fact that the BIOS checked a potential boot volume and failed to find a UEFI bootable sector but did find a non-UEFI one. Would it have killed them to show the list of bootable volumes with a little asterisk or warning telling me that a volume could boot were I to disable UEFI? Wouldn't that have been nice? I'm not even asking them to let me jump right to the setting, though that would be above and beyond the call of duty.

Detecting devices

At any rate, we can boot, and Windows 8, after "detecting devices" for a few seconds, was able to start up to the lock screen. Let's log in.

I have no network access.

Checking the Device Manager reveals that a good half-dozen devices could not be recognized and no drivers were installed for them.

This is pathetic. It is 2014, people. Most of the hardware in this machine is (A) very standard equipment to have on a laptop and (B) made by Intel. Is it too much to ask to have the 20GB Windows 8 default installation include generic drivers that will work with even newer devices?

The drivers don't have to be optimized; they just have to work well enough to let the user work on getting better ones. Windows is able to do this for the USB ports, for the display and for the mouse and keyboard because it would be utter failure for it not to be able to do so. It is an ongoing mystery how network access has not yet been promoted to this category of mandatory devices.

When Windows 8 is utterly incapable of setting up the network card, then there is a very big problem. A chicken-and-egg problem that can only be solved by having (A) a USB stick and (B) another computer already attached to the Internet.

Thank goodness Windows 8 was able to properly set up the drivers for the USB port or I'd have had a sense-less laptop utterly incapable of ever bootstrapping itself into usefulness.

On the bright side, the Intel network driver was only 1.8MB, it installed with a single click and it worked immediately for both the wireless and Ethernet cards. So that was very nice.

Update System

The obvious next step once I have connectivity is to run Windows Update. That works as expected and even finds some extra driver upgrades once it can actually get online.

Since this is a Lenovo laptop, there is also the Lenovo System Update, which updates more drivers, applies firmware upgrades and installs/updates some useful utilities.

At least it would do all of those things if I could start it.

That's not 100% fair. It kind of started. It's definitely running, there's an icon in the task-bar and the application is not using any CPU. When I hover the icon, it even shows me a thumbnail of a perfectly rendered main window.

Click. Nothing. The main window does not appear.

Fortunately, I am not alone. As recently as November of 2013, there were others with the same problem.2 Unfortunately, no one was able to figure out why it happens nor were there workarounds offered.

I had the sound enabled, though, and noticed that when I tried to execute a shortcut, it triggered an alert. And the System Update application seemed to be in the foreground -- somehow -- despite the missing main window.

Acting on a hunch, I pressed Alt + PrtSc to take a screenshot of the currently focused window. Paste into an image editor. Bingo.

image

Now that I could read the text on the main window, I could figure out which keys to press. I didn't get a screenshot of the first screen, but it showed a list of available updates. I pressed the following keys to initiate the download:

  • Alt + S to "Select all"
  • Alt + N to move to the next page
  • Alt + D to "Download" (the screenshot above)

Hovering the mouse cursor over the taskbar icon revealed the following reassuring thumbnail of the main window:

image

Lucky for me, the System Update was able to get the "restart now" onto the screen so that I could reboot when required. On reboot, the newest version of Lenovo System Update was able to make use of the main window once again.

Recommendations

  • If you can't boot off of a drive on a new machine, remember that UEFI might be getting in the way.
  • If you're going to replace the drive, make sure that you download the driver for your machine's network card to that hard drive so that you can at least establish connectivity and continue bootstrapping your machine back to usability.
  • Make sure you update the Lenovo System tools on the destination drive before transferring it to the new machine to avoid weird software bugs.



  1. I'm making this sound easier than it was. I'm not so well-versed in cracking open cases anymore. I was forced to download the manual to look up how to remove the backing plate. The sliding motion would probably have been intuitive for someone more accustomed to these tasks.

  2. In my searches for help, manuals and other software, I came across the following download, offered on Lenovo's web site. You can download something called "Hotkey Features Integration for Windows 8.1" and it only needs 11.17GB of space.

Quino: efficiency, hinting and local sorting

In Quino: partially-mapped queries we took a look at how Quino seamlessly maps as much as possible to the database, while handling unmappable query components locally as efficiently as possible.

Correctness is more important than efficiency

As efficiently as possible can be a bit of a weasel statement. We saw that partial application of restrictions could significantly reduce the data returned. And we saw that efficient handling of that returned data could minimize the impact on both performance and memory, keeping in mind, of course, that the primary goal is correctness.

However, as we saw in the previous article, it's still entirely possible that even an optimally mapped query will result in an unacceptable memory-usage or performance penalty. In these cases, we need to be able to hint or warn the developer that something non-optimal is occurring. It would also be nice if the developer could indicate whether or not queries with such deficiencies should even be executed.

When do things slow down?

Why would this be necessary? Doesn't the developer have ultimate control over which queries are called? The developer has control over queries in business-logic code. But recall that the queries that we are using are somewhat contrived in order to keep things simple. Quino is a highly generic metadata framework: most of the queries are constructed by standard components from expressions defined in the metadata.

For example, the UI may piece together a query from various sources in order to retrieve the data for a particular view. In such cases, the developer has less direct control to "repair" queries with hand-tuning. Instead, the developer has to view the application holistically and make repairs in the metadata. This is one of many reasons why Quino has local evaluation and does not simply throw an exception for partially mapped queries, as EF does.

Debugging data queries

It is, in general, far better to continue working while executing a possibly sub-optimal and performance-damaging query than it is to simply crash out. Such behavior would increase the testing requirements for generated UIs considerably. Instead, the UI always works and the developer can focus on optimization and fine-tuning in the model, using tools like the Statistics Viewer, shown to the left.

The statistics viewer shows all commands executed in an application, with a stack trace, messages (hints/warnings/info) and the original query and mapped SQL/remote statement for each command. The statistics are available for SQL-based data drivers, but also for remoting drivers for all payload types (including JSON).

The screenshot above is for the statistics viewer for Winform applications; we've also integrated statistics into web applications using Glimpse, a plugin architecture for displaying extra information for web-site developers. The screenshot to the right shows a preview-release version that will be released with Quino 1.11 at the end of March.

Sorting is all or nothing

One place where an application can run into efficiency problems is when the sort order for entities is too complex to map to the server.

If a single restriction cannot be mapped to the database, we can map all of the others and evaluate the unmappable ones locally. What happens if a single sort cannot be mapped to the database? Can we do the same thing? Again, to avoid being too abstract, let's start with an example.

var query = Session.GetQuery<Person>();
query
  .Where(Person.Fields.LastName, ExpressionOperator.StartsWith, "M")
  .OrderBy(Person.Fields.LastName)
  .OrderBy(Person.Fields.FirstName)
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetList(query).Count, Is.Between(100, 120));

Both of these sorts can be mapped to the server so the performance and memory hit is very limited. The ORM will execute a single query and will return data for and create about 100 objects.

Now, let's replace one of the mappable sorts with something unmappable:

var query = Session.GetQuery<Person>();
query
  .Where(Person.Fields.LastName, ExpressionOperator.StartsWith, "M")
  .OrderBy(new DelegateExpression(c => c.GetObject<Person>().FirstName))
  .OrderBy(Person.Fields.LastName)
  .Join(Person.Relations.Company).WhereEqual(Company.Fields.Name, "IBM");

Assert.That(Session.GetList(query).Count, Is.Between(100, 120));

What's happening here? Instead of being able to map both sorts to the database, now only one can be mapped. Or can it? The primary sort can't be mapped, so there's obviously no point in mapping the secondary sort. Instead, all sorting must be applied locally.

What if we had been able to map the primary sort but not the secondary one? Then we could have the database apply the primary sort, returning the data partially ordered. We can apply the remaining sort in memory...but that won't work, will it? If we only applied the secondary sort in memory, then the data would end up sorted only by that value. It turns out that, unlike restrictions, sorting is all-or-nothing. If we can't map all sorts to the database, then we have to apply them all locally.1
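A tiny LINQ-to-objects sketch (using anonymous types in place of Person objects) shows why. Suppose the database has already applied a mappable sort by LastName and the FirstName sort has to be applied locally:

// Rows arrive from the database already ordered by LastName (the mappable sort).
var fromDatabase = new[]
{
  new { LastName = "Malone", FirstName = "Sam" },
  new { LastName = "Mantle", FirstName = "Anna" },
  new { LastName = "Marsh", FirstName = "Zoe" }
};

// Applying only the remaining sort locally re-orders everything by FirstName
// and throws away the LastName ordering delivered by the database.
var onlySecondary = fromDatabase.OrderBy(p => p.FirstName);

// The correct result requires re-applying the full ordering locally.
var fullOrdering = fromDatabase.OrderBy(p => p.LastName).ThenBy(p => p.FirstName);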

In this case, the damage is minimal because the restrictions can be mapped and guarantee that only about 100 objects are returned. Sorting 100 objects locally isn't likely to show up on the performance radar.

Still, sorting is a potential performance-killer: as soon as you stray from the path of standard sorting, you run the risk of either:

  • Choosing a sort that is mappable but not covered by an index on the database
  • Choosing a sort that is unmappable and losing out on index-optimized sorting on the database

In the next article, we'll discuss how we can extract slices from a result set -- using limit and offset -- and what sort of effect this can have on performance in partially mapped queries.



  1. The mapper also doesn't bother adding any ordering to the generated query if at least one ordering is unmappable. There's no point in wasting time on the database with a sort that will be re-applied locally.