Tick, tock (death of a ticket salesman)

This article originally appeared on earthli News and has been cross-posted here.

The following story tells the tale of a day spent with the ongoing user-experience (UX) catastrophe that is the interface of the SBB/ZVV automated ticket machines.

While it's certainly possible that our experiences are unique and that others can easily purchase their perhaps simpler tickets, we have found that veering ever-so-slightly from the beaten path leads into some very deep and dark weeds.

Even were we to accept that the fault for the confusion engendered by the UI lay entirely with us, we can hardly be blamed for the mysterious time-delays and crashes.

In short, SBB/ZVV: give Encodo a call. We can help you fix this.


When I renewed my monthly train ticket a month ago, the lady behind the counter told me that I could have gotten it from a machine instead.

She handed me a little pamphlet that explained how to use their ticket machines.

I gave it back. In hindsight, this may have been a bit hasty.

The setup

My monthly pass was about to expire. I didn't need to get a whole month this time, so I figured that I'd just buy a six-pack of one-day tickets, in order to save money.

How hard could that be?

Initial encounter

Instead of heading back to the human-powered ticket desk, I took the lady's advice and approached a machine.

I got to the train station about eight minutes before my train, confident that this would be more than enough time for me to exercise my not insignificant technical skills to prise forth a ticket.

On the screen, I poked a smiling SBB stock-photo customer in the eye to begin.

I chose a normal ticket. I entered my destination. Then I elected to change the starting point (I already had a ticket covering part of the trip).

Done. This was too easy.

Now to get a multi-day ticket. Multi-day, multi-ride...what's the difference?

I touch the info icon on each. A wall of text. Scanning doesn't elucidate anything.

Tick, tock.

I select multi-ride just to keep moving.1

I'm done early. All that remains is to pay for the ticket.

I slide in my card...

It's blocked. I turn it around and try again.


The card reader's apparently broken.

The machine is blissfully unaware of this and pops up a helpful message, asking whether it should just forget the whole thing and cancel my transaction.

I tell it not to cancel, but to go back to payment-method selection.

Going back incurs an interminable pause.

Tick, tock.

I'll pay cash!

The machine gives a maximum of twenty bucks in change. I have no twenties; only fifties.

Can't pay cash.

Wait! I have some change! What if I get a single-day ticket?

Type, type, type

That's only CHF5.80.

Scrounge, scrounge

I have CHF5.60.

Tick, tock.

Can I just tell the conductor that the machines are broken? Does that even work?2

I look over at the second ticket machine. There's a man standing in front of it, looking back and forth between two credit cards in his hands.

Tick, tock.

He seems to be having trouble deciding how he will pay for his ... CHF197.-- ticket.3

Tick, tock.

The dam has broken. He's decided.


Tick, tock.

I jump on the machine after he's finished.

I type in the same commands as I'd typed in just a minute before on the other machine, my fingers flying over the keys, marveling at the comparative speed of the ZVV machine vs. that of the SBB one.

Wait...why can't I change the starting point of my journey like on the other machine? Where did that option go?

Start over.

Tick, tock.

Why can't I choose a multi-ticket from this machine? Where did that option go?

Fine. Get a single ticket. Anything at this point.

Type in both towns again, muscle memory helping me along.

Laaaaagggggg as I type. Why is searching a list of a few thousand items so slow?

Choose a single ticket. Wait, why is it cheaper now? What changed? I could pay for this one with my change now.4

No time to think about it.

Tick, tock.

The bell is ringing. The barrier is lowering. The train is coming.

There's the button for multi-card! It's on the final screen instead. I barely have time to register that this is a much better place for that button as I punch it.

I can feel the rumble of the approaching train in my heels.

I jam in my credit card. The reader works! Huzzah!

I type my code. Nope, the reader's still warming up. Please hold...

Tick, tock.

I type in my code. I can hear the train now. Start printing!

Tick, tock.

Oh, the machine wants me to decide whether I want a receipt before it will do anything else.

Yes, I want a receipt! Now start printing!

The printer has started!

I see the flashing light alerting me to the imminent arrival of my hard-won train ticket, still warm from the innards of the machine.

A paper drops into the slot, lying awkwardly in the tray. It looks rather large.

"One of your products could not be printed."

I reach in and pull out a forty-centimeter--long, curled monstrosity that purports to be my multi-ticket.

In my haste, I assumed that this was a valid ticket. That this hideous thing with ink staining the front of it every which way was the product that my labors had brought into the world. I assumed that the aforementioned product that could not be printed was my receipt. I thought that while I might be in trouble with bookkeeping back at the office for not having a receipt, that at least I had a ticket for the train.5

Never mind all of that. I had a multi-ticket. An overlarge and misshapen one, perhaps, but nonetheless a ticket.

Now, one final step to make it valid.

Tick, tock.

The train is gliding into the station.

I hurried over and proudly punched my ticket, the ink delineating today's date mixing illegibly into the mess of ink already printed there.6 I didn't care. The ticket had printed. I had paid. And I was getting on that train legally. With a valid ticket.

Whether I could prove it to a conductor or not was another question, but my conscience was clear.

I folded my ticket three times and put it in my backpack -- because it wouldn't fit in my wallet.

Despite my misgivings, no conductor came to make me reveal my shame.

The ride home

I got to the office and somehow the SBB came up in conversation that afternoon. I remembered my ticket and hauled its weighty length out of my backpack.

My colleague laughed and expressed sincere doubt that this was a valid ticket. He thought I might slip by with it, but was almost certain that it had been annulled.

This seemed like a reasonable theory, but I was beyond caring one way or the other.

Still, when I arrived at the station three minutes early for my trip home, I spotted a ZVV machine on my platform.

Brimming with confidence, muscle memory and experience, I staged an assault, fingers flashing as I once again unerringly entered my order.

I was flying along, typing my destination, when...

"Your order encountered an error and could not be completed."

The screen froze, wiped itself clean and restored the introductory graphic. You know, the one with the people grinning their way through a day made more magical by having been able to purchase their tickets from one of these wonderful machines.

I turned and walked away.

I would take my chances with my malformed freak of a ticket, blissful in the knowledge that the SBB couldn't hurt me anymore.

I had stopped caring.

I napped on the way home. No conductor dared to disturb my peace.

The ticket purchase. Take #2.

The next morning, I was at the station early again. More level heads had convinced me to try again and a good night's rest had restored my optimism.

I went straight for the ZVV machine, avoiding the squat and grimly brooding SBB machine with its slow screen and faulty card reader.

Type, type, type.

Destination; starting location; multi-card; full days.

Wait. What? CHF70.-? For six days?

A whole month costs CHF119.-

Argghhh. Now what did I do wrong?

Back, back, back.

Tick, tock.

Choose multi-card; 1-hour.

CHF30.- For 6 trips? That's three days of commuting.

Dammit. This is a waste of money; I might as well just get a month.

Back, back, back.

Tick, tock.

One month. Personal or transferable? Personal is cheaper; personal.

Personal card number?

What? Oh, it's on my half-fare card.

Dig half-fare card out of wallet

Tick, tock.

Train's a'comin'.

Start typing my half-fare card number.

Crash. "Your order encountered an error and could not be completed."


Start over. The bell's ringing. The barrier is lowering. The train's a'comin'.

Just get a six-pack of 1-hour tickets. It's the cheapest option that lets me avoid having to buy a ticket again this evening. It's the least typing I can do.

Tick, tock.

Got it. Sweet Lord almighty, I think I have a valid ticket.

No, really. This time I think I have a valid ticket.


Now it's valid.


On my way to my destination, I did the math again and realized that I would need another ticket in three days.

A chill ran down my spine.

Time to give up. I would return to the SBB counter and just buy a monthly ticket from a human being.

At the counter, I handed over my recently expired monthly ticket and asked for a renewal starting the next Monday, my hand cramping in sympathy at the mere thought of how much touchscreen-typing that would have entailed.

In seconds, he renewed it and handed me the new ticket, chirpily telling me,

"You could have bought this ticket on the machine instead. Just three taps and you'd have been done!"

That night, as I lay in bed, I wondered whether his family missed him yet.

  1. This would turn out to be incorrect, but it would also turn out not to matter (keep reading). Multi-day is a 24-hour pass for the day on which you stamp it; multi-ride is a 1-hour pass. The former is obviously more expensive than the latter.

  2. A colleague would later tell me that if you call the SBB, tell them that the machine is broken and give your name, then you're free and clear. This seems like a lot of work on the customer's part. You try to pay for a train ride and end up working for the train company.

  3. Considering how my purchase on that machine turned out, I was happy that it had elected to mess with my purchase and not his, as he seemed to have had a much longer trip before him than I.

  4. It turned out that on the first machine I'd selected a daily ticket whereas my flying fingers had selected a one-hour ticket on the second machine. It was about 20% cheaper and fit my coin budget but I would have had to do everything again in Altstetten had I chosen that option.

  5. It turned out that I was wrong. Another colleague would tell me that day that this is the machine's way of telling me that the ticket is invalid. When it said that the second item -- my receipt -- could not be printed, it was notifying me that it had nullified the entire transaction. With a whole screen of space, the machine could have told me that in much clearer terms.

  6. Not only was the ticket invalid, but I had somehow managed to purchase the wrong zones anyway. From what little text I could decipher, I was almost certain that I didn't have the zones I needed. It explained why my ticket purchases on the next day would seem so much more expensive.

.NET 4.5.1 and Visual Studio 2013 previews are available

The article Announcing the .NET Framework 4.5.1 Preview provides an incredible amount of detail about a relatively exciting list of improvements for .NET developers.

x64 Edit & Continue

First and foremost, the Edit-and-Continue feature is now available for x64 builds as well as x86 builds. While the appropriately cynical reaction is that "it's about damn time they got that done", a more generous one is simply to be happy that x64 debugging will finally be supported as a first-class feature in Visual Studio 2013.

Now that they have feature-parity for all build types, they can move on to other issues in the debugger (see the list of suggestions at the end).

Async-aware debugging

We haven't had much opportunity to experience the drawbacks of the current debugger vis-à-vis asynchronous debugging, but the experience outlined in the call-stack screenshot below is one that is familiar to anyone who's done multi-threaded (or multi-fiber, etc.) programming.


Instead of showing the actual stack location in the thread within which the asynchronous operation is being executed, the new and improved version of the debugger shows a higher-level interpretation that places the current execution point within the context of the async operation. This is much more in keeping with the philosophy of the async/await feature in .NET 4.5, which lets developers write asynchronous code in what appears to be a serial fashion. This improved readability has now been translated to the debugger as well.


Return-value inspection

The VS2013 debugger can now show the "direct return values and the values of embedded methods (the arguments)" for the current line.1 Instead of manually selecting the text segment and using the Quick Watch window, you can now just see the chain of values in the "Autos" debugger pane.


NuGet Improvements

We are also releasing an update in Visual Studio 2013 Preview to provide better support for apps that indirectly depend on multiple versions of a single NuGet package. You can think of this as sane NuGet library versioning for desktop apps.

We've been bitten by the aforementioned issue and are hopeful that the solution in Visual Studio 2013 will fill the gaps in the current release. The article describes several other improvements to the NuGet services, including integration with Windows Update for large-scale deployment. They also mention "a curated list of Microsoft .NET Framework NuGet Packages to help you discover these releases, published in OData format on the NuGet site", but don't mention whether the NuGet UI in VS2013 has been improved. The current UI, while not as awful and slow as initial versions, is still not very good for discovery and is quite clumsy for installation and maintenance.

User Voice for Visual Studio/.NET

You're not limited to just waiting on the sidelines to see which feature Microsoft has decided to implement in the latest version of .NET/Visual Studio. You should head over to the User Voice for Visual Studio site to get an account and vote for the issues you'd like them to work on next.

Here's a list of the ones I found interesting, some of which I've voted on.

  1. In a similar vein, I found the issue Bring back Classic Visual Basic, an improved version of VB6 to be interesting, simply because of the large number of votes for it (1712 at the time of writing). While it's understandable that VB6 developers don't understand the programming paradigm that came with the transition to .NET, the utterly reactionary desire to go back to VB6 is somewhat unfathomable. It's 2013, you can't put the dynamic/lambda/jitted genie back in the bottle. If you can't run with the big dogs, you'll have to stay on the porch...and stop being a developer. There isn't really any room for software written in a glorified batch language anymore.

  2. This feature has been available for the unmanaged-code debugger (read: C++) for a while now.

v1.8.7: Bug fix release for v1.8.6 (no new features)

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.


  • QNO-4196: Reporting does not preview and print the correct font (for some fonts and only under Windows 8)
  • QNO-3779: Add support for basic connection-pooling; remoting servers share a server-side connection pool; clients use a connection pool per session by default
  • QNO-4204, QNO-4203: Data statistics show wrong timing information and got rid of ShortCircuit event type (replaced with primary data handler)
  • QNO-4179: One to one relations do not show the correct object on the primary key class

Breaking changes

No known breaking changes

Time Machine Backups

This article originally appeared on earthli News and has been cross-posted here.

I continue to be mystified as to how Microsoft has not managed to create a backup system as seamless and straightforward and efficient as Time Machine for OS X. The software is, however, not without its faults. As is usual with Apple software, Time Machine becomes quite frustrating and unwieldy when something goes ever so slightly wrong.

When it works, it works very well. It is unobtrusive. You have hourly backups. It is as technology should be: serving you.

At the beginning of the year, I bought a NAS (Network-Attached Storage device) to improve file-sharing at home. I then moved my Time Machine backups from an individual external hard disk for each OS X machine with Time Machine support (a grand total of two of them) to the home cloud (the aforementioned NAS).

This all worked quite well. I connected each machine to the NAS directly to create the initial, full backup and, after that, the machines burbled along, backing up efficiently over the wireless network.

That is, until one day something went mysteriously wrong. Both of my machines have experienced this, seemingly without cause. The helpful error message is shown below.


If you read through it carefully, you'll see quite an implicit threat: the "Start New Backup" button, offered as the "quick-win" solution, will simply throw away all of your previous backups.

Don't be seduced by the "Back Up Later" button. All it does is show you the exact same message one day later. You are free to put off the decision indefinitely, but you will become well acquainted with this message.

Thanks Apple! Is that really the best that you can do? You just give up and tell me that I have to either (A) reconnect my machine to the LAN and run a backup that will take 12 hours or (B) just go ahead and try the same, but on the wireless LAN, which will take four times longer.

This is a typically technocratic software failure: the error was caught and acknowledged, so ... mission accomplished. That is most decidedly not the case. Apple should be eminently aware that this message will be shown to people for whom a fresh non-incremental backup entails not just dozens of hours but possibly days. Not only that, but uninterrupted hours/days. It is just not acceptable to give up so easily without even trying to repair the problem.

So that's where we stand: the automated backup -- lovely as it is when it works -- performs some sort of verification and then gives up. But a manual verification has, to date, never failed. And I've applied the solution below several times now, for both machines.

The solution is documented in Fix Time Machine Sparsebundle NAS Based Backup Errors by Garth Gillespie[^1]:

  1. su Admin (change to an administrator/sudoer account, if necessary)
  2. su - (change to the root user)
  3. chflags -R nouchg /Volumes/marco/Magni.sparsebundle (fix up flags/permissions)
  4. hdiutil attach -nomount -noverify -noautofsck /Volumes/marco/Magni.sparsebundle (attach backup volume, which automatically starts a file-system check)
  5. tail -f /var/log/fsck_hfs.log (show the progress for the file-system check)

The final command will show progress reports of the file-system check; if the check does not start, see the link above for more detailed instructions. Otherwise, you should see the message,

The volume Time Machine Backups appears to be OK.

in the log. Once this has run, you have to reset the status of the backup so that Time Machine thinks it can use it again:

  1. Browse to /Volumes/marco/Magni.sparsebundle in the Finder
  2. Right-click the file and select "Show Package Contents" from the menu
  3. Open the com.apple.TimeMachine.MachineID.plist file in a text editor
  4. Remove the two lines:

     <key>RecoveryBackupDeclinedDate</key>
     <date>...</date>

  5. Change the value of `VerificationState` to 0, as shown below:

     <key>VerificationState</key>
     <integer>0</integer>

It's not very straightforward, but it's worth it because *you won't lose your entire backup history*. In my experience -- and that of many, many others who've littered their complaints online -- Time Machine will, at some random time, once again fail verification and offer to chuck your entire backup because it can't think of a better solution.

Not only that, but once you've reset everything and Time Machine has run a backup, you might catch it surreptitiously re-running the verification. I highly recommend canceling that operation. Otherwise, despite the image just having been verified -- and used for backup -- not ten minutes before, Time Machine will once again throw its hands in the air, declare defeat and deliver the bad news that there's nothing for it but to start from scratch.

Irritating as it is to have to perform these steps manually, it doesn't even take that long, even when run over a wireless network. It would be utterly lovely if Apple could get this part working a little more reliably.


[^1]: The example path -- `/Volumes/marco/Magni.sparsebundle` -- is for a volume called "marco" on my NAS where the Time Machine backup for the machine "Magni" is stored. Obviously you will have better luck if you replace the volume and backup names in the path with those corresponding to your own NAS and machine.

Deleting multiple objects in Entity Framework

Many improvements have been made to Microsoft's Entity Framework (EF) since we here at Encodo last used it in production code. In fact, we'd last used it waaaaaay back in 2008 and 2009 when EF had just been released. Instead of EF, we've been using the Quino ORM whenever we can.

However, we've recently started working on a project where EF5 is used (EF6 is in the late stages of release, but is not yet generally available for production use). Though we'd been following the latest EF developments via the ADO.Net blog, we finally had a good excuse to become more familiar with the latest version with some hands-on experience.

Our history with EF

Entity Framework: Be Prepared was the first article we wrote about working with EF. It's quite long and documents the pain of using a 1.0 product from Microsoft. That version supported only a database-first approach, the designer was slow and the ORM's SQL mapping was quite primitive. Most of the tips and advice in the linked article, while perhaps amusing, are no longer necessary (especially if you're using the Code-first approach, which is highly recommended).

Our next update, The Dark Side of Entity Framework: Mapping Enumerated Associations, discusses a very specific issue related to mapping enumerated types in an entity model (something that Quino does very well). This shortcoming in EF has also been addressed but we haven't had a chance to test it yet.

Our final article was on performance, Pre-generating Entity Framework (EF) Views, which, while still pertinent, no longer needs to be done manually (there's an Entity Framework Power Tools extension for that now).

So let's just assume that that was the old EF; what's the latest and greatest version like?

Well, as you may have suspected, you're not going to get an article about Code-first or database migrations.1 While a lot of things have been fixed and streamlined to be not only much more intuitive but also work much more smoothly, there are still a few operations that aren't so intuitive (or that aren't supported by EF yet).

Standard way to delete objects

One such operation is deleting multiple objects in the database. It's not that it's not possible, but that the only solution that immediately appears is to,

  • load the objects to delete into memory,
  • then remove these objects from the context
  • and finally save changes to the context, which will remove them from the database

The following code illustrates this pattern for a hypothetical list of users.

var users = context.Users.Where(u => u.Name == "John").ToList();

foreach (var u in users)
  context.Users.Remove(u);

context.SaveChanges();

This seems somewhat roundabout and quite inefficient.2

Support for batch deletes?

While the method above is fine for deleting a small number of objects -- and is quite useful when removing different types of objects from various collections -- it's not very useful for a large number of objects. Retrieving objects into memory only to delete them is neither intuitive nor logical.

The question is: is there a way to tell EF to delete objects based on a query from the database?

I found an example attached as an answer to the post Simple delete query using EF Code First. The gist of it is shown below.

var users = context.Database.SqlQuery<User>(
  "DELETE FROM Users WHERE Name = @name",
  new [] { new SqlParameter("@name", "John") });

To be clear right from the start, using raw SQL strings is already sub-optimal because the identifiers are not statically checked. This query will cause a run-time error if the model changes so that the "Users" table no longer exists or the "Name" column no longer exists or is no longer a string.

Since I hadn't found anything else more promising, though, I continued with this approach, aware that it might not be usable as a pattern because of the compile-time trade-off.

Although the answer had four up-votes, it is not clear that either the author or any of his fans have actually tried to execute the code. The code above returns an IEnumerable<User> but doesn't actually do anything.

After I'd realized this, I went to MSDN for more information on the SqlQuery method. The documentation is not encouraging for our purposes (still trying to delete objects without first loading them), as it describes the method as follows (emphasis added),

Creates a raw SQL query that will return elements of the given generic type. The type can be any type that has properties that match the names of the columns returned from the query, or can be a simple primitive type.

This does not bode well for deleting objects using this method. Creating an enumerable does very little, though. In order to actually execute the query, you have to evaluate it.

Die Hoffnung stirbt zuletzt3 as we like to say on this side of the pond, so I tried evaluating the enumerable. A foreach should do the trick.

var users = context.Database.SqlQuery<User>(
  "DELETE FROM Users WHERE Name = @name",
  new [] { new SqlParameter("@name", "John") });

foreach (var u in users)
{
  // NOP? The DELETE is already part of the query text.
}

As indicated by the "NOP?" comment, it's unclear what one should actually do in this loop because the query already includes the command to delete the selected objects.

Our hopes are finally extinguished with the following error message:

System.Data.EntityCommandExecutionException : The data reader is incompatible with the specified 'Demo.User'. A member of the type, 'Id', does not have a corresponding column in the data reader with the same name.

That this approach does not work is actually a relief because it would have been far too obtuse and confusing to use in production.

It turns out that the SqlQuery only works with SELECT statements, as was strongly implied by the documentation.

var users = context.Database.SqlQuery<User>(
  "SELECT * FROM Users WHERE Name = @name",
  new [] { new SqlParameter("@name", "John") });

Once we've converted to this syntax, though, we can just use the much clearer and compile-time--checked version that we started with, repeated below.

var users = context.Users.Where(u => u.Name == "John").ToList();

foreach (var u in users)
  context.Users.Remove(u);

context.SaveChanges();


So we're back where we started, but perhaps a little wiser for having tried.

Deleting objects with Quino

As a final footnote, I just want to point out how you would perform multiple deletes with the Quino ORM. It's quite simple, really. Any query that you can use to select objects you can also use to delete objects4.

So, how would I execute the query above in Quino?

Session.Delete(Session.CreateQuery<User>().WhereEquals(User.MetaProperties.Name, "John").Query);

To make it a little clearer instead of showing off with a one-liner:

var query = Session.CreateQuery<User>();
query.WhereEquals(User.MetaProperties.Name, "John");
Session.Delete(query.Query);

Quino doesn't support using LINQ to create queries, but its query API is still more statically checked than raw SQL strings. You can see how the query could easily be extended to restrict on much more complex conditions, even including fields on joined tables.

Some combination of these reasons possibly accounts for EF's lack of support for batch deletes.

  1. As I wrote, we're using Code-first, which is much more comfortable than using the database-diagram editor of old. We're also using the nascent "Migrations" support, which has so far worked OK, though it's nowhere near as convenient as Quino's automated schema-migration.

  2. Though it is inefficient, it's better than a lot of other examples out there, which almost universally include the call to context.SaveChanges() inside the foreach-loop. Doing so is wasteful and does not give EF an opportunity to optimize the delete calls into a single SQL statement (see footnote below).

  3. Translates to: "Hope is the last (thing) to die."

  4. With the following caveats, which generally apply to all queries with any ORM:

    * Many databases use a different syntax and provide different support for `DELETE` vs. `SELECT` operations.
    * Therefore, it is more likely that more complex conditions are not supported for `DELETE` operations on some database back-ends
    * Since the syntax often differs, it's more likely that a more complex query will fail to map properly in a `DELETE` operation than in a `SELECT` operation simply because that particular combination has never come up before.
    * That said, Quino has quite good support for deleting objects with restrictions not only on the table from which to delete data but also from other, joined tables.

asm.js: a highly optimizable compilation target

This article originally appeared on earthli News and has been cross-posted here.

The article Surprise! Mozilla can produce near-native performance on the Web by Peter Bright takes a (very) early look at asm.js, a compilation target that the Mozilla foundation is pushing as a way to bring high-performance C++/C applications (read: games) to browsers.

The tool chain is really, really cool. The Clang compiler has really come a long way and established itself as the new, more flexible compiler back-end to use (Apple's XCode has been using it since version 3.2 and it's been the default since XCode 4.2). Basically, Mozilla hooked up a JavaScript code generator to the Clang tool-chain. This way, they get compilation, error-handling and a lot of optimizations for free. From the article,

[The input] language is typically C or C++, and the compiler used to produce asm.js programs is another Mozilla project: Emscripten. Emscripten is a compiler based on the LLVM compiler infrastructure and the Clang C/C++ front-end. The Clang compiler reads C and C++ source code and produces an intermediate platform-independent assembler-like output called LLVM Intermediate Representation. LLVM optimizes the LLVM IR. LLVM IR is then fed into a backend code generator -- the part that actually produces executable code. Traditionally, this code generator would emit x86 code. With Emscripten, it's used to produce JavaScript.

Mozilla has had a certain amount of success with it, but if you read all the way through the article, the project is very much a work in progress. The benchmarks executed by Ars Technica, however, bear out Mozilla's claims of being within shooting distance of native performance (for some usages; e.g. native MT applications still blow it away because JavaScript lacks support for multi-threading and shared memory structures).

Just compiling C++/C code to JavaScript is only part of the solution: that wouldn't necessarily generate code that's any faster than hand-tuned JavaScript. The trick is to optimize the compilation target -- that is, if the code is going to be generated by a compiler, that compiler can avoid using JavaScript language features and patterns that are hard or impossible to optimize (read the latest spec to find out more). Not only that, but if the JavaScript engine is asm.js-aware, it will also be able to apply even more optimizations because the input code will be guaranteed not to make use of any dynamic features that require much more stringent checking and handling. From the article,

An engine that knows about asm.js also knows that asm.js programs are forbidden from using many JavaScript features. As a result, it can produce much more efficient code. Regular JavaScript JITs must have guards to detect this kind of dynamic behavior. asm.js JITs do not; asm.js forbids this kind of dynamic behavior, so the JITs do not need to handle it. This simpler model -- no dynamic behavior, no memory allocation or deallocation, just a narrow set of well-defined integer and floating point operations -- enables much greater optimization.
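To make these restrictions concrete, here's a minimal hand-written module in the asm.js style (a sketch for illustration only; real Emscripten output is vastly larger). The "use asm" pragma and the |0 coercions are what tell an asm.js-aware engine that add only ever deals with 32-bit integers:

```javascript
// Minimal asm.js-style module: integer-only arithmetic, no dynamic features.
function ArithModule(stdlib, foreign, heap) {
  "use asm"; // signals an asm.js-aware engine to take the optimized path

  function add(x, y) {
    x = x | 0;          // parameter annotation: x is a 32-bit integer
    y = y | 0;
    return (x + y) | 0; // result is coerced back to a 32-bit integer
  }

  return { add: add };
}

var arith = ArithModule({}, null, null);
console.log(arith.add(2, 3));          // 5
console.log(arith.add(2147483647, 1)); // -2147483648: genuine 32-bit overflow
```

Even in a browser with no asm.js support, this is still valid JavaScript and produces the same results -- just without the fast code path.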

While the results so far are quite positive, there are still a few issues to address:

  • asm.js scripts are currently quite large; Chrome would barely run them at all and even Firefox needed to be restarted every once in a while. Guess which browser handled the scripts with aplomb? That's right: IE10.
  • asm.js also preallocates a large amount of memory, managing its own heap and memory layout (using custom-built VMTs to emulate objects rather than using the slower dynamic typing native to JavaScript). This preallocation means that a script's base footprint is much larger than that for a normal JavaScript application.
  • Browsers that haven't optimized the asm.js codepath run it more slowly than regular JavaScript that does the same thing
  • Source-level debugging is not available and debugging the generated JavaScript is a fool's errand

Merge conflicts in source control

This article originally appeared on earthli News and has been cross-posted here.

I was recently asked a question about merge conflicts in source-control systems.

[...] there keep being issues of files being overwritten, changes backed out etc. from people coding in the same file from different teams.

My response was as follows:

tl;dr: The way to prevent this is to keep people who have no idea what they're doing from merging files.

Extended version

Let's talk about bad merges happening accidentally. Any source-control system worth its salt will support at least some form of automatic merging.

An automatic merge is generally not a problem because the system will not automatically merge when there are conflicts (i.e. simultaneous edits of the same lines, or edits that are "close" to one another in the base file).
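For example, when both sides edit the same line, Git refuses to merge automatically and instead marks the conflicting region directly in the file, leaving resolution to the developer:

```
<<<<<<< HEAD
    var a = new A();
=======
    var a = null;
>>>>>>> team-two-branch
```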

An automatic merge can, however, introduce semantic issues.

For example, if both sides declared a method with the same name, but in different places in the same file, an automatic merge will include both copies but the resulting file won't compile (because the same method was declared twice).

Or, another example is as follows:

Base file

public void A(B b)
{
  A a = new A();
}

Team One version (adds a call on the new object)

public void A(B b)
{
  A a = new A();
  a.DoSomething(b);
}

Team Two version (changes the initialization to null)

public void A(B b)
{
  A a = null;
}

Automatic merge

public void A(B b)
{
  A a = null;
  a.DoSomething(b);
}
The automatically merged result will compile, but it will crash at run-time. Some tools (like ReSharper) will display a warning when the merged file is opened, showing that a method is being called on a provably null variable. However, if the file is never opened or the warning ignored or overlooked, the program will crash when run.

In my experience, though, this kind of automatic-merge "error" doesn't happen very often. Code-organization techniques like putting each type in its own file and keeping methods bodies relatively compact go a long way toward preventing such conflicts. They help to drastically reduce the likelihood that two developers will be working in the same area in a file.

With these relatively rare automatic-merge errors taken care of, let's move on to errors introduced deliberately through maliciousness or stupidity. This kind of error is also very rare, in my experience, but I work with very good people.

Let's say we have two teams:

  • Team One - branch one - works on file 1
  • Team Two - branch two - works on file 1

Team One promotes file 1 into the Master B branch; there are some conflicts that they are working out, but the file is promoted.

I originally answered that I wasn't sure what it meant to "promote" a file while still working on it. How can a file be committed or checked in without having resolved all of the conflicts?

As it turns out, it can't. As documented in TFS Server 2012 and Promoting changes, promotion simply means telling TFS to pick up local changes and add them to the list of "Pending Changes". This is part of a new TFS2012 feature called "Local Workspaces". A promoted change corresponds to having added a file to a change list in Perforce or having staged a file in Git.

The net effect, though, is that the change is purely local. That it has been promoted has nothing to do with merging or committing to the shared repository. Other users cannot see your promoted changes. When you pull down new changes from the server, conflicts with local "promoted" changes will be indicated as usual, even if TFS has already indicated conflicts between a previous change and another promoted, uncommitted version of the same file. Any other behavior would be madness.1
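The Git analogue is easy to demonstrate from the command line (a sketch using a throwaway repository; file names and commit messages are made up for illustration):

```shell
# Staging in Git -- the analogue of a TFS "promote" -- is purely local.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo "initial" > file1.txt
git add file1.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "initial"

echo "local edit" >> file1.txt
git add file1.txt     # "promote": the change is now staged (pending)...
git status --short    # ...but visible only in this workspace
git log --oneline     # the history that others can pull still has one commit
```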

Team Two checks in their file 1 into the Master B branch. They back out the changes that Team One made without telling anyone anything.

There's your problem. This should never happen unless Team Two has truly determined that their changes have replaced all of the work that Team One did or otherwise made it obsolete. If people don't know how to deal with merges, then they should not be merging.

Just as Stevie Wonder's not allowed behind the wheel of a car, neither should some developers be allowed to deal with merge conflicts. In my opinion, though, any developer who can't deal with merges in code that he or she is working on should be moved to another team or, possibly, job. You have to know your own code and you have to know your tools.2

Team One figures out the conflicts in their branch and re-promotes file one (and other files) to Master B branch. The source control system remembers that file 1 was backed out by Team Two so it doesn't promote file 1 but doesn't let the user know.

This sounds insane. When a file is promoted -- i.e. added to the pending changes -- it is assumed that the current version is added to the pending changes, akin to staging a file in Git. When further changes are made to the file locally, the source-control system should indicate that it has changed since having been promoted (i.e. staged).

When you re-promote the file (re-stage it), TFS should treat that as the most recent version in your workspace. When you pull down the changes from Team 2, you will have all-new conflicts to resolve because your newly promoted file will still be in conflict with the changes they made to "file 1" -- namely that they threw away all of the changes that you'd made previously.

And, I'm not sure how it works in TFS, but in Git, you can't "back out" a commit without leaving a trail:

  • Either there is a merge commit where you can see that Team Two chose to "accept their version" rather than "merge" or "accept other version"
  • Or, there is a "revert" commit that "undoes" the changes from a previous commit

Either way, your local changes will cause a conflict because they will have altered the same file in the same place as either the "merge" or "revert" commit and -- this is important -- will have done so after that other commit.
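A quick sketch in a throwaway repository shows the second case (commit messages and file contents are made up for illustration):

```shell
# "Backing out" a commit in Git cannot happen silently: the revert is itself a commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
gitc() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

echo "base" > file1.txt
git add file1.txt
gitc commit -qm "initial"

echo "Team One's version" > file1.txt
git add file1.txt
gitc commit -qm "Team One: edit file1"

gitc revert --no-edit HEAD   # Team Two backs out Team One's change...
git log --oneline            # ...and the revert shows up as a third, visible commit
```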

To recap, let me summarize what this sounds like:

  • T1: I want to check in file1
  • TFS: You have conflicts
  • T1: Promote file1 so that TFS knows about it (other users can't see it yet because it hasn't been committed)
  • TFS: Okie dokie
  • T2: I want to check in file1
  • TFS: You have conflicts
  • T2: F&#$ that. Use my version. Oh, and, f&#$ T1.
  • TFS: I hear and obey. T2/file1 it is.
  • T1: OK, I resolved conflicts; here's the final version of file1
  • TFS: Thanks! tosses T1/file1 out the window

I don't believe that this is really possible -- even with TFS -- but, if this is a possibility with your source-control, then you have two problems:

  1. You have team members who don't know how to merge
  2. Your source control is helping them torpedo development

There is probably a setting in your source-control system that disallows simultaneous editing for files. This is a pretty huge restriction, but if your developers either can't or won't play nice, you probably have no choice.

  1. This is not to rule out such behavior 100%, especially in a source-control system with which I am largely unfamiliar. It only serves to indicate the degree to which I would be unwilling to work with any system that exhibits this kind of behavior.

  2. Different companies can have different grace periods for learning these two things, of course. I suppose that grace period can be interminably long, but...

Windows Live accounts and Windows 8

This article originally appeared on earthli News and has been cross-posted here.


tl;dr: If your Windows 8 is mysteriously moving your Windows and taskbar around, it might be because of your Windows Live account synchronizing settings from one machine to another.

Starting with Windows 8, you can connect your local user account to your Windows Live account, sharing your preferences and some Windows-App-Store application settings and logins.

I had this enabled for a while but recently discovered that it was responsible for mysterious issues I'd been experiencing on my desktop at work and my laptop at home.

The advantage of using a synchronized account is that, once you log in to Windows 8 with these settings -- no matter where -- you'll get a familiar user interface. Two of the more visible, if mundane, settings are the lock-screen wallpaper and the desktop wallpaper.

Synchronizing wallpaper makes sense because, if you took the time to change the desktop on one machine, there's a good chance you want to have the same desktop on another.

On the other hand, I wonder how many people will be surprised to see the racy and dubiously work-friendly desktop wallpaper that they chose for their home computer automatically show up when they log in at work on Monday morning. Especially if they updated the lock screen as well as the desktop wallpaper. While this type of synchronizing might endanger one's employment status, it's also exactly the kind of synchronizing that I would expect from Windows because it's not hardware-specific.

For the last several months, I've been smoke-testing Windows 8 for general use at Encodo and it's mostly been a quite pleasant upgrade from Windows 7. I don't really make much use of the new features in Windows 8, but it's very stable and noticeably faster on startup and coming back from hibernation than its predecessor.

Though there are some minor quibbles1, it was generally a no-brainer upgrade -- except that Windows could not seem to remember the taskbar location on either my laptop at home or the desktop at work.

Maybe you see where this is going.

In hindsight, it's bloody obvious that the taskbar location was also being synced over the Windows Live account cloud but, in my defense, Windows moves my application windows around a lot. I have two monitors and if one of them is turned off or goes into a deep sleep, Windows will oblige by moving all windows onto the remaining monitor.2 When you restore the missing monitor back to life, Windows does nothing to help you and you have to move everything back manually. At any rate, the taskbar being moved around coincided enough with other windows being moved around that I figured it was just Windows 8 being flaky.

That the issue also happened on the laptop at home was decidedly odd, though.

Now that I know what was causing the problem, I've turned off the synchronization and each copy of Windows 8 now remembers where its taskbar was. I guess that, in the trivial situation, where the hardware is the same on both ends, it would make sense to synchronize this setting. But in my situation, where one side has a 15.4" laptop screen and the other has two monitors -- one 24" and the other 27" -- it makes no sense at all.

It's a bit of a shame that I had to resort to the rather heavy-handed solution of simply turning off synchronization entirely, but I couldn't find a more fine-grained setting. The Windows 8 UI is pretty dumbed down, so there are only controls for ON and OFF.

  1. The Windows-App-store UI for wireless networks and settings is poorly made. There is no consistency to whether you use a right or left click and you can only choose to "forget" a network rather than just disconnect from it temporarily.

  2. And resizing them to fit! Yay! Thanks for your help, Windows!

Networking event #1 2013: Working with HTML5

Our first networking event of the year is scheduled for tonight (19.04) with a presentation on HTML5 development. The talk, to be presented by Marco, will cover our experiences developing a larger project for the web.

Here's the main overview:

  • Project parameters: what did we build?

  • Components, libraries and features

    • HTML5 tags & objects
    • CSS3 concepts
    • jQuery basics
  • Tools

    • IDE & Browser
    • Testing & Optimization

You can find the entire presentation in the documents section.

A provably safe parallel language extension for C#

This article originally appeared on earthli News and has been cross-posted here.

The paper Uniqueness and Reference Immutability for Safe Parallelism by Colin S. Gordon, Matthew J. Parkinson, Jared Parsons, Aleks Bromfield, Joe Duffy is quite long (26 pages), detailed and involved. To be frank, most of the notation was foreign to me -- to say nothing of making heads or tails of most of the proofs and lemmas -- but I found the higher-level discussions and conclusions quite interesting.

The abstract is concise and describes the project very well:

A key challenge for concurrent programming is that side-effects (memory operations) in one thread can affect the behavior of another thread. In this paper, we present a type system to restrict the updates to memory to prevent these unintended side-effects. We provide a novel combination of immutable and unique (isolated) types that ensures safe parallelism (race freedom and deterministic execution). The type system includes support for polymorphism over type qualifiers, and can easily create cycles of immutable objects. Key to the system's flexibility is the ability to recover immutable or externally unique references after violating uniqueness without any explicit alias tracking. Our type system models a prototype extension to C# that is in active use by a Microsoft team. We describe their experiences building large systems with this extension. We prove the soundness of the type system by an embedding into a program logic.

The project proposes a type-system extension with which developers can write provably safe parallel programs -- i.e. "race freedom and deterministic execution" -- with the amount of actual parallelism determined when the program is analyzed and compiled rather than decided by a programmer creating threads of execution.

Isolating objects for parallelism

The "isolation" part of this type system reminds me a bit of the way that SCOOP addresses concurrency. That system also allows programs to designate objects as "separate" from other objects while also releasing the program from the onus of actually creating and managing separate execution contexts. That is, the syntax of the language allows a program to be written in a provably correct way (at least as far as parallelism is concerned; see the "other provable-language projects" section below). In order to execute such a program, the runtime loads not just the program but also another file that specifies the available virtual processors (commonly mapped to threads). Sections of code marked as "separate" can be run in parallel, depending on the available number of virtual processors. Otherwise, the program runs serially.

In SCOOP, methods are used as a natural isolation barrier, with input parameters marked as "separate". See SCOOP: Concurrency for Eiffel and SCOOP (software) for more details. The paper also contains an entire section listing other projects -- many implemented on the JVM -- that have attempted to make provably safe programming languages.

The system described in this paper goes much further, adding immutability as well as isolation (the same concept as "separate" in SCOOP). An interesting extension to the type system is that isolated object trees are free to have references to immutable objects (since those can't negatively impact parallelism). This allows for globally shared immutable state and reduces argument-passing significantly. Additionally, there are readable and writable references: the former can only be read but may be modified by other objects (otherwise it would be immutable); the latter can be read and written and is equivalent to a "normal" object in C# today. In fact, "[...] writable is the default annotation, so any single-threaded C# that does not access global state also compiles with the prototype."
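As a rough sketch (hypothetical syntax -- the paper doesn't fully specify the prototype's surface syntax, and the type and method names here are invented for illustration), declarations with the four permissions might look like this:

```csharp
// Hypothetical sketch of the four reference permissions described above;
// the actual surface syntax of Microsoft's prototype may differ.
writable  Order    order    = new Order();       // default: an ordinary, mutable C# reference
readable  Customer customer = FindCustomer(id);  // cannot be written through this reference
immutable Config   config   = LoadConfig();      // deeply read-only; safe to share globally
isolated  Batch    batch    = CreateBatch();     // externally unique; safe to hand off to a parallel task
```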

Permission types

In this safe-parallel extension, a standard type system is extended so that every type can be assigned such a permission and there is "support for polymorphism over type qualifiers", which means that the extended type system includes the permission in the type, so that, given that B derives from A, a reference to an immutable B can be passed to a method that expects a readable A. In addition, covariance is also supported for generic parameter types.

When they say that the "[k]ey to the system's flexibility is the ability to recover immutable or externally unique references after violating uniqueness without any explicit alias tracking", they mean that the type system allows programs to specify sections that accept isolated references as input, lets them convert to writable references and then convert back to isolated objects -- all without losing provably safe parallelism. This is quite a feat since it allows programs to benefit from isolation, immutability and provably safe parallelism without significantly changing common programming practice. In essence, it suffices to decorate variables and method parameters with these permission extensions to modify the types and let the compiler guide you as to further changes that need to be made. That is, an input parameter for a method will be marked as immutable so that it won't be changed and subsequent misuse has to be corrected.

Even better, they found that, in practice, it is possible to use extension methods to allow parallel and standard implementations of collections (lists, maps, etc.) to share most code.

A fully polymorphic version of a map() method for a collection can coexist with a parallelized version pmap() specialized for immutable or readable collections. [...] Note that the parallelized version can still be used with writable collections through subtyping and framing as long as the mapped operation is pure; no duplication or creation of an additional collection just for concurrency is needed.

Real projects and performance impact

Much of the paper is naturally concerned with proving that their type system actually does what it says it does. As mentioned above, at least 2/3 of the paper is devoted to lemmas and large swaths of notation. For programmers, the more interesting part is the penultimate section that discusses the extension to C# and the experiences in using it for larger projects.

A source-level variant of this system, as an extension to C#, is in use by a large project at Microsoft, as their primary programming language. The group has written several million lines of code, including: core libraries (including collections with polymorphism over element permissions and data-parallel operations when safe), a webserver, a high level optimizing compiler, and an MPEG decoder.

Several million lines of code is, well, it's an enormous amount of code. I'm not sure how many programmers they have or how they're counting lines or how efficiently they write their code, but millions of lines of code suggests generated code of some kind. Still, taken with the next statement on performance, that much code more than proves that the type system is viable.

These and other applications written in the source language are performance-competitive with established implementations on standard benchmarks; we mention this not because our language design is focused on performance, but merely to point out that heavy use of reference immutability, including removing mutable static/global state, has not come at the cost of performance in the experience of the Microsoft team.

Not only is performance not impacted, but the nature of the typing extensions allows the compiler to know much more about which values and collections can be changed, which affects how aggressively this data can be cached or inlined.

In fact, the prototype compiler exploits reference immutability information for a number of otherwise-unavailable compiler optimizations. [...] Reference immutability enables some new optimizations in the compiler and runtime system. For example, the concurrent GC can use weaker read barriers for immutable data. The compiler can perform more code motion and caching, and an MSIL-to-native pass can freeze immutable data into the binary.

Incremental integration ("unstrict" blocks)

In the current implementation, there is an unstrict block that allows the team at Microsoft to temporarily turn off the new type system and to ignore safety checks. This is a pragmatic approach which allows the software to be run before it has been proven 100% parallel-safe. This is still better than having no provably safe blocks at all. Their goal is naturally to remove as many of these blocks as possible -- and, in fact, this requirement drives further refinement of the type system and library.

We continue to work on driving the number of unstrict blocks as low as possible without over-complicating the type system's use or implementation.

The project is still a work-in-progress but has seen quite a few iterations, which is promising. The paper was written in 2012; it would be very interesting to take it for a test drive in a CTP.

Other provable-language projects

A related project at Microsoft Research, Spec#, contributed a lot of basic knowledge about provable programs. The authors even state that the "[...] type system grew naturally from a series of efforts at safe parallelism. [...] The earliest version was simply copying Spec#'s [Pure] method attribute, along with a set of carefully designed task- and data-parallelism libraries." Spec#, in turn, is a "[...] formal language for API contracts (influenced by JML, AsmL, and Eiffel), which extends C# with constructs for non-null types, preconditions, postconditions, and object invariants".

Though the implementation of this permissions-based type system may have started with Spec#, the primary focus of that project was more a valiant attempt to bring Design-by-Contract principles (examples and some discussion here) to the .NET world via C#. Though Spec# has downloadable code, the project hasn't really been updated in years. This is a shame, as support for Eiffel in .NET, mentioned above as one of the key influences of Spec#, was dropped by ISE Eiffel long ago.

Spec#, in turn, was mostly replaced by Microsoft Research's Contracts project (an older version of which was covered in depth in Microsoft Code Contracts: Not with a Ten-foot Pole). The Contracts project seems to be alive and well: the most recent release is from October 2012. I have not checked it out since my initial thumbs-down review (linked above) but did note in passing that the implementation is still (A) library-only and (B) does not support Visual Studio 2012.

The library-only restriction is particularly galling, as such an implementation can lead to repeated code and unwieldy anti-patterns. As documented in the Contracts FAQ, the current implementation of the "tools take care of enforcing consistency and proper inheritance of contracts" but this is presumably accomplished with compiler errors that require the programmer to include contracts from base methods in overrides.

The seminal work Object-oriented Software Construction by Bertrand Meyer (vol. II in particular) goes into tremendous detail on a type system that incorporates contracts directly. The type system discussed in this article covers only parallel safety: null-safety and other contracts are not covered at all. If you're at all interested in these types of language extensions, vol. II of OOSC is a great read. The examples are all in Eiffel but should be relatively accessible. Though some features -- generics, notably, but also tuples, once routines and agents -- have since made their way into C# and other more commonly used languages, many others -- such as contracts, anchored types (contravariance is far too constrained in C# to allow them), covariant return types, covariance everywhere, multiple inheritance, explicit feature removal, loop variants and invariants, etc. -- are still not available. Subsequent interesting work has also been done on extensions that allow creation of provably null-safe programs, something also addressed in part by Microsoft Research's Contracts project.