Networking Event 2016.1

On Wednesday, Encodo had its first networking event of the year. Our very own Sebastian Greulach presented Code Review Best Practices. A bunch of our friends and colleagues from the area showed up for a lively discussion that, together with the presentation, lasted over 90 minutes.

We heard from people working with remote teams -- off- and near-shored -- as well as people working locally in both small and large teams and for small to large companies. We discussed various review styles, from formal to informal to nonexistent as well as the differences in managing and reviewing code for projects versus products. Naturally, we also covered tool support and where automation makes sense and where face-to-face human interaction is still better.

The discussion continued over a nice meal prepared on our outdoor grill. We even had a lot more vegetables this time! Thanks to lovely weather, we were able to spend some time outside and Pascal demonstrated his L337 drone-flying skills -- but even he couldn't save it from a rain gutter when a propeller came off mid-flight.

Thanks to everyone who helped make it happen and thanks to everyone who showed up!

API Design: The Road Not Taken

Unwritten code requires no maintenance and introduces no cognitive load.

As I was working on another part of Quino the other day, I noticed that the oft-discussed registration and configuration methods1 were a bit clunkier than I'd have liked. To wit, the methods that I tended to use together for configuration had different return types and didn't allow me to freely mix calls fluently.

The difference between Register and Use

The return type for Register methods is IServiceRegistrationHandler and the return type for Use methods is IApplication (a descendant). The Register* methods come from the IOC interfaces, while the application builds on top of this infrastructure with higher-level Use* configuration methods.

This forces developers to write code in the following way to create and configure an application.

public IApplication CreateApplication()
{
  var result =
    new Application()
    .UseStandard()
    .UseOtherComponent();

  result
    .RegisterSingle<ICodeHandler, CustomCodeHandler>()
    .Register<ICodePacket, FSharpCodePacket>();

  return result;
}

That doesn't look too bad, though, does it? It doesn't seem like it would cramp anyone's style too much, right? Aren't we being a bit nitpicky here?

That's exactly why Quino 2.0 was released with this API. However, here we are, months later, and I've written a lot more configuration code and it's really starting to chafe that I have to declare a local variable and sort my method invocations.

So I think it's worth addressing. Anything that disturbs me as the writer of the framework -- that gets in my way or makes me write more code than I'd like -- is going to disturb the users of the framework as well.

Whether they're aware of it or not.

Developers are the Users of a Framework

In the best of worlds, users will complain about your crappy API and make you change it. In the world we're in, though, they will cheerfully and unquestioningly copy/paste the hell out of whatever examples of usage they find and cement your crappy API into their products forever.

Do not underestimate how quickly calls to your inconvenient API will proliferate. In my experience, programmers tend to just add a workaround for whatever annoys them instead of asking you to fix the problem at its root. This is a shame. I'd rather they complained vociferously that the API is crap than have them use it anyway, forcing me to support it side-by-side with a better version for what usually feels like an eternity.

Maybe it's because I so often have control over framework code that I simply won't put up with bad patterns or repetitive code. I've also become very accustomed to having a wall of tests at my beck and call when I bound off on another initially risky but ultimately rewarding refactoring.

If you're not used to this level of control, then you just deal with awkward APIs or you build a workaround as a band-aid for the symptom rather than going after the root cause.

Better Sooner than Later

So while the code above doesn't trigger warning bells for most, once I'd written it a dozen times, my fingers were already itching to add [Obsolete] on something.

I am well-aware that this is not a simple or cost-free endeavor. However, I happen to know that there aren't that many users of this API yet, so the damage can be controlled.

If I wait, then replacing this API with something better later will take a bunch of versions, obsolete warnings, documentation and re-training until the old API is finally eradicated. It's much better to use your own APIs -- if you can -- before releasing them into the wild.

Another more subtle reason why the API above poses a problem is that it's more difficult to discover, to learn. The difference in return types will feel arbitrary to product developers. Code-completion is less helpful than it could be.

It would be much nicer if we could offer an API that helped users discover it at their own pace instead of making them step back and learn new concepts. Ideally, developers of Quino-based applications shouldn't have to know the subtle difference between the IOC and the application.

A Better Way

Something like the example below would be nice.

return
  new Application()
  .UseStandard()
  .RegisterSingle<ICodeHandler, CustomCodeHandler>()
  .UseOtherComponent()
  .Register<ICodePacket, FSharpCodePacket>();

Right? Not a gigantic change, but if you can imagine how a user would write that code, it's probably a lot easier and more fluid than writing the first example. In the second example, they would just keep asking code-completion for the next configuration method and it would just be there.

Attempt #1: Use a Self-referencing Generic Parameter

To do this, I'd already created an issue in our tracker to parameterize the IServiceRegistrationHandler type so that registration methods could pass back the proper return type.

I'll show below what I mean, but I took a crack at it recently because I'd just watched the very interesting video Fun with Generics by Benjamin Hodgson, which starts off with a technique identical to the one I'd planned to use -- and that I'd already used successfully for the IQueryCondition interface.2

Let's redefine the IServiceRegistrationHandler interface as shown below,

public interface IServiceRegistrationHandler<TSelf>
{
  TSelf Register<TService, TImplementation>()
      where TService : class
      where TImplementation : class, TService;

  // ...
}

Can you see how we pass the type we'd like to return as a generic type parameter? Then the descendants would be defined as,

public interface IApplication : IServiceRegistrationHandler<IApplication>
{
}

In the video, Hodgson notes that the technique has a name in formal notation, "F-bounded quantification" but that a snappier name comes from the C++ world, "curiously recurring template pattern". I've often called it a self-referencing generic parameter, which seems to be a popular search term as well.

This is only the first step, though. The remaining work is to update all usages of the formerly non-parameterized interface IServiceRegistrationHandler. This means that a lot of extension methods like the one below

public static IServiceRegistrationHandler RegisterCoreServices(
  [NotNull] this IServiceRegistrationHandler handler)
{
  // ...

  return handler;
}

will now look like this:

public static TSelf RegisterCoreServices<TSelf>(
  [NotNull] this IServiceRegistrationHandler<TSelf> handler)
  where TSelf : IServiceRegistrationHandler<TSelf>
{
  // ...

  return (TSelf)handler;
}

This makes defining such methods more complex (again).3 In my attempt at implementing this, Visual Studio indicated 170 errors remaining after I'd already updated a couple of extension methods.

Attempt #2: Simple Extension Methods

Instead of continuing down this path, we might just want to follow the pattern we established in a few other places: defining both a Register method, which uses the IServiceRegistrationHandler, and a Use method, which uses the IApplication.

Here's an example of the corresponding "Use" method:

public static IApplication UseCoreServices(
  [NotNull] this IApplication application)
{
  if (application == null) { throw new ArgumentNullException("application"); }

  application
    .RegisterCoreServices()
    .RegisterSingle(application.GetServices())
    .RegisterSingle(application);

  return application;
}

Though the technique involves a bit more boilerplate, it's easy to write and understand (and reason about) these methods. As mentioned in the initial sentence of this article, the cognitive load is lower than the technique with generic parameters.

The only place where it would be nice to have an IApplication return type is from the Register* methods defined on the IServiceRegistrationHandler itself.

We already decided that self-referential generic constraints would be too messy. Instead, we could define some extension methods that return the correct type. We can't name the method the same as the one that already exists on the interface4, though, so let's prepend the word Use, as shown below:

public static IApplication UseRegister<TService, TImplementation>(
  [NotNull] this IApplication application)
      where TService : class
      where TImplementation : class, TService
{
  if (application == null) { throw new ArgumentNullException("application"); }

  application.Register<TService, TImplementation>();

  return application;
}

That's actually pretty consistent with the other configuration methods. Let's take it for a spin and see how it feels. Now that we have an alternative way of registering types fluently without "downgrading" the result type from IApplication to IServiceRegistrationHandler, we can rewrite the example from above as:

return
  new Application()
  .UseStandard()
  .UseRegisterSingle<ICodeHandler, CustomCodeHandler>()
  .UseOtherComponent()
  .UseRegister<ICodePacket, FSharpCodePacket>();

Instead of increasing cognitive load by trying to push the C# type system to places it's not ready to go (yet), we use tiny methods to tweak the API and make it easier for users of our framework to write code correctly.5


Perhaps an example is in order:

interface IA 
{
  IA RegisterSingle<TService, TConcrete>();
}

interface IB : IA { }

static class BExtensions
{
  public static IB RegisterSingle<TService, TConcrete>(this IB b) { return b; }

  public static IB UseStuff(this IB b) { return b; }
}

Let's try to call the method from BExtensions:

public void Configure(IB b)
{
  b.RegisterSingle<IFoo, Foo>().UseStuff();
}

The call to UseStuff cannot be resolved because the return type of the matched RegisterSingle method is the IA of the interface method, not the IB of the extension method. There is a solution, but you're not going to like it (I know I don't).

public void Configure(IB b)
{
  BExtensions.RegisterSingle<IFoo, Foo>(b).UseStuff();
}

You have to specify the extension-method class's name explicitly, which engenders awkward fluent chaining -- you'll have to nest these calls if you have more than one -- but the desired method-resolution was obtained.
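And if more than one of the shadowed registration methods is involved, the explicit calls nest inside one another. Here's a sketch using the same hypothetical types as above (IBar and Bar are invented in the same spirit as IFoo and Foo):

public void Configure(IB b)
{
  // Each additional explicit extension-method call wraps the previous one,
  // turning the fluent chain inside out.
  BExtensions.RegisterSingle<IBar, Bar>(
    BExtensions.RegisterSingle<IFoo, Foo>(b)).UseStuff();
}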

But at what cost? The horror...the horror.


  1. See Encodo's configuration library for Quino Part 1, Part 2 and Part 3 as well as API Design: Running an Application Part 1 and Part 2 and, finally, Starting up an application, in detail.

  2. The video goes into quite a bit of depth on using generics to extend the type system in the direction of dependent types. Spoiler alert: he doesn't make it because the C# type system can't be abused in this way, but the journey is informative.

  3. As detailed in the links in the first footnote, I'd just gotten rid of this kind of generic constraint in the configuration calls because it was so ugly and offered little benefit.

  4. If you define an extension method for a descendant type that has the same name as a method of an ancestor interface, the method-resolution algorithm for C# will never use it. Why? Because the directly defined method matches the name and all the types and is a "stronger" match than an extension method.

  5. The final example does not run against Quino 2.2, but will work in an upcoming version of Quino, probably 2.3 or 2.4.

v2.2: Winform fixes and Query Improvements

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.

Highlights

  • Lots of bug fixes and improvements for the Winform UI and German translations with the release of Punchclock on this version. (QNO-5162, QNO-5159, QNO-5158, QNO-5157, QNO-5156, QNO-5140, QNO-5155, QNO-5145, QNO-5111, QNO-5107, QNO-5106, QNO-5104, QNO-5015)
  • DateTimeExtensions.GetDayOfWeek() had a leap-day bug (QNO-5051)
  • Fixed how the hash code for GenericObjects is calculated, which fixes sorting issues in grids, specifically for non-persisted or transient objects (QNO-5137)
  • Improvements to the IAccessControl API for getting groups and users and testing membership (QNO-5133)
  • Add support for query aliases (e.g. for joining the same table multiple times) (QNO-531). This changes the API surface only minimally. Applications can pass an alias when calling the Join method, as shown below:
query.Join(Metadata.Project.Deputy, alias: "deputy")

You can find more examples of aliased queries in the TestAliasedQuery(), TestJoinAliasedTables() and TestJoinChildTwice() tests defined in the QueryTests testing fixture.
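To join the same table more than once -- the scenario this feature targets -- you would presumably pass a different alias for each call to Join. A hypothetical sketch based on the single line above (only the Deputy join comes from the release notes; the second alias is invented for illustration):

query.Join(Metadata.Project.Deputy, alias: "deputy");
query.Join(Metadata.Project.Deputy, alias: "formerDeputy");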

  • Add a standalone IQueryAnalyzer for optimizations and in-memory mini-drivers (QNO-4830)

Breaking changes

  • ISchemaManager has been removed. Instead, you should retrieve the interface you were looking for from the IOC. The possible interfaces you might need are IImportHandler, IMappingBuilder, IPlanBuilder or ISchemaCommandFactory.

  • ISchemaManagerSettings.GetAuthorized() has been moved to ISchemaManagerAuthorizer.

  • The hash-code fix for GenericObjects may have an effect on the way your application sorts objects.

  • The IParticipantManager (base interface of IAccessControl) no longer has a single method called GetGroups(IParticipant). This method was previously used both to get the groups to which a user belongs and to get the child groups of a given group. This confusing double duty for the API led to an incorrect implementation for both usages. Instead, there are now two methods:

    • IEnumerable<IGroup> GetGroups(IUser user): Gets the groups for the given user
    • IEnumerable<IGroup> GetChildGroups(IGroup group): Gets the child groups for the given group

The old method has been removed from the interface because (A) it never worked correctly anyway and (B) it conflicts with the new API.
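A short usage sketch of the new methods (the interfaces and signatures come from the list above; the surrounding method and its parameters are invented for illustration):

public static void ShowMembership(IAccessControl accessControl, IUser user, IGroup group)
{
  // Groups to which the given user belongs
  foreach (var userGroup in accessControl.GetGroups(user))
  {
    Console.WriteLine(userGroup);
  }

  // Child groups of the given group
  foreach (var childGroup in accessControl.GetChildGroups(group))
  {
    Console.WriteLine(childGroup);
  }
}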

Voxxed Zürich 2016: Notes

This first-ever Voxxed Zürich was hosted at the cinema in the SihlCity shopping center in Zürich on March 3rd. All presentations were in English. The conference was relatively small -- 333 participants -- and largely vendor-free. The overall technical level of the presentations and participants was quite high. I had a really nice time and enjoyed a lot of the presentations.

There was a nice common thread running through all of the presentations, starting with the Keynote. There's a focus on performance and reliability through immutability, sequences, events, actors, delayed execution (lambdas, which are relatively new to Java), instances in the cloud, etc. It sounds very BUZZWORDY, but instead it came across as a very technically polished conference that reminded me of how many good developers there are trying to do the right thing. Looking forward to next year; hopefully Encodo can submit a presentation.

You can take a look at the VoxxedDays Zürich -- Schedule. The talks that I attended are included below, with links to the presentation page, the video on YouTube and my notes and impressions. YMMV.

Keynote: Life beyond the Illusion of the Present

Life beyond the Illusion of the Present -- Jonas Bonér

media

Notes

  • He strongly recommended reading The Network is reliable by Peter Bailis.
  • This talk is about event-driven, CQRS programming.
  • Focus on immutable state, very much like Joe Duffy, etc.; transactional accrual of facts.
  • Never delete data, annotate with more facts.
  • The reality at any point in time can be calculated by aggregating the facts recorded up to that point (see the short C# sketch after this list). This is like the talk I once wrote up some notes about (Runaway Complexity in Big Data, and a Plan to Stop It by Nathan Marz).
  • Everything else is a performance optimization. Database views, tables are all caches on the transaction log. Stop throwing the log away, though.
  • Define smaller atomic units. Not a whole database. Smaller. Consistency boundary. Services?
  • Availability trumps consistency. Use causal consistency through mechanisms other than time stamps. Local partial better than global.
  • He talked about data-flow programming; fingers crossed that we get some language support in C# 7
  • Akka (Akka.NET) is the main product.
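The "aggregate the facts" idea from these notes is easy to express in C#: the current state is just a fold over the facts recorded up to a point in time. A hedged sketch with invented types:

using System;
using System.Collections.Generic;
using System.Linq;

// All types here are invented; the point is only that "current state" is a
// fold (Aggregate) over the immutable facts recorded up to a point in time.
public sealed class Fact
{
  public DateTime Timestamp { get; set; }
  public decimal Amount { get; set; }
}

public static class FactAggregator
{
  public static decimal GetBalanceAt(IEnumerable<Fact> facts, DateTime pointInTime)
  {
    return facts
      .Where(fact => fact.Timestamp <= pointInTime)
      .Aggregate(0m, (balance, fact) => balance + fact.Amount);
  }
}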

Kotlin - Ready for production

Kotlin - Ready for production -- Hadi Hariri

media

  • Used at JetBrains, open-source. 14k+ users. It's not a ground-breaking language. Scala was the first language they tried to use (Java already being off the table), but they didn't like it, so they invented Kotlin.

  • Interoperable with Java (of course). Usable from all sorts of systems, but IntelliJ IDEA has first-class support.

  • Much less code, less maintenance. Encapsulates some concepts like "data classes", which do what they're supposed to for DTO definitions.

    • Inferred type on declarations. No nulls. Null-safe by design. Opt-in for nulls.
    • Implicit casts as well
    • Interface delegation
    • Lazy delegation
    • Deconstruction
    • Global infix operators; very expressive
    • Also defaults to/focuses on immutability
    • Algebraic data types/ data flow
  • Anko is statically typed XML views for Android
  • JavaScript target exists and is the focus of work. Replacement for TypeScript?

Reactive Apps with Akka and AngularJS

Reactive Apps with Akka and AngularJS -- Heiko Seeberger

media

  • He strongly recommended reading the reactive manifesto
  • Responsive: timely response / non-functional / also under load / scale up/down/out
  • Resilient: fail early
  • Message-driven: async message-passing is a way of getting reactive/responsive. Automatic decoupling leads to better error-handling, no data loss
  • Akka provides support for:
    • Actor-based model (actors are services); watch video from Channel Nine
    • Akka HTTP Server is relatively new
    • Akka is written in Scala
    • There's a Scala DSL for defining the controller (define routes)
    • The Scala compiler is pure crap. Sooooo slooooowww (62 seconds for 12 files)

During his talk, he took us through the following stages of building a scalable, resilient actor-based application with Akka.

  • First he started with static HTML
  • Then he moved on to something connected to AKKA, but not refreshing
  • W3C Server-Sent Events is a unidirectional channel from the server to the client. He next used this to have instant refresh on the client; not available in IE. Probably used by SignalR (or whatever replaced it)? Nothing is typed, though, just plain old JavaScript
  • Then he set up sharding
  • Then persistence (Cassandra, Kafka)

AKKA Distributed Data

  • Deals with keeping replicas consistent without central coordination
  • Conflict-free replicated data types (CRDTs; see the G-Counter sketch after this list)
  • Fully distributed, has pub/sub semantics
  • Uses the Gossip protocol
  • Support various consistency strategies
  • Using AKKA gives you automated scaling support (unlike the SignalR demo Urs and I did over 2 years ago, but that was a chat app as well)
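To make "conflict-free replicated data type" a little more concrete, here's a small C# sketch of the simplest CRDT, a grow-only counter. This only illustrates the idea; it is not Akka's implementation:

using System;
using System.Collections.Generic;
using System.Linq;

// Grow-only counter (G-Counter): each node increments only its own slot and
// replicas merge by taking the per-node maximum, so all replicas converge
// without central coordination.
public class GCounter
{
  private readonly Dictionary<string, long> _counts = new Dictionary<string, long>();

  public void Increment(string nodeId, long amount = 1)
  {
    long current;
    _counts.TryGetValue(nodeId, out current);
    _counts[nodeId] = current + amount;
  }

  public long Value
  {
    get { return _counts.Values.Sum(); }
  }

  public GCounter Merge(GCounter other)
  {
    var result = new GCounter();
    foreach (var nodeId in _counts.Keys.Union(other._counts.Keys))
    {
      long mine;
      long theirs;
      _counts.TryGetValue(nodeId, out mine);
      other._counts.TryGetValue(nodeId, out theirs);
      result._counts[nodeId] = Math.Max(mine, theirs);
    }
    return result;
  }
}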

AKKA Cluster Sharding

  • Partitioning of actors/services across clusters
  • Supports various strategies
  • Default strategy is to distribute unbalanced actors to new shards
  • The ShardRegion is another actor that manages communication with sharded actors (entities). This introduces a new level of indirection, which must be honored in the code (?)

AKKA Persistence

  • Event-sourcing: validate commands, journal events, apply the event after persistence.
  • The event is applied to local state only after the journal/persistence layer has indicated that the event was journaled
  • On recovery, events are replayed
  • Supports snapshotting (caching points in time)
  • Requires a change to the actor/entity to use it. All written in Scala.

Akka looks pretty good. It guarantees the ordering because ACTORS. Any given actor only exists on any shard once. If a shard goes down, the actor is recreated on a different shard, and filled with information from the persistent store to "recreate" the state of that actor.
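For C# developers, Akka.NET offers the same persistence model through its Akka.Persistence package. A minimal sketch from memory -- treat the exact API as an assumption -- with invented command and event types:

using Akka.Persistence;

// Invented command and event types, purely for illustration.
public sealed class AddAmount { public decimal Amount; }
public sealed class AmountAdded { public decimal Amount; }

public class BalanceActor : ReceivePersistentActor
{
  private decimal _balance;

  public override string PersistenceId { get { return "balance-actor"; } }

  public BalanceActor()
  {
    // Recovery: replay journaled events to rebuild local state.
    Recover<AmountAdded>(e => _balance += e.Amount);

    // Command handling: validate, journal the event, then apply it to
    // local state only after the journal confirms the write.
    Command<AddAmount>(cmd =>
    {
      if (cmd.Amount <= 0) { return; }

      Persist(new AmountAdded { Amount = cmd.Amount }, e => _balance += e.Amount);
    });
  }
}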

DDD (Domain-Driven Design) and the actor model. Watch Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask).

Code is on GitHub: seeberger/reactive_flows

Lambda core - hardcore

Lambda core - hardcore -- Jarek Ratajski

media

Focus on immutability and no side-effects. Enforced by the lambda calculus. Pretty low-level talk about lambda calculus. Interesting, but not applicable. He admitted as much at the top of the talk.

Links:

expect("poo").length.toBe(1)

expect("poo").length.toBe(1) -- Philip Hofstetter1

media

This was a talk about expectations of the length of a character. The presenter was very passionate about his talk and went into an incredible amount of detail.

  • What is a string? This is the kind of stuff every programmer needs to know.2
  • String is not a collection of bytes. It's a sequence of graphemes. string <> char[]
  • UTF-16 is crap. What about the in-memory representation? Why in God's name did Python 3 use UTF32? Unicode Transformation format.
  • What is the length of a string? ä is how many? A single character (diaeresis included) or an a with a combining diaeresis?
  • In-memory representation in Java and C# are UCS-2 (UNICODE 1); stuck in 1996, before Unicode 2.0 came out. This leaks into APIs because of how strings are returned ... string APIs use UTF-16, encoding with surrogate pairs to get to characters outside of the BMP (understood by convention, but not by the APIs that expect UTF-16 ... which has no idea what surrogate pairs are ... and counting algorithms, find, etc. won't work).
  • ECMAScript hasn't really fixed this, either. substr() can break strings; charAt() is still available and has no idea about code points. Does this apply to ES6? String equality doesn't work for the diaeresis example above.
  • So we're stuck with server-side. Who does it right? Perl. Swift. Python. Ruby. Python went through hell with backwards compatibility but with 3.3 they're doing OK again. Ruby strings are a tuple of encoding and data. All of the others have their string libraries dealing in graphemes. How did Perl always get it right? Perl has three methods for asking questions about length, in graphemes, code points or bytes
  • What about those of us using JavaScript? C#? Java? There are external libraries that we should be using -- not just for DateTime, but for string handling as well. Even ECMAScript 2015 still uses code points rather than graphemes, so the count varies depending on how the grapheme is constructed. (See the short C# illustration after this list.)
  • Security concerns: certificate authorities have to be aware of homographs (e.g. a character that looks like another one, but has a different encoding/byte sequence).
  • He recommended the book Unicode explained by Jukka K. Korpela.
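Since the notes above call out C#, here's the point as a quick C# illustration (not from the presentation): string.Length counts UTF-16 code units, not user-perceived characters (graphemes).

using System;
using System.Globalization;

public static class StringLengthDemo
{
  public static void Main()
  {
    var poo = "\U0001F4A9";                         // PILE OF POO, outside the BMP
    Console.WriteLine(poo.Length);                  // 2 -- a surrogate pair

    var precomposed = "\u00E4";                     // 'ä' as a single code point
    var combining = "a\u0308";                      // 'a' + combining diaeresis
    Console.WriteLine(precomposed.Length);          // 1
    Console.WriteLine(combining.Length);            // 2
    Console.WriteLine(precomposed == combining);    // False -- no normalization

    // StringInfo counts text elements, which is closer to graphemes.
    Console.WriteLine(new StringInfo(combining).LengthInTextElements);  // 1
  }
}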

How usability fits in UX - it's no PICNIC

How usability fits in UX - it's no PICNIC -- Myriam Jessier

media

What should a UI be?

  1. Functional
  2. Reliable
  3. Usable
  4. Convenient
  5. Pleasurable

Also nice to have:

  1. Desirable
  2. Delightful
  3. Memorable
  4. Learnable
  5. 3 more

Book recommendation: Don't make me think by Steve Krug

  • Avoid mindless and unambiguous clicks. Don't count clicks, count useless shit you need to do.
  • Let the words go. People's attention will wander.
  • UX is going to be somewhat subjective. Don't try to please everyone.
  • OMG She uses hyphens correctly.
  • She discussed the difference between UX, CX, UI.
  • Personas are placeholders for your users. See Personapp to get started working with personas.

Guidelines:

  • Consistent and standardized UI
  • Guide the user (use visual cues, nudging)
  • Make the CallToAction (CTA) interactive objects obvious
  • Give feedback on progress, interaction
  • Never make a user repeat something they already told you. You're software, you should have eidetic memory
  • Always have default values in forms (e.g. show the expected format)
  • Explain how the inputted information will be used (e.g. for marketing purposes)
  • No more "reset" button or mass-delete buttons. Don't make it possible/easy to wipe out all someone's data
  • Have clear and explanatory error or success messages (be encouraging)
  • Include a clear and visual hierarchy and navigation

Guidelines for mobile:

  • Make sure it works on all phones

  • Give incentives for sharing and purpose (engagement rates make marketing happy. CLICK THE BUTTON)

  • Keep usability and conversion in mind (not necessarily money, but you actually want people to be using your app correctly)

  • Usability (can you use your app on the lowest screen-brightness?)

  • ...and more...

  • Make it pretty (some people don't care, e.g. She very clearly said that she's not aesthetically driven, it's not her field; other people do care. A lot).

  • Give all the information a customer needs to purchase

  • Design for quick movement (no lag)

  • Do usability testing through video

  • Leverage expectations. Fit in to the environment. Search is on the left? Behind a button? Do that. Don't make a new way of searching.

  • If you offer a choice, then make them as mutually exclusive as possible. When a company talks to itself (e.g. industry jargon), then users get confused

  • The registration process should be commensurate to the thing that you're registering for

  • Small clickable ads on mobile. Make click targets appropriate.

  • Don't blame negative feedback on "fear of change". It's probably you. If people don't like it, then it might not be user-friendly. The example with Twitter's star vs. heart. It's interesting how we let the world frame our interactions. Why not both? Too complex? Would people really be confused by two buttons? One to "like" and one for "read later"?

Suggested usability testing tools:

  • Crazy Egg is $9 per month for heatmaps.
  • Qualaroo
  • Optimizely (A/B testing)
  • Usabilia
  • Userfeel
  • Trymyui

React - A trip to Russia isn't all it seems

React - A trip to Russia isn't all it seems -- Josh Sephton[^3]

media

This talk was about Web UI frameworks and how his team settled on React.

  • Angular too "all or nothing".
  • Backbone has no data-binding.
  • React looks good. Has its own routing for SPAs. Very component-heavy. Everything's a component. Nothing new here so far.
  • They built their React to replace a Wordpress-based administration form
  • Stateful components are a bad idea
  • React components are like self-contained actors/services
  • They started with Flux, but ended up with Redux. We're using Redux in our samples. I'm eyeballing how to integrate Akka.Net (although I'm not sure if that has anything to do with this).
  • ReactNative: write once, use on any device
  • Kind of superficial and kinda short but I knew all about this in React already

The reactor programming model for composable distributed computing

The reactor programming model for composable distributed computing -- Aleksandar Prokopec[^4]

media

  • Reactive programming, with events as sequences of event objects
  • Events are equivalent to a list/sequence/streams (enumerable in C#)
  • This talk is also about managing concurrency
  • There must be a boundary between outer concurrent events vs. how your application works on them
  • That's why most UI toolkits are single-threaded
  • Asynchronous is the antonym of concurrency (at least in the dictionary)
  • Filter the stream of events to compress them to frames, then render and log: the events come in, are marshaled through the serializing bottleneck and are then dispatched asynchronously to different tasks (see the Rx-flavored C# sketch after this list)
  • Reactor lets clients create their own channels (actors) from which they read events and which they register with a server so that it can publish
  • Akka supports setting up these things, Reactor is another implementation?
  • Dammit I want destructuring of function results (C# 7?)
  • It's very easy to build client/server, broadcast and even ordered synchronization using UIDs (or the pattern mentioned by Jonas in the keynote). The UID needs to be location-specific, though -- and even that's not sufficient; what you need is client-specific. For this, you need special data structures that store the data in a way that edits are automatically ordered correctly, so that the events sent for these changes are also ordered correctly
  • What is the CRDT? We just implemented an online collaborative editor: composes nicely and provides a very declarative, safe and scalable way of defining software. This is just a function (feeds back into the idea of lambdas here, actually, immutability, encapsulation)
  • Reactors
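The "compress events to frames" note above maps naturally onto Rx.NET in C#. A hedged sketch, assuming the System.Reactive package; the event source and the render/log callbacks are invented:

using System;
using System.Reactive.Concurrency;
using System.Reactive.Linq;

public static class FrameCompressionDemo
{
  // Sample() compresses the incoming stream to roughly one value per frame
  // and ObserveOn() marshals the results onto a single scheduler -- the
  // "serializing bottleneck" from the notes above.
  public static IDisposable Wire(
    IObservable<string> events, Action<string> render, Action<string> log)
  {
    return events
      .Sample(TimeSpan.FromMilliseconds(16))
      .ObserveOn(TaskPoolScheduler.Default)
      .Subscribe(e =>
      {
        render(e);
        log(e);
      });
  }
}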


  1. I am aware of the irony that the emoji symbol for "poo" is not supported on this blogging software. That was basically the point of the presentation -- that encoding support is difficult to get right. There's an issue for it: Add support for UTF8 as the default encoding.

  2. In my near-constant striving to be the worst conversational partner ever, I once gave a similar encoding lesson to my wife on a two-hour walk around a lake when she dared ask why mails sometimes have those "stupid characters" in them.

Finovate 2016: Bank2Things


At the beginning of the year, we worked on an interesting project that dipped into IOT (Internet of Things). The project was to create use cases for Crealogix's banking APIs in the real world. Concretely, we wanted to show how a customer could use these APIs in their own workflows. The use cases were to provide proof of the promise of flexibility and integrability offered by well-designed APIs.

Watch the 7-minute video of the presentation

The Use Cases

Football Club Treasurer

The first use case is for the treasurer of a local football club. The treasurer wants to be notified whenever an annual club fee is transferred from a member. The club currently uses a Google Spreadsheet to track everything, but it's updated manually. It would be really nice if the banking API could be connected -- via some scripting "glue" -- to update the spreadsheet directly, without user intervention. The treasurer would just see the most current numbers whenever he opened the spreadsheet.

The spreadsheet is in addition to the up-to-date view of payments in the banking app. The information is also available there, but not necessarily in the form that he or she would like. Linking automatically to the spreadsheet is the added value.

Chore & Goal Tracker

Imagine a family with a young son who wants to buy a drone. He would have to earn it by doing chores. Instead of tracking this manually, the boy's chores would be tabulated automatically, moving money from the parents' account to his own as he did chores. Additionally, a lamp in the boy's room would glow a color indicating how close he was to his goal. The parents wanted to track the boy's progress in a spreadsheet, recording the transfers just as they would have without any APIs.

The idea is to provide added value to the boy, who can record his chores by pressing a button and see his progress by looking at a lamp's color. The parents get to stay in their comfort zone, working with a spreadsheet as usual, but having the data entered automatically.

The Plan

It's a bit of a stretch, but it sufficed to ground the relatively abstract concept of banking APIs in an example that non-technical people could follow.

So we needed to pull quite a few things together to implement these scenarios.

  • A lamp that can be controlled via API
  • A button that can trigger an API
  • A spreadsheet accessible via API
  • An API that can transfer money between accounts
  • "Glue" logic that binds these APIs together

The Lamp

We looked at two lamps: the Philips Hue and the Lifx.

Either of these -- just judging from their websites -- would be sufficient to utterly and completely change our lives. The Hue looked like it was going to turn us into musicians, so we went with Lifx, which only threatened to give us horn-rimmed glasses and a beard (and probably skinny jeans and Chuck Taylor knockoffs).

Yeah, we think the marketing for what is, essentially, a light-bulb, is just a touch overblown. Still, you can change the color of the light bulb with a SmartPhone app, or control it via API (which is what we wanted to do).

The Button

The button sounds simple. You'd think that, in 2016, these things would be as ubiquitous as AOL CDs were in the 1990s. You'd be wrong.

There's a KickStarter project called Flic that purports to have buttons that send signals over a wireless connection. They cost about CHF20. Though we ordered some, we never saw any because of manufacturing problems. If you thought the hype and marketing for a light bulb were overblown, then you're sure to enjoy how Flic presents a button.

We quickly moved along a parallel track to get buttons that can be pressed in real life rather than just viewed from several different angles and in several different colors online.

Amazon has what they have called "Dash" buttons that customers can press to add predefined orders to their one-click shopping lists. The buttons are bound to certain household products that you tend to purchase cyclically: toilet paper, baby wipes, etc.

They sell them dirt-cheap -- $5 -- but only to Amazon Prime customers -- and only to customers in the U.S. Luckily, we knew someone in the States willing to let us use his Amazon Prime account to deliver them, naturally only to a domestic address, from which they would have to be forwarded to us here in Switzerland.

That we couldn't use them to order toilet paper in the States didn't bother us -- we were planning to hack them anyway.

These buttons showed up after a long journey and we started trapping them in our own mini-network so that we could capture the signal they send and interpret it as a trigger. This was not ground-breaking stuff, but we really wanted the demonstrator to be able to press a physical button on stage to trigger the API that would cascade other APIs and so on.

Of course we could have just hacked the whole thing so that someone presses a button on a screen somewhere -- and we programmed this as a backup plan -- but the physicality of pressing a button was the part of the demonstration that was intended to ground the whole idea for non-technical users.1

The Spreadsheet

If you're going to use an API to modify a spreadsheet, then that spreadsheet has to be available online somewhere. The spreadsheet application in Google Docs is a good candidate.

The API allows you to add or modify existing data, but that's pretty much it. When you make changes, they show up immediately, with no ceremony. That, unfortunately, doesn't make for a very nice-looking demo.

Google Docs also offers a JavaScript-like scripting language that lets you do more. We didn't just want to insert rows; we wanted charts to automatically update and move down the page to accommodate the new row. All animated, thank you very much.

This took a couple of pages of scripting and a good amount of time. It's also no longer a solution that an everyday user is likely to build themselves. And, even though we pushed as hard as we could, we also didn't get everything we wanted. The animation is very jerky (watch the video linked above) but gets the job done.

The Glue

So we've got a bunch of pieces that are all capable of communicating in very similar ways. The final step is to glue everything together with a bit of script. There are several services available online, like IFTTT -- If This Then That -- that allow you to code simple logic to connect signals to actions.

In our system, we had the following signals:

  • Transfer was made to a bank account
  • Button was pressed

and the following actions:

  • Insert data into Google Spreadsheet
  • Set color of lamp
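Purely as a conceptual sketch -- this is not the code we actually ran -- the glue that binds those signals to those actions has roughly this shape in C#. Every type and member below is invented for illustration:

using System.Threading.Tasks;

// Invented abstractions over the spreadsheet and lamp APIs.
public interface ISpreadsheetClient { Task AppendRowAsync(string label, decimal amount); }
public interface ILampClient { Task SetColorAsync(string color); }

public class DemoGlue
{
  private readonly ISpreadsheetClient _sheet;
  private readonly ILampClient _lamp;

  public DemoGlue(ISpreadsheetClient sheet, ILampClient lamp)
  {
    _sheet = sheet;
    _lamp = lamp;
  }

  // Signal: a transfer was made to a bank account.
  public Task OnTransferReceived(string member, decimal amount)
  {
    return _sheet.AppendRowAsync(member, amount);
  }

  // Signal: a (hacked Dash) button was pressed.
  public async Task OnButtonPressed(string chore, decimal value, decimal totalSoFar, decimal goal)
  {
    await _sheet.AppendRowAsync(chore, value);
    await _lamp.SetColorAsync(totalSoFar + value >= goal ? "green" : "blue");
  }
}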

The Crealogix API and UI

So we're going to betray a tiny secret here. Although the product demonstrated on-stage did actually do what it said, it didn't do it using the Crealogix API to actually transfer money. That's the part that we were actually selling and it's the part we ended up faking/mocking out because the actual transfer is beside the point. Setting up bank accounts is not so easy, and the banks take umbrage at creating them for fake purposes.

Crealogix could have let us use fake testing accounts, but even that would have been more work than it was worth: if we're already faking, why not just fake in the easiest way possible by skipping the API call to Crealogix and only updating the spreadsheet?

Likewise, the entire UI that we included in the product was mocked up to include only the functionality required by the demonstration. You can see an example here -- of the login screen -- but other screens are linked throughout this article. Likewise, the Bank2Things screen shown above and to the left is a mockup.

Wrapup

So what did Encodo actually contribute?

  • We used the Crealogix UX and VSG to mock up all of the app screens that you see linked in this article. We did all of the animation, logic and styling.
  • We built two Google Spreadsheets and hooked them up to everything else
  • We hooked up the Lifx lamp API into our system
  • We hacked the Amazon Dash buttons to communicate in our own network instead of beaming home to the mothership
  • We built a web site to handle any mocking/faking that needed to be done for the demo and through which the devices communicated
  • We provided a VM (Virtual Machine) on which everything ran (other than the Google Spreadsheets)

As last year -- when we helped Crealogix create the prototype for their BankClip for Finovate 2015 -- we had a lot of fun investigating all of these cutting-edge technologies and putting together a custom solution in time for Finovate 2016.



  1. As it turns out, if you watch the 7-minute video of the presentation, nowhere do you actually see a button. Maybe they could see them from the audience.

Mini-applications and utilities with Quino

In several articles last year1, I went into a lot of detail about the configuration and startup for Quino applications. Those posts discuss a lot about what led to the architecture Quino has for loading up an application.

Some of you might be wondering: what if I want to start up and run an application that doesn't use Quino? Can I build applications that don't use any fancy metadata because they're super-simple and don't even need to store any data? Those are the kind of utility applications I make all the time; do you have anything for me, you continue to wonder?

As you probably suspected from the leading question: You're in luck. Any functionality that doesn't need metadata is available to you without using any of Quino. We call this the "Encodo" libraries, which are the base on which Quino is built. Thanks to the fundamental changes made in Quino 2, you have a wealth of functionality available in just the granularity you're looking for.

Why use a Common Library?

Instead of writing such small applications from scratch -- and we know we could write them -- why would we want to leverage existing code? What are the advantages of doing this?

  • Writing code that is out of scope takes time away from writing code that is in scope.
  • Code you never write has no bugs.
  • It also doesn't require maintenance or documentation.
  • While library code is not guaranteed to be bug-free, it's probably much better off than the code you just wrote seconds ago.
  • Using a library increases the likelihood of having robust, reliable and extendible code for out-of-scope application components.
  • One-off applications tend to be maintainable only by the originator. Applications using a common library can be maintained by anyone familiar with that library.
  • Without a library, common mistakes must be fixed in all copies, once for each one-off application.
  • The application can benefit from bug fixes and improvements made to the library.
  • Good practices and patterns are encouraged/enforced by the library.

What are potential disadvantages?

  • The library might compel a level of complexity that makes it take longer to create the application than writing it from scratch
  • The library might force you to use components that you don't want.
  • The library might hamstring you, preventing innovation.

A developer unfamiliar with a library -- or one who is too impatient to read up on it -- will feel these disadvantages more acutely and earlier.

Two Sample Applications

Let's take a look at some examples below to see how the Encodo/Quino libraries stack up. Are we able to profit from the advantages without suffering from the disadvantages?

We're going to take a look at two simple applications:

  1. An application that loads settings for Windows service-registration. We built this for a customer product.
  2. The Quino Code Generator that we use to generate metadata and ORM classes from the model

Windows Service Installer

The actual service-registration part is boilerplate generated by Microsoft Visual Studio2, but we'd like to replace the hard-coded strings with customized data obtained from a configuration file. So how do we get that data?

  • The main requirement is that the user should be able to indicate which settings to use when registering the Windows service.
  • The utility could read them in from the command line, but it would be nicer to read them from a configuration file.

That doesn't sound that hard, right? I'm sure you could just whip something together with an XmlDocument and some hard-coded paths and filenames that would do the trick.3 It might even work on the first try, too. But do you really want to bother with all of that? Wouldn't you rather just get the scaffolding for free and focus on the part where you load your settings?

Getting the Settings

The following listing shows the main application method, using the Encodo/Quino framework libraries to do the heavy lifting.

[NotNull]
public static ServiceSettings LoadServiceSettings()
{
  ServiceSettings result = null;
  var transcript = new ApplicationManager().Run(
    CreateServiceConfigurationApplication,
    app => result = app.GetInstance<ServiceSettings>()
  );

  if (transcript.ExitCode != ExitCodes.Ok)
  {
    throw new InvalidOperationException(
      "Could not read the service settings from the configuration file." + 
      new SimpleMessageFormatter().GetErrorDetails(transcript.Messages)
    );
  }

  return result;
}

If you've been following along in the other articles (see first footnote below), then this structure should be very familiar. We use an ApplicationManager() to execute the application logic, creating the application with CreateServiceConfigurationApplication and returning the settings configured by the application in the second parameter (the "run" action). If anything went wrong, we get the details and throw an exception.

You can't see it, but the library provides debug/file logging (if you enable it), debug/release mode support (exception-handling, etc.) and everything is customizable/replaceable by registering with an IOC.

Configuring the Settings Loader

Soooo...I can see where we're returning the ServiceSettings, but where are they configured? Let's take a look at the second method, the one that creates the application.

private static IApplication CreateServiceConfigurationApplication()
{
  var application = new Application();
  application
    .UseSimpleInjector()
    .UseStandard()
    .UseConfigurationFile("service-settings.xml")
    .Configure<ServiceSettings>(
      "service", 
      (settings, node) =>
      {
        settings.ServiceName = node.GetValue("name", settings.ServiceName);
        settings.DisplayName = node.GetValue("displayName", settings.DisplayName);
        settings.Description = node.GetValue("description", settings.Description);
        settings.Types = node.GetValue("types", settings.Types);
      }
    ).RegisterSingle<ServiceSettings>();

  return application;
}

  1. First, we create a standard Application, defined in the Encodo.Application assembly. What does this class do? It does very little other than manage the main IOC (see articles linked in the first footnote for details).
  2. The next step is to choose an IOC, which we do by calling UseSimpleInjector(). Quino includes support for the SimpleInjector IOC out of the box. As you can see, you must include this support explicitly, so you're also free to assign your own IOC (e.g. one using Microsoft's Unity). SimpleInjector is very lightweight and super-fast, so there's no downside to using it.
  3. Now we have an application with an IOC that doesn't have any registrations on it. How do we get more functionality? By calling methods like UseStandard(), defined in the Encodo.Application.Standard assembly. Since I know that UseStandard() pulls in what I'm likely to need, I'll just use that.4
  4. The next line tells the application the name of the configuration file to use.5
  5. The very next line is already application-specific code, where we configure the ServiceSettings object that we want to return. For that, there's a Configure method that returns an object from the IOC along with a specific node from the configuration data. This method is called only if everything started up OK.
  6. The final call to RegisterSingle makes sure that the ServiceSettings object created by the IOC is a singleton (it would be silly to configure one instance and return another, unconfigured one).

Basically, because this application is so simple, it has already accomplished its goal by the time the standard startup completes. At the point that we would "run" this application, the ServiceSettings object is already configured and ready for use. That's why, in LoadServiceSettings(), we can just get the settings from the application with GetInstance() and exit immediately.

Code Generator

The code generator has a bit more code, but follows the same pattern as the simple application above. In this case, we use the command line rather than the configuration file to get user input.

Execution

The main method defers all functionality to the ApplicationManager, passing along two methods, one to create the application, the other to run it.

internal static void Main()
{
  new ApplicationManager().Run(CreateApplication, GenerateCode);
}

Configuration

As before, we first create an Application, then choose the SimpleInjector and some standard configuration and registrations with UseStandard(), UseMetaStandardServices() and UseMetaTools().6

We set the application title to "Quino Code Generator" and then include objects with UseSingle() that will be configured from the command line and used later in the application.7 And, finally, we add our own ICommandSet to the command-line processor that will configure the input and output settings. We'll take a look at that part next.

private static IApplication CreateApplication(
  IApplicationCreationSettings applicationCreationSettings)
{
  var application = new Application();

  return
    application
    .UseSimpleInjector()
    .UseStandard()
    .UseMetaStandardServices()
    .UseMetaTools()
    .UseTitle("Quino Code Generator")
    .UseSingle(new CodeGeneratorInputSettings())
    .UseSingle(new CodeGeneratorOutputSettings())
    .UseUnattendedCommand()
    .UseCommandSet(CreateGenerateCodeCommandSet(application))
    .UseConsole();
}

Command-line Processing

The final bit of the application configuration is to see how to add items to the command-line processor.

Basically, each command set consists of required values, optional values and zero or more switches that are considered part of a set.

The one for i simply sets the value of inputSettings.AssemblyFilename to whatever was passed on the command line after that parameter. Note that it pulls the inputSettings from the application to make sure that it sets the values on the same singleton reference as will be used in the rest of the application.

The code below shows only one of the code-generator--specific command-line options.8

private static ICommandSet CreateGenerateCodeCommandSet(
  IApplication application)
{
  var inputSettings = application.GetSingle<CodeGeneratorInputSettings>();
  var outputSettings = application.GetSingle<CodeGeneratorOutputSettings>();

  return new CommandSet("Generate Code")
  {
    Required =
    {
      new OptionCommandDefinition<string>
      {
        ShortName = "i",
        LongName = "in",
        Description = Resources.Program_ParseCommandLineArgumentIn,
        Action = value => inputSettings.AssemblyFilename = value
      },
      // And others...
    },
  };
}

Code-generation

Finally, let's take a look at the main program execution for the code generator. It shouldn't surprise you too much to see that the logic consists mostly of getting objects from the IOC and telling them to do stuff with each other.9

I've highlighted the code-generator--specific objects in the code below. All other objects are standard library tools and interfaces.

private static void GenerateCode(IApplication application)
{
  var logger = application.GetLogger();
  var inputSettings = application.GetInstance<CodeGeneratorInputSettings>();

  if (!inputSettings.TypeNames.Any())
  {
    logger.Log(Levels.Warning, "No types to generate.");
  }
  else
  {
    var modelLoader = application.GetInstance<IMetaModelLoader>();
    var metaCodeGenerator = application.GetInstance<IMetaCodeGenerator>();
    var outputSettings = application.GetInstance<CodeGeneratorOutputSettings>();
    var modelAssembly = AssemblyTools.LoadAssembly(
      inputSettings.AssemblyFilename, logger
    );

    outputSettings.AssemblyDetails = modelAssembly.GetDetails();

    foreach (var typeName in inputSettings.TypeNames)
    {
      metaCodeGenerator.GenerateCode(
        modelLoader.LoadModel(modelAssembly, typeName), 
        outputSettings,
        logger
      );
    }
  }
}

So that's basically it: no matter how simple or complex your application, you configure it by indicating what stuff you want to use, then use all of that stuff once the application has successfully started. The Encodo/Quino framework provides a large amount of standard functionality. It's yours to use as you like and you don't have to worry about building it yourself. Even your tiniest application can benefit from sophisticated error-handling, command-line support, configuration and logging without lifting a finger.


var fileService = new ServiceInstaller();
fileService.StartType = ServiceStartMode.Automatic;
fileService.DisplayName = "Quino Sandbox";
fileService.Description = "Demonstrates a Quino-based service.";
fileService.ServiceName = "Sandbox.Services";

See the ServiceInstaller.cs file in the Sandbox.Server project in Quino 2.1.2 and higher for the full listing.

<?xml version="1.0" encoding="utf-8" ?>
<config>
  <service>
    <name>Quino.Services</name>
    <displayName>Quino Utility</displayName>
    <description>The application to run all Quino backend services.</description>
    <types>All</types>
  </service>
</config>

But that method is just a composition of over a dozen other methods. If, for whatever reason (perhaps dependencies), you don't want all of that functionality, you can just call the subset of methods that you do want. For example, you could call UseApplication() from the Encodo.Application assembly instead. That method includes only the support for:

    * Processing the command line (`ICommandSetManager`)
    * Locating external files (`ILocationManager`)
    * Loading configuration data from file (`IConfigurationDataLoader`)
    * Debug- and file-based logging (`IExternalLoggerFactory`)
    * and interacting with the `IApplicationManager`.

If you want to go even lower than that, you can try UseCore(), defined in the Encodo.Core assembly and then pick and choose the individual components yourself. Methods like UseApplication() and UseStandard() are tried and tested defaults, but you're free to configure your application however you want, pulling from the rich trove of features that Quino offers.

You'll notice that I didn't use Configure<ILocationManager>() for this particular usage. That's ordinarily the way to go if you want to make changes to a singleton before it is used. However, if you want to change where the application looks for configuration files, then you have to change the location manager before it's used and before any other configuration takes place. It's a special object that is available before the IOC has been fully configured. To reiterate from other articles (because it's important), the order of operations we're interested in here is:

     1. Create application (this is where you call `Use*()` to build the application)
     2. Get the location manager to figure out the path for `LocationNames.Configuration`
     3. Load the configuration file
     4. Execute all remaining actions, including those scheduled with calls to `Configure()`

If you want to change the configuration-file location, then you have to get in there before the startup starts running -- and that's basically during application construction. Alternatively, you could also call UseConfigurationDataLoader() to register your own object to actually load configuration data and do whatever the heck you like in there, including returning constant data. :-)
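Here's a minimal sketch of that approach -- adjusting the location manager during application construction, before startup runs. GetLocationManager() and LocationNames.Configuration appear above; the SetLocation() call and the path are assumptions, invented for illustration:

private static IApplication CreateServiceConfigurationApplication()
{
  var application = new Application();

  application
    .UseSimpleInjector()
    .UseStandard();

  // Assumption: the location manager exposes a way to override named
  // locations; SetLocation() and the path below are hypothetical.
  application.GetLocationManager().SetLocation(
    LocationNames.Configuration,
    @"C:\ProgramData\Sandbox\Config");

  application.UseConfigurationFile("service-settings.xml");

  return application;
}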


  1. See Encodo's configuration library for Quino Part 1, Part 2 and Part 3 as well as API Design: Running an Application Part 1 and Part 2 and, finally, Starting up an application, in detail.

  2. That boilerplate is the ServiceInstaller snippet shown just above these footnotes.

  3. The standard implementation of Quino's ITextKeyValueNodeReader supports XML, but it would be trivial to create and register a version that supports JSON (QNO-4993) or YAML. The configuration file for the utility is the XML listing shown just above these footnotes.

  4. If you look at the implementation of the UseStandard method10, it pulls in a lot of stuff, like support for BCrypt, enhanced CSV and enum-value parsing and standard configuration for various components (e.g. the file log and command line). It's called "Standard" because it's the stuff we tend to use in a lot of applications.

  5. By default, the application will look for this file next to the executable. You can configure this as well, by getting the location manager with GetLocationManager() and setting values on it.

  6. The metadata-specific analog to UseStandard() is UseMetaStandard(), but we don't call that. Instead, we call UseMetaStandardServices(). Why? The answer is that we want the code generator to be able to use some objects defined in Quino, but the code generator itself isn't a metadata-based application. We want to include the IOC registrations required by metadata-based applications without adding any of the startup or shutdown actions. Many of the standard Use*() methods included in the base libraries have analogs like this. The Use*Services() analogs are also very useful in automated tests, where you want to be able to create objects but don't want to add anything to the startup.

  7. Wait, why didn't we call RegisterSingle()? For almost any object, we could totally do that. But objects used during the first stage of application startup -- before the IOC is available -- must go in the other IOC, accessed with SetSingle() and GetSingle().

  8. The full listing is in Program.cs in the Quino.CodeGenerator project in any 2.x version of Quino.

  9. Note that, once the application is started, you can use GetInstance() instead of GetSingle() because the IOC is now available and all singletons are mirrored from the startup IOC to the main IOC. In fact, once the application is started, it's recommended to use GetInstance() everywhere, for consistency and to prevent the subtle distinction between IOCs -- present only in the first stage of startup -- from bleeding into your main program logic.

  10. If you have the Quino source code handy, you can look it up there, but if you have ReSharper installed, you can just F12 on UseStandard() to decompile the method. In the latest DotPeek, the extension methods are even displayed much more nicely in decompiled form.

Git: Managing local commits and branches

At Encodo, we've got a relatively long history with Git. We've been using it exclusively for our internal source control since 2010.1

Git Workflows

When we started with Git at Encodo, we were quite cautious. We didn't change what had already worked for us with Perforce.2 That is: all developers checked in to a central repository on a mainline or release branch. We usually worked with the mainline and never used personal or feature branches.

Realizing the limitation of this system, we next adopted an early incarnation of GitFlow, complete with command-line support for it. A little while later, we switched to our own streamlined version of GitFlow without a dev branch, which we published in an earlier version of the Encodo Git Handbook.3

We're just now testing the waters of Pull Requests instead of direct commits to master and feature branches. Before we can make this move, though, we need to raise the comfort level that all of our developers have toward creating branches and manipulating commits. We need to take the magic and fear out of Git -- "but that's a pushed commit!"4 -- and learn to view Git as a toolbox that we can make work for us rather than a mysterious process to whose whims we must comply.5

General Rules

Before we get started, let's lay down some ground rules for working with Git and source control, in general.

  • Use branches
  • Don't use too many branches at once
  • Make small pull requests
  • Use no more than a few unpushed commits
  • Get regular reviews

As you can see, the rules describe a process of incremental changes. If you stick to them, you'll have much less need for the techniques described below. In case of emergency, though, let's demystify some of what Git does.

If you haven't done so already, you should really take a look at some documentation of how Git actually works. There are two sources I can recommend:

  • The all-around excellent and extremely detailed Official Git Documentation. It's well-written and well-supplied with diagrams, but quite detailed.
  • The Encodo Git Handbook summarizes the details of Git we think are important, as well as setting forth best practices and a development process.

Examples

All examples and screenshots are illustrated with the SmartGit log UI.

Before you do any of the manipulation shown below, **always make sure your working tree has been cleared**. That means there are no pending changes in it. Use the `stash` command to put pending changes to the side.
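From the command line, that's a quick stash/unstash around whatever you're about to do:

git stash        # set pending changes aside
# ...manipulate branches and commits...
git stash pop    # restore the pending changes afterward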

Moving branches

In SmartGit, you can grab any local branch marker and drag it to a new location. SmartGit will ask what you want to do with the dropped branch marker, but you'll almost always just want to set it to the commit on which you dropped it.

This is a good way of easily fixing the following situation:

  1. You make a bunch of commits on the master branch
  2. You get someone to review these local commits
  3. They approve the commits, but suggest that you make a pull request instead of pushing to master. A good reason for this might be that both the developer and the face-to-face reviewer think another reviewer should provide a final stamp of approval (i.e. the other reviewer is the expert in an affected area)

In this case, the developer has already moved their local master branch to a newer commit. What to do?

Create a pull-request branch

Create and check out a pull-request branch (e.g. mvb/serviceImprovements).

image

image

Set master to the origin/master

Move the local master branch back to origin/master. You can do this in two ways:

  • Check out the master branch and then reset to the origin/master branch or...
  • Just drag the local master branch to the origin/master commit.

image

Final: branches are where they belong

In the end, you've got a local repository that looks as if you'd made the commits on the pull-request branch in the first place. The master branch no longer has any commits to push.

image
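If you'd rather do the same fix from the command line, it looks roughly like this, using the branch name from the example above:

git checkout -b mvb/serviceImprovements      # create and check out the pull-request branch at the current commit
git checkout master
git reset --hard origin/master               # move the local master branch back to the origin
git push -u origin mvb/serviceImprovements   # publish the branch for the pull request, when you're ready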

Moving & joining commits

SmartGit supports drag&drop move for local commits. Just grab a commit and drop it to where you'd like to have it in the list. This will often work without error. In some cases, like when you have a lot of commits addressing the same areas in the same files, SmartGit will detect a merge conflict and will be unable to move the commit automatically. In these cases, I recommend that you either:

  • Give up. It's probably not that important that the commits are perfect.
  • Use the techniques outlined in the long example below instead.

You can also "join" -- also called "squash" in Git parlance -- any adjoining commits into a single commit. A common pattern you'll see is for a developer to make changes in response to a reviewer's comments and save them in a new commit. The developer can then move that commit down next to the original commit from which the changes stemmed and join the commits to "repair" the original commit after review. You can at the same time edit the commit message to include the reviewer's name. Nice, right?

Here's a quick example:

Initial: three commits

We have three commits, but the most recent one should be squashed with the first one.

image

Move a commit

Select the most recent commit and drag it to just above the commit with which you want to join it. This operation might fail.6

image

Squash selected commits

Select the two commits (it can be more) and squash/join them. This operation will not fail.

image

Final: two commits

When you're done, you should see two commits: the original one has now been "repaired" with the additional changes you made during the review. The second one is untouched and remains the top commit.

image
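The command-line counterpart to the move/squash above is an interactive rebase -- a rough sketch, not what SmartGit does internally:

git rebase -i HEAD~3   # open the last three commits in an editor
# In the editor: move the line for the commit you want to join so that it sits directly
# below its target, then change its 'pick' to 'squash' (keep both messages) or 'fixup'.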

Diffing commits

You can squash/join commits when you merge or you can squash/join commits when you cherry-pick. If you've got a bunch of commits that you want to combine, cherry-pick those commits but don't commit them.

You can also use this technique to see what has changed between two branches. There are a lot of ways to do this, and plenty of guides will show you how to do it from the command line.

In particular, Git allows you to easily display the list of commits between two other commits as well as showing the combined differences in all of those commits in a patch format. The patch format isn't very easy to use for diffing from a GUI client, though. Most of our users know how to use the command line, but use SmartGit almost exclusively nonetheless -- because it's faster and more intuitive.

So, imagine you've made several commits to a feature or release branch and want to see what would be merged to the master branch. It would be nice to see the changes in the workspace as a potential commit on master so you can visually compare the changes as you would a new commit.

Here's a short, visual guide on how to do that.

Select commits to cherry-pick

Check out the target branch (master in this example) and then select the commits you want to diff against it.

image

Do not commit

When you cherry-pick, leave the changes to accumulate in the working tree. If you commit them, you won't be able to diff en bloc as you'd like.

image

Final: working tree

The working tree now contains the differences in the cherry-picked commits.

image

Now you can diff files to your heart's content to verify the changes.
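For reference, the rough command-line equivalent of this flow is a no-commit cherry-pick:

git checkout master
git cherry-pick --no-commit <first-commit>^..<last-commit>   # accumulate the changes without committing
git diff --staged                                            # or diff the result in your GUI client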

Working-tree files

Once you have changes in the working tree that are already a part of other commits, you might be tempted to think you have to revert the changes because they're already committed, right?

You of course don't have to do that. You can let the original commits die on the vine and make new ones, as you see fit.

Suppose after looking at the differences between our working branch and the master branch, you decide you want to integrate them. You can do this in several ways.

  1. You could clear the working tree7, then merge the other branch to master to integrate those changes in the original commits.
  2. Or you could create one or more new commits out of the files in the workspace and commit those to master. You would do this if the original commits had errors or incomplete comments or had the wrong files in them.
  3. Or you could clear the working tree and re-apply the original commits by cherry-picking and committing them. Now you have copies of those commits and you can edit the messages to your heart's content.

Even if you don't merge the original commits as in option (1) above, and you create new commits with options (2) and (3), you can still merge the branch so that Git is aware that all work from that branch has been included in master. You don't have to worry about applying the same work twice. Git will normally detect that the changes to be applied are exactly the same and will merge automatically. If not, you can safely just resolve any merge conflicts by selecting the master side.8
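If a conflict does come up in that merge, keeping the master side from the command line looks roughly like this; the branch and file names are placeholders:

git merge feature/old-work                        # the branch whose commits were already re-applied to master
git checkout --ours -- path/to/conflicted-file    # keep the version already on master
git add path/to/conflicted-file
git commit                                        # conclude the merge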

An example of reorganizing commits

Abandon hope, all ye who enter here. If you follow the rules outlined above, you will never get into the situation described in this section. That said...when you do screw something up locally, this section might give you some idea of how to get out of it. Before you do anything else, though, you should consider how you will avoid repeating the mistake that got you here. You can only do things like this with local commits or commits on private branches.

The situation in this example is as follows:

  • The user has made some local commits and reviewed them, but did not push them.
  • Other commits were made, including several merge commits from other pull requests.
  • The new commits still have to be reviewed, but the reviewer can no longer sign the commits because they are rendered immutable by the merge commits that were applied afterward.
  • It's difficult to review these commits face-to-face and absolutely unconscionable to create a pull request out of the current local state of the master branch.
  • The local commits are too confusing for a reviewer to follow.

The original mess

So, let's get started. The situation to clean up is shown in the log-view below.

image

Pin the local commits

Branches in Git are cheap. Local ones even more so. Create a local branch to pin the local commits you're interested in so that they stay in the view. The log view automatically hides commits that aren't referenced by either a branch or a tag.9

image

Choose your commits

Step one: find the commits that you want to save/re-order/merge.

image

The diagram below shows the situation without arrows. There are 17 commits we want, interspersed with 3 merge commits that we don't want.10

image

Reset local master

Check out the master branch and reset it back to the origin.

image

Cherry-pick commits

Cherry-pick and commit the local commits that you want to apply to master. This will make copies of the commits on pin.

image

Master branch with 17 commits

When you're done, everything should look nice and neat, with 17 local commits on the master branch. You're now ready to get a review for the handful of commits that haven't had them yet.11

image

Delete the temporary branch

You now have copies of the commits on your master branch, so you no longer care about the pin branch or any of the commits it was holding in the view. Delete it.

image

That pesky merge

Without the pin, the old mess is no longer displayed in the log view. Now I'm just missing the merge from the pull request/release branch. I just realized, though: if I merge on top of the other commits, I can no longer edit those commits in any way. When those commits are reviewed and the reviewer wants me to fix something, my hands will be just as tied as they were in the original situation.

image

Inserting a commit

If the tools above worked once, they'll work again. You do not have to go back to the beginning, you do not have to dig unreferenced commits out of the Git reflog.

Instead, you can create the pin branch again, this time to pin your lovely, clean commits in place while you reset the master branch (as before) and apply the merge as the first commit.

image

Rebase pin onto master

Now we have a local master branch with a single merge commit that is not on the origin. We also have a pin branch with 17 commits that are not on the origin.

Though we could use cherry-pick to copy the individual commits from pin to master, we'll instead rebase the commits. The rebase operation is more robust and was made for these situations.12

image

pin is ready

We're almost done. The pin branch starts with the origin/master, includes a merge commit from the pull request and then includes 17 commits on top of that. These 17 commits can be edited, squashed and changed as required by the review.

image

Fast-forward master

Now you can switch to the master branch, merge the pin branch (you can fast-forward merge) and then delete the pin branch. You're done!

image
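For the command-line inclined, the final sequence boils down to something like this; the pull-request branch name is illustrative:

git branch pin                          # pin the clean local commits
git checkout master
git reset --hard origin/master          # move local master back to the origin
git merge --no-ff release/my-feature    # apply the pull-request merge first
git rebase master pin                   # replay the 17 commits on top of the merge
git checkout master
git merge --ff-only pin                 # fast-forward master onto the rebased commits
git branch -d pin                       # the temporary branch is no longer needed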

Conclusion

I hope that helps take some of the magic out of Git and helps you learn to make it work for you rather than vice versa. With just a few simple tools -- along with some confidence that you're not going to lose any work -- you can do pretty much anything with local commits.13

h/t to Dani and Fabi for providing helpful feedback.


If you look closely, you can even see two immediately subsequent merges where I merged the branch and committed it. I realized there was a compile error and undid the commit, added the fixes and re-committed. However, the re-commit was no longer a merge commit so Git "forgot" that the pull-request branch had been merged. So I had to merge it again in order to recapture that information.

This is going to happen to everyone who works more than casually with Git, so isn't it nice to know that you can fix it? No-one has to know.


  1. Over five years counts as a long time in this business.

  2. I haven't looked at their product palette in a while. They look to have gotten considerably more enterprise-oriented. The product palette is now split up between the Helix platform, Helix versioning services, Helix Gitswarm and more.

  3. But which we've removed from the most recent version, 3.0.

  4. This is often delivered in a hushed tone with a note of fervent belief that having pushed a commit to the central repository makes it holy. A commit pushed to the central repository on master or a release branch is indeed immutable, but everything else can be changed. This is the reason we're considering a move to pull requests: it would make sure that commits become immutable only when they are ready rather than as a side-effect of wanting to share code with another developer.

  5. In all cases, when you manipulate commits -- especially merge commits -- you should minimally verify that everything still builds and optimally make sure that tests run green.

  6. If the commits over which you're moving contain changes that conflict with the ones in the commit to be moved, Git will not be able to move that commit without help. In that case, you'll either have to (A) give up or (B) use the more advanced techniques shown in the final example in this blog.

  7. That is, in fact, what I did when preparing this article. Since I'm not afraid of Git, I manipulated my local workspace, safe in the knowledge that I could just revert any changes I made without losing work.

  8. How do we know this? Because we just elected to create our own commits for those changes. Any merge conflicts that arise are due to the commits you expressly didn't want conflicting with the ones that you do, which you've already committed to master.

  9. You can elect to show all commits, but that would then show a few too many unwanted commits lying around as you cherry-pick, merge and rebase to massage the commits to the way you'd like them. Using a temporary branch tells SmartGit which commits you're interested in showing in the view.

  10. Actually, we do want to merge all changes from the pull-request branch but we don't want to do it in the three awkward commits that we used as we were working. While it was important at the time that the pull-request be merged in order to test, we want to do it in one smooth merge-commit in the final version.

  11. You may be thinking: what if I want to push the commits that have been reviewed to master and create a pull request for the remaining commits? Then you should take a look in the section above, called Moving branches, where we do exactly that.

  12. Why? As you saw above, when you cherry-pick, you have to be careful to get the right commits and apply them in the right order. The situation we currently have is exactly what rebase was made for. The rebase command will get the correct commits and apply them in the correct order to the master branch. If there are merge conflicts, you can resolve them with the client and the rebase automatically picks up where you left off. If you elect to cherry-pick the commits instead and the 8th out of 17 commits fails to merge properly, it's up to you to pick up where you left off after solving the merge conflict. The rebase is the better choice in this instance.

  13. Here comes the caveat: within reason. If you've got merge commits that you have to keep because they cost a lot of blood, sweat and tears to create and validate, then don't cavalierly throw them away. Be practical about the "prettiness" of your commits. If you really would like commit #9 to be between commits #4 and #5, but SmartGit keeps telling you that there is a conflict when trying to move that commit, then reconsider how important that move is. Generally, you should just forget about it because there's only so much time you should spend massaging commits. This article is about making Git work for you, but don't get obsessive about it.

v2.1.1 & v2.1.2: Bug fixes for web authentication, logging and services

The summary below describes major new features, items of note and breaking changes. The full list of issues for 2.1.1 and full list of issues for 2.1.2 are available for those with access to the Encodo issue tracker.

Highlights

  • Improved configuration, logging and error-handling for Windows services. (QNO-4992, QNO-5043, QNO-5057, QNO-5076, QNO-5077, QNO-5109)
  • Schema-based validation is once again applied. Without these validators, it was possible to make a model without the required meta-ids. During migration, this caused odd behavior. (QNO-5118)
  • Use TPL and async/await for services (QNO-5113)
  • Added new GetList(IEnumerable<IMetaRelation>) method to help products avoid lazy-loading (QNO-5113)
  • Reduce traffic for the EventLogger and MailLogger (QNO-5080)
  • Improve usability and error-reporting in the Quino Migrator

Breaking changes

  • The ConfigureDataProviderActionBase has been replaced with ConfigureDataProviderAction.
  • The standard implementations for IFeedback and IStatusFeedback as well as the other special-purpose feedbacks (e.g. IIncidentReporterSubmitterFeedback, ISchemaMigratorFeedback) have all been updated to require an IFeedbackLogger or IStatusLogger in the constructors. This was done to ensure that messages sent to feedbacks are logged, as noted in the highlights above. If you've implemented your own feedbacks, you'll have to accommodate the new constructors.

Profiling: that critical 3% (Part II)

image

In part I of this series, we discussed some core concepts of profiling. In that article, we covered not only the problem at hand, but also how to think about fixing performance problems and how to reduce the likelihood that they get out of hand in the first place.

In this second part, we'll go into detail and try to fix the problem.

Reevaluating the Requirements

Since we have new requirements for an existing component, it's time to reconsider the requirements for all stakeholders. In terms of requirements, the IScope can be described as follows:

  1. Hold a list of objects in LIFO order
  2. Hold a list of key/value pairs with a unique name as the key
  3. Return the value/reference for a key
  4. Return the most appropriate reference for a given requested type. The most appropriate object is the one that was added with exactly the requested type. If no such object was added, then the first object that conforms to the requested type is returned
  5. These two piles of objects are entirely separate: if an object is added by name, we do not expect it to be returned when a request for an object of a certain type is made

There is more detail, but that should give you enough information to understand the code examples that follow.
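To anchor the discussion, here's a simplified sketch of what such an interface might look like. The member names mirror those used in the code below (AddUnnamed(), the string indexer, GetInstances()), but the Add(string, object) signature is an assumption and this is an illustration rather than the full Quino definition.

public interface IScope
{
  // Requirement 1: unnamed objects are held in LIFO order.
  void AddUnnamed(object value);

  // Requirement 2: key/value pairs are held separately from the unnamed objects.
  void Add(string key, object value);

  // Requirement 3: return the value/reference for a key (or null if none).
  object this[string key] { get; }

  // Requirement 4: return the most appropriate reference(s) for the requested type,
  // with an exact-type match taking precedence over merely conforming types.
  IEnumerable<TService> GetInstances<TService>();
}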

Usage Patterns

There are many ways of implementing the functional requirements listed above. While you can implement the feature from the requirements alone, it's very helpful to know the actual usage patterns when trying to optimize code.

Therefore, we'd like to know exactly what kind of contract our code has to implement -- and to not implement any more than was promised.

Sometimes a hopeless optimization task gets a lot easier when you realize that you only have to optimize for a very specific situation. In that case, you can leave the majority of the code alone and optimize a single path through the code to speed up 95% of the calls. All other calls, while perhaps a bit slow, will at least still yield the correct results.

And "optimized" doesn't necessarily mean that you have to throw all of your language's higher-level constructs out the window. Once your profiling tool tells you that a particular bit of code has introduced a bottleneck, it often suffices to just examine that particular bit of code more closely. Just picking the low-hanging fruit will usually be more than enough to fix the bottleneck.1

Create scopes faster2

I saw in the profiler that creating the ExpressionContext had gotten considerably slower. Here's the code in the constructor.

foreach (var value in values.Where(v => v != null))
{
  Add(value);
}

I saw a few potential problems immediately.

  • The call to Add() had gotten more expensive in order to support returning the most appropriate object from the GetInstances() method
  • The Linq expression had replaced a call to AddRange()

The faster version is below:

var scope = CurrentScope;
for (var i = 0; i < values.Length; i++)
{
  var value = values[i];
  if (value != null)
  {
    scope.AddUnnamed(value);
  }
}

Why is this version faster? The code now uses the fact that we know we're dealing with an indexable list to avoid allocating an enumerator and to use non-allocating means of checking null. While the Linq code is highly optimized, a for loop is always going to be faster because it's guaranteed not to allocate anything. Furthermore, we now call AddUnnamed() to use the faster registration method because the more involved method is never needed for these objects.

The optimized version is less elegant and harder to read, but it's not terrible. Still, you should use these techniques only if you can prove that they're worth it.

Optimizing CurrentScope

Another minor improvement is that the call to retrieve the scope is made only once regardless of how many objects are added. On the one hand, we might expect only a minor improvement since we noted above that most use cases only ever add one object anyway. On the other, however, we know that we call the constructor 20 million times in at least one test, so it's worth examining.

The call to CurrentScope gets the last element of the list of scopes. Even something as innocuous as calling the Linq extension method Last() can get more costly than it needs to be when your application calls it millions of times. Microsoft has, of course, decorated its Linq calls with all sorts of compiler hints for inlining and, if you decompile, you can see that the method itself checks whether the target of the call is a list and uses indexing -- but it's still slower. There is still an extra stack frame (unless inlined) and there is still a type check with as.

Replacing a call to Last() with getting the item at the index of the last position in the list is not recommended in the general case. However, making that change in a provably performance-critical area shaved a percent or two off a test run that takes about 45 minutes. That's not nothing.

// Before: uses the Linq extension method Last()
protected IScope CurrentScope
{
  get { return _scopes.Last(); }
}

// After: indexes the last element directly
protected IScope CurrentScope
{
  get { return _scopes[_scopes.Count - 1]; }
}

That takes care of the creation & registration side, where I noticed a slowdown when creating the millions of ExpressionContext objects needed by the data driver in our product's test suite.

Get objects faster

Let's now look at the evaluation side, where objects are requested from the context.

The offending, slow code is below:

public IEnumerable<TService> GetInstances<TService>()
{
  var serviceType = typeof(TService);
  var rawNameMatch = this[serviceType.FullName];

  var memberMatches = All.OfType<TService>();
  var namedMemberMatches = NamedMembers.Select(
    item => item.Value
  ).OfType<TService>();

  if (rawNameMatch != null)
  {
    var nameMatch = (TService)rawNameMatch;

    return
      nameMatch
      .ToSequence()
      .Union(namedMemberMatches)
      .Union(memberMatches)
      .Distinct(ReferenceEqualityComparer<TService>.Default);
  }

  return namedMemberMatches.Union(memberMatches);
}

As you can readily see, this code isn't particularly concerned with performance. It is, however, relatively easy to read and to follow the logic behind which objects are returned. As long as no-one really needs this code to be fast -- if it's not called that often and not called in tight loops -- that doesn't matter. What matters more is legibility and maintainability.

But we now know that we need to make it faster, so let's focus on the most-likely use cases. I know the following things:

  • Almost all Scope instances are created with a single object in them and no other objects are ever added.
  • Almost all object-retrievals are made on such single-object scopes
  • Though the scope should be able to return all matching instances, sorted by the rules laid out in the requirements, all existing calls get the FirstOrDefault() object.

These extra bits of information will allow me to optimize the already-correct implementation to be much, much faster for the calls that we're likely to make.

The optimized version is below:

public IEnumerable<TService> GetInstances<TService>()
{
  var members = _members;

  if (members == null)
  {
    yield break;
  }

  if (members.Count == 1)
  {
    if (members[0] is TService)
    {
      yield return (TService)members[0];
    }

    yield break;
  }

  object exactTypeMatch;
  if (TypedMembers.TryGetValue(typeof(TService), out exactTypeMatch))
  {
    yield return (TService)exactTypeMatch;
  }

  foreach (var member in members.OfType<TService>())
  {
    if (!ReferenceEquals(member, exactTypeMatch))
    {
      yield return member;
    }
  }
}

Given the requirements, the handful of use cases and decent naming, you should be able to follow what's going on above. The code contains many more escape clauses for common and easily handled conditions, handling them in an allocation-free manner wherever possible.

  1. Handle empty case
  2. Handle single-element case
  3. Return exact match
  4. Return all other matches3

You'll notice that returning a value added by-name is not a requirement and has been dropped. Improving performance by removing code for unneeded requirements is a perfectly legitimate solution.

Test Results

And, finally, how did we do? I created tests for the following use cases:

  • Create scope with multiple objects
  • Get all matching objects in an empty scope
  • Get first object in an empty scope
  • Get all matching objects in a scope with a single object
  • Get first object in a scope with a single object
  • Get all matching objects in a scope with multiple objects
  • Get first object in a scope with multiple objects

Here are the numbers from the automated tests.

image

image

  • Create scope with multiple objects -- 12x faster
  • Get all matching objects in an empty scope -- almost 2.5x faster
  • Get first object in an empty scope -- almost 3.5x faster
  • Get all matching objects in a scope with a single object -- over 3x faster
  • Get first object in a scope with a single object -- over 3.25x faster
  • Get all matching objects in a scope with multiple objects -- almost 3x faster
  • Get first object in a scope with multiple objects -- almost 2.25x faster

This looks amazing but remember: while the optimized solution may be faster than the original, all we really know is that we've just managed to claw our way back from the atrocious performance characteristics introduced by a recent change. We expect to see vast improvements versus a really slow version.

Since I know that these calls showed up as hotspots and were made millions of times in the test, the performance improvement shown by these tests is enough for me to deploy a pre-release of Quino via TeamCity, upgrade my product to that version and run the tests again. Wish me luck4



  1. The best approach at this point is to create issues for the other performance investigations you could make. For example, I opened an issue called Optimize allocations in the data handlers (start with IExpressionContexts), documented everything I had analyzed and quickly got back to the issue on which I'd started.

  2. For those with access to the Quino Git repository, the diffs shown below come from commit a825d5030ce6f65a452e1db85a308e1351288b96.

  3. If you're following along very, very carefully, you'll recall at this point that the requirement stated above is that objects are returned in LIFO order. The faster version of the code returns objects in FIFO order. You can't tell that the original, slow version did guarantee LIFO ordering, but only because the call to get All members contained a hidden call to the Linq call Reverse(), which slowed things down even more! I removed the call to reverse all elements because (A) I don't actually have any tests for the LIFO requirement nor (B) do I have any other code that expects it to happen. I wasn't about to make the code even more complicated and possibly slower just to satisfy a purely theoretical requirement. That's the kind of behavior that got me into this predicament in the first place.

  4. Spoiler alert: it worked. ;-) The fixes cut the testing time from about 01:30 to about 01:10 for all tests on the build server, so we won back the lost 25%.

Profiling: that critical 3% (Part I)

An oft-quoted bit of software-development sagacity is

Premature optimization is the root of all evil.

As is so often the case with quotes -- especially those on the Internet1 -- this one has a slightly different meaning in context. The snippet above invites developers to overlook the word "premature" and interpret the received wisdom as "you don't ever need to optimize."

Instead, Knuth's full quote actually tells you how much of your code is likely to be affected by performance issues that matter (highlighted below).

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

An Optimization Opportunity in Quino2

In other articles, I'd mentioned that we'd upgraded several solutions to Quino 2 in order to test that the API was solid enough for a more general release. One of these products is both quite large and has a test suite of almost 1500 tests. The product involves a lot of data-import and manipulation and the tests include several scenarios where Quino is used very intensively to load, process and save data.

These tests used to run in a certain amount of time, but started taking about 25% longer after the upgrade to Quino 2.

Measuring Execution Speed

Before doing anything else -- making educated guesses as to what the problem could be, for example -- we measure. At Encodo, we use JetBrains DotTrace to collect performance profiles.

There is no hidden secret: the standard procedure is to take a measurement before and after the change and to compare them. However, so much had changed from Quino 1.13 to Quino 2 -- e.g. namespaces and type names had changed -- that while DotTrace was able to show some matches, the comparisons were not as useful as usual.

A comparison between codebases that hadn't changed so much is much easier, but I didn't have that luxury.

Tracking the Problem

Even excluding the less-than-optimal comparison, it was an odd profile. Ordinarily, one or two issues stick out right away, but the slowness seemed to suffuse the entire test run. Since the direct profiling comparison was difficult, I downloaded test-speed measurements as CSV from TeamCity for the product where we noticed the issue.

How much slower, you might ask? The test that I looked at most closely took almost 4 minutes (236,187ms) in the stable version, but took 5:41 in the latest build.

image

This test was definitely one of the largest and longest tests, so it was particularly impacted. Most other tests that imported and manipulated data ranged anywhere from 10% to 30% slower.

When I looked for hot-spots, the profile unsurprisingly showed me that database access took up the most time. The issue was more subtle: while database-access still used the most time, it was using a smaller percentage of the total time. Hot-spot analysis wasn't going to help this time. Sorting by absolute times and using call counts in the tracing profiles yielded better clues.

The tests were slower when saving and also when loading data. But I knew that the ORM code itself had barely changed at all. And, since the product was using Quino so heavily, the stack traces ran quite deep. After a lot of digging, I noticed that creating the ExpressionContext to hold an object while evaluating expressions locally seemed to be taking longer than before. This was my first, real clue.

Once I was on the trail, I found that when evaluating calls (getting objects) that used local evaluation, it was also always slower.

Don't Get Distracted

image

Once you start looking for places where performance is not optimal, you're likely to start seeing them everywhere. However, as noted above, 97% of them are harmless.

To be clear, we're not optimizing because we feel that the framework is too slow but because we've determined that the framework is now slower than it used to be and we don't know why.

Even after we've finished restoring the previous performance (or maybe even making it a little better), we might still be able to easily optimize further, based on other information that we gleaned during our investigation.

But we want to make sure that we don't get distracted and start trying to FIX ALL THE THINGS instead of just focusing on one task at a time. While it's somewhat disturbing that we seem to be creating 20 million ExpressionContext objects in a 4-minute test, that is also how we've always done it, and no-one has complained about the speed up until now.

Sure, if we could reduce that number to only 2 million, we might be even faster3, but the point is that we used to be faster on the exact same number of calls -- so fix that first.

A Likely Culprit: Scope

I found a likely candidate in the Scope class, which implements the IScope interface. This type is used throughout Quino, but the two use-cases that affect performance are:

  1. As a base for the ExpressionContext, which holds the named values and objects to be used when evaluating the value of an IExpression. These expressions are used everywhere in the data driver.
  2. As a base for the poor-man's IOC used in Stage 2 of application execution.4

The former usage has existed unchanged for years; its implementation is unlikely to be the cause of the slowdown. The latter usage is new and I recall having made a change to the semantics of which objects are returned by the Scope in order to make it work there as well.

How could this happen?

You may already be thinking: smooth move, moron. You changed the behavior of a class that is used everywhere for a tacked-on use case. That's definitely a valid accusation to make.

In my defense, my instinct is to reuse code wherever possible. If I already have a class that holds a list of objects and gives me back the object that matches a requested type, then I will use that. If I discover that the object that I get back isn't as predictable as I'd like, then I improve the predictability of the API until I've got what I want. If the improvement comes at no extra cost, then it's a win-win situation. However, this time I paid for the extra functionality with degraded performance.

Where I really went wrong was that I'd made two assumptions:

  1. I assumed that all other usages were also interested in improved predictability.
  2. I assumed that all other usages were not performance-critical. When I wrote the code you'll see below, I distinctly remember thinking: it's not fast, but it'll do and I'll make it faster if it becomes a problem. Little did I know how difficult it would be to find the problem.

Preventing future slippage

Avoid changing a type shared by different systems without considering all stakeholder requirements.

I think a few words on process here are important. Can we improve the development process so that this doesn't happen again? One obvious answer would be to avoid changing a type shared by different systems without considering all stakeholder requirements. That's a pretty tall order, though. Including this in the process will most likely lead to less refactoring and improvement out of fear of breaking something.

We discussed above how completely reasonable assumptions and design decisions led to the performance degradation. So we can't be sure it won't happen again. What we would like, though, is to be notified quickly when there is performance degradation, so that it appears as a test failure.

Notify quickly when there is performance degradation

Our requirements are captured by tests. If all of the tests pass, then the requirements are satisfied. Performance is a non-functional requirement. Where we could improve Quino is to include high-level performance tests that would sound the alarm the next time something like this happens.5
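One very simple form such a test could take is sketched below, using NUnit. The fixture name, iteration count and time budget are invented for illustration, and the ExpressionContext constructor call just mirrors the single-object usage discussed in part II; a real implementation would want a more robust measurement than raw wall-clock time.

using System.Diagnostics;
using NUnit.Framework;

[TestFixture]
public class ScopePerformanceTests
{
  [Test]
  public void CreateManyExpressionContexts()
  {
    var stopwatch = Stopwatch.StartNew();

    for (var i = 0; i < 1000000; i++)
    {
      // Mirrors the dominant usage: a context created around a single object.
      var context = new ExpressionContext(new object());
    }

    stopwatch.Stop();

    // Sound the alarm if creation blows through the agreed-upon budget.
    Assert.That(stopwatch.ElapsedMilliseconds, Is.LessThan(2000));
  }
}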

Enough theory: in part II, we'll describe the problem in detail and take a crack at improving the speed. See you there.



  1. In fairness, the quote is at least properly attributed. It really was Donald Knuth who wrote it.

  2. By "opportunity", of course, I mean that I messed something up that made Quino slower in the new version.

  4. See the article Quino 2: Starting up an application, in detail for more information on this usage.

  5. I'm working on this right now, in issue Add standard performance tests for release 2.1.