A tuple-inference bug in the Swift 3.0.1 compiler

I encountered some curious behavior while writing a service-locator interface (protocol) in Swift. I've reproduced the issue in a stripped-down playground1 and am almost certain I've found a bug in the Swift 3.0.1 compiler included in Xcode 8.2.1.

A Simple, Generic Function

We'll start off with a very basic example, shown below.

image

The example above shows a very simple function, generic in its single parameter with a required argument label a:. As expected, the compiler determines the generic type T to be Int.

I'm not a big fan of argument labels for such simple functions, so I like to use the _ to free the caller from writing the label, as shown below.

image

As you can see, the result of calling the function is unchanged.

Or Maybe Not So Simple?

Let's try calling the function with some other combinations of parameters and see what happens.

image

If you're coming from another programming language, it might be quite surprising that the Swift compiler happily compiles every single one of these examples. Let's take them one at a time.

  • int: This works as expected.
  • odd: This is the call that I experienced in my original code. At the time, I was utterly mystified that Swift -- a supposedly very strictly typed language -- allowed me to call a function declared with a single parameter using two arguments. This example's output makes it more obvious what's going on here: Swift interpreted the two arguments as a tuple. Is that correct, though? Are the parentheses allowed to serve double-duty both as part of the function-call expression and as part of the tuple expression? (A short C# contrast follows this list.)
  • tuple: With two sets of parentheses, it's clear that the compiler interprets T as tuple (Int, Int).
  • labels: The issue with double-duty parentheses isn't limited to anonymous tuples. The compiler treats what looks like two labeled function-call parameters as a tuple with two Ints labeled a: and b:.
  • nestedTuple: The compiler seems to be playing fast and loose with parentheses inside of a function call. The compiler sees the same type for the parameter with one, two and three sets of parentheses.2 I would have expected the type to be ((Int, Int)) instead.
  • complexTuple: As with tuple, the compiler interprets the type for this call correctly.
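
For comparison -- purely as an illustration, not part of the original playground -- here is how the same shape plays out in C#, where a generic function declared with a single parameter rejects a second argument outright and a tuple has to be constructed explicitly:

using System;

static class TupleContrast
{
  // Analogous to the Swift test() function: one generic parameter.
  static string Test<T>(T a)
  {
    return typeof(T).Name;
  }

  static void Main()
  {
    Console.WriteLine(Test(1));                  // "Int32"
    // Console.WriteLine(Test(1, 2));            // error CS1501: no overload for method 'Test' takes 2 arguments
    Console.WriteLine(Test(Tuple.Create(1, 2))); // "Tuple`2" -- the tuple must be built explicitly
  }
}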

Narrowing Down the Issue

The issue with double-duty parentheses seems to be limited to function calls without argument labels. When I changed the function definition to require a label, the compiler choked on all of the calls, as expected. To fix the problem, I added the argument label for each call and you can see the results below.

image

  • int: This works as expected.
  • odd: With an argument label, instead of inferring the tuple type (Int, Int), the compiler correctly binds the label to the first parameter 1. The second parameter 2 is marked as an error.
  • tuple: With two sets of parentheses, it's clear that the compiler interprets T as tuple (Int, Int).
  • labels: This example behaves the same as odd, with the second parameter b: 2 flagged as an error.
  • nestedTuple: This example works the same as tuple, with the compiler ignoring the extra set of parentheses, as it did without an argument label.
  • complexTuple: As with tuple, the compiler interprets the type for this call correctly.

Swift Grammar

I claimed above that I was pretty sure that we're looking at a compiler bug here. I took a closer look at the productions for tuples and functions defined in The Swift Programming Language (Swift 3.0.1) manual available from Apple.

First, let's look at tuples:

image

As expected, a tuple expression is created by surrounding zero or more comma-separated expressions (with optional identifiers) in parentheses. I don't see anything about folding parentheses in the grammar, so it's unclear why (((1))) produces the same type as (1). Using parentheses makes it a bit difficult to see what's going on with the types, so I'm going to translate to C# notation.

  • () => empty tuple3
  • (1) => Tuple<int>
  • ((1)) => Tuple<Tuple<int>>
  • ...and so on.

This seems to be a separate issue from the second, but opposite, problem: instead of ignoring parentheses, the compiler allows one set of parentheses to simultaneously denote the argument clause of a single-arity function call and an argument of type Tuple encompassing all parameters.

A look at the grammar of a function call shows that the parentheses are required.

image

Nowhere did I find anything in the grammar that would allow the kind of folding I observed in the compiler, as shown in the examples above. I'm honestly not sure how that would be indicated in grammar notation.

Conclusion

Given how surprising the result is, I can't imagine this is anything but a bug. Even if it can be shown that the Swift compiler is correctly interpreting these cases, it's confusing that the type-inference is different with and without labels.


func test<T>(_ a: T) -> String
{
  return String(describing: type(of: T.self))
}

var int = test(1)
var odd = test(1, 2)
var tuple = test((1, 2))
var labels = test(a: 1, b: 2)
var nestedTuple = test((((((1, 2))))))
var complexTuple = test((1, (2, 3)))

  1. The Xcode playground is a very decent REPL for this kind of example. The code I used is listed above, if you want to play around on your own.

  2. I didn't include the examples, but the type is unchanged with four, five and six sets of parentheses. The compiler treats them as semantically irrelevant, though the Swift grammar doesn't allow for this, as far as I could tell from the BNF in the official manual.

  3. This is apparently legal in Swift, but I can't divine its purpose in an actual program.

Two more presentations: Web tools & Quino upgrade

Check out two new talks on our web site:

Networking Event: How Encodo builds web applications

At our last networking event, Urs presented our latest tech stack. We've been working productively with this stack for most of this year and feel we've finally stabilized on something we can use for a while. Urs discusses the technologies and libraries (TypeScript, Less, React, MobX) as well as tools (Visual Studio Code, WebStorm).

Quino: from 1.13 to 4.x

Since Quino 1.13 came out in December of 2014, we've come a long way. This presentation shows just how far we've come and provides customers with information about the many, many improvements as well as a migration path.

Thoughts on .NET Standard 2.0


Microsoft recently published a long blog article Introducing .NET Standard. The author Immo Landwerth appeared on a weekly videocast called The week in .NET to discuss and elaborate. I distilled all of this information into a presentation for Encodo's programmers and published it to our web site, TechTalk: .NET Standard 2.0. I hope it helps!

Also, Sebastian has taken a Tech Talk that he did for a networking event earlier this year, Code Review Best Practices, on the road to Germany, as Die Wahrheit über Code Reviews: So klappt's!

Tabs vs. Spaces ... and how many?

Encodo has long been a two-space indent shop. Section 4.1 of the Encodo C# Handbook writes that "[a]n indent is two spaces; it is never a tab.", even though "[t]he official C# standard [...] is four spaces." and that, should you have a problem with that, you should "deal with it."

Although we use our own standards by default, we use a customer's standards if they've defined their own. A large part of our coding is now done with four spaces. Some of us have gotten so accustomed to this that four spaces started looking better than two. That, combined with recent publicity for the topic1, led me to ask the developers at Encodo what they thought.

  • Urs was open to the idea of using tabs because then "everyone can use whatever he likes and we can avoid the unnecessary discussion about the ideal value (why does it have to be an even value? I want 3 spaces!!!) Further, we might be able to save some disk space ;)"
  • Sebastian was emphatically not open to the idea of tabs because "Tabs is just a lie. There are never only tabs. I've seen multiple projects with tabs, there are always spaces as well and this breaks the formatting depending on your settings."
  • Wadim pointed out that "the tab key produces a character that is used for indentation" -- heavily hinting that people who use spaces are doing it wrong -- and then backed up Urs by suggesting 3 spaces per tab.
  • Fabi cited Death to the Space Infidels! by Jeff Atwood, "What does matter is that you, and everyone else on your team, sticks with those conventions and uses them consistently," then expressed a preference for two spaces, but agreeing that four might be easier since that's the standard used by other companies.
  • Remo backed up Sebastian in saying that tabs are bad, writing that "I have worked on projects where we tried to use tabs. But this always ended up in chaos somehow." Two or four is fine -- the longer you work with one, the odder the other one looks. "Personally I think using 2 or 4 spaces takes some time getting used to it. After that, both are well suited to read code with a slight advantage for 4 spaces because the "column" widths are wider and it's therefore easier to find the closing braces when scanning vertically (our screens are really wide - so the loss of valuable space is no longer an argument)."
  • Pascal was along the same lines as Fabi. He made a good point for spaces, writing "I personally prefer spaces since it takes the whole configuration in all the tools out of the picture at once."
  • Robin also pleaded for consistency above all, writing "I like tabs more" and "I'm used to a width of 2 spaces".
  • Marco sees the advantage of tabs for customization, but understands that it will probably lead to time wasted converting whitespace. He's accustomed to 2 spaces and Encodo has a ton of code with two spaces. Although Fabi says he sees a lot of code with four-space indents, Marco's seen a lot of code with two-space indents.

So, with the rewrite of the Encodo C# Handbook in progress, what will be our recommendation going forward?

Let's summarize the opinions above:

  • Consistency is paramount (Fabi, Pascal, Robin,...pretty much everyone)
  • Using tabs has, in the past, inevitably led to a mix of tabs and spaces (Marco, Sebastian, Remo)
  • An indent of 3 spaces would be nice (Urs, Wadim)
  • Most of the others prefer either a four-space or a two-space indent, while some don't really care either way. Nobody wants eight2.

So, we have emphatic arguments against switching to tabs instead of spaces. Although there are good arguments for a 4-space indent, there are also strong arguments for a 2-space indent. There's no real pressure to switch the indent.

Encodo's 2009 recommendation stands: we indent with two spaces. Deal with it.3


# EditorConfig is awesome: http://EditorConfig.org

# top-most EditorConfig file
root = true

# Two-space indents for all files
[*]
indent_style = space
indent_size = 2

  1. If you're watching Silicon Valley, then you probably already know what prompted this discussion. The most recent episode had Richard of Pied Piper break off a relationship with a girl because she uses spaces instead of tabs.

  2. As Richard of Pied Piper recommended, which is just insanity.

  3. We use the EditorConfig plugin with all of our IDEs to keep settings for different solutions and products set correctly. The config file for Quino is shown above.

ABD: Improving the Aspect-modeling API for Quino

Overview

We discussed ABD in a recent article ABD: Refactoring and refining an API. To cite from that article,

[...] the most important part of code is to think about how you're writing it and what you're building. You shouldn't write a single line without thinking of the myriad ways in which it must fit into existing code and the established patterns and practices.

With that in mind, I saw another teaching opportunity this week and wrote up my experience designing an improvement to an existing API.

Requirements

Before we write any code, we should know what we're doing.1

  • We use aspects (IMetaAspects) in Quino to add domain-specific metadata (e.g. the IVisibleAspect controls element visibility)
  • Suppose we have such an aspect with properties A1...AN. When we set property A1 to a new value, we want to retain the values of properties A2...AN (i.e. we don't want to discard previously set values)
  • The current pattern is to call FindOrAddAspect(). This method does what it advertises: If an aspect with the requested type already exists, it is returned; otherwise, an instance of that type is created, added and returned. The caller gets an instance of the requested type (e.g. IVisibleAspect).
  • Any properties on the requested type that you want to change must have setters.
  • If the requested type is an interface, then we end up defining our interface as mutable.
  • Other than when building the metadata, every other use of these interfaces should not make changes.
  • We would like to be able to define the interface as read-only (no setters) and make the implementation mutable (has setters). Code that builds the metadata uses both the interface and the implementation type. (A minimal sketch of this split follows the list.)
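
To make that last requirement concrete, here is a minimal sketch of the read-only-interface/mutable-implementation split. IVisibleAspect is a real Quino aspect, but the Visible property and its exact shape here are assumptions for illustration only:

// Read-only contract: this is all that ordinary consumers of the metadata see.
// (Assumes aspects derive from IMetaAspect, as described above; the Visible
// property is illustrative.)
public interface IVisibleAspect : IMetaAspect
{
  bool Visible { get; }
}

// Mutable implementation: only code that builds the metadata uses this type.
public class VisibleAspect : IVisibleAspect
{
  public bool Visible { get; set; }
}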

Although we're dealing concretely with aspects in Quino metadata, the pattern and techniques outlined below apply equally well to other, similar domains.

The current API

A good example is the IClassCacheAspect. It exposes five properties, four of which are read-only. You can modify only one property (OrderOfMagnitude) through the interface. This is already not good, as we are forced to work with the implementation type in order to change any property other than OrderOfMagnitude.

The current way to address this issue would be to make all of the properties settable on the interface. Then we could use the FindOrAddAspect() method with the IClassCacheAspect. For example,

var cacheAspect = 
  Element.Classes.Person.FindOrAddAspect<IClassCacheAspect>(
    () => new ClassCacheAspect()
  );
cacheAspect.OrderOfMagnitude = 7;
cacheAspect.Capacity = 1000;

For comparison, if the caller were simply creating the aspect instead of getting a possibly-already-existing version, then it would just use an object initializer.

var cacheAspect = Element.Classes.Person.Aspects.Add(
  new ClassCacheAspect()
  {
    OrderOfMagnitude = 7,
    Capacity = 1000
  }
);

This works nicely for creating the initial aspect. But it causes an error if an aspect of that type has already been added. Can we design a single method with all the advantages?

The new API

A good way to approach a new API is to ask: How would we want the method to look if we were calling it?

Element.Classes.Person.SetCacheAspectValues(
  a =>
  {
    a.OrderOfMagnitude = 7;
    a.Capacity = 1000;
  }
);

If we only want to change a single property, we can use a one-liner:

Element.Classes.Person.SetCacheAspectValues(a => a.Capacity = 1000);

Nice. That's even cleaner and has fewer explicit dependencies than creating the aspect ourselves.

Making it work for one aspect type

Now that we know what we want the API to look like, let's see if it's possible to provide it. We request an interface from the list of aspects but want to use an implementation to set properties. The caller has to indicate how to create the instance if it doesn't already exist, but what if it does exist? We can't just cast it to the implementation type because there is no guarantee that the existing aspect is the same implementation.

These are relatively lightweight objects and the requirement above is that the property values on the existing aspect are set on the returned aspect, not that the existing aspect is preserved.

What if we just provided a mechanism for copying properties from an existing aspect onto the new version?

var cacheAspect = new ClassCacheAspect();
var existingCacheAspect =
  Element.Classes.Person.Aspects.FirstOfTypeOrDefault<IClassCacheAspect>();
if (existingCacheAspect != null)
{
  cacheAspect.OrderOfMagnitude = existingCacheAspect.OrderOfMagnitude;
  cacheAspect.Capacity = existingCacheAspect.Capacity;
  // Set all other properties
}

// Set custom values
cacheAspect.OrderOfMagnitude = 7;
cacheAspect.Capacity = 1000;

This code does exactly what we want and doesn't require any setters on the interface properties. Let's pack this away into the API we defined above. The extension method is:

public static ClassCacheAspect SetCacheAspectValues(
  this IMetaClass metaClass,
  Action<ClassCacheAspect> setValues)
{
  var result = new ClassCacheAspect();
  var existingCacheAspect =
    metaClass.Aspects.FirstOfTypeOrDefault<IClassCacheAspect>();
  if (existingCacheAspect != null)
  {
    result.OrderOfMagnitude = existingCacheAspect.OrderOfMagnitude;
    result.Capacity = existingCacheAspect.Capacity;
    // Set all other properties
  }

  setValues(result);

  return result;
}

So that takes care of the boilerplate for the IClassCacheAspect. It hard-codes the implementation to ClassCacheAspect, but let's see how big a restriction that is once we've generalized below.

Generalize the aspect type

We want to see if we can do anything about generalizing SetCacheAspectValues() to work for other aspects.

Let's first extract the main body of logic and generalize the aspects.

public static TConcrete SetAspectValues<TService, TConcrete>(
  this IMetaClass metaClass,
  Action<TConcrete, TService> copyValues,
  Action<TConcrete> setValues
)
  where TConcrete : TService, new()
  where TService : IMetaAspect
{
  var result = new TConcrete();
  var existingAspect = metaClass.Aspects.FirstOfTypeOrDefault<TService>();
  if (existingAspect != null)
  {
    copyValues(result, existingAspect);
  }

  setValues(result);

  return result;
}

Remove constructor restriction

This isn't bad, but we've required that the TConcrete parameter implement a default constructor. Instead, we could require an additional parameter for creating the new aspect.

public static TConcrete SetAspectValues<TService, TConcrete>(
  this IMetaClass metaClass,
  Func<TConcrete> createAspect,
  Action<TConcrete, TService> copyValues,
  Action<TConcrete> setValues
)
  where TConcrete : TService
  where TService : IMetaAspect
{
  var result = createAspect();
  var existingAspect = metaClass.Aspects.FirstOfTypeOrDefault<TService>();
  if (existingAspect != null)
  {
    copyValues(result, existingAspect);
  }

  setValues(result);

  return result;
}

Just pass in the new aspect to use

Wait, wait, wait. We not only don't need the new() generic constraint, we also don't need the createAspect lambda parameter, do we? Can't we just pass in the object instead of passing in a lambda to create the object and then calling it immediately?

public static TConcrete SetAspectValues<TService, TConcrete>(
  this IMetaClass metaClass,
  TConcrete aspect,
  Action<TConcrete, TService> copyValues,
  Action<TConcrete> setValues
)
  where TConcrete : TService
  where TService : IMetaAspect
{
  var existingAspect = metaClass.Aspects.FirstOfTypeOrDefault<TService>();
  if (existingAspect != null)
  {
    copyValues(aspect, existingAspect);
  }

  setValues(aspect);

  return aspect;
}

That's a bit more logical and intuitive, I think.

Redefine original method

We can now redefine our original method in terms of this one:

public static ClassCacheAspect SetCacheAspectValues(
  this IMetaClass metaClass,
  Action<ClassCacheAspect> setValues)
{
  return metaClass.SetAspectValues<IClassCacheAspect, ClassCacheAspect>(
    new ClassCacheAspect(),
    (aspect, existingAspect) =>
    {
      aspect.OrderOfMagnitude = existingAspect.OrderOfMagnitude;
      aspect.Capacity = existingAspect.Capacity;
      // Set all other properties
    },
    setValues
  );
}

Generalize copying values

Can we somehow generalize the copying behavior? We could make a wrapper that expects an interface on the TService that would allow us to call CopyFrom(existingAspect).

public static TConcrete SetAspectValues<TService, TConcrete>(
  this IMetaClass metaClass,
  TConcrete aspect,
  Action<TConcrete> setValues
)
  where TConcrete : TService, ICopyTarget
  where TService : IMetaAspect
{
  return metaClass.SetAspectValues<TService, TConcrete>(
    aspect,
    (newAspect, existingAspect) => newAspect.CopyFrom(existingAspect),
    setValues
  );
}

What does the ICopyTarget interface look like?

public interface ICopyTarget
{
  void CopyFrom(object other);
}

This is going to lead to type-casting code at the start of every implementation to make sure that the other object is the right type. We can avoid that by using a generic type parameter instead.
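
To see what that boilerplate would look like, here is a sketch of the cast check the non-generic interface would force onto each implementation (using the ClassCacheAspect example from above):

public void CopyFrom(object other)
{
  // Every implementation has to start by checking and casting the incoming object.
  var otherAspect = other as IClassCacheAspect;
  if (otherAspect == null)
  {
    throw new ArgumentException("Expected an IClassCacheAspect", "other");
  }

  OrderOfMagnitude = otherAspect.OrderOfMagnitude;
  Capacity = otherAspect.Capacity;
  // Set all other properties
}

The generic version, shown next, avoids the cast entirely.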

public interface ICopyTarget<T>
{
  void CopyFrom(T other);
}

That's better. How would we use it? Here's the definition for ClassCacheAspect:

public class ClassCacheAspect : IClassCacheAspect, ICopyTarget<IClassCacheAspect>
{
  public void CopyFrom(IClassCacheAspect otherAspect)
  {
    OrderOfMagnitude = otherAspect.OrderOfMagnitude;
    Capacity = otherAspect.Capacity;
    // Set all other properties
  }
}

Since the final version of ICopyTarget has a generic type parameter, we need to adjust the extension method. But that's not a problem because we already have the required generic type parameter in the outer method.

public static TConcrete SetAspectValues<TService, TConcrete>(
  this IMetaClass metaClass,
  TConcrete aspect,
  Action<TConcrete> setValues
)
  where TConcrete : TService, ICopyTarget<TService>
  where TService : IMetaAspect
{
  return metaClass.SetAspectValues<TService, TConcrete>(
    aspect,
    (newAspect, existingAspect) => newAspect.CopyFrom(existingAspect),
    setValues
  );
}

Final implementation

Assuming that ClassCacheAspect implements ICopyTarget as shown above, we can rewrite the cache-specific extension method to use the new extension method for ICopyTargets.

public static ClassCacheAspect SetCacheAspectValues(
  this IMetaClass metaClass,
  Action<ClassCacheAspect> setValues)
{
  return metaClass.SetAspectValues<IClassCacheAspect, ClassCacheAspect>(
    new ClassCacheAspect(),
    setValues
  );
}

This is an extension method, so any caller that wants to use its own IClassCacheAspect could just copy/paste this one line of code and use its own aspect.
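
For example -- with CustomCacheAspect as a purely hypothetical implementation of IClassCacheAspect and ICopyTarget<IClassCacheAspect> -- such a caller-side method might look like this:

public static CustomCacheAspect SetCustomCacheAspectValues(
  this IMetaClass metaClass,
  Action<CustomCacheAspect> setValues)
{
  return metaClass.SetAspectValues<IClassCacheAspect, CustomCacheAspect>(
    new CustomCacheAspect(),
    setValues
  );
}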

Conclusion

This is actually pretty neat and clean:

  • We have a pattern where all properties on the interface are read-only
  • We have a pattern where an aspect can indicate how its values are to be copied from another instance. This is basically boilerplate, but must be written only once per aspect -- and it can be located right in the implementation itself rather than in an extension method.
  • A caller building metadata passes in a single lambda to set values. Existing values are handled automatically.
  • Adding support for more aspects is straightforward and involves very little boilerplate.


  1. You would think that would be axiomatic. You'd be surprised.

ABD: Refactoring and refining an API

We've been doing more internal training lately and one topic that we've started to tackle is design for architecture/APIs. Even if you're not officially a software architect -- designing and building entire systems from scratch -- every developer designs code, on some level.

[A]lways [B]e [D]esigning

There are broad guidelines about how to format and style code, about how many lines to put in a method, about how many parameters to use, and so on. We strive for Clean Code(tm).

But the most important part of code is to think about how you're writing it and what you're building. You shouldn't write a single line without thinking of the myriad ways in which it must fit into existing code and the established patterns and practices.

We've written about this before, in the two-part series called "Questions to consider when designing APIs" (Part I and Part II). Those two articles comprise a long list of aspects of a design to consider.

First make a good design, then compromise to fit project constraints.

Your project defines the constraints under which you can design. That is, we should still have our designer caps on, but the options available are much more strictly limited.

But, frustrating as that might be, it doesn't mean you should stop thinking. A good designer figures out what would be optimal, then adjusts the solution to fit the constraints. Otherwise, you'll forget what you were compromising from -- and your design skills either erode or never get better.

We've been calling this concept ABD -- Always Be Designing.1 Let's take a closer, concrete look, using a recent issue in the schema migration for Quino. Hopefully, this example illustrates how even the tiniest detail is important.2

A bug in the schema migrator

We detected the problem when the schema migration generated an invalid SQL statement.

ALTER TABLE "punchclock__timeentry" ALTER COLUMN "personid" SET DEFAULT ;

As you can see, the default value is missing. It seems that there are situations where the code that generates this SQL is unable to correctly determine that a default value could not be calculated.

The code that calculates the default value is below.

result = Builder.GetExpressionPayload(
  null,
  CommandFormatHints.DefaultValue,
  new ExpressionContext(prop),
  prop.DefaultValueGenerator
);

To translate, there is a Builder that produces a payload. We're using that builder to get the payload (SQL, in this case) that corresponds to the DefaultValueGenerator expression for a given property, prop.

This method is an extension method of the IDataCommandBuilder, reproduced below in full, with additional line-breaks for formatting:

public static string GetExpressionPayload<TCommand>(
  this IDataCommandBuilder<TCommand> builder,
  [CanBeNull] TCommand command,
  CommandFormatHints hints, 
  IExpressionContext context,
  params IExpression[] expressions)
{
  if (builder == null) { throw new ArgumentNullException("builder"); }
  if (context == null) { throw new ArgumentNullException("context"); }
  if (expressions == null) { throw new ArgumentNullException("expressions"); }

  return builder.GetExpressionPayload(
    command,
    hints,
    context,
    expressions.Select(
      e => new ExecutableQueryItem<IExecutableExpression>(new ExecutableExpression(e))
    )
  );
}

This method does no more than to package each item in the expressions parameter in an ExecutableQueryItem and call the interface method.

The problem isn't immediately obvious. It stems from the fact that each ExecutableQueryItem can be marked as Handled. The extension method ignores this feature, and always returns a result. The caller is unaware that the result may correspond to an only partially handled expression.

Is there a quick fix?

Our first instinct is, naturally, to try to figure out how we can fix the problem.3 In the code above, we could keep a reference to the executable items and then check if any of them were unhandled, like so:

var executableItems = expressions.Select(
  e => new ExecutableQueryItem<IExecutableExpression>(new ExecutableExpression(e))
).ToList(); // Materialize the items so we can examine them after the call
var result = builder.GetExpressionPayload(command, hints, context, executableItems);

if (executableItems.Unhandled().Any())
{
  // Now what?
}

return result;

We can detect if at least one of the input expressions could not be mapped to SQL. But we don't know what to do with that information.

  • Do we throw an exception? No, we can't just do that. None of the callers are expecting an exception, so that's an API change.4
  • Do we return null? What can we return to indicate that the input expressions could not be mapped? Here we have the same problem as with throwing an exception: all callers assume that the result can be mapped.

So there's no quick fix. We have to change an API. We have to design.

Part of the result is missing

As with most bugs, the challenge lies not in knowing how to fix the bug, but in how to fix the underlying design problem that led to the bug. The problem is actually not in the extension method, but in the method signature of the interface method.

Instead of a single result, there are actually two results for this method call:

  • Can the given expressions be mapped to a string (the target representation)?
  • If so, what is that text?

Instead of a Get method, this is a classic TryGet method.
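
That is, the same shape as the familiar pattern from the BCL, where the return value reports success and the actual result travels in an out parameter:

int value;
if (int.TryParse("42", out value))
{
  // Only use the value if parsing actually succeeded.
  Console.WriteLine(value);
}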

How to Introduce the Change

If this code is already in production, then you have to figure out how to introduce the bug fix without breaking existing code. If you already have consumers of your API, you can't just change the signature and cause a compile error when they upgrade. You have to decorate the existing method with [Obsolete] and make a new interface method.

So we don't change the existing method and instead add the method TryGetExpressionPayload() to IDataCommandBuilder.

What are the parameters?

Now, let's figure out what the parameters are going to be.

The method called by the extension method above has a slightly different signature.5

string GetExpressionPayload(
  [CanBeNull] TCommand command, 
  CommandFormatHints hints,
  [NotNull] IExpressionContext context,
  [NotNull] IEnumerable<ExecutableQueryItem<IExecutableExpression>> expressions
);

That last parameter is a bit of a bear. What does it even mean? The signature of the extension method deals with simple IExpression objects -- I know what those are. But what are ExecutableQueryItems and IExecutableExpressions?

As an author and maintainer of the data driver, I know that these objects are part of the internal representation of a query as it is processed. But as a caller of this method, I'm almost never going to have a list of these objects, am I?

Let's find out.

Me: Hey, ReSharper, how many callers of that method are there in the entire Quino source?
ReSharper: Just one, Dave.6

So, we defined an API with a signature that's so hairy no-one calls it except through an extension method that makes the signature more palatable. And it introduces a bug. Lovely.

We've now figured out that our new method should accept a sequence of IExpression objects instead of ExecutableQueryItem objects.

How's the signature looking so far?

bool TryGetExpressionPayload(
  [CanBeNull] TCommand command, 
  CommandFormatHints hints,
  [NotNull] IExpressionContext context,
  [NotNull] IEnumerable<IExpression> expressions,
  out string payload
);

Are We Done?

Not quite. There are two things that are still wrong with this signature, both important.

Fix the Result Type

One problem is that the rest of the IDataCommandBuilder<TCommand> deals with a generic payload type and this method only works for builders where the target representation is a string. The Mongo driver, for example, uses MongoStorePayload and MongoRetrievePayload objects instead of strings and throws a NotSupportedException for this API.

That's not very elegant, but the Mongo driver was forced into that corner by the signature. Can we do better? The API would currently require Mongo to always return false because our Mongo driver doesn't know how to map anything to a string. But it could map to one of the aforementioned object representations.

If we change the out parameter type from a string to an object, then any driver, regardless of payload representation, has at least the possibility of implementing this API correctly.

Fix parameters

Another problem is that the order of parameters does not conform to the code style for Encodo.

  • We prefer to place all non-nullable parameters first. Otherwise, a call that passes null as the first parameter looks strange. The command can be null, so it should move after the two non-nullable parameters. If we move it all the way to the end, we can even make it optional.
  • Also, primitives should come after the references. (So hints should be third.)
  • Also, semantically, the call is getting the payload for the expressions not the context. The first parameter should be the target of the method; the rest of the parameters provide context for that input.
  • The original method accepted params IExpression[]. Using params allows a caller to provide zero or more expressions, but it's only allowed on the terminal parameter. Instead, we'll accept an IEnumerable<IExpression>, which is more standard for the Quino library anyway.

The final method signature is below.

bool TryGetExpressionPayload(
  [NotNull] IEnumerable<IExpression> expressions,
  [NotNull] IExpressionContext context,
  CommandFormatHints hints,
  out object payload,
  [CanBeNull] TCommand command = default(TCommand)
);

Our API in Action

The schema migration called the original API like this:

result = Builder.GetExpressionPayload(
  null,
  CommandFormatHints.DefaultValue,
  new ExpressionContext(prop),
  prop.DefaultValueGenerator
);

return true;

The call with the new API -- and with the bug fixed -- is shown below. The only non-functional addition is that we have to call ToSequence() on the first parameter. Happily, though, we've fixed the bug and only include a default value in the field definition if one can actually be calculated.

object payload;
if (Builder.TryGetExpressionPayload(
  prop.DefaultValueGenerator.ToSequence(),
  new ExpressionContext(prop),
  CommandFormatHints.DefaultValue,
  out payload)
)
{
  result = payload as string ?? payload.ToString();

  return true;
}

One More Design Decision...

A good rule of thumb is that if you find yourself explaining something in detail, it might still be too complicated. In that light, the call to ToSequence() is a little distracting.7 It would be nice to be able to map a single expression without having to pack it into a sequence.

So we have one more design decision to make: where do we add that method call? Directly to the interface, right? But the method for a single expression can easily be expressed in terms of the method we already have (as we saw above). It would be a shame if every implementor of the interface was forced to produce this boilerplate.

Since we're using C#, we can instead extend the interface with a static method, as shown below (again, with more line breaks for this article):

public static bool TryGetExpressionPayload<TCommand>(
  [NotNull] this IDataCommandBuilder<TCommand> builder, // Extend the builder
  [NotNull] IExpression expression,
  [NotNull] IExpressionContext context,
  CommandFormatHints hints,
  out object payload,
  [CanBeNull] TCommand command = default(TCommand)
)
{
  return builder.TryGetExpressionPayload(
    expression.ToSequence(),
    context,
    hints,
    out payload,
    command
  );
}

We not only avoided cluttering the interface with another method, but now a caller with a single expression doesn't have to create a sequence for it8, as shown in the final version of the call below.

object payload;
if (Builder.TryGetExpressionPayload(
  prop.DefaultValueGenerator,
  new ExpressionContext(prop),
  CommandFormatHints.DefaultValue,
  out payload)
)
{
  result = payload as string ?? payload.ToString();

  return true;
}

Conclusion

We saw in this post how we always have our designer/architect cap on, even when only fixing bugs. We took a look at a quick-fix and then backed out and realized that we were designing a new solution. Then we covered, in nigh-excruciating detail, our thought process as we came up with a new solution.

Many thanks to Dani for the original design and Sebastian for the review!



  1. This is a bit of a riff on ABC -- Always Be Closing -- as popularized by Alec Baldwin in the movie Glengarry Glen Ross.

  2. Also, understand that it took much longer to write this blog post and itemize each individual step of how we thought about the issue. In reality, we took only a couple of minutes to work through this chain of reasoning and come up with the solution we wanted. It was only after we'd finished designing that I realized that this was a good example of ABD.

  3. Actually, our first instinct is to make sure that there is a failing test for this bug. But, this article deals with how to analyze problems and design fixes, not how to make sure that the code you write is tested. That's super-important, too, though, just so you know. Essential, even.

  4. Even though C# doesn't include the exceptions thrown in the signature of a method, as Java does. The Java version is fraught with issues; see the "Recoverable Errors: Type-Directed Exceptions" chapter of Midori: The Error Model by Joe Duffy for a really nice proposal/implementation of a language feature that includes expected exceptions in the signature of a method.

  5. Which is why we defined the extension method in the first place.

  6. I'm fully aware that my name isn't Dave. It's just what ReSharper calls me. Old-school reference.

  7. This was pointed out, by the way, by a reviewer of this blog post and escaped the notice of both designers and the code-reviewer. API design is neither easy nor is it done on the first try. It's only finished after multiple developers have tried it out. Then, you'll probably be able to live with it.

  8. Most developers would have used new [] { expression }, which I think is kind of ugly.

Networking Event 2016.1

On Wednesday, Encodo had its first networking event of the year. Our very own Sebastian Greulach presented Code Review Best Practices. A bunch of our friends and colleagues from the area showed up for a lively discussion that, together with the presentation, lasted over 90 minutes.

We heard from people working with remote teams -- off- and near-shored -- as well as people working locally in both small and large teams and for small to large companies. We discussed various review styles, from formal to informal to nonexistent as well as the differences in managing and reviewing code for projects versus products. Naturally, we also covered tool support and where automation makes sense and where face-to-face human interaction is still better.

The discussion continued over a nice meal prepared on our outdoor grill. We even had a lot more vegetables this time! Thanks to lovely weather, we were able to spend some time outside and Pascal demonstrated his L337 drone-flying skills -- but even he couldn't save it from a rain gutter when a propeller came off mid-flight.

Thanks to everyone who helped make it happen and thanks to everyone who showed up!

Voxxed Zürich 2016: Notes

This first-ever Voxxed Zürich was hosted at the cinema in the SihlCity shopping center in Zürich on March 3rd. All presentations were in English. The conference was relatively small -- 333 participants -- and largely vendor-free. The overall technical level of the presentations and participants was quite high. I had a really nice time and enjoyed a lot of the presentations.

There was a nice common thread running through all of the presentations, starting with the Keynote. There's a focus on performance and reliability through immutability, sequences, events, actors, delayed execution (lambdas, which are relatively new to Java), instances in the cloud, etc. It sounds very BUZZWORDY, but instead it came across as a very technically polished conference that reminded me of how many good developers there are trying to do the right thing. Looking forward to next year; hopefully Encodo can submit a presentation.

You can take a look at the VoxxedDays Zürich -- Schedule. The talks that I attended are included below, with links to the presentation page, the video on YouTube and my notes and impressions. YMMV.

Keynote: Life beyond the Illusion of the Present

Life beyond the Illusion of the Present -- Jonas Bonér

media

Notes

  • He strongly recommended reading The Network is reliable by Peter Bailis.
  • This talk is about event-driven, CQRS programming.
  • Focus on immutable state, very much like Joe Duffy, etc.; transactional accrual of facts.
  • Never delete data, annotate with more facts.
  • The reality at any point can be calculated for a point in time by aggregating facts up to that point. Like the talk I once wrote up some notes about (Runaway Complexity in Big Data, and a Plan to Stop It by Nathan Marz).
  • Everything else is a performance optimization. Database views, tables are all caches on the transaction log. Stop throwing the log away, though.
  • Define smaller atomic units. Not a whole database. Smaller. Consistency boundary. Services?
  • Availability trumps consistency. Use causal consistency through mechanisms other than time stamps. Local partial better than global.
  • He talked about data-flow programming; fingers crossed that we get some language support in C# 7
  • Akka (Akka.NET) is the main product.

Kotlin - Ready for production

Kotlin - Ready for production -- Hadi Hariri

media

  • Used at JetBrains, open-source. 14k+ users. It's not a ground-breaking language. Scala was the first language they tried (Java already being off the table), but they didn't like it, so they invented Kotlin.

  • Interoperable with Java (of course). Usable from all sorts of systems, but IntelliJ IDEA has first-class support.

  • Much less code, less maintenance. Encapsulates some concepts like "data classes" which do what they're supposed to for DTO definitions.

    • Inferred type on declarations. No nulls. Null-safe by design. Opt-in for nulls.
    • Implicit casts as well
    • Interface delegation
    • Lazy delegation
    • Deconstruction
    • Global infix operators; very expressive
    • Also defaults to/focuses on immutability
    • Algebraic data types/ data flow
    • Anko is statically typed XML views for Android
  • JavaScript target exists and is the focus of work. Replacement for TypeScript?

Reactive Apps with Akka and AngularJS

Reactive Apps with Akka and AngularJS -- Heiko Seeberger

media

  • He strongly recommended reading the reactive manifesto
  • Responsive: timely response / non-functional / also under load / scale up/down/out
  • Resilient: fail early
  • Message-driven: async message-passing is a way of getting reactive/responsive. Automatic decoupling leads to better error-handling, no data loss
  • Akka provides support for:
    • Actor-based model (actors are services); watch video from Channel Nine
    • Akka HTTP Server is relatively new
    • Akka is written in Scala
    • There's a Scala DSL for defining the controller (define routes)
    • The Scala compiler is pure crap. Sooooo slooooowww (62 seconds for 12 files)

During his talk, he took us through the following stages of building a scalable, resilient actor-based application with Akka.

  • First he started with static HTML
  • Then he moved on to something connected to AKKA, but not refreshing
  • W3C Server-sent events is unidirectional channel from the server to the client. He next used this to have instant refresh on the client; not available on IE. Probably used by SignalR (or whatever replaced it)? Nothing is typed, though, just plain old JavaScript
  • Then he set up sharding
  • Then persistence (Cassandra, Kafka)

AKKA Distributed Data

  • Deals with keeping replicas consistent without central coordination
  • Conflict-free replicated data types
  • Fully distributed, has pub/sub semantics
  • Uses the Gossip protocol
  • Support various consistency strategies
  • Using AKKA gives you automated scaling support (unlike the SignalR demo Urs and I did over 2 years ago, but that was a chat app as well)

AKKA Cluster Sharding

  • Partitioning of actors/services across clusters
  • Supports various strategies
  • Default strategy is to distribute unbalanced actors to new shards
  • The ShardRegion is another actor that manages communication with sharded actors (entities). This introduces a new level of indirection, which must be honored in the code (?)

AKKA Persistence

  • Event-sourcing: validate commands, journal events, apply the event after persistence. (A short sketch of this loop follows the list.)
  • Application is applied to local state only after the journal/persistence has indicated that the command was journaled
  • On recovery, events are replayed
  • Supports snapshotting (caching points in time)
  • Requires a change to the actor/entity to use it. All written in Scala.
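
A minimal sketch of that command/event loop, written here in C# rather than the talk's Scala and with all names invented for illustration:

using System;
using System.Collections.Generic;

// Toy event-sourced entity with an in-memory "journal".
public class CounterEntity
{
  private readonly List<string> _journal = new List<string>();
  private int _state;

  public void Handle(string command)
  {
    // 1. Validate the command.
    if (command != "increment") { throw new ArgumentException("Unknown command"); }

    // 2. Journal the resulting event (stand-in for Cassandra, Kafka, ...).
    var evt = "incremented";
    _journal.Add(evt);

    // 3. Apply the event to local state only after it has been journaled.
    Apply(evt);
  }

  // On recovery, replay the journaled events to rebuild the state.
  public void Recover()
  {
    _state = 0;
    foreach (var evt in _journal)
    {
      Apply(evt);
    }
  }

  private void Apply(string evt)
  {
    if (evt == "incremented") { _state++; }
  }
}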

Akka looks pretty good. It guarantees the ordering because ACTORS. Any given actor only exists on any shard once. If a shard goes down, the actor is recreated on a different shard, and filled with information from the persistent store to "recreate" the state of that actor.

DDD (Domain-Driven Design) and the actor model. Watch Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask).

Code is on GitHub: seeberger/reactive_flows

Lambda core - hardcore

Lambda core - hardcore -- Jarek Ratajski

media

Focus on immutability and no side-effects. Enforced by the lambda calculus. Pretty low-level talk about lambda calculus. Interesting, but not applicable. He admitted as much at the top of the talk.


expect("poo").length.toBe(1)

expect("poo").length.toBe(1) -- Philip Hofstetter1

media

This was a talk about expectations of the length of a character. The presenter was very passionate about his talk and went into an incredible amount of detail.

  • What is a string? This is the kind of stuff every programmer needs to know.2
  • String is not a collection of bytes. It's a sequence of graphemes. string <> char[]
  • UTF-16 is crap. What about the in-memory representation? Why in God's name did Python 3 use UTF-32? (UTF stands for Unicode Transformation Format.)
  • What is the length of a string? ä is how many? A single character (diaeresis included) or an a with a combining diaeresis? (A short C# sketch after this list shows the difference.)
  • In-memory representation in Java and C# is UCS-2 (Unicode 1); stuck in 1996, before Unicode 2.0 came out. This leaks into APIs because of how strings are returned ... string APIs use UTF-16, encoding with surrogate pairs to get to characters outside of the BMP (understood by convention, but not by the APIs that expect UTF-16 ... which have no idea what surrogate pairs are ... and counting algorithms, find, etc. won't work).
  • ECMAScript hasn't really fixed this, either. substr() can break strings; charAt() is still available and has no idea about code points. Does this apply to ES6? String-equality doesn't work for the diaeresis above.
  • So we're stuck with server-side. Who does it right? Perl. Swift. Python. Ruby. Python went through hell with backwards compatibility but with 3.3 they're doing OK again. Ruby strings are a tuple of encoding and data. All of the others have their string libraries dealing in graphemes. How did Perl always get it right? Perl has three methods for asking questions about length, in graphemes, code points or bytes
  • What about those of us using JavaScript? C#? Java? There are external libraries that we should be using. Not just for DateTime, but for string-handling as well. Even ECMAScript 2015 still uses code points rather than graphemes, so the count varies depending on how the grapheme is constructed.
  • Security concerns: certificate authorities have to be aware of homographs (e.g. a character that looks like another one, but has a different encoding/byte sequence).
  • He recommended the book Unicode explained by Jukka K. Korpela.
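
To make the length question concrete, here is a small C# sketch (my own illustration, not from the talk) comparing code-unit counting with grapheme counting:

using System;
using System.Globalization;

class GraphemeLength
{
  static void Main()
  {
    var precomposed = "\u00E4"; // "ä" as a single code point
    var combining = "a\u0308";  // "a" followed by a combining diaeresis

    // Both render as "ä", but String.Length counts UTF-16 code units.
    Console.WriteLine(precomposed.Length); // 1
    Console.WriteLine(combining.Length);   // 2

    // Counting text elements (graphemes) yields 1 for both.
    var elements = StringInfo.GetTextElementEnumerator(combining);
    var count = 0;
    while (elements.MoveNext())
    {
      count++;
    }
    Console.WriteLine(count);              // 1
  }
}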

How usability fits in UX - it's no PICNIC

How usability fits in UX - it's no PICNIC -- Myriam Jessier

media

What should a UI be?

  1. Functional
  2. Reliable
  3. Usable
  4. Convenient
  5. Pleasurable

Also nice to have:

  1. Desirable
  2. Delightful
  3. memorable
  4. Learnable
  5. 3 more

Book recommendation: Don't make me think by Steve Krug

  • Avoid mindless and unambiguous clicks. Don't count clicks, count useless shit you need to do.
  • Let the words go. People's attention will wander.
  • UX is going to be somewhat subjective. Don't try to please everyone.
  • OMG She uses hyphens correctly.
  • She discussed the difference between UX, CX, UI.
  • Personas are placeholders for your users. See Personapp to get started working with personas.

Guidelines:

  • Consistent and standardized UI
  • Guide the user (use visual cues, nudging)
  • Make the CallToAction (CTA) interactive objects obvious
  • Give feedback on progress, interaction
  • Never make a user repeat something they already told you. You're software, you should have eidetic memory
  • Always have default values in forms (e.g. show the expected format)
  • Explain how the entered information will be used (e.g. for marketing purposes)
  • No more "reset" button or mass-delete buttons. Don't make it possible/easy to wipe out all someone's data
  • Have clear and explanatory error or success messages (be encouraging)
  • Include a clear and visual hierarchy and navigation

Guidelines for mobile:

  • Make sure it works on all phones

  • Give incentives for sharing and purpose (engagement rates make marketing happy. CLICK THE BUTTON)

  • Keep usability and conversion in mind (not necessarily money, but you actually want people to be using your app correctly)

  • Usability (can you use your app on the lowest screen-brightness?)

  • ...and more...

  • Make it pretty (some people don't care, e.g. She very clearly said that she's not aesthetically driven, it's not her field; other people do care. A lot).

  • Give all the information a customer needs to purchase

  • Design for quick movement (no lag)

  • Do usability testing through video

  • Leverage expectations. Fit in to the environment. Search is on the left? Behind a button? Do that. Don't make a new way of searching.

  • If you offer a choice, then make them as mutually exclusive as possible. When a company talks to itself (e.g. industry jargon), then users get confused

  • The registration process should be commensurate to the thing that you're registering for

  • Small clickable ads on mobile. Make click targets appropriate.

  • Don't blame negative feedback on "fear of change". It's probably you. If people don't like it, then it might not be user-friendly. The example with Twitter's star vs. heart. It's interesting how we let the world frame our interactions. Why not both? Too complex? Would people really be confused by two buttons? One to "like" and one for "read later"?

Suggested usability testing tools:

  • Crazy Egg is $9 per month for heatmaps.
  • Qualaroo
  • Optimizely (A/B testing)
  • Usabilia
  • Userfeel
  • Trymyui

React - A trip to Russia isn't all it seems

React - A trip to Russia isn't all it seems -- Josh Sephton[^3]

media

This talk was about Web UI frameworks and how his team settled on React.

  • Angular too "all or nothing".
  • Backbone has no data-binding.
  • React looks good. Has its own routing for SPAs. Very component-heavy. Everything's a component. Nothing new here so far.
  • They built their React to replace a Wordpress-based administration form
  • Stateful components are a bad idea
  • React components are like self-contained actors/services
  • They started with Flux, but ended up with Redux. We're using Redux in our samples. I'm eyeballing how to integrate Akka.Net (although I'm not sure if that has anything to do with this).
  • ReactNative: write once, use on any device
  • Kind of superficial and kinda short but I knew all about this in React already

The reactor programming model for composable distributed computing

The reactor programming model for composable distributed computing -- Aleksandar Prokopec[^4]

media

  • Reactive programming, with events as sequences of event objects
  • Events are equivalent to a list/sequence/streams (enumerable in C#)
  • This talk is also about managing concurrency
  • There must be a boundary between outer concurrent events vs. how your application works on them
  • That's why most UI toolkits are single-threaded
  • Asynchronous is the antonym of concurrency (at least in the dictionary)
  • Filter the stream of events to compress them to frames, then render and log, so the events come in, are marshaled through the serializing bottleneck and are then dispatched asynchronously to different tasks
  • Reactor lets clients create their own channels (actors) from which they read events and which they register with a server so that it can publish
  • Akka supports setting up these things, Reactor is another implementation?
  • Dammit I want destructuring of function results (C# 7?)
  • It's very easy to build client/server and broadcast and even ordered synchronization using UIDs (or that pattern mentioned by Jonas in the keynote). The UID needs to be location-specific, though. That's not sufficient either; what you need is client-specific. For this, you need special data structures to store the data in a way that edits are automatically correctly ordered. Events sent for these changes make sure the events are ordered correctly.
  • What is a CRDT? We just implemented an online collaborative editor: it composes nicely and provides a very declarative, safe and scalable way of defining software. This is just a function (which feeds back into the ideas of lambdas, immutability and encapsulation).
  • Reactors


  1. I am aware of the irony that the emoji symbol for "poo" is not supported on this blogging software. That was basically the point of the presentation -- that encoding support is difficult to get right. There's an issue for it: Add support for UTF8 as the default encoding.

  2. In my near-constant striving to be the worst conversational partner ever, I once gave a similar encoding lesson to my wife on a two-hour walk around a lake when she dared ask why mails sometimes have those "stupid characters" in them.

Finovate 2016: Bank2Things

image

image

At the beginning of the year, we worked on an interesting project that dipped into IOT (Internet of Things). The project was to create use cases for Crealogix's banking APIs in the real world. Concretely, we wanted to show how a customer could use these APIs in their own workflows. The use cases were to provide proof of the promise of flexibility and integrability offered by well-designed APIs.

Watch the 7-minute video of the presentation

The Use Cases

Football Club Treasurer

The first use case is for the treasurer of a local football club. The treasurer wants to be notified whenever an annual club fee is transferred from a member. The club currently uses a Google Spreadsheet to track everything, but it's updated manually. It would be really nice if the banking API could be connected -- via some scripting "glue" -- to update the spreadsheet directly, without user intervention. The treasurer would just see the most current numbers whenever he opened the spreadsheet.

The spreadsheet is in addition to the up-to-date view of payments in the banking app. The information is also available there, but not necessarily in the form that he or she would like. Linking automatically to the spreadsheet is the added value.

Chore & Goal Tracker

Imagine a family with a young son who wants to buy a drone. He would have to earn it by doing chores. Instead of tracking this manually, the boy's chores would be tabulated automatically, moving money from the parents' account to his own as he did chores. Additionally, a lamp in the boy's room would glow a color indicating how close he was to his goal. The parents wanted to track the boy's progress in a spreadsheet, tracking the transfers as they would have had they not had any APIs.

The idea is to provide added value to the boy, who can record his chores by pressing a button and see his progress by looking at a lamp's color. The parents get to stay in their comfort zone, working with a spreadsheet as usual, but having the data automatically entered in the spreadsheet.

The Plan

It's a bit of a stretch, but it sufficed to ground the relatively abstract concept of banking APIs in an example that non-technical people could follow.

So we needed to pull quite a few things together to implement these scenarios.

  • A lamp that can be controlled via API
  • A button that can trigger an API
  • A spreadsheet accessible via API
  • An API that can transfer money between accounts
  • "Glue" logic that binds these APIs together

The Lamp

We looked at two lamps:

  • Philips Hue
  • Lifx

Either of these -- just judging from their websites -- would be sufficient to utterly and completely change our lives. The Hue looked like it was going to turn us into musicians, so we went with Lifx, which only threatened to give us horn-rimmed glasses and a beard (and probably skinny jeans and Chuck Taylor knockoffs).

Yeah, we think the marketing for what is, essentially, a light-bulb, is just a touch overblown. Still, you can change the color of the light bulb with a SmartPhone app, or control it via API (which is what we wanted to do).

The Button

The button sounds simple. You'd think that, in 2016, these things would be as ubiquitous as AOL CDs were in the 1990s. You'd be wrong.

There's a Kickstarter project called Flic that purports to have buttons that send signals over a wireless connection. They cost about CHF20. Though we ordered some, we never saw any because of manufacturing problems. If you thought the hype and marketing for a light bulb were overblown, then you're sure to enjoy how Flic presents a button.

We quickly moved along a parallel track to get buttons that can be pressed in real life rather than just viewed from several different angles and in several different colors online.

image

Amazon has what it calls "Dash" buttons that customers can press to add predefined orders to their one-click shopping lists. The buttons are bound to certain household products that you tend to purchase cyclically: toilet paper, baby wipes, etc.

They sell them dirt-cheap -- $5 -- but only to Amazon Prime customers -- and only to customers in the U.S. Luckily, we knew someone in the States willing to let us use his Amazon Prime account to deliver them, naturally only to a domestic address, from which they would have to be forwarded to us here in Switzerland.

That we couldn't use them to order toilet paper in the States didn't bother us -- we were planning to hack them anyway.

These buttons showed up after a long journey and we started trapping them in our own mini-network so that we could capture the signal they send and interpret it as a trigger. This was not ground-breaking stuff, but we really wanted the demonstrator to be able to press a physical button on stage to trigger the API that would cascade other APIs and so on.

Of course we could have just hacked the whole thing so that someone presses a button on a screen somewhere -- and we programmed this as a backup plan -- but the physicality of pressing a button was the part of the demonstration that was intended to ground the whole idea for non-technical users.1

The Spreadsheet

imageimage

If you're going to use an API to modify a spreadsheet, then that spreadsheet has to be available online somewhere. The spreadsheet application in Google Docs is a good candidate.

The API allows you to add or modify existing data, but that's pretty much it. When you make changes, they show up immediately, with no ceremony. That, unfortunately, doesn't make for a very nice-looking demo.

Google Docs also offers a JavaScript-like scripting language that lets you do more. We wanted not only to insert rows, but also to have charts update automatically and move down the page to accommodate the new row. All animated, thank you very much.

This took a couple pages of scripting and a good amount of time. It's also no longer a solution that an everyday user is likely to make themselves. And, even though we pushed as hard as we could, we also didn't get everything we wanted. The animation is very jerky (watch the video linked above) but gets the job done.

The Glue

image

So we've got a bunch of pieces that are all capable of communicating in very similar ways. The final step is to glue everything together with a bit of script. There are several services available online, like IFTTT -- If This Then That -- that allow you to code simple logic to connect signals to actions. A small example of such glue follows the lists below.

In our system, we had the following signals:

  • Transfer was made to a bank account
  • Button was pressed

and the following actions:

  • Insert data into Google Spreadsheet
  • Set color of lamp
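To give a rough idea of how thin this glue can be, here's a hedged sketch of a signal being forwarded to IFTTT's Webhooks (Maker) channel; the event name, key and payload are made up for illustration, and the real wiring also ran through our own web site:

```sh
# Hypothetical: a hacked Dash button (or the backup web page) fires this when pressed.
# "chore_done" and IFTTT_KEY are placeholders, not values from the real project.
curl -X POST "https://maker.ifttt.com/trigger/chore_done/with/key/$IFTTT_KEY" \
     -H "Content-Type: application/json" \
     -d '{"value1": "Took out the recycling"}'
```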

The Crealogix API and UI

imageimageimage

So we're going to betray a tiny secret here. Although the product demonstrated on stage did what it claimed, it didn't actually use the Crealogix API to transfer money. That's the part we were actually selling, and it's the part we ended up faking/mocking out, because the actual transfer is beside the point. Setting up bank accounts is not so easy, and banks take umbrage at creating them for fake purposes.

Crealogix could have let us use fake testing accounts, but even that would have been more work than it was worth: if we're already faking, why not just fake in the easiest way possible by skipping the API call to Crealogix and only updating the spreadsheet?

Likewise, the entire UI that we included in the product was mocked up to include only the functionality required by the demonstration. You can see an example here -- of the login screen -- and other screens are linked throughout this article. The Bank2Things screen shown above and to the left is also a mockup.

Wrapup

So what did Encodo actually contribute?

  • We used the Crealogix UX and VSG to mock up all of the app screens that you see linked in this article. We did all of the animation, logic and styling.
  • We built two Google Spreadsheets and hooked them up to everything else
  • We hooked up the Lifx lamp API into our system
  • We hacked the Amazon Dash buttons to communicate in our own network instead of beaming home to the mothership
  • We built a web site to handle any mocking/faking that needed to be done for the demo and through which the devices communicated
  • We provided a VM (Virtual Machine) on which everything ran (other than the Google Spreadsheets)

As last year -- when we helped Crealogix create the prototype for their BankClip for Finovate 2015 -- we had a lot of fun investigating all of these cutting-edge technologies and putting together a custom solution in time for Finovate 2016.



  1. As it turns out, if you watch the 7-minute video of the presentation, nowhere do you actually see a button. Maybe they could see them from the audience.

Git: Managing local commits and branches

At Encodo, we've got a relatively long history with Git. We've been using it exclusively for our internal source control since 2010.1

Git Workflows

GitWhen we started with Git at Encodo, we were quite cautious. We didn't change what had already worked for us with Perforce.2 That is: all developers checked in to a central repository on a mainline or release branch. We usually worked with the mainline and never used personal or feature branches.

Realizing the limitations of this system, we next adopted an early incarnation of GitFlow, complete with command-line support for it. A little while later, we switched to our own streamlined version of GitFlow without a dev branch, which we published in an earlier version of the Encodo Git Handbook.3

We're just now testing the waters of Pull Requests instead of direct commits to master and feature branches. Before we can make this move, though, we need to raise the comfort level that all of our developers have toward creating branches and manipulating commits. We need to take the magic and fear out of Git -- nothing is sacred except a pushed commit4 -- and learn to view Git more as a toolbox that we can make work for us rather than a mysterious process to whose whims we must comply.5

General Rules

Before we get started, let's lay down some ground rules for working with Git and source control, in general.

  • Use branches
  • Don't use too many branches at once
  • Make small pull requests
  • Use no more than a few unpushed commits
  • Get regular reviews

As you can see, the rules describe a process of incremental changes. If you stick to them, you'll have much less need for the techniques described below. In case of emergency, though, let's demystify some of what Git does.

If you haven't done so already, you should really take a look at some documentation of how Git actually works. There are two sources I can recommend:

  • The all-around excellent Official Git Documentation. It's well-written and well-supplied with diagrams, but quite detailed.
  • The Encodo Git Handbook summarizes the details of Git we think are important, as well as setting forth best practices and a development process.

Examples

All examples and screenshots are illustrated with the SmartGit log UI.

Before you do any of the manipulation shown below, **always make sure your working tree has been cleared**. That means there are no pending changes in it. Use the `stash` command to put pending changes to the side.
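For command-line users, something like the following does the job:

```sh
# Put pending changes (including untracked files) aside before manipulating commits...
git stash push -u -m "WIP before reorganizing commits"

# ...and bring them back when you're done.
git stash pop
```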

Moving branches

In SmartGit, you can grab any local branch marker and drag it to a new location. SmartGit will ask what you want to do with the dropped branch marker, but you'll almost always just want to set it to the commit on which you dropped it.

This is a good way of easily fixing the following situation:

  1. You make a bunch of commits on the master branch
  2. You get someone to review these local commits
  3. They approve the commits, but suggest that you make a pull request instead of pushing to master. A good reason for this might be that both the developer and the face-to-face reviewer think another reviewer should provide a final stamp of approval (i.e. the other reviewer is the expert in an affected area)

In this case, the developer has already moved their local master branch to a newer commit. What to do?

Create a pull-request branch

Create and check out a pull-request branch (e.g. mvb/serviceImprovements).

image

image

Set master to the origin/master

Move the local master branch back to origin/master. You can do this in two ways (a command-line sketch follows the list):

  • Check out the master branch and then reset to the origin/master branch or...
  • Just drag the local master branch to the origin/master commit.
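For reference, the whole sequence of this section looks roughly like this on the command line, using the branch name from the example:

```sh
git branch mvb/serviceImprovements    # pin the reviewed commits to a new branch
git checkout master
git reset --hard origin/master        # move local master back to the origin
git checkout mvb/serviceImprovements  # continue work on the pull-request branch
```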

image

Final: branches are where they belong

In the end, you've got a local repository that looks as if you'd made the commits on the pull-request branch in the first place. The master branch no longer has any commits to push.

image

Moving & joining commits

SmartGit supports drag&drop move for local commits. Just grab a commit and drop it to where you'd like to have it in the list. This will often work without error. In some cases, like when you have a lot of commits addressing the same areas in the same files, SmartGit will detect a merge conflict and will be unable to move the commit automatically. In these cases, I recommend that you either:

  • Give up. It's probably not that important that the commits are perfect.
  • Use the techniques outlined in the long example below instead.

You can also "join" -- also called "squash" in Git parlance -- any adjoining commits into a single commit. A common pattern you'll see is for a developer to make changes in response to a reviewer's comments and save them in a new commit. The developer can then move that commit down next to the original commit from which the changes stemmed and join the commits to "repair" the original commit after review. You can at the same time edit the commit message to include the reviewer's name. Nice, right?

Here's a quick example:

Initial: three commits

We have three commits, but the most recent one should be squashed with the first one.

image

Move a commit

Select the most recent commit and drag it to just above the commit with which you want to join it. This operation might fail.6

image

Squash selected commits

Select the two commits (it can be more) and squash/join them. This operation will not fail.

image

Final: two commits

When you're done, you should see two commits: the original one has now been "repaired" with the additional changes you made during the review. The second one is untouched and remains the top commit.

image

Diffing commits

You can squash/join commits when you merge or you can squash/join commits when you cherry-pick. If you've got a bunch of commits that you want to combine, cherry-pick those commits but don't commit them.

You can also use this technique to see what has changed between two branches. There are a lot of ways to do this, and a lot of guides will show you which commands to execute on the command line.

In particular, Git allows you to easily display the list of commits between two other commits as well as showing the combined differences in all of those commits in a patch format. The patch format isn't very easy to use for diffing from a GUI client, though. Most of our users know how to use the command line, but use SmartGit almost exclusively nonetheless -- because it's faster and more intuitive.
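For completeness, those commands look something like this (the branch names are illustrative):

```sh
# Commits on the feature branch that aren't on master yet...
git log master..feature/serviceImprovements

# ...and their combined changes as a single patch, relative to the merge base.
git diff master...feature/serviceImprovements
```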

So, imagine you've made several commits to a feature or release branch and want to see what would be merged to the master branch. It would be nice to see the changes in the workspace as a potential commit on master so you can visually compare the changes as you would a new commit.

Here's a short, visual guide on how to do that.

Select commits to cherry-pick

Check out the target branch (master in this example) and then select the commits you want to diff against it.

image

Do not commit

When you cherry-pick, leave the changes to accumulate in the working tree. If you commit them, you won't be able to diff en bloc as you'd like.
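On the command line, the flag for this is --no-commit (or -n); a quick sketch with placeholder commits:

```sh
git checkout master
git cherry-pick --no-commit <commit1> <commit2>   # changes accumulate in the index/working tree
git diff --cached                                 # inspect the combined changes
git reset --hard                                  # discard them again when you're done looking
```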

image

Final: working tree

The working tree now contains the differences in the cherry-picked commits.

image

Now you can diff files to your heart's content to verify the changes.

Working-tree files

Once you have changes in the working tree that are already a part of other commits, you might be tempted to think you have to revert the changes because they're already committed, right?

You of course don't have to do that. You can let the original commits die on the vine and make new ones, as you see fit.

Suppose after looking at the differences between our working branch and the master branch, you decide you want to integrate them. You can do this in several ways.

  1. You could clear the working tree7, then merge the other branch to master to integrate those changes in the original commits.
  2. Or you could create one or more new commits out of the files in the workspace and commit those to master. You would do this if the original commits had errors or incomplete comments or had the wrong files in them.
  3. Or you could clear the working tree and re-apply the original commits by cherry-picking and committing them. Now you have copies of those commits and you can edit the messages to your heart's content.

Even if you don't merge the original commits as in option (1) above, and you create new commits with options (2) and (3), you can still merge the branch so that Git is aware that all work from that branch has been included in master. You don't have to worry about applying the same work twice. Git will normally detect that the changes to be applied are exactly the same and will merge automatically. If not, you can safely just resolve any merge conflicts by selecting the master side.8
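A minimal sketch of that final merge, with an illustrative branch name; the -X ours option is one way to resolve any remaining conflicts in favor of master's side, as described above:

```sh
git checkout master
# Identical changes normally merge cleanly; "-X ours" keeps master's side where they don't.
git merge -X ours feature/serviceImprovements
```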

An example of reorganizing commits

Abandon hope, all ye who enter here. If you follow the rules outlined above, you will never get into the situation described in this section. That said...when you do screw something up locally, this section might give you some idea of how to get out of it. Before you do anything else, though, you should consider how you will avoid repeating the mistake that got you here. You can only do things like this with local commits or commits on private branches.

The situation in this example is as follows:

  • The user has made some local commits and reviewed them, but did not push them.
  • Other commits were made, including several merge commits from other pull requests.
  • The new commits still have to be reviewed, but the reviewer can no longer sign the commits because they are rendered immutable by the merge commits that were applied afterward.
  • It's difficult to review these commits face-to-face and absolutely unconscionable to create a pull request out of the current local state of the master branch.
  • The local commits are too confusing for a reviewer to follow.

The original mess

So, let's get started. The situation to clean up is shown in the log-view below.

image

Pin the local commits

Branches in Git are cheap. Local ones even more so. Create a local branch to pin the local commits you're interested in into the view. The log view will automatically hide commits that aren't referenced by either a branch or a tag.9

image

Choose your commits

Step one: find the commits that you want to save/re-order/merge.

image

The diagram below shows the situation without arrows. There are 17 commits we want, interspersed with 3 merge commits that we don't want.10

image

Reset local master

Check out the master branch and reset it back to the origin.

image

Cherry-pick commits

Cherry-pick and commit the local commits that you want to apply to master. This will make copies of the commits on pin.
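On the command line, the story so far looks roughly like this (commit hashes are placeholders):

```sh
git branch pin                              # pin the interesting local commits
git checkout master
git reset --hard origin/master              # throw the local mess off of master
git cherry-pick <commit1> <commit2> <...>   # copy the wanted commits, oldest first, onto master
```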

image

Master branch with 17 commits

When you're done, everything should look nice and neat, with 17 local commits on the master branch. You're now ready to get a review for the handful of commits that haven't had them yet.11

image

Delete the temporary branch

You now have copies of the commits on your master branch, so you no longer care about the pin branch or any of the commits it was holding in the view. Delete it.

image

That pesky merge

Without the pin, the old mess is no longer displayed in the log view. Now I'm just missing the merge from the pull request/release branch. I just realized, though: if I merge on top of the other commits, I can no longer edit those commits in any way. When I review those commits and the reviewer wants me to fix something, my hands will be just as tied as they were in the original situation.

image

Inserting a commit

If the tools above worked once, they'll work again. You do not have to go back to the beginning, you do not have to dig unreferenced commits out of the Git reflog.

Instead, you can create the pin branch again, this time to pin your lovely, clean commits in place while you reset the master branch (as before) and apply the merge as the first commit.
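Sketched as commands, with the pull-request branch name as a placeholder:

```sh
git branch pin                    # pin the clean commits again
git checkout master
git reset --hard origin/master
git merge <pull-request-branch>   # the merge commit goes onto master first this time
```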

image

Rebase pin onto master

Now we have a local master branch with a single merge commit that is not on the origin. We also have a pin branch with 17 commits that are not on the origin.

Though we could use cherry-pick to copy the individual commits from pin to master, we'll instead rebase the commits. The rebase operation is more robust and was made for these situations.12
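A minimal command-line version of the rebase:

```sh
git checkout pin
git rebase master        # replay pin's 17 commits on top of the merge commit on master
# If a commit conflicts: fix the files, "git add" them, then run
#   git rebase --continue
```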

image

pin is ready

We're almost done. The pin branch starts with the origin/master, includes a merge commit from the pull request and then includes 17 commits on top of that. These 17 commits can be edited, squashed and changed as required by the review.

image

Fast-forward master

Now you can switch to the master branch, merge the pin branch (you can fast-forward merge) and then delete the pin branch. You're done!
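Or, as commands:

```sh
git checkout master
git merge --ff-only pin   # fast-forward: master now points at the rebased commits
git branch -d pin         # the temporary branch has served its purpose
```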

image

Conclusion

I hope that helps take some of the magic out of Git and helps you learn to make it work for you rather than vice versa. With just a few simple tools -- along with some confidence that you're not going to lose any work -- you can do pretty much anything with local commits.13

h/t to Dani and Fabi for providing helpful feedback.


If you look closely, you can even see two immediately subsequent merges where I merged the branch and committed it. I realized there was a compile error and undid the commit, added the fixes and re-committed. However, the re-commit was no longer a merge commit so Git "forgot" that the pull-request branch had been merged. So I had to merge it again in order to recapture that information.

This is going to happen to everyone who works more than casually with Git, so isn't it nice to know that you can fix it? No-one has to know.


  1. Over five years counts as a long time in this business.

  2. I haven't looked at their product palette in a while. They look to have gotten considerably more enterprise-oriented. The product palette is now split up between the Helix platform, Helix versioning services, Helix Gitswarm and more.

  3. But which we've removed from the most recent version, 3.0.

  4. This is often delivered in a hushed tone with a note of fervent belief that having pushed a commit to the central repository makes it holy. A commit pushed to the central repository on master or a release branch is indeed immutable, but everything else can be changed. This is the reason we're considering a move to pull requests: it would make sure that commits become immutable only when they are ready rather than as a side effect of wanting to share code with another developer.

  5. In all cases, when you manipulate commits -- especially merge commits -- you should minimally verify that everything still builds and optimally make sure that tests run green.

  6. If the commits over which you're moving contain changes that conflict with the ones in the commit to be moved, Git will not be able to move that commit without help. In that case, you'll either have to (A) give up or (B) use the more advanced techniques shown in the final example in this blog.

  7. That is, in fact, what I did when preparing this article. Since I'm not afraid of Git, I manipulated my local workspace, safe in the knowledge that I could just revert any changes I made without losing work.

  8. How do we know this? Because we just elected to create our own commits for those changes. Any merge conflicts that arise are due to the commits you expressly didn't want conflicting with the ones that you do, which you've already committed to master.

  9. You can elect to show all commits, but that would then show a few too many unwanted commits lying around as you cherry-pick, merge and rebase to massage the commits to the way you'd like them. Using a temporary branch tells SmartGit which commits you're interested in showing in the view.

  10. Actually, we do want to merge all changes from the pull-request branch but we don't want to do it in the three awkward commits that we used as we were working. While it was important at the time that the pull-request be merged in order to test, we want to do it in one smooth merge-commit in the final version.

  11. You may be thinking: what if I want to push the commits that have been reviewed to master and create a pull request for the remaining commits? Then you should take a look in the section above, called Moving branches, where we do exactly that.

  12. Why? As you saw above, when you cherry-pick, you have to be careful to get the right commits and apply them in the right order. The situation we currently have is exactly what rebase was made for. The rebase command will get the correct commits and apply them in the correct order to the master branch. If there are merge conflicts, you can resolve them with the client and the rebase automatically picks up where you left off. If you elect to cherry-pick the commits instead and the 8th out of 17 commits fails to merge properly, it's up to you to pick up where you left off after solving the merge conflict. The rebase is the better choice in this instance.

  13. Here comes the caveat: within reason. If you've got merge commits that you have to keep because they cost a lot of blood, sweat and tears to create and validate, then don't cavalierly throw them away. Be practical about the "prettiness" of your commits. If you really would like commit #9 to be between commits #4 and #5, but SmartGit keeps telling you that there is a conflict when trying to move that commit, then reconsider how important that move is. Generally, you should just forget about it, because there's only so much time you should spend massaging commits. This article is about making Git work for you, but don't get obsessive about it.