The article .NET Core, a call to action by Mark Rendle exhorts everyone to "go go go".
I say, "pump the brakes."
Mark says, "The next wave of work must be undertaken by the wider .NET community, both inside and outside Microsoft."
No. The next wave of work must be undertaken by the team building the product. This product is not even Beta yet. They have called the last two releases RC, but they aren't: the API is still changing quite dramatically. For example, the article Announcing .NET Core RC2 and .NET Core SDK Preview 1 lists all sorts of changes and the diff of APIs between RC1 and RC2 is gigantic -- the original article states that "[w]e added over a 1000 new APIs in .NET Core RC2".
What?!?!
That is a huge API-surface change between release candidates. That's why I think these designations are largely incorrect. Maybe they just mean, "hey, if y'all can actually work with this puny footprint, then we'll call it a final release. If not, we'll just add a bunch more stuff until y'all can compile again." Then, yeah, I guess each release is a "candidate".
But then they should just release 1.0 because this whole "RC" business is confusing. What they're really releasing are "alpha" builds. The quality is high, maybe even production-quality, but they're still massive changes vis-a-vis previous builds.
That doesn't sound like "RC" to me. As an example, look at the project-file format, project.json.
Mark also noted that there are "no project.json files in the repository" for the OData project that comes from Microsoft. That's not too surprising, considering the team behind .NET Core just backed off of the project.json
format considerably, as concisely documented in The Future of project.json in ASP.NET Core by Shawn Wildermuth. The executive summary is that they've decided "to phase out project.json in deference to MSBuild". Anyone who's based any of their projects on the already-available-in-VS-2015 project templates that use that format will have to convert them to whatever the final format is.
Wildermuth also wrote that "Microsoft has decided after the RTM of the ASP.NET Core framework to phase out project.json and use MSBuild for build data. (Emphasis added.)" I was confused (again) but am pretty sure that he's wrong about RTM because, just a couple of days later, MS published an article Announcing ASP.NET Core RC2 -- and I'm pretty sure that RCs come before RTM.
At Encodo, we took a shot at porting the base assembly of Quino to .NET Core. It has dependencies only on framework-provided assemblies in the GAC, which eliminated any issues with third-party support, but it does provide helper methods for AppDomains
and Reflection, which made a port to .NET Core nontrivial.
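To give a flavor of what "nontrivial" means here, platform-specific helpers end up behind conditional compilation. The following is a hedged sketch of the pattern -- not actual Quino code, and the NET451 symbol name varied between tooling versions in the RC era:

using System;
using System.IO;
using System.Reflection;

public static class AssemblyExtensions
{
  public static string GetBasePath(this Assembly assembly)
  {
#if NET451
    // Full framework: the AppDomain API is available.
    return AppDomain.CurrentDomain.BaseDirectory;
#else
    // .NET Core: no AppDomain, so fall back to the assembly's own location.
    return Path.GetDirectoryName(assembly.Location);
#endif
  }
}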
Here are a few things we learned that made the port take much longer than we expected.
* project.json works with the command-line tools. Create the project file and compile with dotnet.
* project.json files with multiple targets do not work in Visual Studio; you have to choose a single target. Otherwise, the same project that just built on the command line barely loads.
* The same goes for the #IFDEFs you use for platform-specific code. So, even if you've gotten everything compiling on the command line, be prepared to do it all over again differently if you actually want it to work in VS2015.
* APIs that Encodo.Core needs were missing in RC1 but are suddenly back in RC2. That means that if we'd waited, we'd have saved a lot of time and ended up in the same place.
* We do now have Encodo.Core compiling under .NET Core.

With so much in flux -- APIs and project format -- we're not ready to invest more time and money in helping MS figure out what the .NET Core target needs. We're going to sit it out until there's an actual RTM. Even at that point, if we make a move, we'll try a small slice of Quino again and see how long it takes. If it's still painful, then we'll wait until the first service pack (as is our usual policy with development tools and libraries).
I understand Mark's argument that "the nature of a package-based ecosystem such as NuGet can mean that Project Z can't be updated until Project Y releases .NET Core packages, and Project Y may be waiting on Project X, and so on". But I just don't, as he says, "trust that what we have now in RC2 is going to remain stable in API terms", so I wouldn't recommend "that OSS project maintainers" do so, either. It's just not ready yet.
If you jump on the .NET Core train now, be prepared to shovel coal. Oh, and you might just have to walk to the next station, too. At noon. Carrying your overseas trunk on your back. Once you get there, though, you might be just in time for the 1.0.1 or 1.0.2 express arriving at the station, where you can get on -- you might not even have to buy a new ticket -- and arrive at the same time as everyone else.
The Mark Rendle article states boldly that "Yesterday we finally got our hands on the first Release Candidate of .NET Core [...]" but I don't know what he's talking about. The project just released RC2 and there are even RC3 packages available in the channel already -- but these are totally useless and didn't work at all in our projects.↩
Before taking a look at the roadmap, let's quickly recap how far we've come. An overview of the release schedule shows a steady accretion of features over the years, as driven by customer or project needs.
The list below includes more detail on the releases highlighted in the graphic.1
We took 1.5 years to get to v1. The initial major version was to signify the first time that Quino-based code went into external production.2
After that, it took 6.5 years to get to v2. Although we added several large products that use Quino, we were always able to extend rather than significantly change anything in the core. The second major version was to signify sweeping changes made to address technical debt, to modernize certain components and to prepare for changes coming to the .NET platform.
It took just 5 months to get to v3 for two reasons:
So that's where we've been. Where are we headed?
As you can see above, Quino is a very mature product that satisfies the needs of a wide array of software on all tiers. What more is there to add?
Quino's design has always been driven by a combination of customer requirements and what we anticipated would be customer requirements.
We're currently working on the following features.
Modeling improvements
This work builds on the API changes made to the MetaBuilder
in v3. We're creating a more fluent, modern and extensible API for building metadata. We hope to be able to add these changes incrementally without introducing any breaking changes.6
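Concretely, "without introducing any breaking changes" means the graceful-deprecation pattern described in the footnotes: the old entry point stays, forwards to the new one and is marked obsolete. A minimal sketch with illustrative names (not actual Quino APIs):

using System;

public class Widget
{
  [Obsolete("Use Start() instead; this method will be removed in a later version.")]
  public void Run()
  {
    // The old API simply forwards to the new one, so existing code
    // keeps compiling and running -- it just emits a build warning.
    Start();
  }

  public void Start()
  {
    // ...new implementation...
  }
}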
WPF / VSG
A natural use of the rich metadata in Quino is to generate user interfaces for business entities without having to hand-tool each form. From the POC onward, Quino has included support for generating UIs for .NET Winforms.
Winforms has been replaced on the Windows desktop with WPF and UWP. We've gotten quite far with being able to generate WPF applications from Quino metadata. The screenshots below come from a pre-alpha version of the Sandbox application included in the Quino solution.
You may have noticed the lovely style of the new UI.7 We're using a VSG designed for us by Ergosign, for whom we've done some implementation work in the past.
.NET Core
If you've been following Microsoft's announcements, things are moving quickly in the .NET world. There are whole new platforms available, if you target your software to run on them. We're investigating the next target platforms for Quino. Currently that means getting the core of Quino -- Quino.Meta
and its dependencies -- to compile under .NET Core.
As you can see in the screenshot, we've got one of the toughest assemblies to compile -- Encodo.Core. After that, we'll try running some tests under Linux or OS X. The long-term goal is to be able to run Quino-based application and web servers on non-Windows -- and, most importantly, non-IIS -- platforms.8
These changes will almost certainly cause builds using previous versions to break. Look for any additional platform support in an upcoming major-version release.
There were, of course, more minor and patch releases throughout, but those didn't introduce any major new functionality.↩
Punchclock, our time-entry and invoicing software -- and Quino "dogfood" product (dogfooding: when a developer uses their own code for their own daily needs; being a user as well as a developer creates the user empathy that is the hallmark of good software) -- had been in use internally at Encodo earlier than that.↩
E.g. splitting the monolithic Encodo
and Quino
assemblies into dozens of new, smaller and much more focused assemblies. Reorganizing configuration around the IOC and rewriting application startup for more than just desktop applications was another sweeping change.↩
One of those breaking changes was to the MetaBuilder
, which started off as a helper class for assembling application metadata, but became a monolithic and unavoidable dependency, even in v2. In v3, we made the breaking changes to remove this component from its central role and will continue to replace its functionality with components that are more targeted, flexible and customizable.↩
In the years between v1 and v2, we used the minor-version number to indicate when breaking changes could be made. We also didn't try as hard to avoid breaking changes by gracefully deprecating code. The new approach tries very hard to avoid breaking changes but accepts the consequences when it's deemed necessary by the team.↩
That is, when users upgrade to a version with the newer APIs, they will get obsolete
warnings but their existing code will continue to build and run, as before the upgrade. In this way, customers can smoothly upgrade without breaking any builds.↩
You may also have noticed that the "Sandbox Dialog View" includes a little tag in it for the "XAML Spy", a tool that we use for WPF development. Just so you know the screenshots aren't faked... :-)↩
As with the WPF interface, we're likely to dogfood all of these technologies with Punchclock, our time-tracking and invoicing system written with Quino. The application server and web components that run on Windows could be migrated to run on one of our many Linux machines instead.↩
The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.
* IDataSession and IApplication now directly implement the IServiceRequestHandler, and helper methods that used to extend IApplication now extend this interface instead, so calls like GetModel() can now be executed against an IApplication or an IDataSession. Many methods have been moved out of the IServiceRequestHandler interface to extension methods declared in the Encodo.IOC namespace. This move will require applications to update their usings. ReSharper will automatically find the correct namespace and apply it for you.
* ApplicationExtensions.GetInstance() has been replaced with a direct implementation of the IServiceRequestHandler by IApplication.
* MetaBuilder.Include() has been replaced with Dependencies.Include().
* Within CreateModel(), you can no longer call CreateMainModule() because the main module is set up automatically. Although the call is marked as obsolete, it can only be combined with the older overload of CreateModel(). Using it with the newer overload will cause a runtime error, as the main module is added to the model twice.
* The path-creation methods on the MetaBuilder have been replaced by AddPath(). To rewrite a path, use the following style:

Builder.AddPath(
Elements.Classes.A.FromOne("Id"),
Elements.Classes.B.ToMany("FileId"),
path => path.SetMetaId(new Guid("...")).SetDeleteRule(MetaPathRule.Cascade),
idx => idx.SetMetaId(new Guid("..."))
);
Encodo published its first C# Handbook to its web site in 2008. At the time, we also published it to several other standard places and got some good, positive feedback. Over the next year, I made some more changes and published new versions. The latest version is 1.5.2, available from Encodo's web site. Since then, I've made a few extra notes and corrected a few errors, but never published an official version again.
This is not because Encodo hasn't improved or modernized its coding guidelines, but because of several issues, listed below.
* Some of the advice is outdated or just plain wrong (e.g. the var advice)

To address these issues and to accommodate the new requirements, here's what we're going to do:
Convert the entire document from Word to Markdown and put it in a Git repository
Separate the chapters into individual files and keep them shorter and more focused on a single topic
Separate all of the advice and rules into the following piles:
These are the requirements and goals for a new version of the C# handbook.
The immediate next steps are:
I hope to have an initial, modern version ready within the next month or so.
On Wednesday, Encodo had its first networking event of the year. Our very own Sebastian Greulach presented Code Review Best Practices. A bunch of our friends and colleagues from the area showed up for a lively discussion that, together with the presentation, lasted over 90 minutes.
We heard from people working with remote teams -- off- and near-shored -- as well as people working locally in both small and large teams and for small to large companies. We discussed various review styles, from formal to informal to nonexistent as well as the differences in managing and reviewing code for projects versus products. Naturally, we also covered tool support and where automation makes sense and where face-to-face human interaction is still better.
The discussion continued over a nice meal prepared on our outdoor grill. We even had a lot more vegetables this time! Thanks to lovely weather, we were able to spend some time outside and Pascal demonstrated his L337 drone-flying skills -- but even he couldn't save it from a rain gutter when a propeller came off mid-flight.
Thanks to everyone who helped make it happen and thanks to everyone who showed up!
Unwritten code requires no maintenance and introduces no cognitive load.
As I was working on another part of Quino the other day, I noticed that the oft-discussed registration and configuration methods1 were a bit clunkier than I'd have liked. To wit, the methods that I tended to use together for configuration had different return types and didn't allow me to freely mix calls fluently.
Register and Use

The return type for Register methods is IServiceRegistrationHandler and the return type for Use methods is IApplication (a descendant). The Register* methods come from the IOC interfaces, while the application builds on top of this infrastructure with higher-level Use* configuration methods.
This forces developers to write code in the following way to create and configure an application.
public IApplication CreateApplication()
{
var result =
new Application()
.UseStandard()
.UseOtherComponent();
result
.RegisterSingle<ICodeHandler, CustomCodeHandler>()
.Register<ICodePacket, FSharpCodePacket>();
return result;
}
That doesn't look too bad, though, does it? It doesn't seem like it would cramp anyone's style too much, right? Aren't we being a bit nitpicky here?
That's exactly why Quino 2.0 was released with this API. However, here we are, months later, and I've written a lot more configuration code and it's really starting to chafe that I have to declare a local variable and sort my method invocations.
So I think it's worth addressing. Anything that disturbs me as the writer of the framework -- that gets in my way or makes me write more code than I'd like -- is going to disturb the users of the framework as well.
Whether they're aware of it or not.
In the best of worlds, users will complain about your crappy API and make you change it. In the world we're in, though, they will cheerfully and unquestioningly copy/paste the hell out of whatever examples of usage they find and cement your crappy API into their products forever.
Do not underestimate how quickly calls to your inconvenient API will proliferate. In my experience, programmers really tend to just add a workaround for whatever annoys them instead of asking you to fix the problem at its root. This is a shame. I'd rather they just complained vociferously that the API is crap than use it anyway and make me support it side-by-side with a better version for what usually feels like an eternity.
Maybe it's because I very often have control over framework code that I will just not put up with bad patterns or repetitive code. I've also become very accustomed to having a wall of tests at my beck and call when I head off on another initially risky but ultimately rewarding refactoring.
If you're not used to this level of control, then you just deal with awkward APIs or you build a workaround as a band-aid for the symptom rather than going after the root cause.
So while the code above doesn't trigger warning bells for most, once I'd written it a dozen times, my fingers were already itching to add [Obsolete]
on something.
I am well-aware that this is not a simple or cost-free endeavor. However, I happen to know that there aren't that many users of this API yet, so the damage can be controlled.
If I wait, then replacing this API with something better later will take a bunch of versions, obsolete warnings, documentation and re-training until the old API is finally eradicated. It's much better to use your own APIs -- if you can -- before releasing them into the wild.
Another more subtle reason why the API above poses a problem is that it's more difficult to discover, to learn. The difference in return types will feel arbitrary to product developers. Code-completion is less helpful than it could be.
It would be much nicer if we could offer an API that helped users discover it at their own pace instead of making them step back and learn new concepts. Ideally, developers of Quino-based applications shouldn't have to know the subtle difference between the IOC and the application.
Something like the example below would be nice.
return
new Application()
.UseStandard()
.RegisterSingle<ICodeHandler, CustomCodeHandler>()
.UseOtherComponent()
.Register<ICodePacket, FSharpCodePacket>();
Right? Not a gigantic change, but if you can imagine how a user would write that code, it's probably a lot easier and more fluid than writing the first example. In the second example, they would just keep asking code-completion for the next configuration method and it would just be there.
In order to do this, I'd already created an issue in our tracker to parameterize the IServiceRegistrationHandler
type in order to be able to pass back the proper return type from registration methods.
I'll show below what I mean, but I took a crack at it recently because I'd just watched the very interesting video Fun with Generics by Benjamin Hodgson, which starts off with a technique identical to the one I'd planned to use -- and that I'd already used successfully for the IQueryCondition
interface.2
Let's redefine the IServiceRegistrationHandler
interface as shown below,
public interface IServiceRegistrationHandler<TSelf>
{
TSelf Register<TService, TImplementation>()
where TService : class
where TImplementation : class, TService;
// ...
}
Can you see how we pass the type we'd like to return as a generic type parameter? Then the descendants would be defined as,
public interface IApplication : IServiceRegistrationHandler<IApplication>
{
}
In the video, Hodgson notes that the technique has a name in formal notation, "F-bounded quantification", but that a snappier name comes from the C++ world: the "curiously recurring template pattern". I've often called it a self-referencing generic parameter, which seems to be a popular search term as well.
This is only the first step, though. The remaining work is to update all usages of the formerly non-parameterized interface IServiceRegistrationHandler
. This means that a lot of extension methods like the one below
public static IServiceRegistrationHandler RegisterCoreServices(
[NotNull] this IServiceRegistrationHandler handler)
{
  // ...registrations elided...

  return handler;
}
will now look like this:
public static TSelf RegisterCoreServices<TSelf>(
[NotNull] this IServiceRegistrationHandler<TSelf> handler)
where TSelf : IServiceRegistrationHandler<TSelf>
{
  // ...registrations elided; note the cast now required to return TSelf...

  return (TSelf)handler;
}
This makes defining such methods more complex (again).3 In my attempt at implementing this, Visual Studio indicated 170 errors remaining after I'd already updated a couple of extension methods.
Instead of continuing down this path, we might just want to follow the pattern we established in a few other places, by defining both a Register
method, which uses the IServiceRegistrationHandler
, and a Use
method, which uses the IApplication.
Here's an example of the corresponding "Use" method:
public static IApplication UseCoreServices(
[NotNull] this IApplication application)
{
if (application == null) { throw new ArgumentNullException("application"); }
application
.RegisterCoreServices()
.RegisterSingle(application.GetServices())
.RegisterSingle(application);
return application;
}
Though the technique involves a bit more boilerplate, it's easy to write and understand (and reason about) these methods. As mentioned in the initial sentence of this article, the cognitive load is lower than the technique with generic parameters.
The only place where it would be nice to have an IApplication
return type is from the Register*
methods defined on the IServiceRegistrationHandler
itself.
We already decided that self-referential generic constraints would be too messy. Instead, we could define some extension methods that return the correct type. We can't name the method the same as the one that already exists on the interface4, though, so let's prepend the word Use
, as shown below:
public static IApplication UseRegister<TService, TImplementation>(
  [NotNull] this IApplication application)
  where TService : class
  where TImplementation : class, TService
{
if (application == null) { throw new ArgumentNullException("application"); }
application.Register<TService, TImplementation>();
return application;
}
That's actually pretty consistent with the other configuration methods. Let's take it for a spin and see how it feels. Now that we have an alternative way of registering types fluently without "downgrading" the result type from IApplication
to IServiceRegistrationHandler
, we can rewrite the example from above as:
return
new Application()
.UseStandard()
.UseRegisterSingle<ICodeHandler, CustomCodeHandler>()
.UseOtherComponent()
.UseRegister<ICodePacket, FSharpCodePacket>();
Instead of increasing cognitive load by trying to push the C# type system to places it's not ready to go (yet), we use tiny methods to tweak the API and make it easier for users of our framework to write code correctly.5
Perhaps an example is in order:
interface IA
{
IA RegisterSingle<TService, TConcrete>();
}
interface IB : IA { }
public static class BExtensions
{
  public static IB RegisterSingle<TService, TConcrete>(this IB b) { return b; }
  public static IB UseStuff(this IB b) { return b; }
}
Let's try to call the method from BExtensions
:
public void Configure(IB b)
{
b.RegisterSingle<IFoo, Foo>().UseStuff();
}
The call to UseStuff
cannot be resolved because the return type of the matched RegisterSingle
method is the IA
of the interface method, not the IB
of the extension method. There is a solution, but you're not going to like it (I know I don't).
public void Configure(IB b)
{
BExtensions.RegisterSingle<IFoo, Foo>(b).UseStuff();
}
You have to specify the extension-method class's name explicitly, which engenders awkward fluent chaining -- you'll have to nest these calls if you have more than one -- but the desired method-resolution was obtained.
But at what cost? The horror...the horror.
See Encodo's configuration library for Quino Part 1, Part 2 and Part 3 as well as API Design: Running an Application Part 1 and Part 2 and, finally, Starting up an application, in detail.↩
The video goes into quite a bit of depth on using generics to extend the type system in the direction of dependent types. Spoiler alert: he doesn't make it because the C# type system can't be abused in this way, but the journey is informative.↩
As detailed in the links in the first footnote, I'd just gotten rid of this kind of generic constraint in the configuration calls because it was so ugly and offered little benefit.↩
If you define an extension method for a descendant type that has the same name as a method of an ancestor interface, the method-resolution algorithm for C# will never use it. Why? Because the directly defined method matches the name and all the types and is a "stronger" match than an extension method.↩
The final example does not run against Quino 2.2, but will work in an upcoming version of Quino, probably 2.3 or 2.4.↩
The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.
* DateTimeExtensions.GetDayOfWeek() had a leap-day bug (QNO-5051)
* Changed how the hash code for GenericObjects is calculated, which fixes sorting issues in grids, specifically for non-persisted or transient objects (QNO-5137)
* Improved the IAccessControl API for getting groups and users and testing membership (QNO-5133)
* Added support for an alias when calling the Join method, as shown below:

query.Join(Metadata.Project.Deputy, alias: "deputy")
You can find more examples of aliased queries in the TestAliasedQuery(), TestJoinAliasedTables() and TestJoinChildTwice() methods defined in the QueryTests testing fixture.
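Extrapolating from the call above, joining the same target more than once just means giving each join its own alias. A hedged sketch (the query-creation call and the second path are illustrative, not taken from the Quino tests):

// Each join gets its own alias so it can be referenced independently.
var query = new Query(Metadata.Project);
query.Join(Metadata.Project.Deputy, alias: "deputy");
query.Join(Metadata.Project.Manager, alias: "manager"); // hypothetical second path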
* Added the IQueryAnalyzer for optimizations and in-memory mini-drivers (QNO-4830)
* The ISchemaManager has been removed. Instead, you should retrieve the interface you were looking for from the IOC. The possible interfaces you might need are IImportHandler, IMappingBuilder, IPlanBuilder or ISchemaCommandFactory.
* ISchemaManagerSettings.GetAuthorized() has been moved to ISchemaManagerAuthorizer.
* The hash-code fix for GenericObjects may have an effect on the way your application sorts objects.
* The IParticipantManager (base interface of IAccessControl) no longer has a single method called GetGroups(IParticipant). This method was previously used both to get the groups to which a user belongs and to get the child groups of a given group. This confusing double duty for the API led to an incorrect implementation for both usages. Instead, there are now two methods:
  * IEnumerable<IGroup> GetGroups(IUser user): Gets the groups for the given user
  * IEnumerable<IGroup> GetChildGroups(IGroup group): Gets the child groups for the given group

The old method has been removed from the interface because (A) it never worked correctly anyway and (B) it conflicts with the new API.
This first-ever Voxxed Zürich was hosted at the cinema in the SihlCity shopping center in Zürich on March 3rd. All presentations were in English. The conference was relatively small -- 333 participants -- and largely vendor-free. The overall technical level of the presentations and participants was quite high. I had a really nice time and enjoyed a lot of the presentations.
There was a nice common thread running through all of the presentations, starting with the keynote. There's a focus on performance and reliability through immutability, sequences, events, actors, delayed execution (lambdas, which are relatively new to Java), instances in the cloud, etc. It sounds very BUZZWORDY, but instead it came across as a very technically polished conference that reminded me of how many good developers there are trying to do the right thing. Looking forward to next year; hopefully Encodo can submit a presentation.
You can take a look at the VoxxedDays Zürich -- Schedule. The talks that I attended are included below, with links to the presentation page, the video on YouTube and my notes and impressions. YMMV.
Life beyond the Illusion of the Present -- Jonas Bonér
Kotlin - Ready for production -- Hadi Hariri
Used at JetBrains, open-source. 14k+ users. It's not a ground-breaking language. With Java already off the table, Scala was the first language they tried to use, but they didn't like it, so they invented Kotlin.
Interoperable with Java (of course). Usable from all sorts of systems, but IntelliJ IDEA has first-class support.
Much less code, less maintenance. Encapsulates some concepts, like "data classes", which do what they're supposed to for DTO definitions.
JavaScript target exists and is the focus of work. Replacement for TypeScript?
Reactive Apps with Akka and AngularJS -- Heiko Seeberger
During his talk, he took us through the following stages of building a scalable, resilient actor-based application with Akka.
AKKA Distributed Data
AKKA Cluster Sharding
AKKA Persistence
Akka looks pretty good. It guarantees the ordering because ACTORS. Any given actor only exists on any shard once. If a shard goes down, the actor is recreated on a different shard, and filled with information from the persistent store to "recreate" the state of that actor.
DDD (Domain-Driven Design) and the actor model. Watch Hewitt, Meijer and Szyperski: The Actor Model (everything you wanted to know, but were afraid to ask).
Code is on GitHub: seeberger/reactive_flows
Lambda core - hardcore -- Jarek Ratajski
Focus on immutability and no side-effects. Enforced by the lambda calculus. Pretty low-level talk about lambda calculus. Interesting, but not applicable. He admitted as much at the top of the talk.
Links:
expect("poo").length.toBe(1) -- Philip Hofstetter1
This was a talk about expectations of the length of a character. The presenter was very passionate about his talk and went into an incredible amount of detail.
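The same trap is easy to reproduce in C# -- this sketch is mine, not from the talk -- because .NET strings count UTF-16 code units, not characters:

using System;
using System.Globalization;

class Program
{
  static void Main()
  {
    var poo = "\U0001F4A9"; // PILE OF POO, U+1F4A9, outside the Basic Multilingual Plane

    Console.WriteLine(poo.Length);                               // 2: a UTF-16 surrogate pair
    Console.WriteLine(new StringInfo(poo).LengthInTextElements); // 1: one text element
    Console.WriteLine(char.ConvertToUtf32(poo, 0));              // 128169 (0x1F4A9)
  }
}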
How usability fits in UX - it's no PICNIC -- Myriam Jessier
What should a UI be?
Also nice to have:
Book recommendation: Don't make me think by Steve Krug
Guidelines:
Guidelines for mobile:
Make sure it works on all phones
Give incentives for sharing and purpose (engagement rates make marketing happy. CLICK THE BUTTON)
Keep usability and conversion in mind (not necessarily money, but you actually want people to be using your app correctly)
Usability (can you use your app on the lowest screen-brightness?)
...and more...
Make it pretty (some people don't care -- e.g. she very clearly said that she's not aesthetically driven; it's not her field -- but other people do care. A lot).
Give all the information a customer needs to purchase
Design for quick movement (no lag)
Do usability testing through video
Leverage expectations. Fit in to the environment. Search is on the left? Behind a button? Do that. Don't make a new way of searching.
If you offer a choice, then make the options as mutually exclusive as possible. When a company talks to itself (e.g. in industry jargon), users get confused.
The registration process should be commensurate to the thing that you're registering for
Small clickable ads on mobile. Make click targets appropriate.
Don't blame negative feedback on "fear of change". It's probably you. If people don't like it, then it might not be user-friendly. The example with Twitter's star vs. heart. It's interesting how we let the world frame our interactions. Why not both? Too complex? Would people really be confused by two buttons? One to "like" and one for "read later"?
Suggested usability testing tools:
React - A trip to Russia isn't all it seems -- Josh Sephton[^3]
This talk was about Web UI frameworks and how his team settled on React.
The reactor programming model for composable distributed computing -- Aleksandar Prokopec[^4]
I am aware of the irony that the emoji symbol for "poo" is not supported on this blogging software. That was basically the point of the presentation -- that encoding support is difficult to get right. There's an issue for it: Add support for UTF8 as the default encoding.↩
In my near-constant striving to be the worst conversational partner ever, I once gave a similar encoding lesson to my wife on a two-hour walk around a lake when she dared ask why mails sometimes have those "stupid characters" in them.↩
At the beginning of the year, we worked on an interesting project that dipped into IOT (Internet of Things). The project was to create use cases for Crealogix's banking APIs in the real world. Concretely, we wanted to show how a customer could use these APIs in their own workflows. The use cases were to provide proof of the promise of flexibility and integrability offered by well-designed APIs.
Watch the 7-minute video of the presentation
The first use case is for the treasurer of a local football club. The treasurer wants to be notified whenever an annual club fee is transferred from a member. The club currently uses a Google Spreadsheet to track everything, but it's updated manually. It would be really nice if the banking API could be connected -- via some scripting "glue" -- to update the spreadsheet directly, without user intervention. The treasurer would just see the most current numbers whenever he opened the spreadsheet.
The spreadsheet is in addition to the up-to-date view of payments in the banking app. The information is also available there, but not necessarily in the form that he or she would like. Linking automatically to the spreadsheet is the added value.
Imagine a family with a young son who wants to buy a drone. He would have to earn it by doing chores. Instead of tracking this manually, the boy's chores would be tabulated automatically, moving money from the parents' account to his own as he did chores. Additionally, a lamp in the boy's room would glow a color indicating how close he was to his goal. The parents wanted to track the boy's progress in a spreadsheet, tracking the transfers as they would have had they not had any APIs.
The idea is to provide added value to the boy, who can record his chores by pressing a button and see his progress by looking at a lamp's color. The parents get to stay in their comfort zone, working with a spreadsheet as usual, but with the data entered automatically.
It's a bit of a stretch, but it sufficed to ground the relatively abstract concept of banking APIs in an example that non-technical people could follow.
So we needed to pull quite a few things together to implement these scenarios.
For the lamp, we considered two products: the Philips Hue and the Lifx. Either of these -- just judging from their websites -- would be sufficient to utterly and completely change our lives. The Hue looked like it was going to turn us into musicians, so we went with Lifx, which only threatened to give us horn-rimmed glasses and a beard (and probably skinny jeans and Chuck Taylor knockoffs).
Yeah, we think the marketing for what is, essentially, a light-bulb, is just a touch overblown. Still, you can change the color of the light bulb with a SmartPhone app, or control it via API (which is what we wanted to do).
The button sounds simple. You'd think that, in 2016, these things would be as ubiquitous as AOL CDs were in the 1990s. You'd be wrong.
There's a KickStarter project called Flic that purports to have buttons that send signals over a wireless connection. They cost about CHF20. Though we ordered some, we never saw any because of manufacturing problems. If you thought the hype and marketing for a light bulb were overblown, then you're sure to enjoy how Flic presents a button.
We quickly moved along a parallel track to get buttons that can be pressed in real life rather than just viewed from several different angles and in several different colors online.
Amazon has what they have called "Dash" buttons that customers can press to add predefined orders to their one-click shopping lists. The buttons are bound to certain household products that you tend to purchase cyclically: toilet paper, baby wipes, etc.
They sell them dirt-cheap -- $5 -- but only to Amazon Prime customers -- and only to customers in the U.S. Luckily, we knew someone in the States willing to let us use his Amazon Prime account to deliver them, naturally only to a domestic address, from which they would have to be forwarded to us here in Switzerland.
That we couldn't use them to order toilet paper in the States didn't bother us -- we were planning to hack them anyway.
These buttons showed up after a long journey and we started trapping them in our own mini-network so that we could capture the signal they send and interpret it as a trigger. This was not ground-breaking stuff, but we really wanted the demonstrator to be able to press a physical button on stage to trigger the API that would cascade other APIs and so on.
Of course we could have just hacked the whole thing so that someone presses a button on a screen somewhere -- and we programmed this as a backup plan -- but the physicality of pressing a button was the part of the demonstration that was intended to ground the whole idea for non-technical users.1
If you're going to use an API to modify a spreadsheet, then that spreadsheet has to be available online somewhere. The spreadsheet application in Google Docs is a good candidate.
The API allows you to add or modify existing data, but that's pretty much it. When you make changes, they show up immediately, with no ceremony. That, unfortunately, doesn't make for a very nice-looking demo.
Google Docs also offers a JavaScript-like scripting language that lets you do more. We wanted not only to insert rows, but also to have charts automatically update and move down the page to accommodate the new row. All animated, thank you very much.
This took a couple of pages of scripting and a good amount of time. It's also no longer a solution that an everyday user is likely to build themselves. And, even though we pushed as hard as we could, we also didn't get everything we wanted. The animation is very jerky (watch the video linked above) but gets the job done.
So we've got a bunch of pieces that are all capable of communicating in very similar ways. The final step is to glue everything together with a bit of script. There are several services available online, like IFTTT -- If This Then That -- that allow you to code simple logic to connect signals to actions.
In our system, we had the following signals:
and the following actions:
So we're going to reveal a tiny secret here. Although the product demonstrated on-stage did actually do what it said, it didn't use the Crealogix API to actually transfer money. That's the part that we were actually selling, and it's the part we ended up faking/mocking out, because the actual transfer is beside the point. Setting up bank accounts is not so easy, and the banks take umbrage at creating them for fake purposes.
Crealogix could have let us use fake testing accounts, but even that would have been more work than it was worth: if we're already faking, why not just fake in the easiest way possible by skipping the API call to Crealogix and only updating the spreadsheet?
Likewise, the entire UI that we included in the product was mocked up to include only the functionality required by the demonstration. You can see an example here -- of the login screen -- but other screens are linked throughout this article. Likewise, the Bank2Things screen shown above and to the left is a mockup.
So what did Encodo actually contribute?
As with last year -- when we helped Crealogix create the prototype for their BankClip for Finovate 2015 -- we had a lot of fun investigating all of these cutting-edge technologies and putting together a custom solution in time for Finovate 2016.
As it turns out, if you watch the 7-minute video of the presentation, nowhere do you actually see a button. Maybe they could see them from the audience.↩
In several articles last year1, I went into a lot of detail about the configuration and startup for Quino applications. Those posts discuss a lot about what led to the architecture Quino has for loading up an application.
Some of you might be wondering: what if I want to start up and run an application that doesn't use Quino? Can I build applications that don't use any fancy metadata because they're super-simple and don't even need to store any data? Those are the kind of utility applications I make all the time; do you have anything for me, you continue to wonder?
As you probably suspected from the leading question: You're in luck. Any functionality that doesn't need metadata is available to you without using any of Quino. We call this the "Encodo" libraries, which are the base on which Quino is built. Thanks to the fundamental changes made in Quino 2, you have a wealth of functionality available in just the granularity you're looking for.
Instead of writing such small applications from scratch -- and we know we could write them -- why would we want to leverage existing code? What are the advantages of doing this?
What are potential disadvantages?
A developer unfamiliar with a library -- or one who is too impatient to read up on it -- will feel these disadvantages more acutely and earlier.
Let's take a look at some examples below to see how the Encodo/Quino libraries stack up. Are we able to profit from the advantages without suffering from the disadvantages?
We're going to take a look at two simple applications:

* a utility that loads service settings from a configuration file and
* the Quino code generator.
The actual service-registration part is boilerplate generated by Microsoft Visual Studio2, but we'd like to replace the hard-coded strings with customized data obtained from a configuration file. So how do we get that data?
That doesn't sound that hard, right? I'm sure you could just whip something together with an XmlDocument
and some hard-coded paths and filenames that would do the trick.3 It might even work on the first try, too. But do you really want to bother with all of that? Wouldn't you rather just get the scaffolding for free and focus on the part where you load your settings?
The following listing shows the main application method, using the Encodo/Quino framework libraries to do the heavy lifting.
[NotNull]
public static ServiceSettings LoadServiceSettings()
{
ServiceSettings result = null;
var transcript = new ApplicationManager().Run(
CreateServiceConfigurationApplication,
app => result = app.GetInstance<ServiceSettings>()
);
if (transcript.ExitCode != ExitCodes.Ok)
{
throw new InvalidOperationException(
"Could not read the service settings from the configuration file." +
new SimpleMessageFormatter().GetErrorDetails(transcript.Messages)
);
}
return result;
}
If you've been following along in the other articles (see first footnote below), then this structure should be very familiar. We use an ApplicationManager()
to execute the application logic, creating the application with CreateServiceConfigurationApplication
and returning the settings configured by the application in the second parameter (the "run" action). If anything went wrong, we get the details and throw an exception.
You can't see it, but the library provides debug/file logging (if you enable it), debug/release mode support (exception-handling, etc.) and everything is customizable/replaceable by registering with an IOC.
Soooo...I can see where we're returning the ServiceSettings
, but where are they configured? Let's take a look at the second method, the one that creates the application.
private static IApplication CreateServiceConfigurationApplication()
{
var application = new Application();
application
.UseSimpleInjector()
.UseStandard()
.UseConfigurationFile("service-settings.xml")
.Configure<ServiceSettings>(
"service",
(settings, node) =>
{
settings.ServiceName = node.GetValue("name", settings.ServiceName);
settings.DisplayName = node.GetValue("displayName", settings.DisplayName);
settings.Description = node.GetValue("description", settings.Description);
settings.Types = node.GetValue("types", settings.Types);
}
).RegisterSingle<ServiceSettings>();
return application;
}
Let's walk through this configuration:

* The application is an instance of Application, defined in the Encodo.Application assembly. What does this class do? It does very little other than manage the main IOC (see the articles linked in the first footnote for details).
* The first call is UseSimpleInjector(). Quino includes support for the SimpleInjector IOC out of the box. As you can see, you must include this support explicitly, so you're also free to assign your own IOC (e.g. one using Microsoft's Unity). SimpleInjector is very lightweight and super-fast, so there's no downside to using it.
* Next comes UseStandard(), defined in the Encodo.Application.Standard assembly. Since I know that UseStandard() pulls in what I'm likely to need, I'll just use that.4
* UseConfigurationFile("service-settings.xml") tells the application which configuration file to load.5
* Then we configure the ServiceSettings object that we want to return. For that, there's a Configure method that returns an object from the IOC along with a specific node from the configuration data. This method is called only if everything started up OK.
* Finally, RegisterSingle makes sure that the ServiceSettings object created by the IOC is a singleton (it would be silly to configure one instance and return another, unconfigured one).

Basically, because this application is so simple, it has already accomplished its goal by the time the standard startup completes. At the point that we would "run" this application, the ServiceSettings object is already configured and ready for use. That's why, in LoadServiceSettings(), we can just get the settings from the application with GetInstance() and exit immediately.
The code generator has a bit more code, but follows the same pattern as the simple application above. In this case, we use the command line rather than the configuration file to get user input.
The main method defers all functionality to the ApplicationManager
, passing along two methods, one to create the application, the other to run it.
internal static void Main()
{
new ApplicationManager().Run(CreateApplication, GenerateCode);
}
As before, we first create an Application
, then choose the SimpleInjector and some standard configuration and registrations with UseStandard()
, UseMetaStandardServices()
and UseMetaTools()
.6
We set the application title to "Quino Code Generator" and then include objects with UseSingle()
that will be configured from the command line and used later in the application.7 And, finally, we add our own ICommandSet
to the command-line processor that will configure the input and output settings. We'll take a look at that part next.
private static IApplication CreateApplication(
IApplicationCreationSettings applicationCreationSettings)
{
var application = new Application();
return
application
.UseSimpleInjector()
.UseStandard()
.UseMetaStandardServices()
.UseMetaTools()
.UseTitle("Quino Code Generator")
.UseSingle(new CodeGeneratorInputSettings())
.UseSingle(new CodeGeneratorOutputSettings())
.UseUnattendedCommand()
.UseCommandSet(CreateGenerateCodeCommandSet(application))
.UseConsole();
}
The final bit of the application configuration is to see how to add items to the command-line processor.
Basically, each command set consists of required values, optional values and zero or more switches that are considered part of a set.
The definition for "i", for example, simply sets the value of inputSettings.AssemblyFilename
to whatever was passed on the command line after that parameter. Note that it pulls the inputSettings
from the application to make sure that it sets the values on the same singleton reference as will be used in the rest of the application.
The code below shows only one of the code-generator--specific command-line options.8
private static ICommandSet CreateGenerateCodeCommandSet(
IApplication application)
{
var inputSettings = application.GetSingle<CodeGeneratorInputSettings>();
var outputSettings = application.GetSingle<CodeGeneratorOutputSettings>();
return new CommandSet("Generate Code")
{
Required =
{
new OptionCommandDefinition<string>
{
ShortName = "i",
LongName = "in",
Description = Resources.Program_ParseCommandLineArgumentIn,
Action = value => inputSettings.AssemblyFilename = value
},
// And others...
},
};
}
Finally, let's take a look at the main program execution for the code generator. It shouldn't surprise you too much to see that the logic consists mostly of getting objects from the IOC and telling them to do stuff with each other.9
I've highlighted the code-generator--specific objects in the code below. All other objects are standard library tools and interfaces.
private static void GenerateCode(IApplication application)
{
var logger = application.GetLogger();
var inputSettings = application.GetInstance<CodeGeneratorInputSettings>();
if (!inputSettings.TypeNames.Any())
{
logger.Log(Levels.Warning, "No types to generate.");
}
else
{
var modelLoader = application.GetInstance<IMetaModelLoader>();
var metaCodeGenerator = application.GetInstance<IMetaCodeGenerator>();
var outputSettings = application.GetInstance<CodeGeneratorOutputSettings>();
var modelAssembly = AssemblyTools.LoadAssembly(
inputSettings.AssemblyFilename, logger
);
outputSettings.AssemblyDetails = modelAssembly.GetDetails();
foreach (var typeName in inputSettings.TypeNames)
{
metaCodeGenerator.GenerateCode(
modelLoader.LoadModel(modelAssembly, typeName),
outputSettings,
logger
);
}
}
}
So that's basically it: no matter how simple or complex your application, you configure it by indicating what stuff you want to use, then use all of that stuff once the application has successfully started. The Encodo/Quino framework provides a large amount of standard functionality. It's yours to use as you like and you don't have to worry about building it yourself. Even your tiniest application can benefit from sophisticated error-handling, command-line support, configuration and logging without lifting a finger.
var fileService = new ServiceInstaller();
fileService.StartType = ServiceStartMode.Automatic;
fileService.DisplayName = "Quino Sandbox";
fileService.Description = "Demonstrates a Quino-based service.";
fileService.ServiceName = "Sandbox.Services";
See the ServiceInstaller.cs
file in the Sandbox.Server
project in Quino 2.1.2 and higher for the full listing.
<?xml version="1.0" encoding="utf-8" ?>
<config>
<service>
<name>Quino.Services</name>
<displayName>Quino Utility</displayName>
<description>The application to run all Quino backend services.</description>
<types>All</types>
</service>
</config>
But that method is just a composition of over a dozen other methods. If, for whatever reason (perhaps dependencies), you don't want all of that functionality, you can just call the subset of methods that you do want. For example, you could call UseApplication()
from the Encodo.Application
assembly instead. That method includes support only for:
* Processing the command line (`ICommandSetManager`)
* Locating external files (`ILocationManager`)
* Loading configuration data from file (`IConfigurationDataLoader`)
* Debug- and file-based logging (`IExternalLoggerFactory`)
* and interacting with the `IApplicationManager`.
If you want to go even lower than that, you can try UseCore()
, defined in the Encodo.Core
assembly and then pick and choose the individual components yourself. Methods like UseApplication()
and UseStandard()
are tried and tested defaults, but you're free to configure your application however you want, pulling from the rich trove of features that Quino offers.
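For example, a stripped-down application that needs only those features might be configured like this (a sketch assuming the same fluent style as the examples above):

var application = new Application();

application
  .UseSimpleInjector()
  .UseApplication(); // a slimmer subset than UseStandard()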
You'll notice that I didn't use Configure<ILocationManager>()
for this particular usage. That's ordinarily the way to go if you want to make changes to a singleton before it is used. However, if you want to change where the application looks for configuration files, then you have to change the location manager before it's used any other configuration takes place. It's a special object that is available before the IOC has been fully configured. To reiterate from other articles (because it's important), the order of operations we're interested in here are:
1. Create application (this is where you call `Use*()` to build the application)
2. Get the location manager to figure out the path for `LocationNames.Configuration`
3. Load the configuration file
4. Execute all remaining actions, including those scheduled with calls to `Configure()`
If you want to change the configuration-file location, then you have to get in there before the startup starts running -- and that's basically during application construction. Alternatively, you could also call UseConfigurationDataLoader()
to register your own object to actually load configuration data and do whatever the heck you like in there, including returning constant data. :-)
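Putting that together, redirecting the configuration folder looks something like this (a hedged sketch: GetLocationManager() and LocationNames.Configuration appear above, but the exact setter on the location manager is assumed):

var application = new Application();

// Must happen during construction, before the startup consults
// LocationNames.Configuration to find the configuration file.
application
  .GetLocationManager()
  .SetLocation(LocationNames.Configuration, @"D:\MyApp\Config"); // SetLocation() is assumed

application
  .UseSimpleInjector()
  .UseStandard()
  .UseConfigurationFile("service-settings.xml");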
See Encodo's configuration library for Quino Part 1, Part 2 and Part 3 as well as API Design: Running an Application Part 1 and Part 2 and, finally, Starting up an application, in detail.↩
That boilerplate looks like this:↩
The standard implementation of Quino's ITextKeyValueNodeReader supports XML, but it would be trivial to create and register a version that supports JSON (QNO-4993) or YAML. The configuration file for the utility looks like this:↩
If you look at the implementation of the UseStandard
method10, it pulls in a lot of stuff, like support for BCrypt, enhanced CSV and enum-value parsing and standard configuration for various components (e.g. the file log and command line). It's called "Standard" because it's the stuff we tend to use in a lot of applications.↩
By default, the application will look for this file next to the executable. You can configure this as well, by getting the location manager with GetLocationManager()
and setting values on it.↩
The metadata-specific analog to UseStandard()
is UseMetaStandard()
, but we don't call that. Instead, we call UseMetaStandardServices()
. Why? The answer is that we want the code generator to be able to use some objects defined in Quino, but the code generator itself isn't a metadata-based application. We want to include the IOC registrations required by metadata-based applications without adding any of the startup or shutdown actions. Many of the standard Use*()
methods included in the base libraries have analogs like this. The Use*Services()
analogs are also very useful in automated tests, where you want to be able to create objects but don't want to add anything to the startup.↩
Wait, why didn't we call RegisterSingle()
? For almost any object, we could totally do that. But objects used during the first stage of application startup -- before the IOC is available -- must go in the other IOC, accessed with SetSingle()
and GetSingle()
.↩
The full listing is in Program.cs
in the Quino.CodeGenerator
project in any 2.x version of Quino.↩
Note that, once the application is started, you can use GetInstance()
instead of GetSingle()
because the IOC is now available and all singletons are mirrored from the startup IOC to the main IOC. In fact, once the application is started, it's recommended to use GetInstance()
everywhere, for consistency and to prevent the subtle distinction between IOCs -- present only in the first stage of startup -- from bleeding into your main program logic.↩
If you have the Quino source code handy, you can look it up there, but if you have ReSharper installed, you can just F12 on UseStandard()
to decompile the method. In the latest DotPeek, the extension methods are even displayed much more nicely in decompiled form.↩