From a customer, we got the request to apply a visual style guide (VSG) to a Bootstrap-based application. Since we have a lot of experience applying style guides to web applications and with styling in general, we accepted the job and started to evaluate the details.
The most recent stable version of Bootstrap is 3.3.6. However, when you go to the Bootstrap website, there is an announcement that Bootstrap 4 "is coming". The current state of Bootstrap 4 is alpha, and the last blog post about it is from December 2015, almost half a year ago. It is also not clear when version 4 will finally be available and stable, so we had to use the old Bootstrap 3 for this project.
But even here, there is some obscurity: Bootstrap was initially developed with LESS, but for some reason the team decided to switch to SASS. Even though we prefer LESS at Encodo, we decided to use SASS for this project to be able to upgrade to Bootstrap 4 more easily when it becomes available. A SASS version of Bootstrap is already available, which we used as the base for this project.
Bootstrap is a GUI library that is intended to be as simple as possible to use for the consuming developer. Unfortunately, this does not mean that it is also simple to create a theme for it or to modify some of the existing components.
There is a customization section on the Bootstrap website that allows you to select the components you need and change some basic things like colors and a few other options. This might be very nice if you just want to use Bootstrap with your own colors, but since we had a style guide with a layout quite different from Bootstrap's, we could not use this option.
So we decided to clone the entire Bootstrap library, make our changes and then build our custom Bootstrap version. This makes it possible to add some custom components and change the appearance of existing elements.
Bootstrap provides support for all kinds of browsers, including Internet Explorer down to version 8. While this is nice for developing an application that runs anywhere, it makes the SASS styling code very hard to read and edit. It also rules out modern technologies such as Flexbox, which makes styling a lot easier and has been the basis of every layout we've created in the recent past.
Another important point is that the components are not really modular. For example, the styles for the button are defined in one file, but there are many other locations where you can find styles for buttons that modify the appearance of the button based on its container.
Also, the styles are defined "inside-out", which means that the size of a container is defined by its content. Style guides normally work the other way around. All of these points make it hard to change the structure of the page without affecting everything else, especially when the original Bootstrap HTML markup does not match the needs of the desired layout.
Adding to the struggle, there is also the complex build and documentation system used in the Bootstrap project. It might be great that Bootstrap itself is used for the documentation, but I cannot understand why there is another CSS file with 1600 lines of code that changes some things specifically for the documentation. Of course, this messes up our painstakingly crafted Bootstrap styles again. In the end, we had to remove this file from our demo site, which broke styling for some documentation-specific features (like the sidebar menu).
Another point of concern is that Bootstrap uses jQuery plugins for controls that require JavaScript interaction. This might be fine for simple websites that just need some basic interaction, but it is counterproductive for real web applications, because the jQuery event handling can interfere with web application frameworks such as React or Angular.
I do not think that Bootstrap is a bad library, but it is not really suitable for projects like this one. The main use case of Bootstrap is to provide a good-looking layout for a website with little effort and little prior knowledge required. If you just want to put some information on the web and do not really care how it looks, as long as it looks good, then Bootstrap is a good option for you.
If you'd like more information about this, then please feel free to contact us!
Encodo has long been a two-space-indent shop. Section 4.1 of the Encodo C# Handbook states that "[a]n indent is two spaces; it is never a tab.", even though "[t]he official C# standard [...] is four spaces." and that, should you have a problem with that, you should "deal with it."
Although we use our own standards by default, we use a customer's standards if they've defined their own. A large part of our coding is now done with four spaces. Some of us have gotten so accustomed to this that four spaces started looking better than two. That, combined with recent publicity for the topic1, led me to ask the developers at Encodo what they thought.
So, with the rewrite of the Encodo C# Handbook in progress, what will be our recommendation going forward?
Let's summarize the opinions above:
So, we have emphatic arguments against switching to tabs instead of spaces. Although there are good arguments for a 4-space indent, there are also strong arguments for a 2-space indent. There's no real pressure to switch the indent.
Encodo's 2009 recommendation stands: we indent with two spaces. Deal with it.3
# EditorConfig is awesome: http://EditorConfig.org
# top-most EditorConfig file
root = true
# Unix-style newlines with a newline ending every file
[*]
indent_style = space
indent_size = 2
If you're watching Silicon Valley, then you probably already know what prompted this discussion. The most recent episode had Richard of Pied Piper break off a relationship with a girl because she uses spaces instead of tabs.↩
As Richard of Pied Piper recommended, which is just insanity.↩
We use the EditorConfig plugin with all of our IDEs to keep settings for different solutions and products set correctly. The config file for Quino is shown above.↩
We discussed ABD in a recent article ABD: Refactoring and refining an API. To cite from that article,
[...] the most important part of code is to think about how you're writing it and what you're building. You shouldn't write a single line without thinking of the myriad ways in which it must fit into existing code and the established patterns and practices.
With that in mind, I saw another teaching opportunity this week and wrote up my experience designing an improvement to an existing API.
Before we write any code, we should know what we're doing.1
Quino uses aspects (IMetaAspects) to add domain-specific metadata to model elements (e.g. the IVisibleAspect controls element visibility). The usual way to add or change an aspect is FindOrAddAspect(). This method does what it advertises: if an aspect with the requested type already exists, it is returned; otherwise, an instance of that type is created, added and returned. The caller gets an instance of the requested type (e.g. IVisibleAspect).

Although we're dealing concretely with aspects in Quino metadata, the pattern and techniques outlined below apply equally well to other, similar domains.
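To make that behavior concrete, here is a sketch of what FindOrAddAspect() presumably looks like (an assumed implementation based on the description above; the actual Quino code may differ):

// Assumed sketch: return the existing aspect of the requested type, or
// create one with the given lambda, add it to the element and return it.
public static TAspect FindOrAddAspect<TAspect>(
  this IMetaClass metaClass,
  Func<TAspect> createAspect)
  where TAspect : class, IMetaAspect
{
  var existingAspect = metaClass.Aspects.FirstOfTypeOrDefault<TAspect>();
  if (existingAspect != null)
  {
    return existingAspect;
  }

  var result = createAspect();

  metaClass.Aspects.Add(result);

  return result;
}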
A good example is the IClassCacheAspect. It exposes five properties, four of which are read-only. You can modify only one property (OrderOfMagnitude) through the interface. This is already not good, as we are forced to work with the implementation type in order to change any property other than OrderOfMagnitude.
The current way to address this issue would be to make all of the properties settable on the interface. Then we could use the FindOrAddAspect() method with the IClassCacheAspect. For example,
var cacheAspect =
Element.Classes.Person.FindOrAddAspect<IClassCacheAspect>(
() => new ClassCacheAspect()
);
cacheAspect.OrderOfMagnitude = 7;
cacheAspect.Capacity = 1000;
For comparison, if the caller were simply creating the aspect instead of getting a possibly-already-existing version, then it would just use an object initializer.
var cacheAspect = Element.Classes.Person.Aspects.Add(
  new ClassCacheAspect()
  {
    OrderOfMagnitude = 7,
    Capacity = 1000
  }
);
This works nicely for creating the initial aspect. But it causes an error if an aspect of that type had already been added. Can we design a single method with all the advantages?
A good way to approach a new API is to ask: How would we want the method to look if we were calling it?
Element.Classes.Person.SetCacheAspectValues(
a =>
{
a.OrderOfMagnitude = 7;
a.Capacity = 1000;
}
);
If we only want to change a single property, we can use a one-liner:
Element.Classes.Person.SetCacheAspectValues(a => a.Capacity = 1000);
Nice. That's even cleaner and has fewer explicit dependencies than creating the aspect ourselves.
Now that we know what we want the API to look like, let's see if it's possible to provide it. We request an interface from the list of aspects but want to use an implementation to set properties. The caller has to indicate how to create the instance if it doesn't already exist, but what if it does exist? We can't just cast it to the implementation type, because there is no guarantee that the existing aspect is that implementation.
These are relatively lightweight objects and the requirement above is that the property values on the existing aspect are set on the returned aspect, not that the existing aspect is preserved.
What if we just provided a mechanism for copying properties from an existing aspect onto the new version?
var cacheAspect = new ClassCacheAspect();
var existingCacheAspect =
Element.Classes.Person.Aspects.FirstOfTypeOrDefault<IClassCacheAspect>();
if (existingCacheAspect != null)
{
  cacheAspect.OrderOfMagnitude = existingCacheAspect.OrderOfMagnitude;
  cacheAspect.Capacity = existingCacheAspect.Capacity;
// Set all other properties
}
// Set custom values
cacheAspect.OrderOfMagnitude = 7;
cacheAspect.Capacity = 1000;
This code does exactly what we want and doesn't require any setters on the interface properties. Let's pack this away into the API we defined above. The extension method is:
public static ClassCacheAspect SetCacheAspectValues(
this IMetaClass metaClass,
Action<ClassCacheAspect> setValues)
{
var result = new ClassCacheAspect();
var existingCacheAspect =
metaClass.Aspects.FirstOfTypeOrDefault<IClassCacheAspect>();
if (existingCacheAspect != null)
{
    result.OrderOfMagnitude = existingCacheAspect.OrderOfMagnitude;
    result.Capacity = existingCacheAspect.Capacity;
// Set all other properties
}
setValues(result);
return result;
}
So that takes care of the boilerplate for the IClassCacheAspect. It hard-codes the implementation to ClassCacheAspect, but let's see how big a restriction that is once we've generalized below.
We want to see if we can do anything about generalizing SetCacheAspectValues() to work for other aspects.
Let's first extract the main body of logic and generalize the aspects.
public static TConcrete SetAspectValues<TService, TConcrete>(
this IMetaClass metaClass,
Action<TConcrete, TService> copyValues,
Action<TConcrete> setValues
)
where TConcrete : TService, new()
where TService : IMetaAspect
{
var result = new TConcrete();
var existingAspect = metaClass.Aspects.FirstOfTypeOrDefault<TService>();
if (existingAspect != null)
{
copyValues(result, existingAspect);
}
setValues(result);
return result;
}
This isn't bad, but we've required that the TConcrete parameter implement a default constructor. Instead, we could require an additional parameter for creating the new aspect.
public static TConcrete SetAspectValues<TService, TConcrete>(
this IMetaClass metaClass,
Func<TConcrete> createAspect,
Action<TConcrete, TService> copyValues,
Action<TConcrete> setValues
)
where TConcrete : TService
where TService : IMetaAspect
{
var result = createAspect();
var existingAspect = metaClass.Aspects.FirstOfTypeOrDefault<TService>();
if (existingAspect != null)
{
copyValues(result, existingAspect);
}
setValues(result);
return result;
}
Wait, wait, wait. We not only don't need the new generic constraint, we also don't need the createAspect lambda parameter, do we? Can't we just pass in the object instead of passing in a lambda to create the object and then calling it immediately?
public static TConcrete SetAspectValues<TService, TConcrete>(
this IMetaClass metaClass,
TConcrete aspect,
Action<TConcrete, TService> copyValues,
Action<TConcrete> setValues
)
where TConcrete : TService
where TService : IMetaAspect
{
var existingAspect = metaClass.Aspects.FirstOfTypeOrDefault<TService>();
if (existingAspect != null)
{
copyValues(aspect, existingAspect);
}
setValues(aspect);
return aspect;
}
That's a bit more logical and intuitive, I think.
We can now redefine our original method in terms of this one:
public static ClassCacheAspect SetCacheAspectValues(
  this IMetaClass metaClass,
  Action<ClassCacheAspect> setValues)
{
  return metaClass.SetAspectValues<IClassCacheAspect, ClassCacheAspect>(
    new ClassCacheAspect(),
    (aspect, existingAspect) =>
    {
      aspect.OrderOfMagnitude = existingAspect.OrderOfMagnitude;
      aspect.Capacity = existingAspect.Capacity;
      // Set all other properties
    },
    setValues
  );
}
Can we somehow generalize the copying behavior? We could make a wrapper that expects an interface on the TService that would allow us to call CopyFrom(existingAspect).
public static TConcrete SetAspectValues<TService, TConcrete>(
  this IMetaClass metaClass,
  TConcrete aspect,
  Action<TConcrete> setValues
)
  where TConcrete : TService, ICopyTarget
  where TService : IMetaAspect
{
  return metaClass.SetAspectValues<TService, TConcrete>(
    aspect,
    (target, existing) => target.CopyFrom(existing),
    setValues
  );
}
What does the ICopyTarget interface look like?
public interface ICopyTarget
{
void CopyFrom(object other);
}
This is going to lead to type-casting code at the start of every implementation to make sure that the other object is the right type.
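For example, each implementation would start with something like the following sketch (the guard clause is illustrative):

public void CopyFrom(object other)
{
  var otherAspect = other as IClassCacheAspect;
  if (otherAspect == null)
  {
    // Reject any object that doesn't implement the expected interface.
    throw new ArgumentException("Expected an IClassCacheAspect.", "other");
  }

  OrderOfMagnitude = otherAspect.OrderOfMagnitude;
  Capacity = otherAspect.Capacity;
  // Set all other properties
}

We can avoid that casting boilerplate by using a generic type parameter instead.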
public interface ICopyTarget<T>
{
void CopyFrom(T other);
}
That's better. How would we use it? Here's the definition for ClassCacheAspect:
public class ClassCacheAspect : IClassCacheAspect, ICopyTarget<IClassCacheAspect>
{
public void CopyFrom(IClassCacheAspect otherAspect)
{
OrderOfMagnitude = otherAspect.OrderOfMagnitude;
Capacity = otherAspect.Capacity;
// Set all other properties
}
}
Since the final version of ICopyTarget has a generic type parameter, we need to adjust the extension method. But that's not a problem, because we already have the required generic type parameter in the outer method.
public static TConcrete SetAspectValues<TService, TConcrete>(
  this IMetaClass metaClass,
  TConcrete aspect,
  Action<TConcrete> setValues
)
  where TConcrete : TService, ICopyTarget<TService>
  where TService : IMetaAspect
{
  return metaClass.SetAspectValues<TService, TConcrete>(
    aspect,
    (target, existing) => target.CopyFrom(existing),
    setValues
  );
}
Assuming that the implementation of ClassCacheAspect implements ICopyTarget as shown above, we can rewrite the cache-specific extension method to use the new extension method for ICopyTargets.
public static ClassCacheAspect SetCacheAspectValues(
  this IMetaClass metaClass,
  Action<ClassCacheAspect> setValues)
{
  return metaClass.SetAspectValues<IClassCacheAspect, ClassCacheAspect>(
    new ClassCacheAspect(),
    setValues
  );
}
This is an extension method, so any caller that wants to use its own IClassCacheAspect implementation could just copy/paste this one line of code and use its own aspect.
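For example, a caller with a hypothetical MyClassCacheAspect -- an implementation of both IClassCacheAspect and ICopyTarget<IClassCacheAspect> -- could define:

public static MyClassCacheAspect SetMyCacheAspectValues(
  this IMetaClass metaClass,
  Action<MyClassCacheAspect> setValues)
{
  return metaClass.SetAspectValues<IClassCacheAspect, MyClassCacheAspect>(
    new MyClassCacheAspect(),
    setValues
  );
}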
This is actually pretty neat and clean.
You would think that would be axiomatic. You'd be surprised.↩
We've been doing more internal training lately and one topic that we've started to tackle is design for architecture/APIs. Even if you're not officially a software architect -- designing and building entire systems from scratch -- every developer designs code, on some level.
[A]lways [B]e [D]esigning
There are broad guidelines about how to format and style code, about how many lines to put in a method, about how many parameters to use, and so on. We strive for Clean Code(tm).
But the most important part of code is to think about how you're writing it and what you're building. You shouldn't write a single line without thinking of the myriad ways in which it must fit into existing code and the established patterns and practices.
We've written about this before, in the two-part series called "Questions to consider when designing APIs" (Part I and Part II). Those two articles comprise a long list of aspects of a design to consider.
First make a good design, then compromise to fit project constraints.
Your project defines the constraints under which you can design. That is, we should still have our designer caps on, but the options available are much more strictly limited.
But, frustrating as that might be, it doesn't mean you should stop thinking. A good designer figures out what would be optimal, then adjusts the solution to fit the constraints. Otherwise, you'll forget what you were compromising from -- and your design skills either erode or never get better.
We've been calling this concept ABD -- Always Be Designing.1 Let's take a closer, concrete look, using a recent issue in the schema migration for Quino. Hopefully, this example illustrates how even the tiniest detail is important.2
We detected the problem when the schema migration generated an invalid SQL statement.
ALTER TABLE "punchclock__timeentry" ALTER COLUMN "personid" SET DEFAULT ;
As you can see, the default value is missing. It seems that there are situations where the code that generates this SQL is unable to correctly determine that a default value could not be calculated.
The code that calculates the default value is below.
result = Builder.GetExpressionPayload(
null,
CommandFormatHints.DefaultValue,
new ExpressionContext(prop),
prop.DefaultValueGenerator
);
To translate: there is a Builder that produces a payload. We're using that builder to get the payload (SQL, in this case) that corresponds to the DefaultValueGenerator expression for a given property, prop.
This method is an extension method of the IDataCommandBuilder, reproduced below in full, with additional line-breaks for formatting:
public static string GetExpressionPayload<TCommand>(
this IDataCommandBuilder<TCommand> builder,
[CanBeNull] TCommand command,
CommandFormatHints hints,
IExpressionContext context,
params IExpression[] expressions)
{
if (builder == null) { throw new ArgumentNullException("builder"); }
if (context == null) { throw new ArgumentNullException("context"); }
if (expressions == null) { throw new ArgumentNullException("expressions"); }
return builder.GetExpressionPayload(
command,
hints,
context,
expressions.Select(
e => new ExecutableQueryItem<IExecutableExpression>(new ExecutableExpression(e))
)
);
}
This method does no more than package each item in the expressions parameter in an ExecutableQueryItem and call the interface method.
The problem isn't immediately obvious. It stems from the fact that each ExecutableQueryItem can be marked as Handled. The extension method ignores this feature and always returns a result. The caller is unaware that the result may correspond to an only partially handled expression.
Our first instinct is, naturally, to try to figure out how we can fix the problem.3 In the code above, we could keep a reference to the executable items and then check if any of them were unhandled, like so:
var executableItems = expressions.Select(
  e => new ExecutableQueryItem<IExecutableExpression>(new ExecutableExpression(e))
).ToList(); // Materialize so we check the same items the builder processed

var result = builder.GetExpressionPayload(command, hints, context, executableItems);

if (executableItems.Unhandled().Any())
{
  // Now what?
}

return result;
We can detect if at least one of the input expressions could not be mapped to SQL. But we don't know what to do with that information. Should we throw an exception? That doesn't help: all callers assume that a payload can always be produced.4 Should we return null? What can we return to indicate that the input expressions could not be mapped? Here we have the same problem as with throwing an exception: all callers assume that the result can be mapped.

So there's no quick fix. We have to change an API. We have to design.
As with most bugs, the challenge lies not in knowing how to fix the bug, but in how to fix the underlying design problem that led to the bug. The problem is actually not in the extension method, but in the method signature of the interface method.
Instead of a single result, there are actually two results for this method call: whether a payload could be calculated at all and, if so, the payload itself. Instead of a Get method, this is a classic TryGet method.
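The shape is the same as that of the familiar TryParse methods in the .NET framework:

int value;
if (int.TryParse("42", out value))
{
  // value is only meaningful if TryParse() returned true
}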
If this code is already in production, then you have to figure out how to introduce the bug fix without breaking existing code. If you already have consumers of your API, you can't just change the signature and cause a compile error when they upgrade. You have to decorate the existing method with [Obsolete] and make a new interface method. So we don't change the existing method and instead add the method TryGetExpressionPayload() to IDataCommandBuilder.
Now, let's figure out what the parameters are going to be.
The method called by the extension method above has a slightly different signature.5
string GetExpressionPayload(
[CanBeNull] TCommand command,
CommandFormatHints hints,
[NotNull] IExpressionContext context,
[NotNull] IEnumerable<ExecutableQueryItem<IExecutableExpression>> expressions
);
That last parameter is a bit of a bear. What does it even mean? The signature of the extension method deals with simple IExpression objects -- I know what those are. But what are ExecutableQueryItems and IExecutableExpressions?
As an author and maintainer of the data driver, I know that these objects are part of the internal representation of a query as it is processed. But as a caller of this method, I'm almost never going to have a list of these objects, am I?
Let's find out.
Me: Hey, ReSharper, how many callers of that method are there in the entire Quino source?

ReSharper: Just one, Dave.6
So, we defined an API with a signature that's so hairy no-one calls it except through an extension method that makes the signature more palatable. And it introduces a bug. Lovely.
We've now figured out that our new method should accept a sequence of IExpression objects instead of ExecutableQueryItem objects.
How's the signature looking so far?
bool TryGetExpressionPayload(
[CanBeNull] TCommand command,
CommandFormatHints hints,
[NotNull] IExpressionContext context,
[NotNull] IEnumerable<IExpression> expressions,
out string payload
);
Not quite. There are two things that are still wrong with this signature, both important.
One problem is that the rest of the IDataCommandBuilder<TCommand> deals with a generic payload type, while this method only works for builders where the target representation is a string. The Mongo driver, for example, uses MongoStorePayload and MongoRetrievePayload objects instead of strings and throws a NotSupportedException for this API.
That's not very elegant, but the Mongo driver was forced into that corner by the signature. Can we do better? The API would currently require Mongo to always return false, because our Mongo driver doesn't know how to map anything to a string. But it could map to one of the aforementioned object representations.
If we change the out parameter type from a string to an object, then any driver, regardless of payload representation, has at least the possibility of implementing this API correctly.
Another problem is that the order of parameters does not conform to the code style for Encodo.

- Passing null as the first parameter looks strange. The command can be null, so it should move after the two non-nullable parameters. If we move it all the way to the end, we can even make it optional.
- The hints should be third.
- The primary input is the expressions, not the context. The first parameter should be the target of the method; the rest of the parameters provide context for that input.
- The expressions should not be passed as params IExpression[]. Using params allows a caller to provide zero or more expressions, but it's only allowed on the terminal parameter. Instead, we'll accept an IEnumerable<IExpression>, which is more standard for the Quino library anyway.

The final method signature is below.
bool TryGetExpressionPayload(
[NotNull] IEnumerable<IExpression> expressions,
[NotNull] IExpressionContext context,
CommandFormatHints hints,
out object payload,
[CanBeNull] TCommand command = default(TCommand)
);
The schema migration called the original API like this:
result = Builder.GetExpressionPayload(
null,
CommandFormatHints.DefaultValue,
new ExpressionContext(prop),
prop.DefaultValueGenerator
);
return true;
The call with the new API -- and with the bug fixed -- is shown below. The only non-functional addition is that we have to call ToSequence() on the first parameter. Happily, though, we've fixed the bug and only include a default value in the field definition if one can actually be calculated.
object payload;
if (Builder.TryGetExpressionPayload(
prop.DefaultValueGenerator.ToSequence(),
new ExpressionContext(prop),
CommandFormatHints.DefaultValue,
out payload)
)
{
result = payload as string ?? payload.ToString();
return true;
}
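ToSequence() is presumably just a small helper along the lines of the following sketch (the actual Quino implementation may differ):

// Wraps a single item in an IEnumerable<T> so that it can be passed
// where a sequence is expected.
public static IEnumerable<T> ToSequence<T>(this T item)
{
  yield return item;
}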
A good rule of thumb is that if you find yourself explaining something in detail, it might still be too complicated. In that light, the call to ToSequence() is a little distracting.7 It would be nice to be able to map a single expression without having to pack it into a sequence.
So we have one more design decision to make: where do we add that method? Directly to the interface, right? But the method for a single expression can easily be expressed in terms of the method we already have (as we saw above). It would be a shame if every implementor of the interface were forced to produce this boilerplate.
Since we're using C#, we can instead extend the interface with a static extension method, as shown below (again, with more line breaks for this article):
public static bool TryGetExpressionPayload<TCommand>(
[NotNull] this IDataCommandBuilder<TCommand> builder, // Extend the builder
[NotNull] IExpression expression,
[NotNull] IExpressionContext context,
CommandFormatHints hints,
out object payload,
[CanBeNull] TCommand command = default(TCommand)
)
{
return builder.TryGetExpressionPayload(
expression.ToSequence(),
context,
hints,
out payload,
command
);
}
We not only avoided cluttering the interface with another method, but now a caller with a single expression doesn't have to create a sequence for it8, as shown in the final version of the call below.
object payload;
if (Builder.TryGetExpressionPayload(
prop.DefaultValueGenerator,
new ExpressionContext(prop),
CommandFormatHints.DefaultValue,
out payload)
)
{
result = payload as string ?? payload.ToString();
return true;
}
We saw in this post how we always have our designer/architect cap on, even when only fixing bugs. We took a look at a quick-fix and then backed out and realized that we were designing a new solution. Then we covered, in nigh-excruciating detail, our thought process as we came up with a new solution.
Many thanks to Dani for the original design and Sebastian for the review!
This is a bit of a riff on ABC -- Always Be Closing -- as popularized by Alec Baldwin in the movie Glengarry Glen Ross.↩
Also, understand that it took much longer to write this blog post and itemize each individual step of how we thought about the issue. In reality, we took only a couple of minutes to work through this chain of reasoning and come up with the solution we wanted. It was only after we'd finished designing that I realized that this was a good example of ABD.↩
Actually, our first instinct is to make sure that there is a failing test for this bug. But, this article deals with how to analyze problems and design fixes, not how to make sure that the code you write is tested. That's super-important, too, though, just so you know. Essential, even.↩
Even though C# doesn't include the exceptions thrown in the signature of a method, as Java does. Where the Java version is fraught with issues, see the "Recoverable Errors: Type-Directed Exceptions" chapter of Midori: The Error Model by Joe Duffy for a really nice proposal/implementation of a language feature that includes expected exceptions in the signature of a method.↩
Which is why we defined the extension method in the first place.↩
I'm fully aware that my name isn't Dave. It's just what ReSharper calls me. Old-school reference.↩
This was pointed out, by the way, by a reviewer of this blog post and escaped the notice of both designers and the code-reviewer. API design is neither easy nor is it done on the first try. It's only finished after multiple developers have tried it out. Then, you'll probably be able to live with it.↩
Most developers would have used new [] { expression }, which I think is kind of ugly.↩
The article .NET Core, a call to action by Mark Rendle exhorts everyone to "go go go".
I say, "pump the brakes."
Mark says, "The next wave of work must be undertaken by the wider .NET community, both inside and outside Microsoft."
No. The next wave of work must be undertaken by the team building the product. This product is not even Beta yet. They have called the last two releases RC, but they aren't: the API is still changing quite dramatically. For example, the article Announcing .NET Core RC2 and .NET Core SDK Preview 11 lists all sorts of changes and the diff of APIs between RC1 and RC2 is gigantic -- the original article states that "[w]e added over a 1000 new APIs in .NET Core RC2".
What?!?!
That is a huge API-surface change between release candidates. That's why I think these designations are largely incorrect. Maybe they just mean, "hey, if y'all can actually work with this puny footprint, then we'll call it a final release. If not, we'll just add a bunch more stuff until y'all can compile again." Then, yeah, I guess each release is a "candidate".
But then they should just release 1.0 because this whole "RC" business is confusing. What they're really releasing are "alpha" builds. The quality is high, maybe even production-quality, but they're still massive changes vis-a-vis previous builds.
That doesn't sound like "RC" to me. As an example, look at the project-file format, project.json.
Mark also noted that there are "no project.json files in the repository" for the OData project that comes from Microsoft. That's not too surprising, considering the team behind .NET Core just backed off of the project.json format considerably, as concisely documented in The Future of project.json in ASP.NET Core by Shawn Wildermuth. The executive summary is that they've decided "to phase out project.json in deference to MSBuild". Anyone who's based any of their projects on the already-available-in-VS-2015 project templates that use that format will have to convert them to whatever the final format is.
Wildermuth also wrote that "Microsoft has decided after the RTM of the ASP.NET Core framework to phase out project.json and use MSBuild for build data. (Emphasis added.)" I was confused (again) but am pretty sure that he's wrong about RTM because, just a couple of days later, MS published an article Announcing ASP.NET Core RC2 -- and I'm pretty sure that RCs come before RTM.
At Encodo, we took a shot at porting the base assembly of Quino to .NET Core. It has dependencies only on framework-provided assemblies in the GAC, so that eliminated any issues with third-party support, but it does provide helper methods for AppDomains and Reflection, which made a port to .NET Core nontrivial.
Here are a few things we learned that made the port take much longer than we expected.

- Multi-targeting in project.json works with the command-line tools. Create the project file and compile with dotnet.
- Multiple targets in project.json do not work in Visual Studio; you have to choose a single target. Otherwise, the same project that just built on the command line barely loads.
- The same goes for any #IFDEFs you use for platform-specific code. So, even if you've gotten everything compiling on the command-line, be prepared to do it all over again differently if you actually want it to work in VS2015.
- Many of the APIs we needed for Encodo.Core are suddenly back in RC2. That means that if we'd waited, we'd have saved a lot of time and ended up in the same place.
- In the end, we did get Encodo.Core compiling under .NET Core.

With so much in flux -- APIs and project format -- we're not ready to invest more time and money in helping MS figure out what the .NET Core target needs. We're going to sit it out until there's an actual RTM. Even at that point, if we make a move, we'll try a small slice of Quino again and see how long it takes. If it's still painful, then we'll wait until the first service pack (as is our usual policy with development tools and libraries).
I understand Mark's argument that "the nature of a package-based ecosystem such as NuGet can mean that Project Z can't be updated until Project Y releases .NET Core packages, and Project Y may be waiting on Project X, and so on". But I just don't, as he says, "trust that what we have now in RC2 is going to remain stable in API terms", so I wouldn't recommend "that OSS project maintainers" do so, either. It's just not ready yet.
If you jump on the .NET Core train now, be prepared to shovel coal. Oh, and you might just have to walk to the next station, too. At noon. Carrying your overseas trunk on your back. Once you get there, though, you might be just in time for the 1.0.1 or 1.0.2 express arriving at the station. You can get on -- you might not even have to buy a new ticket -- and you'll arrive at the same time as everyone else.
The Mark Rendle article states boldly that "Yesterday we finally got our hands on the first Release Candidate of .NET Core [...]" but I don't know what he's talking about. The project just released RC2 and there are even RC3 packages available in the channel already -- but these are totally useless and didn't work at all in our projects.↩
Before taking a look at the roadmap, let's quickly recap how far we've come. An overview of the release schedule shows a steady accretion of features over the years, as driven by customer or project needs.
The list below includes more detail on the releases highlighted in the graphic.1
We took 1.5 years to get to v1. The initial major version was to signify the first time that Quino-based code went into external production.2
After that, it took 6.5 years to get to v2. Although we added several large products that use Quino, we were always able to extend rather than significantly change anything in the core. The second major version was to signify sweeping changes made to address technical debt, to modernize certain components and to prepare for changes coming to the .NET platform.
It took just 5 months to get to v3 for two reasons:
So that's where we've been. Where are we headed?
As you can see above, Quino is a very mature product that satisfies the needs of a wide array of software on all tiers. What more is there to add?
Quino's design has always been driven by a combination of customer requirements and what we anticipated would be customer requirements.
We're currently working on the following features.
Modeling improvements
This work builds on the API changes made to the MetaBuilder in v3. We're creating a more fluent, modern and extensible API for building metadata. We hope to be able to add these changes incrementally without introducing any breaking changes.6
WPF / VSG
A natural use of the rich metadata in Quino is to generate user interfaces for business entities without having to hand-tool each form. From the POC onward, Quino has included support for generating UIs for .NET Winforms.
Winforms has been replaced on the Windows desktop with WPF and UWP. We've gotten quite far with being able to generate WPF applications from Quino metadata. The screenshots below come from a pre-alpha version of the Sandbox application included in the Quino solution.
You may have noticed the lovely style of the new UI.7 We're using a VSG designed for us by Ergosign, for whom we've done some implementation work in the past.
.NET Core
If you've been following Microsoft's announcements, things are moving quickly in the .NET world. There are whole new platforms available, if you target your software to run on them. We're investigating the next target platforms for Quino. Currently that means getting the core of Quino -- Quino.Meta and its dependencies -- to compile under .NET Core.
As you can see in the screenshot, we've got one of the toughest assemblies to compile -- Encodo.Core. After that, we'll try running some tests under Linux or OS X. The long-term goal is to be able to run Quino-based application and web servers on non-Windows -- and, most importantly, non-IIS -- platforms.8
These changes will almost certainly cause builds using previous versions to break. Look for any additional platform support in an upcoming major-version release.
There were, of course, more minor and patch releases throughout, but those didn't introduce any major new functionality.↩
Punchclock, our time-entry and invoicing software -- and Quino "dogfood (When a developer uses their own code for their own daily needs. Being a user as well as a developer creates the user empathy that is the hallmark of good software.)" product -- had been in use internally at Encodo earlier than that.↩
E.g. splitting the monolithic Encodo and Quino assemblies into dozens of new, smaller and much more focused assemblies. Reorganizing configuration around the IOC and rewriting application startup for more than just desktop applications was another sweeping change.↩
One of those breaking changes was to the MetaBuilder, which started off as a helper class for assembling application metadata, but became a monolithic and unavoidable dependency, even in v2. In v3, we made the breaking changes to remove this component from its central role and will continue to replace its functionality with components that are more targeted, flexible and customizable.↩
In the years between v1 and v2, we used the minor-version number to indicate when breaking changes could be made. We also didn't try as hard to avoid breaking changes by gracefully deprecating code. The new approach tries very hard to avoid breaking changes but accepts the consequences when it's deemed necessary by the team.↩
That is, when users upgrade to a version with the newer APIs, they will get obsolete warnings but their existing code will continue to build and run, as before the upgrade. In this way, customers can smoothly upgrade without breaking any builds.↩
You may also have noticed that the "Sandbox Dialog View" includes a little tag in it for the "XAML Spy", a tool that we use for WPF development. Just so you know the screenshots aren't faked... :-)↩
As with the WPF interface, we're likely to dogfood all of these technologies with Punchclock, our time-tracking and invoicing system written with Quino. The application server and web components that run on Windows could be migrated to run on one of our many Linux machines instead.↩
The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.
- IDataSession and IApplication now directly implement IServiceRequestHandler, and helper methods that used to extend IApplication now extend this interface instead, so calls like GetModel() can now be executed against an IApplication or an IDataSession.
- Many methods have been moved out of the IServiceRequestHandler interface to extension methods declared in the Encodo.IOC namespace. This move will require applications to update their usings; ReSharper will automatically find the correct namespace and apply it for you (see the example after this list).
- ApplicationExtensions.GetInstance() has been replaced with a direct implementation of IServiceRequestHandler by IApplication.
- MetaBuilder.Include() has been replaced with Dependencies.Include().
- With the newer overload of CreateModel(), you can no longer call CreateMainModule() because the main module is set up automatically. Although the call is marked as obsolete, it can only be combined with the older overload of CreateModel(); using it with the newer overload will cause a runtime error as the main module is added to the model twice.
- The path-building methods on the MetaBuilder have been replaced by AddPath(). To rewrite a path, use the following style:

Builder.AddPath(
Elements.Classes.A.FromOne("Id"),
Elements.Classes.B.ToMany("FileId"),
path => path.SetMetaId(new Guid("...")).SetDeleteRule(MetaPathRule.Cascade),
idx => idx.SetMetaId(new Guid("..."))
);
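For example, upgraded application code that calls the moved extension methods just needs the new using; in the hypothetical snippet below, application and session stand for existing IApplication and IDataSession instances:

using Encodo.IOC; // the extension methods now live in this namespace

// GetModel() can now be executed against either type.
var applicationModel = application.GetModel();
var sessionModel = session.GetModel();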
Encodo published its first C# Handbook to its web site in 2008. At the time, we also published it to several other standard places and got some good, positive feedback. Over the next year, I made some more changes and published new versions. The latest version, 1.5.2, is still available from Encodo's web site. Since then, I've made a few extra notes and corrected a few errors, but never published an official version again.
This is not because Encodo hasn't improved or modernized its coding guidelines, but because of several issues, listed below.
- Some of the advice is outdated or just plain wrong (e.g. the var advice)

To address these issues and to accommodate the new requirements, here's what we're going to do:
- Convert the entire document from Word to Markdown and put it in a Git repository
- Separate the chapters into individual files and keep them shorter and more focused on a single topic
- Separate all of the advice and rules into the following piles:
These are the requirements and goals for a new version of the C# handbook.
The immediate next steps are:
I hope to have an initial, modern version ready within the next month or so.
On Wednesday, Encodo had its first networking event of the year. Our very own Sebastian Greulach presented Code Review Best Practices. A bunch of our friends and colleagues from the area showed up for a lively discussion that, together with the presentation, lasted over 90 minutes.
We heard from people working with remote teams -- off- and near-shored -- as well as people working locally in both small and large teams and for small to large companies. We discussed various review styles, from formal to informal to nonexistent as well as the differences in managing and reviewing code for projects versus products. Naturally, we also covered tool support and where automation makes sense and where face-to-face human interaction is still better.
The discussion continued over a nice meal prepared on our outdoor grill. We even had a lot more vegetables this time! Thanks to lovely weather, we were able to spend some time outside and Pascal demonstrated his L337 drone-flying skills -- but even he couldn't save it from a rain gutter when a propeller came off mid-flight.
Thanks to everyone who helped make it happen and thanks to everyone who showed up!
Unwritten code requires no maintenance and introduces no cognitive load.
As I was working on another part of Quino the other day, I noticed that the oft-discussed registration and configuration methods1 were a bit clunkier than I'd have liked. To wit, the methods that I tended to use together for configuration had different return types and didn't allow me to freely mix calls fluently.
Register and Use

The return type for Register methods is IServiceRegistrationHandler and the return type for Use methods is IApplication (a descendant). The Register* methods come from the IOC interfaces, while the application builds on top of this infrastructure with higher-level Use* configuration methods.
This forces developers to write code in the following way to create and configure an application.
public IApplication CreateApplication()
{
  var result =
    new Application()
      .UseStandard()
      .UseOtherComponent();

  result
    .RegisterSingle<ICodeHandler, CustomCodeHandler>()
    .Register<ICodePacket, FSharpCodePacket>();

  return result;
}
That doesn't look too bad, though, does it? It doesn't seem like it would cramp anyone's style too much, right? Aren't we being a bit nitpicky here?
That's exactly why Quino 2.0 was released with this API. However, here we are, months later, and I've written a lot more configuration code and it's really starting to chafe that I have to declare a local variable and sort my method invocations.
So I think it's worth addressing. Anything that disturbs me as the writer of the framework -- that gets in my way or makes me write more code than I'd like -- is going to disturb the users of the framework as well.
Whether they're aware of it or not.
In the best of worlds, users will complain about your crappy API and make you change it. In the world we're in, though, they will cheerfully and unquestioningly copy/paste the hell out of whatever examples of usage they find and cement your crappy API into their products forever.
Do not underestimate how quickly calls to your inconvenient API will proliferate. In my experience, programmers tend to just add a workaround for whatever annoys them instead of asking you to fix the problem at its root. This is a shame. I'd rather they just complained vociferously that the API is crap rather than using it and making me support it side-by-side with a better version for what usually feels like an eternity.
Maybe it's because I very often have control over framework code that I will just not deal with bad patterns or repetitive code. Also, I've become very accustomed to having a wall of tests at my beck and call when I bound off on another initially risky but in-the-end rewarding refactoring.
If you're not used to this level of control, then you just deal with awkward APIs or you build a workaround as a band-aid for the symptom rather than going after the root cause.
So while the code above doesn't trigger warning bells for most, once I'd written it a dozen times, my fingers were already itching to add [Obsolete] on something.
I am well-aware that this is not a simple or cost-free endeavor. However, I happen to know that there aren't that many users of this API yet, so the damage can be controlled.
If I wait, then replacing this API with something better later will take a bunch of versions, obsolete warnings, documentation and re-training until the old API is finally eradicated. It's much better to use your own APIs -- if you can -- before releasing them into the wild.
Another more subtle reason why the API above poses a problem is that it's more difficult to discover, to learn. The difference in return types will feel arbitrary to product developers. Code-completion is less helpful than it could be.
It would be much nicer if we could offer an API that helped users discover it at their own pace instead of making them step back and learn new concepts. Ideally, developers of Quino-based applications shouldn't have to know the subtle difference between the IOC and the application.
Something like the example below would be nice.
return
new Application()
.UseStandard()
.RegisterSingle<ICodeHandler, CustomCodeHandler>()
.UseOtherComponent()
.Register<ICodePacket, FSharpCodePacket>();
Right? Not a gigantic change, but if you can imagine how a user would write that code, it's probably a lot easier and more fluid than writing the first example. In the second example, they would just keep asking code-completion for the next configuration method and it would just be there.
In order to do this, I'd already created an issue in our tracker to parameterize the IServiceRegistrationHandler type in order to be able to pass back the proper return type from registration methods.
I'll show below what I mean, but I took a crack at it recently because I'd just watched the very interesting video Fun with Generics by Benjamin Hodgson, which starts off with a technique identical to the one I'd planned to use -- and that I'd already used successfully for the IQueryCondition interface.2
Let's redefine the IServiceRegistrationHandler interface as shown below,
public interface IServiceRegistrationHandler<TSelf>
{
TSelf Register<TService, TImplementation>()
where TService : class
where TImplementation : class, TService;
// ...
}
Can you see how we pass the type we'd like to return as a generic type parameter? Then the descendants would be defined as,
public interface IApplication : IServiceRegistrationHandler<IApplication>
{
}
In the video, Hodgson notes that the technique has a name in formal notation, "F-bounded quantification" but that a snappier name comes from the C++ world, "curiously recurring template pattern". I've often called it a self-referencing generic parameter, which seems to be a popular search term as well.
This is only the first step, though. The remaining work is to update all usages of the formerly non-parameterized interface IServiceRegistrationHandler. This means that a lot of extension methods like the one below
public static IServiceRegistrationHandler RegisterCoreServices(
[NotNull] this IServiceRegistrationHandler handler)
{
}
will now look like this:
public static TSelf RegisterCoreServices<TSelf>(
[NotNull] this IServiceRegistrationHandler<TSelf> handler)
where TSelf : IServiceRegistrationHandler<TSelf>
{
}
This makes defining such methods more complex (again).3 In my attempt at implementing this, Visual Studio indicated 170 errors remaining after I'd already updated a couple of extension methods.
Instead of continuing down this path, we might just want to follow the pattern we established in a few other places: defining both a Register method, which uses the IServiceRegistrationHandler, and a Use method, which uses the IApplication.
Here's an example of the corresponding "Use" method:
public static IApplication UseCoreServices(
[NotNull] this IApplication application)
{
if (application == null) { throw new ArgumentNullException("application"); }
application
.RegisterCoreServices()
.RegisterSingle(application.GetServices())
.RegisterSingle(application);
return application;
}
Though the technique involves a bit more boilerplate, it's easy to write and understand (and reason about) these methods. As mentioned in the initial sentence of this article, the cognitive load is lower than the technique with generic parameters.
The only place where it would be nice to have an IApplication return type is from the Register* methods defined on the IServiceRegistrationHandler itself.
We already decided that self-referential generic constraints would be too messy. Instead, we could define some extension methods that return the correct type. We can't name the method the same as the one that already exists on the interface4, though, so let's prepend the word Use, as shown below:
public static IApplication UseRegister<TService, TImplementation>(
  [NotNull] this IApplication application)
  where TService : class
  where TImplementation : class, TService
{
  if (application == null) { throw new ArgumentNullException("application"); }

  application.Register<TService, TImplementation>();

  return application;
}
That's actually pretty consistent with the other configuration methods. Let's take it for a spin and see how it feels. Now that we have an alternative way of registering types fluently without "downgrading" the result type from IApplication to IServiceRegistrationHandler, we can rewrite the example from above as:
return
new Application()
.UseStandard()
.UseRegisterSingle<ICodeHandler, CustomCodeHandler>()
.UseOtherComponent()
.UseRegister<ICodePacket, FSharpCodePacket>();
Instead of increasing cognitive load by trying to push the C# type system to places it's not ready to go (yet), we use tiny methods to tweak the API and make it easier for users of our framework to write code correctly.5
Perhaps an example is in order:
interface IA
{
IA RegisterSingle<TService, TConcrete>();
}
interface IB : IA { }
public static class BExtensions
{
  public static IB RegisterSingle<TService, TConcrete>(this IB b) { return b; }

  public static IB UseStuff(this IB b) { return b; }
}
Let's try to call the method from BExtensions:
public void Configure(IB b)
{
b.RegisterSingle<IFoo, Foo>().UseStuff();
}
The call to UseStuff cannot be resolved because the return type of the matched RegisterSingle method is the IA of the interface method, not the IB of the extension method. There is a solution, but you're not going to like it (I know I don't).
public void Configure(IB b)
{
BExtensions.RegisterSingle<IFoo, Foo>(b).UseStuff();
}
You have to specify the extension-method class's name explicitly, which engenders awkward fluent chaining -- you'll have to nest these calls if you have more than one -- but the desired method-resolution was obtained.
But at what cost? The horror...the horror.
See Encodo's configuration library for Quino Part 1, Part 2 and Part 3 as well as API Design: Running an Application Part 1 and Part 2 and, finally, Starting up an application, in detail.↩
The video goes into quite a bit of depth on using generics to extend the type system in the direction of dependent types. Spoiler alert: he doesn't make it because the C# type system can't be abused in this way, but the journey is informative.↩
As detailed in the links in the first footnote, I'd just gotten rid of this kind of generic constraint in the configuration calls because it was so ugly and offered little benefit.↩
If you define an extension method for a descendant type that has the same name as a method of an ancestor interface, the method-resolution algorithm for C# will never use it. Why? Because the directly defined method matches the name and all the types and is a "stronger" match than an extension method.↩
The final example does not run against Quino 2.2, but will work in an upcoming version of Quino, probably 2.3 or 2.4.↩