v1.8.6: Bug fix release for v1.8.5 (no new features)

The summary below describes major new features, items of note and breaking changes. The full list of issues is also available for those with access to the Encodo issue tracker.


Breaking changes

No known breaking changes

Programming in the modern age

This article originally appeared on earthli News and has been cross-posted here.

In order to program in 2013, it is important not to waste any time honing your skills with outdated tools and work-flows. What are the essential pieces of software for developing software in 2013?


A runtime is a given for all but the most esoteric of programming exercises. Without something to execute your code, there is almost no point in writing it.


Programming without an integrated debugger can be very time-consuming, error-prone and will quite frankly suck the fun right out of the whole endeavor. And, by "debugger" I mean a source-level single-step debugger with call-stack and variable/object/structure inspection as well as expression evaluation. Poring through logs and inserting print statements is not a viable long-term or even medium-term solution. You shouldn't be writing in a language without one of these unless you absolutely can't avoid it (NAnt build scripts come to mind).


A syntax/semantics checker of some sort integrated into the editor ensures a tighter feedback/error-finding loop and saves time, energy and frustration. I was deliberately cagey with the "checker" because I understand that some languages, like Javascript1, do not have a compiled form. Duck-typed languages like Python or Ruby also limit static checking but anything is better than nothing.


A source-control system is essential in order to track changes, test ideas and manage releases. A lot of time can be wasted -- and effort lost -- without good source control. Great source control decreases timidity, encourages experimentation and allows for interruptible work-flows. I will argue below that private branches and history rewriting are also essential.

Even for the smallest projects, there is no reason to forgo any of these tools.

Managing your Source Code

tl;dr: It's 2013 and your local commit history is not sacrosanct. No one wants to see how you arrived at the solution; they just want to see clean commits that explain your solution as clearly as possible. Use git; use rebase; use "rebase interactive"; use the index; stage hunks; squash merge; go nuts.2

I would like to focus on the "versioning" part of the tool-chain. Source control tells the story of your code, showing how it evolved to where it is at any given point. If you look closely at the "Encodo Branching Model"3 diagram, you can see the story of the source code:

  1. All development was done in the master branch until v1.0 was released
  2. Work on B was started in a feature branch
  3. Work on hotfix v1.0.1 was started in a hotfix branch
  4. Work on A was started in another feature branch
  5. Hotfix v1.0.1 was released, tagged and merged back to the master branch
  6. Development continued on master and both feature branches
  7. Master was merged to feature branch A (includes hotfix v1.0.1 commits)
  8. Finalization for release v1.1 was started in a release branch
  9. Feature A was completed and merged back to the master branch
  10. Version v1.1 was released, tagged and merged back to the master branch
  11. Master was merged to feature branch B (includes v1.1 and feature A commits)
  12. Development continued on master and feature B
  13. Version v1.2 was released and tagged
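The first few steps of that story can be replayed with plain git commands. The sketch below covers steps 1 through 5 in a throwaway repository; all file and branch names are illustrative:

```shell
# Replay of steps 1-5 of the branching model in a throwaway repository.
# All file and branch names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git symbolic-ref HEAD refs/heads/master

# 1. All development happens on master until v1.0 is released
echo "v1.0" > app.txt
git add app.txt
git commit -q -m "Work leading up to v1.0"
git tag v1.0

# 2.-4. Feature and hotfix work split off into their own branches
git branch feature-b master
git checkout -q -b hotfix-1.0.1 v1.0
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "Fix critical bug"

# 5. The hotfix is released, tagged and merged back to master
git tag v1.0.1
git checkout -q master
git merge -q --no-ff -m "Merge hotfix v1.0.1" hotfix-1.0.1
git log --oneline --graph
```

The `--no-ff` flag forces a merge commit even where a fast-forward would be possible, which is exactly what preserves the "this was a hotfix branch" shape in the diagram.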

Small, precise, well-documented commits are essential in order for others to understand the project -- especially those who weren't involved in developing the code. It should be obvious from which commits you made a release. You should be able to go back to any commit and easily start working from there. You should be able to maintain multiple lines of development, both for maintenance of published versions and for development of new features. The difficulty of merging these branches should be determined by the logical distance between them rather than by the tools. Merging should almost always be automatic.

Nowhere in those requirements does it say that you're not allowed to lie about how you got to that pristine tree of commits.

Why you should be using private branches and history rewriting

A few good articles about Git have recently appeared -- Understanding the Git Workflow by Benjamin Sandofsky is one such -- explaining better than ever why rewriting history is better than server-side, immutable commits.

In the article cited above, Sandofsky divides his work up into "Short-lived work [...] larger work [...] and branch bankruptcy." These concepts are documented to some degree in the Branch Management chapter of the Encodo Git Handbook (of which I am co-author). I will expand on these themes below.

Note: The linked articles deal exclusively with the command line, which isn't everyone's favorite user interface (I, for one, like it). We use the SmartGit/Hg client for visualizing diffs, organizing commits and browsing the log. We also use the command-line for a lot of operations, but SmartGit is a very nice tool and version 3 supports nearly all of the operations described in this article.

What is rebasing?

As you can see from the diagram above, a well-organized and active project will have multiple branches. Merging and rebasing are two different ways of getting commits from one branch into another.

Merging commits into a branch creates a merge commit, which shows up in the history to indicate that n commits were made on a separate branch. Rebasing those commits instead re-applies those commits to the head of the indicated branch without a merge commit. In both cases there can be conflicts, but neither method poses a greater risk of them than the other.4 You cannot tell from the history that rebased commits were developed in a separate branch. You can, however, tell that the commits were rebased because the author date (the time the commit was originally created) differs from the commit date (the last time that the commit was applied).
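The difference is easy to see in a scratch repository. The following sketch (branch and file names are illustrative) builds a small feature branch while master moves ahead, then rebases the feature onto master so that the history stays linear, with no merge commit:

```shell
# Build a tiny history with a feature branch, then rebase it onto
# master: the result is linear, with no merge commit.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git symbolic-ref HEAD refs/heads/master

echo base > base.txt
git add base.txt && git commit -q -m "Base commit"

git checkout -q -b feature
echo feature > feature.txt
git add feature.txt && git commit -q -m "Feature commit"

git checkout -q master
echo more > more.txt
git add more.txt && git commit -q -m "More work on master"

# Merging here would create a merge commit; rebasing instead re-applies
# "Feature commit" on top of master with a new commit id and commit date.
git checkout -q feature
git rebase -q master
git log --oneline --graph
```

After the rebase, "Feature commit" sits on top of "More work on master" as if it had been written there all along; only the differing author and commit dates give the game away.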

What do you recommend?

At Encodo, we primarily work in the master branch because we generally work on very manageable, bite-sized issues that can easily be managed in a day. Developers are free to use local branches but are not required to do so. If some other requirement demands priority, we shunt the pending issue into a private branch. Such single-issue branches are focused and involve only a handful of files. It is not at all important to "remember" that the issue was developed in a branch rather than the master branch. If there are several commits, it may be important for other users to know that they were developed together and a merge-commit can be used to indicate this. Naturally, larger changes are developed in feature branches, but those are generally the exception rather than the rule.

Remember: Nowhere in those requirements does it say that you're not allowed to lie about how you got to that pristine tree of commits.

Otherwise? Local commit history is absolutely not sacrosanct. We rebase like crazy to avoid unwanted merge commits. That is, when we pull from the central repository, we rebase our local commits on top of the commits that come from the origin. This has worked well for us.
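That pull-and-rebase flow can be seen end-to-end in a scratch setup with a central repository and two clones (all repository paths and names below are illustrative):

```shell
# Two clones of a central repository diverge; pulling with --rebase
# replays the local commit on top of the incoming one, so the local
# history stays linear. All paths and names are illustrative.
set -e
work=$(mktemp -d)
cd "$work"
git init -q --bare central.git
git --git-dir=central.git symbolic-ref HEAD refs/heads/master

git clone -q central.git local 2> /dev/null
cd local
git config user.email "dev@example.com"
git config user.name "Dev"
git symbolic-ref HEAD refs/heads/master
echo one > a.txt && git add a.txt && git commit -q -m "Shared commit"
git push -q origin master

# Someone else pushes a commit to the central repository...
cd "$work"
git clone -q central.git other
cd other
git config user.email "other@example.com"
git config user.name "Other"
echo two > b.txt && git add b.txt && git commit -q -m "Upstream commit"
git push -q origin master

# ...while we commit locally; --rebase avoids the merge commit that a
# plain pull would create. ("git config pull.rebase true" makes this
# the default for the repository.)
cd "$work/local"
echo three > c.txt && git add c.txt && git commit -q -m "Local commit"
git pull -q --rebase origin master
git log --oneline
```

The local repository ends up with three commits in a straight line: the shared base, the upstream commit, and the local commit replayed on top.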

If the local commit history is confusing -- and this will sometimes come up during the code review -- we use an interactive rebase to reorganize the files into a more soothing and/or understandable set of commits. See Sandofsky's article for a good introduction to using interactive rebasing to combine and edit commits.
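An interactive rebase can even be scripted. In this sketch (commit messages and file names are illustrative), GIT_SEQUENCE_EDITOR rewrites the rebase todo list so that a follow-up "WIP" commit is folded into its predecessor without any manual editing:

```shell
# Fold a follow-up "WIP" commit into its predecessor with an interactive
# rebase, driven non-interactively via GIT_SEQUENCE_EDITOR.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git symbolic-ref HEAD refs/heads/master

echo base > base.txt && git add base.txt && git commit -q -m "Base commit"
echo wip1 > work.txt && git add work.txt && git commit -q -m "Start feature"
echo wip2 >> work.txt && git add work.txt && git commit -q -m "WIP: more"

# Change the second "pick" line to "fixup": that commit's changes are
# folded into "Start feature" and its message is discarded.
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' git rebase -q -i HEAD~2
git log --oneline
```

In day-to-day use you would simply run `git rebase -i HEAD~2` and edit the todo list in your editor; the scripted form just makes the example reproducible.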

Naturally, we weigh the amount of confusion caused by the offending commits against the amount of effort required to clean up the history. We don't use bisect5 very often, so we don't invest a lot of time in enforcing the clean, compilable commits required by that tool. For us, the history is interesting, but we rarely go back farther than a few weeks in the log.6

When to merge? When to rebase?

At Encodo, there are only a few reasons to retain a merge commit in the official history:

  1. If we want to remember which commits belonged to a particular feature. Any reasonable tool will show these commits graphically as a separate strand running alongside the master branch.
  2. If a rebase involves too much effort or is too error-prone. If there are a lot of commits in the branch to be integrated, there may be subtle conflicts that resolve more easily if you merge rather than rebase. Sometimes we just pull the e-brake and do a merge rather than waste time and effort trying to get a clean rebase. This is not to say that the tools are lacking or at fault but that we are pragmatic rather than ideological.7
  3. If there are merge commits in a feature branch with a large number of well-organized commits and/or a large number of changes or affected files. In this case, using a squash merge and rebuilding the commit history would be onerous and error-prone, so we just merge to avoid issues that can arise when rebasing merge commits (related to the point above).

When should I use private branches? What are they exactly?

There are no rules for local branches: you can name them whatever you like. However, if you promote a local branch to a private branch, at Encodo we use the developer's initials as the prefix for the branch. My branches are marked as "mvb/feature1", for example.

What's the difference between the two? Private branches may get pushed to our common repository. Why would you need to do that? Well, I, for example, have a desktop at work and, if I want to work at home, I have to transfer my workspace somehow to the machine at home. One solution is to work on a virtual machine that's accessible to both places; another is to remote in to the desktop at work from home; the final one is to just push that work to the central repository and pull it from home. The push/pull solution lets you keep working locally, with the advantages of speed and less reliance on connectivity.
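A minimal sketch of that workflow, using the initials convention described above (all repository paths and names are illustrative):

```shell
# Push a private branch (prefixed with the developer's initials) from
# one machine and pick it up on another. Paths and names are illustrative.
set -e
work=$(mktemp -d)
cd "$work"
git init -q --bare central.git
git --git-dir=central.git symbolic-ref HEAD refs/heads/master

# The machine at the office: commit on a private branch and push it.
git clone -q central.git office 2> /dev/null
cd office
git config user.email "mvb@example.com"
git config user.name "MVB"
git symbolic-ref HEAD refs/heads/master
echo base > base.txt && git add base.txt && git commit -q -m "Base"
git push -q origin master
git checkout -q -b mvb/feature1
echo wip > feature.txt && git add feature.txt && git commit -q -m "WIP on feature 1"
git push -q -u origin mvb/feature1

# The machine at home: clone and continue on the private branch.
cd "$work"
git clone -q central.git home
cd home
git checkout -q mvb/feature1
git log --oneline -1
```

Checking out `mvb/feature1` in the fresh clone automatically creates a local branch tracking the pushed one, so work continues exactly where it left off.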

What often happens to me is that I start work on a feature but can only spend an hour or two on it before I get pulled off onto something else. I push the private branch, work on it a bit more at home, push back, work on another, higher-priority feature branch, merge that in to master, work on master, whatever. A few weeks later and I've got a private branch with a few ugly commits, some useful changes and a handful of merge commits from the master branch. The commit history is a disgusting mess and I have a sneaking suspicion that I've only made changes to about a dozen files but have a dozen commits for those changes.

That's where the aforementioned "branch bankruptcy" comes in. You're not obligated to keep that branch; you can keep the changes, though. As shown in the referenced article, you execute the following git commands:

git checkout master
git checkout -b cleaned_up_branch
git merge --squash private_feature_branch
git reset

The --squash option tells git to apply all of the changes from private_feature_branch to the index (staging area) without committing them; the subsequent git reset unstages those changes so that they sit in the working tree. From here, you can make a single, clean, well-written commit or several commits that correspond logically to the various changes you made.

Git also lets you lose your attachment to checking in all the changes in a file at once: if a file has changes that correspond to different commits, you can add only selected differences in a file to the index (staging). In praise of Git's index by Aristotle Pagaltzis provides a great introduction. If you, like me, regularly take advantage of refactoring and cleanup tools while working on something else, you'll appreciate the ability to avoid checking in dozens of no-brainer cleanup/refactoring changes along with a one-liner bug-fix.8

One final example: cherry picking and squashing

I recently renamed several projects in our solution, which involved renaming the folders as well as the project files and all references to those files and folders. Git automatically recognizes these kinds of renames as long as the old file is removed and the new file is added in the same commit.

I selected all of the files for the rename in SmartGit and committed them, using the index editor to stage only the hunks from the project files that corresponded to the rename. Nice and neat. I selected a few other files and committed those as a separate bug-fix. Two seconds later, the UI refreshed and showed me a large number of deleted files that I should have included in the first commit. Now, one way to go about fixing this is to revert the two commits and start all over, picking the changes apart (including playing with the index editor to stage individual hunks).

Instead of doing that, I did the following:

  1. I committed the deleted files with the commit message "doh!" (to avoid losing these changes in the reset in step 3)
  2. I created a "temp" branch to mark that commit (to keep the commit visible once I reset in step 3)
  3. I hard-reset my master branch to the origin
  4. I cherry-picked the partial-rename commit to the workspace
  5. I cherry-picked the "doh!" commit to the workspace
  6. Now the workspace had the rename commit I'd wanted in the first place
  7. I committed that with the original commit message
  8. I cherry-picked and committed the separate bug-fix commit
  9. I deleted the "temp" branch (releasing the incorrect commits on it to be garbage-collected at some point)

Now my master branch was ready to push to the server, all neat and tidy. And nobody was the wiser.
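The whole maneuver can be replayed in a scratch repository. The sketch below uses illustrative file names and a local origin-state branch as a stand-in for origin/master; the "partial rename" is simulated by a rename commit that forgets to delete one old file:

```shell
# Replay of the recovery: park the bad commits on a temp branch, reset,
# then cherry-pick the pieces back in the right order. Names illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git symbolic-ref HEAD refs/heads/master

echo a > old-project.txt
echo b > old-helper.txt
git add . && git commit -q -m "Base"
git branch origin-state      # stand-in for origin/master

# The mess: a partial rename (old-helper.txt was not deleted), a separate
# bug fix, and then the forgotten deletion as a "doh!" commit.
git mv old-project.txt new-project.txt
cp old-helper.txt new-helper.txt && git add new-helper.txt
git commit -q -m "Rename projects"
rename_sha=$(git rev-parse HEAD)
echo fix > bugfix.txt && git add bugfix.txt && git commit -q -m "Fix bug"
bugfix_sha=$(git rev-parse HEAD)
git rm -q old-helper.txt && git commit -q -m "doh!"

# Steps 2-3: keep the commits reachable on "temp", then hard-reset.
git branch temp
git reset -q --hard origin-state

# Steps 4-7: cherry-pick the rename and the "doh!" deletion into the
# index (-n means "no commit"), then make one clean rename commit.
git cherry-pick -n "$rename_sha"
git cherry-pick -n temp
git commit -q -m "Rename projects (complete)"

# Steps 8-9: replay the bug fix, then drop the temp branch.
git cherry-pick -n "$bugfix_sha"
git commit -q -m "Fix bug"
git branch -D temp
git log --oneline
```

The end state is three tidy commits (base, complete rename, bug fix), with the messy intermediate commits left unreachable for the garbage collector.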

  1. There are alternatives now, though, like Microsoft's TypeScript, that warrant a look if only because they help tighten the error-finding feedback loop and have the potential to make you more efficient (the efficiency may be robbed immediately back, however, if debugging generated code becomes difficult or even nightmarish).

  2. Once you've pushed, though? No touchie. At that point, you've handed in your test and you get graded on that.

  3. According to my business card, I'm a "senior developer and partner" at Encodo System AG.

  4. With the exception, mentioned elsewhere as well, that rebasing merge-commits can sometimes require you to re-resolve previously resolved conflicts, which can be error-prone if the conflicts were difficult to resolve in the first place. Merging merge-commits avoids this problem.

  5. bisect is a git feature that executes a command against various commits to try to localize the commit that caused a build or test failure. Basically, you tell it the last commit that worked and git uses a binary search to find the offending commit. Of course, if you have commits that don't compile, this won't work very well. We haven't used this feature very much because we know the code in our repositories well and using blame and log is much faster. Bisect is much more useful for maintainers that don't know the code very well, but still need to figure out at which commit it stopped working.
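For the curious, a bisect run can be sketched like this; the "bug" is simulated by a marker written into a file, and all names are illustrative:

```shell
# Let git bisect find the first bad commit automatically: commit 7 of
# ten introduces a simulated bug. All names are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
git symbolic-ref HEAD refs/heads/master

for i in $(seq 1 10); do
  if [ "$i" -lt 7 ]; then status=GOOD; else status=BAD; fi
  printf 'version %s\nstate: %s\n' "$i" "$status" > state.txt
  git add state.txt
  git commit -q -m "Commit $i"
done

# Mark the endpoints (HEAD is bad, HEAD~9 is good), then let bisect run
# a test command against each candidate; a non-zero exit means "bad".
git bisect start HEAD HEAD~9 > /dev/null
git bisect run grep -q GOOD state.txt > /dev/null 2>&1
bad=$(git rev-parse refs/bisect/bad)
git bisect reset > /dev/null 2>&1
git log --oneline -1 "$bad"
```

In roughly log2(n) steps the binary search lands on "Commit 7"; in real projects the test command is a build or test script, which is why bisect only shines when every commit compiles.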

  6. To be clear: we're only so cavalier with our private repositories to which access is restricted to those who already know what's going on. If we commit changes to public, open-source or customer repositories, we make sure that every commit compiles. See Aristotle's index article (cited above) for tips on building and testing against staged files to ensure that a project compiles, runs and passes all tests before making a commit -- even if you're not committing all extant changes.

  7. That said, with experience we've learned that an interactive rebase and judicious squashing will create commits that avoid these problems. With practice, these situations crop up more and more rarely.

  8. Of course, you can also create a separate branch for your refactoring and merge it all back together, but that's more work and is in my experience rarely necessary.

Updating to a touch-friendly UI

This article originally appeared on earthli News and has been cross-posted here.

I was recently redesigning a web page and wanted to make it easier to use from touch-screen browsers. Links made only of text are relatively easy to click with a mouse, but tend to make poor touch targets. If the layout has enough space around the link, this can be remedied by applying CSS.

The basic box


Suppose we have a box with three links in it, as shown to the right.

Setting the height

The first step is to make this box taller, so the logical thing to do is to set the height. We'll have to pick a value, so set height: 40px on the gray box.


Aligning vertically

This isn't exactly what we want, though; we'd rather have the vertical space equally distributed. Also, if you hover over the links, you can see that the space below the text is not active. Maybe we can try to add vertical-align: middle to align the content.


Unfortunately, this doesn't have the desired effect. The vertical-align property works when used this way in table cells, but otherwise has no effect for block elements. Knowing that, we can set display: table-cell for the gray box.


And now the box has become wider, because the 50% width of the box is calculated differently for table cells than for regular boxes (especially when a table cell is found outside of a table).

Relative positioning

Let's abandon the vertical-alignment approach and try using positioning instead. Set position: relative and top: 25% to center the links vertically.


Now that looks much better, but the space above and below the links is still not active. Perhaps we can use the height trick again, to make the individual links taller as well. So we set height: 100% on each of the links.


We didn't get the expected result, but we should have expected that: the links are inline elements and can only have a height set if we set display: inline-block on each link as well. We use inline-block rather than block so that the links stay on the same line.


The links are now the right size, but they stick out below the gray box, which isn't what we wanted at all. We're kind of out of ideas with this approach, but there is another way we can get the desired effect.

Positive padding and negative margins

Let's start with the original gray box and, instead of choosing a random height as we did above -- 40px -- let's set padding: 8px on the gray box to make room above and below the links.


With just one CSS style, we've already got the links nicely aligned and, as an added benefit, this technique scales even if the font size is changed. The 8-pixel padding is preserved regardless of how large the font gets.1


This approach seems promising, but the links are still not tall enough. The naive approach of setting height: 100% on the links probably won't work as expected, but let's try it anyway.


It looks like the links were already 100% of the height of the container; in hindsight it's obvious, since the height of the gray box is determined by the height of the links. The 100% height refers to the client area of the gray box, which doesn't include the padding.

We'd actually like the links to have padding above and below just as the gray box has. As we saw above, the links will only honor the padding if they also have display: inline-block, so let's set that in addition to padding: 8px.


We're almost there. The only thing remaining is to make the vertical padding of the links overlap with the vertical padding of the gray box. We can do this by using a negative vertical margin, setting margin: -8px.


We finally have the result we wanted. The links are now large enough for the average finger to strike without trying too hard. Welcome to the CSS-enabled touch-friendly world of web design.

The code for the final example is shown below:

.gray-box {
  background-color: gray;
  border: 1px solid black;
  border-width: 1px 0;
  width: 50%;
  text-align: center;
  padding: 8px 0;
}

.gray-box a {
  background-color: #8F8F8F;
  display: inline-block;
  padding: 8px 20px;
  margin: -8px 0;
}

<div class="gray-box">
  <a href="#" style="color: goldenrod">First</a>
  <a href="#" style="color: gold">Second</a>
  <a href="#" style="color: yellowgreen">Third</a>
</div>

  1. Naturally, we could also use .8em instead and then the padding will scale with the font size. This would work just as well with the height. Let's pretend that we're working with a specification that requires an 8-pixel padding instead of a flexible one.

Visual Studio & ReSharper Hotkey Reference (PDF)

It's always a good idea to familiarize yourself with the hotkeys (shortcuts, accelerators, etc.) for any piece of software that you use a lot. Using a keyboard shortcut is often much faster than reaching for the mouse. Applications with a lot of functionality -- like Word, IDEs and graphics tools -- have a lot of hotkeys, and they will help you become much more efficient.

At Encodo, we do a lot of our programming in Visual Studio and ReSharper. There are a lot of very useful hotkeys for this IDE combination and we listed the ones we use in a new document that you can download called the Encodo Visual Studio & ReSharper Hotkey Reference.

We managed to make all of the shortcuts fit on a single A4 page, so you can print it out and keep it on your desk until you've got them memorized. Almost all of the hotkeys are the standard ones for Visual Studio and for ReSharper (when using the Visual Studio key mapping), so you won't even have to change anything.

To get the few non-standard hotkeys in the Encodo standard settings, you can import this VS2012 settings file. In addition to the most important combinations listed in the hotkey reference, the file includes the following changes to the standard layout.

  • Ctrl+W: Close window
  • Alt+Shift+W: Close all but this window
  • Ctrl+Shift+W: Close all windows
  • Ctrl+Shift+E: Select current word

And, even if you have your own preferred hotkeys, you can still take a look at the chart to discover a feature of Visual Studio or ReSharper that you may have missed.

At the time of writing, we're using Visual Studio 2012 SP1 and ReSharper 7.1.

The Next Opera Next Browser

This article originally appeared on earthli News and has been cross-posted here.

Opera started a public beta-testing program a few years ago called Opera Next. Whereas the stable version naturally moved along more slowly -- but always rock-solid -- Opera Next often had a more up-to-date HTML/CSS renderer (code-named Presto) and Javascript engine (code-named Carakan). Opera recently announced that future versions -- Opera Next Next -- would be built on the WebKit HTML/CSS renderer and Google's open-source V8 Javascript engine instead.

Why is it good news?

This is, I think, good news for both Opera users and Opera as a company. Opera and WebKit: a personal perspective by Bruce Lawson is of the same mind, writing pragmatically that, "Opera's Presto engine was a means to an end".

The browser landscape has changed significantly since IE dominated the world with over 90% market share in 2004. IE now has less than 50% worldwide browser share (desktop share is 54%), but Chrome and Firefox each have about 20% as well. Users for some sites, like Wikipedia, are divided up much more evenly between Chrome, Firefox and IE, with Safari and Opera making up about 10%. The point is that the browser market is considerably different than it once was -- and all participants are actively working from documented W3C standards and specifications. Sure, there are still browser-specific CSS prefixes and some highly specific implementations (e.g. the file API from Google), but the vendors mostly stick to an open process into which they all have input. And Gecko and WebKit code is open source.

As Lawson puts it so well,

These days, web standards aren't a differentiator between browsers. Excellent standards support is a given in modern browsers. Attempting to compete on standards support is like opening a restaurant and putting a sign in the window saying "All our chefs wash their hands before handling food."

Why WebKit?

All of which is why Lawson (he's the guy who wrote the press release for Opera's move to WebKit) writes,

[i]t seems to me that WebKit simply isn't the same as the competitors against which we fought, and its level of standards support and pace of development match those that Opera aspires to.

The Trident code-base -- the renderer for Microsoft's browsers -- was and still is closed-source. While it has been much more actively developed in the last couple of years, back in the bad old days of IE6, the code base was stagnant, lacked innovation and had little to no standards support.

WebKit is certainly not any of these things. If you follow the WebKit changelogs, it's clear that the majority of changes are to implement an HTML5 feature or to improve performance or to use less memory for common tasks. This is a vibrant, highly active and open-source project with developers from at least two large development teams -- Apple and Google -- actively contributing to it.1

Opera adding their cadre of excellent engineers to the mix can only be a good thing -- for everyone involved. They've already written that they plan to port their superior HTML5 forms support to WebKit, a very useful feature on which other participants were dragging their feet.

Running just to stay in place

A while back, Opera made a valiant attempt to get the whole browser running in a hardware-accelerated mode and almost made it, but had to pull back just before release because of stability issues on some machines. The mode is still there if you want to enable it. Flow layout is great and WebGL also made it in, but the implementation lagged. As did many other features.

While Opera was first to implement a few features -- like HTML Forms or their excellent SVG support -- they were having trouble keeping up and implementing every working draft and standard. I wonder whether the Opera dev team took a look at a feature comparison of Opera vs. Chrome and despaired of ever catching up. For the first 3/4 of the page, Opera and Chrome are dead-even. And then things go downhill for Opera: 3D transforms, filter effects, masks, touch events, border images, device-orientation events. These are all things that are standardized (working drafts, anyway) and that I have used -- or wanted to use -- in current web projects.

They probably had to make hard choices about where to invest their time, energy and money. Some have tried to argue that the departure of the Presto engine will adversely affect standards acceptance. The article Hey Presto, Opera switches to WebKit by Peter Bright writes the following:

Opera could have gone the route that Microsoft has chosen, trying to educate Web developers and providing tools to make cross-platform testing and development easier, but perhaps the company [...] felt that asking Web developers to stick to standards was [...] futile. Historically, the company has tried to do just this, but its success at influencing real Web developers has been limited; for all the emphasis placed on standards, many developers don't, in practice, care about them.

The developers over which Opera had influence were most likely already coding to standards. The move to WebKit isn't going to change any of that. Opera's 2% of the market was not enough to even get any developers to test with it, and many products clearly got managerial approval to just stick an if (Opera) { fail(); } statement in their web sites to prevent their site from appearing buggy in an untested browser. It's hard to see how Opera's move will change standards development or acceptance.

Why Opera?

I don't think that most users chose Opera because of its renderer. In all honesty, there were very few cases where Opera rendered a site better than other browsers -- and enough examples where Opera did not render as well, whether due to missing functionality or deliberate crippling by the site.


Opera has led the way with many non-renderer innovations.

  • Tabbed browsing
  • Speed dial
  • Browsing sessions (tabs+history)
  • Popup blocking
  • Restart from last browsing session (with history)
  • Mouse gestures
  • Searchable window list (essential when dozens or hundreds of tabs are open)

These features were all pioneered by Opera and many have since been adopted by other major browsers (either natively or through extensions). It's certainly a good thing to think that the development team that brought you these innovations and features will be spending less time on HTML5 minutiae and more time on browser features like these.2

  1. The Chrome team has a stated goal of pushing changes in Chromium back upstream to the WebKit main line. Apple has historically done a lot of this as well. I'm not sure what the situation is now, so take my statement with a grain of salt.

  2. This is not to say that I haven't considered the possibility that Opera will, instead of moving the high-level dev staff to working on WebKit patches, simply drop them from the staff in order to save money. Their press release didn't indicate that they were slimming down, but then it wouldn't, would it? Time will tell.

Archive of Quino release notes are now online

For a long time, we maintained our release notes in a wiki that's only accessible for Encodo employees. For the last several versions -- since v1.7.6 -- we've published them on the web site, available for all to see.1 Because they reveal the progress and history of Quino quite nicely, we've made all archival release notes available on the web site as well.

Browse to the Quino folder in the blogs to see them all. Some highlights are listed below:

  • v0.1 -- November 2007: Even the very first version already included the core of Quino: metadata, basic schema migration and an automatically generated Winforms-based UI
  • v0.5 -- July 2008: Introduced reporting integration, based on the DevExpress reporting system and including a report-designer UI
  • v1.0 -- April 2009: The 1.0 release coincided with the initial release of the first suite of applications built with Quino2 and included a lot of performance tweaks (e.g. caching)
  • v1.0.5 -- September 2009: Included the first version of the MS-SQL driver, with support for automated schema migration (putting it more or less in the same class as the PostgreSQL driver).
  • v1.5.0 -- October 2010: Quino officially moved to .NET 4.0 and VS2010
  • v1.6.0 -- March 2011: Included an initial version of a Mongo/NoSQL data driver (without schema-migration support)
  • v1.7.0 -- February 2012: Included a radical rewrite of the startup and shutdown mechanism, including configuration, service locator registration as well as the feedback sub-system
  • v1.7.6 -- August 2012: Quino moved to NuGet for all third-party package integration, with NAnt integration for all common tasks
  • v1.8.0 -- August 2012 and 1.8.1 -- September 2012: The data-driver architecture was overhauled to improve support for multiple databases, remoting, cloud integration, custom caching and mini-drivers
  • v1.8.3 -- October 2012: Introduced a user interface to help analyze data traffic (which integrated with the statistics maintained by the data driver) for Winform applications
  • v1.8.5 -- November 2012: The latest available release -- as of the end of 2012 -- includes an overhaul of the metadata-generation pattern with an upgrade path for applications as well as support for running against a local database using the Mongo data driver.
  • v1.8.63: Coming up in the next release is support for plugins and model overlays to allow an application to load customizations without recompiling.

  1. Please note that the detailed release notes include links to our issue-tracking system, for which a login is currently required.

  2. The initial version of Quino was developed while working on a suite of applications for running a medium-sized school. The software includes teacher/student administration, curriculum management, scheduling and planning as well as integration with mail/calendar and schedule-display systems. By the time we finished, it also included a web interface for various reporting services; an absence-tracking and management system will be released soon.

  3. The release notes were not available at publication time.

Windows 8: felled by a modem driver

tl;dr: if you can't read the BSOD message or need to examine the minidump files generated by Windows when it crashes, use the BlueScreenView utility to view them. Windows 8 kept crashing on shutdown for me because of an errant 56K modem driver. Sad -- so sad -- but true.

My Windows 8 installation went off with just one hitch: the machine crashed on shutdown. Every. Single. Time. This made it impossible to use the hibernation feature, which was a blocker issue for a laptop.

So, how to solve the issue? Well, the first step is to read the error message, right? In this case, the crash was a Blue Screen of Death (BSOD), so you can't copy the message or take a screenshot of it. You can take a picture if you're quick on the draw: for the last several versions, Windows has been extremely shy about crashing and will hurriedly restart before the user even realizes what has happened.

That means you have to be fast to read the message, but it used to be possible. With Windows 8, though, the BSOD message has been redesigned to show a little sad face and a message telling the user that Windows is gathering information related to the crash and will restart shortly. In the example to the right, you can see that the small text reads HAL_INITIALIZATION_FAILED. In that example, though, the error message takes up the whole screen; in my case, the blue area was limited to a 640x480 block in the center and the fine print had been scaled down into illegibility.

That tiny bit of text holds the salient nugget of information that can help a veteran Windows user solve the problem. This was, needless to say, quite frustrating. The Event Viewer showed nothing, which wasn't unusual in the case of a full system crash -- how would it be able to write an error message?

The system would still boot up fine and was perfectly usable, so I could search for help in finding that elusive message. Every halfway-useful page I found quickly ended in a forum moderator instructing users to upload their "minidump" files so that a Microsoft employee could examine them.

That wasn't acceptable, so I searched for help on how to read the minidump files myself. Microsoft's instructions ran to multiple steps and required installing low-level debugging software. Frustrated, I jumped into a conversation called Blue Screen error delivers unreadable instructions (font too small); how to increase fontsize in Blue Screen?, which also featured an unhelpful answer. Luckily, someone responded almost immediately with a tip to use BlueScreenView to read minidump files.

Within seconds I'd found out that my crashes were caused by a driver file called CAX_CNXT.sys. A few more seconds and I'd found out that this was the driver for the 56K modem on my laptop. I disabled that device with extreme prejudice and restarted the machine. No crash. Problem solved. It took longer than it had to, but now my machine has been running stably on Windows 8 for days. And lacking a modem driver hasn't affected my workflow at all.

This article originally appeared on earthli News and has been cross-posted here.

Git Handbook 2.0 is now available

We recently released the latest version of our Git Handbook. The 2.0 version includes a completely rewritten chapter on branch management and branching strategy (Chapter 11) -- including a branching diagram that's easier to understand than most (shown to the right). After having used Git for another year, we refined this section considerably to come up with a pragmatic strategy that works in many, if not all, situations.

If you downloaded an earlier version of the Git Handbook, it's definitely worth your time to take a look at this latest revision.

How to convert a Virtual PC 2007 VMC file to work with Hyper-V

Windows 8 was made publicly available a few weeks ago. As usual, Microsoft manages to guarantee compatibility with a lot of software, but there are a few tools that will simply no longer run.

One of these is Microsoft's own Security Essentials product, which has been completely replaced by Windows Defender, which is built right into Windows 8. So that one's easy.

Another is Microsoft Virtual PC 2007. It doesn't run under Windows 8 at all. Nor is its configuration format directly compatible with any of the other virtualization solutions that do run under Windows 8.

  • As of November 2012, VirtualBox is still having some compatibility and speed problems under Windows 8
  • VMWare's runner also doesn't have an easy upgrade path for Virtual PC images. You have to convert the disk image and somehow recreate the VM configuration file
  • Even Microsoft's own Hyper-V is only available on machines that have hardware support for it and, while the disk image is compatible, the configuration format is completely different

If you're already a user of Microsoft's Virtual PC, then it's likely you'd like to just upgrade to using Hyper-V, if possible. Luckily, Hyper-V is available as an option for Windows 8 Pro and higher. To find out if your machine supports it and to install it, follow the instructions below.

Enable Hyper-V


  • Press Windows key+W to search settings
  • Type "win fea" and press Enter to show the "Turn Windows Features On and Off" window
  • If the "Hyper-V" checkbox is already checked, then you're ready for the next step
  • If the "Hyper-V" checkbox is disabled, then you're out of luck; Hyper-V is not available for your machine and you'll have to try one of the other virtualization solutions mentioned above
  • Otherwise, check the "Hyper-V" checkbox and press "Ok". You'll naturally have to reboot for those changes to be applied.
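The same steps can also be performed from an elevated PowerShell prompt; a minimal sketch using the standard DISM cmdlets (run as administrator, and note that a reboot is still required afterwards):

```shell
# Check whether the Hyper-V feature is available and whether it's enabled
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V

# Enable Hyper-V along with all of its sub-features
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```

These commands only work on a Windows 8 Pro (or higher) host whose hardware supports Hyper-V, just like the checkbox in the dialog.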

Configure the Hyper-V Switch

Once Hyper-V is enabled and you've rebooted, you can start the Hyper-V Manager and configure it.

  • Press the Windows key to show the start screen
  • Type "hyper" and press Enter to find and start the "Hyper-V Manager"
  • Select your machine in the tree on the left under Hyper-V Manager
  • In the settings for that machine on the right, click "Virtual Switch Manager"
  • In that dialog, the "New virtual network switch" node should be selected and you'll see a list on the right. Select "External" to create a switch that has access to the Internet and press "Create Virtual Switch"


At this point, your Hyper-V server is ready to load your virtual machine and let it access the Internet.
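If you prefer the command line, the external switch can also be created with the Hyper-V PowerShell module. A sketch -- the adapter name "Ethernet" below is an assumption; check yours first:

```shell
# List the physical network adapters to find the one to bind the switch to
Get-NetAdapter

# Create an external switch bound to that adapter; -AllowManagementOS keeps
# the host's own network connectivity working through the switch
New-VMSwitch -Name "External Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```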

Create a Hyper-V virtual machine from a .vmc file

All of the configuration settings for the Virtual PC virtual machine are stored in a .VMC file. Unfortunately, the Hyper-V manager can't import these files directly1. Luckily, there is a tool, called the VMC to Hyper-V Import Tool, which performs the import in a couple of easy steps.

  • Download, extract, install and run the tool
  • First, press "Connect" to attach to the local Hyper-V instance (if you just installed, then there is no user and password set)
  • Next, open the .vmc file for the virtual machine you want to import
  • The settings are loaded into the window; verify that they more-or-less match what you expect.2
  • Press the "Create Virtual Machine" button to create a new virtual machine in Hyper-V based on those settings.


You're now ready to configure and start up your virtual machine.

Configuring and Running the VM

There are two things to do to get this machine running smoothly under Hyper-V:

  • Set up the network interface
  • Install the Integration Services, which includes drivers but essentially makes the mouse work as expected and enables non-legacy networking for guest OSs that support it

There are two kinds of network interface: the standard one and a legacy one. If your guest operating system is Windows XP (as mine was), you have to use the legacy adapter. The documentation also says that a legacy adapter is required to have connectivity without the "Integration Services".3

Install a Legacy Adapter

If you have Windows XP, you can just remove the "Network Adapter" that's already included and instead install a "Legacy Adapter".

  • From the Hyper-V Manager, select the settings for your machine
  • "Add Hardware" is automatically selected
  • Select the "Legacy Network Adapter" and press "Add"
  • Select the existing "Network Adapter" in the list on the left to show its settings
  • Press the "Remove" button to remove the unneeded "Network Adapter"


You can now set up the network for that adapter, as shown below.
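The same swap can be scripted with the Hyper-V PowerShell module; a sketch, where the VM name "WinXP" and the adapter names are placeholders for your own configuration:

```shell
# Remove the standard (synthetic) adapter that the import created
Remove-VMNetworkAdapter -VMName "WinXP" -Name "Network Adapter"

# Add a legacy (emulated) adapter, which Windows XP can use
# without the Integration Services installed
Add-VMNetworkAdapter -VMName "WinXP" -IsLegacy $true -Name "Legacy Adapter"
```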

Set up the network

  • From the Hyper-V Manager, select the settings for your machine
  • Select the "Legacy Network Adapter" or "Network Adapter" to show its settings
  • Assign the switch (created in a step above) to the network and press OK to save settings


At this point, your virtual machine should be able to connect to the network once it's started.
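In PowerShell terms, assigning the switch looks like this (VM, adapter and switch names are assumptions carried over from the earlier sketches):

```shell
# Attach the VM's adapter to the external switch created earlier
Connect-VMNetworkAdapter -VMName "WinXP" -Name "Legacy Adapter" -SwitchName "External Switch"
```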

Install Integration Services

The machine is not very useful until you've installed the integration services. These services enable seamless mouse support and also enable networking over a non-legacy adapter.


  • Start the VM
  • Wait for the machine to finish booting4
  • You can't install the integration services until the previous integration tools have been uninstalled. If your guest OS is Windows XP, uninstall the Virtual PC tools using the "Add/Remove Programs" control panel
  • Once all other tools are uninstalled, select "Insert Integration Services Setup Disk" from the "Action" menu
  • After a few seconds, the installation should start automatically
  • You'll have to reboot the guest OS to finish installation.

That's it! Your Windows XP guest should once again have full hardware support, including legacy networking (up to 100Mb). Adjust your display settings back up to a usable resolution, re-activate with Microsoft (you have three days) and enjoy your new Hyper-V virtual machine.

  1. It's an utter mystery why Microsoft couldn't be bothered to provide an upgrade path from its own product. Perhaps they didn't want to "officially" support such upgrades in order to kill off as many virtual machines running Windows XP as possible.

  2. In my case, the path to the main disk image was incorrect and showed up in red. It's a mystery why that file had such an old path in it, while the VM started with the correct disk image in Virtual PC. At any rate, I adjusted the path to point to the correct disk image, the text turned black and I was allowed to continue.

  3. It's unclear to me whether network connectivity is required in order to install the integration tools. It took several attempts before the integration services installed successfully. It's possible that this was due to the unsatisfactory network situation, but I can't say for sure.

  4. And, if you're using Windows XP as the guest OS, until it has stopped complaining about hardware changes and activation problems.

A Quino project template for Visual Studio 2012


In addition to the project templates that it ships with, Visual Studio lets you create and use your own templates. This pays off when you frequently create similar projects and the project setup is relatively involved. Both conditions apply nicely to our in-house framework Quino: every new Quino application needs a model, which in turn requires several code and configuration files. Especially the first time around, it can easily take a few hours until everything runs as desired.

With a project template, this is much simpler: create a new project, choose the desired modules, and a runnable Quino application with a simple model is generated. You can then implement your own application on top of it.

Creating your own project template, however, comes with quite a few pitfalls, especially when it is more extensive and should include its own wizard. The information on MSDN and on the Internet in general is sparse and at times confusing. To share what we learned with other developers, we wrote this short guide. It is tailored to Quino, but can of course be adapted to your own needs.



The goal is a project template that generates a runnable Quino application. The file names as well as the namespaces should be adapted to the name of the project. In addition, when generating the project, the template should show a wizard in which the various Quino modules can be selected or deselected. At the moment these are Core for the model and Winforms for a Winforms user interface. For each selected module, a project is created in the solution with all required references set correctly.


This guide assumes that Visual Studio 2012 is installed with all current updates. In addition, the Microsoft Visual Studio 2012 SDK is required; it provides the Project Template project type. It is also advisable to create a project that corresponds exactly to the project that is to be generated later.


A project template is basically nothing more than a zip file into which all the required files are packed. Of central importance are the .vstemplate files, of which every project template must contain at least one. Within a .vstemplate file, you can in turn link to other .vstemplate files in order to generate sub-projects. The .vstemplate files also hold the metadata of the individual projects. This is especially important for the main .vstemplate file, because this data is shown in Visual Studio's New Project dialog: the name of the project, a short description, an icon, as well as a larger graphic that can contain, for example, a screenshot or a logo. The zip file also contains all files that will later be included in the new project. All files can contain placeholders, which are replaced with the corresponding values when the project is generated.

File system (incomplete):



<VSTemplate Version="3.0.0" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" Type="ProjectGroup">
  <TemplateData>
    <Name>Quino Application</Name>
    <Description>A quino application with different modules.</Description>
  </TemplateData>
  <TemplateContent>
    <ProjectCollection>
      <ProjectTemplateLink ProjectName="$quinoapplicationname$.Core">...</ProjectTemplateLink>
      <ProjectTemplateLink ProjectName="$quinoapplicationname$.Winform">...</ProjectTemplateLink>
    </ProjectCollection>
  </TemplateContent>
</VSTemplate>

The names of the projects can be controlled with placeholders.

Core.vstemplate (representative of the sub-projects):

<VSTemplate Version="3.0.0" xmlns="http://schemas.microsoft.com/developer/vstemplate/2005" Type="Project">
  <TemplateData>
    <Name>Quino Core</Name>
    <Description>Provides a quino core project containing the model generation</Description>
  </TemplateData>
  <TemplateContent>
    <Project File="..." ReplaceParameters="true">
      <Folder Name="App" TargetFolderName="App">
        <ProjectItem ReplaceParameters="true" TargetFileName="$quinoapplicationname$Configuration.cs">...</ProjectItem>
      </Folder>
      <Folder Name="Models" TargetFolderName="Models">
        <Folder Name="Generators" TargetFolderName="Generators">
          <ProjectItem ReplaceParameters="true" TargetFileName="$quinoapplicationname$CoreGenerator.cs">...</ProjectItem>
        </Folder>
        <ProjectItem ReplaceParameters="true" TargetFileName="$quinoapplicationname$ModelClasses.cs">...</ProjectItem>
      </Folder>
    </Project>
  </TemplateContent>
</VSTemplate>

The names of the generated files can likewise be controlled with placeholders.

Placeholders can also be used within the individual code files; they are replaced with the corresponding values when the project is generated:

using Encodo.Quino.Meta;

namespace $quinoapplicationname$.Models
{
  public class $quinoapplicationname$ModelClasses
  {
    public IMetaClass Company { get; set; }

    public IMetaClass Person { get; set; }
  }
}

To allow the individual Quino modules to be selected or deselected -- and to give the user other configuration options, such as choosing the namespace -- you can equip the project template with its own wizard. It is shown every time a new project is created from the project template. The wizard for the project template is basically a perfectly normal .NET application that is compiled into a DLL and then registered in the template.


Creating the wizard

You can implement your own wizard for the project template by implementing the interface Microsoft.VisualStudio.TemplateWizard.IWizard. Since the Quino project template can currently generate two modules (Core and Winform), a total of three wizards are needed:

  • QuinoWizard: The main wizard, which provides the actual wizard dialog in which users of the template configure the desired options.
  • CoreWizard, WinformWizard: Pass the settings from the QuinoWizard on to the respective module.

The two sub-wizards are necessary because the main wizard has no access to the replacement parameters of the sub-project templates. A small trick is needed so that the main wizard can communicate with the sub-wizards: the settings made in the main wizard are stored in public static properties, which the two sub-wizards can in turn access. This works because all three wizards run in the same runtime environment.


QuinoWizard.cs:

public class QuinoWizard : IWizard
{
  #region Implementation of IWizard

  public void RunStarted(
    object automationObject,
    Dictionary<string, string> replacementsDictionary,
    WizardRunKind runKind,
    object[] customParams)
  {
    try
    {
      using (var inputForm = new UserInputForm())
      {
        // The Winforms dialog is shown and the user's settings
        // are stored in public static properties.
        inputForm.ShowDialog();

        GenerateCore = inputForm.GenerateCore;
        GenerateWinform = inputForm.GenerateWinform;
        QuinoApplicationName = inputForm.DefaultNamespace;
        EncodoSourceRoot = inputForm.EncodoSourceRoot;

        // The parameters are copied into the replacementsDictionary.
        Tools.SetReplacementParameters(replacementsDictionary);
      }
    }
    catch (Exception ex)
    {
      // ...
    }
  }

  public bool ShouldAddProjectItem(string filePath)
  {
    return true;
  }

  // All other implemented methods have empty bodies.

  #endregion

  public static string QuinoApplicationName { get; private set; }
  public static bool GenerateCore { get; private set; }
  public static bool GenerateWinform { get; private set; }
  public static string EncodoSourceRoot { get; private set; }
}

CoreWizard.cs (representative of the two sub-wizards):

public class CoreWizard : IWizard
{
  #region Implementation of IWizard

  public void RunStarted(
    object automationObject,
    Dictionary<string, string> replacementsDictionary,
    WizardRunKind runKind,
    object[] customParams)
  {
    if (!QuinoWizard.GenerateCore)
    {
      // This is how generation of the sub-project is prevented, if necessary.
      throw new WizardCancelledException();
    }

    // The parameters are copied into the replacementsDictionary.
    Tools.SetReplacementParameters(replacementsDictionary);
  }

  public bool ShouldAddProjectItem(string filePath)
  {
    return true;
  }

  // All other implemented methods have empty bodies.

  #endregion
}


For the sake of completeness, the method Tools.SetReplacementParameters:

public static void SetReplacementParameters(Dictionary<string, string> replacementsDictionary)
{
  replacementsDictionary.Add("$quinoapplicationname$", QuinoWizard.QuinoApplicationName);
  replacementsDictionary.Add("$encodosourceroot$", QuinoWizard.EncodoSourceRoot);
}

Registering in the GAC

For the wizard to be callable from the project template, it must be registered in the Global Assembly Cache (GAC). To do so, compile the wizard project, open the Visual Studio Command Prompt in administrator mode and register the assembly:

gacutil -i Wizard.dll

Linking with the project template

In the previous two steps, a wizard was created for each sub-project as well as for the main project. These wizards must now be linked in the individual .vstemplate files. To do so, insert the following XML code after the corresponding node in each file:


In the main .vstemplate file:

  <WizardExtension>
    <Assembly>Wizard, Version=[Version], Culture=Neutral, PublicKeyToken=[PublicKey]</Assembly>
    <FullClassName>[Namespace].QuinoWizard</FullClassName>
  </WizardExtension>

In Core.vstemplate:

  <WizardExtension>
    <Assembly>Wizard, Version=[Version], Culture=Neutral, PublicKeyToken=[PublicKey]</Assembly>
    <FullClassName>[Namespace].CoreWizard</FullClassName>
  </WizardExtension>

In Winform.vstemplate:

  <WizardExtension>
    <Assembly>Wizard, Version=[Version], Culture=Neutral, PublicKeyToken=[PublicKey]</Assembly>
    <FullClassName>[Namespace].WinformWizard</FullClassName>
  </WizardExtension>

The public key is the same in all three cases and can be determined, for example, with tools like dotPeek from JetBrains. The FullClassName, on the other hand, must refer to the respective wizard of the template.


Visual Studio offers some practical aids for testing the finished template. You can simply press F5 and a new instance of Visual Studio is started in which the new template is already registered. This has the advantage that you can debug the generation of the projects and thus easily find any errors.

Once the template is finished and working, you can build it in release mode, which generates a zip file containing the project template. You can then either copy this zip file into the folder %UserProfile%\Documents\Visual Studio 2012\Templates\ProjectTemplates\Visual C#, where the project template is available to the current user, or into the folder %ProgramFiles%\Microsoft Visual Studio 11.0\Common7\IDE\ProjectTemplates\CSharp, whereupon the new project template appears in the New Project dialog for all users.
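As a sketch, installing the template for the current user is a single copy from a command prompt; the zip file name below is an assumption (it depends on the template project's name and output path):

```shell
copy "bin\Release\Quino Application.zip" "%UserProfile%\Documents\Visual Studio 2012\Templates\ProjectTemplates\Visual C#"
```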


Creating your own project template is a good way -- especially for framework developers -- to make working with the framework easier. With a few mouse clicks, the user can create a functional application and immediately sees the basic design patterns. This then makes it possible to extend the application according to his or her own needs.

Of course, creating and maintaining your own project template also involves some effort, since the template has to be checked and, if necessary, adapted for every new Quino version. It makes sense to automate this process: the project template can be built fully automatically on a build server and a project generated from it. The build server can then in turn build that project and thus verify that the project template still works.

To simplify installing the template on different machines, we also created an installation program that registers the wizard in the GAC and copies the project template into the correct directory.

All in all, creating the project template for Quino took some effort, but I would definitely say that the benefit gained more than makes up for it.