Why you shouldn't use Bootstrap when you have a style guide

From a customer, we got the request to apply a visual style guide (VSG) to a Bootstrap-based application. Since we have a lot of experience with applying style guides to web applications and with styling in general, we accepted the job and started to evaluate the details.

Which version of Bootstrap to use

The most recent stable version of Bootstrap is 3.3.6. However, the Bootstrap website announces that Bootstrap 4 "is coming". Bootstrap 4 is currently in alpha and the last blog post is from December 2015, which is almost half a year ago. It is also not clear when version 4 will finally be available and stable, so we had to use the older Bootstrap 3 for this project.

But even here, there is some confusion: Bootstrap was initially developed with LESS but, for some reason, the team decided to switch to SASS. Even though we prefer LESS at Encodo, we decided to use SASS for this project so that we can upgrade to Bootstrap 4 more easily once it's available. There is a SASS version of Bootstrap available, which we decided to use as the base for this project.

How to customize Bootstrap

Bootstrap is a GUI library that is intended to be as simple as possible to use for the consuming developer. Unfortunately, this does not mean that it is also simple to create a theme for it or to modify some of the existing components.

There is a customization section on the Bootstrap website that allows you to select the components you need and change some basic things like colors and a few other options. This might be very nice if you just want to use Bootstrap with your own colors, but since we had a style guide with a layout quite different from Bootstrap's, we could not use this option.

So we decided to clone the entire Bootstrap library, make our changes and then build our own custom version of Bootstrap. This approach makes it possible to add custom components and to change the appearance of existing elements.
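To sketch what that looks like in practice, a custom build's entry point can redefine the library's defaults before pulling in the framework itself. The variable names below exist in Bootstrap 3's _variables.scss; the values and the file layout are invented for illustration:

```scss
// Hypothetical override file for a custom Bootstrap 3 (SASS) build.
// Redefine Bootstrap variables *before* importing the library so the
// generated CSS uses the style guide's values instead of the defaults.
$brand-primary: #0a6e8a;
$font-family-sans-serif: "Corporate Sans", Helvetica, Arial, sans-serif;
$border-radius-base: 0;

// Pull in the (cloned and possibly modified) Bootstrap sources.
@import "bootstrap/bootstrap";
```

Because the overrides live in a separate file, upgrading the cloned Bootstrap sources later only requires re-checking the changed components, not the variables.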

Problems we ran into

Bootstrap provides support for all kinds of browsers, including Internet Explorer down to version 8. While this is nice for developing an application that runs anywhere, it makes the SASS styling code very hard to read and edit. It also means you cannot use modern technologies such as Flexbox, which makes styling a lot easier and has been the basis of every layout we have created in the recent past.

Another important point is that the components are not really modular. For example, the styles for buttons are defined in one file, but there are many other places with styles that modify the appearance of a button based on its container.

Also, the styles are defined "inside-out", which means that the size of a container is determined by its content. Style guides normally work the other way around. All of this makes it hard to change the structure of the page without affecting everything else, especially when you try to use the original Bootstrap HTML markup, which may not match the needs of the desired layout.

Adding to the struggle is the complex build and documentation system used in the Bootstrap project. It might be great that Bootstrap itself is used for its documentation, but I cannot understand why there is another CSS file with 1600 lines of code that changes some things specifically for the documentation. Of course, this messes up our painstakingly crafted Bootstrap styles again. In the end, we had to remove this file from our demo site, which broke styling for some documentation-specific features (like the sidebar menu).

Another point of concern is that Bootstrap uses jQuery plugins for controls that require JavaScript interaction. This might be fine for simple websites that just need some basic interaction, but it is counterproductive for real web applications because jQuery's event handling can interfere with web-application frameworks such as React or Angular.

When to use Bootstrap

I do not think that Bootstrap is a bad library, but it is not really suitable for projects like this one. The main use case of Bootstrap is to provide a good-looking layout for a website with little effort and little prior knowledge required. If you just want to put some information on the web and do not care much about how it looks, as long as it looks good, then Bootstrap is a good option for you.

If you'd like more information about this, then please feel free to contact us!

Optimizing compilation and execution for dynamic languages

The long and very technical article Introducing the WebKit FTL JIT provides a fascinating and in-depth look at how a modern execution engine optimizes code for a highly dynamic language like JavaScript.

To make a long story short: the compiler(s) and execution engine optimize by profiling and analyzing code and lowering it to runtimes of ever decreasing abstraction to run as the least dynamic version possible.

A brief history lesson

What does it mean to "lower" code? A programming language has a given level of abstraction and expressiveness. Generally, the more expressive it is, the more abstracted it is from code that can actually be run in hardware. A compiler transforms or translates from one language to another.

When people started programming machines, they used punch cards. Punch cards did not require any compilation because the programmer was directly speaking the language that the computer understood.

The first layer of abstraction that most of us -- older programmers -- encountered was assembly language, or assembler. Assembly code still has a more-or-less one-to-one correspondence between instructions and machine-language codes but there is a bit of abstraction in that there are identifiers and op-codes that are more human-readable.

Procedural languages introduced more types of statements like loops and conditions. At the same time, the syntax was abstracted further from assembler and machine code to make it easier to express more complex concepts in a more understandable manner.

At this point, the assembler (which assembled instructions into machine op-codes) became a compiler, which "compiled" a set of instructions from the more abstract language. A compiler made decisions about how to translate these concepts and could make optimization decisions based on registers, volatility and other settings.

In time, we graduated to functional, statically typed and/or object-oriented languages, with much higher levels of abstraction and much more sophisticated compilers.

Generally, a compiler still used assembly language as an intermediate format, which some may remember from their days working with C++ or Pascal compilers and debuggers. In fact, .NET languages are also compiled to IL -- the "Intermediate Language" -- which corresponds to the instruction set that the .NET runtime exposes. The runtime compiles IL to the underlying machine code for its processor, usually in a process called JIT -- Just-In-Time compilation. That is, in .NET, you start with C#, for example, which the compiler transforms to IL, which is, in turn, transformed to assembler and then machine code by the .NET runtime.

Static vs. Dynamic compilation

A compiler and execution engine for a statically typed language can make assumptions about the types of variables. The set of possible types is known in advance and types can be checked very quickly in cases where it's even necessary. That is, the statically typed nature of the language allows the compiler to reason about a given program without making assumptions. Certain features of a program can be proven to be true. A runtime for a statically typed language can often avoid type checks entirely. It benefits from a significant performance boost without sacrificing any runtime safety.

The main characteristic of a dynamic language like JavaScript is that variables do not have a fixed type. Generated code must be ready for any eventuality and must be capable of highly dynamic dispatch. The generated code is highly virtualized. Such a runtime will execute much more slowly than a comparable statically compiled program.

Profile-driven compilation

Enter the profile-driven compiler, introduced in WebKit. From the article,

The only a priori assumption about web content that our engine makes is that past execution frequency of individual functions is a good predictor for those functions' future execution frequency.

Here a "function" corresponds to a particular overload of a set of instructions called with parameters with a specific set of types. That is, suppose a JavaScript function is declared with one parameter and is called once with a string and 100 times with an integer. WebKit considers this to be two function overloads and will (possibly) elect to optimize the second one because it is called much more frequently. The first overload will still handle all possible types, including strings. In this way, all possible code paths are still possible, but the most heavily used paths are more highly optimized.
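To make that concrete, here is a tiny, invented example of what the profiler sees; this is ordinary application code, not WebKit internals:

```typescript
// One function as written by the author. The parameter is deliberately
// untyped (any) to mirror plain JavaScript.
function double(value: any) {
  return value + value;
}

// ...but two "overloads" as far as a profiling engine is concerned:
double("ab"); // called once with a string: produces "abab"

for (let i = 0; i < 100; i++) {
  double(i); // called 100 times with a number: this is the hot path
}

// An engine like the FTL JIT may emit a version of double() specialized
// for numbers, with a bailout back to the generic version if a string
// ever shows up on this path again.
```

The generic string path still exists and still works; it just never gets the aggressive optimization that the numeric path earns through frequency.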

All of the performance is from the DFG's type inference and LLVM's low-level optimizing power. [...]

Profile-driven compilation implies that we might invoke an optimizing compiler while the function is running and we may want to transfer the function's execution into optimized code in the middle of a loop; to our knowledge the FTL is the first compiler to do on-stack-replacement for hot-loop transfer into LLVM-compiled code.

Depending on the level of optimization, the code contains the following broad sections:

  • Original: code that corresponds to instructions written by the author
  • Profiling: code to analyze which types actually appear in a given code path
  • Switching: code to determine when a function has been executed often enough to warrant further optimization
  • Bailout: code to abandon an optimization level if any of the assumptions made at that level no longer apply

image

While WebKit has included some form of profile-driven compilation for quite some time, the upcoming version is the first to carry the same optimization to LLVM-generated machine code.

I recommend reading the whole article if you're interested in more detail, such as how they avoided LLVM compiler performance issues and how they integrated this all with the garbage collector. It's really amazing how much of what we take for granted the WebKit JS runtime treats as "hot-swappable". The article is quite well-written and includes diagrams of the process and underlying systems.

Converting an existing web application from JavaScript to TypeScript

TypeScript is a new programming language developed by Microsoft with the goal of adding static typing to JavaScript. Its syntax is based on the ECMAScript 6 standard, which is currently being defined by a consortium. The language includes features that most developers know well from other languages like C#: static types, generics, interfaces, inheritance and more.

With this new language, Microsoft tries to solve a problem that many web developers have faced while developing in JavaScript: since the code is not compiled, an error is only detected when the browser actually executes the code (at run time). This is time-consuming, especially when developing for mobile devices, which are not that easy to debug. With TypeScript, the code passes through a compiler before actually being executed by a JavaScript interpreter. In this step, many errors can be detected and fixed by the developer before testing the code in the browser.

Another benefit of static typing is that the IDE can give much more precise hints to the developer as to which elements are available on a given object. In plain JavaScript, pretty much anything is possible and you have to add type hints to your code to get at least some information out of the IDE.
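As a small, invented example of the difference, the annotated function below is fully checked by the compiler before the code ever reaches a browser:

```typescript
// The parameter and return types are verified at compile time, and the
// IDE can offer precise completion for `amount` and `currency`.
function formatPrice(amount: number, currency: string): string {
  return amount.toFixed(2) + " " + currency;
}

const label = formatPrice(19.99, "CHF"); // "19.99 CHF"

// This call would be rejected by the compiler, long before run time:
// formatPrice("19.99", "CHF"); // error: 'string' is not assignable to 'number'
```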

Tools

For a developer, it is very important that the tools for writing code are as good as possible. I tried different IDEs for developing TypeScript and came to the conclusion that the best currently available is Visual Studio 2013 with the TypeScript and Web Essentials plugins. Those plugins are also available for Visual Studio 2012, but the new version feels much quicker when writing TypeScript code: errors are detected almost immediately while typing. As a bonus, you can also install ReSharper. The TypeScript support in the current version 8.0 is almost nonexistent, but JetBrains announced that it will improve dramatically in the next version, 8.1, which is currently available in the JetBrains Early Access Program (EAP).

There is also WebStorm from JetBrains, which has TypeScript support as well, but this tool does not (currently) feel as natural to me as Visual Studio does. I hope (and am pretty sure) that JetBrains is working on this and that there will soon be more good tools available for TypeScript development than just Visual Studio.

Switch the project

The actual switch is pretty straightforward. As a first step, change the file extension of every JavaScript file in your project to .ts. Then create a new TypeScript project in Visual Studio 2013 and include all of your brand-new TypeScript files. Since the TypeScript files are later compiled into similarly named *.js files, you don't have to change the script tags in your HTML files.

When you try to compile your new TypeScript project for the first time, it most certainly won't work. Chances are that you use external libraries like jQuery or something similar. Since the compiler doesn't know these types, it assumes you have errors in the code. To fix this, go to the GitHub project DefinitelyTyped and search for the typed interfaces for all of your libraries. There are also NuGet packages for each of these interfaces, if you prefer that approach. Then you have to add a reference to those interfaces in at least one code file of your project. To include a file, simply add a line like the following at the top of the file:

///<reference path="typings/jquery/jquery.d.ts" />

This should fix most of the errors in your project. If the compiler still has something to complain about, chances are that you've already gotten the first benefit out of TypeScript and found an actual error in your JavaScript code. Once you've fixed all of the remaining errors, your project has been successfully ported to TypeScript. I recommend also enabling source maps for a better debugging experience. I also recommend not including the generated *.js and *.js.map files in source control, because these files are generated by the compiler and would otherwise cause unnecessary merge conflicts.

When your project successfully runs on TypeScript, you can start to add static types to it. The big advantage of TypeScript (versus something like Dart) is that you don't have to do this for the whole project in one go: you can start where you think it brings the most benefit and convert other parts only when you have the time to do so. As a long-term goal, I recommend adding as many types and modules to your project as possible, since this makes future development easier for you and your team.
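A minimal, made-up example of this incremental approach: both versions below compile, so each function can be tightened up on its own schedule:

```typescript
// Step 1: the ported JavaScript compiles as-is; typing the parameter as
// `any` explicitly opts out of checking for now.
function totalLength(items: any) {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i].length;
  }
  return total;
}

// Step 2 (later, when there is time): the same function with explicit
// types. Callers that pass the wrong thing now fail at compile time,
// and nothing else in the project has to change.
function totalLengthTyped(items: string[]): number {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += items[i].length;
  }
  return total;
}
```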

Conclusion

In my opinion, TypeScript is ready to be used in bigger web applications. There might be some worries because TypeScript has not yet reached version 1.0 and there are not many bigger projects implemented in it yet. But this is not a real argument: if you decide at any point in the future that you don't like TypeScript, you can simply compile the project to JavaScript and continue working with the generated code instead.

The most important point for me is that TypeScript forces you and your co-developers to write correct code, which JavaScript does not. If the compiler can't compile the code, you should not be able to push it to the repository. This holds even if you don't have a single class or module statement in the project. Of course, the compiler can't find every possible error, but it can at least find the most obvious ones, which would otherwise cost a lot of time in testing and debugging -- time that you could spend better on something more important.

We do not have real long-term experience with TypeScript (nobody has that) but we decided at Encodo to not start another plain JavaScript project, as long as we have the choice (i.e. unless external factors force us to do so).

Learning HTML5 basics

We recently put together a list of links and references that would be useful for anyone interested in getting up to speed on HTML5 development. These are what we consider to be the absolute basics -- what you need in order to even begin discussing more complex issues of architecture, tiering, data-binding, MVC and so on. So, imagine you are participating in an HTML5 course at Encodo -- you'd probably get something like the following article in order to make sure you're ready.

HTML5 prerequisites

This section describes the knowledge prerequisites for course participants.

Since the course includes a significant hands-on component and will discuss the advantages and disadvantages of architecture-level concepts, a minimum level of proficiency in some basic topics is required.

  • Prerequisites include online resources that can be used to gain the minimum required level of proficiency. The resources are there to help participants learn about a topic. Participants that are already familiar with a topic do not need to use them.
  • Familiarity means "I know what CSS selectors do and know a few of the basic ones." It does not mean "I know every CSS selector by heart", nor does it mean "I read an article about CSS five years ago." Participants are expected to judge their own proficiency honestly and prepare accordingly.
  • Required resources are generally a few pages that can be read in 10-15 minutes. Optional resources are helpful for learning more but aren't required reading.

HTML/DOM

The DOM (Document Object Model) is the data structure on the client side that is rendered in the browser. A participant must be familiar with the basic tags and structure of the DOM as well as common attributes and events, including which ones are new to and deprecated in HTML5.

Required:

Optional:

CSS

Participants must be familiar with the basic syntax and units. A working knowledge of how the basic selectors are applied to elements in the DOM is also required. Knowing how styles cascade and override other styles (specificity rules) is a plus.

Required:

Optional:

JavaScript

It is assumed the participants will be proficient in at least one programming language. An awareness of the common pitfalls and quirks of JavaScript is required: that it is untyped, has a very loose definition of objects and inheritance, and that web apps written in it tend to be quite functional and event-based in nature.

Required:

Optional:

jQuery

jQuery is an industry-standard library that binds HTML, CSS and JavaScript. Participants should know how to use jQuery selectors, which use a CSS-like syntax to select elements from the HTML DOM, to attach events to those elements and to traverse to other elements in the DOM (e.g. children, siblings, etc.).

Required:

Optional:

Browser compatibility

All modern browsers support the basic features outlined in the sections above. The site http://caniuse.com/ is useful for finding out which browsers support the more advanced features.

Practice sites

The following sites provide online sandboxes where participants can enter HTML, CSS and JavaScript to test it without installing or executing anything. They all allow examples to be saved for sharing.

Some new CSS length units (and some lesser-known ones)

This article originally appeared on earthli News and has been cross-posted here.


I've been using CSS since its inception and use many parts of the CSS3 specification for both personal work and work I do for Encodo. Recently, I read about some length units I'd never heard of in the article CSS viewport units: vw, vh, vmin and vmax by Chris Mills.

  • 1vw: 1% of viewport width
  • 1vh: 1% of viewport height
  • 1vmin: 1vw or 1vh, whichever is smaller
  • 1vmax: 1vw or 1vh, whichever is larger

These should be eminently useful for responsive designs. While there is wide support for these new units, that support is only available in the absolute latest versions of browsers. See the article for a good example of how these can be used.

While the ones covered in the article are actually new, there are others that have existed for a while but that I've never had occasion to use. The Font-relative lengths: the em, ex, ch, rem units section lists the following units:

  • em: This one is well-known: 1em is equal to the "computed value of the 'font-size' property of the element on which it is used."
  • ex: Equal to the height of the letter 'x' in the font of the element on which it is used. This is useful when you want to size a container based on the height of a lower-case letter -- i.e. tighter -- rather than on the full size of the font (as you get with em).
  • ch: "Equal to the advance measure of the "0" (ZERO, U+0030) glyph found in the font used to render it." Since all digits in a font should be the same width, this unit is probably useful for pages that need to measure and render numbers in a reliable vertical alignment.
  • rem: The same as em but always returns the value for the root element of the page rather than the current element. Elements that use this unit will all scale against a common size, independently of the font-size of their contents. There's more to the CSS rem unit than font sizing by Roman Rudenko has a lot more information and examples, as well as an explanation of how rem can stand in for the still-nascent support for vw.
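To tie a few of these units together, here is a small, invented stylesheet fragment (the class names and values are made up for illustration):

```css
/* Root font-size: the reference point for every rem value below. */
html { font-size: 16px; }

.hero {
  /* Always half the viewport height, regardless of nesting depth. */
  height: 50vh;
  /* Scales with the root font-size, not with the parent's font-size. */
  padding: 1.5rem;
}

.digits {
  /* Wide enough for exactly eight "0" glyphs in the current font --
     handy for vertically aligned numeric columns. */
  width: 8ch;
}
```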

A rant in Ominor (the decline and fall of the Opera browser)

This article originally appeared on earthli News and has been cross-posted here.


Opera has officially released their first desktop browser based on the Blink engine (forked from WebKit). The vision behind Opera 15 and beyond by Sebastien Baberowski explains how Opera 15...

...is dead on arrival.1

Choose your market

For years, Opera has held a steady 1.7--2% of the desktop browser market. This seems small but comprises dozens of millions of users. More capitalist heads have clearly prevailed at Opera. They've struck out for a more lucrative market. Instead of catering to the 2% of niche, expert users that were die-hard, loyal fans, they will create a clone of Chrome/Firefox/Safari that will cater to a much, much wider market.

In terms of fiscal reasoning, it's not hard to see why they're going in this direction. They will abandon their previous user base -- the hardcore market -- to the thankless chore of downloading and configuring their browsers with buggy extensions that offer half-implemented versions of the features that used to be high-performance and native.

As one such user, I am saddened, but am also almost certain that there is no turning back.2 It's been a good run, though. The browser market will be quite homogenized, but perhaps some enterprising open-source project will take up the flame and build us a better Opera.

Here's how another user put it in the comments for the article,

Opera's main reason was not to spend their time on browser innovation, but to save money. Opera became misinformative, untrustworthy company, disrespectful towards long-time and power users, whose disappointment Opera now tries to appease by extensions and "future" features.

That has been my impression, as well.

Opera does too have features!

Though many of the features that defined Opera for its users are gone -- perhaps to be resurrected -- the company goes out of its way to trumpet its innovation in this latest incarnation of its browser.

The article lightly covers the same four features that they won't shut up about -- Speed Dial, Stash, Discover and Off-road Mode -- and tells loyal Opera users that if "you find that Opera 15 doesn't have a feature you depend upon, first check the growing list of extensions". In other words, Opera is now just Chrome without Google? All of the out-of-the-box features that Opera users have come to expect have just been deep-sixed? And we can all hold out hope that the community develops them for Opera? And we get to spend a ton of time evaluating, downloading, testing and setting up these extensions?

I can "discover" the web just fine on my own without Opera's help. This feature feels more like an AOL/Facebook/Google+ crutch to get me to read catered content. Where's the pro version of the Opera browser? I'm browsing on a desktop with a 150Mb Internet connection -- Off-road Mode is utterly useless for me. Just as Turbo was useless before.

Stash, the Process Model and Memory Hunger

And shall we guess why they're pushing Stash so hard? Because they want to train us to stop keeping so many tabs open. You see, keeping dozens and dozens of tabs open brings any browser other than Opera to its knees. Either that, or the browser soon takes over most of the resources of the machine on which it runs and brings the OS to its knees.

Now that Opera has inherited the process model from the Blink engine, well, they suffer from the same issues that Chrome has: it's just not very good at keeping dozens and dozens of tabs open. Kudos to Opera for at least recognizing the problem and trying to train its users to be more reasonable. It's a bit weird for Opera users to hear this, though, because that was one of the reasons we used their browser in the first place: it just worked and didn't make us change our work habits to accommodate the tool.

Next? Beta? Alpha.

The halcyon days of faster, better and slimmer are, apparently, gone. At least for now. Version 15, though it's called an official release, is, for an Opera user, not even a beta. It is, at best, an early alpha that is nowhere near feature-completeness.

I understand that you want to trim the fat: some non-browsing features can legitimately be moved to other apps or put to sleep. It's certainly arguable that a browser doesn't need its own IRC client, an RSS reader, a mail client or something called Unite.

But intimating that "Fit to Width" is too confusing a feature and won't come back? Removing bookmarks? And sessions? And the whole "Reopen closed windows" feature? And replacing it all with a single-level Speed Dial and something called Stash? And, of course...

The article goes on to cheerfully explain that there is a bookmark manager extension. This extension comes from Opera itself and is the official recommendation from the press release/article linked above. The first few comments should be enough to scare off anyone. This isn't too surprising: the bookmark manager in Opera 12 was barely adequate and had seen little love for years. But it worked. It had folders.3 It synced via Opera Link.

All that is gone. Use Stash instead.

Oh, and anything you configure will be local to that machine until Opera Link is reactivated. No roadmap for that yet. No roadmap for anything, in fact. Just a bunch of promises that "we are looking at your comments and feedback". There's nowhere to actually register that feedback and see whether Opera's considering it (something like Microsoft's "User Voice" would be nice). I can't believe I just wrote that I wish Opera would be more like Microsoft in engaging with the community.

Who thought this was a good idea? Hey, maybe there's an extension for Opera Link? Maybe I can cut&paste my browser together from dozens of extensions? Isn't that why I was using Opera instead of another browser? And even were I to do this, I get to repeat this configuration on absolutely every machine on which I use Opera because...you guessed it: Opera Link is gone, so I don't get any data-synchronization anymore. Not for bookmarks (which are gone anyway) but also not for the Wand (which is gone anyway)4 and certainly not for extensions, which were never synced, even in Opera 12.x.5

How in the name of all that is holy is moving bookmarks to an extension a move that offers a "UI simple enough to be intuitive for a consumer who wants a solid, fast browser that just works"?

Well, of course everything just works -- your browser no longer has any features.

So the check list of features in Opera 15 consists of "show web pages" which comes free by including the Chromium project. Whoop-de-doo.

Wait and see

I can't believe I'm writing this because I've always upgraded to the latest version, but: you can stick Opera 15 where the sun doesn't shine; I'm sticking with Opera 12. I'm happy with that for now, but I know it's not a long-term -- or even medium-term -- solution. Sigh.



  1. Disclaimer: I've been using Opera since version 3.6. About ten years ago, I joined an early-tester program to help them build their Mac browser (for the egotistical reason that I wanted to use Opera on my Mac). I'm still enrolled in that program, though my participation is considerably less than it used to be.

  2. And no, the article Ctrl+Z of Ctrl+D by Krystian Kolondra, in which Opera backpedals and swears that they will restore native bookmarks, is far from reassuring. The product strategy is clear; a bit of backpedaling on one feature doesn't change very much.

  3. Even though you couldn't see the bookmark-folder hierarchy very well -- or at all on the Mac -- when selecting one in the drop-down.

  4. To be honest, I've long since moved on to LastPass because a browser-specific password solution was too limiting for my work.

  5. At least Google solved the customization problem to some degree by saving your extensions as part of your account and syncing them whenever you log in from somewhere. That's a good start. But many of the Chrome extensions are pale imitations of the classic Opera features so Chrome is at best a partially satisfactory fallback position.

The HTML5 AppCache and HTTP Authentication

The following article outlines a solution to what may end up being a temporary problem. The conditions are very specific: no server-side logic; HTTP authentication; AppCache as it is implemented by the target platforms -- Safari Mobile and Google Chrome -- in late 2012/early 2013. The solution is not perfect but it's workable. We're sharing it here in the hope that it can help someone else or serve as a base for a better solution.

The HTML5 AppCache

The application cache is a relatively new feature with the following characteristics:

  • It is supported by all modern browsers
  • A manifest file indicates which files to cache
  • The browser checks the manifest for changes
  • If there are changes, all files are refreshed
  • External links work when online
  • When offline, the application works with the local cache
  • External links to non-cached content are redirected to fallback links
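As an illustration, a minimal manifest looks something like the following. The file names are invented; the comment line is the conventional place to bump a version number, since any byte change in the manifest triggers a full refresh:

```
CACHE MANIFEST
# v42 -- changing this comment is enough to make the browser re-download
# every file listed below

CACHE:
index.html
styles/app.css
scripts/app.js

FALLBACK:
/content/ /offline.html

NETWORK:
*
```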

AppCache Limitations

Web applications can use the HTML5 application-cache to store local content, but different browsers apply different restrictions to the amount of space allocated per domain.

  • Safari Mobile is limited to 50MB per domain. This means that the restriction will generally apply to all content packages downloaded from the same server/domain
  • Google Chrome is limited as well, but the actual limit is a bit of a moving target

Optimizing the HTML5 AppCache for Authenticated Content

In particular, the Safari Mobile browser cannot update the application cache for files for which it must obtain authentication.

  • Some requests do not trigger authentication
    • Manifest file
    • Home-screen icons
  • A lost connection or timeout can invalidate the authentication token
  • Version checks are not reliable
    • Open pages/running apps do not check for status updates
    • Home-screen apps don't reliably check on startup
    • This can lead to out-of-date or missing content

Checking for and presenting updates to the user

The graphic below illustrates the mechanism by which a content package in a web application can manage content updates and present them to the user.

  • When online, the software regularly checks whether an update for the package is available
  • The user can determine whether to install an update
  • When an update has been found, the software stops checking for updates until the user has applied the latest update
  • If the user delays the update, the user interface displays an update button
  • The software will automatically start checking for updates whenever it detects that it is online
  • There is no way for a user to ignore an update outright; it can only be postponed
  • When the user proceeds with an update, the latest version is retrieved at that time, ensuring that the user has the latest version
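
The update-checking flow described above can be sketched as a small state machine. This is a minimal illustration, not the actual application code: the names (UpdateChecker, checkForUpdate, applyUpdate) are hypothetical, and the version-fetching function is injected so the logic can be shown independently of any browser API.

```javascript
// Sketch of the update-checking flow described above (names are illustrative).
// fetchVersion is injected: in a browser it would request the server's
// version file; here it can be any function returning the latest version.
function UpdateChecker(fetchVersion, currentVersion) {
  this.fetchVersion = fetchVersion;
  this.currentVersion = currentVersion;
  this.pendingVersion = null; // set once an update has been found
}

// Called regularly while online; once an update is pending, checking is
// suspended until the user applies it.
UpdateChecker.prototype.checkForUpdate = function () {
  if (this.pendingVersion !== null) {
    return false; // already waiting for the user; no further checks needed
  }
  var latest = this.fetchVersion();
  if (latest !== this.currentVersion) {
    this.pendingVersion = latest; // the UI would now show an update button
    return true;
  }
  return false;
};

// Called when the user chooses to update; re-fetches the version so the
// user always gets the latest content, even if it changed again meanwhile.
UpdateChecker.prototype.applyUpdate = function () {
  this.currentVersion = this.fetchVersion();
  this.pendingVersion = null; // resume regular checking
};

var checker = new UpdateChecker(function () { return "v2"; }, "v1");
checker.checkForUpdate(); // true: an update is now pending
checker.checkForUpdate(); // false: checking pauses until the user updates
```

Injecting the fetch function also makes the state machine trivially testable without a server or a browser.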

[Figure: the update-check and user-notification flow described above]

Solving the problems with authenticated data and the AppCache

In order to address the problems described above, the application uses a separate version file to check for updates independently of the browser's application-cache mechanism and to trigger this update only when authentication has been reestablished.

[Figure: the version-file update mechanism described below]

  • The cache.version.txt file is publicly available but is very small and includes only a unique version number that is also included in the cache.manifest file (both of which are generated by a deployment script).
  • The software compares this version number against the last known good version number. If it differs, it knows that the server has been updated with new content for this package
  • Before the software can kick off the HTML5 AppCache update process, it must ensure that the user is authenticated and authorized to retrieve the update package (because most browsers will simply fail silently if this is not the case).
  • The software pulls the force.password.txt file from the private zone with an explicit request. The browser will ask the user to authenticate, if necessary. This file is also very small to avoid needlessly downloading a large amount of data simply to force re-authentication.
  • Once the user has authenticated, the software lets the automated HTML5 AppCache update take over, retrieving first the cache.manifest file and then updating files as needed. The user is notified that this download is taking place asynchronously.
  • The software receives a notification from the browser that the update is complete and can record the version number and then notify the user that the update has been applied and is ready to use.
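
The sequence above can be sketched as follows. The file names mirror those in the description, but the httpGet helper, the checkAndUpdate name and the injected startAppCacheUpdate callback are hypothetical; in a browser, that callback would wrap window.applicationCache.update().

```javascript
// Sketch of the authenticated update check described above.
// httpGet is a hypothetical injected helper that performs an HTTP GET and
// returns the response body (an XMLHttpRequest in a real browser).
function checkAndUpdate(httpGet, lastKnownVersion, startAppCacheUpdate) {
  // 1. Compare the small, public version file against the last known version.
  var serverVersion = httpGet("cache.version.txt");
  if (serverVersion === lastKnownVersion) {
    return lastKnownVersion; // nothing to do
  }
  // 2. Force authentication with a tiny request to the private zone; the
  //    browser prompts for credentials here if necessary. Without this step,
  //    most browsers simply fail silently to update the AppCache.
  httpGet("force.password.txt");
  // 3. Let the automatic HTML5 AppCache update take over (in a browser,
  //    window.applicationCache.update()); the caller records the new
  //    version once the browser reports that the update is complete.
  startAppCacheUpdate();
  return serverVersion;
}
```

The ordering is the important part: the version comparison is cheap and unauthenticated, and the expensive, authenticated AppCache update only starts once both a new version and valid credentials are known to exist.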

This approach worked relatively well for us, although we continue to refine it based on feedback and experience.

asm.js: a highly optimizable compilation target

This article originally appeared on earthli News and has been cross-posted here.


The article Surprise! Mozilla can produce near-native performance on the Web by Peter Bright takes a (very) early look at asm.js, a compilation target that the Mozilla foundation is pushing as a way to bring high-performance C++/C applications (read: games) to browsers.

The tool chain is really, really cool. The Clang compiler has really come a long way and established itself as the new, more flexible compiler back-end to use (Apple's XCode has been using it since version 3.2 and it's been the default since XCode 4.2). Basically, Mozilla hooked up a JavaScript code generator to the Clang tool-chain. This way, they get compilation, error-handling and a lot of optimizations for free. From the article,

[The input] language is typically C or C++, and the compiler used to produce asm.js programs is another Mozilla project: Emscripten. Emscripten is a compiler based on the LLVM compiler infrastructure and the Clang C/C++ front-end. The Clang compiler reads C and C++ source code and produces an intermediate platform-independent assembler-like output called LLVM Intermediate Representation. LLVM optimizes the LLVM IR. LLVM IR is then fed into a backend code generator -- the part that actually produces executable code. Traditionally, this code generator would emit x86 code. With Emscripten, it's used to produce JavaScript.

Mozilla has had a certain amount of success with it, but if you read all the way through the article, the project is very much a work in progress. The benchmarks executed by Ars Technica, however, bear out Mozilla's claims of being within shooting distance of native performance (for some usages; e.g. native MT applications still blow it away because JavaScript lacks support for multi-threading and shared memory structures).

Just compiling C++/C code to JavaScript is only part of the solution: that wouldn't necessarily generate code that's any faster than hand-tuned JavaScript. The trick is to optimize the compilation target -- that is, if the code is going to be generated by a compiler, that compiler can avoid using JavaScript language features and patterns that are hard or impossible to optimize (read the latest spec to find out more). Not only that, but if the JavaScript engine is asm.js-aware, it will also be able to apply even more optimizations because the input code will be guaranteed not to make use of any dynamic features that require much more stringent checking and handling. From the article,

An engine that knows about asm.js also knows that asm.js programs are forbidden from using many JavaScript features. As a result, it can produce much more efficient code. Regular JavaScript JITs must have guards to detect this kind of dynamic behavior. asm.js JITs do not; asm.js forbids this kind of dynamic behavior, so the JITs do not need to handle it. This simpler model -- no dynamic behavior, no memory allocation or deallocation, just a narrow set of well-defined integer and floating point operations -- enables much greater optimization.
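
To illustrate what this restricted subset looks like, here is a minimal asm.js-style module (my own sketch, not from the article). The "use asm" directive and the |0 coercions restrict all values to 32-bit integers -- exactly the kind of guarantee that lets an asm.js-aware JIT skip dynamic-type checks. And because asm.js is a strict subset of JavaScript, the module runs unchanged in engines that know nothing about it:

```javascript
// A minimal asm.js-style module: "use asm" plus |0 coercions restrict all
// values to 32-bit integers, making the code statically typable.
function MinimalAsmModule(stdlib) {
  "use asm";
  function add(x, y) {
    x = x | 0; // parameter type annotation: int
    y = y | 0;
    return (x + y) | 0; // the result is also coerced to int
  }
  return { add: add };
}

// Works in any JavaScript engine; an asm.js-aware engine can compile the
// whole module ahead of time instead of interpreting and profiling it.
var mod = MinimalAsmModule({});
mod.add(2, 3); // 5
```

Note that the |0 coercion also gives C-like wraparound semantics: the result is always truncated to a signed 32-bit integer, just as it would be in the original C code that Emscripten translated.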

While the results so far are quite positive, there are still a few issues to address:

  • asm.js scripts are currently quite large; Chrome would barely run them at all and even Firefox needed to be restarted every once in a while. Guess which browser handled the scripts with aplomb? That's right: IE10
  • asm.js also preallocates a large amount of memory, managing its own heap and memory layout (using custom-built VMTs to emulate objects rather than using the slower dynamic typing native to JavaScript). This preallocation means that a script's base footprint is much larger than that for a normal JavaScript application.
  • Browsers that haven't optimized the asm.js codepath run it more slowly than regular JavaScript that does the same thing
  • Source-level debugging is not available and debugging the generated JavaScript is a fool's errand

Networking event #1 2013: Working with HTML5

Our first networking event of the year is scheduled for tonight (19.04) with a presentation on HTML5 development. The talk, to be presented by Marco, will cover our experiences developing a larger project for the web.

Here's the main overview:

  • Project parameters: what did we build?

  • Components, libraries and features

    • HTML5 tags & objects
    • CSS3 concepts
    • jQuery basics
  • Tools

    • IDE & Browser
    • Testing & Optimization

You can find the entire presentation in the documents section.

Updating to a touch-friendly UI

This article originally appeared on earthli News and has been cross-posted here.


I was recently redesigning a web page and wanted to make it easier to use from touch-screen browsers. Links made only of text are relatively easy to click with a mouse, but tend to make poor touch targets. If the layout has enough space around the link, this can be remedied by applying CSS.

The basic box

[Live demo: a gray box containing the links First, Second and Third]

Suppose we have a box with three links in it, as shown to the right.

Setting the height

The first step is to make this box taller, so the logical thing to do is to set the height. We'll have to pick a value, so set height: 40px on the gray box.

[Live demo showing the result of this step]

Aligning vertically

This isn't exactly what we want, though; we'd rather have the vertical space equally distributed. Also, if you hover over the links, you can see that the space below the text is not active. Maybe we can try to add vertical-align: middle to align the content.

[Live demo showing the result of this step]

Unfortunately, this doesn't have the desired effect. The vertical-align property works when used this way in table cells, but otherwise has no effect for block elements. Knowing that, we can set display: table-cell for the gray box.

[Live demo showing the result of this step]

And now the box has become wider, because the 50% width of the box is calculated differently for table cells than for regular boxes (especially when a table cell is found outside of a table).

Relative positioning

Let's abandon the vertical-alignment approach and try using positioning instead. Set position: relative and top: 25% to center the links vertically.

[Live demo showing the result of this step]

Now that looks much better, but the space above and below the links is still not active. Perhaps we can use the height trick again, to make the individual links taller as well. So we set height: 100% on each of the links.

[Live demo showing the result of this step]

We didn't get the expected result, but we should have expected that: the links are inline elements and can only have a height set if we set display: inline-block on each link as well. We use inline-block rather than block so that the links stay on the same line.

[Live demo showing the result of this step]

The links are now the right size, but they stick out below the gray box, which isn't what we wanted at all. We're kind of out of ideas with this approach, but there is another way we can get the desired effect.

Positive padding and negative margins

Let's start with the original gray box and, instead of choosing a random height as we did above -- 40px -- let's set padding: 8px on the gray box to make room above and below the links.

[Live demo showing the result of this step]

With just one CSS style, we've already got the links nicely aligned and, as an added benefit, this technique scales even if the font size is changed. The 8-pixel padding is preserved regardless of how large the font gets.1

[Live demo showing the result of this step]

This approach seems promising, but the links are still not tall enough. The naive approach of setting height: 100% on the links probably won't work as expected, but let's try it anyway.

[Live demo showing the result of this step]

It looks like the links were already 100% of the height of the container; in hindsight it's obvious, since the height of the gray box is determined by the height of the links. The 100% height refers to the client area of the gray box, which doesn't include the padding.

We'd actually like the links to have padding above and below just as the gray box has. As we saw above, the links will only honor the padding if they also have display: inline-block, so let's set that in addition to padding: 8px.

[Live demo showing the result of this step]

We're almost there. The only thing remaining is to make the vertical padding of the links overlap with the vertical padding of the gray box. We can do this by using a negative vertical margin, setting margin: -8px.

[Live demo showing the final result]

We finally have the result we wanted. The links are now large enough for the average finger to strike without trying too hard. Welcome to the CSS-enabled touch-friendly world of web design.

The code for the final example is shown below; the sizing and positioning styles are the ones that matter:

.gray-box
{
  background-color: gray;
  border: 1px solid black;
  border-width: 1px 0;
  width: 50%;
  text-align: center;
  padding: 8px 0;
}

.gray-box a
{
  background-color: #8F8F8F;
  display: inline-block;
  padding: 8px 20px;
  margin: -8px 0;
}

<div class="gray-box">
  <a href="#" style="color: goldenrod">First</a>
  <a href="#" style="color: gold">Second</a>
  <a href="#" style="color: yellowgreen">Third</a>
</div>


  1. Naturally, we could also use .8em instead and then the padding will scale with the font size. This would work just as well with the height. Let's pretend that we're working with a specification that requires an 8-pixel padding instead of a flexible one.