On project maintenance

Consider the following scenarios:

  • You maintain a legacy project or your once-greenfield project has now turned a year (or two) old
  • You’ve been busy programming and have been pummelled by project admin

Under the stresses that come with the combination of these two scenarios, software developers often overlook one critical aspect of a successful, future-proof project: maintenance of external packages.

I recently sat down and wrote an email explaining how I go about package-maintenance and thought it would be useful to write up those notes and share them with others.

The tech world moves quickly; new code styles, frameworks and best practices evolve in the blink of an eye. Before you know it, the packages you'd installed the previous year are no longer documented and there aren't any blog posts describing how to upgrade them to their latest versions. Nightmare.

My general rule of thumb to avoid this ill-fated destiny is to set aside some time each sprint to upgrade packages. The process isn't really involved, but it can be time-consuming if you upgrade a handful of packages at once and find that one of them breaks your code. You then have to go through them one by one, downgrading each in turn to figure out which is the culprit.

My upgrade procedure (in this case using the yarn package manager) is as follows; a consolidated sketch of the commands appears after the list:

  • Check which packages are due for upgrade - yarn outdated
  • Look through the READMEs for each of the outdated packages and check if any of the changes are likely to impact your codebase
  • Upgrade those packages that don't appear to have changed significantly - yarn add clean-webpack-plugin@latest, or yarn add clean-webpack-plugin@VERSION_NUMBER to install a specific version
  • Run the project’s test suite and check if the application still works. Fix any issues as required
  • Repeat for packages that have significantly changed
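
Taken together, a typical upgrade session looks roughly like the following sketch. The package name is just the example from above, VERSION_NUMBER is a placeholder, and the final command assumes the project defines a test script:

# see which packages have newer versions available
yarn outdated

# upgrade a package whose changes look harmless
yarn add clean-webpack-plugin@latest

# or pin a specific version if the latest release is too big a jump
yarn add clean-webpack-plugin@VERSION_NUMBER

# run the test suite to verify that nothing broke (assumes a "test" script exists)
yarn test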

Tom Szpytman is a Software Developer at Encodo and works primarily on the React/Typescript stack

2017 Migration from ESXi 5.5 to Proxmox 5.0

Here at Encodo, we host our services on our own infrastructure which, after 12 years, has grown quite large. But this article is about our migration away from VMware.

So, here's how we proceeded:

Before buying the new server, we set up a test environment as close as possible to the new one in order to test everything. This was our first contact with software RAID and its monitoring capabilities.

Install the Hypervisor

Installation time, here it goes:

  • Install the latest Proxmox1: This is very straightforward, so I won't go into that part.
  • After the installation is done, log in via ssh and check the syslog for errors (we had some NTP issues, so I fixed that before doing anything else).

Check Disks

We have our 3 disks for our RAID 5. We do not have a lot of files to store, so we use 1TB disks, which should still be OK (see Why RAID 5 stops working in 2009 as to why you shouldn't do RAID 5 anymore).

We set up Proxmox on a 256GB SSD. Our production server will have 4x 1TB SSDs, one of which is a spare. Note down the serial numbers of all your disks. I don't care how you do it -- take pictures or whatever -- but if you ever need to know which slot contains which disk, or whether the failing disk is actually in that slot, having solid documentation really helps a ton.

You should check your disks for errors beforehand! Do a full smartctl check and find out which disks are which. This is key: we even took pictures prior to inserting the disks into the server (and put them in our wiki) so that we have the serial number available for each slot.

See which disk is which:

for x in {a..e}; do smartctl -a /dev/sd$x | grep 'Serial' | xargs echo "/dev/sd$x: "; done

Start a long test for each disk:

for x in {a..e}; do smartctl -t long /dev/sd$x; done

See SMART tests with smartctl for more detailed information.

Disk Layout & Building the RAID

We'll assume the following hard disk layout:

/dev/sda = System Disk (Proxmox installation)
/dev/sdb = RAID 5, 1
/dev/sdc = RAID 5, 2
/dev/sdd = RAID 5, 3
/dev/sde = RAID 5 Spare disk
/dev/sdf = RAID 1, 1
/dev/sdg = RAID 1, 2
/dev/sdh = Temporary disk for migration

When the check is done (usually a few hours), you can verify the test result with

smartctl -a /dev/sdX

Now that we know our disks are OK, we can proceed with creating the software RAID. Make sure you get the correct disks:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

The RAID 5 will start building immediately, but you can also start using it right away. Since I had other things to do, I waited for it to finish.

Add the spare disk (if you have one) and save the configuration to the mdadm config file:

mdadm --add /dev/md0 /dev/sde
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Configure Monitoring

Edit the email address in /etc/mdadm/mdadm.conf to a valid mail address within your network and test it via

mdadm --monitor --scan --test -1

Once you know that your monitoring mails come through, add active monitoring for the raid device:

mdadm --monitor --daemonise --mail=valid@domain.com --delay=1800 /dev/md0

To finish up monitoring, it's important to read mismatch_cnt from /sys/block/md0/md/mismatch_cnt periodically to make sure the hardware is OK. We use our very old Nagios installation for this and got a working script for the check from the article Mdadm checkarray by Thomas Krenn.
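
As a rough illustration (a minimal sketch, not the Thomas Krenn script itself), such a check only needs to read the counter and complain when it isn't zero. It assumes the array is /dev/md0, as above, and uses Nagios-style exit codes:

#!/bin/bash
# minimal sketch of a mismatch_cnt check for /dev/md0
COUNT=$(cat /sys/block/md0/md/mismatch_cnt)
if [ "$COUNT" -eq 0 ]; then
    echo "OK - md0 mismatch_cnt is 0"
    exit 0
else
    echo "CRITICAL - md0 mismatch_cnt is $COUNT"
    exit 2
fi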

Creating and Mounting Volumes

Back to building! We now need to make the newly created storage available to Proxmox. To do this, we create a PV, a VG and an LV thin pool. We use 90% of the storage for the thin pool since we need to migrate other devices as well; the remaining 10% is enough for us to migrate two VMs at a time, and we format it with XFS:

# create the physical volume and volume group on the RAID device
pvcreate /dev/md0
vgcreate raid5vg /dev/md0
# create the thin pool (named raid5lv, as referenced in Proxmox below) with 90% of the space
lvcreate -l 90%FREE -T raid5vg/raid5lv
# create the migration volume with the remaining space and format it with XFS
lvcreate -n migrationlv -l 100%FREE raid5vg
mkfs.xfs /dev/mapper/raid5vg-migrationlv

Mount the formatted migration logical volume (if you want it to survive a reboot, add it to fstab, obviously):

mkdir /mnt/migration
mount /dev/mapper/raid5vg-migrationlv /mnt/migration

If you don't have the disk space to migrate the VMs like this, add an additional disk (/dev/sdh in our case). Create a new partition on it with

fdisk /dev/sdh
n

Accept all the defaults to use the maximum size. Then format the partition with XFS and mount it:

mkfs.xfs /dev/sdh1
mkdir /mnt/largemigration
mount /dev/sdh1 /mnt/largemigration

Now you can go to your Proxmox installation and add the thin pool (and your largemigration partition, if you have it) under Datacenter -> Storage -> Add. Give it an ID (I called it raid5 because I'm very creative), set Volume Group to raid5vg and Thin Pool to raid5lv.
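
If you prefer the command line to the GUI, the same storage can probably be added with pvesm, the Proxmox storage manager. The following is only a sketch from memory -- the exact option names may differ between Proxmox versions, so verify them with pvesm help before relying on it:

# add the LVM thin pool as storage with the ID "raid5" (sketch -- verify options for your version)
pvesm add lvmthin raid5 --vgname raid5vg --thinpool raid5lv --content images,rootdir

# add the migration mount as plain directory storage
pvesm add dir migration --path /mnt/migration --content images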

Extra: Upgrade Proxmox

At this time, we'd bought our Proxmox license and did a dist-upgrade from 4.4 to 5.0, which had just been released. To do that, follow the upgrade document from the Proxmox wiki. Or install 5.0 right away.

Migrating VMs

Now that the storage is in place, we are all set to create our VMs and do the migration. Here's the process we used -- there are probably more elegant and efficient ways to do it, but this way works for both our Ubuntu installations and our Windows VMs:

  1. In ESXi: Shut down the VM to migrate
  2. Download the vmdk file from the VMware storage, or activate SSH on ESXi and scp the vmdk, including the flat file (important), directly to /mnt/migration (or largemigration, respectively).
  3. Shrink the vmdk if you actually downloaded it locally (use the non-flat file as input if the flat doesn't work):2
     vdiskmanager-windows.exe -r vmname.vmdk -t 0 vmname-pve.vmdk
  4. Copy the new file (vmname-pve.vmdk) to proxmox via scp into the migration directory /mnt/migration (or largemigration respectively)
  5. Ssh into your proxmox installation and convert the disk to qcow2:
     qemu-img convert -f vmdk /mnt/migration/vmname-pve.vmdk -O qcow2 /mnt/migration/vmname-pve.qcow2
  6. In the meantime you can create a new VM:
    1. In general: give it the same resources as it had in the old hypervisor
    2. Do not attach a cd/dvd
    3. Set the disk to at least the size of the vmdk image
    4. Make sure the image is in the "migration" storage
    5. Note the ID of the VM; you'll need it in the next step
  7. Once the conversion to qcow2 is done, overwrite the existing image with the converted one. Make sure you get the correct ID and that the target .qcow2 file exists. Overwrite with no remorse:
     mv /mnt/migration/vmname-pve.qcow2 /mnt/migration/images/<vm-id>/vm-<vm-id>-disk-1.qcow2
  8. When this is done, boot the image and test if it comes up and runs
  9. If it does, go to Proxmox and move the disk to the RAID 5:
    1. Select the VM you just started
    2. Go to Hardware
    3. Click on Hard Disk
    4. Click on Move Disk
    5. Select the RAID 5 storage and check the Delete source checkbox
    6. This will happen live

That's it. Now repeat these last steps for all the VMs -- in our case around 20, which is just barely manageable without any automation. If you have more VMs, you could automate more of the process, for example copying the VMs directly from ESXi to Proxmox via scp and doing the initial conversion there.
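
As a sketch of what that automation could look like: the ESXi hostname, datastore path, VM name and VM ID below are all placeholders, and it assumes SSH is enabled on the ESXi host and that the target VM has already been created in Proxmox:

# copy the descriptor and flat file straight from the ESXi datastore to the migration volume
ESXI=root@esxi.example.local
SRC=/vmfs/volumes/datastore1/vmname
VMID=105
scp "$ESXI:$SRC/vmname.vmdk" "$ESXI:$SRC/vmname-flat.vmdk" /mnt/migration/

# convert to qcow2 directly over the empty disk of the prepared VM
qemu-img convert -f vmdk /mnt/migration/vmname.vmdk -O qcow2 \
    /mnt/migration/images/$VMID/vm-$VMID-disk-1.qcow2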


  1. We initially installed Proxmox 4.4, then upgraded to 5.0 during the migration.

  2. You can get the vdiskmanager from Repairing a virtual disk in Fusion 3.1 and Workstation 7.1 (1023856) under "Attachments"

Is Encodo a .NET/C# company?

Encodo has never been about maintaining or establishing a monoculture in either operating system, programming language or IDE. Pragmatism drives our technology and environment choices.1

Choosing technology

Each project we work on has different requirements and we choose the tools and technologies that fit best. A good fit involves considering:

  • What exists in the project already?
  • How much work needs to be done?
  • What future directions could the project take?
  • How maintainable is the solution/are the technologies?
  • How appropriate are various technologies?
  • What do our developers know how to do best?
  • What do the developers who will maintain the project know best? What are they capable of?
  • Is there framework code available that would help?

History: Delphi and Java

When we started out in 2005, we'd already spent years writing frameworks and highly generic software. This kind of software is not really a product per se, but more of a highly configurable, programmable "engine", in which other programmers would write their actual end-user applications.

A properly trained team can turn around products very quickly using this kind of approach. It is not without its issues, though: maintaining a framework involves a lot of work, especially producing documentation, examples and providing support. While this is very interesting work, it can be hard to make lucrative, so we decided to move away from this business and focus on creating individual products.

Still, we stuck to the programming environment and platform that we knew best2 (and that our customers were requesting): we developed software mostly in Delphi for projects that we already had.3 For new products, we chose Java.

Why did we choose Java as our "next" language? Simply because Java satisfied a lot of the requirements outlined above. We were moving into web development and found Delphi's offerings lacking, both in the IDE as well as the library support. So we moved on to using Eclipse with Jetty. We evaluated several common Java development libraries and settled on Hibernate for our ORM and Tapestry for our web framework (necessitating HiveMind as our IOC).

History: .NET

A few years later, we were faced with the stark reality that developing web applications on Java (at the time) was fraught with issues, the worst of which was extremely slow development-turnaround times. We found ourselves incapable of properly estimating how long it would take to develop a project. We accept that this may have been our fault, of course, but the reality was that (1) we were having trouble making money programming Java and (2) we weren't having any fun anymore.

We'd landed a big project that would be deployed on both the web and Windows desktops, with an emphasis on the Windows desktop clients. At this point, we needed to reëvaluate: such a large project required a development language, runtime and IDE strong on the Windows Desktop. It also, in our view, necessitated a return to highly generic programming, which we'd moved away from for a while.

Our evaluation at the time included Groovy/Grails/Gtk, Python/Django/Gtk, Java/Swing/SWT/web frameworks, etc. We made the decision based on various factors (tools, platform suitability, etc.) and moved to .NET/C# for developing our metadata framework Quino, upon which we would build the array of applications required for this big project.

Today (2014)

We're still developing a lot of software in C# and .NET but also have a project that's built entirely in Python.4 We're not at all opposed to a suggestion by a customer that we add services to their Java framework on another project, because that's what's best there.

We've had some projects that run on a Linux/Mono stack on dedicated hardware. For that project, we made a build-server infrastructure in Linux that created the embedded OS with our software in it.

Most of our infrastructure runs on Linux with a few Windows VMs where needed to host or test software. We use PostgreSQL wherever we can and MS-SQL when the customer requires it.5

We've been doing a lot of web projects lately, which means the usual client-side mix of technology (JS/CSS/HTML). We use jQuery, but prefer Knockout for data-binding. We've evaluated the big libraries -- Angular, Backbone, Ember -- and found them to be too all-encompassing for our needs.

We've evaluated both Dart and TypeScript to see if those are useful yet. We've since moved to TypeScript for all of our projects but are still keeping an eye on Dart.

We use LESS instead of pure CSS. We've used SCSS as well, but prefer LESS. We're using Bootstrap in some projects but find it to be too restrictive, especially where we can use Flexbox for layout on modern browsers.

And, with the web comes development, support and testing for iOS and other mobile devices, which to some degree necessitates a move from pure .NET/C# and toward a mix.

We constantly reëvaluate our tools, as well. We use JetBrains WebStorm instead of Visual Studio for some tasks: it's better at finding problems in JavaScript and LESS. We also use PhpStorm for our corporate web site, including these blogs. We used the Java-based Jenkins build server for years but moved to JetBrains TeamCity because it better supports the kind of projects we need to build.

Conclusion

The description above is meant to illustrate flexibility, not chaos. We are quite structured and, again, pragmatic in our approach.

Given the choice, we tend to work in .NET because we have the most experience and supporting frameworks and software for it. We use .NET/C# because it's the best choice for many of the projects we have, but we are most definitely not a pure Microsoft development shop.

I hope that gives you a better idea of Encodo's attitude toward software development.



  1. If it's not obvious, we employ the good kind of pragmatism, where we choose the best tool for the job and the situation, not the bad kind, founded in laziness and unwillingness to think about complex problems. Just so we're clear.

  2. Remo had spent most of his career working with Borland's offerings, whereas I had started out with Borland's Object Pascal before moving on to the first version of Delphi, then Microsoft C++ and MFC for many years. After that came the original version of ASP.NET with the "old" VB/VBScript and, finally, back to Delphi at Opus Software.

  3. We were actually developing on Windows using Delphi and then deploying on Linux, doing final debugging with Borland's Linux IDE, Kylix. The software to be deployed on Linux was headless, which made it much easier to write cross-platform code.

  4. For better or worse: we inherited a Windows GUI in Python, which is not very practical, but I digress.

  5. Which is almost always, unfortunately.

OpenBSD takes on OpenSSL

image

Much of the Internet has been affected by the Heartbleed vulnerability in the widely used OpenSSL server-side software. The bug effectively allows anyone to collect random data from the memory of machines running the affected software, which was in use on about 60% of encrypted sites worldwide. A massive cleanup effort ensued, but the vulnerability had been in the software for two years, so there's no telling how much information was stolen in the interim.

The OpenSSL software is used not only to encrypt HTTPS connections to web servers but also to generate the certificates that undergird those connections as well as many PKIs. Since data could have been stolen over a period of two years, it should be assumed that certificates, usernames and passwords have been stolen as well. Pessimism is the only way to be sure.1

In fact, any data that was loaded into memory on a server running a pre-Heartbleed version of the OpenSSL software is potentially compromised.

How to respond

We should all generate new certificates, ensuring that the root certificate from which we generate has also been re-generated and is clean. We should also choose new passwords for all affected sites. I use LastPass to manage my passwords, which makes it much easier to use long, complicated and most importantly unique passwords. If you're not already using a password manager, now would be a good time to start.

And this goes especially for those who tend to reuse their password on different sites. If one of those sites is cracked, then the hacker can use that same username/password combination on other popular sites and get into your stuff everywhere instead of just on the compromised site.

Forking OpenSSL

Though there are those who are blaming open-source software, we should instead blame ourselves for using software of unknown quality to run our most trusted connections. That the software was designed and built without the required quality controls is a different issue. People are going to write bad software. If you use their free software and it ends up not being as secure as advertised, you have to take at least some of the blame on yourself.

Instead, the security experts and professionals who've written so many articles and done so many reviews over the years touting the benefits of OpenSSL should take more of the blame. They are the ones who misused their reputations by touting poorly written software to which they had source-code access, but were too lazy to perform a serious evaluation.

An advantage of open-source software is that we can at least pinpoint exactly when a bug appeared. Another is that the entire codebase is available to all, so others can jump in and try to fix it. Sure, it would have been nice if the expert security programmers of the world had jumped in earlier, but better late than never.

The site OpenSSL Rampage follows the efforts of the OpenBSD team to refactor and modernize the OpenSSL codebase. They are documenting their progress live on Tumblr, which collects commit messages, tweets, blog posts and official security warnings that result from their investigations and fixes.

They are working on a fork and are making radical changes, so it's unlikely that the changes will be taken up by the official OpenSSL project, but perhaps a new TLS/SSL tool will be available soon.2

VMS and custom memory managers

The messages tell tales of support for extinct operating systems like VMS, whose continued support makes for much more complicated code to support current OSs. This complexity, in turn, hides further misuses of malloc as well as misuses of custom buffer-allocation schemes that the OpenSSL team came up with because "malloc is too slow". Sometimes memory is freed twice for good measure.

The article Today's bugs have BRANDS? Be still my bleeding heart [logo] by Verity Stob has a (partially) humorous take on the most recent software errors that have reared their ugly heads. As also mentioned in that article, the Heartbleed Explanation cartoon by Randall Munroe illustrates the Heartbleed issue well, even for non-technical people.

Lots o' cruft

This all sounds horrible and one wonders how the software runs at all. Don't worry: the code base contains a tremendous amount of cruft that is never used. It is compiled and still included, but it acts as a cozy nest of code that is wrapped around the actual code.

There are vast swaths of script files that haven't been used for years, that can build versions of the software under compilers and with options that haven't been seen on this planet since before... well, since before Tumblr or Facebook. For example, there's no need to retain a forest of macros at the top of many header files for the Metrowerks compiler for PowerPC on OS 9. No reason at all.

There are also incompatibly licensed components in regular use as well as those associated with components that don't seem to be used anymore.

Modes and options and platforms: oh my!

There are compiler options for increasing resiliency that seem to work. Turning these off, however, yields an application that crashes immediately. There are clearly no tests for any of these modes. OpenSSL sounds like a classically grown system that has little in the way of code conventions, patterns or architecture. There seems to be no one who regularly cleans out and decides which code to keep and which to make obsolete. And, even when code is deemed obsolete, it remains in the code base over a decade later.

Security professionals wrote this?

This is to say nothing of how their encryption algorithm actually works. There are tales on that web site of the OpenSSL developers desperately having tried to keep entropy high by mixing in the current time every once in a while. Or even mixing in bits of the private key for good measure.

A lack of discipline (or skill)

The current OpenSSL codebase seems to be a minefield for security reviewers or for reviewers of any kind. A codebase like this is also terrible for new developers, the onboarding of which you want to encourage in such a widely used, distributed, open-source project.

Instead, the current state of the code says: don't touch, you don't know what to change or remove because clearly the main developers don't know either. The last person who knew may have died or left the project years ago.

It's clear that the code has not been reviewed in the way that it should be. Code on this level and for this purpose needs good developers/reviewers who constantly consider most of the following points during each review:

  • Correctness (does the code do what it should? Does it do it in an acceptable way?)
  • Patterns (does this code invent its own way of doing things?)
  • Architecture (is this feature in the right module?)
  • Security implications
  • Performance
  • Memory leaks/management (as long as they're still using C, which they honestly shouldn't be)
  • Supported modes/options/platforms
  • Third-party library usage/licensing
  • Automated tests (are there tests for the new feature or fix? Do existing tests still run?)
  • Comments/documentation (is the new code clear in what it does? Any tips for those who come after?)
  • Syntax (using braces can be important)

Living with OpenSSL (for now)

It sounds like it is high time that someone does what the BSD team is doing. A spring cleaning can be very healthy for software, especially once it's reached a certain age. That goes double for software that was blindly used by 60% of the encrypted web sites in the world.

It's wonderful that OpenSSL exists. Without it, we wouldn't be as encrypted as we are. But the apparent state of this code bespeaks a failure of management on all levels. The developers of software this important must be of higher quality. They must be the best of the best, not just anyone who read about encryption on Wikipedia and "wants to help". Wanting to help is nice, but you have to know what you're doing.

OpenSSL will be with us for a while. It may be crap code and it may lack automated tests, but it has been manually (and possibly regression-) tested and used a lot, so it has earned a certain badge of reliability and predictability. The state of the code means only that future changes are riskier, not necessarily that the current software is not usable.

Knowing that the code is badly written should make everyone suspicious of patches -- which we now know are likely to break something in that vast pile of C code -- but not suspicious of the officially supported versions from Debian and Ubuntu (for example). Even if the developer team of OpenSSL doesn't test a lot (or not automatically for all options, at any rate -- they may just be testing the "happy path"), the major Linux distros do. So there's that comfort, at least.



  1. As Ripley so famously put it in the movie Aliens: "I say we take off and nuke the entire site from orbit. It's the only way to be sure."

  2. It will, however, be quite a while before the new fork is as battle-tested as OpenSSL.

The Internet of Things

This article originally appeared on earthli News and has been cross-posted here.


The article Smart TVs, smart fridges, smart washing machines? Disaster waiting to happen by Peter Bright discusses the potential downsides to having a smart home1: namely, our inability to create smart software for our mediocre hardware. And once that software is written and spread throughout dozens of devices in your home, it will function poorly and quickly be taken over by hackers because "[h]ardware companies are generally bad at writing software -- and bad at updating it."

And, should hackers fail to crack your stove's firmware immediately, for the year or two where the software works as designed, it will, in all likelihood, "[...] be funneling sweet, sweet, consumer analytics back to the mothership as fast as it can", as one commentator on that article put it.

Manufacturers aren't in business to make you happy

Making you happy isn't even incidental to their business model now that monopolies have ensured that there is nowhere you can turn to get better service. Citing from the article above:

These devices will inevitably be abandoned by their manufacturers, and the result will be lots of "smart" functionality -- fridges that know what we buy and when, TVs that know what shows we watch -- all connected to the Internet 24/7, all completely insecure.

Manufacturers almost exclusively design hardware with extremely short lifetimes, hewing to planned obsolescence. While this is a great capitalist strategy, it is morally repugnant to waste so many resources and so much energy to create gadgets that will break in order to force consumers to buy new gadgets. Let's put that awful aspect of our civilization to the side for a moment and focus on other consequences.

These same manufacturers are going to take this bulletproof strategy to appliances that have historically had much longer lifetimes. They will also presumably take their extremely lackluster reputation for updating firmware and software into this market. The software will be terrible to begin with, it will be full of security holes and it will receive patches for only about 10% of its expected lifetime. What could possibly go wrong?

Either the consumer will throw away a perfectly good appliance in order to upgrade the software or the appliance will be an upstanding citizen of one, if not several, botnets. Or perhaps other, more malicious services will be funneling information about you and your household to others, all unbeknownst to you.

People are the problem2

These are not scare tactics; this is an inevitability. People have proven themselves to be wildly incapable of comprehending the devices that they already have. They have no idea how they work and have only vague ideas of what they're giving up. It might as well be magic to them. To paraphrase the classic Arthur C. Clarke citation: "Any sufficiently advanced technology is indistinguishable from magic" especially for a sufficiently technically oblivious audience.

Start up a new smart phone and try to create your account on it. Try to do so without accidentally giving away the keys to your data-kingdom. It is extremely difficult to do, even if you are technically savvy and vigilant.

Most people just accept any conditions, store everything everywhere, use the same terribly insecure password for everything and don't bother locking down privacy options, even if available. Their data is spread around the world in dozens of places and they've implicitly given away perpetual licenses to anything they've ever written or shot or created to all of the big providers.

They are sheep ready to be sheared by not only the companies they thought they could trust, but also by national spy agencies and technically adept hackers who've created an entire underground economy fueled by what can only be called deliberate ignorance, shocking gullibility and a surfeit of free time and disposable income.

The Internet of Things

The Internet of Things is a catch-phrase that describes a utopia where everything is connected to everything else via the Internet and a whole universe of new possibilities explode out of this singularity that will benefit not only mankind but the underlying effervescent glory that forms the strata of existence.

The article Ars readers react to Smart fridges and the sketchy geography of normals follows up the previous article and includes the following comment:

What I do want, is the ability to check what's in my fridge from my phone while I'm out in the grocery store to see if there's something I need.

That sounds so intriguing, doesn't it? How great would that be? The one time a year that you actually can't remember what you put in your refrigerator. On the other hand, how the hell can your fridge tell what you have? What are the odds that this technology will even come close to functioning as advertised? Would it not be more reasonable for your grocery purchases to go to a database and for you to tell that database when you've actually used or thrown out ingredients? Even if your fridge was smart, you'd have to wire up your dry-goods pantry in a similar way and commit to only storing food in areas that are under surveillance.

The commentator went on to write,

I do agree that security is a huge, huge issue, and one that needs to be addressed. But I really don't see how resisting the "Internet of things" is the longterm solution. The way technology seems to be trending, this is an inevitability, not a could be.

Resisting the "Internet of things" is not being proposed as the long-term solution. It is being proposed as a short- to medium-term solution because the purveyors of this shining vision of nirvana have proven themselves time and again to be utterly incapable of actually delivering the panaceas that they promise in a stream of consumption-inducing fraud. Instead, they consistently end up lining their own pockets while we all fritter away even more precious waking time ministering to the retarded digital children that they've birthed from their poisoned loins and foisted upon us.

Stay out of it, for now

Hand-waving away the almost-certain security catastrophe as if it can be easily solved is extremely disingenuous. This is not a world that anyone really wants to take part in until the security problems are solved. You do not want to be an early adopter here. And you most especially do not want to do so by buying the cheapest, most-discounted model available as people are also wont to do. Stay out of the fight until the later rounds: remove the SIM card, shut off Internet connectivity where it's not needed and shut down Bluetooth.

The best-case scenario is that early adopters will have their time wasted. Early rounds of software promise to be a tremendous time-suck for all involved. Managing a further herd of purportedly more efficient and optimized devices is a sucker's game. The more you buy, the less likely you are to be in charge of what you do with your free time.

As it stands, we already fight with our phones, begging them to connect to inadequate data networks and balky WLANs. We spend inordinate amounts of time trying to trick their garbage software into actually performing any of its core services. Failing that -- which is an inevitability -- we simply live with the mediocrity, wasting our time every day babysitting gadgets and devices and software that are supposed to be working for us.

Instead, it is we who end up performing the same monotonous and repetitive tasks dozens of times every day because the manufacturers have -- usually in a purely self-interested and quarterly revenue-report driven rush to market -- utterly failed to test the basic functions of their devices. Subsequent software updates do little to improve this situation, generally avoiding fixes for glaring issues in favor of adding social-network integration or some other marketing-driven hogwash.

Avoiding this almost-certain clusterf*#k does not make you a Luddite. It makes you a realist, an astute observer of reality. There has never been a time in history when so much content and games and media has been at the fingertips of anyone with a certain standard of living. At the same time, though, we seem to be so bedazzled by this wonder that we ignore the glaring and wholly incongruous dreadfulness of the tools that we are offered to navigate, watch and curate it.

If you just use what you're given without complaint, then things will never get better. Stay on the sidelines and demand better -- and be prepared to wait for it.



  1. Or a smart car or anything smart that works perfectly well without being smart.

  2. To be clear: the author is not necessarily excluding himself here. It's not easy to turn on, tune in and drop out, especially when your career is firmly in the tech world. It's also not easy to be absolutely aware of what you're giving up as you make use of the myriad of interlinked services offered to you every day.

Setting up the Lenovo T440p Laptop

This article originally appeared on earthli News and has been cross-posted here.


I recently got a new laptop and ran into a few issues while setting it up for work. There's a tl;dr at the end for the impatient.

Lenovo has finally spruced up their lineup of laptops with a series that features:

  • An actually usable and large touchpad
  • A decent and relatively sensibly laid-out keyboard
  • Very long battery life (between 6 and 9 hours, depending on use)
  • Low-power Haswell processor
  • 14-inch full-HD display (1920x1080)
  • Dual graphics cards
  • Relatively light at 2.1kg
  • Relatively small/thin form-factor
  • Solid-feeling, functional design w/latchless lid
  • Almost no stickers

I recently got one of these. Let's get it set up so that we can work.

Pop in the old SSD

Instead of setting up the hard drive that I ordered with the laptop, I'm going to transplant the SSD I have in my current laptop to the new machine. Though this maneuver no longer guarantees anguish as it would have in the old days, we'll see below that it doesn't work 100% smoothly.

As mentioned above, the case is well-designed and quite elegant. All I need is a Phillips screwdriver to take out two screws from the back and then a downward slide on the backing plate pulls off the entire bottom of the laptop.1

At any rate, I was able to easily remove the new/unwanted drive and replace it with my fully configured SSD. I replaced the backing plate, but didn't put the screws back in yet. I wasn't that confident that it would work.

My pessimism turned out to have been well-founded. I booted up the machine and was greeted by the BIOS showing me a list of all of the various places that it had checked in order to find a bootable volume.

It failed to find a bootable volume anywhere.

Try again. Still nothing.

UEFI and BIOS usability

From dim memory, I recalled that there's something called UEFI for newer machines and that Windows 8 likes it and that it may have been enabled on the drive that shipped with the laptop but almost certainly isn't on my SSD.

Snooping about in the BIOS settings -- who doesn't like to do that? -- I find that UEFI is indeed enabled. I disable that setting as well as something called UEFI secure-boot and try again. I am rewarded within seconds with my Windows 8 lock screen.

I was happy to have been able to fix the problem, but was disappointed that the error messages thrown up by a very modern BIOS are still so useless. To be more precise, the utter lack of error messages or warnings or hints was disappointing.

I already have access to the BIOS, so it's not a security issue. There is nothing to be gained by hiding from me the fact that the BIOS checked a potential boot volume and failed to find a UEFI bootable sector but did find a non-UEFI one. Would it have killed them to show the list of bootable volumes with a little asterisk or warning telling me that a volume could boot were I to disable UEFI? Wouldn't that have been nice? I'm not even asking them to let me jump right to the setting, though that would be above and beyond the call of duty.

Detecting devices

At any rate, we can boot, and Windows 8, after "detecting devices" for a few seconds, was able to start up to the lock screen. Let's log in.

I have no network access.

Checking the Device Manager reveals that a good half-dozen devices could not be recognized and no drivers were installed for them.

This is pathetic. It is 2014, people. Most of the hardware in this machine is (A) very standard equipment to have on a laptop and (B) made by Intel. Is it too much to ask to have the 20GB Windows 8 default installation include generic drivers that will work with even newer devices?

The drivers don't have to be optimized; they just have to work well enough to let the user work on getting better ones. Windows is able to do this for the USB ports, for the display and for the mouse and keyboard because it would be utter failure for it not to be able to do so. It is an ongoing mystery how network access has not yet been promoted to this category of mandatory devices.

When Windows 8 is utterly incapable of setting up the network card, then there is a very big problem. A chicken-and-egg problem that can only be solved by having (A) a USB stick and (B) another computer already attached to the Internet.

Thank goodness Windows 8 was able to properly set up the drivers for the USB port or I'd have had a sense-less laptop utterly incapable of ever bootstrapping itself into usefulness.

On the bright side, the Intel network driver was only 1.8MB, it installed with a single click and it worked immediately for both the wireless and Ethernet cards. So that was very nice.

Update System

The obvious next step once I have connectivity is to run Windows Update. That works as expected and even finds some extra driver upgrades once it can actually get online.

Since this is a Lenovo laptop, there is also the Lenovo System Update, which updates more drivers, applies firmware upgrades and installs/updates some useful utilities.

At least it would do all of those things if I could start it.

That's not 100% fair. It kind of started. It's definitely running, there's an icon in the task-bar and the application is not using any CPU. When I hover the icon, it even shows me a thumbnail of a perfectly rendered main window.

Click. Nothing. The main window does not appear.

Fortunately, I am not alone. As recently as November of 2013, there were others with the same problem.2 Unfortunately, no one was able to figure out why it happens nor were there workarounds offered.

I had the sound enabled, though, and noticed that when I tried to execute a shortcut, it triggered an alert. And the System Update application seemed to be in the foreground -- somehow -- despite the missing main window.

Acting on a hunch, I pressed Alt + PrtSc to take a screenshot of the currently focused window. Paste into an image editor. Bingo.

image

Now that I could read the text on the main window, I could figure out which keys to press. I didn't get a screenshot of the first screen, but it showed a list of available updates. I pressed the following keys to initiate the download:

  • Alt + S to "Select all"
  • Alt + N to move to the next page
  • Alt + D to "Download" (the screenshot above)

Hovering the mouse cursor over the taskbar icon revealed the following reassuring thumbnail of the main window:

image

Lucky for me, the System Update was able to get the "restart now" onto the screen so that I could reboot when required. On reboot, the newest version of Lenovo System Update was able to make use of the main window once again.

Recommendations

  • If you can't boot off of a drive on a new machine, remember that UEFI might be getting in the way.
  • If you're going to replace the drive, make sure that you download the driver for your machine's network card to that hard drive so that you can at least establish connectivity and continue bootstrapping your machine back to usability.
  • Make sure you update the Lenovo System tools on the destination drive before transferring it to the new machine to avoid weird software bugs.

image


  1. I'm making this sound easier than it was. I'm not so well-versed in cracking open cases anymore. I was forced to download the manual to look up how to remove the backing plate. The sliding motion would probably have been intuitive for someone more accustomed to these tasks.

  2. In my searches for help, manuals and other software, I came across the following download, offered on Lenovo's web site. You can download something called "Hotkey Features Integration for Windows 8.1" and it only needs 11.17GB of space.

Apple Developer Videos

This article originally appeared on earthli News and has been cross-posted here.


It's well-known that Apple runs a walled garden. Apple makes its developers pay a yearly fee to get access to that garden. In fairness, though, they do provide some seriously nice-looking APIs for their iOS and OS X platforms. They've been doing this for years, as described in the post iOS 7 only is the only sane thing to do by Tal Bereznitskey. It argues that the new stuff in iOS 7 is compelling enough that developers should consider dropping support for all older operating systems -- and this for pragmatic reasons, such as having far less of your own code to support, which correspondingly makes the product cost less to support. It's best to check your actual target market, but Apple users tend to upgrade very quickly and reliably, so an iOS 7-only strategy is a good option.

Among the improvements that Apple has brought in the recent past are blocks (lambdas), GCD (asynchronous execution management) and ARC (mostly automated memory management), all introduced in iOS 4 and OS X 10.6 Snow Leopard. OS X 10.9 Mavericks and iOS 7 introduced a slew of common UI improvements (e.g. AutoLayout and HTML strings for labels).1

To find the videos listed below, browse to WWDC 2013 Development Videos.

For the web, Apple has improved developer tools and support in Safari considerably. There are two pretty good videos demonstrating a lot of these improvements:

#601: Getting to Know Web Inspector

This video shows a lot of improvements to Safari 7 debugging, in the form of a much more fluid and intuitive Web Inspector and the ability to save changes made there directly back to local sources.

#603: Getting the Most Out of Web Inspector

This video shows how to use the performance monitoring and analysis tools in Safari 7. The demonstration of how to optimize rendering and compositing layers was really interesting.

For non-web development, Apple has been steadily introducing libraries to provide support for common application tasks, the most interesting of which are related to UI APIs like Core Image, Core Video, Core Animation, etc.

Building on top of these, Apple presents the Sprite Kit -- for building 2D animated user interfaces and games -- and the Scene Kit -- for building 3D animated user interfaces and games. There are some good videos demonstrating these APIs as well.

#500: What's New in Scene Kit

An excellent presentation content-wise; the heavily accented English is sometimes a bit difficult to follow, but the material is top-notch.

#502: Introduction to Sprite Kit

This is a good introduction to nodes, textures, actions, physics and the pretty nice game engine that Apple delivers for 2D games.

#503: Designing Games with Sprite Kit

The first half is coverage of tools and assets management along with more advanced techniques. The second half is with game designers Graeme Devine2 and Spencer Lindsay, who designed the full-fledged online multi-player game Adventure to showcase the Sprite Kit.



  1. Disclaimer: I work with C# for Windows and HTML5 applications of all stripes. I don't actually work with any of these technologies that I listed above. The stuff looks fascinating, though and, as a framework developer, I'm impressed by the apparent cohesiveness of their APIs. Take recommendations with a grain of salt; it could very well be that things are a good deal less rosy when you actually have to work with these technologies.

  2. Formerly of Trilobyte and then id Software, now at Apple.

How to fool people into giving up their email address

This article originally appeared on earthli News and has been cross-posted here.


On Codecademy, you can learn to program in various languages. It starts off very slowly and is targeted at non-technical users. That's their claim anyway -- the material in the courses I looked at ramps up pretty quickly.

Anyway, the interesting thing I saw was in their introductory test. It struck me as a subtle way to get you to enter your email address. I'd just recently discussed this on a project I'm working on: how can we make it fun for the user to enter personal information? The goal is not to sell that information (not yet anyway, but who knows what the future holds), but to be able to enhance -- nay, personalize -- the service.

Personalizing has a bad reputation but can be very beneficial. For example, if you're using a site for free and you're going to see offers and advertisements anyway, isn't it better to enter a bit of data that will increase the likelihood that the offers and ads are interesting? Each person can -- and should -- decide for themselves what to make public, but the answer isn't always necessarily no.

How Codecademy gets your email

image

Here they teach you how to use the "length" method by measuring your email address. Sneaky. I like it.

image

Even if you don't give them an address, they re-prompt you to enter your email, but it doesn't come across as pushy because you're taking a test.

I thought that this was pretty subtle. Because of the context, people who would ordinarily be sensitive to giving up their email might not even notice. Why? Because they want to answer the question correctly. They don't want the site to judge them for having entered something wrong, so they do as they're told.

Is Codecademy collecting emails this way? I have no way to be sure, but they'd be silly not to.

How to drag rewind and fast-forward into the 21st century

This article originally appeared on earthli News and has been cross-posted here.


The most difficult technical problems to solve are the ones that you don't notice. The workflow and tools to which you've become accustomed are terrible, but they're so ingrained that you might actually find yourself unthinkingly defending them because that's just how it has to be.

Below I take a shot at designing a better user experience for a common feature: rewinding or fast-forwarding a video recorded on a DVR.

Why is your DVR's fast-forwarding feature stuck in the past?

Fast-forwarding and rewinding digital movies is one of those things.

Many people have DVRs now -- provided, often enough, by the cable company itself -- but they often function as if customers were still juggling tapes instead of switching between files on a hard drive. While there is no technical hurdle to making this process better, I acknowledge that there are probably very important albeit tediously prosaic advertising reasons for keeping fast-forwarding not just primitive, but almost deliberately broken.

Despite the strong likelihood that this feature will not be improved for the reasons stated above (i.e. that the exorbitant monthly fee that you pay for your content will continue to be supplemented by advertising revenue generated by your captive eyeballs), it would still be fun to imagine how we could make this feature better.

Use cases

The most obvious use case for fast-forwarding is to skip commercials in recorded content: that's just reality. Though the cable companies and networks would dearly love for everyone to take their medicine and watch all of their advertisements, users would dearly love to just watch their content without the ads. That is often the reason that they recorded the content in the first place.

Another use case is to scrub forward in longer sports events, like cycling or the Olympics. The user generally doesn't want to watch six hours; instead, the user would like to skip forward 2.5 hours, watch 15 minutes, skip another hour, watch 30 minutes, skip another hour and watch the rest, all the while skipping commercials in between. Often the user doesn't even know how far they want to skip; they need to see the content at various intervals in order to see where to stop. This is currently achieved by just scrubbing through all the content sequentially.

This is all not only a tedious amount of work but also takes much longer than necessary: even at the top speed, the fast-forward feature takes long minutes to skip two hours of content. This is ridiculous, especially when most of us have seen it work at least marginally better on a computer, where one can skip large chunks of content and reliably jump to a specific position in the recording. The system described below could improve the experience for computer-based media players as well.

What's the problem?

Fast-forwarding is a pain because, while you'd like to jump forward as quickly as possible, you have to be fast enough to stop it before it's gone too far. This is old-school technology from the days of the VCR when there was only one read-head per device. Now there's a digital file that the machine can easily read and render thumbnails from anywhere in the data stream.

My media box from UPC Cablecom offers the standard controls for scrubbing: play, pause, fast-forward, rewind. When you press rewind or fast-forward, it moves between five speeds, skipping forward or backward faster with each level. When you've got it on 5 of 5, you skip commercials or content very quickly, but you're also extremely likely to skip over content you wanted to watch.

The standard pattern is to fly forward, slam on the brakes, then backtrack slowly, overshoot again -- but not by as much -- and then finally position the read-head about where you want it, watching the final 20 seconds of commercials or station identification that you couldn't avoid until you finally get to the content you were looking for.

There has to be a better way of doing this.

Making it better

The idea of five speeds is fine, but we should be able to take the twitch-gamer component out of the experience. And this is coming from someone who used to be a pretty dedicated gamer; I can't imagine what this system feels like to someone unaccustomed to technology. They probably just put it on the slow speed -- or don't bother fast-forwarding at all.

What about a solution that works like this: instead of changing speed immediately, pressing rewind or fast-forward pauses the stream and switches to a scrubbing mode. The scrubbing mode is displayed as a screen of tiles -- say 5x5 -- each tile representing a screenshot/thumbnail from the stream that you're watching.

The thumbnails are chosen in the following manner. If you pressed fast-forward, the thumbnail for your current position is shown in the upper left-hand corner. Subsequent tiles are chosen at 5-second intervals going forward in the stream. Pressing the fast-forward again increases the level -- as before -- but, instead of speeding through the stream, it simply chooses new thumbnails, this time at 10-second intervals. Press again to switch to 30-second, then 1-minute, then 5-minute intervals. At the top "speed" the bottom right-hand corner shows a thumbnail 24 x 5 minutes forward in the stream.

Rewind has the same behavior, except that the current position is shown in the bottom right-hand corner and thumbnails proceed from right-to-left, bottom-to-top to show the user data in the stream before that position.

Once the user is on this screen, he or she can use the cursor to select the desired thumbnail and refocus the screen on that one by clicking OK. In this way, the user can quickly and reliably use the fast-forward or rewind buttons to switch the granularity to "home in" in on a particular scene. All without any stress, missteps or a lot of senseless back-and-forth scrubbing. And all without having to watch hardly anything -- a few seconds at most -- that the user doesn't want to watch.

When the right scene is selected (to within 5 seconds), the user presses play or pause to continue watching from the newly selected position.

Players like Roku have a "jump back ten seconds" feature that's quite useful, but the system described above makes that sound utterly primitive and limiting.

Going beyond five intervals

It is no longer necessary to have only 5 fixed intervals either. Perhaps the default interval (user-configurable) is 2 seconds, but that's only the center of a scale with 10 steps, so the user can drop down to 1-second or 1/2-second increments as well.

Positioning the current scene in scrubber mode

The system described above moves the default location of the current scene, depending on whether the user pressed rewind (bottom-right corner) or fast forward (top-left corner). Another approach would be to ignore which button was pressed and to always show the current scene in the center of the grid, with thumbnails showing history as well as future in the recording. Further presses of rewind and fast forward increase or decrease the amount of time represented by each thumbnail.

Rendering thumbnails

If the software takes time to render the thumbnails, it can do so asynchronously, rendering them to the screen as they become available. Showing the timestamp under each tile would be massively helpful even before its thumbnail is ready. The user could easily jump ahead 4 minutes without any adrenalin at all.

This shouldn't be a huge problem, though. Whenever the user opens a recording, the software can proactively cache thumbnails based on expected usage or default settings.
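
Pre-rendering thumbnails at fixed intervals isn't exotic, either; any standard video toolchain can do it. As a rough illustration -- a minimal sketch assuming ffmpeg is available and the recording is an ordinary file called recording.ts, which is of course not how any particular DVR firmware works -- the 25 tiles for the 5-minute level could be produced like this:

# extract one thumbnail every 5 minutes (300 seconds), scaled to 320px wide
for i in $(seq 0 24); do
    ffmpeg -ss $((i * 300)) -i recording.ts -frames:v 1 -vf scale=320:-2 \
        -y "thumb_$(printf '%02d' "$i").jpg"
done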

Including PDF in web sites

At first glance, this seems to be a pretty easy topic, since PDFs are everywhere and can be found on almost every larger website. But in most cases, PDF files are just linked for download and not embedded directly in the site. If the user clicks such a link, the browser decides what to do with the file: just download it to the file system or display a preview in a new tab or window. This also works pretty well for mobile devices, since there are PDF readers for almost every platform.

But what if we want more than this, like embedding the document in the website and jumping to a specific page? The first part of this is very easy: We can use an iframe and set the location to the URL of the PDF.

<iframe src="document.pdf"></iframe>

This works fine in a desktop browser like Chrome, as it fits the width of the PDF to the width of the iframe:

image

But when we open the same page in Mobile Safari, it looks like the following:

image

The PDF is not scaled and, much worse, you cannot even drag the document around. In short, this form of embedding PDFs in websites is completely useless on iOS. Investigating further, it turns out that there is no way to fix this issue with a pure HTML/CSS solution.

Another solution worth looking at is pdf.js, originally intended as a Firefox plugin to display PDF files in a platform-independent way. It renders PDF files into canvas elements using nothing more than pure JavaScript. In the hope that this would fix our PDF problem, we tried to include this JavaScript library in our web application. It worked fine for small- to medium-sized PDF files in desktop browsers as well as in Mobile Safari. But when we tried to display PDFs with complex content or more than 100 pages, we quickly ran into some limitations of the library: the rendering of huge PDFs was painfully slow and failed completely in some cases. I personally like this approach, as it provides a platform-independent solution that can easily be included in web applications, but it seems that it's just not ready yet. Maybe in a couple of months or years, this will be the way to go for displaying PDFs in web applications.

Another approach is to convert each page of the PDF into a PNG image. This approach is used widely on the web, for example by Google Books for its preview function. It is technically simple, and features like zooming or jumping directly to a specific page can be implemented with a couple of lines of JavaScript. One of the big drawbacks, though, is that text is not transmitted as text but as an image. This increases the transferred data size significantly, which is bad for mobile devices with their typically low bandwidth. To address this, there are techniques like using the offline-application cache, which should definitely be kept in mind when using this approach.

After many hours of investigating this topic, we ended up including single pages of the PDF as PNG images in our web application. This requires that the PDF files be prepared server-side. We also implemented a dynamic-loading algorithm that loads the images only when they are visible on the screen. This allowed us to display big documents without overburdening the video memory or the bandwidth of the mobile device.
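
For the server-side preparation step, a standard tool such as pdftoppm (from poppler-utils) can render each page of a PDF to a PNG. This is only a sketch of the idea, assuming poppler-utils is installed and document.pdf is the file to prepare; it is not necessarily the exact toolchain we used in the project:

# render every page of document.pdf to numbered PNG files (page-<n>.png) at 150 dpi
pdftoppm -png -r 150 document.pdf page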