Tiny node containers


My favorite language at the moment is JavaScript. It's fun & functional!

Since I'm also working quite a bit with Docker, I've been frustrated with the size of Node.js Docker images. A typical Node container holds node, npm and all your dependencies. Add a few apt-gets and you're quickly looking at > 500 MB.

I even started hacking some Go solely for the ability to compile to a single binary.

Until I found nexe...

I can haz javascript aaaand binary???

Building with nexe

Nexe will compile your node app into a single executable binary. No joke! Have a look!

Since we are now compiling, we need to think about things like the compile target. Containers run Linux. My desktop runs Darwin. A binary compiled on/for Darwin won't be able to run inside a container. So, I made a container for compiling apps with nexe.

docker run -v $(pwd):/app -w /app asbjornenge/nexe-docker -i index.js -o app

Weird bugs

Granted, nexe is a bit flaky at the moment. I found two main bugs that I had to work around:

A default package.json somehow messes up the executable.
Workaround: I added a build script that will move package.json to pkg.json, build, then move it back.

When passing arguments to a compiled binary, there must exist a first argument.
Workaround: Just pass a random first argument.


When distributing, we can use the simplest container possible, and just add the binary.

FROM debian:jessie
ADD app /usr/bin/app


I used this approach to build skylink, check out the difference!

      |   normal  |  nexe
 size |  640.3 MB | 133.6 MB


♥ to the nexe folks!
Gif from here.

Vagrant skydocking


UPDATED 30.01.2014 - Using a route instead of linking interfaces. A bit simpler. Original here.

A bridge over vagrant water

I've been working quite a bit with docker lately. If you haven't yet checked it out, it's about time. Docker is already popping paradigms.

Since I'm on OSX I'm running my docker host on Virtualbox via Vagrant.

Instead of having to forward ports and using lots of -p args when spawning containers, I wanted to bridge my host and the vm's docker interface, so that I could ping my containers from my OSX terminal.

Create a private_network in your Vagrantfile. I'm picking an IP on a different subnet than the docker0 interface to avoid any potential conflicts.

Vagrant::VERSION >= "1.1.0" and Vagrant.configure("2") do |config|
    config.vm.network "private_network", ip: "", netmask: ""
    config.vm.provider :virtualbox do |vb|
        vb.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
    end
end

The vb.customize is to allow forwarding packets for the bridge interface. The --nicpromisc2 translates to Promiscuous mode for nic2, where nic2 -> eth1. So --nicpromisc3 would change that setting for eth2, etc.

After reloading vagrant we need to create a route on the host. Basically, any traffic trying to reach the docker subnet should be routed to our new private_network interface inside the vm.

# OSX
$> sudo route -n add -net <docker-subnet> <vm-ip>
# Linux (untested)
$> sudo route add -net <docker-subnet> netmask <netmask> gw <vm-ip>

You now have a bridge from your host to your docker network!!

$> IP=`docker inspect -format='{{.NetworkSettings.IPAddress}}' skydns`
$> ping $IP
PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=64 time=0.232 ms
64 bytes from icmp_req=2 ttl=64 time=0.103 ms
--- ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1009ms
rtt min/avg/max/mdev = 0.103/0.167/0.232/0.065 ms



Docker is all about distributed systems; packing single components inside containers and having them talk to each other. One of the pain points when shattering your monolith is linking all those loose components together.

(Docker provides a -link parameter for linking containers. But this quickly falls short in complex scenarios.)

I was just about to dig into service discovery solutions like etcd or similar, when Michael Crosby posted his skydock (video). It's brilliant! It lets you discover your services via DNS. I won't go into setting up skydock, just check out the awesome tutorial by Michael.

So, with skydock my containers can discover each other via DNS names like myservice.env.domain.com. Awesome! But, with my network bridge set up, so can my host!! No? That would be really nice for development...

$> curl elasticsearch.dev.domain.com:9200
curl: (6) Could not resolve host: elasticsearch.dev.domain.com

﴾͡๏̯͡๏﴿ ... Ah, we need to hook up skydns as a nameserver. This is where I stray a little from Michael's skydock tutorial. I had some issues binding to the docker0 interface (docker v0.7.6), so instead I'm using the skydns container as the nameserver directly (PS! this requires passing a -dns arg to each new container). Either way, we have to edit resolv.conf.

$> sudo vi /etc/resolv.conf
   # nameserver <- skydock tutorial
   nameserver # <- skydns container ip
$> dig elasticsearch.dev.domain.com
elasticsearch.dev.domain.com.   20  IN  A

✌(-‿-)✌ ... Hoplah! Now, hopefully that will be it for you and you're all set to curl containers from the comforts of your host terminal! I however, had one more issue to solve...

$> curl elasticsearch.dev.domain.com:9200
curl: (6) Could not resolve host: elasticsearch.dev.domain.com # w00000000t???

OSX weirdness

Apparently OSX is rather weird in how it handles DNS. dig, host, etc. can resolve the host just fine, but other tools like curl and even ping do not obey resolv.conf. I eventually stumbled across the issue and found this script that apparently solves it for most people. It didn't help. Eventually I added the DNS server via OSX network preferences, and that did the trick.

$> curl elasticsearch.dev.domain.com:9200
    "ok" : true,
    "status" : 200,
    "name" : "Damian, Margo",
    "version" : {
        "number" : "1.0.0.Beta2",
        "build_hash" : "296cfbe390dc51bb00c00ba48ad0c8a9efabcfe9",
        "build_timestamp" : "2013-12-02T15:46:27Z",
        "build_snapshot" : false,
        "lucene_version" : "4.6"
    "tagline" : "You Know, for Search"


I'm now a ᕙ༼ຈل͜ຈ༽ᕗ curl’er of containers!!


Docker, Skydock and Skydns all deserve a big fat ♥.
I followed this guide by Lukas Pustina to set up my vagrant networking.
Gifs from here and faces from there.

Out of sorts


My first real fight with fonts.


Before I even start this I should probably state that this adventure leads into unfamiliar terrain, and over half my findings are probably half-witted nonsense. There. I should probably start all my blogposts like that.


Fonts are important. Most of what we see on our screens is text in some form, or typeface, to dive right into the jargon.

Starting out on my current font adventure I was quite shocked by how little I knew about the font world. I had been developing websites and apps full of text for years, but hardly knew what a baseline was. To some extent that is a good thing, it had just worked. On the other hand, it is a level of control over my design I had been completely ignorant about.

The design bullet

The current problem I was facing seemed simple enough: allow a line of text to include a bullet. Easy as pie.

<span style="display:list-item">Some Text</span>

Turned out it wasn't quite so easy. These bullets were part of our client's design manual, but they were not the same as the bullet glyph of the font. Modifying the font was also out of the question because of licensing.

But, there was a clear definition; the bullet was a square of height and width x relative to the font-size, vertically aligned (centered) with the font's x-height.

Calculating the x-height of a target element is easy enough using CSS's ex unit.

$('<div style="width:1ex"></div>').appendTo(target)[0].offsetWidth

But the x-height itself was of little use. To vertically align my bullet with the x-height, I needed to know the margin, bottom or top, of the baseline or median; I needed more metrics.
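To make the target concrete: given the font's ascent and x-height, centering the bullet on the x-height band is simple arithmetic (a sketch; the values below are made-up, real ones have to be measured):

```javascript
// Distance from the top of the line box to the top of a square bullet of
// side `size`, centered on the x-height band of a font with the given
// ascent and x-height (all values in px).
function bulletTopOffset(ascent, xHeight, size) {
    var xHeightTop = ascent - xHeight          // top of the x-height band
    return xHeightTop + (xHeight - size) / 2   // center the bullet in it
}

// e.g. ascent 16px, x-height 8px, bullet 4px:
console.log(bulletTopOffset(16, 8, 4))  // 10
```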

Alright, easy enough. Let's see… "javascript font metrics". Uhm…

The bad news

There is no built-in, easy, standard way of extracting the metrics of a font.

The good news

It's possible to calculate! AND, there is a great library that will do most of the heavy lifting for you! We'll get to that.


To calculate a font's vertical metrics, there are two approaches as far as I can tell.

1. Measuring dom elements

The first approach is to use a bunch of DOM elements with specific font-related metrics (1em, 1ex, etc.) and measure these in px (offsetWidth) at different levels and at different font-sizes.

The approach seems to work quite well for the calculation part. Sturdy across browsers and fonts. For the actual positioning there were other icebergs floating around.

NB! The solution is a possible performance drain if used unwisely - measuring offsetWidth might cause unwanted reflow (recalculation of your DOM layout).

2. Canvas

The second approach is using the canvas element. The 2d context of a canvas has font, fillText and measureText functions. Unfortunately measureText only deals with the width metric, but that seems to be about to change (!!). For now though, the approach is to dump and analyze the raw pixel data and figure out how many pixels are used vertically to draw different letters of the font.
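The pixel-analysis step itself needs no browser to illustrate. Here is a sketch of the scanline idea on a synthetic RGBA buffer (the kind getImageData().data returns): find the first and last rows containing any non-transparent pixel.

```javascript
// Find the vertical extent of drawn pixels in an RGBA buffer.
// `data` is a flat array, 4 bytes per pixel, row by row.
function inkExtent(data, width, height) {
    var top = -1, bottom = -1
    for (var y = 0; y < height; y++) {
        for (var x = 0; x < width; x++) {
            var alpha = data[(y * width + x) * 4 + 3]
            if (alpha > 0) {
                if (top === -1) top = y
                bottom = y
                break  // one inked pixel is enough for this row
            }
        }
    }
    return { top: top, bottom: bottom }
}
```

fontmetrics.js does something along these lines against real canvas output to recover the actual ascent and descent of a rendered glyph.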

This approach also works perfectly for the calculation part, and thanks to the awesome fontmetrics.js it's easy.

But again, for the actual positioning, I was soon stuck in a pitch black room (next to a tiny, grey, startling little cat with diarrhea. Sitting on a matressless, iron-sprung bed with its huge eyes mewing at me. Meow. Smoking as well, probably. And then some terrible guy the colour of an aubergine round the corner holding a mug of beef tea and wearing a string vest going “meew. Fuckn brrr aaah” ~ Dylan Moran).


The days of web typography are upon us. We are no longer limited to a handful of built-in fonts. Using technologies like @font-face we can embed "any" font on our page and have it render "beautifully" in the client's browser.

There are however quite a few pitfalls & legibility issues.


The one that hit me hard in the face is the fact that different browsers, and even the same browsers on different operating systems, deal very differently with how they render fonts. Even different versions of the same operating system will sometimes render fonts very differently.

At typical body-text sizes, the computer has to draw each letter using only 15 or so pixels in each direction. It’s not possible to draw each letter exactly as the typographer intended, and keep all the lines crisp and smooth, with that few pixels. Windows, OSX, and Linux all resolve this dilemma differently: to oversimplify a bit, OSX tries harder to preserve the font shapes, Windows tries harder to make the lines sharp, and Linux tries to do both at once and winds up achieving neither.
~ Zachary Weinberg

Sometimes the font won't even render inside its bounding box! (!!!!) For my current problem, that makes any font metric calculation futile. Turns out, this library I've been mumbling about had a solution for even this.


Another issue with embedded fonts is knowing when the font is loaded. If you try to measure prematurely you will end up measuring the fallback font, and that's no good.

The only viable solution I have come across is using a "dummy" fallback font that encodes a character as a zero-width unit, putting that in a paragraph, and polling for a real width. It's not a great solution, but it works.
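Stripped of the DOM details, the polling part looks something like this (a sketch; in a browser the recursive call would go through setTimeout, and `measure` would read the paragraph's offsetWidth):

```javascript
// Poll a measure function until it reports a non-zero width (the real
// font has rendered) or we run out of tries.
function pollForFont(measure, maxTries, done) {
    var tries = 0
    function tick() {
        if (measure() > 0) return done(true)    // real font is in
        if (++tries >= maxTries) return done(false)
        tick()  // browser version: setTimeout(tick, 50)
    }
    tick()
}
```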


Fortunately someone has already trodden this path for us.
Font.js adds a Font object to your javascript toolbelt. It's designed to behave similarly to the Image object.

var font = new Font();
font.onload  = function() {}
font.onerror = function() {}
font.src = "http://your.domain.com/fonts/font.otf"

It handles the timing issue using the solution detailed above, and will call your onload function when the font is available. It gives you metrics.

font.metrics -> {}
font.measureText(string, size) -> {}

They even handle the rendering issue (to some extent).

Font.js actually draws text offscreen, does a scanline pass to find out what the "real" ascent and descent is, and then sets height to ascent + 1 + descent ("1" for the baseline itself). This generally works quite well, but will lead to incorrect heights for fonts that don't implement the Latin blocks =)
~ Michiel Kamermans

One important thing to note is that the fonts are loaded using XMLHttpRequests. This is important since it is the only way to get at the font data so it can be inspected and manipulated. But it does mean you have to host your own fonts or set up CORS to avoid Access-Control-Allow-Origin issues.

Font.js is a great library for solving most of the current headaches related to fonts.

Grab it from the github repo or via bower.

bower install Font.js



Zero Todo


Todo workflow for inbox zeroists.

I'm an inbox zeroist; my inbox is my todo list.

For us (well, for me at least) todo applications quickly get neglected. I love their shiny UIs and impressive, thought-out UX, but the fact remains that the tasks I so optimistically punch in never get done. I've tried numerous approaches. My read-later services are filled to the brim with awesomeness that will never get parsed by anyone but @marcoarment's robots.

I always return to my inbox, so whatever gets in there gets action.

The following is an attempt to simplify adding "tasks" to my inbox.


Get your local postfix relaying to a proper smtp server. I followed this guide for gmail. Be sure to also add the following to /etc/postfix/main.cf.

smtp_sasl_security_options = noanonymous

$ mail

Now you can send emails from your shell.

df -h | mail -s "Disk usage" you@domain.io


There are multiple ways to have a hotkey execute a script. I chose Alfred because I like Alfred and because it has support for passing any selected text as an argument to the script.

Add your extension. It might be a good idea to click Advanced and configure escaping. Mail seems to handle all these chars nicely, so I just unchecked it all.

Add a hotkey for that extension and check "Selected text in OS X".

And that's it; you can now select any text in OS X and stack it on top of your inbox by pressing your specified hotkey.

JSON Schema Validation


You're probably talking JSON with a RESTful API, right?
If you care about creating a great experience, you need to take error handling seriously. Handling timeouts and HTTP error codes is pretty straightforward, but handling corrupt data can be tricky. It often leaves an ugly footprint in your code. Lots of ifs and hasOwnProperty checks. Instead, using json-schema, you can validate your JSON data first and be sure it is as expected.
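That footprint looks something like this (a made-up handler; the field name is just for illustration):

```javascript
// Without validation, every property access needs its own guard.
function readTitle(data) {
    if (data && typeof data === 'object' &&
        data.hasOwnProperty('title') &&
        typeof data.title === 'string') {
        return data.title
    }
    return null
}
```

Multiply that by every field you touch and the mess is obvious.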


A JSON Media Type for Describing the Structure and Meaning of JSON Documents

Example: if you have some JSON data:

    "title" : "Kapsokisio"

You can define a corresponding JSON Schema:

    "type" : "object",
    "required" : ["title"],
    "properties" : {
        "title" : { "type" : "string" } 

You can validate your data using that schema. If it is valid, you can be sure this data is an object with a title property of type string.
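To make that concrete, here is a toy validator handling only "type", "required" and "properties" (a sketch of the core idea, nowhere near a spec-compliant implementation):

```javascript
// Toy JSON Schema validator: "type", "required" and "properties" only.
function validate(data, schema) {
    if (schema.type) {
        // typeof won't distinguish arrays, so special-case them
        var t = Array.isArray(data) ? 'array' : typeof data
        if ([].concat(schema.type).indexOf(t) === -1) return false
    }
    if (schema.required) {
        for (var i = 0; i < schema.required.length; i++) {
            if (!(schema.required[i] in data)) return false
        }
    }
    if (schema.properties) {
        for (var key in schema.properties) {
            // properties only constrain keys that are present
            if (key in data && !validate(data[key], schema.properties[key])) return false
        }
    }
    return true
}
```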


The latest IETF draft is currently v3, but they have a v4 being prepared for submission in early 2013. This post will focus on v4.

The new drafts are up!



There is a variety of implementations available. Since I chose to focus on v4 and since I'm a webnerd, I'll be using the tv4 validator for the examples.


NB! This article is in no way a usage reference!!
It's more a collection of the things I stumbled across trying to figure out how this JSON-Schema thing works. Some important bits, and some of the things I found really useful. See the further reading section for more possibilities and options.


Using "type" you can specify the datatype required for the current object. The value can be a string or an array. Available values are; object, array, string, boolean, integer, number, null. The following requires the data to be either an object or a string.

    "type" : ["object","string"]

tv4.validate({}, schema) // true
tv4.validate([], schema) // false


Using "enum" you can define an array with elements of any type. Data must be equal to one of the elements to validate.

    "enum" : [[1,true,0], {}, 28, "Burbon"]

tv4.validate([1,true,0], schema) // true
tv4.validate(34, schema) // false


Using "required" you can define an array of required properties. It's value is an array of strings.

    "required" : ["title","origin"]

tv4.validate({"title" : "", "origin" : ""}, schema) // true
tv4.validate({"title" : ""}, schema) // false


Using "properties" you can further specify an objects properties. It is an object where each value is a separate schema.

    "properties" : {
        "title"   : { "type" : "string" },
        "weight"  : { "type" : "number" }

tv4.validate({"title" : "", "weight" : 2}, schema) // true
tv4.validate({"title" : "", "weight" : "2"}, schema) // false


Using "items" you can specify the requirements for the items in an array. It can be a single schema or an array of schemas. The following requires the elements in this array to be a string or an object.

{
    "items" : [
        { "type" : "string" },
        { "type" : "object" }
    ]
}

tv4.validate(["",{}], schema) // true
tv4.validate(["",true], schema) // false


Using "pattern" you can validate using regular expressions. Powerful stuff!

    "properties" : {
        "url" : { "type" : "string", "pattern" : /(http|ftp|https):\/\/[\w-]+(\.[\w-]+)+([\w.,@?^=%&amp;:\/~+#-]*[\w@?^=%&amp;\/~+#-])?/ }

tv4.validate({"url" : "http://google.com"}, schema) // true
tv4.validate({"url" : "htt:/googleco.m"}, schema) // false


Using "$ref" you can reference other schemas. You can use a URI or an # for internal referencing. Using definitions as a location for your internal referenced schemas is not a rule but a common practice.

    "items" : { 
        "$ref" : "#/definitions/bean"
    "definitions" : {
        "bean" : {
            "type" : "object",
            "required" : ["origin"],
            "properties" : {
                "origin" : { "enum" : ["kenya","rawanda"] }

tv4.validate([{"origin" : "kenya"}], schema) // true
tv4.validate([{"origin" : "brazil"}], schema) // false
tv4.validate(["kenya","rawanda"], schema) // false


Using "allOf" you can define an array of schemas where your data elements must validate against all of them.

    "allOf" : [
        { "type" : "integer" },
        { "minimum" : 6 }

tv4.validate(6, schema) // true
tv4.validate(5, schema) // false


Using "oneOf" you can define an array of schemas where your data elements must validate against one (and only one) of them.

    "oneOf" : [
        { "type"    : "integer" },
        { "minimum" : 6 }

tv4.validate(5, schema) // true
tv4.validate(6, schema) // false


Using "anyOf" you can define an array of schemas where your data elements can validate against any (at least one) of them.

    "anyOf" : [
        { "type"    : "integer"  },
        { "minimum" : 6 }

tv4.validate(5, schema) // true
tv4.validate(6, schema) // true


Using "not" you can define a schema your data elements should to not validate against.

    "not" : { "type" : "string" }

tv4.validate(1, schema) // true
tv4.validate("test", schema) // false

Error handling

(tv4 specific)

I just thought I'd quickly mention how tv4 handles a failure:

tv4.validate([],{"type" : "object"})
var err = tv4.error
while(err != null) {
    console.log(err.message, err.schemaPath, err.dataPath)
    err = err.subErrors
}
Further reading

I would really recommend reading through the tests for tv4, they provide excellent usage examples for the different possibilities. On the JSON-Schema website you will find the documentation and some great examples.


One of the biggest benefits of using JSON-Schema validation is that it will allow you a cleaner codebase. You can trust your data. That in turn improves readability and maintainability which leads to better and more robust applications. In the end; a better user experience.


It can be quite tedious building a good schema describing your data. And of course, if you change your data structures, you need to update your schema (in addition to your code). But considering how this approach will simplify your codebase, I would definitely say it's well worth it.

Real world example


    "title"   : "Kapsokisio",
    "origin"  : "Kenya",
    "variety" : ["SL28","SL34","Burbon"],
    "process" : "Washed",
    "roast" : {
        "level" : 4,
        "date"  : "08.02.2012"
    "bag" : {
        "weight" : 354,
        "date"   : "08.02.2012"
    "brew_tip" : {
        "method" : "pourover",
        "grind"  : "medium",
        "vessle" : "chemex"


    "type" : "object",
    "required" : ["title","origin","variety","process","roast","bag"],
    "properties" : {
        "title"    : { "type" : "string"  },
        "origin"   : { "type" : "string"  },
        "variety"  : { "type" : "array"   },
        "process"  : { "type" : "string" },
        "bag"      : { "$ref" : "#/definitions/bag" },
        "roast"    : { "$ref" : "#/definitions/roast" },
        "brew_tip" : { "$ref" : "#/definitions/brew_tip" }
    "definitions" : {
        "roast" : {
            "type" : "object",
            "required" : ["level", "date"],
            "properties" : {
                "level" : { "type" : "integer" },
                "date"  : { 
                    "type" : "string", 
                    "pattern" : /^\d{2}([./-])\d{2}\1\d{4}$/
        "bag" : {
            "type" : "object",
            "required" : ["weight", "date"],
            "properties" : {
                "weight" : { "type" : "number" },
                "date"   : { 
                    "type" : "string", 
                    "pattern" : /^\d{2}([./-])\d{2}\1\d{4}$/
        "brew_tip" : {
            "type" : "object",
            "required" : ["method","grind","vessle"],
            "properties" : {
                "method" : { "type" : "string" },
                "grind"  : { "type" : "string" },
                "vessel" : { "type" : "string" }