U2F demo application


Two weeks ago I got my first Universal Second Factor device. It’s a small, inexpensive USB key: the FIDO U2F Security Key. This key can be used as a second-factor authentication device.

It uses the protocol specified by the FIDO Alliance, whose members include Google, Microsoft, Yubico, Lenovo and others.

What it provides

The overview document states:

The FIDO U2F protocol enables relying parties to offer a strong cryptographic 2nd factor option for end user security.

After the user has registered their device, the application can request authentication using this key on login (or when it seems necessary, e.g. when changing some other security settings).

Right now it relies on an extension for Chrome to provide the JavaScript API: the FIDO U2F (Universal 2nd Factor) extension. Hopefully this will soon be implemented directly in the browser.

How it works

The U2F protocol is not complex at all, making it easy to implement and to verify its correctness. It consists of two phases, registration and authentication, both requiring explicit human interaction.


Registration

  1. The server chooses a pseudo-random 32-byte challenge
  2. It sends this challenge, a version identifier and its appId to the browser
  3. The browser forwards this data, along with the origin, to the key after requesting access, which requires human interaction
  4. The key assembles its public key, a key handle and a signature. The signature covers the appId, a hash of the provided challenge and origin, the key’s public key and its key handle.
  5. The browser sends this registration data back to the server, where the certificate is checked, the signature is validated, and the public key and key handle are saved.

The key is now registered for use with this origin and appId.
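
As a rough sketch, this is the data the server ends up keeping per registration after step 5. The hash and its field names are purely illustrative, not the actual U2F wire format or ruby-u2f’s API:

```ruby
# Illustrative sketch only -- the pieces the server stores after a
# successful registration, with made-up placeholder values.
registration = {
  key_handle: "opaque-device-identifier",  # identifies the key pair on the device
  public_key: "registration-public-key",   # verifies future authentication signatures
  counter:    0                            # last signature counter seen for this handle
}

# Every later authentication response must be verifiable against exactly
# this stored data.
puts registration.keys.inspect
```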


Authentication

  1. The server chooses a pseudo-random 32-byte challenge for every possible key handle.
  2. This data is sent to the browser, including the appId
  3. The browser forwards this data to the key, including the origin
  4. The key is activated by human interaction; it then creates a signature over a hash of the appId, a counter value and a hash of the provided challenge and origin. This signature and the counter value are sent back to the browser, which submits them to the server
  5. The server verifies the signature using the previously saved public key and verifies that the counter value is larger than any previously seen counter for this key handle.

If everything checks out, the user is successfully authenticated with their key.
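
The counter comparison in step 5 is the whole replay protection and fits in one line. A minimal sketch (the method name is made up; ruby-u2f performs this check internally):

```ruby
# Sketch of the counter check from step 5. The device increments its
# counter on every signature, so a response that does not move the
# counter forward must be a replay or come from a cloned device.
def counter_valid?(stored, reported)
  reported > stored
end

puts counter_valid?(41, 42)  # fresh response: counter moved forward
puts counter_valid?(42, 40)  # replayed or cloned response: rejected
```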

The implementation

The small demo application does nothing more than authenticate a user by name and password and authorize access to the private section of the website. A user is then able to add second-factor authentication through U2F devices by registering one or more keys for their account. If a user has U2F devices registered, the server requires additional authentication by providing the U2F key to the website.

I decided to build this small application using the Cuba framework, a small Rack-based web framework providing only the absolute basics necessary for this. Authentication is handled by Shield; user data is stored using Ohm. For correct generation and verification of the U2F data I rely on ruby-u2f, an implementation of the full specification. The code itself is quite small. There are still some todos and unimplemented things, but from what I understand right now they are not security-impacting. Before you run this in production, please take your own measurements and check the implementation against the spec.

The following only describes the U2F-relevant parts. The rest should be straightforward.

Key registration

Before a user can use second factor authentication, they need to register their device with the service.

on get do
  registration_requests = u2f.registration_requests
  session[:challenges] = registration_requests.map(&:challenge)

  render "key_add",
    registration_requests: registration_requests
end

First we generate registration requests for the key to sign later. We then save the provided challenges into the session to be able to check them again later. These could also be saved directly in a database. We could additionally add sign requests for known key handles to check whether the key is already registered, but for simplicity we skip this here.
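
For illustration, a 32-byte challenge like the ones stored in the session can be produced with Ruby’s SecureRandom. ruby-u2f generates these for us, so this is only a sketch of what ends up on the wire:

```ruby
require "securerandom"
require "base64"

# Sketch: generate a pseudo-random 32-byte challenge and encode it for
# transport, similar to what ruby-u2f puts into a registration request.
challenge = SecureRandom.random_bytes(32)
encoded   = Base64.urlsafe_encode64(challenge, padding: false)

puts challenge.bytesize  # => 32
puts encoded.length      # => 43 (base64 of 32 bytes, without padding)
```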

Then we simply render our form. The important JavaScript part in the frontend is this:

var registerRequests = {{ registration_requests.to_json }};

var signRequests = [];

u2f.register(registerRequests, signRequests, function(registerResponse) {
    var form, response;

    if (registerResponse.errorCode) {
        return alert("Registration error: " + registerResponse.errorCode);
    }

    form = document.forms[0];
    response = document.querySelector("[name=response]");

    response.value = JSON.stringify(registerResponse);

    form.submit();
});


First we pass in the register and sign requests as JSON to be inspected by JavaScript. We then call the u2f API provided by the browser (for now added by an extension). The browser handles all the complicated parts: verifying the provided request, asking for the user’s permission to use the key, sending the request to the key and returning the signed data. Once this is done, the callback is called. All that’s left to do is send this data back to the server; we use a simple hidden form for that.

On the server side the data is parsed and verified. Again, this is handled completely by the library. All we need to do is call the right methods and save the key handle and public key to our database.

on post, param("response") do |response|
  u2f_response = U2F::RegisterResponse.load_from_json(response)

  reg = begin
          u2f.register!(session[:challenges], u2f_response)
        rescue U2F::Error => e
          session[:error] = "Unable to register: #{e.class.name}"
          redirect "/private/keys/add"
        end

  Registration.create(:certificate => reg.certificate,
                      :key_handle  => reg.key_handle,
                      :public_key  => reg.public_key,
                      :counter     => reg.counter,
                      :user        => current_user)

  session[:success] = "Key added."
  redirect "/private/keys"
end

The user now has a registered U2F key and must provide it on the next login to be successfully authenticated.

Second Factor authentication

A user with a registered U2F device first needs to log in the usual way, by providing a username and password.

if login(User, username, password)
  if current_user.registrations.size > 0
    session[:notice] = "Please insert one of your registered keys to proceed."
    session[:user_prelogin] = current_user.id
    redirect "/login/key"
  end

  # …
end

If the provided login data is correct and the user has U2F devices registered, we redirect them to the next page, which handles this.

In this second login step, we generate a sign request on the server:

# Fetch existing Registrations from your db
key_handles = user.registrations.map(&:key_handle)
if key_handles.empty?
  session[:notice] = "Please add a key first."
  redirect "/private/keys"
end

# Generate SignRequests
sign_requests = u2f.authentication_requests(key_handles)

and provide it to the user:

var signRequests = {{ sign_requests.to_json }};

u2f.sign(signRequests, function(signResponse) {
    var form, response;

    if (signResponse.errorCode) {
        return alert("Authentication error: " + signResponse.errorCode);
    }

    form = document.forms[0];
    response = document.querySelector("[name=response]");

    response.value = JSON.stringify(signResponse);

    form.submit();
});


Again, we simply pass this data on to the browser API, which makes sure the device is actually present and then lets the key sign the provided data. Once it returns, we send the data on to the server.

If there is an error in the signing process we just alert the user for now. For a better user experience this should be handled more gracefully, showing the user a proper error message and offering the option to try again.

On the server side we need to check that the key handle exists for the user, then let the library validate the signed authentication request against our previously saved challenge. If everything checks out, we can finally log in the user and set the session. As a last step we also update the saved counter for the given key handle. This protects against replay attacks: new authentications are only valid if the sent counter is higher than the saved one.

u2f_response = U2F::SignResponse.load_from_json(response)

registration = user.registrations.find(key_handle: u2f_response.key_handle).first

unless registration
  session[:error] = "No matching key handle found."
  redirect "/login"
end

begin
  u2f.authenticate!(session[:challenges], u2f_response,
                    Base64.decode64(registration.public_key), registration.counter.to_i)
rescue U2F::Error => e
  session[:error] = "There was an error authenticating you: #{e}"
  redirect "/login"
end

registration.counter = u2f_response.counter
registration.save

And that’s it. That’s all it takes for a working U2F implementation.

(What’s not visible here: the browser asks for permission to use the U2F key on registration, and the simple key is only usable for a short time after insertion, so it needs to be reinserted for each login, requiring explicit human interaction.)

The full code is available in the repository on GitHub: cuba-u2f-demo

Thanks to @soveran for proof-reading a draft of this post and of course for his work on Cuba.

The difference of Rust's thread::spawn and thread::scoped


So yesterday I gave a Rust introduction talk at the local hackerspace, the CCCAC. The slides are already online. The talk went pretty well and I think I convinced a few people why the ideas in Rust are actually useful. However, I made one mistake in explaining a concurrency feature (see slide 30). As it turns out, the example as I explained it differed from the presented code, and one of the attendees asked me about it.

// Careful, this example is not quite right.
use std::thread;
use std::sync::{Arc, Mutex};

fn main() {
    let numbers = Arc::new(Mutex::new(vec![1, 2, 3]));

    for i in 0..3 {
        let number = numbers.clone();

        let _ = thread::scoped(|| {
            let mut array = number.lock().unwrap();

            array[i] += 1;

            println!("numbers[{}] is {}", i, array[i]);
        });
    }
}

I used this example to explain why it is necessary to wrap the vector in a Mutex and the Mutex in an Arc to make it possible to write to it from several threads. The problem lies in the thread abstraction used: thread::scoped.

Spawn a new scoped thread, returning a JoinGuard for it. The join guard can be used to explicitly join the child thread (via join), returning Result, or it will implicitly join the child upon being dropped.

So in the case of the above code each thread is joined right after it is created, and thus the threads don’t even run concurrently, making the Arc and Mutex unnecessary. The following shortened example still works, though it does not showcase what I intended:

use std::thread;

fn main() {
    let mut numbers = vec![1, 2, 3];

    for i in 0..3 {
        let number = &mut numbers;

        let _ = thread::scoped(|| {
            number[i] += 1;

            println!("numbers[{}] is {}", i, number[i]);
        });
    }
}

There is another built-in threading method: thread::spawn. Its documentation reads:

Spawn a new thread, returning a JoinHandle for it. The join handle will implicitly detach the child thread upon being dropped.

And this is actually what I need to correctly demonstrate what I wanted: the use of Arc and Mutex to safely share writable access to memory through mutual exclusion. The following example works and has all the necessary parts:

use std::thread;
use std::sync::{Arc, Mutex};

fn main() {
    let numbers = Arc::new(Mutex::new(vec![1, 2, 3]));

    let mut threads = vec![];
    for i in 0..3 {
        let number = numbers.clone();

        let cur = thread::spawn(move|| {
            let mut array = number.lock().unwrap();

            array[i] += 1;

            println!("numbers[{}] is {}", i, array[i]);
        });
        threads.push(cur);
    }

    for i in threads {
        let _ = i.join();
    }
}
Running it gives the expected output (your output might differ, the order is non-deterministic):

$ rustc concurrency.rs
$ ./concurrency
numbers[1] is 3
numbers[2] is 4
numbers[0] is 2

The Rust book contains a complete chapter on this topic: Concurrency, covering a bit more of the background and also the Channel concept.

Again, thanks to the CCCAC and to everyone listening to me and asking quite a few questions afterwards. For all who could not attend: the video should be up soon.

hiredis is up to date


Back in December 2014 antirez reached out to the community to find a new maintainer for hiredis. In a joint effort, Michael, Matt and I took on the task, and two weeks ago Matt released version 0.12.1, after two years without a proper release.

This weekend I got in contact with Pieter, who handed over access to all major packages based on hiredis. On Sunday I pushed out new versions for all of them:

All of them include only non-breaking changes and an upgrade of the underlying hiredis code, so all libraries and applications depending on them should just work after an upgrade. If not, please open a ticket.

I again want to say thank you to Pieter who maintained all of these packages in the past. Also thank you to Matt and Michael, who helped push hiredis forward.

rdb-rs - fast and efficient RDB parsing utility


Ever since I started looking into Rust I knew I needed a bigger project to use it for. I released a few small libraries, all based on Redis code/tools, so I figured: why not a bigger project focused on Redis as well? We already have a nice client library, so I did not need to write one myself. I then thought about writing a Cluster-aware library on top of that, but before I got any real code written I faced some difficulties deciding how I wanted the API to look, so I abandoned that idea for now as well. The next idea was another implementation of the Cluster configuration utility, redis-trib. The problem: I never even finished my attempt in Go. I looked around for more ideas and then came across the redis-rdb-tools again. This small Python utility can parse, format and analyze Redis’ dump files. Its author, Sripathi Krishnan, also documented the file format and version history of RDB.

Sripathi no longer paid much attention to the project, and so some bugs and feature requests remained unresolved. With the current changes for Redis 3.0, like the Quicklist and a new RDB version, the redis-rdb-tools can’t be used for new dump files without patches.

So at last year’s 31c3 I took some time and started implementing an RDB parser in Rust based on the documentation. While reading the documentation, the Python code and Redis’ own rdb.c, I took notes and later rewrote and reformatted Sripathi’s documentation to bring it up to date and include the latest changes.

Today I release this updated documentation. It’s available online at rdb.fnordig.de:

I will keep it updated should there be need, and of course improve it where necessary.

At the same time I’m also open-sourcing my port of the redis-rdb-tools to Rust:


rdb-rs is a library and tool to parse RDB and dump it into another format like JSON or the Redis protocol.

rdb-rs is offered both as a library and as a stand-alone command line tool.

The command line tool can be used to dump an existing RDB file in one of the provided formats:

$ rdb --format json dump.rdb
$ rdb --format protocol dump.rdb

For now it is nothing too fancy, but it gets the job done. Over the next days and weeks I will improve it, add missing features such as filter options and hopefully also a memory reporter.


Using the library is as easy as calling the rdb::parse function, passing it a stream to read from and a formatter to use.

use std::io::{BufferedReader, File};

let file = File::open(&Path::new("dump.rdb"));
let reader = BufferedReader::new(file);
rdb::parse(reader, rdb::JSONFormatter::new());

rdb-rs brings 4 pre-defined formatters, which can be used:

Adding your own formatter is as easy as implementing the RdbParseFormatter trait.

Over the next weeks I will rework parts of the library. Currently I’m not too happy with the offered API, especially the return values and error handling. The code also needs to be refactored to allow for filtering and memory reporting. For the redis-rdb-tools, Sripathi also reverse-engineered and documented the memory usage of key-value pairs; this needs updates as well and will take me some time to bring into the same format.

Finally, I want to say thanks to Sripathi Krishnan for building the rdb-tools and for the proper and very well written documentation. It helped a lot getting this done.

Also thanks to Matt, Andy and Itamar for some input and comments on the small parts of the project I showed them.

2014 in many words


My year in numbers

Important things first: I got about 12 new T-Shirts for my collection (I didn’t count them).

I had my very first real job interview (I didn’t get the job, but had a nice time in Dublin).
I traveled about 35000 km by plane, spread across 8 flights, the longest from San Francisco to Frankfurt.
I probably traveled another 5000 km by train, between home and Dortmund, Kamen, Berlin, Hamburg, …
I visited 9 different conferences in 7 different cities across 4 different countries:

  1. FOSDEM in Brussels, Belgium (February)
  2. eurucamp in Berlin (August)
  3. FrOSCon/RedFrogConf in Sankt Augustin (August)
  4. reject.js in Berlin (September)
  5. jsconf EU in Berlin (September)
  6. RailsCamp Germany in Cologne (September)
  7. The next Web Barcamp in Salzburg (October)
  8. IETF 91 in Honolulu, USA (November)
  9. 31c3 in Hamburg (December)

I gave 3 talks this year: twice the same Rust talk, both times spontaneously to fill time, and once a Redis talk.
I held 3 different workshops: twice for the OTS DO, coaching HTML & CSS and JavaScript, and once as part of the Barcamp Salzburg, coaching Hoodie.

I published 20 blog posts covering a wide range of topics.
I wrote more than 8200 tweets this year, most of them in reply to someone or as part of a longer rage about computers. I posted 36 photos on Instagram and my ~/photos/2014/ directory now contains about 3000 other photos.

In total I made 380 countable contributions on GitHub (or 879 if you count private repositories as well) and took part in hundreds of issues.
We saw 18 releases of Redis, I was mentioned 4 times in the release notes and became #5 in the list of committers (though only 19 commits so far).
I sent 120 mails to the Redis mailing list (if I can trust my local mail copy).
I learned a new language, Rust, and immediately released 4 libraries.

For university, I wrote a 15-page paper containing about 7500 words.
I took 6 exams and passed every one.
I wrote a 50-page thesis containing about 21000 words and made 233 code commits across 4 different repositories for it.
I finally got my Bachelor’s degree in Computer Science after 3 years of studying.

I attended 3 different sport tournaments with the Handball team of my university, winning one with my team by accident. For the first time in my life I actually played Beach-Handball, the whole summer long once a week.

2014 was an amazing year for me.


All of the above was only possible because I had so many people supporting me, working with me, discussing with me, hosting me, sponsoring me, partying with me or traveling with me. That’s why I want to say thanks to all these people.

Thanks to the great Hoodie community, which I am proud to be a part of. Special thanks to Jan, Gregor, Ola and Lena for the discussions and a few funny hangout sessions. Very special thanks to Lena for hosting the Hoodie Workshop at the Barcamp in Salzburg with me.

Thanks to Hannes and Stephan for inviting me to Salzburg.

Thanks to the Open Tech School Dortmund team, Ola, Carsten, Leif and Hendrik. These are the nicest people, investing their free time to provide awesome talks and workshops for free. They encouraged me to coach at workshops and to finally hold a talk. I know they will keep this going in 2015!

Thanks to my employer rrbone and my boss Dominik for making it possible for me to learn, to work on what I love, and to sponsor me trips to some of the conferences.

Thanks to Lotte for joining Dominik and me on our trip to the US, for telling me quite a bit about beer and, even better, for drinking with me.

And thanks to all the people I forgot to mention explicitly; thanks to my friends and fellow students for making lectures less boring, and thanks to my family for welcoming me back every time I head home.

There are already a lot of things planned for 2015, so it will be just as busy as this year.