omnomnom - Parsing ISO8601 dates using nom

(by )

There are thousands of ways to note down a date and time. The international date format is standardized as ISO8601, though it still allows a wide variety of different formats.

The basic format looks like this:
2015-07-16T16:43:52+0100


And that’s what we will parse today using nom, a parser combinator library created by Geoffroy Couprie.

The idea is that you write small self-contained parsers, which each do only one simple thing, like parsing the year in our string, and then combine these small parsers into a bigger one to parse the full format. nom comes with a wide variety of small parsers: handling different integers, reading simple byte arrays, optional fields, mapping parsed data over a function, … Most of them are provided as combinable macros. It’s very easy to implement your own small parsers, either by providing a method that handles a short byte buffer or by combining existing parsers.
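To make the combinator idea concrete before touching nom's macros, here is a minimal hand-rolled sketch in plain Rust. The types and names here are made up for illustration; this is not nom's actual API:

```rust
// Each parser takes the remaining input and returns the rest of the input
// plus a parsed value; `None` signals failure. (Illustrative only.)

// A tiny parser: consume a single expected byte.
fn byte(input: &[u8], expected: u8) -> Option<(&[u8], ())> {
    match input.first() {
        Some(&b) if b == expected => Some((&input[1..], ())),
        _ => None,
    }
}

// Combine small parsers into a bigger one: parse "+" or "-" into a sign.
fn sign(input: &[u8]) -> Option<(&[u8], i32)> {
    byte(input, b'-').map(|(rest, _)| (rest, -1))
        .or_else(|| byte(input, b'+').map(|(rest, _)| (rest, 1)))
}

fn main() {
    assert_eq!(sign(b"-0333"), Some((&b"0333"[..], -1)));
    assert_eq!(sign(b"+12"), Some((&b"12"[..], 1)));
    assert_eq!(sign(b"12"), None);
    println!("ok");
}
```

nom's macros generate functions of essentially this shape, with richer error handling on top.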

So let’s dive right in and see how to use nom in real code.


This is what we want to parse:
2015-07-16T16:43:52+0100


It has several parts we need to parse:
YYYY-MM-DDTHH:MM:SSOOOO


with the following meaning:

Characters Meaning
YYYY The year, can be negative or zero and can be extended if necessary
MM Month from 1 to 12 (0-prefixed)
DD Day from 1 to 31 (0-prefixed)
T Separator between date and time
HH Hour, 0-23 (0-prefixed)
MM Minutes, 0-59 (0-prefixed)
SS Seconds, 0-59 (0-prefixed)
OOOO Timezone offset, introduced by a + or - sign, or Z for UTC

Parts like the seconds and the timezone offset are optional. Datetime strings without them will default to a zero value for that field. The date parts are separated by a dash (-) and the time parts by a colon (:).

We will build a small parser for each of these parts and at the end combine them to parse a full datetime string.

Parsing the date: 2015-07-16

Let’s start with the sign. As we need it several times, we create a parser of its own for it. Parsers are created by giving them a name, stating the return value (or defaulting to a byte slice) and the parser combinators to handle the input.

named!(sign <&[u8], i32>, alt!(
        tag!("-") => { |_| -1 } |
        tag!("+") => { |_| 1 }
));

First, we parse either a plus or a minus sign. This combines two already existing parsers: tag!, which will match the given byte array (in our case a single character) and alt!, which will try a list of parsers, returning on the first successful one. We can directly map the result of the sub-parsers to either -1 or 1, so we don’t need to deal with the byte slice later.

Next we parse the year, which consists of an optional sign and 4 digits (I know, I know, it is possible to extend this to more digits, but let’s keep it simple for now).

named!(positive_year  <&[u8], i32>, map!(call!(take_4_digits), buf_to_i32));
named!(pub year <&[u8], i32>, chain!(
        pref: opt!(sign) ~
        y:    positive_year
        ,
        || {
            pref.unwrap_or(1) * y
        }
));

A lot of additional stuff here. So let’s separate it.

named!(positive_year  <&[u8], i32>, map!(call!(take_4_digits), buf_to_i32));

This creates a new named parser, that again returns the remaining input and a 32-bit integer. To work, it first calls take_4_digits and then maps that result to the corresponding integer (using a small helper function).

take_4_digits is another small helper parser. We also got one for 2 digits:

named!(pub take_4_digits, flat_map!(take!(4), check!(is_digit)));
named!(pub take_2_digits, flat_map!(take!(2), check!(is_digit)));

This takes 4 (or 2) characters from the input and checks that each character is a digit. flat_map! and check! are quite generic, so they are useful for a lot of cases.
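In plain Rust (and ignoring nom's Incomplete/error handling), the helpers behave roughly like the following sketch. buf_to_i32's exact definition isn't shown in the post, so this version is an assumption:

```rust
// Plain-Rust sketches of the helpers; the originals are nom parsers.

// Take exactly n bytes from the input and check each one is an ASCII digit.
fn take_digits(input: &[u8], n: usize) -> Option<(&[u8], &[u8])> {
    if input.len() < n || !input[..n].iter().all(|b| b.is_ascii_digit()) {
        return None;
    }
    Some((&input[n..], &input[..n]))
}

// Map a buffer of ASCII digits to the corresponding integer.
fn buf_to_i32(buf: &[u8]) -> i32 {
    std::str::from_utf8(buf).unwrap().parse().unwrap()
}

fn main() {
    assert_eq!(take_digits(b"2015-07", 4), Some((&b"-07"[..], &b"2015"[..])));
    assert_eq!(take_digits(b"20a5", 4), None);
    assert_eq!(buf_to_i32(b"2015"), 2015);
    println!("ok");
}
```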

named!(pub year <&[u8], i32>, chain!(

The year is also returned as a 32-bit integer (there’s a pattern!). Using the chain! macro, we can chain together multiple parsers and work with the sub-results.

        pref: opt!(sign) ~
        y:    positive_year

Our sign is directly followed by 4 digits. It’s optional though, that’s why we use opt!. ~ is the concatenation operator in the chain! macro. We save the sub-results to variables (pref and y).

        ,
        || {
            pref.unwrap_or(1) * y
        }

To get the final result, we multiply the prefix (which comes back as either 1 or -1) with the year. Don’t forget the , (comma) right before the closure. This is a small syntactic hint for the chain! macro that the mapping function will follow and no more parsers.

We can now successfully parse a year:

assert_eq!(Done(&[][..], 2015), year(b"2015"));
assert_eq!(Done(&[][..], -0333), year(b"-0333"));

Our nom parser will return an IResult. If all went well, we get Done(I,O) with I and O being the appropriate types. For our case I is the same as the input, a buffer slice (&[u8]), and O is the output of the parser itself, an integer (i32). The return value could also be an Error(Err), if something went completely wrong, or Incomplete(u32), requesting more data to be able to satisfy the parser (you can’t parse a 4-digit year with only 3 characters input).
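nom's IResult from that era can be pictured with this simplified sketch; the real Error and Incomplete payloads are richer types than shown here:

```rust
// A simplified sketch of nom's result type (payloads simplified).
#[derive(Debug, PartialEq)]
enum IResult<I, O> {
    Done(I, O),       // remaining input + parsed value
    Error(u32),       // parsing failed (simplified error payload)
    Incomplete(u32),  // more input is needed to decide
}

fn main() {
    // Matching on the result tells us how parsing went:
    let result: IResult<&[u8], i32> = IResult::Done(&b""[..], 2015);
    match result {
        IResult::Done(rest, year) => {
            assert!(rest.is_empty());
            assert_eq!(year, 2015);
        }
        IResult::Error(_) | IResult::Incomplete(_) => unreachable!(),
    }
    println!("ok");
}
```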

Parsing the month and day is a bit easier now: we simply take the digits and map them to an integer:

named!(pub month <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));
named!(pub day   <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));

All that’s left is combining these 3 parts to parse a full date. Again we can chain the different parsers and map it to some useful value:

named!(pub date <&[u8], Date>, chain!(
        y: year      ~
           tag!("-") ~
        m: month     ~
           tag!("-") ~
        d: day
        ,
        || { Date{ year: y, month: m, day: d } }
));

Date is a small struct that can hold the necessary information, just as you would expect.
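Going by the fields used in the snippets, the structs presumably look something like this sketch; the derives and the set_tz helper used later are assumptions, not code from the post:

```rust
// Sketch of the structs used in the post; field names are taken from the
// code snippets, the rest (derives, set_tz) are assumptions.
#[derive(Debug, PartialEq, Eq)]
pub struct Date { pub year: i32, pub month: u32, pub day: u32 }

#[derive(Debug, PartialEq, Eq)]
pub struct Time { pub hour: u32, pub minute: u32, pub second: u32, pub tz_offset: i32 }

impl Time {
    // Return a copy of this Time with the timezone offset filled in.
    pub fn set_tz(self, tz_offset: i32) -> Time {
        Time { tz_offset, ..self }
    }
}

#[derive(Debug, PartialEq, Eq)]
pub struct DateTime { pub date: Date, pub time: Time }

fn main() {
    let t = Time { hour: 16, minute: 43, second: 52, tz_offset: 0 };
    assert_eq!(t.set_tz(3600).tz_offset, 3600);
    println!("ok");
}
```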

And it already works:

assert_eq!(Done(&[][..], Date{ year: 2015, month: 7, day: 16  }), date(b"2015-07-16"));
assert_eq!(Done(&[][..], Date{ year: -333, month: 6, day: 11  }), date(b"-0333-06-11"));

Parsing the time: 16:43:52

Next, we parse the time. The individual parts are really simple, just some digits:

named!(pub hour   <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));
named!(pub minute <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));
named!(pub second <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));

Putting them together becomes a bit more complex, as the second part is optional:

named!(pub time <&[u8], Time>, chain!(
        h: hour      ~
           tag!(":") ~
        m: minute    ~
        s: empty_or!(chain!(tag!(":") ~ s: second , || { s }))
        ,
        || { Time{ hour: h,
                   minute: m,
                   second: s.unwrap_or(0),
                   tz_offset: 0 } }
));

As you can see, even chain! parsers can be nested. The sub-results then must be mapped once for the inner parser and once into the final value of the outer parser. empty_or! returns an Option: None if there is no input left, otherwise it applies the nested parser and, if that parser doesn’t fail, returns Some(value).

Our parser now works for simple time information:

assert_eq!(Done(&[][..], Time{ hour: 16, minute: 43, second: 52, tz_offset: 0}), time(b"16:43:52"));
assert_eq!(Done(&[][..], Time{ hour: 16, minute: 43, second:  0, tz_offset: 0}), time(b"16:43"));

But it leaves out one important bit: the timezone.

Parsing the timezone: +0100

2015-07-16T16:43:52Z
2015-07-16T16:43:52+0100
2015-07-16T16:43:52-05:30


Above are three variants of valid datetimes with timezones. The timezone in an ISO8601 string is either an appended Z, indicating UTC, or a + or - sign followed by the offset from UTC in hours and minutes (with the minutes being optional).

Let’s cover the UTC special case first:

named!(timezone_utc <&[u8], i32>, map!(tag!("Z"), |_| 0));

This should look familiar by now. It’s a simple Z character, which we map to 0.

The other case is the sign-separated hour and minute offset.

named!(timezone_hour <&[u8], i32>, chain!(
        s: sign ~
        h: hour ~
        m: empty_or!(chain!(tag!(":")? ~ m: minute , || { m }))
        ,
        || { s * (h as i32 * 3600 + m.unwrap_or(0) as i32 * 60) }
));

We can re-use our already existing parsers and once again chain them to get what we want. The minutes are optional (and might be separated using a colon).

Instead of keeping this as is, we’re mapping it to the offset in seconds. We will see why later. We could also just map it to a tuple like
(s, h, m.unwrap_or(0)) and handle conversion at a later point.
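As a standalone sketch, the conversion is plain arithmetic; note that the sign has to apply to hours and minutes alike:

```rust
// Compute a timezone offset in seconds from its parts.
// sign is -1 or 1 and applies to both hours and minutes,
// so "-05:30" becomes -(5 * 3600 + 30 * 60) = -19800.
fn tz_offset_seconds(sign: i32, hours: u32, minutes: u32) -> i32 {
    sign * (hours as i32 * 3600 + minutes as i32 * 60)
}

fn main() {
    assert_eq!(tz_offset_seconds(1, 1, 0), 3600);     // "+0100"
    assert_eq!(tz_offset_seconds(-1, 5, 30), -19800); // "-05:30"
    println!("ok");
}
```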

Combined we get

named!(timezone <&[u8], i32>, alt!(timezone_utc | timezone_hour));

Putting it all together

We now have individual parsers for the date, the time and the timezone offset.

Putting it all together, our final datetime parser looks quite small and easy to understand:

named!(pub datetime <&[u8], DateTime>, chain!(
        d:   date      ~
             tag!("T") ~
        t:   time      ~
        tzo: empty_or!(call!(timezone))
        ,
        || {
            DateTime {
                date: d,
                time: t.set_tz(tzo.unwrap_or(0)),
            }
        }
));

Nothing special anymore. We can now parse all kinds of date strings:

2015-07-16T16:43:52+0100
2015-07-16T16:43:52Z
-0333-06-11T12:00:00Z


But it will also parse invalid dates and times:

2015-14-31T25:61:61


But this is fine for now. We can handle the actual validation in a later step. For example, we could use chrono, a time library, to handle this for us. Using chrono it’s obvious why we already multiplied our timezone offset to be in seconds: this time we can just hand it off to chrono as is.

The full code for this ISO8601 parser is available in the repository. It also includes a more complex parser that does some validation while parsing (it checks that the time and date are reasonable values, but it does not check that it is a valid date, for example).

What’s left?

These simple parsers, or even some more complex ones, are already usable, at least if you already have all the data at hand and a simple return value satisfies your needs. But especially for larger and more complex formats like media files, reading everything into memory and spitting out a single large value isn’t sufficient at all.

nom is prepared for that. Soon it will become as easy as using an object from which nom can Read. For most things you shouldn’t worry about that, as a simple BufReader will work.

For the other end of the chain, nom has Consumers. A Consumer handles the complex part of actually requesting data, calling the right sub-parsers and holding the necessary state. This is what you need to build yourself. Internally it’s best abstracted using some kind of state machine, so you always know which part of the format to expect next, how to parse it, what to return to the user and so on. Take a look at the MP4 parser, which has an MP4Consumer handling the different parts of the format. Soon my own library, rdb-rs, will have this as well.
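Such a state machine can be as small as an enum plus a transition function; a generic sketch (illustrative only, not nom's Consumer API):

```rust
// A minimal state-machine sketch for a streaming consumer: the state
// records which part of the format we expect next. (Illustrative only;
// nom's real Consumer looks different.)
#[derive(Debug, PartialEq, Clone, Copy)]
enum State {
    Header,
    Body,
    Done,
}

// Advance the machine by one parsed chunk.
fn step(state: State) -> State {
    match state {
        State::Header => State::Body, // header parsed, expect the body next
        State::Body => State::Done,   // body parsed, we're finished
        State::Done => State::Done,
    }
}

fn main() {
    let mut state = State::Header;
    while state != State::Done {
        state = step(state);
    }
    assert_eq!(state, State::Done);
    println!("ok");
}
```

In a real consumer each transition would also pick the right sub-parser for the incoming bytes and emit values to the user.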

A small thing aside: Geoffroy created machine to define state machines, and I wrote microstate for this.

Why am I doing this?

I’m currently developing rdb-rs, a library to parse and analyze Redis dump files. It’s currently limited to parsing and reformatting into several formats and can mainly be used as a CLI utility. But there are projects that could benefit from a nicer API to integrate it into other tools. The current parser is hand-made. It’s fast and it works, but it provides a limited, not very extensible API. I hope to get a proper parser done with nom that I can build on to provide all necessary methods, while still being super-fast and memory-safe. Work has already started, but I’m far from done for now.

Thanks to Geoffroy for the discussions, the help and for reading a draft of this post.

Redis Sentinel & Redis Cluster - what?

(by )

In the last week there were several questions regarding Redis Sentinel and Redis Cluster: whether one or the other will go away, or whether they need to be used in combination. This post tries to give a short and precise overview of both and what they are used for.

Redis Sentinel

Redis Sentinel was born in 2012 and first released when Redis 2.4 was stable. It is a system designed to help manage Redis instances.

It will monitor your master & slave instances, notify you about changed behaviour, handle automatic failover in case a master is down and act as a configuration provider, so your clients can find the current master instance.

Redis Sentinel runs as a separate program. You should have at least 3 Sentinel instances monitoring a master instance and its slaves. Sentinel instances try to find consensus when doing a failover, and only an odd number of instances will prevent most problems, 3 being the minimum. In this case one of the Sentinel instances can go down and a failover will still work, as (hopefully) the other two instances reach consensus on which slave to promote.

One thing about the configurable quorum: this is only the number of Sentinels that have to agree that a master is down. You still need N/2 + 1 Sentinels to vote for a slave to be promoted (where N is the total number of all Sentinels ever seen for this pod).

A pod of Sentinels can monitor multiple Redis master & slave nodes. Just make sure you don’t mix up names, add slaves to the right master and so on.

Full documentation for Sentinel.

Redis Cluster

If we go by first commit, then Cluster is even older than Sentinel, dating back to 2011. There’s a bit more info in antirez’ blog. It was released as stable with version 3.0 on April 1st, 2015.

Redis Cluster is a data sharding solution with automatic management, handling failover and replication.

With Redis Cluster your data is split across multiple nodes, each one holding a subset of the full data. Slave instances replicate a single master and act as fallback instances. If a master instance becomes unavailable due to network splits or software/hardware crashes, the remaining master nodes will register this and trigger a failover. A suitable slave of the unavailable master will then step up and be promoted to take over as the new master.
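How a key finds its node is deterministic: the Cluster spec maps every key to one of 16384 hash slots via HASH_SLOT = CRC16(key) mod 16384, using the CRC16/XMODEM variant (hash tags in {braces} are ignored in this sketch):

```rust
// CRC16 (XMODEM variant, polynomial 0x1021, initial value 0) as used by
// Redis Cluster to map keys to hash slots.
fn crc16(data: &[u8]) -> u16 {
    let mut crc: u16 = 0;
    for &byte in data {
        crc ^= (byte as u16) << 8;
        for _ in 0..8 {
            crc = if crc & 0x8000 != 0 {
                (crc << 1) ^ 0x1021
            } else {
                crc << 1
            };
        }
    }
    crc
}

// HASH_SLOT = CRC16(key) mod 16384
fn hash_slot(key: &[u8]) -> u16 {
    crc16(key) % 16384
}

fn main() {
    // Standard CRC16/XMODEM check value:
    assert_eq!(crc16(b"123456789"), 0x31C3);
    assert!(hash_slot(b"foo") < 16384);
    println!("ok");
}
```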

You don’t need additional failover handling when using Redis Cluster and you should definitely not point Sentinel instances at any of the Cluster nodes. You also want to use a smart client library that knows about Redis Cluster, so it can automatically redirect you to the right nodes when accessing data.

Redis Cluster specification and Redis Cluster Tutorial.
I gave a talk about Redis Cluster at the PHPUGDUS meeting last month, my slides are on

Want to hear more about Redis, Redis Sentinel or Redis Cluster? Just invite me!

Using a Kindle for status information

(by )

Back in 2011 I got a Kindle 4 (the non-touch version) and for some time it was the primary device for reading, be it ebooks, technical documentation or slides and transcripts from university.

But then I was using it less and less, and for the last one and a half years it basically lay around unused. While it is a good device for reading books, it isn’t for other content. It’s slow, it can’t handle PDFs properly (zooming is just awful) and adding notes is really annoying with that on-screen keyboard.

For some time now I have had this link saved: Kindle Weather Display.

Well, what better to do on a lazy holiday than some hacking with the Kindle? And so I did, and this is the current result: it displays the weather forecast.

For now it shows the weather forecast

As the original article is quite short on the precise steps to get this finished, I wanted to write them up here.

(Just in case: I’m not responsible if you break your kindle while hacking around with it.)

First you need to jailbreak your Kindle; this will make the following things a bit easier. You should get it done using this short guide. The next step is to set up SSH to get shell access on the Kindle. I used the USBnet variant described in the Kindle 4 NT Hacking Guide (yes, that’s the same as the jailbreak one). Despite its name, this can enable the SSH daemon on the WiFi interface too. Attach the Kindle via USB, mount it, then open usbnet/etc/config and add:


Now you can also enable auto-starting USBnet. Caution: As long as USBnet is running, you can’t mount the Kindle.

# the Kindle should be mounted into /mnt/sdb1
mv /mnt/sdb1/usbnet/DISABLED_auto /mnt/sdb1/usbnet/auto

Next, reboot your device. Once it’s back up you should be able to connect to it via SSH on the IP it has in your WiFi network.

ssh root@

The root password is either mario or of the form fionaABCD. Use the Kindle root password tool to find out based on the serial number.

There’s just one more tool: Kite, the application launcher. You can get it in this forum post. Installation is easy once you got the kite.gz. Copy the kite file to the kindle, then execute it:

jer@brain$ gunzip kite.gz
jer@brain$ scp kite root@
jer@brain$ ssh root@
root@kindle# cd /tmp
root@kindle# chmod +x kite
root@kindle# ./kite

One thing to note: you just downloaded some binary blob from some random forum and executed it. But you did that with the jailbreak and USBnet above anyway. And hey, that’s how these things worked back in the old days; it actually was totally normal in the PSP scene too.

Back to our project: Reboot the Kindle and in the start screen you should see some note that Kite is started as well. The Kindle will also contain some new directories:

root@kindle# ls -l /mnt/us/kite
drwxr-xr-x    2 root     root         8192 May 14 12:13 onboot
drwxr-xr-x    2 root     root         8192 May 14 11:57 ondrop

onboot is the relevant one. All scripts in there are executed by Kite on startup of the Kindle. That’s where we disable some stuff and display our image for the first time. Write the following code to a file and place it in onboot (or just get it from the repository):


/etc/init.d/framework stop
/etc/init.d/powerd stop

This will disable the framework (= basically the Kindle UI) and the power management daemon (= responsible for disabling WiFi and switching to the screensaver when idle for too long). In case you want to get back to the old state, just enable framework and powerd again (and first remove the script, which will otherwise directly disable them again).

The script now does the hard stuff, which is pretty easy: Clear the screen, get a new image, display it.


#!/bin/sh

cd "$(dirname "$0")"

rm -f display.png
# clear the screen twice to avoid ghosting artifacts on the e-ink display
eips -c
eips -c

if wget -q http://server/path/to/display.png; then
    eips -g display.png
else
    eips -g weather-image-error.png
fi

eips is the tool to write something on the screen or display an image.

Now to regularly and automatically get a new image, set up a cronjob:

root@kindle# mntroot rw
root@kindle# echo '0 7,19 * * * /mnt/us/weather/' >> /etc/crontab/root
root@kindle# mntroot ro
root@kindle# /etc/init.d/cron restart

The script will now be executed every day at 7:00 and 19:00, showing a picture from the internet (well, at best it’s a picture you generated).

As this post is already getting quite long, I leave the server-side up to you. All files (for both the Kindle and the server part) are in the GitHub repository: kindle-weather-display. This is the final result: My Kindle hanging on the wall right under the calendar. :)

It's hanging on the wall

Thanks to @e2b for proofreading a draft of this post.

New releases of hiredis-py and hiredis-node

(by )

I just published hiredis-py v0.2.0 to PyPI and hiredis-node v0.3.0 to npm.

Both of these do not include many new features compared to the last release, but it still took me hours and hours to get this out, and that’s for one simple reason: We now have basic Windows support in hiredis and thus in hiredis-py and hiredis-node as well.

These two modules only use the parser functionality of hiredis and leave the socket stuff to the language itself. Since v0.12, this parser functionality in hiredis was extracted into separate files, which made it easily possible to include the necessary compatibility code (if any) to use it on Windows as well.

What made these releases take so long to finish was the CI process. I didn’t want to include support unless I can make sure it keeps working, and for this I need to run the tests on the desired systems. But because I don’t personally own a Windows machine on which I could develop (nor would I want one), I had to use some external service for this. I was pointed to AppVeyor, basically the Travis CI for Windows. Setting everything up and making sure tests run correctly took me quite some time. The last time I touched any compiler on a Windows machine was several years back, so I had to gather all the needed information from the documentation and demo scripts from the Internet. And builds that take as long as 40 minutes for 6 different environments don’t really help to get started fast. The actual build per environment takes only 3 minutes, but even that is high compared to the Linux builds on Travis, which run in about a minute (that is, for 3 environments).

I finally reached green builds now and I hope I can keep it that way. I will rely on these builds for releases from now on to support Windows as best as I can, but as said before, I have no machine to test things in more detail and I rely solely on user input if anything breaks beyond the simple compile and test runs AppVeyor now does.

Next I will release a new version of hiredis itself with several fixes and new features, but this may take a bit more time (I wanted to finish it this week, but I can’t promise that anymore).

You’re interested in Open Tech? Come to otsconf in August! The first batch of tickets goes on sale this Sunday, 5 April, 5:00 pm CEST.

U2F demo application

(by )

Two weeks ago I got my first Universal Second Factor Device. It’s an inexpensive small USB key: the FIDO U2F Security Key. This key can be used as a 2nd Factor Authentication device.

It uses the protocol as specified by the FIDO Alliance, which consists of Google, Microsoft, Yubico, Lenovo and others.

What it provides

The overview document states:

The FIDO U2F protocol enables relying parties to offer a strong cryptographic 2nd factor option for end user security.

After the user has registered their device, the application can request authentication using this key on login (or when it seems necessary, e.g. when changing some other security settings).

Right now it relies on an extension for Chrome to provide the JavaScript API: FIDO U2F (Universal 2nd Factor) extension. Hopefully this will soon be implemented directly in the browser.

How it works

The U2F protocol is not complex at all, making it easy to implement and verify its correctness. It consists of 2 phases: registration and authentication, both requiring explicit human interaction.

Registration


  1. The server chooses a pseudo-random 32 byte challenge
  2. It sends this challenge, a version identifier and its appId to the browser
  3. The browser forwards this data and the origin of the challenge to the key, after asking the user for permission (requiring human interaction)
  4. The key assembles its public key, key handle and a signature. The signature includes the seen appId, a hash of the provided challenge and origin, its own public key and its key handle.
  5. The browser sends back this registration data to the server, where the certificate is checked, the signature validated and public key and key handle are saved.

The key is now registered for use with this origin and appId.

Authentication


  1. The server chooses a pseudo-random 32 byte challenge for every possible key handle.
  2. This data is sent to the browser, including the appId
  3. The browser forwards this data to the key, including the origin
  4. The key is activated by human interaction; it then creates a signature over a hash of the appId, a counter value and a hash of the provided challenge and origin. This signature and the counter value are sent back to the browser, which submits them to the server
  5. The server verifies the signature using the previously saved public key and verifies that the counter value is larger than any previously seen counter for this key handle.

If everything runs through, the user is successfully authenticated based on their key.
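The counter check in step 5 is what protects against replayed responses and cloned keys; sketched here in Rust with made-up types for illustration:

```rust
// Sketch of the counter check from step 5 (types and names are illustrative).
// A response is only accepted if its counter is strictly larger than the
// last counter stored for that key handle; otherwise it may be a replayed
// response or one from a cloned key.
struct Registration {
    counter: u32,
}

fn check_counter(reg: &mut Registration, response_counter: u32) -> Result<(), &'static str> {
    if response_counter <= reg.counter {
        return Err("counter did not increase: possible replay or cloned key");
    }
    reg.counter = response_counter; // remember the new counter
    Ok(())
}

fn main() {
    let mut reg = Registration { counter: 41 };
    assert!(check_counter(&mut reg, 42).is_ok());
    assert!(check_counter(&mut reg, 42).is_err()); // same counter replayed
    assert_eq!(reg.counter, 42);
    println!("ok");
}
```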

The implementation

The small demo application does nothing more than authenticate a user by name and password and authorize access to the private section of the website. A user is then able to add second factor authentication through U2F devices by registering one or more keys for their account. If a user has U2F devices registered, the server requires additional authentication by providing the U2F key to the website.

I decided to build this small application using the Cuba framework, a small Rack-based web framework providing only the absolute basics necessary for this. Authentication is handled by Shield, user data is stored using Ohm. For correct generation and verification of the U2F data I rely on ruby-u2f, an implementation of the full specification. The code itself is quite small; there are some todos and unimplemented things still open, but from what I understand right now they are not security-impacting. Before you run this in production, though, please take your own measures and check the implementation against the spec.

The following will only describe the U2F relevant parts. The rest should be straightforward.

Key registration

Before a user can use second factor authentication, they need to register their device with the service.

on get do
  registration_requests = u2f.registration_requests
  session[:challenges] = registration_requests.map(&:challenge)

  render "key_add",
    registration_requests: registration_requests
end

First we generate registration requests for the key to sign later. We then need to save the provided challenges into the session to be able to check them later again. These could also be saved directly into a database. We could also add sign requests for known key handles to later check if the key is already known, but for simplicity we don’t do this here.

Then we simply render our form, the important JavaScript part in the frontend is this:

var registerRequests = {{ registration_requests.to_json }};
var signRequests = [];

u2f.register(registerRequests, signRequests, function(registerResponse) {
    var form, response;

    if (registerResponse.errorCode) {
        return alert("Registration error: " + registerResponse.errorCode);
    }

    form = document.forms[0];
    response = document.querySelector("[name=response]");

    response.value = JSON.stringify(registerResponse);
    form.submit();
});

First we pass in the register and sign requests as JSON to be inspected by JavaScript. We then call the u2f API provided by the browser (for now added by an extension). The browser handles all the complicated stuff of verifying the provided request, asking for the user’s permission to use the key, sending it to the key and returning back the signed data to the browser. Once this is done, the callback is called. All that’s left to do is sending this data back to the server. We use a simple hidden form for that.

On the server side the data is parsed and verified. Again, this is handled completely by the library. All we need to do is call the right methods and save the key handle and public key to our database.

on post, param("response") do |response|
  u2f_response = U2F::RegisterResponse.load_from_json(response)

  reg = begin
          u2f.register!(session[:challenges], u2f_response)
        rescue U2F::Error => e
          session[:error] = "Unable to register: #{e.message}"
          redirect "/private/keys/add"
        end

  Registration.create(:certificate => reg.certificate,
                      :key_handle  => reg.key_handle,
                      :public_key  => reg.public_key,
                      :counter     => reg.counter,
                      :user        => current_user)

  session[:success] = "Key added."
  redirect "/private/keys"
end

The user now has a registered U2F key and must provide it on the next login to be successfully authenticated.

Second Factor authentication

A user with a registered U2F device first needs to login using the usual way by providing a username and the password.

if login(User, username, password)
  if current_user.registrations.size > 0
    session[:notice] = "Please insert one of your registered keys to proceed."
    session[:user_prelogin] = current_user.id
    redirect "/login/key"
  end

  # …
end

If the provided login data is correct and the user has U2F devices registered, we redirect them to the next page handling this.

In this second login step, we generate a sign request on the server:

# Fetch existing Registrations from your db
key_handles = user.registrations.map(&:key_handle)
if key_handles.empty?
  session[:notice] = "Please add a key first."
  redirect "/private/keys"
end

# Generate SignRequests
sign_requests = u2f.authentication_requests(key_handles)

and provide it to the user:

var signRequests = {{ sign_requests.to_json }};

u2f.sign(signRequests, function(signResponse) {
    var form, response;

    if (signResponse.errorCode) {
        return alert("Authentication error: " + signResponse.errorCode);
    }

    form = document.forms[0];
    response = document.querySelector("[name=response]");

    response.value = JSON.stringify(signResponse);
    form.submit();
});

Again, we simply pass on this data to the browser API, which makes sure the device is actually present and then lets the key sign the provided data. Once it returns we then send on this data to the server.

If there is an error in the signing process we just alert the user for now. For a better user experience this should be handled more nicely, showing the user a proper error message and giving the option to try again.

On the server side we need to check that the key handle exists for the user, then let the library validate the signed authentication request against our previously saved challenge. If everything checks out fine, we can finally log in the user and set the session. As the last step we also update the saved counter for the given key handle. This way we protect against replay attacks: new authentications are only valid if the sent counter is higher than our saved one.

u2f_response = U2F::SignResponse.load_from_json(response)

registration = user.registrations.find(key_handle: u2f_response.key_handle).first

unless registration
  session[:error] = "No matching key handle found."
  redirect "/login"
end

begin
  u2f.authenticate!(session[:challenges], u2f_response,
                    Base64.decode64(registration.public_key), registration.counter.to_i)
rescue U2F::Error => e
  session[:error] = "There was an error authenticating you: #{e}"
  redirect "/login"
end

registration.counter = u2f_response.counter
registration.save

And that’s it. That’s all it takes for a working U2F implementation.

(What’s not visible: the browser asks for permission to use the U2F key on registration, and the key is only usable for a short time after insertion, so it needs to be reinserted for each login, requiring explicit human interaction.)

The full code is available in the repository on GitHub: cuba-u2f-demo

Thanks to @soveran for proof-reading a draft of this post and of course for his work on Cuba.