Create GitHub releases with Rust using Hubcaps


For one of my projects I need to access the GitHub API to create releases. Luckily, through reading This Week in Rust #119, I discovered Hubcaps, a library for interfacing with GitHub.

Though it lacks some documentation and is not yet feature-complete, it already provides APIs for the parts relevant to releases.

On GitHub a release is always associated with a Git tag, but it needs to be specifically created to be shown on the site with the full description and optional assets attached. It is also possible to mark a release as a draft (then it is only visible to repo contributors) or as a pre-release, which is useful for alpha releases of a library or application.

Once you have a Git tag in your repository the API can be used to create an associated release using the following Rust code:

extern crate hyper;
extern crate hubcaps;

use std::{env, process};
use hyper::Client;
use hubcaps::{Github, ReleaseOptions};

fn main() {
    let token = match env::var("GITHUB_TOKEN").ok() {
        Some(token) => token,
        _ => {
            println!("example missing GITHUB_TOKEN");
            process::exit(1);
        }
    };

    let client = Client::new();
    let github = Github::new("hubcaps/0.1.1", &client, Some(token));

    let user = "username";
    let repo = "my-library";
    let name = "ONE DOT OH";
    let body = "This is a long long body";
    let tag = "v1.0.0";

    let opts = ReleaseOptions::builder(tag)
        .name(name)
        .body(body)
        .build();

    let repo = github.repo(user, repo);
    let release = repo.releases();
    match release.create(&opts) {
        Ok(_) => println!("Release created"),
        Err(e) => println!("Failed to create release: {:?}", e),
    }
}

If you clone Hubcaps and put the above code into a file named releases.rs in the examples/ folder, you can run it with cargo run --example releases. You need to get a personal access token first and set it in your environment (export GITHUB_TOKEN=<your token here>).

Of course the repository and tag are hard-coded, but this is easy to adapt.
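One way to lift those hard-coded values into configuration is to read them from the environment with fallbacks. This is just a sketch; the variable names GH_USER, GH_REPO and GH_TAG are made up for this example:

```rust
use std::env;

// Read a setting from the environment, falling back to a default.
// The variable names here are illustrative, not part of any API.
fn setting(key: &str, default: &str) -> String {
    env::var(key).unwrap_or_else(|_| default.to_string())
}

fn main() {
    let user = setting("GH_USER", "username");
    let repo = setting("GH_REPO", "my-library");
    let tag = setting("GH_TAG", "v1.0.0");
    println!("would create release {} on {}/{}", tag, user, repo);
}
```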

The code was tested with Rust 1.6 and hubcaps 0.1.1.

An updated version that works with hubcaps 0.2.0 is available online.

2015 in many words and some photos


Last year I summarized my year in a long blog post, and with 2015 being nearly over here comes this year’s version.

My year in numbers

I was at 6 different conferences in 6 different cities across 5 different countries:

  1. FOSDEM in Brussel, Belgium (February)
  2. .concat() in Salzburg, Austria (March)
  3. otsconf in Dortmund, Germany (August). I was one of the organizers
  4. Redis Dev Day in London, Great Britain (October)
  5. Hungarian Web Conference 2015 in Budapest, Hungary (November). I gave a talk about Rust
  6. 32c3 in Hamburg, Germany (December)

This includes the first conference I ever organized (otsconf) and my first real conference talk (Web Conference). I hope I get more opportunities to speak next year.

GitHub says I made 1434 countable contributions (1650 if you count private repositories) across dozens of repositories. I now maintain 8 published Rust crates (9 to be really correct) as well as 5 unreleased Rust libraries. I hope to finish up the remaining crates and polish the existing ones (and maybe bump them to a stable 1.0)

I released 6 versions of hiredis, 2 of hiredis-rb, 4 of hiredis-node and 2 of hiredis-py. My plan for 2016 is to mark hiredis as stable, push out the 1.0 release and then update the others as well (considering it might be used in Rails soon, this seems to be a good idea)

About 158 mails on the Redis mailing list were sent by me.

I posted 196 photos on Instagram and my ~/photos/2015 directory now contains about 4600 more photos.

I wrote more than 8600 tweets, more than half of them in reply to someone. I didn’t quite reach my own goal of writing a bit more, but 12 blog posts still come down to about one per month.

My year in photos and words

After the busy ending of 2014, 2015 started just as busy. Away from university, I spent more time working, but I found enough time for conferences and a quick one-week visit to San Francisco in April.

Golden Gate Bridge Bay Bridge

University and work kept me busy the whole summer, but I still caught a bit of sun.

After finishing up the Summer semester, I left for Norway in August.

Norway Flag

I came back for a weekend to run otsconf. It was a success, but see for yourself:

Back in Norway I had the absolute best time, for example hiking up a hill and sitting on the Kjeragbolten, a thousand meters above the fjord.


My semester ended early on the first of December, so I had another chance to travel around Norway. Even without a lot of light, the Lofoten are a beautiful place to stay.


With 2015 coming to an end, so are other things. I had to replace my old laptop, which had stayed with me for the last four years.


I really enjoyed living & traveling in Norway and I already put it on my todo list to come back in the summer. I was lucky to be part of so many different but welcoming communities. It’s a joy every time I meet these people again.


This year was so great because of all the great people I met along the way.
Thanks to the otsconf team, Ola, Shelly, Carsten, Leif and Hendrik.
Again thanks to my employer rrbone and my boss Dominik for giving me opportunities to learn and work.
Thanks to the new friends I made while living in Norway. I hope we manage to meet up again soon. Knowing people everywhere is great if you travel and I'm very grateful to all the people that hosted me, like Lotte in Hamburg and Tobias in Tromsø. And thanks to all those I didn't name explicitly, but who made this year so much fun.

2016 won’t start any less busy. With my next university semester only starting in April, I will make the best of that time. The next trip starts in less than five days.


Redis Dev Day London 2015


Last Monday the Redis Dev Day took place in London, followed by a small Unconference on Tuesday. The Redis Dev Day is a gathering of the people involved in Redis development; that means Redis creator Salvatore as well as developers and engineers from several companies are there to discuss the future development of Redis.

Thanks to Rackspace and especially Nikki I was able to attend as well.

The Dev Day itself was packed with proposals and interesting ideas about improvements and new features for Redis. In the following I try to sum up some of them, ordered by relevance as I see it (most relevant first).

NoNoSQL for Redis

Salvatore himself proposed this one: native indexing in Redis. He recently published an article on indexing based on Sorted Sets. While this method is manual, it could very well be hidden behind a nice client interface (and indeed there are some out there, I just can't find a good example). But having it right inside Redis might be more memory-efficient and faster, avoids transactions and might be easier to use. Salvatore proposed new commands for that, for example to select based on a previously defined index:

IDXSELECT myindex IFI FIELDS $3 WHERE $1 == 56 and $2 >= 10.00 AND $2 <= 30.00

None of this is final yet; there are a lot of things to get right before this can be implemented. For example, it's not enough to provide the commands for selection based on indexes: adding, updating and removing index entries is necessary as well. More in-depth discussions happened the next day, prior to the Unconf.

Even though this kinda goes against the current idea of Redis – provide the basic tools with a simple API and not much more – there is the possibility to implement it right and make it as usable as Redis is right now.

This proposal needs more design effort to get right (both on the exposed API and internal design).

Redis as a Cloud Native

Bill – yes, the real one – has always been a heavy user of Sentinel and thus had the most insight into what works and what doesn't. And in fact, one big area where Redis still does not work in a way anyone can be satisfied with is inside a Docker container. Because of how Sentinel (or Cluster) announces itself (or its monitored instances) and the way Docker remaps ports, it is currently hardly possible to run it inside a container without unusual configuration (like --net=host).

This needs improvements like making it possible to specify the announce address and port for all running modes. Another thing that should be doable is configuration replication across nodes in a pod or Cluster. This could easily be handled by a new command. Instead of replicating all configuration automatically, this needs to be triggered by an admin, making it easy to only selectively replicate configuration options.

Both things seem necessary and not too hard.

Other proposals include:


Lua scripting

Since the introduction of Lua scripting inside Redis, more and more people use it as a way to abstract logic away behind a single (atomic) call. Just look at what's possible by implementing quadtrees in Lua inside Redis.

Because of a security issue, access to the debug feature in Redis was disabled. This also broke some of the available options to properly debug Lua scripts. Debug functionality is needed once you go this route, so bringing it back eventually is a good idea, and it might finally close some very old issues.

Command composition

For some commands we have STORE variants (SORT has it as an option; SINTERSTORE and others are their own commands).

A more general form like STORE dest SINTER keyA keyB could make some users happy. The current code base doesn’t support that in a generic way, but it’s not impossible to change that. This might need a bit more design effort to be applied to all data types though.


Modularization

Every time Redis gets discussed, the issue of modularity comes up. Most of the time I am a fan of making components reusable, modularizing them where possible and abstracting away the hard stuff.

Redis is different here. A lot of the stuff in Redis interacts with each other and there is hardly a clear cut to make.

Should all underlying data type implementations be extracted? They would surely be useful elsewhere, but then they won't benefit from the shortcuts currently made.

Should the IO be completely separated from parsing and dispatching the commands? Sounds useful for sure, especially now that the base is used in Disque as well. But again, the coupling allows for some shortcuts.

Should hiredis be integrated and become part of the project? No way, hiredis is also a stand-alone client used by many others. Keeping it in-tree would make it harder to develop on its own.

One thing we will do for sure is to unify the code base again. The in-tree hiredis is currently not the same as the stand-alone one, partly due to the updated sds (the string implementation) and partly because some bugs were fixed in the stand-alone project that don't affect Redis (I hope so).

omnomnom - Parsing ISO8601 dates using nom


There are thousands of ways to note down a date and time. The international date format is standardized as ISO8601, though it still allows a wide variety of formats.

The basic format looks like this:

YYYY-MM-DDTHH:MM:SS+OOOO

And that’s what we will parse today using nom, a parser combinator library created by Geoffroy Couprie.

The idea is that you write small self-contained parsers, which all do only one simple thing, like parsing the year in our string, and then combine these small parsers to a bigger one to parse the full format. nom comes with a wide variety of small parsers: handling different integers, reading simple byte arrays, optional fields, mapping parsed data over a function, … Most of them are provided as combinable macros. It’s very easy to implement your own small parsers, either by providing a method that handles a short byte buffer or by combining existing parsers.
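To get a feeling for what such a small parser does under the hood, here is a hand-rolled, nom-free sketch of the idea: consume a prefix of the input and hand back the parsed value together with the remaining bytes. The function name is made up for this illustration:

```rust
// Parse a run of leading ASCII digits into a number and return it
// together with the rest of the input, or None if there are no digits.
fn parse_u32_prefix(input: &[u8]) -> Option<(&[u8], u32)> {
    let end = input
        .iter()
        .position(|b| !b.is_ascii_digit())
        .unwrap_or(input.len());
    if end == 0 {
        return None;
    }
    let value = std::str::from_utf8(&input[..end]).ok()?.parse().ok()?;
    Some((&input[end..], value))
}

fn main() {
    assert_eq!(parse_u32_prefix(b"2015-07"), Some((&b"-07"[..], 2015)));
    assert_eq!(parse_u32_prefix(b"abc"), None);
}
```

nom's macros generate functions of essentially this shape and give you combinators to glue them together.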

So let’s dive right in and see how to use nom in real code.


This is what we want to parse:

2015-07-16T16:43:52+0100

It has several parts we need to parse:

YYYY-MM-DDTHH:MM:SS+OOOO

with the following meaning:

Characters Meaning
YYYY The year, can be negative or null and can be extended if necessary
MM Month from 1 to 12 (0-prefixed)
DD Day from 1 to 31 (0-prefixed)
T Separator between date and time
HH Hour, 0-23 (0-prefixed)
MM Minutes, 0-59 (0-prefixed)
SS Seconds, 0-59 (0-prefixed)
OOOO Timezone offset, separated by a + or - sign or Z for UTC

Parts like the seconds and the timezone offset are optional. Datetime strings without them will default to a zero value for that field. The date parts are separated by a dash (-) and the time parts by a colon (:).

We will build a small parser for each of these parts and at the end combine them to parse a full datetime string.

Parsing the date: 2015-07-16

Let’s start with the sign. As we need it several times, we create its own parser for that. Parsers are created by giving them a name, stating the return value (or defaulting to a byte slice) and the parser combinators to handle the input.

named!(sign <&[u8], i32>, alt!(
        tag!("-") => { |_| -1 } |
        tag!("+") => { |_| 1 }
    )
);

First, we parse either a plus or a minus sign. This combines two already existing parsers: tag!, which will match the given byte array (in our case a single character) and alt!, which will try a list of parsers, returning on the first successful one. We can directly map the result of the sub-parsers to either -1 or 1, so we don’t need to deal with the byte slice later.

Next we parse the year, which consists of an optional sign and 4 digits (I know, I know, it is possible to extend this to more digits, but let’s keep it simple for now).

named!(positive_year  <&[u8], i32>, map!(call!(take_4_digits), buf_to_i32));
named!(pub year <&[u8], i32>, chain!(
        pref: opt!(sign) ~
        y:    positive_year
        ,
        || {
            pref.unwrap_or(1) * y
        }
));

A lot of additional stuff here. So let’s separate it.

named!(positive_year  <&[u8], i32>, map!(call!(take_4_digits), buf_to_i32));

This creates a new named parser, that again returns the remaining input and an 32-bit integer. To work, it first calls take_4_digits and then maps that result to the corresponding integer (using a small helper function).
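The post doesn't show that helper, but it presumably looks something like this (a sketch, assuming the input has already been validated to contain only ASCII digits):

```rust
use std::str;

// Convert a validated ASCII digit buffer into an i32.
// Unwrapping is fine here because take_4_digits only lets digits through.
fn buf_to_i32(s: &[u8]) -> i32 {
    str::from_utf8(s).unwrap().parse().unwrap()
}

fn main() {
    assert_eq!(buf_to_i32(b"2015"), 2015);
}
```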

take_4_digits is another small helper parser. We also got one for 2 digits:

named!(pub take_4_digits, flat_map!(take!(4), check!(is_digit)));
named!(pub take_2_digits, flat_map!(take!(2), check!(is_digit)));

This takes 4 (or 2) characters from the input and checks that each character is a digit. flat_map! and check! are quite generic, so they are useful for a lot of cases.

named!(pub year <&[u8], i32>, chain!(

The year is also returned as a 32-bit integer (there’s a pattern!). Using the chain! macro, we can chain together multiple parsers and work with the sub-results.

        pref: opt!(sign) ~
        y:    positive_year

Our sign is directly followed by 4 digits. It’s optional though, that’s why we use opt!. ~ is the concatenation operator in the chain! macro. We save the sub-results to variables (pref and y).

        ,
        || {
            pref.unwrap_or(1) * y
        }

To get the final result, we multiply the prefix (which comes back as either 1 or -1) with the year. Don’t forget the , (comma) right before the closure. This is a small syntactic hint for the chain! macro that the mapping function will follow and no more parsers.

We can now successfully parse a year:

assert_eq!(Done(&[][..], 2015), year(b"2015"));
assert_eq!(Done(&[][..], -0333), year(b"-0333"));

Our nom parser will return an IResult. If all went well, we get Done(I,O) with I and O being the appropriate types. For our case I is the same as the input, a buffer slice (&[u8]), and O is the output of the parser itself, an integer (i32). The return value could also be an Error(Err), if something went completely wrong, or Incomplete(u32), requesting more data to be able to satisfy the parser (you can’t parse a 4-digit year with only 3 characters input).

Parsing the month and day is a bit easier now: we simply take the digits and map them to an integer:

named!(pub month <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));
named!(pub day   <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));

All that’s left is combining these 3 parts to parse a full date. Again we can chain the different parsers and map it to some useful value:

named!(pub date <&[u8], Date>, chain!(
        y: year      ~
           tag!("-") ~
        m: month     ~
           tag!("-") ~
        d: day
        ,
        || { Date{ year: y, month: m, day: d } }
));

Date is a small struct that can hold the necessary information, just as you would expect.
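Its exact definition isn't shown in the post; a minimal sketch that matches the usage above would be:

```rust
// Minimal Date struct matching how the parser uses it.
#[derive(Debug, PartialEq, Eq)]
pub struct Date {
    pub year: i32,
    pub month: u32,
    pub day: u32,
}

fn main() {
    let d = Date { year: 2015, month: 7, day: 16 };
    assert_eq!(d.year, 2015);
}
```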

And it already works:

assert_eq!(Done(&[][..], Date{ year: 2015, month: 7, day: 16  }), date(b"2015-07-16"));
assert_eq!(Done(&[][..], Date{ year: -333, month: 6, day: 11  }), date(b"-0333-06-11"));

Parsing the time: 16:43:52

Next, we parse the time. The individual parts are really simple, just some digits:

named!(pub hour   <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));
named!(pub minute <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));
named!(pub second <&[u8], u32>, map!(call!(take_2_digits), buf_to_u32));

Putting them together becomes a bit more complex, as the second part is optional:

named!(pub time <&[u8], Time>, chain!(
        h: hour      ~
           tag!(":") ~
        m: minute    ~
        s: empty_or!(chain!(tag!(":") ~ s: second , || { s }))
        ,
        || { Time{ hour: h,
                   minute: m,
                   second: s.unwrap_or(0),
                   tz_offset: 0 }
        }
));

As you can see, even chain! parsers can be nested. The sub-parts then must be mapped once for the inner parser and once into the final value of the outer parser. empty_or! returns an Option. Either None if there is no input left or it applies the nested parser. If this parser doesn’t fail, Some(value) is returned.
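The Time struct isn't shown either; a sketch matching its usage would be the following, with tz_offset kept in seconds (an i32), which becomes relevant for the timezone parser:

```rust
// Minimal Time struct matching how the parser uses it.
// tz_offset holds the timezone offset in seconds east of UTC.
#[derive(Debug, PartialEq, Eq)]
pub struct Time {
    pub hour: u32,
    pub minute: u32,
    pub second: u32,
    pub tz_offset: i32,
}

fn main() {
    let t = Time { hour: 16, minute: 43, second: 0, tz_offset: 0 };
    assert_eq!(t.minute, 43);
}
```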

Our parser now works for simple time information:

assert_eq!(Done(&[][..], Time{ hour: 16, minute: 43, second: 52, tz_offset: 0}), time(b"16:43:52"));
assert_eq!(Done(&[][..], Time{ hour: 16, minute: 43, second:  0, tz_offset: 0}), time(b"16:43"));

But it leaves out one important bit: the timezone.

Parsing the timezone: +0100


2015-07-16T16:43:52Z
2015-07-16T16:43:52+0100
2015-07-16T16:43:52+01:00

Above are three variants of valid dates with timezones. The timezone in an ISO8601 string is either an appended Z, indicating UTC, or it's separated using a sign (+ or -) and appends the offset from UTC in hours and minutes (with the minutes being optional).

Let’s cover the UTC special case first:

named!(timezone_utc <&[u8], i32>, map!(tag!("Z"), |_| 0));

This should look familiar by now. It’s a simple Z character, which we map to 0.

The other case is the sign-separated hour and minute offset.

named!(timezone_hour <&[u8], i32>, chain!(
        s: sign ~
        h: hour ~
        m: empty_or!(chain!(tag!(":")? ~ m: minute , || { m }))
        ,
        || { s * ((h as i32) * 3600 + (m.unwrap_or(0) as i32) * 60) }
));

We can re-use our already existing parsers and once again chain them to get what we want. The minutes are optional (and might be separated using a colon).

Instead of keeping this as is, we're mapping it to the offset in seconds. We will see why later. We could also just map it to a tuple like (s, h, m.unwrap_or(0)) and handle the conversion at a later point.
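The conversion itself is plain arithmetic. As a standalone sketch (with the sign applied to the whole offset, hours and minutes alike; the function name is made up):

```rust
// Convert a parsed sign/hour/minute timezone offset into
// seconds east of UTC.
fn tz_offset_seconds(sign: i32, hours: i32, minutes: i32) -> i32 {
    sign * (hours * 3600 + minutes * 60)
}

fn main() {
    assert_eq!(tz_offset_seconds(1, 1, 0), 3600); // "+01:00"
    assert_eq!(tz_offset_seconds(-1, 2, 30), -9000); // "-02:30"
}
```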

Combined we get

named!(timezone <&[u8], i32>, alt!(timezone_utc | timezone_hour));

Putting it all together

We've now got individual parsers for the date, the time and the timezone offset.

Putting it all together, our final datetime parser looks quite small and easy to understand:

named!(pub datetime <&[u8], DateTime>, chain!(
        d:   date      ~
             tag!("T") ~
        t:   time      ~
        tzo: empty_or!(call!(timezone))
        ,
        || {
            DateTime {
                date: d,
                time: t.set_tz(tzo.unwrap_or(0)),
            }
        }
));

Nothing special anymore. We can now parse all kinds of date strings:

2015-07-16T16:43:52+0100
-0333-06-11T12:00:00Z

But it will also parse invalid dates and times:

2015-13-34T28:92:61
But this is fine for now. We can handle the actual validation in a later step. For example, we could use chrono, a time library, to handle this for us. Using chrono it’s obvious why we already multiplied our timezone offset to be in seconds: this time we can just hand it off to chrono as is.

The full code for this ISO8601 parser is available online. The repository also includes a more complex parser that does some validation while parsing (it checks that the time and date are reasonable values, but it does not check that it is a valid date, for example).

What’s left?

These simple parsers, or even some more complex ones, are already usable, at least if you already have all the data at hand and a simple return value satisfies your needs. But especially for larger and more complex formats like media files, reading everything into memory and spitting out a single large value isn't sufficient at all.

nom is prepared for that. Soon it will become as easy as using an object from which nom can Read. For most things you shouldn’t worry about that, as a simple BufReader will work.

For the other end of the chain, nom has Consumers. A Consumer handles the complex part of actually requesting data, calling the right sub-parsers and holding the necessary state. This is what you need to build yourself. Internally it’s best abstracted using some kind of state machine, so you always know which part of the format to expect next, how to parse it, what to return to the user and so on. Take a look at the MP4 parser, which has an MP4Consumer handling the different parts of the format. Soon my own library, rdb-rs, will have this as well.
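To illustrate the state-machine idea behind a Consumer (this is not nom's actual API, just a sketch of the pattern): track which part of the format is expected next and advance as sub-parsers succeed.

```rust
// Which part of a datetime stream the consumer expects next.
// Names are illustrative, not nom's Consumer API.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Expect {
    Date,
    Time,
    Timezone,
    Done,
}

// Advance to the next expected part after a successful sub-parse.
fn advance(state: Expect) -> Expect {
    match state {
        Expect::Date => Expect::Time,
        Expect::Time => Expect::Timezone,
        Expect::Timezone => Expect::Done,
        Expect::Done => Expect::Done,
    }
}

fn main() {
    let mut state = Expect::Date;
    while state != Expect::Done {
        state = advance(state);
    }
    assert_eq!(state, Expect::Done);
}
```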

Small thing aside: Geoffroy created machine to define a state machine and I got microstate for this.

Why am I doing this?

I’m currently developing rdb-rs, a library to parse and analyze Redis dump files. It’s currently limited to parsing and reformatting into several formats and can mainly be used as a CLI utility. But there are projects that could benefit from a nicer API to integrate it into another tool. The current parser is hand-made. It’s fast and it’s working, but it provides a limited, not very extensible API. I hope to get a proper parser done with nom that I can build on to provide all necessary methods, while still being super-fast and memory-safe. Work has already started, but I’m far from done.

Thanks to Geoffroy for the discussions, the help and for reading a draft of this post.

Redis Sentinel & Redis Cluster - what?


In the last week there were several questions regarding Redis Sentinel and Redis Cluster: whether one or the other will go away, or whether they need to be used in combination. This post tries to give short and precise information about both and what they are used for.

Redis Sentinel

Redis Sentinel was born in 2012 and first released when Redis 2.4 was stable. It is a system designed to help manage Redis instances.

It will monitor your master & slave instances, notify you about changed behaviour, handle automatic failover in case a master is down and act as a configuration provider, so your clients can find the current master instance.

Redis Sentinel runs as a separate program. You should have at least 3 Sentinel instances monitoring a master instance and its slaves. Sentinel instances try to find consensus when doing a failover, and only an odd number of instances will prevent most problems, 3 being the minimum. In this case one of the Sentinel instances can go down and a failover will still work, as (hopefully) the other two instances reach consensus on which slave to promote.

One thing about the configurable quorum: this is only the number of Sentinels that have to agree that a master is down. You still need N/2 + 1 Sentinels to vote for a slave to be promoted (where N is the total number of all Sentinels ever seen for this pod).
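In code, the distinction is simple: the quorum is configurable, but the vote to actually authorize the failover always needs a strict majority. A sketch of that arithmetic:

```rust
// Number of Sentinel votes required to authorize a failover,
// given the total number of Sentinels ever seen for this pod.
fn failover_majority(total_sentinels: u32) -> u32 {
    total_sentinels / 2 + 1
}

fn main() {
    assert_eq!(failover_majority(3), 2); // one instance may be down
    assert_eq!(failover_majority(5), 3); // two may be down
}
```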

A pod of Sentinels can monitor multiple Redis master & slave nodes. Just make sure you don’t mix up names, add slaves to the right master and so on.

Full documentation for Sentinel.

Redis Cluster

If we go by first commit, then Cluster is even older than Sentinel, dating back to 2011. There’s a bit more info in antirez’ blog. It’s released as stable with version 3.0 as of April 1st, 2015.

Redis Cluster is a data sharding solution with automatic management, handling failover and replication.

With Redis Cluster your data is split across multiple nodes, each one holding a subset of the full data. Slave instances replicate a single master and act as fallback instances. In case a master instance becomes unavailable due to network splits or software/hardware crashes, the remaining master nodes in the Cluster will register this and reach a state that triggers a failover. A suitable slave of the unavailable master node will then step up and be promoted to take over as the new master.
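The sharding works by mapping every key to one of 16384 hash slots, computed as CRC16(key) mod 16384 (CRC-16/XMODEM; the real implementation also honors {...} hash tags, which this sketch omits):

```rust
// CRC-16/XMODEM, the checksum Redis Cluster uses for key hashing.
fn crc16(data: &[u8]) -> u16 {
    let mut crc: u16 = 0;
    for &byte in data {
        crc ^= (byte as u16) << 8;
        for _ in 0..8 {
            crc = if crc & 0x8000 != 0 {
                (crc << 1) ^ 0x1021
            } else {
                crc << 1
            };
        }
    }
    crc
}

// Map a key to its hash slot (hash tag handling omitted for brevity).
fn hash_slot(key: &[u8]) -> u16 {
    crc16(key) % 16384
}

fn main() {
    // The Cluster spec's reference value: CRC16("123456789") == 0x31C3.
    assert_eq!(crc16(b"123456789"), 0x31C3);
    assert_eq!(hash_slot(b"123456789"), 12739);
}
```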

You don’t need additional failover handling when using Redis Cluster and you should definitely not point Sentinel instances at any of the Cluster nodes. You also want to use a smart client library that knows about Redis Cluster, so it can automatically redirect you to the right nodes when accessing data.

Redis Cluster specification and Redis Cluster Tutorial.
I gave a talk about Redis Cluster at the PHPUGDUS meeting last month; my slides are available online.

Want to hear more about Redis, Redis Sentinel or Redis Cluster? Just invite me!