Releasing Rust projects, the automatic way

One of the strengths of the Rust ecosystem is its package manager Cargo and the package registry crates.io. Pulling in a dependency is as easy as adding it to your project’s Cargo.toml and running cargo build.

Releasing your own project is nearly as easy. Make sure everything works, set a version number in your Cargo.toml and run cargo publish. It will package the code and upload it.

Of course that’s not the whole story. For a proper release that people will want to use, you should follow some good practices:

  1. Have tests and make sure they are green. Most people already use Travis CI. The travis-cargo project makes it easy to test all channels (stable, beta, nightly, maybe a specific version), run documentation tests and upload coverage info and documentation.
  2. Keep a changelog. Your software is not done with the first release. It changes, bugs get fixed, new features get introduced. Keeping a changelog helps users to understand what changed from version to version.
  3. Pick a version number. This is not nearly as easy as it sounds. Your project’s version number carries a lot of information, often more than we’d like. The Rust ecosystem recommends strictly following semver, but even that has ambiguities and requires a lot of thought to do the right thing.
  4. Release on the right platforms. Even though crates.io is the package registry you want your project in, a GitHub release is nice to have. Maybe your project is an application and you want to distribute pre-compiled binaries.

At the moment a lot of people carry out each of these steps manually. Maybe they have a few scripts lying around that help reduce the number of errors that can happen. All in all, there’s still too much manual work required. It does not have to be that way.

Stephan Bönnemann built semantic-release for the npm ecosystem a while ago. It allows for fully automated package publishing by relying on a few conventions and a lot of automation.

I wanted to have a similar thing for the Rust ecosystem. That’s why Jan aka @neinasaservice and I sat down at last year’s 32c3 and started hacking on a tool to achieve just that.

It took us a while to get something working, but now I can present to you:

🚀 semantic-rs 🚀

What is it?

semantic-rs gives you fully automatic crate publishing. It runs after your tests are finished, analyzes the latest commits, picks out a version number, creates a commit and git tag, creates a release on GitHub and publishes your crate on crates.io.

All you have to do is follow the AngularJS commit message conventions, which are really easy. Your commit message consists of a type, an optional scope, a subject and an optional body.

<type>(<scope>): <subject>

The type should be one of the following: feat, fix, docs, style, refactor, test or chore.

The next version number is decided based on the types of the commits since the last release. A feat will trigger a minor version bump, a fix a patch version bump. The other types don’t cause a release.

However, should you make a breaking change, you need to document this in the commit message as well. Include BREAKING CHANGE in the body of the commit message and describe what changed and how to adapt existing code (if possible). This will then trigger a major version bump.
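To make the bump rules concrete, here is a minimal, illustrative sketch of how commit messages map to a version bump. The enum and function are made up for this post; this is not semantic-rs’s actual API:

```rust
// Illustrative only: map conventional commit messages to a semver bump.
// `Bump` and `bump_for` are invented names for this example.

#[derive(Debug, PartialEq, PartialOrd)]
enum Bump {
    None,
    Patch,
    Minor,
    Major,
}

fn bump_for(commits: &[&str]) -> Bump {
    let mut bump = Bump::None;
    for msg in commits {
        let candidate = if msg.contains("BREAKING CHANGE") {
            Bump::Major
        } else if msg.starts_with("feat") {
            Bump::Minor
        } else if msg.starts_with("fix") {
            Bump::Patch
        } else {
            // docs, style, refactor, test, chore: no release
            Bump::None
        };
        // The strongest change since the last release wins.
        if candidate > bump {
            bump = candidate;
        }
    }
    bump
}

fn main() {
    let commits = [
        "docs(readme): fix typo",
        "fix(parser): handle empty scope",
        "feat(cli): add --dry-run flag",
    ];
    // feat outranks fix and docs, so this set of commits yields a minor bump
    println!("{:?}", bump_for(&commits));
}
```

The derived `PartialOrd` follows the declaration order of the variants, so comparing candidates directly picks the strongest bump.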

What works?

The Happy Path.

If everything is configured properly and the tests succeed, semantic-rs will correctly pick a version, write the changes to a changelog, create a release commit, tag it, create a GitHub release and publish on crates.io.

The test-project crate is published completely automatically now.

semantic-rs already has some safety features integrated. It will only run when the build is on the master branch (or the branch you configure), and it will make sure that it only runs once on the build leader (which is always the first job in your build matrix). It also waits for the other jobs to finish and succeed before trying to do a release.

What’s missing?

In case of problems, semantic-rs will just bail out. That might leave you with changes pushed to GitHub but not published on crates.io (at worst), or with no visible changes and no new release (at best). We’re working hard on making this safer to use, with better error reporting.

Installing semantic-rs from source each time your tests run adds significant overhead to the build time, as it must be compiled again and again. In the future we will provide binary releases that you can simply drop into Travis and it will work.

It’s not released on crates.io yet, because we’re using a dependency straight from GitHub. That should be fixed soon, once they push out a release as well.

Now that we got that out of the way, let’s see how to actually use it.

How to use it

Right now using semantic-rs is not as straightforward as it could be; we’re working on that. To run it on Travis you have to follow these manual steps.

The first job of your build matrix will be used to do the publish, so make sure it is a full build. Make it your stable build to be on the safe side. Your .travis.yml should contain this:

rust:
  - stable
  - beta
  - nightly

Next, install semantic-rs on Travis by adding this to your .travis.yml:

  - |
      cargo install --git --debug &&
      export PATH=$HOME/.cargo/bin:$PATH &&
      git config --global user.name semantic-rs &&
      git config --global user.email semantic@rs

(This installs semantic-rs in debug mode, which compiles a lot faster and currently has no significant runtime impact.)

This will also set a git user and mail address, which will be used to create the git tag. You can change this to your own name and email address.

Now add a personal access token from GitHub. It only needs the public_repo permission (unless of course your repository is private).

Add it to your .travis.yml encrypted:

$ travis encrypt GH_TOKEN=<your token here> --add

To release on crates.io you need a token as well. Get it from your account settings and add it to your .travis.yml:

$ travis encrypt CARGO_TOKEN=<your token here> --add

At last, make sure semantic-rs runs after the tests succeed. Add this to the .travis.yml:

after_success:
  - semantic-rs

Make sure to follow the AngularJS Git Commit Message Conventions. semantic-rs uses this convention to decide what the next release version should be.

See the full .travis.yml of our test project.

What’s next?

We still have some plans for semantic-rs.

First we need to make it safer and easier to integrate into a project’s workflow.

We also want to look into how we can determine more information about a project to assist the developers. Ideas we have include running integration tests from the previous version to detect breaking changes and statically analyzing code changes to determine their impact. Rust’s RFC 1105 already defines the impact certain changes should have. Maybe it is possible to automatically check some of these things.

We would be happy to hear from you. If semantic-rs breaks or otherwise does not fit into your workflow, let us know: open an issue to discuss it. If you want to use it and have ideas about what is necessary or could be improved, talk to us!

Load your config into your environment

It has become quite popular to store configuration variables in the environment, to be loaded later by your application. Having all configuration available this way is part of the twelve-factor app definition.

The idea is to place your variables in a .env file and load this as environment variables to be accessed by your application. Most of the time you can just plug in one of the dozens of libraries that load this config from a file and your application can fetch the values as normal from the environment.

But sometimes you might want this config loaded into your shell or some other interactive tool. That’s where you can use dotenv-shell, a small tool written in Rust. It wraps rust-dotenv and lets you load the config and then execute a program (your shell by default).

First install the tool:

cargo install dotenv-shell

Create a .env file with your config:

echo "REDIS_URL=redis://localhost:6379" > .env

Then start a shell and you can access the configuration as environment variables:

$ dotenv-shell
$ echo $REDIS_URL
redis://localhost:6379

Of course you can launch whatever tool you want:

$ dotenv-shell /usr/bin/irb
irb(main):001:0> ENV['REDIS_URL']
=> "redis://localhost:6379"

Available on GitHub and as a Crate.
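Under the hood, loading a .env file amounts to parsing KEY=VALUE lines and exporting each pair into the environment. Here is a toy sketch in Rust of that parsing step (`parse_env` is a made-up helper; rust-dotenv handles more syntax, such as quoting and export prefixes):

```rust
// Illustrative sketch of what a dotenv loader does under the hood.
use std::collections::HashMap;

fn parse_env(contents: &str) -> HashMap<String, String> {
    let mut vars = HashMap::new();
    for line in contents.lines() {
        let line = line.trim();
        // Skip blank lines and comments
        if line.is_empty() || line.starts_with('#') {
            continue;
        }
        // Split on the first '=' into key and value
        if let Some(pos) = line.find('=') {
            vars.insert(
                line[..pos].trim().to_string(),
                line[pos + 1..].trim().to_string(),
            );
        }
    }
    vars
}

fn main() {
    let vars = parse_env("REDIS_URL=redis://localhost:6379\n# a comment\n");
    // A real loader would now call std::env::set_var for each pair
    // before spawning the shell or program.
    println!("{}", vars["REDIS_URL"]);
}
```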


Only now did I learn about another application doing just the same: benv by @timonvonk.

Create GitHub releases with Rust using Hubcaps

For one of my projects I need to access the GitHub API to create releases. Luckily, through reading This Week in Rust #119, I discovered Hubcaps, a library for interfacing with GitHub.

Though it lacks some documentation and is not yet fully finished, it already provides APIs for the relevant parts regarding releases.

On GitHub a release is always associated with a Git tag, but it needs to be specifically created to show up on the site with the full description and optional assets attached. It is also possible to mark a release as a draft (then it is only visible to repo contributors) or as a pre-release, useful for alpha releases of a library or application.

Once you have a Git tag in your repository the API can be used to create an associated release using the following Rust code:

extern crate hyper;
extern crate hubcaps;

use std::{env, process};
use hyper::Client;
use hubcaps::{Github, ReleaseOptions};

fn main() {
    let token = match env::var("GITHUB_TOKEN").ok() {
        Some(token) => token,
        _ => {
            println!("example missing GITHUB_TOKEN");
            process::exit(1);
        }
    };

    let client = Client::new();
    let github = Github::new("hubcaps/0.1.1", &client, Some(token));

    let user = "username";
    let repo = "my-library";
    let name = "ONE DOT OH";
    let body = "This is a long long body";
    let tag = "v1.0.0";

    // Build the release options for the existing tag
    let opts = ReleaseOptions::builder(tag)
        .name(name)
        .body(body)
        .build();

    let repo = github.repo(user, repo);
    let release = repo.releases();
    match release.create(&opts) {
        Ok(_) => println!("Release created"),
        Err(e) => println!("Failed to create release: {:?}", e),
    }
}
If you clone Hubcaps and put the above code in a file named releases.rs in the examples/ folder, you can run it with cargo run --example releases. You need to get a personal access token first and set it in your environment (export GITHUB_TOKEN=<your token here>).

Of course it has the repository and tag hard-coded, but this is easy to adapt.

The code was tested with Rust 1.6 and hubcaps 0.1.1.

An updated version that works with hubcaps 0.2.0 is available online.

2015 in many words and some photos

Last year I summarized my year in a long blog post, and with 2015 being nearly over here comes this year’s version.

My year in numbers

I was at 6 different conferences in 6 different cities across 5 different countries:

  1. FOSDEM in Brussels, Belgium (February)
  2. .concat() in Salzburg, Austria (March)
  3. otsconf in Dortmund, Germany (August). I was one of the organizers
  4. Redis Dev Day in London, Great Britain (October)
  5. Hungarian Web Conference 2015 in Budapest, Hungary (November). I gave a talk about Rust
  6. 32c3 in Hamburg, Germany (December)

This includes the first conference I ever organized (otsconf) and my first real conference talk (Web Conference). I hope I get more opportunities to speak next year.

GitHub says I made 1434 countable contributions (1650 if you count private repositories) across dozens of repositories. I now maintain 8 published Rust crates (9 to be really correct) as well as 5 unreleased Rust libraries. I hope to finish up the remaining crates and polish the existing ones (and maybe bump them to a stable 1.0)

I released 6 versions of hiredis, 2 of hiredis-rb, 4 of hiredis-node and 2 of hiredis-py. My plan for 2016 is to mark hiredis as stable, push out the 1.0 release and then update the others as well (considering it might be used in Rails soon, this seems to be a good idea)

I sent about 158 mails to the Redis mailing list.

I posted 196 photos on Instagram and my ~/photos/2015 directory now contains about 4600 more photos.

I wrote more than 8600 tweets, more than half of them in reply to someone. I didn’t quite reach my own goal of writing a bit more, but 12 blog posts still come down to about one per month.

My year in photos and words

After the busy ending of 2014, 2015 started just as busy. Away from university, I spent more time working, but I found enough time for conferences and a quick one-week visit to San Francisco in April.

Golden Gate Bridge Bay Bridge

University and work kept me busy the whole summer, but I still caught a bit of sun.

After finishing up the Summer semester, I left for Norway in August.

Norway Flag

I came back for a weekend to run otsconf. It was a success, but see for yourself:

Back in Norway I had the absolute best time, for example hiking up a hill and sitting on the Kjeragbolten, a thousand meters above the fjord.


My semester ended early on the first of December, so I had another chance to travel around Norway. Even without a lot of light, the Lofoten are a beautiful place to stay.


With 2015 coming to an end, other things ended too. I had to replace my old laptop, which had stayed with me for the last four years.


I really enjoyed living & traveling in Norway and I already put it on my todo list to come back in the summer. I was lucky to be part of so many different but welcoming communities. It’s a joy every time I meet these people again.


This year was so great because of all the great people I met along the way.
Thanks to the otsconf team, Ola, Shelly, Carsten, Leif and Hendrik.
Again thanks to my employer rrbone and my boss Dominik for giving me opportunities to learn and work.
Thanks to the new friends I made while living in Norway. I hope we manage to meet up again soon. Knowing people everywhere is great when you travel, and I’m very grateful to all the people who hosted me, like Lotte in Hamburg and Tobias in Tromsø. And thanks to all those I didn’t name explicitly, but who made this year so much fun.

2016 won’t start any less busy. With my next university semester only starting in April, I will make the best of that time. The next trip starts in less than five days.


Redis Dev Day London 2015

Last Monday the Redis Dev Day took place in London, followed by a small unconference on Tuesday. The Redis Dev Day is a gathering of the people involved in Redis development; Redis creator Salvatore as well as developers and engineers from several companies are there to discuss the future development of Redis.

Thanks to Rackspace and especially Nikki I was able to attend as well.

The Dev Day itself was packed with proposals and interesting ideas about improvements and new features for Redis. In the following I try to sum up some of them, listed by relevance as I see it (most relevant first).

NoNoSQL for Redis

Salvatore himself proposed this one: native indexing in Redis. He recently published an article on indexing based on sorted sets. While this method is manual, it could very well be hidden behind a nice client interface (and indeed there are some out there, I just can’t find a good example). But having it right inside Redis might be more memory-efficient and faster, avoid transactions and be easier to use. Salvatore proposed new commands for that, for example to select based on a previously defined index:

IDXSELECT myindex IFI FIELDS $3 WHERE $1 == 56 and $2 >= 10.00 AND $2 <= 30.00

None of this is final yet; there are a lot of things to get right before it can be implemented. For example, it’s not enough to provide commands for selecting based on an index; adding, updating and removing index entries is necessary as well. More in-depth discussions happened the next day, prior to the unconf.

Even though this kinda goes against the current idea of Redis – provide the basic tools with a simple API and not much more – there is the possibility to implement it right and make it as usable as Redis is right now.

This proposal needs more design effort to get right (both on the exposed API and internal design).
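The manual sorted-set indexing the article describes can be sketched in plain Rust, with a BTreeSet standing in for a Redis sorted set. Members are indexed under a numeric score (say, a price field), and a range scan over the index is the moral equivalent of ZRANGEBYSCORE. All names here are invented for illustration:

```rust
// Illustrative only: the sorted-set indexing pattern, with a BTreeSet
// of (score, member) pairs playing the role of a Redis sorted set.
use std::collections::BTreeSet;

// Equivalent of: ZRANGEBYSCORE myindex <min> <max>
fn range_by_score<'a>(index: &BTreeSet<(u32, &'a str)>, min: u32, max: u32) -> Vec<&'a str> {
    index
        .iter()
        .filter(|&&(score, _)| score >= min && score <= max)
        .map(|&(_, member)| member)
        .collect()
}

fn main() {
    let mut index = BTreeSet::new();
    // Equivalent of: ZADD myindex <price> <item-id>
    index.insert((1000, "item:1"));
    index.insert((1999, "item:2"));
    index.insert((3500, "item:3"));

    // "WHERE price BETWEEN 1000 AND 3000" answered from the index
    println!("{:?}", range_by_score(&index, 1000, 3000));
}
```

The point of the proposal is that Redis itself could maintain such an index and answer the range query natively, instead of every client hand-rolling ZADD/ZRANGEBYSCORE bookkeeping like this.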

Redis as a Cloud Native

Bill – yes, the real one – has always been a heavy user of Sentinel and thus has the most insight into what works and what doesn’t. And in fact, one big area where Redis still does not work in a way anyone can be satisfied with is inside a Docker container. Because of how Sentinel (or Cluster) announces itself (or its monitored instances) and the way Docker remaps ports, it is currently hardly possible to run it inside a container without unusual configuration (like --net=host).

This needs improvements, like making it possible to specify the announce address and port for all running modes. Another thing that should be doable is configuration replication across nodes in a pod or Cluster, which could easily be handled by a new command. Instead of replicating all configuration automatically, it would be triggered by an admin, making it easy to replicate only selected configuration options.

Both things seem necessary and not too hard.

Other proposals include:


Since the introduction of Lua scripting in Redis, more and more people use it as a way to abstract logic away behind a single (atomic) call. Just look at what is possible: implementing quadtrees in Lua inside Redis.

Because of a security issue, access to the debug feature in Redis was disabled. This also breaks some of the available options to properly debug Lua scripts. Debug functionality is needed once you go this route, so bringing it back eventually is a good idea, maybe finally closing some very old issues.

Command composition

For some commands we have STORE options (SORT has it as an option; SINTERSTORE and others are their own commands).

A more general form like STORE dest SINTER keyA keyB could make some users happy. The current code base doesn’t support that in a generic way, but it’s not impossible to change that. This might need a bit more design effort to be applied to all data types though.


Every time Redis gets discussed, the issue of modularity comes up. Most of the time I am a fan of making components reusable, modularizing them where possible and abstracting away the hard stuff.

Redis is different here. A lot of the stuff in Redis interacts with each other and there is hardly a clear cut to make.

Should all underlying data type implementations be extracted? They would surely be useful elsewhere, but then they wouldn’t benefit from the shortcuts Redis takes.

Should the IO be completely separated from parsing and dispatching the commands? Sounds useful for sure, especially now that the code base is used in Disque as well. But again, the coupling allows for some shortcuts.

Should hiredis be integrated and become part of the project? No way; hiredis is also a stand-alone client used by many others. Keeping it in-tree would make it harder to develop on its own.

One thing we will do for sure is unify the code base again. The in-tree hiredis is currently not the same as the stand-alone one, partly due to the updated sds (the string implementation) and partly because some bugs were fixed in the stand-alone project that don’t affect Redis (I hope so).