From time to time I try to write a piece of code or port some existing library or application just for fun.
So a while back in June I had some free time again and I came across signify.
I ported it to rust: signify-rs
signify is a small command line utility to create Ed25519 signatures of files.
It was developed to cryptographically sign and verify OpenBSD releases. Read “Securing OpenBSD From Us To You” for more details.
Now all you need to create signatures and verify them is a private & a public key.
Both parts are super short, so it is no problem to embed or print them.
This is what a public key would look like:
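For illustration, a signify public-key file has roughly this shape; the second line is the base64-encoded key, and what you see below is a placeholder, not a real key:

```
untrusted comment: signify public key
RW<base64-encoded key data>
```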
(Warning: do NOT use the above keys for anything! I included them for demonstration only.)
signify adds an additional comment to each key file, but the comments are not used for any verification (besides making sure they are actually in the file).
Whereas raw Ed25519 public keys are just 32 bytes, the keys above already include some additional information that signify uses to create and verify signatures. The private key can also be protected by a passphrase.
When I started porting this small application I knew nothing about Ed25519 or signify.
I more or less translated the existing C code into Rust.
Back then, I used rust-crypto, a pure Rust implementation of various common cryptographic algorithms.
It provided all I needed: Ed25519 key generation, signing & verification and bcrypt for the passphrase handling.
In just one day I had a working application and less than 2 weeks later I also implemented proper passphrase-protection.
In August then the most promising Rust crypto library, *ring*, was released on crates.io as well (before that it could only be used as a git dependency).
I had used *ring* before for nobsign, and Brian Smith, the author of *ring*, had already helped out with code review and with using the API properly.
So I was not too surprised when Brian reached out to me asking if I would be willing to port signify-rs over to *ring* as well.
I was (however, it was shortly before RustFest, so I couldn’t dedicate much time to it).
I took a look at the *ring* documentation and immediately realised I had a completely wrong understanding of Ed25519.
Whereas signify stores the public key as 32 bytes and the private key as 64 bytes,
*ring* represents both keys as only 32 bytes each.
With feedback from Brian I realised that the longer private key as used by signify
is actually just the (real) 32-byte private key with the 32-byte public key appended, resulting in 64 bytes total.
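In code, the relationship between the two key formats can be sketched like this (a minimal sketch; the function and variable names are mine, not signify's or *ring*'s):

```rust
// A signify secret key holds the real 32-byte Ed25519 private key
// followed by a copy of the 32-byte public key: 64 bytes in total.
fn split_signify_secret(sk: &[u8; 64]) -> (&[u8], &[u8]) {
    sk.split_at(32)
}

fn main() {
    // Dummy key material, for demonstration only.
    let mut sk = [0u8; 64];
    for (i, byte) in sk.iter_mut().enumerate() {
        *byte = i as u8;
    }
    let (private, public) = split_signify_secret(&sk);
    assert_eq!(private.len(), 32);
    assert_eq!(public.len(), 32);
    // The public half starts right where the private half ends.
    assert_eq!(public[0], 32);
    println!("private: {} bytes, public: {} bytes", private.len(), public.len());
}
```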
Equipped with this information (and after the first RustFest day was over), it was easy to port over signify-rs to *ring*.
signify-rs still depends on rust-crypto though, as it provides the necessary bcrypt_pbkdf for encrypting the private key with a user-chosen passphrase. *ring* does not provide bcrypt and probably won’t do so anytime soon.
If anyone wants to implement a really good bcrypt crate, please contact me or Brian for feedback.
First you need to install it; do so with cargo:

```
cargo install signify
```
Generate a key pair:

```
signify -G -p public-key -s secret-key
```
This will ask you for a passphrase to protect the secret key. Remember that.
Now you can sign a file:

```
signify -S -s secret-key -m README.md
```
This will create README.md.sig containing the signature.
To verify it:

```
signify -V -p public-key -m README.md
```
If it prints Signature Verified, everything went well. Otherwise it will show an error.
This is the signature over the README.md, created with the above private key:

```
untrusted comment: signature from signify secret key
```
If you put that signature into one file and the public key from above into another, you can verify it!
I still have todos left for this project.
First, I want this to be fully compatible with the original implementation.
The original implementation can embed signatures into the signed file and also verify an embedded signature. I want to add that.
I also want to distribute pre-compiled binaries for various platforms (hello, rust-everywhere)
and provide proper Ed25519 signatures on all those releases.
Last Monday I attended the Rust Sthlm Meetup and gave a talk about using Rust for web development.
About 60 people attended, had pizza and listened to the two talks of the evening.
I started off with my talk Rust from the Back to the Front, giving an overview of the ecosystem around all things related to web programming in Rust.
This was an updated talk of the one I gave in Budapest last year (video online).
I had some technical difficulties this time in the beginning (yeah, computers…), but otherwise the talk went well.
People showed interest in the presented topics.
Sadly I only briefly touched the new way of doing asynchronous I/O using futures and tokio.
I definitely need to look deeper into this topic, as I think it can bring huge improvements to existing web frameworks and libraries as well.
This has to wait a bit though, as I will first dive deeper into Emscripten (and present that next week in Cologne and in Pittsburgh in October).
My slides are online and I will try to collect more resources in a Gist.
The second talk that evening was by Kristoffer Grönlund, giving us a quick introduction to some of Rust’s features,
followed by an overview of his work trying to get Rust into the openSUSE package repositories.
Turns out it is not that easy, especially if everything has to be built from source and offline, but at least there are some improvements
that might help make this easier.
His slides are online as well.
Even after the talks some of the people stuck around and we discussed several more things around Rust, how it is still evolving,
fast moving and a bit unstable from time to time.
All in all I had a great time, talked to a number of people about different topics, and I hope I could convince some of them to actually try Rust.
With such a large and interested tech community in Stockholm, I’m sure the Meetup will live on.
Today, right after finishing my only lecture of the day, I rented a longboard at a local skate shop
and then took the bus out of the city.
I went out to Kornelimünster, a small district of Aachen, about 10 km outside of the city.
Over the last few days it rained half the day, but today it's sunny and really warm.
The route goes mostly downhill from Kornelimünster, so I did not need much pushing and could just let it roll.
I recorded the route with an app.
The results: 10.9 km in 53 minutes. Top speed: 31.5 km/h; 12.4 km/h on average.
I was much quicker than I thought, so I got some ice cream in the city of Aachen before heading home.
… because mine didn’t. At least not correctly in all cases.
I’m talking about my Rust library lzf-rs,
a port of the small compression library LibLZF.
It started as a wrapper around the C library, but I rewrote it in Rust for v0.3.
I now found three major bugs and I want to tell you how (tl;dr: Bug fixes and tests: PR #1).
For a university paper I’m currently looking into different methods for automatic test generation,
such as symbolic execution, fuzzing and random test generation.
One of the popular methods is property-based testing, with QuickCheck being the best known application of this method.
QuickCheck started as a Haskell library (see the original paper),
but has since been ported to several other languages, including C (see theft)
and, of course, Rust: QuickCheck.
I had known about this library for some time, but had never used it.
So today I decided to use it for my lzf crate.
Let me walk you through the process of using it.
First, you need to add the dependency and load it in your code.
Add it to your Cargo.toml:
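If you want to follow along, this is roughly what that looks like (the version number is an assumption; per the update at the end of this post, a dev-dependency is the cleanest way):

```toml
[dev-dependencies]
quickcheck = "0.2"
```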
Add this to your src/lib.rs:
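A minimal sketch of the declaration (the exact form depends on how you added the dependency):

```rust
#[cfg(test)]
extern crate quickcheck;
```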
Next, you need to decide what property to test.
As the compression library needs data to compress and valid data to decompress,
I decided the easiest way to go through everything would be to test the round trip:
Compress some random input, then decompress the compressed data and check that it matches the initial input.
This should hold for all inputs that can be compressed.
Everything that cannot be compressed can be ignored at this point (a first test allowing all input turned up too many false positives).
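The property looks roughly like this. To keep the sketch self-contained, a toy run-length codec stands in for lzf, and a fixed input list stands in for QuickCheck's random generation; with the real crates you would hand the `roundtrip` function to quickcheck::quickcheck instead:

```rust
// Toy "compressor": (count, byte) pairs. It rejects empty input,
// standing in for a compressor that cannot handle some data.
fn compress(data: &[u8]) -> Result<Vec<u8>, ()> {
    if data.is_empty() {
        return Err(());
    }
    let mut out = Vec::new();
    let mut i = 0;
    while i < data.len() {
        let byte = data[i];
        let mut count = 1u8;
        while i + (count as usize) < data.len()
            && data[i + count as usize] == byte
            && count < 255
        {
            count += 1;
        }
        out.push(count);
        out.push(byte);
        i += count as usize;
    }
    Ok(out)
}

fn decompress(data: &[u8]) -> Result<Vec<u8>, ()> {
    if data.len() % 2 != 0 {
        return Err(());
    }
    let mut out = Vec::new();
    for pair in data.chunks(2) {
        for _ in 0..pair[0] {
            out.push(pair[1]);
        }
    }
    Ok(out)
}

// The property: whatever compresses must decompress back to the original.
fn roundtrip(data: &[u8]) -> bool {
    match compress(data) {
        Ok(compressed) => decompress(&compressed) == Ok(data.to_vec()),
        Err(_) => true, // inputs that cannot be compressed are discarded
    }
}

fn main() {
    // A tiny stand-in for QuickCheck's random input generation.
    let inputs: Vec<Vec<u8>> = vec![
        vec![],
        vec![0],
        vec![1, 1, 1, 2, 2, 3],
        (0..200).map(|i| (i % 7) as u8).collect(),
    ];
    for input in &inputs {
        assert!(roundtrip(input), "round-trip failed for {:?}", input);
    }
    println!("all round-trips passed");
}
```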
```
$ cargo test
running 13 tests
thread 'safe' panicked at 'index out of bounds: the len is 67 but the index is 67', ../src/libcollections/vec.rs:1187
test quickcheck_test::qc_roundtrip ... FAILED

---- quickcheck_test::qc_roundtrip stdout ----
thread 'quickcheck_test::qc_roundtrip' panicked at '[quickcheck] TEST FAILED (runtime error). Arguments: ([0, 0, 0, 0, 1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 1, 1, 0, 1, 2, 0, 1, 3, 0, 1, 4, 0, 0, 5, 0, 0, 6, 0, 0, 7, 0, 0, 8, 0, 0, 9, 0, 0, 10, 0, 0, 11, 0, 1, 5, 0, 1, 6, 0, 1, 7, 0, 1, 8, 0, 1, 9, 0, 1, 10, 0, 0])
Error: "index out of bounds: the len is 67 but the index is 67"', /home/jer/.cargo/registry/src/github.com-88ac128001ac3a9a/quickcheck-0.2.27/src/tester.rs:116
```
It would be okay to return an error, but out-of-bounds indexing (and thus panicking) is a clear bug in the library.
Luckily, QuickCheck automatically collects the input the test failed on, tries to shrink it down to a minimal example and then displays it.
I figured this bug is happening in the compress step, so I added an explicit test case for that:
Taking a look at the full stack trace (run RUST_BACKTRACE=1 cargo test) led to the exact location of the bug.
Turns out I was checking the bounds on the wrong variable.
I fixed it in 88242ffe.
After this fix, I re-ran the QuickCheck tests and they discovered a second bug (it led to another out-of-bounds access), which I fixed in 5b2e8150.
I found a third bug, which I (hopefully) fixed, but I don’t fully understand how it’s happening yet.
In addition to the above, I added QuickCheck tests comparing the output of the Rust functions to that of the C library.
The full changeset is in PR #1 (tests currently failing because of a broken Clippy on the newest nightly).
Now quick, check your own code!
Update 2016-05-13: QuickCheck can be added as a dev dependency, instead of making it optional and activating it with a feature. Additionally it’s necessary to use names from the crate (or specify the full path). Thanks to RustMeUp and burntsushi in the reddit thread.
One of the strengths of the Rust ecosystem is its package manager Cargo and the package registry crates.io.
Pulling in a dependency is as easy as adding it to your project's Cargo.toml and running cargo build.
Releasing your own project is nearly as easy. Make sure everything works, set a version number in your Cargo.toml and run cargo publish.
It will package the code and upload it.
Of course that’s not the whole story.
For a proper release that people will like to use you want to follow some good practices:
Have tests and make sure they are green. Most people already use Travis CI. The travis-cargo project makes it easy to test all channels (stable, beta, nightly, maybe a specific version), run documentation tests and upload coverage info and documentation.
Keep a changelog. Your software is not done with the first release. It changes, bugs get fixed, new features get introduced. Keeping a changelog helps users to understand what changed from version to version.
Pick a version number. This is not nearly as easy as it sounds. Your project's version number carries a lot of information, often more than we'd like. The Rust ecosystem recommends strictly following semver, but even that has ambiguities and requires a lot of thinking to do the right thing.
Release on the right platforms. Even though crates.io is the package system you want your project in, a GitHub release is a nice-to-have. Maybe your project is an application and you want to distribute pre-compiled binaries.
At the moment a lot of people process each of these steps manually.
Maybe they have a few scripts lying around that help in reducing the number of errors that can happen.
All in all, there's still too much manual work required.
It does not have to be that way.
Stephan Bönnemann built semantic-release for the npm ecosystem a while ago.
It allows for fully automated package publishing by relying on a few conventions and a lot of automation.
I wanted something similar for the Rust ecosystem. That's why Jan aka @neinasaservice and I sat down at last year's 32c3 and started hacking on a tool to achieve that.
It took us a while to get something working, but now I can present to you:
semantic-rs gives you fully automatic crate publishing.
It runs after your tests are finished, analyzes the latest commits, picks out a version number, creates a commit and git tag, creates a release on GitHub and publishes your crate on crates.io.
All you have to do is follow the Angular.js commit message conventions, which are really easy.
Your commit message consists of a type, an optional scope, a subject and an optional body.
feat: A new feature
fix: A bug fix
docs: Documentation-only changes
style: Changes that do not affect the meaning of the code (white-space, formatting, …)
refactor: A code change that neither fixes a bug nor adds a feature
perf: A code change that improves performance
test: Adding missing tests
chore: Changes to the build process or auxiliary tools/libraries/documentation
The next version number is decided based on the types of commits since the last release.
A feat will trigger a minor version bump, a fix a patch version bump.
The other types don’t cause a release.
However, should you make a breaking change, you need to document it in the commit message as well.
Include BREAKING CHANGE in the body of the commit message and add information about what changed
and how to adapt existing code so it works again (if possible).
This will then trigger a major version bump.
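A minimal sketch of that decision logic (my own illustration, not semantic-rs's actual code; a real implementation would parse the `type(scope): subject` format properly instead of using a prefix check):

```rust
// Ordered so that a later variant means a "bigger" bump.
#[derive(Debug, PartialEq, PartialOrd)]
enum Bump { None, Patch, Minor, Major }

fn bump_for_commit(message: &str) -> Bump {
    // A BREAKING CHANGE note anywhere in the body forces a major bump.
    if message.contains("BREAKING CHANGE") {
        return Bump::Major;
    }
    let first_line = message.lines().next().unwrap_or("");
    if first_line.starts_with("feat") {
        Bump::Minor
    } else if first_line.starts_with("fix") {
        Bump::Patch
    } else {
        Bump::None
    }
}

// The release bump is the biggest bump among all commits since the last tag.
fn bump_for_commits(messages: &[&str]) -> Bump {
    let mut max = Bump::None;
    for m in messages {
        let b = bump_for_commit(m);
        if b > max {
            max = b;
        }
    }
    max
}

fn main() {
    let commits = [
        "fix(parser): handle empty input",
        "docs: update README",
        "feat: add --verbose flag",
    ];
    assert_eq!(bump_for_commits(&commits), Bump::Minor);
    println!("next bump: {:?}", bump_for_commits(&commits));
}
```

semantic-rs itself reads the commits from git; here they are just string literals for illustration.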
The Happy Path.
If everything is configured properly and the tests succeed, semantic-rs will correctly pick a version,
add changes to a Changelog.md, create a release commit, tag it, create a GitHub release and publish on crates.io.
The test-project crate is published completely automatically now.
semantic-rs already has some safety features integrated.
It will only run when the build is on the master branch (or the branch you configure),
and it will make sure that it only runs once on the build leader (which is always the first job in your build matrix).
It also waits for the other jobs to finish and succeed before trying to do a release.
In case of problems, semantic-rs will just bail out.
That might leave you with changes pushed to GitHub but not published on crates.io (at worst),
or with no new release and no other visible changes (at best).
We’re working hard on making this safer to use with better error reporting.
Installing semantic-rs from source each time your tests run adds significant overhead to the build time, as it must be compiled again and again.
In the future we will provide binary releases that you can simply drop into Travis and it will just work.
It’s not released on crates.io yet, because we’re using a dependency from GitHub. That one should soon be fixed once they push out a release as well.
Now that we got that out of the way, let’s see how to actually use it.
How to use it
Right now, using semantic-rs is not as straightforward as it could be; we're working on that.
To run it on Travis you have to follow these manual steps.
The first job of your build matrix will be used to do the publish, so make sure it is a full build.
Make it your stable build to be on the safe side.
Your .travis.yml should contain this:
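A sketch of what that could look like; the exact settings are assumptions on my part, so check the semantic-rs README for the authoritative version:

```yaml
language: rust
rust:
  - stable   # the first matrix entry is the build leader that publishes
  - beta
  - nightly
```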
Next, install semantic-rs on Travis by adding this to your .travis.yml:
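Again only a sketch; the install command is an assumption until pre-built binaries are available:

```yaml
install:
  - cargo install --git https://github.com/semantic-rs/semantic-rs

after_success:
  - semantic-rs
```

You will also need GitHub and crates.io tokens configured as secure environment variables so semantic-rs can create the release and publish.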
First we need to make it safer and easier to integrate into a project's workflow.
We also want to look into how we can determine more information about a project to assist the developers.
Ideas we have include running integration tests from the previous version to detect breaking changes
and statically analyzing code changes to determine their impact. Rust’s RFC 1105 already defines the impact certain changes should have. Maybe it is possible to automatically check some of these things.
We would be happy to hear from you. If semantic-rs breaks or otherwise does not fit into your workflow, let us know. Open an issue to discuss this.
If you want to use it and have more ideas what is necessary or could be improved, talk to us!