Today I Learned

A collection of snippets, thoughts and notes about stuff I learned.


Everything is available in a Git repository at


So far there are 43 TILs.


Set the date in the emulator

To set the date & time of the running system:

adb shell su root date 061604052021.00

The datetime is in the format MMDDhhmmCCYY.ss (month, day, hour, minute, year, seconds).


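As a cross-check, the same stamp can be produced with Python's strftime (the concrete datetime decodes the example above):

```python
from datetime import datetime

# June 16, 2021, 04:05:00 rendered as MMDDhhmmCCYY.ss
stamp = datetime(2021, 6, 16, 4, 5, 0).strftime("%m%d%H%M%Y.%S")
print(stamp)  # 061604052021.00
```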

WebAssembly in BigQuery

So you can run WebAssembly code as part of a BigQuery SQL query.

Rust code:

#[no_mangle]
pub extern "C" fn sum(a: i32, b: i32) -> i32 {
    a + b
}

Compiled using:

cargo build --target wasm32-unknown-unknown --release

with these compile settings in your Cargo.toml:

[lib]
crate-type = ["cdylib"]

[profile.release]
opt-level = "s"
debug = false
lto = true

Turn the Wasm file into a C-like array:

xxd -i target/wasm32-unknown-unknown/release/add.wasm
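For reference, a rough Python equivalent of the byte formatting that xxd -i produces (the helper function is mine):

```python
def to_c_array(data: bytes, per_line: int = 12) -> str:
    """Format bytes the way `xxd -i` does: 0x00, 0x61, 0x73, ..."""
    lines = []
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk))
    return ",\n".join(lines)

# The Wasm magic header: \0asm
print(to_c_array(b"\x00asm"))  # prints "  0x00, 0x61, 0x73, 0x6d"
```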

Then drop the output into the below query:

async function main() {
    const memory = new WebAssembly.Memory({ initial: 256, maximum: 256 });
    const env = {
        'abortStackOverflow': _ => { throw new Error('overflow'); },
        'table': new WebAssembly.Table({ initial: 0, maximum: 0, element: 'anyfunc' }),
        'tableBase': 0,
        'memory': memory,
        'memoryBase': 1024,
        'STACKTOP': 0,
        'STACK_MAX': memory.buffer.byteLength,
    };
    const imports = { env };
    const bytes = new Uint8Array([
      0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01, 0x60,
      0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x05, 0x03, 0x01,
      0x00, 0x10, 0x06, 0x19, 0x03, 0x7f, 0x01, 0x41, 0x80, 0x80, 0xc0, 0x00,
      0x0b, 0x7f, 0x00, 0x41, 0x80, 0x80, 0xc0, 0x00, 0x0b, 0x7f, 0x00, 0x41,
      0x80, 0x80, 0xc0, 0x00, 0x0b, 0x07, 0x2b, 0x04, 0x06, 0x6d, 0x65, 0x6d,
      0x6f, 0x72, 0x79, 0x02, 0x00, 0x03, 0x73, 0x75, 0x6d, 0x00, 0x00, 0x0a,
      0x5f, 0x5f, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x65, 0x6e, 0x64, 0x03, 0x01,
      0x0b, 0x5f, 0x5f, 0x68, 0x65, 0x61, 0x70, 0x5f, 0x62, 0x61, 0x73, 0x65,
      0x03, 0x02, 0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x01, 0x20, 0x00, 0x6a,
      0x0b, 0x00, 0x0f, 0x0e, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x5f, 0x61,
      0x72, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x00, 0x21, 0x04, 0x6e, 0x61, 0x6d,
      0x65, 0x01, 0x06, 0x01, 0x00, 0x03, 0x73, 0x75, 0x6d, 0x07, 0x12, 0x01,
      0x00, 0x0f, 0x5f, 0x5f, 0x73, 0x74, 0x61, 0x63, 0x6b, 0x5f, 0x70, 0x6f,
      0x69, 0x6e, 0x74, 0x65, 0x72, 0x00, 0x4d, 0x09, 0x70, 0x72, 0x6f, 0x64,
      0x75, 0x63, 0x65, 0x72, 0x73, 0x02, 0x08, 0x6c, 0x61, 0x6e, 0x67, 0x75,
      0x61, 0x67, 0x65, 0x01, 0x04, 0x52, 0x75, 0x73, 0x74, 0x00, 0x0c, 0x70,
      0x72, 0x6f, 0x63, 0x65, 0x73, 0x73, 0x65, 0x64, 0x2d, 0x62, 0x79, 0x01,
      0x05, 0x72, 0x75, 0x73, 0x74, 0x63, 0x1d, 0x31, 0x2e, 0x35, 0x32, 0x2e,
      0x31, 0x20, 0x28, 0x39, 0x62, 0x63, 0x38, 0x63, 0x34, 0x32, 0x62, 0x62,
      0x20, 0x32, 0x30, 0x32, 0x31, 0x2d, 0x30, 0x35, 0x2d, 0x30, 0x39, 0x29
    ]);
    return WebAssembly.instantiate(bytes, imports).then(wa => {
        const exports = wa.instance.exports;
        const sum = exports.sum;
        return sum(x, y);
    });
}
return main();

WITH numbers AS
  (SELECT 1 AS x, 5 AS y
  UNION ALL SELECT 2 AS x, 10 AS y
  UNION ALL SELECT 3 AS x, 15 AS y)
SELECT x, y, sumInputs(x, y) AS sum
FROM numbers;
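The query references sumInputs, a JavaScript UDF wrapping the code above. A sketch of the definition (the function name comes from the query; the parameter types are assumptions):

```sql
CREATE TEMP FUNCTION sumInputs(x FLOAT64, y FLOAT64)
RETURNS FLOAT64
LANGUAGE js AS '''
// Paste the JavaScript from above here,
// with the xxd output inside the Uint8Array.
''';
```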



The date of Easter

def easter(year):
    y = year
    c = y//100
    n = y - 19*(y//19)
    k = (c - 17)//25
    i = c - c//4 - (c - k)//3 + 19*n + 15
    i = i - 30*(i//30)
    i = i - (i//28)*(1 - (i//28)*(29//(i + 1))*((21 - n)//11))
    j = y + y//4 + i + 2 - c + c//4
    j = j - 7*(j//7)
    l = i - j
    m = 3 + (l + 40)//44
    d = l + 28 - 31*(m//4)

    return (m, d)

print(easter(2022))  # (4, 17)

Based on the wonderful explanation in §3. Calendrical. of the Inform7 documentation


Docker on a remote host

docker context create remote --docker "host=ssh://hostname"
docker context use remote

Is the docker daemon running?

If the Docker daemon is not running on the remote host, you might see this error message:

Cannot connect to the Docker daemon at Is the docker daemon running?

The host is of course nonsense. The solution: Start the Docker daemon on the remote host and it should work.

Run a shell with a Docker image

docker run -t -i --rm ubuntu:20.04 bash

Changing the platform, e.g. to use x86_64 when running on an M1 MacBook:

docker run -t -i --rm --platform linux/amd64 ubuntu:20.04 bash

Override the entrypoint:

docker run -t -i --rm --entrypoint /bin/bash ubuntu:20.04

SSH into the Docker VM on macOS

Run socat first:

socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer

This will print some lines, including the PTY device opened, like

PTY is /dev/ttys029

Use that to connect using screen:

screen /dev/ttys029


From build IDs to push log

via @chutten:

Found a regression? Here's how to get a pushlog:

  1. You have the build dates and you're gonna need revisions. Find the build before the regression and the build after the regression in this list: You want to record the Revision column someplace.

    May 10 final f44e64a61ed1
    May 11 final 61a83cc0b74b
  2. Put the revisions in this template:{}&tochange={}


Fixup commits

Fixup commits build on top of an already existing commit and can later be squashed into it, e.g. to fix typos or formatting.

git commit comes with builtin support for that: git commit --fixup=<commit>, where <commit> is the existing commit to be modified. See the documentation for details.

See also git helpers.

Git helpers


git commit --fixup, but automatic

See also Fixup commits.


A handy tool for doing efficient in-memory commit rebases & fixups


Last modification date of a file

Shows the date of the last commit that modified this file:

git log -1 --pretty="format:%ci" path/to/file

See PRETTY FORMATS in git-log(1) for all available formats.

Rebase dependent branches with --update-refs

To automatically adjust all intermediary branches of a larger patch stack, rebase with --update-refs from the latest commit:

git rebase -i main --autosquash --update-refs

via git 2.38 release notes


GitHub Webhooks

GitHub can send webhooks to a configured server on events. By default this is done on any push event to the repository.

GitHub attaches an HMAC signature using the provided secret, which allows the receiver to verify that the content is really coming from GitHub. Documentation about this is available in Securing your webhooks.

In Rust one can verify the signature like this:

use hex::FromHex;
use hmac::{Hmac, Mac, NewMac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

fn authenticate(key: &str, content: &[u8], signature: &str) -> bool {
    const SIG_PREFIX: &str = "sha256=";
    let sans_prefix = signature[SIG_PREFIX.len()..].as_bytes();
    match Vec::from_hex(sans_prefix) {
        Ok(sigbytes) => {
            let mut mac =
                HmacSha256::new_from_slice(key.as_bytes()).expect("HMAC can take key of any size");
            mac.update(content);
            mac.verify(&sigbytes).is_ok()
        }
        _ => false,
    }
}
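For comparison, the same verification in Python using only the standard library (the function name mirrors the Rust version; the sha256=&lt;hex&gt; prefix is GitHub's header format):

```python
import hashlib
import hmac

def authenticate(key: str, content: bytes, signature: str) -> bool:
    # GitHub sends the HMAC as "sha256=<hex digest>" in the X-Hub-Signature-256 header
    expected = "sha256=" + hmac.new(key.encode(), content, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking timing information
    return hmac.compare_digest(expected, signature)

sig = "sha256=" + hmac.new(b"secret", b"payload", hashlib.sha256).hexdigest()
print(authenticate("secret", b"payload", sig))  # True
```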


Run tests using Gradle

Run a single test:

./gradlew testDebugUnitTest --tests TestClassNameHere.testFunctionHere

Rerun tests when up-to-date



To force a rerun even when Gradle considers the tests up-to-date, set test.outputs.upToDateWhen {false} in the config.


iOS log output from device or simulator

On macOS: Use the Console tool. Alternatively: idevicesyslog from libimobiledevice.


brew install libimobiledevice


idevicesyslog --process Client

Client stands in for the process name (Client is the process name of Firefox for iOS).


Trigger notifications in the simulator

The notification goes into a notification.apns file:

{
  "aps": {
    "alert": {
      "title": "Push Notification",
      "subtitle": "Test Push Notifications",
      "body": "Testing Push Notifications on iOS Simulator"
    }
  }
}

Trigger the notification with:

xcrun simctl push booted notification.apns

This sends it to the given application. Your own application's bundle identifier can be used if it handles notifications.

The application's bundle identifier can also be specified in the APNS file with the "Simulator Target Bundle" key. It can be left out on the command-line in that case.

APNS files can also be dropped onto the simulator to be sent.


var vs. val - Difference


  • var: Mutable. Used to declare a mutable variable: its value can be changed multiple times.
  • val: Immutable. Used to declare a read-only variable: once a value is assigned, it can't be changed later.

val is the same as final in Java.


Running parallel tasks from make

With the combination of multiple tools, you can serve static files over HTTP and rerun a build step whenever any input file changes.

I use these tools:

  • https - static file server
  • fd - a faster find
  • entr - run arbitrary commands when files change
  • make

With this Makefile:

default:
	$(MAKE) MAKEFLAGS=--jobs=2 dev
.PHONY: default

dev: serve rerun
.PHONY: dev

build:
	# Put your build task here.
	# I generate a book using
	mdbook build
.PHONY: build

serve: build
	@echo "Served on http://localhost:8000"
	# Change to the generated build directory, then serve it.
	cd _book && http
.PHONY: serve

rerun:
	# fd respects your `.gitignore`
	fd | entr -s 'make build'
.PHONY: rerun

All it takes to continuously serve and build the project is:

make

Symbols in shared libraries

List all exported symbols of a dynamic library:

nm -gD path/to/

To look at the largest objects/functions in libxul:

readelf -sW $NIGHTLY/ | sort -k 3 -g -r | head -n 100

To look at the disassembly:

objdump -dr $OBJ | c++filt

On macOS:

otool -tV $OBJ | c++filt


List linked dynamic libraries

otool -L path/to/liblib.dylib

Check who holds SecureInput lock

Individual applications on macOS can request SecureInput mode, which disables some functionality that would otherwise allow capturing input. One can check whether SecureInput is active and which process holds the lock:

$ ioreg -l -w 0 | grep SecureInput
  |   "IOConsoleUsers" = ({"kCGSSessionOnConsoleKey"=Yes,"kSCSecuritySessionID"=100024,"kCGSSessionSecureInputPID"=123,"kCGSSessionGroupIDKey"=20,

The kCGSSessionSecureInputPID holds the PID of the process that holds the SecureInput lock. Find that process with ps:

ps aux | grep $pid


Match and act on query parameters

In order to serve different files based on query parameters, first create a map:

map $query_string $resource_name {
    ~resource=alice$ alice;
    ~resource=bob$ bob;
}

Then in your server block match the location and inside rewrite the URL the way you need it:

location = /.well-known/thing {
    root  /var/www/thing;

    if ($resource_name) {
      rewrite ^(.*)$ /$resource_name.json break;
    }

    try_files $uri =404;
}

Now if someone requests /.well-known/thing?resource=alice nginx will serve /var/www/thing/alice.json.
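To illustrate what nginx does here, the same dispatch logic sketched in Python (the helper function is mine; paths follow the example):

```python
from urllib.parse import parse_qs, urlparse

KNOWN = {"alice", "bob"}

def resolve(url: str) -> str:
    """Map ?resource=<name> to the JSON file nginx would serve."""
    query = parse_qs(urlparse(url).query)
    name = query.get("resource", [None])[0]
    if name in KNOWN:
        return f"/var/www/thing/{name}.json"
    return "404"

print(resolve("/.well-known/thing?resource=alice"))  # /var/www/thing/alice.json
```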


home-manager: Allow unfree packages

Some packages are unfree, due to their licenses, e.g. android-studio. To use them one needs to allow unfree packages.

In a home-manager flake this can be done as follows.

In one of your modules add:

{ pkgs, ... }: {
  nixpkgs = {
    config = {
      allowUnfree = true;
      allowUnfreePredicate = (_: true);
    };
  };
}

The allowUnfreePredicate is due to home-manager#2942 (I haven't actually checked that it is necessary).

Home Manager and how to use it

Configuration is located in ~/.config/home-manager

Activate configuration

home-manager switch

Update everything

cd ~/.config/home-manager
nix flake update
home-manager switch


List all available attributes of a flake

List all names of packages for a certain target architecture:

$ nix eval .#packages.x86_64-linux --apply builtins.attrNames --json

List all supported architectures of a package:

$ nix eval .#packages --apply builtins.attrNames --json

List everything from a flake:

$ nix eval . --apply builtins.attrNames --json

(via garnix: steps)

A minimal flake for a shell

{
  description = "A very basic flake";

  outputs = { self, nixpkgs, ... }:
    let
      supportedSystems = [ "aarch64-linux" "aarch64-darwin" "x86_64-darwin" "x86_64-linux" ];
      forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
    in {
      devShells = forAllSystems (system:
        let pkgs = nixpkgs.legacyPackages.${system};
        in {
          default = pkgs.mkShell {
            buildInputs = with pkgs; [ ]; # packages for the shell go here
          };
        });
    };
}

Start a shell with:

nix develop

Old way:

A shell.nix with

with import <nixpkgs> {};

pkgs.mkShell {
  nativeBuildInputs = with pkgs; [
    # packages for the shell go here
  ];
}

And run it with

nix-shell

Changes after updating home-manager

Using nvd, diff the latest 2 home-manager generations:

home-manager generations | head -n 2 | cut -d' ' -f 7 | tac | xargs nvd diff

Of course this only gets you the changes after they are installed.

This is also built-in now using nix store diff-closures:

$ nix store diff-closures /nix/store/xfy75lrmsh23hj2c8kzqr4n1cfvzh1s2-home-manager-generation /nix/store/5rdyvvk6jngd2hd44bsa14bpzxraigbi-home-manager-generation
gnumake: 4.4.1 → ∅, -1546.9 KiB
home-manager: -9.0 KiB

via Nix Manual: nix store diff-closures

Remote Builds

After reading Using Nix with Dockerfiles I wanted to understand how to use dockerTools.buildImage to build a Docker image, instead of relying on a Dockerfile.

The issue: I'm on an M1 MacBook, an aarch64 machine. Docker on this machine runs within an aarch64 Linux VM. Naively building a nix flake means it will build for aarch64 macOS, and that cannot run within the Docker container. So I needed to understand how to either cross-compile or use a remote builder.

I went for the latter, using my x86_64-linux server with nix installed as a remote builder.

I started with the test command from the Remote Builds docs, slightly modified:

nix build --impure --expr '(with import <nixpkgs> { system = "x86_64-linux"; }; runCommand "foo" {} "uname > $out")' --builders 'ssh://builder x86_64-linux' --max-jobs 0 -vvv

--max-jobs 0 will ensure it won't run any local tasks and -vvv will show all the debug output.

This starts downloading nixpkgs and stuff and then ... fail:

error: unable to start any build; either increase '--max-jobs' or enable remote builds.

Unhelpful. The -vvv was necessary to even get any understanding of what's failing. Close to the top one can see this:

ignoring the client-specified setting 'builders', because it is a restricted setting and you are not a trusted user

The docs about trusted-users say that adding users there essentially gives them root rights. So let's not do that and instead configure the builder machine in /etc/nix/machines:

ssh://builder x86_64-linux

I also needed to set the user and the SSH key:


Apparently builders = @/etc/nix/machines is the default, but if not you can set that in /etc/nix/nix.conf. After that a restart of the nix daemon will be necessary:

sudo launchctl kickstart -k system/org.nixos.nix-daemon

Re-running the nix build --impure ... will fail again:

error: unexpected end-of-file
error: builder for '/nix/store/6ji85w7v51fs3x21szvbgmx4dj0vpjqs-foo.drv' failed with exit code 1;
       last 10 log lines:
       > error: you are not privileged to build input-addressed derivations
       > debug1: Exit status 1
       For full logs, run 'nix log /nix/store/6ji85w7v51fs3x21szvbgmx4dj0vpjqs-foo.drv'.

Sounds very similar to the initial issue. This time I set trusted-users = jer in /etc/nix/nix.conf on the builder machine. Then restarted the nix daemon with:

systemctl restart nix-daemon

Now the nix build on macOS succeeds and:

$ cat result
Linux
Building Docker images for x86_64

Last but not least I can build the Docker image for x86_64 now. The full example is in github:badboy/flask-nix-example.

nix build '.#packages.x86_64-linux.dockerImage' --max-jobs 0

Then load it:

docker load < result

And finally run the container:

docker run -it --rm -p 5001:5000 --platform linux/amd64 flask-example


Replacing/Adding another cache server

Add the following to /etc/nix/nix.conf:

substituters = is the default, with priority 40. See the reference manual. Lower value means higher priority. According to that the substituters are only used when called by a trusted user or in a trusted substituter list.

The aesipp-nix-cache project is new. It's unclear from when that info is and what the current status of the project is (as of May 2023).

Update nix

nix upgrade-nix

This might need to be run as root:

sudo -i nix upgrade-nix

-i to inherit the environment and have nix actually available in the $PATH



Meta commands in psql

\l          List databases
\c          Connect to a database
\dt         List tables
\d $table   List the schema of $table


Resize disks of a VM


  1. Extend the disk in the web UI
  2. Run partprobe on the machine
  3. Run parted
     • print to show the current layout. This will ask you to fix the GPT. Say "Fix"
     • resizepart 1 -- 1 being the partition ID
     • It asks for the end. Type 100%
  4. Resize the filesystem: resize2fs /dev/vda1


Modify integer literals

Integer literals in Python refer to the same object every time they are used. One can modify those objects:

from sys import getsizeof
from ctypes import POINTER, c_void_p, c_char, cast

def read_int(obj: int, vv=True) -> bytes:
    size = getsizeof(obj)
    ptr = cast(c_void_p(id(obj)), POINTER(c_char))
    buf = ptr[0:size]
    if vv:
        print(f"int obj @ {hex(id(obj))}: {buf.hex(' ')}")
    return buf

def write_int(dst: int, src: int):
    raw_src = read_int(src, False)
    dst_ptr = cast(c_void_p(id(dst)), POINTER(c_char))

    for (idx, c) in enumerate(raw_src):
        dst_ptr[idx] = c

write_int(1, 2)

a = 1
b = 2
print(a + b)  # prints 4: the cached object for 1 now holds the value 2


Deduplicate a list and keep the order

duplicated_list = [1,1,2,1,3,4,1,2,3,4]
ordered = list(dict.fromkeys(duplicated_list)) # => [1, 2, 3, 4]
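This relies on dict preserving insertion order (guaranteed since Python 3.7) and fromkeys keeping only the first occurrence of each key; it works for any hashable elements:

```python
names = ["bob", "alice", "bob", "carol", "alice"]

# dict.fromkeys records each element once, in first-seen order
unique_names = list(dict.fromkeys(names))
print(unique_names)  # ['bob', 'alice', 'carol']
```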

pip - Install from Git

To install a Python package from Git instead of a PyPI-released version do this:

pip install git+ssh://

See also: Useful tricks with pip install URL and GitHub

Strip Markdown syntax

In order to strip Markdown syntax and leave only the plain text output one can patch the Markdown parser:

from markdown import Markdown
from io import StringIO

def unmark_element(element, stream=None):
    if stream is None:
        stream = StringIO()
    if element.text:
        stream.write(element.text)
    for sub in element:
        unmark_element(sub, stream)
    if element.tail:
        stream.write(element.tail)
    return stream.getvalue()

Markdown.output_formats["plain"] = unmark_element

__md = Markdown(output_format="plain")
__md.stripTopLevelTags = False

def strip_markdown(text):
    return __md.convert(text)

Then call the strip_markdown function:

text = """
# Hello *World*!

[Today I learned](...)
"""

print(strip_markdown(text))

This results in:

Hello World!
Today I learned



No-op allocator

use std::alloc::{GlobalAlloc, Layout};

#[global_allocator]
static ALLOCATOR: NoopAlloc = NoopAlloc;

struct NoopAlloc;

unsafe impl GlobalAlloc for NoopAlloc {
    unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
        std::ptr::null_mut()
    }
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}

#[no_mangle]
pub extern "C" fn add(left: i32, right: i32) -> i32 {
    left + right
}

fn main() {}

Rust Playground.

This also reduces the generated wat (WebAssembly text format) to a short and readable output:

(module
  (type $t0 (func (param i32 i32) (result i32)))
  (func $add (export "add") (type $t0) (param $p0 i32) (param $p1 i32) (result i32)
    (i32.add
      (local.get $p1)
      (local.get $p0)))
  (func $main (export "main") (type $t0) (param $p0 i32) (param $p1 i32) (result i32)
    (unreachable))
  (memory $memory (export "memory") 16)
  (global $__data_end (export "__data_end") i32 (i32.const 1048576))
  (global $__heap_base (export "__heap_base") i32 (i32.const 1048576)))

Not-equal types

// requires nightly!
#![feature(auto_traits, negative_impls)]

use std::marker::PhantomData;

auto trait NotSame {}

impl<A> !NotSame for (A, A) {}

struct Is<S, T>(PhantomData<(S, T)>);

impl<S, T> Is<S, T> where (S, T): NotSame {
  fn absurd(&self) {}
}

fn main() {
  let t: Is<u32, u32> = Is(PhantomData);
  let z: Is<u32, i32> = Is(PhantomData);

  z.absurd();    // compiles: (u32, i32) implements NotSame
  // t.absurd(); // fails: (u32, u32) does not implement NotSame
}

Random values using only libstd

use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

fn main() {
    let random_value = RandomState::new().build_hasher().finish() as usize;
    println!("Random: {}", random_value);
}

via ffi-support

Testing code blocks in the README


// Ensure code blocks in README.md compile
macro_rules! readme {
    ($x:expr) => {
        #[doc = $x]
        mod readme {}
    };
    () => {
        readme!(include_str!("../README.md"));
    };
}

readme!();

Recursive Queries

SQLite documentation.

Generate a series of integers, per ID, with a given min and max.

WITH RECURSIVE data (id, min, max) AS (
  VALUES
  (1, 4, 6),
  (2, 6, 6),
  (3, 7, 9)
),
exp AS (
  SELECT
    id, min, max, min as x
  FROM data
  UNION ALL
  SELECT
    id, min, max, x+1 as x
  FROM exp
  WHERE x < max
)
SELECT * from exp ORDER BY id;

Temporary values in SQLite

To select from some values:

WITH vals (k,v) AS (
  VALUES
    (1, 100)
)
SELECT * FROM vals;

To actually create a temporary table:

CREATE TEMP TABLE temp_table AS
WITH t (k, v) AS (
  VALUES
    (0, -99999),
    (1, 100)
)
SELECT * FROM t;

Working with dates

Full docs: Date And Time Functions

Datetime of now

SELECT datetime('now');

Timestamp to datetime

SELECT datetime(1092941466, 'unixepoch');

Datetime to timestamp

SELECT strftime('%s', 'now');
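These can be tried directly from Python's built-in sqlite3 module; the timestamp is the one from the example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Convert a Unix timestamp to a datetime string
(dt,) = con.execute("SELECT datetime(1092941466, 'unixepoch')").fetchone()
print(dt)  # 2004-08-19 18:51:06

# And back: datetime string to Unix timestamp (returned as text)
(ts,) = con.execute("SELECT strftime('%s', '2004-08-19 18:51:06')").fetchone()
print(ts)  # 1092941466
```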


Exporting Twitter Spaces recording

For better or for worse people are using Twitter Spaces more and more: audio-only conversations on Twitter. Simon Willison recently hosted one and wrote a TIL on how to download the recording. I helped out because it's actually easier than his initial solution, so I'm copying that here:

Exporting the recording using youtube-dl

Open the Twitter Spaces page, open the Firefox developer tools on the network tab, filter for "m3u", then hit "Play" on the page. The network tab will capture the URL to the playlist file. Copy that.

Then use youtube-dl (or one of its more recent forks like yt-dlp) to download the audio:

youtube-dl ""

This will result in a .mp4 file (media container):

$ mediainfo "playlist_16798763063413909336 [playlist_16798763063413909336].mp4"
Complete name                            : playlist_16798763063413909336 [playlist_16798763063413909336].mp4
Format                                   : ADTS
Format/Info                              : Audio Data Transport Stream
Format                                   : AAC LC
Format/Info                              : Advanced Audio Codec Low Complexity

To extract only the audio part you can use ffmpeg:

ffmpeg -i "playlist_16798763063413909336 [playlist_16798763063413909336].mp4" -vn -acodec copy twitter-spaces-recording.aac