Today I Learned
A collection of snippets, thoughts and notes about stuff I learned.
Code
Everything is available in a Git repository at github.com/badboy/til.
Summary
So far there are 50 TILs.
Android
- Disable the Pixel Launcher - 2024-09-03
- Set the date in the emulator - 2022-11-06
Disable the Pixel Launcher
As I was dealing with an issue on my Android test phone the other day I wanted to disable the default-installed Pixel Launcher.
To do this you run:
adb shell pm disable-user --user 0 com.google.android.apps.nexuslauncher
If you have no alternative launcher installed, this will leave your phone stuck without a launcher.
You might see this in logcat:
D User unlocked but no home; let's hope someone enables one soon?
To re-enable the launcher run:
adb shell pm enable com.google.android.apps.nexuslauncher
Set the date in the emulator
To set the date & time of the running system:
adb shell su root date 061604052021.00
The datetime is in the format:
MMDDhhmm[[CC]YY][.ss]
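To double-check such a timestamp you can build it from a datetime with strftime; this Python sketch reproduces the value used in the command above:

```python
from datetime import datetime

# Build an adb-style MMDDhhmmCCYY.ss timestamp (including the optional
# [[CC]YY] and [.ss] parts) for 2021-06-16 04:05:00.
dt = datetime(2021, 6, 16, 4, 5, 0)
stamp = dt.strftime("%m%d%H%M%Y.%S")
print(stamp)  # 061604052021.00
```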
BigQuery
- WebAssembly in BigQuery - 2022-11-06
WebAssembly in BigQuery
So you can run WebAssembly code as part of a BigQuery SQL query.
Rust code:
#[no_mangle]
extern "C" fn sum(a: i32, b: i32) -> i32 {
    a + b
}
Compiled using:
cargo build --target wasm32-unknown-unknown --release
with these compile settings in your Cargo.toml:
[lib]
crate-type = ["cdylib"]
[profile.release]
opt-level = "s"
debug = false
lto = true
Turn the Wasm file into a C-like array:
xxd -i target/wasm32-unknown-unknown/release/add.wasm
Then drop the output into the below query:
CREATE TEMP FUNCTION sumInputs(x FLOAT64, y FLOAT64)
RETURNS FLOAT64
LANGUAGE js AS r"""
async function main() {
const memory = new WebAssembly.Memory({ initial: 256, maximum: 256 });
const env = {
'abortStackOverflow': _ => { throw new Error('overflow'); },
'table': new WebAssembly.Table({ initial: 0, maximum: 0, element: 'anyfunc' }),
'tableBase': 0,
'memory': memory,
'memoryBase': 1024,
'STACKTOP': 0,
'STACK_MAX': memory.buffer.byteLength,
};
const imports = { env };
const bytes = new Uint8Array([
0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01, 0x60,
0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x05, 0x03, 0x01,
0x00, 0x10, 0x06, 0x19, 0x03, 0x7f, 0x01, 0x41, 0x80, 0x80, 0xc0, 0x00,
0x0b, 0x7f, 0x00, 0x41, 0x80, 0x80, 0xc0, 0x00, 0x0b, 0x7f, 0x00, 0x41,
0x80, 0x80, 0xc0, 0x00, 0x0b, 0x07, 0x2b, 0x04, 0x06, 0x6d, 0x65, 0x6d,
0x6f, 0x72, 0x79, 0x02, 0x00, 0x03, 0x73, 0x75, 0x6d, 0x00, 0x00, 0x0a,
0x5f, 0x5f, 0x64, 0x61, 0x74, 0x61, 0x5f, 0x65, 0x6e, 0x64, 0x03, 0x01,
0x0b, 0x5f, 0x5f, 0x68, 0x65, 0x61, 0x70, 0x5f, 0x62, 0x61, 0x73, 0x65,
0x03, 0x02, 0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x01, 0x20, 0x00, 0x6a,
0x0b, 0x00, 0x0f, 0x0e, 0x2e, 0x64, 0x65, 0x62, 0x75, 0x67, 0x5f, 0x61,
0x72, 0x61, 0x6e, 0x67, 0x65, 0x73, 0x00, 0x21, 0x04, 0x6e, 0x61, 0x6d,
0x65, 0x01, 0x06, 0x01, 0x00, 0x03, 0x73, 0x75, 0x6d, 0x07, 0x12, 0x01,
0x00, 0x0f, 0x5f, 0x5f, 0x73, 0x74, 0x61, 0x63, 0x6b, 0x5f, 0x70, 0x6f,
0x69, 0x6e, 0x74, 0x65, 0x72, 0x00, 0x4d, 0x09, 0x70, 0x72, 0x6f, 0x64,
0x75, 0x63, 0x65, 0x72, 0x73, 0x02, 0x08, 0x6c, 0x61, 0x6e, 0x67, 0x75,
0x61, 0x67, 0x65, 0x01, 0x04, 0x52, 0x75, 0x73, 0x74, 0x00, 0x0c, 0x70,
0x72, 0x6f, 0x63, 0x65, 0x73, 0x73, 0x65, 0x64, 0x2d, 0x62, 0x79, 0x01,
0x05, 0x72, 0x75, 0x73, 0x74, 0x63, 0x1d, 0x31, 0x2e, 0x35, 0x32, 0x2e,
0x31, 0x20, 0x28, 0x39, 0x62, 0x63, 0x38, 0x63, 0x34, 0x32, 0x62, 0x62,
0x20, 0x32, 0x30, 0x32, 0x31, 0x2d, 0x30, 0x35, 0x2d, 0x30, 0x39, 0x29
]);
return WebAssembly.instantiate(bytes, imports).then(wa => {
const exports = wa.instance.exports;
const sum = exports.sum;
return sum(x, y);
});
}
return main();
""";
WITH numbers AS
(SELECT 1 AS x, 5 as y
UNION ALL
SELECT 2 AS x, 10 as y
UNION ALL
SELECT 3 as x, 15 as y)
SELECT x, y, sumInputs(x, y) as sum
FROM numbers;
References
- Running Python Code in BigQuery by Anna Scholtz
Dates
- The date of Easter - 2022-11-06
The date of Easter
Computes the month and day of (Western, Gregorian) Easter for a given year:

def easter(year):
    y = year
    c = y//100
    n = y - 19*(y//19)
    k = (c - 17)//25
    i = c - c//4 - (c - k)//3 + 19*n + 15
    i = i - 30*(i//30)
    i = i - (i//28)*(1 - (i//28)*(29//(i + 1))*((21 - n)//11))
    j = y + y//4 + i + 2 - c + c//4
    j = j - 7*(j//7)
    l = i - j
    m = 3 + (l + 40)//44
    d = l + 28 - 31*(m//4)
    return (m, d)

print(easter(2022)) # (4, 17)
Based on the wonderful explanation in §3 "Calendrical" of the Inform 7 documentation.
Docker
- Docker on a remote host - 2022-11-06
- Run a shell with a Docker image - 2022-11-21
- SSH into the Docker VM on macOS - 2022-11-06
Docker on a remote host
docker context create remote --docker "host=ssh://hostname"
docker context use remote
Is the docker daemon running?
If the Docker daemon is not running on the remote host, you might see this error message:
Cannot connect to the Docker daemon at http://docker.example.com. Is the docker daemon running?
The docker.example.com host is of course nonsense.
The solution: Start the Docker daemon on the remote host and it should work.
Run a shell with a Docker image
docker run -t -i --rm ubuntu:20.04 bash
Changing the platform, e.g. to use x86_64 when running on an M1 MacBook:
docker run -t -i --rm --platform linux/amd64 ubuntu:20.04 bash
Override the entrypoint:
docker run -t -i --rm --entrypoint /bin/bash ubuntu:20.04
SSH into the Docker VM on macOS
Run socat first:
socat -d -d ~/Library/Containers/com.docker.docker/Data/debug-shell.sock pty,rawer
This will print some lines, including the PTY device opened, like
PTY is /dev/ttys029
Use that to connect using screen:
screen /dev/ttys029
ffmpeg
- Concatenate videos of the same format - 2023-12-18
Concatenate videos of the same format
Sometimes you end up with several little cuts of a longer video file and just want to concatenate those together. Easy and fast to do with the concat demuxer.
List all input files in a file mylist.txt:
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
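Writing mylist.txt by hand gets tedious with many clips; it can be generated with a short script, e.g. this Python sketch (the clip directory and file names here are stand-ins):

```python
import tempfile
from pathlib import Path

# Stand-in directory with example clip names; in practice, point this
# at the directory holding your video parts.
clips_dir = Path(tempfile.mkdtemp())
for name in ("part2.mp4", "part1.mp4"):
    (clips_dir / name).touch()

# One `file '<path>'` line per clip, sorted, as the concat demuxer expects.
lines = [f"file '{p}'\n" for p in sorted(clips_dir.glob("*.mp4"))]
(clips_dir / "mylist.txt").write_text("".join(lines))
print((clips_dir / "mylist.txt").read_text())
```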
Then use the concat demuxer with ffmpeg:
ffmpeg -f concat -safe 0 -i mylist.txt -c copy output.mp4
(via StackOverflow)
Firefox
- From build IDs to push log - 2022-06-23
From build IDs to push log
via @chutten:
Found a regression? Here's how to get a pushlog:
- You have the build dates and you're gonna need revisions. Find the build before the regression and the build after the regression in this list: https://hg.mozilla.org/mozilla-central/firefoxreleases You want to record the Revision column someplace:

  May 10 final f44e64a61ed1
  May 11 final 61a83cc0b74b

- Put the revisions in this template:

  https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange={}&tochange={}
E.g. https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=f44e64a61ed1&tochange=61a83cc0b74b
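Filling in the template is trivial to script; a small Python sketch using the example revisions from above:

```python
# The two revisions recorded from the build list (example values from above).
from_rev = "f44e64a61ed1"
to_rev = "61a83cc0b74b"

template = "https://hg.mozilla.org/mozilla-central/pushloghtml?fromchange={}&tochange={}"
print(template.format(from_rev, to_rev))
```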
git
- Fixup commits - 2022-05-12
- Git helpers - 2022-05-13
- Last modification date of a file - 2021-06-15
- Rebase dependent branches with --update-refs - 2023-02-03
Fixup commits
Fixup commits are commits that build on top of an already existing commit. They can be squashed into the existing commit as a later fixup, e.g. to fix typos or formatting.
git commit comes with builtin support for that: git commit --fixup=<commit>, where <commit> is the existing commit to be modified.
See the documentation for details.
See also git helpers.
Git helpers
git-absorb
git commit --fixup, but automatic
See github.com/tummychow/git-absorb. See also Fixup commits.
git-revise
A handy tool for doing efficient in-memory commit rebases & fixups
See github.com/mystor/git-revise.
Last modification date of a file
Shows the date of the last commit that modified this file:
git log -1 --pretty="format:%ci" path/to/file
See PRETTY FORMATS in git-log(1) for all available formats.
Rebase dependent branches with --update-refs
To automatically adjust all intermediary branches of a larger patch stack, rebase with --update-refs on the latest commit:
git rebase -i main --autosquash --update-refs
GitHub
- GitHub Webhooks - 2022-11-06
GitHub Webhooks
GitHub can send webhooks to a configured server on events. By default this is done on any push event to the repository.
GitHub attaches an HMAC signature using the provided secret, which allows verifying that the content really comes from GitHub. Documentation about this is available in Securing your webhooks.
In Rust one can verify the signature like this:
use hex::FromHex;
use hmac::{Hmac, Mac, NewMac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

fn authenticate(key: &str, content: &[u8], signature: &str) -> bool {
    // https://developer.github.com/webhooks/securing/#validating-payloads-from-github
    const SIG_PREFIX: &str = "sha256=";
    let sans_prefix = signature[SIG_PREFIX.len()..].as_bytes();
    match Vec::from_hex(sans_prefix) {
        Ok(sigbytes) => {
            let mut mac = HmacSha256::new_from_slice(key.as_bytes())
                .expect("HMAC can take key of any size");
            mac.update(content);
            mac.verify(&sigbytes).is_ok()
        }
        _ => false,
    }
}
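The same check can be done in Python using only the standard library; the key and payload here are made-up examples:

```python
import hashlib
import hmac

def authenticate(key: str, content: bytes, signature: str) -> bool:
    # GitHub sends the signature as "sha256=<hexdigest>"
    # in the X-Hub-Signature-256 header.
    expected = "sha256=" + hmac.new(key.encode(), content, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(expected, signature)

payload = b'{"action": "push"}'
good_sig = "sha256=" + hmac.new(b"secret", payload, hashlib.sha256).hexdigest()
print(authenticate("secret", payload, good_sig))    # True
print(authenticate("secret", payload, "sha256=00")) # False
```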
Gradle
- Run tests using Gradle - 2022-11-06
Run tests using Gradle
Run a single test:
./gradlew testDebugUnitTest --tests TestClassNameHere.testFunctionHere
Rerun tests when up-to-date
Use --rerun-tasks or set test.outputs.upToDateWhen { false } in the config.
iOS
- iOS log output from device or simulator - 2022-12-07
- Trigger notifications in the simulator - 2022-11-03
iOS log output from device or simulator
On macOS: Use the Console tool. Alternatively: idevicesyslog from libimobiledevice.
Install:
brew install libimobiledevice
Usage:
idevicesyslog --process Client
Client stands in for the process name (Client is the name of Firefox iOS).
via https://bsddaemonorg.wordpress.com/2021/04/12/analysing-ios-apps-log-output/
Trigger notifications in the simulator
The notification goes into a notification.apns file:
{
"aps": {
"alert": {
"title": "Push Notification",
"subtitle": "Test Push Notifications",
"body": "Testing Push Notifications on iOS Simulator"
}
}
}
Trigger the notification with:
xcrun simctl push booted com.apple.MobileSMS notification.apns
This sends it to the application com.apple.MobileSMS. Your own application's bundle identifier can be used if it handles notifications.
The application's bundle identifier can also be specified in the APNS file with the "Simulator Target Bundle" key. It can then be left out on the command line.
APNS files can also be dropped onto the simulator to be sent.
Kotlin
- var vs. val - Difference - 2022-11-06
var vs. val - Difference
Source: https://www.kotlintutorialblog.com/kotlin-var-vs-val/
- var: Mutable. Used to declare a mutable variable. The value of the variable can be changed multiple times.
- val: Immutable. Used to declare a read-only variable. Once a value is assigned to the variable, it can't be changed later.
val is the same as final in Java.
Linux
- Running parallel tasks from make - 2022-11-06
- sudo alternatives - 2024-05-03
- Symbols in shared libraries - 2022-11-06
Running parallel tasks from make
With the combination of multiple tools, you can serve static files over HTTP and rerun a build step whenever any input file changes.
I use these tools: make, mdbook, fd, entr, and http (a static file server).
With this Makefile
:
default:
	$(MAKE) MAKEFLAGS=--jobs=2 dev
.PHONY: default

dev: serve rerun
.PHONY: dev

build:
	# Put your build task here.
	# I generate a book using https://github.com/rust-lang/mdBook
	mdbook build
.PHONY: build

serve: build
	@echo "Served on http://localhost:8000"
	# Change to the generated build directory, then serve it.
	cd _book && http
.PHONY: serve

rerun:
	# fd respects your `.gitignore`
	fd | entr -s 'make build'
.PHONY: rerun
All it takes to continuously serve and build the project is:
make
sudo alternatives
How to gain privileges to run commands as another user (most likely root).
- doas (repository): from the BSD world
- really: minimal suid binary, checking only a user's group and write access to a file
- sudo-rs: sudo reimplementation in Rust
- run0: systemd-powered tool, based on systemd-run and polkit
- userv: old unix tool
Symbols in shared libraries
List all exported symbols of a dynamic library:
nm -gD path/to/libyourcode.so
To look at the largest objects/functions in libxul:
readelf -sW $NIGHTLY/libxul.so | sort -k 3 -g -r | head -n 100
To look at the disassembly:
objdump -dr $OBJ | c++filt
On macOS:
otool -tV $OBJ | c++filt
Machine Translation
- Using machine translation for subtitles in mpv - 2024-07-01
Using machine translation for subtitles in mpv
The Bergamot project built a small and fast machine translation model that runs completely local on your machine. Mozilla helped turn this into an addon1, a website and integrated it into Firefox. You can also go to about:translations directly.
The Bergamot translator is available as a C++ library, which can also be compiled to WebAssembly2.
I wanted to use it to translate subtitles in a movie on-the-fly. The movie only contained subtitles in languages I barely know (Norwegian3, Swedish, Finnish, Danish), so getting some help with those subtitles translated to English is required.
Getting Bergamot Translator to work
Getting a CLI tool to use the Bergamot translator is properly documented, but of course I ran into issues before reading that documentation and figuring things out.
- Clone the repository
- Configure and build the project
; git clone https://github.com/browsermt/bergamot-translator
; cd bergamot-translator
; mkdir build
; cd build
; cmake ..
; make -j8
That will give you a CLI tool in build/app/bergamot
:
; build/app/bergamot --build-info
AVX2_FOUND=false
AVX512_FOUND=false
AVX_FOUND=false
BUILD_ARCH=native
<snip>
which just works:
; build/app/bergamot
[1] 22529 segmentation fault build/app/bergamot
or not.
You need the models from http://data.statmt.org/bergamot/models/ first.
I wanted to translate Norwegian, so went with nben4. That is this file:
https://data.statmt.org/bergamot/models/nben/nben.student.tiny11.v1.e410ce34f8337aab.tar.gz
; mkdir -p models
; wget --quiet --continue --directory models/ \
https://data.statmt.org/bergamot/models/nben/nben.student.tiny11.v1.e410ce34f8337aab.tar.gz
; (cd models && tar -xzf nben.student.tiny11.v1.e410ce34f8337aab.tar.gz)
Then some patching is required; Bergamot has a tool for that:
; python3 bergamot-translator-tests/tools/patch-marian-for-bergamot.py --config-path models/nben.student.tiny11/config.intgemm8bitalpha.yml --ssplit-prefix-file $(realpath 3rd_party/ssplit-cpp/nonbreaking_prefixes/nonbreaking_prefix.en)
I don't actually know what exactly it patches nor why they don't offer the already patched files.
Last but not least the tool now translates text on stdin from Norwegian to English:
; CONFIG=models/nben.student.tiny11/config.intgemm8bitalpha.yml.bergamot.yml
; build/app/bergamot --model-config-paths $CONFIG --cpu-threads 4 <<< "Jeg snakker litt norsk"
I'm talking a little Norwegian.
For ease of use later I wrapped this in a short shell script:
; cat ~/bin/translate-nben
#!/bin/bash
CONFIG=~/code/bergamot-translator/models/nben.student.tiny11/config.intgemm8bitalpha.yml.bergamot.yml
printf "%s" "$1" | ~/code/bergamot-translator/build/app/bergamot --model-config-paths $CONFIG --cpu-threads 4
Make it executable, then run it with the text to translate as an argument:
; chmod +x ~/bin/translate-nben
; translate-nben "Jeg snakker litt norsk."
I speak a little Norwegian.
Translating subtitles in mpv
I found a script for mpv5 that does the heavy lifting of extracting the current subtitle line, sending it to a translator and displaying the answer: subtitle-translate-mpv. It uses Crow, another translator tool that uses a number of web services for the translation. We replace that later with our local-only Bergamot-powered translator.
Let's install the script (this is on macOS, adjust paths accordingly for other systems)
; cd ~/.config/mpv
; mkdir -p scripts && cd scripts
; git clone --depth 1 https://github.com/EnergoStalin/subtitle-translate-mpv.git
Add the configuration for hotkeys:
; cat ~/.config/mpv/input.conf
CTRL+t script-message enable-sub-translator
CTRL+T script-message disable-sub-translator
ALT+t script-message sub-translated-only
ALT+o script-message sub-primary-original
Now edit ~/.config/mpv/scripts/subtitle-translate-mpv/modules/translators/crow.lua.
Find the line where it puts together the command to run.
At the time of this writing this is line 35 and following.
Change that to run your script instead:
local args = {
'/Users/jer/bin/translate-nben',
escaped
}
Save this and you're set.
Start your movie, enable subtitles (press v and select the right language with j).
Then press Ctrl-t to enable the translation (disable it again with Ctrl-T, so that's Ctrl-Shift-t).
Toggle the original one on or off with Option-t (or Alt-t on not-macOS).
This is what it will look like: original subtitle on the bottom, translated one above6.
Now part of Firefox directly.
That's what the website and the Firefox integration use.
nb=Norwegian Bokmål, one of the official written standards of the Norwegian language; en=English.
mpv.io, my media player of choice.
I have mine configured to be a bit smaller, sub-scale=0.5 in ~/.config/mpv/mpv.conf.
macOS
- List linked dynamic libraries - 2022-11-23
- Check who holds SecureInput lock - 2021-05-21
List linked dynamic libraries
otool -L path/to/liblib.dylib
Check who holds SecureInput lock
Individual applications on macOS can request SecureInput mode, which disables some functionality that would otherwise allow capturing input.
One can check if SecureInput is active and which process holds the lock:
$ ioreg -l -w 0 | grep SecureInput
| "IOConsoleUsers" = ({"kCGSSessionOnConsoleKey"=Yes,"kSCSecuritySessionID"=100024,"kCGSSessionSecureInputPID"=123,"kCGSSessionGroupIDKey"=20,
"kCGSSessionIDKey"=257,"kCGSessionLoginDoneKey"=Yes,"kCGSSessionSystemSafeBoot"=No,"kCGSSessionUserNameKey"="user",
"kCGSessionLongUserNameKey"="username","kCGSSessionAuditIDKey"=100001,"kCGSSessionLoginwindowSafeLogin"=No,"kCGSSessionUserIDKey"=101})
The kCGSSessionSecureInputPID entry holds the PID of the process that holds the SecureInput lock.
Find that process with ps:
ps aux | grep $pid
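Pulling the PID out of the ioreg output can also be scripted; a Python sketch working on a shortened sample of the line shown above:

```python
import re

# A (shortened) sample line as printed by `ioreg -l -w 0 | grep SecureInput`.
sample = '"kCGSSessionSecureInputPID"=123,"kCGSSessionGroupIDKey"=20'

# Extract the PID following the kCGSSessionSecureInputPID key.
match = re.search(r'"kCGSSessionSecureInputPID"=(\d+)', sample)
if match:
    print(match.group(1))  # 123
```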
nginx
- Match and act on query parameters - 2022-12-20
- Set and return a cookie - 2024-07-22
Match and act on query parameters
In order to serve different files based on query parameters, first create a map:
map $query_string $resource_name {
~resource=alice$ alice;
~resource=bob$ bob;
}
Then in your server block match the location and inside rewrite the URL the way you need it:
location = /.well-known/thing {
root /var/www/thing;
if ($resource_name) {
rewrite ^(.*)$ /$resource_name.json break;
}
try_files $uri = 404;
}
Now if someone requests /.well-known/thing?resource=alice nginx will serve /var/www/thing/alice.json.
Set and return a cookie
nginx can directly set and read a cookie.
This snippet sets a cookie i_was_here to the value hello_world and also renders a received cookie into a JSON payload:
location = /cookies {
add_header Content-Type application/json;
add_header Set-Cookie i_was_here=hello_world;
return 200 '{"cookie": "$cookie_i_was_here" }';
}
Test it with:
curl https://example.com/cookies -H 'Cookie: i_was_here=hello_world'
Note that the cookie is client-supplied, so it may contain arbitrary content, and no escaping happens. A request like the following will therefore result in invalid JSON:
curl https://example.com/cookies -H 'Cookie: i_was_here=foo" test'
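To see why this breaks, note the value is interpolated straight into the JSON template; a pure-Python illustration (no nginx involved):

```python
import json

# The same template nginx renders the cookie value into.
template = '{"cookie": "%s" }'

# A harmless value produces valid JSON ...
json.loads(template % "hello_world")

# ... but a value containing a quote does not.
try:
    json.loads(template % 'foo" test')
except json.JSONDecodeError:
    print("invalid JSON")
```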
nix
- home-manager: Allow unfree packages - 2023-05-03
- Home Manager and how to use it - 2023-04-27
- List all available attributes of a flake - 2023-05-16
- A minimal flake for a shell - 2023-07-16
- Changes after updating home-manager - 2023-04-28
- Remote Builds - 2023-04-27
- Replacing/Adding another cache server - 2023-05-04
- Update nix - 2023-08-19
home-manager: Allow unfree packages
Some packages are unfree due to their licenses, e.g. android-studio.
To use them one needs to allow unfree packages.
In a home-manager flake this can be done as follows.
In one of your modules add:
{ pkgs, ... }: {
nixpkgs = {
config = {
allowUnfree = true;
allowUnfreePredicate = (_: true);
};
};
}
The allowUnfreePredicate is due to home-manager#2942 (I haven't actually checked that it is necessary).
Home Manager and how to use it
Configuration is located in ~/.config/home-manager
Activate configuration
home-manager switch
Update everything
cd ~/.config/home-manager
nix flake update
home-manager switch
List all available attributes of a flake
List all names of packages for a certain target architecture:
$ nix eval .#packages.x86_64-linux --apply builtins.attrNames --json
["default"]
List all supported architectures of a package:
$ nix eval .#packages --apply builtins.attrNames --json
["aarch64-darwin","aarch64-linux","i686-linux","x86_64-darwin","x86_64-linux"]
List everything from a flake:
$ nix eval . --apply builtins.attrNames --json
["__darwinAllowLocalNetworking","__ignoreNulls","__impureHostDeps","__propagatedImpureHostDeps","__propagatedSandboxProfile","__sandboxProfile","__structuredAttrs","all","args","buildInputs","buildPhase","builder","cargoArtifacts","cargoVendorDir","checkPhase","cmakeFlags","configureFlags","configurePhase","depsBuildBuild","depsBuildBuildPropagated","depsBuildTarget","depsBuildTargetPropagated","depsHostHost","depsHostHostPropagated","depsTargetTarget","depsTargetTargetPropagated","doCheck","doInstallCargoArtifacts","doInstallCheck","drvAttrs","drvPath","inputDerivation","installPhase","mesonFlags","meta","name","nativeBuildInputs","out","outPath","outputName","outputs","overrideAttrs","passthru","patches","pname","propagatedBuildInputs","propagatedNativeBuildInputs","src","stdenv","strictDeps","system","type","userHook","version"]
(via garnix: steps)
A minimal flake for a shell
{
description = "A very basic flake";
outputs = { self, nixpkgs, ... }:
let
supportedSystems = [ "aarch64-linux" "aarch64-darwin" "x86_64-darwin" "x86_64-linux" ];
forAllSystems = nixpkgs.lib.genAttrs supportedSystems;
in
{
devShells = forAllSystems (system:
let
pkgs = nixpkgs.legacyPackages.${system};
in
{
default = pkgs.mkShell {
buildInputs = with pkgs;
[
ripgrep
];
};
});
};
}
Start a shell with:
nix develop
Old way:
A shell.nix with:
with import <nixpkgs> {};
pkgs.mkShell {
nativeBuildInputs = with pkgs; [
ripgrep
];
}
And run it with
nix-shell
Changes after updating home-manager
Using nvd, diff the latest 2 home-manager generations:
home-manager generations | head -n 2 | cut -d' ' -f 7 | tac | xargs nvd diff
Of course this only shows the changes after they are installed.
This is also built-in now using nix store diff-closures:
$ nix store diff-closures /nix/store/xfy75lrmsh23hj2c8kzqr4n1cfvzh1s2-home-manager-generation /nix/store/5rdyvvk6jngd2hd44bsa14bpzxraigbi-home-manager-generation
gnumake: 4.4.1 → ∅, -1546.9 KiB
home-manager: -9.0 KiB
via Nix Manual: nix store diff-closures
Remote Builds
After reading Using Nix with Dockerfiles I wanted to understand how to use dockerTools.buildImage to build a Docker image, instead of relying on a Dockerfile.
The issue: I'm on an M1 MacBook, an aarch64 machine. Docker on this machine runs within an aarch64 Linux VM. Naively building a nix flake means it builds for aarch64 macOS, which then cannot run within the Docker container. So I needed to understand how to either cross-compile or use a remote builder.
I went for the latter, using my x86_64-linux server with nix installed as a remote builder.
I started with the test command from the Remote Builds docs, slightly modified:
nix build --impure --expr '(with import <nixpkgs> { system = "x86_64-linux"; }; runCommand "foo" {} "uname > $out")' --builders 'ssh://builder x86_64-linux' --max-jobs 0 -vvv
--max-jobs 0 ensures it won't run any local tasks and -vvv shows all the debug output.
This starts downloading nixpkgs and stuff and then ... fails:
error: unable to start any build; either increase '--max-jobs' or enable remote builds.
https://nixos.org/manual/nix/stable/advanced-topics/distributed-builds.html
Unhelpful.
The -vvv was necessary to even get any understanding of what's failing.
Close to the top one can see this:
ignoring the client-specified setting 'builders', because it is a restricted setting and you are not a trusted user
The docs about trusted-users say that adding users there essentially gives that user root rights.
So let's not do that and instead configure the builder machine in /etc/nix/machines:
ssh://builder x86_64-linux
I also needed to set the user and the SSH key:
ssh://jer@builder?ssh-key=/Users/jer/.ssh/id_ed25519
Apparently builders = @/etc/nix/machines is the default, but if not you can set that in /etc/nix/nix.conf.
After that a restart of the nix daemon will be necessary:
sudo launchctl kickstart -k system/org.nixos.nix-daemon
Re-running the nix build --impure ... will fail again:
error: unexpected end-of-file
error: builder for '/nix/store/6ji85w7v51fs3x21szvbgmx4dj0vpjqs-foo.drv' failed with exit code 1;
last 10 log lines:
[...]
> error: you are not privileged to build input-addressed derivations
[...]
> debug1: Exit status 1
For full logs, run 'nix log /nix/store/6ji85w7v51fs3x21szvbgmx4dj0vpjqs-foo.drv'.
Sounds very similar to the initial issue.
This time I set trusted-users = jer in /etc/nix/nix.conf on the builder machine.
Then restarted the nix daemon with:
systemctl restart nix-daemon
Now the nix build on macOS succeeds:
$ cat result
Linux
Building Docker images for x86_64
Last but not least I can now build the Docker image for x86_64. The full example is in github:badboy/flask-nix-example.
nix build '.#packages.x86_64-linux.dockerImage' --max-jobs 0
Then load it:
docker load < result
And finally run the container:
docker run -it --rm -p 5001:5000 --platform linux/amd64 flask-example
Resources
- Using Nix with Dockerfiles
- NixOS/hydra#584: you are not privileged to build derivations
- Nix Reference Manual: Nix configuration file: trusted-users
- Nix Reference Manual: Distributed Builds
- github:badboy/flask-nix-example
Replacing/Adding another cache server
Add the following to /etc/nix/nix.conf
:
substituters = https://aseipp-nix-cache.freetls.fastly.net?priority=10 https://cache.nixos.org
cache.nixos.org is the default, with priority 40.
See the reference manual.
Lower value means higher priority.
According to that the substituters are only used when called by a trusted user or in a trusted substituter list.
According to https://nixos.wiki/wiki/Maintainers:Fastly#Beta_.2B_IPv6_.2B_HTTP.2F2 the aseipp-nix-cache is new.
It's unclear from when that info is and what the current status of this project is (as of May 2023).
Update nix
nix upgrade-nix
This might need to be run as root:
sudo -i nix upgrade-nix
-i inherits the environment so that nix is actually available in the $PATH.
Sources:
- https://github.com/DeterminateSystems/nix-installer/issues/421
- https://github.com/DeterminateSystems/nix-installer/issues/508
- https://github.com/DeterminateSystems/nix-installer/issues/596
PostgreSQL
- Meta commands in psql - 2022-11-06
Meta commands in psql
| Command | Note |
|---|---|
| \l | List databases |
| \c | Connect to database |
| \dt | List tables |
| \d $table | List schema of $table |
Proxmox
- Dropping sys_rawio capabilities for LXC container - 2024-01-09
- Resize disks of a VM - 2022-11-06
Dropping sys_rawio capabilities for LXC container
Proxmox can launch lightweight LXC-powered containers. By default they run unprivileged, meaning root (UID 0) inside the container is mapped to a non-root ID (e.g. UID 100000) on the host (see also Proxmox Wiki: Unprivileged LXC containers).
Launching Debian in such a container works, but some services might fail to start:
; sudo systemctl | grep failed
* sys-kernel-config.mount loaded failed failed Kernel Configuration File System
Instead of just masking that service (so that it never launches) we can take away the sys_rawio capability.
The service then handles it correctly: if the capability is not available it won't even try.
To do that edit /etc/pve/lxc/$ID.conf, where $ID is the ID of your container, e.g. 102.
Add this line:
lxc.cap.drop: sys_rawio
Save, restart the container and it should all be fine again.
Source: Proxmox LXC, Systemd, and Linux Capabilities
Resize disks of a VM
Source: https://pve.proxmox.com/wiki/Resize_disks
- Extend the disk in the web UI
- Run partprobe on the machine
- Run parted, then print to show the current layout. This will ask you to fix the GPT. Say "Fix"
- Run resizepart 1, with 1 being the partition ID. It asks for the end. Type 100%
- Resize the filesystem: resize2fs /dev/vda1
pyinfra
- Download and unpack a compressed file using pyinfra - 2024-01-05
Download and unpack a compressed file using pyinfra
pyinfra is a tool to automate infrastructure deployments. Instead of defining everything in YAML like Ansible it uses plain Python code.
I heard about it a while ago and finally started to use it for a single server deployment.
One of the things deployed is a web frontend.
To update it I would download the new tar.gz file from the releases page, extract it and move it to the final location to be used by my webserver.
I wanted to automate that.
pyinfra has an operation to download files, but these are just placed in the filesystem. The file is not extracted nor its contents moved to a specific place.
I thus combined that with some additional shell commands to extract the file and put the content in the final destination. I'm not sure if this is a bit of a hack or the right way to do things in pyinfra.
First off, define the tool version and checksum in group_data/all.py:
tool_version = "v1.0.4"
tool_sha256 = "01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b"
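To update those values, the checksum of a downloaded release can be computed locally; a small Python helper (the demo file here is a throwaway stand-in for the real tar.gz):

```python
import hashlib
import tempfile

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large archives don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo on a throwaway file; in practice, point this at the downloaded release.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
print(sha256_of(f.name))
```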
Then we define the download and extraction step in deploy.py:
from pyinfra import host
from pyinfra.operations import files, server
# The URL to download from
tool_url = f"https://tool-homepage.example.com/releases/download/tool-{host.data.tool_version}.tar.gz"
# Where to store the downloaded file
tool_target = "/opt/tool/tool.tar.gz"
tool_downloaded = files.download(
    name=f"Download tool {host.data.tool_version}",
    src=tool_url,
    dest=tool_target,
    sha256sum=host.data.tool_sha256,
)

if tool_downloaded.changed:
    server.shell(
        name="Unpack tool and move content to destination",
        commands=[
            f"tar -C /opt/tool -zxvf {tool_target}",
            f"rsync -a /opt/tool/tool-{host.data.tool_version}/ /var/www/tool/"
        ]
    )
Some explanation:
- The version and checksum are defined as data. To update change those.
- This downloads the file to a fixed location. That way we can rely on pyinfra doing this only if the file is missing or has the wrong checksum. No unnecessary re-downloads.
- Only when the tool is actually downloaded do we extract it. We expect the compressed tar file to have one top-level directory tool-{version}.
- We rsync this folder into the final location, in the code above that's /var/www/tool. Depending on your use case you might want to add --delete to delete files from the destination that don't exist in the new version of the tool anymore.
- This doesn't clean up old downloads of the compressed file. That could be done with some globbing, skipping the new file in that list.
Python
- Modify integer literals - 2022-11-06
- Deduplicate a list and keep the order - 2023-03-30
- pip - Install from Git - 2022-11-06
- Strip Markdown syntax - 2022-11-06
Modify integer literals
Small integer literals in Python refer to the same cached object every time they are used. One can modify those objects:
from sys import getsizeof
from ctypes import POINTER, c_void_p, c_char, cast

def read_int(obj: int, vv=True) -> bytes:
    size = getsizeof(obj)
    ptr = cast(c_void_p(id(obj)), POINTER(c_char))
    buf = ptr[0:size]
    if vv:
        print(f"int obj @ {hex(id(obj))}: {buf.hex(' ')}")
    return buf

def write_int(dst: int, src: int):
    raw_src = read_int(src, False)
    dst_ptr = cast(c_void_p(id(dst)), POINTER(c_char))
    for (idx, c) in enumerate(raw_src):
        dst_ptr[idx] = c

read_int(1)
write_int(1, 2)
read_int(1)

a = 1
b = 2
print(a + b)
(via https://twitter.com/segfault_witch/status/1512160978129068032)
Deduplicate a list and keep the order
duplicated_list = [1,1,2,1,3,4,1,2,3,4]
ordered = list(dict.fromkeys(duplicated_list)) # => [1, 2, 3, 4]
pip - Install from Git
To install a Python package from Git instead of a PyPI-released version, do this:
pip install git+ssh://git@github.com/account/repository@branch#egg=package-name
See also: Useful tricks with pip install URL and GitHub
Strip Markdown syntax
In order to strip Markdown syntax and leave only the plain text output one can patch the Markdown parser:
from markdown import Markdown
from io import StringIO
def unmark_element(element, stream=None):
if stream is None:
stream = StringIO()
if element.text:
stream.write(element.text)
for sub in element:
unmark_element(sub, stream)
if element.tail:
stream.write(element.tail)
return stream.getvalue()
Markdown.output_formats["plain"] = unmark_element
__md = Markdown(output_format="plain")
__md.stripTopLevelTags = False
def strip_markdown(text):
return __md.convert(text)
Then call the strip_markdown function:
text = """
# Hello *World*!
[Today I learned](https://fnordig.de/til)
"""
print(strip_markdown(text))
This results in:
Hello World!
Today I learned
(via https://stackoverflow.com/a/54923798)
Rust
- No-op allocator - 2022-11-06
- Not-equal types - 2022-11-06
- Random values using only libstd - 2022-11-06
- Testing code blocks in the README - 2022-11-06
No-op allocator
use std::alloc::{GlobalAlloc, Layout};

#[global_allocator]
static ALLOCATOR: NoopAlloc = NoopAlloc;

struct NoopAlloc;

unsafe impl GlobalAlloc for NoopAlloc {
    unsafe fn alloc(&self, _layout: Layout) -> *mut u8 {
        std::ptr::null_mut()
    }
    unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) {}
}

#[no_mangle]
pub extern "C" fn add(left: i32, right: i32) -> i32 {
    left + right
}

fn main() {}
This also reduces the generated wat (WebAssembly text format) to a short and readable output:
(module
(type $t0 (func (param i32 i32) (result i32)))
(func $add (export "add") (type $t0) (param $p0 i32) (param $p1 i32) (result i32)
(i32.add
(local.get $p1)
(local.get $p0)))
(func $main (export "main") (type $t0) (param $p0 i32) (param $p1 i32) (result i32)
(unreachable)
(unreachable))
(memory $memory (export "memory") 16)
(global $__data_end (export "__data_end") i32 (i32.const 1048576))
(global $__heap_base (export "__heap_base") i32 (i32.const 1048576)))
Not-equal types
// requires nightly!
#![feature(auto_traits)]
#![feature(negative_impls)]

use std::marker::PhantomData;

auto trait NotSame {}
impl<A> !NotSame for (A, A) {}

struct Is<S, T>(PhantomData<(S, T)>);

impl<S, T> Is<S, T>
where
    (S, T): NotSame,
{
    fn absurd(&self) {}
}

fn main() {
    let t: Is<u32, u32> = Is(PhantomData);
    //t.absurd(); // does not compile: (u32, u32) is not NotSame
    let z: Is<u32, i32> = Is(PhantomData);
    z.absurd();
}
Random values using only libstd
use std::collections::hash_map::RandomState;
use std::hash::{BuildHasher, Hasher};

fn main() {
    let random_value = RandomState::new().build_hasher().finish() as usize;
    println!("Random: {}", random_value);
}
Testing code blocks in the README
via github.com/artichoke/intaglio
// Ensure code blocks in README.md compile
#[cfg(doctest)]
macro_rules! readme {
    ($x:expr) => {
        #[doc = $x]
        mod readme {}
    };
    () => {
        readme!(include_str!("../README.md"));
    };
}

#[cfg(doctest)]
readme!();
SQLite
- Recursive Queries - 2022-12-04
- Temporary values in SQLite - 2022-11-06
- Working with dates - 2022-11-06
Recursive Queries
Generate a series of integers, per ID, with a given min
and max
.
CREATE TABLE data AS
WITH t (id, min, max) AS (
VALUES
(1, 4, 6),
(2, 6, 6),
(3, 7, 9)
)
SELECT * FROM t;
WITH RECURSIVE exp AS (
SELECT
id, min, max, min as x
FROM data
UNION ALL
SELECT
id, min, max, x+1 as x
FROM
exp
WHERE x < max
)
SELECT * from exp ORDER BY id;
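The two statements above can be run end-to-end from Python's built-in sqlite3 module, which makes it easy to check the expansion:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE data AS
    WITH t (id, min, max) AS (
        VALUES (1, 4, 6), (2, 6, 6), (3, 7, 9)
    )
    SELECT * FROM t;
""")
rows = con.execute("""
    WITH RECURSIVE exp AS (
        SELECT id, min, max, min AS x FROM data
        UNION ALL
        SELECT id, min, max, x + 1 AS x FROM exp WHERE x < max
    )
    SELECT id, x FROM exp ORDER BY id, x;
""").fetchall()
print(rows)
# → [(1, 4), (1, 5), (1, 6), (2, 6), (3, 7), (3, 8), (3, 9)]
```

Each row expands into one row per integer between its min and max, inclusive.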
Temporary values in SQLite
To select from some values:
WITH vals (k,v) AS (
VALUES
(0,-9999),
(1, 100)
)
SELECT * FROM vals;
To actually create a temporary table:
CREATE TEMP TABLE temp_table AS
WITH t (k, v) AS (
VALUES
(0, -99999),
(1, 100)
)
SELECT * FROM t;
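A quick way to verify the difference between the two variants, using Python's sqlite3: the VALUES CTE exists only for its one statement, while the TEMP TABLE survives for follow-up queries on the same connection.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# One-shot values via a CTE:
rows = con.execute(
    "WITH vals (k, v) AS (VALUES (0, -9999), (1, 100)) SELECT * FROM vals"
).fetchall()
assert rows == [(0, -9999), (1, 100)]

# A real temporary table can be queried again later on this connection:
con.executescript("""
    CREATE TEMP TABLE temp_table AS
    WITH t (k, v) AS (VALUES (0, -99999), (1, 100))
    SELECT * FROM t;
""")
assert con.execute("SELECT v FROM temp_table WHERE k = 1").fetchone() == (100,)
```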
Working with dates
Full docs: Date And Time Functions
Datetime of now
SELECT datetime('now');
Timestamp to datetime
SELECT datetime(1092941466, 'unixepoch');
Datetime to timestamp
SELECT strftime('%s', 'now');
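The same functions work from Python's sqlite3 module. The timestamp 1092941466 is the sample value from the SQLite documentation; converting it both ways:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Unix timestamp -> datetime string:
(dt,) = con.execute("SELECT datetime(1092941466, 'unixepoch')").fetchone()
assert dt == "2004-08-19 18:51:06"

# Datetime string -> Unix timestamp (note: strftime returns text):
(ts,) = con.execute("SELECT strftime('%s', '2004-08-19 18:51:06')").fetchone()
assert ts == "1092941466"
```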
Twitter
- Exporting Twitter Spaces recording - 2022-11-06
Exporting Twitter Spaces recording
For better or for worse, people are using Twitter Spaces more and more: audio-only conversations on Twitter. Simon Willison recently hosted one and wrote a TIL about how to download the recording. I helped out because it's actually easier than his initial solution, so I'm copying that here:
Exporting the recording using youtube-dl
Open the Twitter Spaces page, open your Firefox developer tools console in the network tab, filter for "m3u", then hit "Play" on the page. The network tab will capture the URL to the playlist file. Copy that.
Then use youtube-dl (or one of its more recent forks like yt-dlp) to download the audio:
youtube-dl "https://prod-fastly-us-west-1.video.pscp.tv/Transcoding/v1/hls/GPI6dSzgZcfqRfMLplfNp_0xu1QXQ8iDEEA0KymUd5WuqOZCZ9LGGKY6vBQdumX7YV1TT2fGtMdXdl2qqtVvPA/non_transcode/us-west-1/periscope-replay-direct-prod-us-west-1-public/audio-space/playlist_16798763063413909336.m3u8?type=replay"
This will result in a .mp4 file (a media container):
$ mediainfo "playlist_16798763063413909336 [playlist_16798763063413909336].mp4"
General
Complete name : playlist_16798763063413909336 [playlist_16798763063413909336].mp4
Format : ADTS
Format/Info : Audio Data Transport Stream
[...]
Audio
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
[...]
To extract only the audio part you can use ffmpeg:
ffmpeg -i "playlist_16798763063413909336 [playlist_16798763063413909336].mp4" -vn -acodec copy twitter-spaces-recording.aac