Zellij is a terminal multiplexer written in Rust that aims to provide an intuitive and easily extensible working environment. With version 0.37.0, Zellij introduced a plugin system that allows users to write plugins in WebAssembly. Additionally, the plugin system provides a Rust crate with an API for interacting with the Zellij core.
Since Zellij, at the time of writing, does not offer customization options for the status bar, I decided to write a status bar plugin that is easily customizable through KDL configuration in layout files: zjstatus.
While writing, extending and refactoring the plugin, I learned a lot about the plugin system, WebAssembly and Rust in general, especially about the performance impact of incorrect memory management in Rust. In this article I want to share some of these insights.
In order to follow the article, please check out the official rust-plugin-example, which I used as a starting point.
Initial setup
This section describes the steps I took to set up the development environment and simplify the development process.
Development dependencies
Since I’m a big fan of Nix and NixOS, I started by creating a shell.nix file that provides all the dependencies needed for development. It contains a pinned version of the nixpkgs repository, which holds the build instructions for the used dependencies and their respective versions.
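A minimal sketch of what such a shell.nix could look like; the nixpkgs pin is left as a placeholder, and the exact shape of the expression depends on your setup:

```nix
{ pkgs ? import (fetchTarball
    # pinned nixpkgs revision, replace <rev> with the commit to pin
    "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz") { } }:

pkgs.mkShell {
  buildInputs = with pkgs; [
    clang
    llvm
    rustup
    libiconv
    watchexec
  ];
}
```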
It installs clang, llvm, rustup, libiconv and watchexec into a nix shell, in addition to configuring the build dependencies for the Rust compiler.
To make these dependencies available automatically, I added a .envrc file that utilizes direnv to load the nix shell when entering the project directory.
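With direnv's built-in nix integration, the .envrc is a single line:

```bash
use nix
```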
Later on I switched to a flake.nix file to provide a convenient, reproducible and declarative way to build the project.
Task runner
Even though most development tasks can be performed by running cargo commands directly, I started by setting up a justfile, which is used by just, an alternative to make.
The justfile provides a unified interface for running all the commands needed during development and also lists them conveniently.
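A sketch of such a justfile; the recipes shown here are assumptions based on the commands used throughout this article (the release recipe assumes cargo-edit's cargo set-version is installed):

```just
# list all available recipes
default:
    @just --list

# build the plugin as a wasm binary
build:
    cargo build --target wasm32-wasi

# rebuild the plugin whenever a source file changes
run:
    watchexec --exts rs -- cargo build --target wasm32-wasi

# run the unit tests in a wasm runtime
test:
    cargo component test

# bump the version and create a git tag, e.g. `just release 0.1.0`
release version:
    cargo set-version {{version}}
    git commit -am "chore: release v{{version}}"
    git tag v{{version}}
```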
Commands can be run by calling just <command>, e.g. just run.
The release recipe allows bumping the version and creating a git tag with a single command: just release 0.1.0.
Splitting the code into binary and library
As we want to write tests and benchmarks for the code, we need to split the plugin's entry point from the library code. All parts that we want to test and benchmark need to be part of the library crate. To split the binary from the library, we modify the file structure in the following way:
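The resulting layout looks roughly like this (assuming the crate is named plugin-name and the benchmarks added later live in benches/):

```
.
├── Cargo.toml
├── benches
│   └── benches.rs      # benchmark targets, added later
└── src
    ├── bin
    │   └── plugin.rs   # binary entry point, former src/main.rs
    └── lib.rs          # library code containing the plugin logic
```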
The current main file is moved to src/bin/plugin.rs and a new src/lib.rs file is created, which contains the library code with the plugin logic.
In addition to the directory structure, we need to modify the Cargo.toml file to reflect the changes:
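A sketch of the relevant sections; bench = false is what keeps cargo from treating the library and binary as benchmark targets (names and paths are assumptions):

```toml
[lib]
path = "src/lib.rs"
# don't treat the library as a benchmark target
bench = false

[[bin]]
name = "plugin"
path = "src/bin/plugin.rs"
# don't treat the binary as a benchmark target
bench = false
```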
This prevents cargo from directly running library or binary code when only benchmarks should be run. It is important to configure the project like this, because Zellij's register_plugin API will try to connect to a running Zellij instance, which is not present when running benchmarks. Since all logic-related code is now located in the library, we need to import it in the binary or benchmarks with use plugin_name::*;.
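For illustration, the binary entry point then shrinks to little more than registering the plugin. This is a sketch assuming the library exports a State type that implements Zellij's plugin trait:

```rust
// src/bin/plugin.rs
use zellij_tile::prelude::*;

// pull in the plugin logic from the library crate,
// including the (hypothetical) State type
use plugin_name::*;

// wires the plugin state into zellij's plugin lifecycle
register_plugin!(State);
```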
Unit testing
To ensure the correctness of the plugin logic and to validate that it still works as expected after refactoring, I am a big fan of unit testing. It not only helps to ensure correctness, but also provides a way to document the code and its usage. In addition, it supports developing new features in a test-driven way, where the test is written before the actual implementation.
Unit tests in Zellij plugins can be written by adding the #[cfg(test)] attribute to the test module and writing tests with the #[test] attribute, like in any other Rust project. You just need to make sure that code interacting with Zellij's API is not executed when running tests, as it will fail otherwise. This can be done by using the #[cfg(not(test))] attribute on the code that should not be executed during tests.
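As a small sketch of that pattern (set_timeout is one of the zellij-tile API calls that would fail outside a running Zellij instance; the surrounding function is hypothetical):

```rust
use zellij_tile::prelude::*;

pub fn schedule_refresh() {
    // pure logic above this point stays testable

    // compiled out when running `cargo test`, since the zellij
    // host functions are not available in the test runtime
    #[cfg(not(test))]
    set_timeout(1.0);
}
```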
Another tricky part is running the unit tests, as they need to be executed within a WebAssembly runtime (like wasmtime). To simplify this, I added cargo-component to the project dependencies, which runs the code with wasmtime after it has been compiled. A shortcut for running the tests was already added to the justfile above.
Here’s an example of a unit test that verifies the correctness of the parse_color function:
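This is a sketch; the parse_color signature and the Color type are assumptions about the library's API:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_named_colors() {
        // a known color name resolves to the corresponding color
        assert_eq!(parse_color("red"), Some(Color::Red));
    }

    #[test]
    fn rejects_invalid_input() {
        // unknown names yield no color instead of panicking
        assert_eq!(parse_color("not-a-color"), None);
    }
}
```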
Benchmarks
Benchmarks can be used to check for performance regressions when refactoring code. They can also be used to measure the performance impact of new features. It is therefore important to store the results of benchmark runs, so they can be compared with future ones.
Normally, I like to use divan for writing benchmarks, but it does not support WebAssembly yet. Therefore I switched to criterion since it supports WebAssembly and provides some convenient features for storing and comparing benchmark results.
To start with benchmarks, first add criterion and the benchmark targets to the Cargo.toml file:
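A sketch of the additions; the version number is an assumption, and harness = false is required so criterion can provide its own main function:

```toml
[dev-dependencies]
criterion = "0.5"

[[bench]]
name = "benches"
# let criterion provide the benchmark harness
harness = false
```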
Benchmark targets must be named after the benchmark files in the benches directory, e.g. benches/benches.rs.
Follow the criterion documentation for writing benchmarks. Here’s a quick example:
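A minimal sketch, reusing the parse_color function from above (the import path is an assumption):

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use plugin_name::parse_color; // hypothetical library function

fn bench_parse_color(c: &mut Criterion) {
    c.bench_function("parse_color", |b| {
        // black_box prevents the compiler from optimizing the call away
        b.iter(|| parse_color(black_box("red")))
    });
}

criterion_group!(benches, bench_parse_color);
criterion_main!(benches);
```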
Similar to unit tests, we need to make sure that code which interacts with Zellij's API is not executed when running benchmarks. Therefore we add a new bench feature to the Cargo.toml and annotate the code that should not be executed during benchmarks with the #[cfg(not(feature = "bench"))] attribute.
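The corresponding Cargo.toml section; the feature carries no dependencies and only acts as a compile-time switch:

```toml
[features]
# opt-in switch to compile out zellij API calls for benchmark runs
bench = []
```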
Since criterion needs to store benchmark results in the target directory, we cannot run the benchmarks with cargo component bench. It would not only try to run the plugin wasm binary outside of Zellij, but would also run the benchmarks in wasmtime without granting permission to write to the target directory. Therefore we compile the benchmarks with cargo bench --target wasm32-wasi --benches --no-run --features bench. After compilation, cargo prints the paths to the compiled binaries. These binaries must then be executed with wasmtime, using the --dir flag to grant permission to write to the target directory.
The following recipe in the justfile automates this process:
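A sketch of that recipe; the path of the compiled benchmark binary contains a hash, so a glob is used here, and the way wasmtime forwards guest arguments may differ between versions:

```just
bench:
    # compile the benchmarks to wasm without running them
    cargo bench --target wasm32-wasi --benches --no-run --features bench
    # run the compiled benchmark binary in wasmtime; --dir grants
    # write access so criterion can store its results in target/
    wasmtime --dir=. target/wasm32-wasi/release/deps/benches-*.wasm -- --bench
```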
Since the benchmarks are now able to store their JSON results in the target directory, criterion can compare each run with previous ones.
This technique allowed me to improve the performance of zjstatus significantly, once I had learned how to better utilize the borrow checker and to be aware of memory allocations.
To improve performance, I tried to avoid .clone() calls where possible and to use references instead, since references are cheap to copy, whereas .clone() needs to allocate memory the size of the cloned object and copy the data.
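As a contrived illustration of the difference:

```rust
// takes ownership: callers that keep their value must clone,
// which allocates a new String and copies the bytes
fn width_owned(segment: String) -> usize {
    segment.len()
}

// borrows the value: no allocation, just a pointer-sized copy
fn width_borrowed(segment: &str) -> usize {
    segment.len()
}

fn main() {
    let segment = String::from("mode: normal");
    let _ = width_owned(segment.clone()); // heap allocation + copy
    let _ = width_borrowed(&segment);     // essentially free
}
```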