Develop
For development, the clang toolchain is recommended.
nix-shell toolchain/clang_dev.nix # from repo root
cmake -S . -B build -GNinja
cd build
ninja
ctest -j
ninja format
ninja docs
ninja coverage
Then, when VS Code is opened from within the nix-shell, the correct clangd & library paths are picked up.
To use infer, it must be installed separately (it is not yet packaged with nix - see here):
- Build with clang
cmake -S . -B build -DEXTERNALS=Off
infer run --compilation-database build/compile_commands.json --bufferoverrun --liveness --pulse
infer explore
- Statically detects generic bugs (e.g. use-after-free, buffer overruns, integer overflows)
To check for dependency upgrades in the C++ code:
nix-shell toolchain/renovate.nix
LOG_LEVEL=debug renovate --platform=local --dry-run=full
Verifying Changes
All tests, linting & benchmarks are checked under ctest:
ctest --test-dir build -N # List the test names
ctest --test-dir build -R 'derive-c-custom-lint' -V # See results for custom derive-c lints
ctest --test-dir build -j --output-on-failure # To test all
ctest --test-dir build -L bench -R allocs -V # To just run the alloc benchmarks
Individual test and benchmark binaries are present in the build tree:
ls build/bench # all benchmarking binaries
ls build/test # all test binaries
We also verify with both clang and gcc, in release mode and with sanitizers.
# Used for normal development (includes intellisense)
nix-shell toolchain/clang_dev.nix
# Verifying code works (e.g. custom poisoning under msan)
nix-shell toolchain/clang_msan.nix --run 'cmake -S . -B build_msan -DUSE_ASAN=Off -DUSE_UBSAN=Off -DUSE_MSAN=On -DDOCS=Off && ninja -C build_msan && ctest --test-dir build_msan -j --output-on-failure'
Finally, we have benchmarks covering:
- Basic cases so we can check for obvious performance regressions
- Worst case scenarios for the library
A normal development cycle should occur in the clang dev toolchain, with tests run under ASan & UBSan.
- The rest can be run in CI, with specific toolchains & builds debugged locally.
Design Principles
This library should focus on maintaining yeetability.
From passing CI alone, a PR should be good to yeet to prod.
- All tests should run in full, in CI, in an easily reproducible environment. Manual testing is never valid evidence of correctness. Reviewers should always check coverage.
- Odd design decisions should be justified in code with JUSTIFY: comments
- Derive-c specific idioms should, where possible, be enforced in CI using the linting scripts.
TODO
- Improved testing coverage (e.g. delete in containers)
- Roaring bitset
- Improved set