Persistent store with Git semantics: lazy reads, delayed writes, content-addressing

Explain irmin-pack concurrent slowness in benchmark report

The comparison with Lavyek is not apples-to-apples: the irmini/Lavyek
benchmark does raw backend reads/writes, while irmin-pack goes through
the full Irmin stack (tree, inodes, pack file, index). irmin-pack also
serializes all writes behind a single mutex on the append-only pack.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

+9 -1
bench/README.md
···
138 138   backend under contention (100 fibers / 12 domains). Lavyek is lock-free;
139 139   the disk backend serializes writes behind `Eio.Mutex`. irmin-pack
140 140   achieves ~1.5–1.7 k ops/s (per-branch writes), **~6× faster** than
141     - irmini's disk but **~270 000× slower** than Lavyek.
    141 + irmini's disk but **~270 000× slower** than Lavyek. Note: this
    142 + comparison is not apples-to-apples — the irmini/Lavyek scenario measures
    143 + raw backend operations (read/write a blob), while the irmin-pack scenario
    144 + goes through the full Irmin stack (tree construction, inode hashing,
    145 + serialization, writing to the append-only pack file, index update).
    146 + Furthermore, irmin-pack serializes all writes behind a single writer
    147 + (the pack file is protected by a mutex), which nullifies the parallelism
    148 + of the 12 domains. Lavyek, by contrast, is lock-free (Atomic.t + KCAS)
    149 + and its operations are much lighter (no tree/commit/inode layer).
142 150   - **Reads (irmini)**: Memory is fastest (9.6 k ops/s), Lavyek close behind
143 151   (8.3 k), disk significantly slower (4 k). By comparison, Irmin-Eio
144 152   reads are ~160× faster at 1.5 M ops/s.
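The mutex-vs-lock-free contrast the diff describes can be sketched in OCaml 5. This is a hypothetical, self-contained toy (not irmin-pack's or Lavyek's actual code): `mutex_writes` funnels every domain through one `Mutex.t`, the way the pack file's single writer serializes appends, while `atomic_writes` uses a lock-free `Atomic.t` so no domain ever blocks.

```ocaml
(* Sketch: one serialized writer vs. a lock-free counter. The counter
   increment stands in for "append an entry"; only the synchronization
   strategy differs between the two functions. *)

let mutex_writes n_domains n_ops =
  let m = Mutex.create () in
  let count = ref 0 in
  let worker () =
    for _ = 1 to n_ops do
      Mutex.lock m;        (* every domain queues here, like the pack file *)
      incr count;
      Mutex.unlock m
    done
  in
  let ds = List.init n_domains (fun _ -> Domain.spawn worker) in
  List.iter Domain.join ds;
  !count

let atomic_writes n_domains n_ops =
  let count = Atomic.make 0 in
  let worker () =
    for _ = 1 to n_ops do
      Atomic.incr count    (* lock-free: domains progress independently *)
    done
  in
  let ds = List.init n_domains (fun _ -> Domain.spawn worker) in
  List.iter Domain.join ds;
  Atomic.get count

let () =
  Printf.printf "mutex: %d, atomic: %d\n"
    (mutex_writes 4 1000) (atomic_writes 4 1000)
```

Both variants compute the same total (4 × 1000 per run); the difference under contention is that the mutex version degenerates to single-writer throughput regardless of domain count, which is the effect the benchmark attributes to irmin-pack.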