The behaviour of an operation is often specified as pairs of invocations and immediate responses. While such sequential specifications are intuitively understandable, the assumption of immediate responses does not hold when multiple threads concurrently access shared memory. When large critical sections are protected by locks, it is usually easy to see that the behaviour of a concurrent program satisfies a sequential specification. On the other hand, locking large sections of code negates the advantages of concurrent execution, so in performance-critical areas fine-grained concurrency is typically the method of choice. For fine-grained concurrency, deciding whether a program adheres to a sequential specification is a hard problem. Linearizability is the property that every concurrent execution of a program behaves like some sequential execution. Hence it is adequate to reason about a linearizable program in terms of invocations and immediate responses even when it is executed concurrently. We develop techniques to verify linearizability for concurrent data structures.
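A minimal sketch of what such a fine-grained, linearizable data structure can look like, assuming a Treiber-style lock-free stack in C++ (the class name and code are illustrative and not taken from the papers below): each operation avoids a coarse-grained lock and instead retries a single compare-and-swap, whose successful execution is the point at which the operation appears to take effect atomically.

```cpp
#include <atomic>
#include <utility>

// Illustrative lock-free stack: the successful compare_exchange in push/pop
// is the linearization point, so every concurrent history is equivalent to
// some sequential order of pushes and pops.
template <typename T>
class TreiberStack {
    struct Node {
        T value;
        Node* next;
    };
    std::atomic<Node*> head{nullptr};

public:
    void push(T value) {
        Node* node = new Node{std::move(value), head.load()};
        // Retry until the new node is linked in atomically; the successful
        // CAS is the instant at which the push "takes effect".
        while (!head.compare_exchange_weak(node->next, node)) {
        }
    }

    bool pop(T& out) {
        Node* node = head.load();
        // Retry until the current head is unlinked atomically
        // (or the stack is observed to be empty).
        while (node && !head.compare_exchange_weak(node, node->next)) {
        }
        if (!node) return false;  // empty stack
        out = std::move(node->value);
        // Caveat: freeing here (and reading node->next above) is only safe
        // with a memory reclamation scheme such as hazard pointers; verifying
        // such reclamation is one of the problems addressed in the work below.
        delete node;
        return true;
    }
};
```

Verifying that the CAS indeed serves as a correct linearization point, and that memory reclamation does not invalidate it, is exactly the kind of reasoning the following work supports.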
(CAV 2023) nekton: A Linearizability Proof Checker. Roland Meyer, Anton Opaterny, Thomas Wies, Sebastian Wolff.
(PLDI 2023) Embedding Hindsight Reasoning in Separation Logic. Roland Meyer, Thomas Wies, Sebastian Wolff.
(TACAS 2023) Make Flows Small Again: Revisiting the Flow Framework. Roland Meyer, Thomas Wies, Sebastian Wolff.
(OOPSLA2 2022) A Concurrent Program Logic with a Future and History. Roland Meyer, Thomas Wies, Sebastian Wolff.
(POPL 2019) Pointer Life Cycle Types for Lock-Free Data Structures with Memory Reclamation. Roland Meyer, Sebastian Wolff.