The behaviour of an operation is often specified as pairs of invocations and immediate responses. While such sequential specifications are intuitively understandable, the assumption of immediate responses does not hold when multiple threads concurrently access shared memory. When large critical sections are protected by locks, it is usually easy to see that the behaviour of a concurrent program fulfills a sequential specification. On the other hand, locking large sections of code negates the advantages of concurrent execution. In performance-critical code, fine-grained concurrency is therefore typically the method of choice. In the presence of fine-grained concurrency, however, deciding whether a program conforms to a sequential specification is a hard problem. Linearizability is the property that a program exhibits the same behaviour when executed concurrently as when executed sequentially. It is therefore adequate to reason about a linearizable program in terms of invocations and immediate responses, even when it is executed concurrently. We develop techniques to verify linearizability for concurrent data structures.
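
To illustrate the contrast between coarse-grained locking and fine-grained concurrency, the following minimal sketch (the class and method names are illustrative, not taken from the text) shows a counter whose increment avoids a coarse lock by retrying a compare-and-set. Intuitively, each successful compare-and-set is the point at which the operation appears to take effect instantaneously, which is why such an object can still be understood through its sequential specification.

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a lock-free counter. Every concurrent execution of
// increment() and get() is equivalent to some sequential ordering of these
// operations, matching the sequential specification of a counter.
class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Fine-grained, lock-free increment: retry until the CAS succeeds.
    int increment() {
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next; // the successful CAS is where the increment takes effect
            }
        }
    }

    int get() {
        return value.get(); // takes effect at the atomic read
    }
}

A coarse-grained variant would simply declare both methods synchronized; it is obviously correct with respect to the sequential specification, but serialises all accesses. Verifying that the fine-grained version above behaves the same is exactly the kind of question the developed techniques address.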