It shows how to understand scalable algorithms in terms of local protocols governing each part of their hidden state. These protocols are visual artifacts that can be used to informally explain an algorithm at a whiteboard. But they also play a formal role in a new logic for verifying concurrent algorithms, enabling correctness proofs that are local in space, time, and thread execution. Correctness is stated in terms of refinement: clients of an algorithm can reason as if they were using the much simpler specification code it refines.
It shows how to express synchronization in a declarative but scalable way, based on a new library providing "join patterns". By declarative, we mean that the programmer needs only to write down the constraints of a synchronization problem, and the library will automatically derive a correct solution. By scalable, we mean that the derived solutions deliver robust performance with increasing processor count and problem complexity. For common synchronization problems, join patterns perform as well as specialized algorithms from the literature.
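To make the declarative/scalable split concrete, here is a minimal single-threaded sketch of the join-pattern idea (a hypothetical API of our own, not the thesis library): the programmer only declares which messages must be joined, and the library decides when a "chord" can fire.

```python
# A toy join-pattern engine: a chord fires its body once every channel
# it joins on has a pending message. Single-threaded for clarity; the
# real point of the thesis library is doing this scalably.
from collections import deque

class Channel:
    def __init__(self, junction):
        self.queue = deque()
        self.junction = junction

    def send(self, msg):
        self.queue.append(msg)
        self.junction._try_fire()

class Junction:
    def __init__(self):
        self.chords = []  # pairs of (channels to join, body to run)

    def channel(self):
        return Channel(self)

    def when(self, channels, body):
        # Declarative part: just state the synchronization constraint.
        self.chords.append((channels, body))

    def _try_fire(self):
        # Solver part: fire any chord whose channels all have messages.
        fired = True
        while fired:
            fired = False
            for channels, body in self.chords:
                if all(ch.queue for ch in channels):
                    body(*[ch.queue.popleft() for ch in channels])
                    fired = True

# Usage: an asynchronous buffer as a join of `put` and `get` messages,
# where a "get" message carries its continuation.
j = Junction()
put, get = j.channel(), j.channel()
results = []
j.when([put, get], lambda item, k: results.append(k(item)))

put.send(42)
get.send(lambda x: x + 1)
# results == [43]
```

The derived solution here is the naive scan in `_try_fire`; the thesis contribution is deriving solutions with the scalability of hand-written algorithms from the same style of declaration.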
It shows how to express scalable algorithms through "reagents", a new monadic abstraction. With reagents, algorithms no longer need to be constructed from "whole cloth," i.e. by using system-level primitives directly. Instead, they are built using a mixture of shared-state and message-passing combinators. Concurrency experts benefit, because they can write libraries at a higher level, with more reuse, without sacrificing scalability. Their clients benefit, because composition empowers them to extend and tailor a library without knowing the details of its underlying algorithms.
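The flavor of reagent composition can be sketched in a few lines (hypothetical names and a single-threaded toy, not the thesis's Scala API): each reagent maps an input to an output, and combinators build larger reagents from smaller ones.

```python
# A toy sketch of reagent-style combinators. Atomicity is elided in
# this single-threaded model; the point is the compositional interface.

class Ref:
    def __init__(self, value):
        self.value = value

class Reagent:
    def __init__(self, run):
        self.run = run

    def then(self, other):
        # Sequencing combinator: pipe this reagent's output into `other`.
        return Reagent(lambda x: other.run(self.run(x)))

def upd(ref, f):
    # Shared-state combinator: update `ref` via f(state, input) ->
    # (new_state, output).
    def run(x):
        new_state, out = f(ref.value, x)
        ref.value = new_state
        return out
    return Reagent(run)

# A stack's push and pop expressed as reagents over one Ref.
stack = Ref([])
push = upd(stack, lambda s, x: ([x] + s, None))
pop  = upd(stack, lambda s, _: (s[1:], s[0]))

# Client-side composition: transfer an element between stacks without
# knowing how either stack is implemented.
other = Ref([])
transfer = pop.then(upd(other, lambda s, x: ([x] + s, None)))

push.run(1); push.run(2)
transfer.run(None)
# stack.value == [1], other.value == [2]
```

The `transfer` reagent illustrates the claim about clients: it extends the stack library by composition alone, without touching its internals.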
This work introduces GPS, the first program logic to provide a full-fledged suite of modern verification techniques—including ghost state, rely-guarantee "protocols", and separation logic—for high-level, structured reasoning about weak memory. We demonstrate the effectiveness of GPS by applying it to challenging examples drawn from the Linux kernel as well as lock-free data structures. We also define the semantics of GPS and prove its soundness directly in terms of the axiomatic C11 weak memory model.
Freeze after writing: quasi-deterministic parallel programming with LVars
Lindsey Kuper, Aaron Turon, Neelakantan R. Krishnaswami, Ryan R. Newton
to appear in POPL 2014.
[ abstract | pdf | technical appendix ]
We extend LVars in two ways. First, we add the ability to "freeze" and then read the contents of an LVar directly. Second, we add the ability to attach event handlers to an LVar, triggering a callback when the LVar's value changes. Together, handlers and freezing enable an expressive and useful style of parallel programming. We prove that in a language where communication takes place through these extended LVars, programs are at worst quasi-deterministic: on every run, they either produce the same answer or raise an error. We demonstrate the viability of our approach by implementing a library for Haskell supporting a variety of LVar-based data structures, together with a case study that illustrates the programming model and yields promising parallel speedup.
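A sequential sketch of the extended interface described above (our own illustrative class, not the Haskell library): values grow monotonically in a lattice via least-upper-bound writes, handlers observe growth, and freezing permits an exact read at the cost of making any later growth an error.

```python
# Toy LVar: `put` takes the least upper bound, `add_handler` registers
# a callback on growth, and `freeze` fixes the value permanently.

class LVar:
    def __init__(self, bottom, lub):
        self.value = bottom
        self.lub = lub
        self.frozen = False
        self.handlers = []

    def put(self, v):
        new = self.lub(self.value, v)
        if new != self.value:
            if self.frozen:
                # The quasi-determinism error: growth after a freeze.
                raise RuntimeError("put after freeze")
            self.value = new
            for h in self.handlers:
                h(new)

    def add_handler(self, callback):
        self.handlers.append(callback)
        callback(self.value)  # also fire on the current value

    def freeze(self):
        self.frozen = True
        return self.value  # exact read, safe: the value can no longer grow

# Usage: a set-valued LVar that grows by union.
seen = []
s = LVar(frozenset(), lambda a, b: a | b)
s.add_handler(lambda v: seen.append(set(v)))
s.put(frozenset({1}))
s.put(frozenset({1, 2}))
assert s.freeze() == frozenset({1, 2})
```

In the real (parallel) setting, a racing `put` after the `freeze` would raise the error on some runs and not others, which is exactly the quasi-determinism guarantee: same answer or an error, never two different answers.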
This separation of concerns manifests itself nicely when we must implement software with sophisticated error handling. We illustrate with two examples that exploit the architecture to disentangle some core parsing task from its error management. The parser code is completely separate from the error-correction code, although the two components are deeply intertwined at run time.
In this paper, we propose a new semantic model, based on Kripke logical relations, that supports direct proofs of contextual refinement in the setting of a type-safe high-level language. The key idea behind our model is to provide a simple way of expressing the "local life stories" of individual pieces of an FCD's hidden state by means of protocols that the threads concurrently accessing that state must follow. By endowing these protocols with a simple yet powerful transition structure, as well as the ability to assert invariants on both heap states and specification code, we are able to support clean and intuitive refinement proofs for the most sophisticated types of FCDs, such as conditional compare-and-set (CCAS).
In this paper, inspired by recent work on "fictional disjointness" in separation logic, we propose a simple and flexible way of enabling any module in a program to create its own custom type of splittable resource (represented as a commutative monoid), thus providing fine-grained control over how the module's private state is shared with its clients. This functionality can be incorporated into an otherwise standard substructural type system by means of a new typing rule we call the sharing rule, whose soundness we prove semantically via a novel resource-oriented Kripke logical relation.
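A standard instance of such a commutative monoid is fractional permissions, which we can illustrate with a small example of our own (not code from the paper): shares of a resource combine by adding fractions, and only the full share grants exclusive (write) access.

```python
# Fractional permissions as a commutative monoid: `join` is the monoid
# operation (commutative, associative), and splitting hands out partial
# shares to clients.
from fractions import Fraction

class Share:
    def __init__(self, frac):
        frac = Fraction(frac)
        assert 0 < frac <= 1
        self.frac = frac

    def join(self, other):
        total = self.frac + other.frac
        assert total <= 1, "shares of one resource cannot exceed the whole"
        return Share(total)

    def split(self):
        # Give half of this share to another client.
        return Share(self.frac / 2), Share(self.frac / 2)

    def can_write(self):
        return self.frac == 1

full = Share(1)
a, b = full.split()
assert not a.can_write() and not b.can_write()  # partial shares: read-only
assert a.join(b).can_write()                    # reassembled: exclusive again
```

The sharing rule in the paper is what lets a module pick its *own* monoid of this kind for its private state, rather than being limited to a fixed menu such as fractions.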
The purpose of this paper is largely educational. Our positive experience with RE derivatives leads us to believe that they deserve the attention of the current generation of functional programmers, especially those implementing RE recognizers. We begin with a review of background material in Section 2, introducing the notation and definitions of regular expressions and their recognizers. Section 3 gives a fresh presentation of Brzozowski's work, including DFA construction with RE derivatives. In addition to reexamining Brzozowski's work, we also report on the key implementation challenges we faced in Section 4, including new techniques for handling large character sets such as Unicode (Unicode Consortium, 2003). Section 5 reports our experience in general and includes an empirical comparison of the derivative-based scanner generator for SML/NJ with the more traditional tool it replaces. We conclude with a review of related work and a summary.
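The core of Brzozowski's technique is compact enough to sketch here (our own illustration, not the SML/NJ scanner code): `nullable` tests whether an RE accepts the empty string, `deriv` takes the derivative of an RE with respect to one character, and matching is just iterated derivation.

```python
# Brzozowski derivatives for basic regular expressions.

class Empty: pass                      # the empty language
class Eps: pass                        # the empty string
class Chr:
    def __init__(self, c): self.c = c
class Seq:
    def __init__(self, r, s): self.r, self.s = r, s
class Alt:
    def __init__(self, r, s): self.r, self.s = r, s
class Star:
    def __init__(self, r): self.r = r

def nullable(r):
    if isinstance(r, (Eps, Star)): return True
    if isinstance(r, Seq): return nullable(r.r) and nullable(r.s)
    if isinstance(r, Alt): return nullable(r.r) or nullable(r.s)
    return False                       # Empty, Chr

def deriv(r, c):
    if isinstance(r, (Empty, Eps)): return Empty()
    if isinstance(r, Chr): return Eps() if r.c == c else Empty()
    if isinstance(r, Alt): return Alt(deriv(r.r, c), deriv(r.s, c))
    if isinstance(r, Star): return Seq(deriv(r.r, c), r)
    # Seq: derive the head; if it is nullable, the tail may start too.
    d = Seq(deriv(r.r, c), r.s)
    return Alt(d, deriv(r.s, c)) if nullable(r.r) else d

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)

# (ab)* matches "" and "abab" but not "aba".
re_ab = Star(Seq(Chr('a'), Chr('b')))
assert matches(re_ab, "") and matches(re_ab, "abab")
assert not matches(re_ab, "aba")
```

DFA construction (Section 3) falls out of the same functions: states are REs up to similarity, and the transition on character c from state r is deriv(r, c); the implementation challenges of Section 4 arise because "for each character" is infeasible over all of Unicode.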