NJPLS at Lehigh University

Friday, Oct. 2nd, 2009

Directions to Lehigh


Agenda

Arrival and Breakfast: 9:30 - 10:00

Session I: 10:00 - 11:15

Break: 11:15 - 11:30

Session II: 11:30 - 12:15

Lunch: 12:15 - 1:20

Business meeting: 1:20 - 1:30

Session III: 1:30 - 2:45

Break: 2:45 - 3:00

Session IV: 3:00 - 4:30


Abstracts


Syntactic Proofs of Compositional Compiler Correctness
Adam Chlipala (Harvard)

Formal verification of compilers for higher-order languages holds the promise of lowering the costs of building and evolving correct implementations. Simple syntactic techniques exist for verifying such compilers, based on compiler-specific relations connecting source-level and target-level values. However, with these proof techniques, the final theorems say nothing about linking code from one compiler with code from a different compiler or with manually-written code.

Recent work by Nick Benton and associates has shown how to use step-indexed logical relations to fix this problem for pure, typed languages. The relations and proof techniques involved are significantly more complex than the old syntactic standards, and extending them to new language features often requires real ingenuity.

In this talk, I will present a new, more "syntactic" approach to building compositional compiler correctness theorems. The technique is inspired by the world of foundational proof-carrying code, where early work was split between the "semantic" approach of Appel et al., based on step-indexed logical relations, and the "syntactic" approach of Shao et al., based on the design of novel proof systems. By thinking of compiler correctness in terms of proof systems instead of just black-box results, we can obtain composable results while adding only minimal complexity to the classic proof techniques. Such an approach scales easily to handle untyped and impure languages.

I have prototyped these ideas with a Coq implementation specialized to CPS translation for an untyped Mini-ML language with recursive functions, sums, products, mutable references, and exceptions.
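For readers who have not seen it, the transformation at the heart of the prototype can be illustrated in a few lines. The following OCaml sketch gives a standard call-by-value CPS translation for the pure untyped lambda calculus; it is illustrative only, far smaller than the talk's Mini-ML and independent of its Coq development.

    (* Call-by-value CPS translation for the pure untyped lambda
       calculus.  A toy: the talk's development also handles recursive
       functions, sums, products, references, and exceptions. *)

    type term =
      | Var of string
      | Lam of string * term
      | App of term * term

    (* Fresh names for the variables the translation introduces. *)
    let fresh =
      let n = ref 0 in
      fun base -> incr n; Printf.sprintf "%s%d" base !n

    (* [cps e k] translates [e] and hands its value to continuation [k]. *)
    let rec cps (e : term) (k : term) : term =
      match e with
      | Var x -> App (k, Var x)
      | Lam (x, body) ->
          (* Translated functions take their argument, then their
             caller's continuation. *)
          let kv = fresh "k" in
          App (k, Lam (x, Lam (kv, cps body (Var kv))))
      | App (e1, e2) ->
          (* Evaluate the function, then the argument, then call. *)
          let f = fresh "f" and v = fresh "v" in
          cps e1 (Lam (f, cps e2 (Lam (v, App (App (Var f, Var v), k)))))

    (* Example: cps (App (Lam ("x", Var "x"), Var "y")) (Var "halt") *)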


Task Types for Pervasive Atomicity
Yu David Liu (SUNY Binghamton) and Scott Smith (The Johns Hopkins University)

Several languages now provide the syntax of atomic blocks to bracket code in which atomicity must be preserved. Outside of these blocks, however, there are no atomicity guarantees. In this talk we describe a language that takes the opposite approach: with no programmer declaration, atomicity holds pervasively in the code and is broken only when an explicit need for sharing is declared. Furthermore, our principled way of sharing prevents a complete loss of atomicity; it only means that a large region may need to be broken into smaller atomicity-enforcing regions.

At the core of our work is a novel polymorphic type system, Task Types, which helps enforce this flavor of pervasive atomicity statically and reduces the need for costly run-time support mechanisms such as locks or transactions. Task Types statically map objects onto tasks (threads). New program syntax is minimal; programmers need only declare the sharing points, as subtasks. We show the reasonableness of our type system by proving that type soundness, isolation invariance, and atomicity enforcement properties hold at run time.
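As a point of reference for the atomic-block style that this work inverts, here is a minimal OCaml sketch in which atomicity is crudely approximated by a single global lock. All names here are hypothetical; Task Types is a Java-like design, and its point is that the unmarked default is the opposite of this one.

    (* Conventional style: the programmer brackets the few regions that
       must be atomic; code outside the brackets has no guarantee.
       Atomicity is approximated by one global lock; real systems use
       transactions or finer-grained locking. *)

    let global_lock = Mutex.create ()

    (* [atomic body] runs [body] as a single indivisible region. *)
    let atomic (body : unit -> 'a) : 'a =
      Mutex.lock global_lock;
      Fun.protect ~finally:(fun () -> Mutex.unlock global_lock) body

    let counter = ref 0

    let increment () =
      atomic (fun () ->
        let v = !counter in    (* no interleaving between this read *)
        counter := v + 1)      (* and this write                    *)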


Jeannie and Blink: Tools for Multi-Lingual Programming and Debugging
Martin Hirzel (IBM Research)

In an ideal world, we would write bug-free programs in our favorite language. In the real world, we write buggy programs, and we reuse code from different languages. Debuggers help us with the first problem, and foreign function interfaces help us with the second. But when the two problems occur together, they still confound us. This talk presents two solutions. The first, Jeannie, is a language that nests Java and C in each other. Jeannie makes it easy to write multi-lingual code, and its compiler statically prevents many common bugs. The second, Blink, is a debugger for Java and C. Blink makes it easy to interactively control and inspect the execution of a multi-lingual program. The challenge in building Jeannie and Blink was that Jeannie subsumes the functionality of two full compilers, and Blink that of two full debuggers. This talk shows how to meet that challenge by composing existing single-language tools. The composition yields better multi-lingual tools, which in turn bring us closer to the "ideal world" of bug-free programs in our favorite languages.

Joint work with Robert Grimm, Byeong Lee, and Kathryn McKinley.


Region logic and dynamic encapsulation boundaries
David Naumann (Stevens Institute of Technology)

Kassios introduced dynamic frames for local reasoning about object-based programs, using state-dependent footprints, i.e., expressions that denote the sets of locations writable by commands or readable by formulas. Region logic formalizes this approach in a way suited to pre/post/modifies specifications and automated first-order theorem provers. I will introduce the ideas as applied to a challenge problem in verification, the Composite design pattern. One of the challenges is how to hide representation invariants for object-based programs where lexical scoping is inadequate for encapsulation. This challenge is addressed by a second-order frame rule, which formalizes a notion of dynamic encapsulation boundary, i.e., the footprint of an invariant, expressed in client-visible terms.

Joint work with Anindya Banerjee and Stan Rosenberg.
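To make the footprint idea concrete, consider a schematic one-command example (the notation below only approximates region logic's effect syntax and is not taken from the talk):

    { x <> null }   x.f := y   { x.f = y }   [ wr {x}`f ]

The bracketed effect says the command writes nothing outside the f-field of the single object x. A client may then conclude that any formula whose read footprint is disjoint from {x}`f is preserved, which is exactly the kind of reasoning the frame rules capture.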


A Fresh Look at Separation Algebras and Share Accounting
Rob Dockins (Princeton University)

Separation Algebras serve as models of Separation Logics; Share Accounting allows reasoning about concurrent-read/exclusive-write resources in Separation Logic. In designing a Concurrent Separation Logic and in mechanizing proofs of its soundness, we found previous axiomatizations of separation algebras and previous systems of share accounting to be useful but imperfect. We adjust the axioms of separation algebras; we demonstrate an operator calculus for constructing new separation algebras; we present a more powerful system of share accounting with a new, simple model; and we provide a reusable Coq development.

Joint work with Aquinas Hobor and Andrew W. Appel.
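For context, the classical axiomatization this work starts from, due to Calcagno, O'Hearn, and Yang, defines a separation algebra as a cancellative partial commutative monoid. Below is a small OCaml sketch of that presentation, with the laws stated as comments and the canonical disjoint-heap instance; the talk's contribution is precisely to adjust these axioms and mechanize the result in Coq.

    (* A separation algebra: a partial, commutative, associative,
       cancellative join with a unit. *)
    module type SEPARATION_ALGEBRA = sig
      type t
      val unit : t
      (* [join a b] is [None] when the two states are incompatible. *)
      val join : t -> t -> t option
      (* Laws, stated informally:
           commutativity: join a b = join b a
           associativity: the two ways to join three states agree
                          whenever both are defined
           unit:          join unit a = Some a
           cancellation:  join a b = join a c, both defined,
                          implies b = c *)
    end

    (* The motivating instance: heaps, which join only when their
       domains are disjoint. *)
    module IntMap = Map.Make (Int)

    module Heap : SEPARATION_ALGEBRA with type t = int IntMap.t = struct
      type t = int IntMap.t
      let unit = IntMap.empty
      let join h1 h2 =
        if IntMap.exists (fun addr _ -> IntMap.mem addr h2) h1
        then None                          (* overlapping domains *)
        else Some (IntMap.union (fun _ v _ -> Some v) h1 h2)
    end

Share accounting refines the heap instance: instead of owning an address outright, a state holds a share of it, so several concurrent readers can each hold a partial share while a writer must assemble the full one.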


Lifted inference: normalizing loops by evaluation
Chung-chieh Shan (Rutgers) and Oleg Kiselyov (FNMOC)

Many loops in probabilistic inference map almost every individual in their domain to the same result. Running such loops symbolically takes time sublinear in the domain size. Using normalization by evaluation with first-class delimited continuations, we lift inference procedures to reap this speed-up without interpretive overhead. To express nested loops, we use multiple control delimiters for metacircular interpretation. To express loops over a powerset domain, we convert nested loops over a subset to unnested loops.
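The speed-up rests on elementary arithmetic: if a loop body yields the same value for almost every individual, a sum over the whole domain needs only the exceptions. The OCaml sketch below spells that out with a hypothetical representation; the point of the talk is that normalization by evaluation derives such symbolic loops automatically rather than by hand.

    (* A function that is [default] everywhere except at a few listed
       individuals.  Exception keys are assumed distinct and in range. *)
    type 'a almost_const = {
      default    : 'a;
      exceptions : (int * 'a) list;
    }

    (* Sum over a domain of size [n] in time proportional to the number
       of exceptions, not to [n]. *)
    let sum_over (n : int) (f : float almost_const) : float =
      let special =
        List.fold_left (fun acc (_, v) -> acc +. v) 0.0 f.exceptions
      in
      float_of_int (n - List.length f.exceptions) *. f.default +. special

    (* Ten million individuals, two exceptions, constant work. *)
    let _ =
      sum_over 10_000_000
        { default = 0.5; exceptions = [ (3, 0.9); (42, 0.1) ] }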


A Context-free Markup Language for Semi-structured Text
Qian Xi and David Walker (Princeton University)

An ad hoc data format is any non-standard, semi-structured data format for which robust data processing tools are not available. In this paper, we present ANNE, a new kind of markup language designed to help users generate documentation and data processing tools for ad hoc text data. More specifically, given a new ad hoc data source, an ANNE programmer edits the document to add a number of simple annotations, which serve to specify its syntactic structure. Annotations include elements that specify constants, optional data, alternatives, enumerations, sequences, tabular data, and recursive patterns. The ANNE system uses a combination of the user annotations and the raw data itself to extract a context-free grammar from the document. This context-free grammar can then be used to parse the data and transform it into an XML parse tree, which may be viewed through a browser for analysis or debugging purposes. In addition, the ANNE system generates a PADS/ML description, which may be saved as lasting documentation of the data format or compiled into a host of useful data processing tools ranging from parsers, printers, and traversal libraries to format translators and query engines. Overall, ANNE simplifies the process of generating descriptions for data formats and improves the productivity of programmers who work with ad hoc data regularly.
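To sketch the pipeline on invented data (this is not ANNE's actual annotation syntax), a log whose lines look like

    Oct 2 09:30 node4 sshd[211]: accepted
    Oct 2 09:31 node7 cron[89]: started

might, after annotation, yield a grammar along the lines of

    entry ::= date time host process '[' pid ']' ':' message
    log   ::= entry*

which the system can then use to parse the file into an XML tree and to seed a PADS/ML description.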

In addition to designing and implementing ANNE, we have devised a semantic theory for the core elements of the language. This semantic theory describes the editing process, which translates a raw, unannotated text document into an annotated document, and the grammar extraction process, which generates a context-free grammar from an annotated document. We also present an alternative characterization of system behavior by drawing upon ideas from the field of relevance logic. This secondary characterization, which we call relevance analysis, specifies a direct relationship between unannotated documents and the context-free grammars that our system can generate from them. Relevance analysis allows us to prove a number of important theorems concerning the expressiveness and utility of our system.


Gang Tan put together the schedule.