*Abstract:*

This thesis concerns the problem of complexity in operational semantics definitions. The appeal of modern operational semantics is the simplicity of their metatheories, which can be regarded as theories of deduction about certain shapes of operational judgments. However, when applied to real programming languages, they produce bulky definitions that are cumbersome to reason about. The theory of interacting deductions is a richer metatheory which simplifies operational judgments and admits new proof techniques.

An interacting deduction is a pair (*F*, *I*), where
*F* is a forest of inference trees and *I* is a set of
interaction links (a symmetric set of pairs of formula occurrences
of *F*), which has been built from interacting inference rules
(sequences of standard inference rules, or *rule atoms*).
This setting allows one to decompose operational judgments. For
instance, for a simple imperative language, one rule atom might
concern a program transition, and another a store transition.
Program judgments only interact with store judgments when
necessary: so stores do not have to be propagated by every
inference rule. A deduction in such a semantics would have two
inference trees: one for programs and one for stores.
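To make the structure concrete, here is a minimal sketch in Python of the definition above (all names and representations are hypothetical illustrations, not taken from the thesis): a deduction is a forest of inference trees together with a symmetric set of interaction links between formula occurrences, here one tree for a program transition and one for a store transition.

```python
# Hypothetical sketch of an interacting deduction (F, I):
# F is a forest of inference trees; I is a symmetric set of
# interaction links between formula occurrences of F.

from dataclasses import dataclass, field

@dataclass
class Node:
    """A formula occurrence, with the premises of the rule deriving it."""
    formula: str
    premises: list = field(default_factory=list)

@dataclass
class Deduction:
    forest: list              # root Nodes: one inference tree per judgment family
    links: set                # unordered pairs {a, b} of formula occurrences

    def add_link(self, a, b):
        # Interaction links are symmetric, so store them as unordered pairs
        # (keyed by object identity, since Nodes are mutable).
        self.links.add(frozenset((id(a), id(b))))

# One tree for the program judgment, one for the store judgment; the
# program step interacts with the store step only where necessary.
prog = Node("x := 1, c  ->  c'")
store = Node("s  -[x := 1]->  s'")
d = Deduction(forest=[prog, store], links=set())
d.add_link(prog, store)
```

The point of the sketch is only that the store tree is separate from the program tree: no inference rule for programs has to thread a store through its premises, and the two trees touch only at the single interaction link.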

This introduces a natural notion of modularity in proofs about
semantics. The *proof fragmentation theorem* shows that one
need only consider the rule atoms relevant to the property being
proved. To illustrate, I give the semantics for a simple process
calculus, compare it with standard semantics and prove three simple
properties: nondivergence, store correctness and an equivalence
between the two semantics.

Typically, evaluation semantics provide simpler definitions and
proofs than transition semantics do. However, it turns out that
evaluation semantics cannot be easily expressed using interacting
deductions: they require a notion of sequentiality. The
*sequential* deductions contain this extra structure. I
compare the utility of evaluation and transition semantics in the
interacting case by proving a simple translation correctness
example. This proof in turn depends on proof-theoretic concerns
which can be abstracted using *dangling interactions*. This
gives rise to the techniques of breaking and assembling interaction
links. Again I obtain the proof fragmentation theorem, and also the
*proof assembly* theorem, which allow respectively the
isolation and the composition of modules in proofs about semantics. For
illustration, I prove a simple type-checking result (in evaluation
semantics) and another nondivergence result (in transition
semantics).

I apply these results to a bigger language, CSP, to show how the
results scale up. Introducing a special *scoping*
side-condition permits a number of linguistic extensions including
nested parallelism, mutual exclusion, dynamic process creation and
recursive procedures. Then, as an experiment I apply the theory of
interacting deductions to present and prove sound a compositional
proof system for the partial correctness of CSP programs.

Finally, I show that a deduction corresponds to a CCS-like process evaluation, which philosophically justifies my use of the theory to give operational semantics. A simple corollary is the undecidability of interacting deducibility. Practically, the result also indicates how one can build prototype interpreters for definitions.
