Neil D. Jones
Slides for ESSLLI Evening Lecture (as of 12 August, 2010)
Recent and ongoing research work
Remark: There may exist dead links to publications in the Topps
Bibliography (the systems people keep moving things around).
Last resort: many of the papers can be straightforwardly found via
(LINKS WORKED ON AGAIN: 17 February 2012; more papers will soon be added).
Programming in Biomolecular Computation
(with Lars Hartmann and Jakob Grue Simonsen).
In spite of widespread discussion about connections between biology and computation, one question is notable for the absence of an answer: where are the programs?
A new "blob" model of computation is presented that is programmable,
biologically plausible, stored-program,
and universal. It is Turing complete: a universal algorithm (i.e., a self-interpreter)
exists that can execute any program,
and it is not asymptotically inefficient.
A revised and extended journal version has appeared. A further paper examines this area
in greater depth and with more extensive connections to computational biology. A third paper
(accepted, to appear in journal form in 2012) examines the blob model of computation in
relation to other models that have appeared since Alan Turing's 1936 paper.
Size-Change Termination and Transition Invariants
(with Matthias Heizmann and Andreas Podelski).
There have been rapid advances in methods for automatically proving program
termination in recent years, both in theoretical research and in applications as
practical as finding termination bugs in device drivers. A recent wave of activity
began with the work on size-change termination
(Jones, Ben-Amram, Lee et al.), and led to work on
transition invariants (Podelski, Rybalchenko et al.). The two lines of
work are set in different frameworks, with the consequence that their
analysis and verification methods are apparently incomparable.
The goal of this paper is to formally
compare and contrast the two approaches' 1. soundness proofs,
2. abstract domains, and 3. base algorithms.
Termination Analysis of Higher-Order Functional Programs (APLAS 2005).
Techniques for size-change termination of the pure untyped
lambda-calculus (RTA 2004) are unified and extended significantly,
to yield a termination analyser for higher-order, call-by-value
programs as in ML's purely functional core or similar functional
languages. The analyser has been proven correct, and implemented for
a substantial subset of OCaml.
Termination analysis of the untyped lambda-calculus
(2004 conference paper:
2008 journal version (Logical Methods in Computer Science):
The size-change principle is extended
from first-order programs to the untyped lambda-calculus.
The Size-Change Principle for Program Termination
(POPL 2001 conference paper).
The "size-change termination" principle for a first-order
functional language with well-founded data is: a program terminates on
all inputs if every infinite call sequence (following program control
flow) would cause an infinite descent in some data values.
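As an illustration (my example, not one from the paper), the Ackermann function satisfies the size-change condition: every recursive call either strictly decreases m, or keeps m unchanged and strictly decreases n.

```python
def ack(m, n):
    # Size-change graphs for the three calls below:
    #   ack(m - 1, 1):    m strictly decreases
    #   ack(m, n - 1):    m unchanged, n strictly decreases
    #   ack(m - 1, ...):  m strictly decreases
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(2, 3))  # 9
```

Every infinite call sequence would thus entail an infinite descent in m, or eventually in n; since the natural numbers are well-founded, no infinite call sequence is possible.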
- The Spectrum problem.
- Program transformation by distillation.
Two conference papers [4, 3] (PSI 2011, PEPM 2012) concern transformation by distillation. The
first reformulates distillation as bisimulation between labeled transition systems. The second
gives distillation an improved semantic basis, and explains how superlinear speedups can occur.
(Work connected with a 2010 visit to Dublin City University.)
- Program obfuscation by partial evaluation.
A PEPM 2012 paper introduces a novel application of partial evaluation. It is shown
that specialisation of a "distorted" self-interpreter can be used to obfuscate programs, i.e., to
transform them into difficult-to-analyze versions that are computationally equivalent.
(Work connected with a 2010 visit to the University of
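A minimal sketch of the underlying idea (my own toy construction, not the paper's): a "distorted" interpreter keeps every value in its store encoded, here by XOR with a hypothetical key. Specialising such an interpreter to a fixed program would yield a residual program that computes on encoded values throughout: computationally equivalent to the source, but harder to analyse.

```python
KEY = 0b101101                    # hypothetical key for the distortion

def enc(v): return v ^ KEY        # the "distortion": values are XOR-encoded
def dec(v): return v ^ KEY

def distorted_run(prog, inputs):
    """Distorted interpreter for straight-line code: between instructions,
    the store never holds a plaintext value."""
    store = {x: enc(v) for x, v in inputs.items()}
    for op, dst, a, b in prog:
        if op == 'add':
            store[dst] = enc(dec(store[a]) + dec(store[b]))
    return {x: dec(v) for x, v in store.items()}

prog = [('add', 'z', 'x', 'y'),   # z := x + y
        ('add', 'z', 'z', 'x')]   # z := z + x
print(distorted_run(prog, {'x': 2, 'y': 3}))  # same answers as direct execution
```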
Fifty Years of the Spectrum Problem: Survey and New Results
by A. Durand, N.D. Jones, J.A. Makowsky and M. More.
Preprint, accepted by the Bulletin of Symbolic Logic, to appear in 2012.
In 1952, Heinrich Scholz published a question in the Journal
of Symbolic Logic asking for a characterization of spectra, i.e., the sets
of natural numbers that are the cardinalities of finite models of first-order
sentences. Günter Asser
asked whether the complement of a spectrum is always a spectrum. These
questions turned out to be seminal for the development of finite
model theory and
descriptive complexity. Scholz's
question was first answered in a 1972 paper by Jones and
Selman. Asser's question remains open, as it is closely related to the
question P = NP?
In this paper we survey developments over the
years pertaining to the spectrum problem.
Interpretive Overhead and Optimal Specialisation.
Or: Life without the Pending List (Workshop Version).
A self-interpreter and a program specialiser with the following
properties are developed for a simple imperative language:
1) the self-interpreter runs with program-independent interpretive
overhead;
2) the specialiser achieves optimal specialisation, that is,
it eliminates all interpretation overhead;
3) the specialiser has been run on a variety of small and large
programs, including specialising the
self-interpreter to itself.
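The aim of points 1) and 2) can be illustrated with a small sketch (my own toy construction, not the paper's system): an interpreter for a tiny register language, together with the residual program that an optimal specialiser would produce for one fixed source program.

```python
def interp(prog, env):
    """Self-contained interpreter for a toy register language."""
    pc = 0
    while pc < len(prog):             # the fetch-decode-dispatch loop is
        instr = prog[pc]              # exactly the interpretive overhead
        if instr[0] == 'inc':
            env[instr[1]] += 1
        elif instr[0] == 'add':
            env[instr[1]] += env[instr[2]]
        pc += 1
    return env

double_prog = [('add', 'x', 'x'),     # x := x + x
               ('inc', 'x')]          # x := x + 1

def double_residual(env):
    # What an optimal specialiser would produce from interp + double_prog:
    # the dispatch on instruction tags is gone; only the actions remain.
    env['x'] += env['x']
    env['x'] += 1
    return env

print(interp(double_prog, {'x': 5}), double_residual({'x': 5}))
```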
Linear, Polynomial or Exponential? Complexity Inference in Polynomial Time.
A very natural question about a program is the growth rate of its
time complexity: e.g., is it polynomially bounded? This work is the
first to completely solve the problem for a well-defined core
language including non-deterministic branching and bounded loops, as
well as restricted arithmetic over integers.
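The distinction can be seen in a toy example (mine, not the paper's core language): the same bounded-loop shape gives polynomial or exponential growth depending on how a variable feeds back into itself.

```python
def poly_growth(n):
    x = 0
    for _ in range(n):        # loop bounded by the input n
        x = x + n             # additive feedback: x <= n*n, polynomial
    return x

def exp_growth(n):
    x = 1
    for _ in range(n):        # same bounded-loop shape
        x = x + x             # self-doubling feedback: x = 2**n, exponential
    return x

print(poly_growth(10), exp_growth(10))  # 100 1024
```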
The Flow of Data and the Complexity of Algorithms
A proof system is devised that makes it possible to deduce polynomial
runtime bounds for imperative programs. Further, the system is proven
sound and to have a certain completeness property.
A Flow Calculus of mwp-Bounds for Complexity Analysis
(Transactions on Computational Logic 2008).
Program complexity analysis seems naturally to decompose into two
parts: a termination analysis and a data size analysis.
This paper's aim, given a program,
is to find out whether its variables have acceptable
(polynomially bounded) growth rates.
CTL and program transformation:
Proving Correctness of Compiler Optimizations by Temporal Logic
Lacey and De Moor showed how classical compiler optimization techniques
can be elegantly expressed via rewrite rules conditioned on temporal
conditions. This journal paper sets up a framework for proving correctness of
such rules, and applies it to three of them (dead code elimination,
constant folding, code motion).
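A rough sketch (my own, not the paper's formalism) of one such rule: dead code elimination rewrites an assignment to skip when the temporal side condition "on all paths from here, the assigned variable is not read before being redefined" holds. For straight-line code the condition reduces to a forward scan:

```python
def dead_code_eliminate(prog):
    """prog: straight-line code as ('assign', var, vars_read) or
    ('ret', None, vars_read) tuples; dead assignments are removed."""
    out = []
    for i, (op, var, _reads) in enumerate(prog):
        if op != 'assign':
            out.append(prog[i])           # only assignments can be dead here
            continue
        live = False
        for op2, var2, reads2 in prog[i + 1:]:
            if var in reads2:             # read before redefinition: live
                live = True
                break
            if op2 == 'assign' and var2 == var:
                break                     # redefined first: assignment is dead
        if live:
            out.append(prog[i])           # temporal condition fails: keep it
        # else: rewrite "var := e" to skip, i.e. drop it
    return out

prog = [('assign', 'x', []),              # dead: x redefined before any read
        ('assign', 'x', []),
        ('assign', 'y', ['x']),
        ('ret', None, ['y'])]
print(dead_code_eliminate(prog))          # the first assignment is eliminated
```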
List of downloadable papers
Odds and Ends:
- Home address: Bukkeballevej 88
DK-2960 Rungsted Kyst
(retired in 2007 from: DIKU, University of Copenhagen,
DK-2100 Copenhagen East, Denmark).
- Phone (home): +45 4586 8236.
Neil D. Jones
( firstname.lastname@example.org )
Last revised: 3 September, 2011.