Keywords: time complexity, hierarchy theorem, universal program
Summary: Using a simple programming language as a computational model, Neil Jones has shown that a constant-factor time hierarchy exists: thus a constant-factor difference in time makes a difference in problem-solving power, unlike the situation with Turing machines. In this note we review this result and fill in the details of the proof, thus obtaining a precise version of the theorem with explicit constant factors. Specifically, multiplying the running time by 232 provably allows more decision problems to be solved.
Keywords: speedup, Unlimp
Summary: The general topic of this thesis is optimality of running times of programs. In the first, and longest, part we focus directly on these issues. We provide a general discussion of the kind of computational problems for which some program solving the problem can be said to have an optimal running time among all programs solving it. Classical results by Leonid A. Levin and Manuel Blum demonstrate that for some computational problems such a program exists, and for others it does not. We present the two results and prove some new variations of them. We also consider the possibility of deciding whether a problem is solved by an optimal program and of obtaining optimal programs by use of automatic optimizers. The second part is a spin-off of our work with Levin's Optimal Search Theorem. In an earlier report we performed some experiments with an implementation of the algorithm associated with this theorem. A serious problem was the amount of space used by a particular part of the algorithm which was supposed to enumerate programs in some language efficiently. This problem made us look into ways of improving the space usage of such algorithms. One particularly interesting construction was Stefan Kahrs's so-called ``Unlimp'' interpreters. An ``Unlimp'' interpreter manages heap usage in a clever way that minimizes the number of cells allocated by the constructors of the interpreted language. Moreover, it allows very easy implementation of function memoization. We found that this idea was so interesting in itself that we dedicated the second part of this thesis to describing the idea more carefully and investigating an implementation of the scheme by a number of experiments.
Keywords: Logic Programming, partial deduction, termination, unfold/fold transformation
Summary: Partial deduction in the Lloyd-Shepherdson framework cannot achieve certain optimisations which are possible by unfold/fold transformations. We introduce conjunctive partial deduction, an extension of partial deduction accommodating such optimisations, e.g., tupling and deforestation. We first present a framework for conjunctive partial deduction, extending the Lloyd-Shepherdson framework by considering conjunctions of atoms (instead of individual atoms) for specialisation and renaming. Correctness results are given for the framework with respect to computed answer semantics, least Herbrand model semantics, and finite failure semantics. Maintaining the well-known distinction between local and global control, we describe a basic algorithm for conjunctive partial deduction, and refine it into a concrete algorithm for which we prove termination. The problem of finding suitable renamings which remove redundant arguments turns out to be important, so we give an independent technique for this. A fully automatic implementation has been undertaken, which always terminates. Differences between the abstract semantics and Prolog's left-to-right execution motivate deviations from the abstract technique in the actual implementation, which we discuss. The implementation has been tested on an extensive set of benchmarks which demonstrate that conjunctive partial deduction indeed pays off, surpassing conventional partial deduction on a range of small to medium-size programs, while remaining manageable in an automatic and terminating system.
Summary: AnnoDomini is a commercially available source-to-source conversion tool for finding and fixing Year 2000 problems in COBOL programs. AnnoDomini uses type-based specification, analysis, and transformation to achieve its main design goals: flexibility, completeness, correctness, and a high degree of safe automation.
Summary: AnnoDomini is a source-to-source conversion tool for making COBOL programs Year 2000 compliant. It is technically and conceptually built upon type-theoretic techniques and methods: type-based specification, program analysis by type inference and type-directed transformation. These are combined into an integrated software reengineering tool and method for finding and fixing Year 2000 problems. AnnoDomini's primary goals have been flexibility (support for multiple year representations), completeness (identifying all potential Year 2000 problems), correctness (correct fixes for Year 2000 problems) and a high degree of safe automation in all phases (declarative specification of conversions, no second-guessing or dangerous heuristics). In this paper we present the type-theoretic foundations of AnnoDomini: type system, type inference, unification theory, semantic soundness, and correctness of conversion. We also describe how these foundations have been applied and extended to a common COBOL mainframe dialect, and how AnnoDomini is packaged with graphical user interface and syntax-sensitive editor into a commercially available software tool.
Keywords: Modules, Standard ML, Separate Compilation, Region Inference, ML Kit
Summary: This paper presents a technique for compiling Standard ML Modules into typed intermediate language fragments, which may be compiled separately and linked using traditional linking technology to form executable code. The technique is called static interpretation and allows compile-time implementation details to propagate across module boundaries. Static interpretation eliminates all module-level code at compile time. The technique scales to full Standard ML and is used in the ML Kit with Regions compiler. A framework for smart recompilation makes the technique useful for compiling large programs.
Keywords: termination, partial evaluation, BSV, bounded static variation, specialisation point insertion, complexity
Summary: This paper presents a novel way of detecting termination for a large class of programs written in a functional language. The method includes, but is not restricted to, detection of decreasing values under a lexicographic ordering, and can thus prove the termination of Ackermann's function. Assuming the existence of a well-founded ordering on a value domain, we present an algorithm, utilising efficient graph operations, that first detects variables of bounded variation, and then uses information about successively decreasing values assigned to some of these variables to establish termination. We show how an extension of this technique to partial evaluation can be used to ensure that specialisation terminates when all variables are of bounded static variation and appropriate specialisation points are inserted. This can pose a tricky problem when the result of nested calls cannot be classified as static due to specialisation points. We solve this in an elegant way by recording the information necessary to undo previous boundedness classifications without recomputing costly graph operations. In contrast to previous methods, our insertion of specialisation points is not based on a general heuristic (such as "insert specialisation points at all dynamic conditionals"), but rather on a safe approximation and an "insert-by-need" strategy. Experiments have shown that the method works well on interpreters, which are of prime interest for partial evaluation. The algorithm for detecting termination has a worst-case complexity of O(p^3), where p is the program size, but for typical programs it is expected to be at most O(p^2). The insertion of a small set of specialisation points during the binding-time analysis has exponential worst-case complexity, due to the need to consider all combinations of recursive call sites. This indicates that future research should address this problem in more depth.
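To make the lexicographic-ordering case concrete, here is the standard Ackermann example the abstract alludes to (an illustration of the termination argument, not the paper's detection algorithm): every recursive call strictly decreases the argument pair (m, n) in the lexicographic order on naturals, which is well-founded, so the function terminates.

```python
def ackermann(m, n):
    # Each recursive call strictly decreases (m, n) lexicographically:
    #   (m, 0) -> (m-1, 1)           first component decreases
    #   (m, n) -> (m, n-1)  (inner)  first equal, second decreases
    #   (m, n) -> (m-1, _)  (outer)  first component decreases
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Note that no single argument decreases on every call, so a simple size-decrease check fails; only the lexicographic combination does.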
Keywords: partial evaluation, binding-time analysis, generating extension
Summary: A bird's-eye view of several aspects of the C-Mix system is given: how it is used, a few examples of what it can do, and some information on how the binding-time and points-to analyses work together with the generating-extension generator.
Keywords: partial evaluation, binding-time analysis, compiler generator, generating extensions
Summary: Program specialization can divide a computation into several computation stages. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction of generation time and generator size compared with previous attempts at multiple self-application. This paper summarizes the key ingredients of our approach to multi-level specialization.
Keywords: program transformation, metasystem transition, self-application, generalization, program specialization
Summary: In recent work, we proposed a simple functional language, S-graph-n, to study meta-programming aspects of self-applicable online program specialization. The primitives of the language provide support for multiple encodings of programs. An important component of online program specialization is the termination strategy. In this paper we show that such a representation has the great advantage of simplifying generalization of multiply encoded data. We extend two basic methods to multiply encoded data: most specific generalization and the homeomorphic embedding relation. Examples illustrate how they work in hierarchies of programs.
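For orientation, the homeomorphic embedding relation mentioned above can be sketched for ordinary (singly encoded) first-order terms; the paper's extension to multiply encoded data is not reproduced here, and the term representation is my own choice for the example:

```python
def embedded(s, t):
    """Homeomorphic embedding s <| t on first-order terms, written as
    ("f", arg1, ..., argn) tuples with atoms as strings.  Simplified
    sketch: distinct atoms do not embed into each other."""
    # Coupling: same root symbol, arguments embedded pairwise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)
            and all(embedded(a, b) for a, b in zip(s[1:], t[1:]))):
        return True
    # Diving: s is embedded in some immediate subterm of t.
    if isinstance(t, tuple) and any(embedded(s, sub) for sub in t[1:]):
        return True
    # Atoms embed only into equal atoms (or via diving, above).
    return s == t
```

In online specialization this relation is typically used as a "whistle": when an earlier term embeds into a new one, the new term is a candidate for generalization.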
Keywords: language hierarchies, metacomputation, program composition, program specialization, formal linguistic modeling
Summary: We study the structure of language hierarchies and their reduction by two forms of metacomputation, in order to overcome the time and space overhead that such hierarchies incur. We show that program specialization and program composition are sufficient to reduce all forms of language hierarchies constructed from interpreters and translators. We argue that the reduction of language hierarchies is a prerequisite for effective formal linguistic modeling on a large scale.
Keywords: partial evaluation, lambda calculus, binding time analysis, type inference
Summary: A simple, self-applicable partial evaluator for the untyped lambda calculus is described. Binding time analysis is expressed as a question of type inference in the two-level lambda calculus. Two generated compilers are examined.
Summary: Abadi and Cardelli present a series of type systems for their object calculi, four of which are first-order. Palsberg has shown how typability in each one of these systems can be decided in time O(n^3) and space O(n^2), where n is the size of an untyped object expression, using an algorithm based on dynamic transitive closure. In this paper we improve the time complexity of each one of the four type inference problems from O(n^3) to the following: no subtyping/no recursive types: O(n); no subtyping/with recursive types: O(n log^2 n); with subtyping/no recursive types: O(n^2); with subtyping/with recursive types: O(n^2). Furthermore, our algorithms improve the space complexity from O(n^2) to O(n) in each case. The key ingredient that lets us ``beat'' the worst-case time and space complexity induced by general dynamic transitive closure or similar algorithmic methods is that object subtyping, in contrast to record subtyping, is invariant: an object type is a subtype of a ``shorter'' type with a subset of the method names if and only if the common components have equal types.
Keywords: computability, complexity theory, PTIME, LOGSPACE, read-only programs
Summary: A programming approach to computability and complexity theory yields more natural definitions and proofs of central results than the classical approach. Further, some new results can be obtained using this viewpoint. This paper contains new intrinsic characterizations of the well-known complexity classes PTIME and LOGSPACE, with no externally imposed resource bounds on time or space. LOGSPACE is proven identical with the decision problems solvable by read-only imperative programs on Lisp-like lists; and PTIME is proven identical with the problems solvable by recursive read-only programs.
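As a hedged illustration of the read-only regime (my own example, not taken from the paper): a read-only program may move pointers along its Lisp-like input but never applies cons, so its workspace is a fixed number of pointers into the input, i.e. logarithmically many bits. The encoding of lists as nested pairs and the names below are assumptions of the sketch.

```python
# Illustration of a "read-only" program over Lisp-like lists,
# encoded as nested pairs (head, tail) ending in None.  The program
# only moves pointers into its input and never allocates a cell, so
# its workspace is a fixed number of pointers -- O(log n) bits.

def even_length(xs):
    pointer = xs     # cursor into the read-only input
    parity = True    # finite control state: prefix seen so far is even?
    while pointer is not None:
        parity = not parity
        pointer = pointer[1]   # step to the cdr; no allocation
    return parity
```

The restriction matters: allowing cons would let such programs build arbitrarily large intermediate structures, breaking the logarithmic space bound that the characterization of LOGSPACE depends on.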
Keywords: lambda-calculus, erasing reductions, normalisation, first-order logic, CPS-transformation
Summary: The thesis studies certain aspects of normalisation in the lambda-calculus. It begins with a study of the notions of conservation and uniform normalisation. Conservation means that infinite reduction paths are preserved under reduction. Uniform normalisation means that either a term cannot be reduced to a normal form, or all ways of reducing the term eventually lead to a normal form. The classical conservation theorem for Lambda I has been formulated using either of these two notions. The reason the formulations are interchangeable is that Lambda I is closed under beta-reduction, which in Lambda I does not erase subterms; in this situation conservation and uniform normalisation imply each other. However, when turning to erasing reductions the distinction becomes important, as conservation no longer implies uniform normalisation. This insight is elaborated into a new technique for finding uniformly normalising subsets of a lambda-calculus. The technique uses a combination of a syntactic and a semantic criterion, and it is applied in several ways: i) The technique is used to find a new uniformly normalising subset of the basic lambda-calculus. This subset includes some terms with erasing redexes, so-called K-redexes. ii) The technique is also used in a typed setting to find a uniformly normalising subset of the typed lambda-calculus Lambda M, corresponding to minimal first-order logic through the Curry-Howard isomorphism. iii) This uniformly normalising subset is used to infer strong normalisation from weak normalisation for Lambda M. This is a non-trivial extension of techniques developed recently by Sørensen and Xi to infer strong normalisation from weak normalisation of the same notion of reduction. Furthermore, the thesis considers the type system of classical first-order logic, which allows typing of exception handling. An analysis concludes that it is not possible to make a direct extension of the technique by Sørensen and Xi to this system.
Keywords: program transformation, information propagation, negative information, termination
Keywords: Supercompilation, termination
Summary: The paper gives an introduction to Turchin's supercompiler, a program transformer for functional programs which performs optimizations beyond partial evaluation and deforestation. More precisely, the paper presents positive supercompilation.