Keywords: programming languages, non-standard semantics, inverse computation, logic programming, program specialization, partial evaluation, metacomputation
Summary: An approach for systematically modifying the semantics of programming languages by means of semantics modifiers is described. It allows general and reusable ``semantics components'' to be designed without incurring an undue efficiency penalty. Inverse computation and neighborhood analysis are used to demonstrate the computational feasibility of the approach. Equivalence transformers are shown to fulfill the requirements of semantics modifiers. Finally, projections are given that allow the efficient implementation of non-standard interpreters and non-standard compilers using program specialization.
Keywords: computability in subrecursive languages, complexity classes, multi-head automata
Summary: We investigate the expressive power of programs operating on an input which is a LISP-style tree, without the possibility of modifying or creating cons cells. They can thus apply car or cdr but not cons. We obtain a hierarchy of classes of sets that can be recognized by deterministic, non-deterministic and recursive programs respectively, with and without the additional operation of testing pointer equality. Several results on the relationship of the classes defined by these languages to Turing-machine based complexity classes are obtained.
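To make the restriction concrete, here is a minimal sketch (not from the report) of a cons-free recursive program in the style the summary describes: it walks a LISP-style tree using only car and cdr (modeled as tuple projections), never allocating new cons cells, and decides whether the atom 'a' occurs as a leaf.

```python
# Cons-free, read-only tree program: car/cdr are allowed, cons is not.
def car(t): return t[0]
def cdr(t): return t[1]
def is_atom(t): return not isinstance(t, tuple)

def contains_a(t):
    # Recursive decision procedure; inspects the input tree only.
    if is_atom(t):
        return t == 'a'
    return contains_a(car(t)) or contains_a(cdr(t))

tree = (('b', 'a'), ('c', ('d', 'e')))
print(contains_a(tree))  # True
```

Since no cons cells are ever created, the program's workspace is bounded by pointers into the input, which is exactly why such languages correspond to subrecursive complexity classes.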
Keywords: pointer machine, pointer algorithm, storage modification machine, SMM, PPM, HMM, KUM, LISP machine
Summary: The term ``pointer machine'' has been used in Computer Science literature with various incompatible denotations. To help in disambiguating references, this report explains the different concepts and lists publications referring to each.
Keywords: time hierarchy, constant factor
Summary: This report repeats the proof of the hierarchy theorem for the language pgtI by Jones (cf. item D-160 in this bibliography) in sufficient detail that the constants involved can be evaluated, showing, for example, that for every a ≥ 103 there are problems decidable in time 127·a·n but not in time a·n. The constants are of course dependent on the programming language, but our main point is that they can be effectively calculated and are not very large.
Keywords: type, recursive, subtyping, coercion, axiomatization, inference rules, coinduction, fixpoint, Amadio, Cardelli
Summary: We give new axiomatizations of type equality and subtyping for a type language with recursive types. The novelty of our approach is the use of the fixpoint rule, which captures a finitary form of coinduction. Our coinductive approach has several advantages over previous axiomatizations: The coinductive rule is conceptually simpler than the contraction rule, and it is self-contained and strictly syntax directed. Natural operational interpretations (coercions) exist for the coinductive rule. Efficient and natural decision algorithms are easily constructed. The recursive subtype relation may be axiomatized without type equality. We prove our axiomatizations sound and complete with respect to Amadio and Cardelli's. The coinductive rule enables us to construct a coercion language which encodes all subtype derivations and has a natural operational interpretation, easily related to coercions in real programming languages. We develop a coherence theory of coercions and subtype derivations which is founded on a coinductive equality relation on coercions. Furthermore, we present a denotational semantics for coercions and show coercion equality sound under this interpretation. Finally, we formulate an optimality criterion for coercions, show that optimal coercions exist for all subtype derivations, and give an explicit algorithm for finding them. This algorithm also works as a decision algorithm for the subtype relation.
Keywords: Recursive type, type equality, subtyping, axiomatization, inference rule, inference system, coercion, coinduction, contractive
Summary: We present new sound and complete axiomatizations of type equality and subtype inequality for a first-order type language with regular recursive types. The rules are motivated by coinductive characterizations of type containment and type equality via simulation and bisimulation, respectively. The main novelty of the axiomatization is the fixpoint rule, a finitary coinduction principle. The new axiomatizations give rise to a natural operational interpretation of proofs as coercions and to efficient algorithms for constructing explicit coercions, not only for deciding type equality and type containment. More generally, we show how adding the fixpoint rule makes it possible to give an inductive definition of a set that is coinductively defined as the kernel (greatest fixed point) of an inference system.
Keywords: partial deduction, binding-time analysis, off-line, logic programming, partial evaluation, program specialisation
Summary: We study the notion of binding-time analysis for logic programs. We formalise the unfolding aspect of an on-line partial deduction system as a Prolog program. Using abstract interpretation, we collect information about the run-time behaviour of the program. We use this information to make the control decisions about the unfolding at analysis time and to turn the on-line system into an off-line system. We report on some initial experiments.
Keywords: dynamic memory management, Java, region, regions, type, inference system, soundness, static semantics, dynamic semantics
Summary: We present a Java-like language in which objects are explicitly put in regions. The language has constructs for allocating, updating and deallocating regions, as well as region types for objects. For this language we present a static semantics ensuring that well-typed programs use regions safely, and we present a dynamic semantics that is intensional with respect to a region-based store. We formulate and prove a soundness theorem stating that well-typed programs do not go wrong. Finally, we develop a concrete model for implementing regions, and we compare this model to garbage collection for small examples.
Keywords: AnnoDomini, Type Theory, Year 2000 Problem, Automatic Conversion
Summary: AnnoDomini is a commercially available source-to-source conversion tool for making COBOL programs Year 2000 compliant. It was developed in the last two years by a group at DIKU (part of the ERCIM partner DANIT) and grew directly out of research in the theory of programming languages; it uses type-based specification, analysis, and transformation. These are combined into an integrated software reengineering tool and method for finding and fixing Year 2000 problems. AnnoDomini's primary goals have been flexibility, completeness, correctness, and a high degree of safe automation.
Summary: This thesis is about a framework for elaborating and interpreting module language constructs at compile time in such a way that (1) arbitrary compile time information about declared identifiers of a module may be propagated to other modules and (2) no code is generated for module language constructs. The framework for interpreting module constructs is called static interpretation. More information about referenced identifiers than can be obtained from programmer-produced interfaces is necessary for analyses such as region inference. Further, many analyses improve significantly from availability of analysis-specific information about referenced identifiers. Static interpretation facilitates intermodule optimisation, yet it still supports a variant of separate compilation called cut-off incremental recompilation. The thesis is divided into three parts. The first part is about the static properties of a small language called ModML, which features a small Core language and the essential constructs of the Standard ML Modules language. The second part develops the framework for static interpretation by showing how ModML can be compiled into an explicitly typed intermediate language. The third part describes the ML Kit with regions compiler, which is based on the techniques developed in part one and part two.
Keywords: partial evaluation
Keywords: partial evaluation
Keywords: programming languages, program transformation, partial evaluation, program specialization, program composition, metacomputation
Summary: One of the main discoveries of the seventies was that the concept of a generating extension covers a very wide class of apparently different program generators. Program specialization, or partial evaluation, is powerful because it provides uniform techniques for the automatic implementation of generating extensions from ordinary programs. The Futamura projections stand as the cornerstone of the development of program specialization. This paper takes the idea of the Futamura projections further. Three degeneration projections are formulated which tell us how to achieve the reverse goal by program composition, namely turning a generating extension into an ordinary program. The fact that program composition can invert the effect of program specialization shows that these projections are dual in some sense. The degeneration projections complete a missing link between programs and generating extensions and allow for novel applications of program transformation.
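For context, the three Futamura projections referred to above can be stated in standard notation, where [[p]] denotes the function computed by program p, mix is a specializer, int an interpreter, and source an interpreted program:

```latex
\begin{align*}
\mathit{target}   &= [\![\mathit{mix}]\!](\mathit{int}, \mathit{source})
  && \text{(1st projection: compilation)} \\
\mathit{compiler} &= [\![\mathit{mix}]\!](\mathit{mix}, \mathit{int})
  && \text{(2nd projection: compiler generation)} \\
\mathit{cogen}    &= [\![\mathit{mix}]\!](\mathit{mix}, \mathit{mix})
  && \text{(3rd projection: compiler-generator generation)}
\end{align*}
```

The degeneration projections of the paper go in the opposite direction, recovering an ordinary program from a generating extension by program composition.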
Keywords: subtyping, subtype, entailment, recursive types, constraint, computational complexity
Summary: We study entailment of structural and nonstructural recursive subtyping constraints. Constraints are formal inequalities between type expressions, interpreted over and ordered set of possibly infinite labeled trees. The nonstructural ordering on trees is the one introduced by Amadio and Cardelli for subtyping with recursive types. The structural ordering compares only trees with common shape. A constraint set entails an inequality if every assignment of meanings (trees) to type expressions that satisfies all the constraints also satisfies the inequality. In this paper we prove that nonstructural subtype entailment is PSPACE-hard, both for finite trees (simple types) and infinite trees (recursive types). For the structural ordering we prove that subtype entailment over infinite trees is PSPACE-complete when the order on trees is generated from a lattice of type constants. Since structural subtype entailment over finite trees has been shown to be coNP-complete, assuming coNP neq PSPACE these are the first complexity-theoretic separation results that show that, informally, nonstructural subtype entailment is harder than structural entailment, and recursive entailment is harder than nonrecursive entailment.
Keywords: Dagstuhl seminar report
Summary: Abstracts of a meeting held June 8-12, 1998 at Schloss Dagstuhl
Keywords: computability, complexity, programming
Keywords: partial deduction, on-line partial evaluation, program specialisation, logic programming
Keywords: Termination, Well-quasi orders, Well-founded orders, Program Analysis, Specialisation, and Transformation, Partial Evaluation, Partial Deduction, Logic Programming, Functional & Logic Programming
Summary: Recently well-quasi orders in general, and homeomorphic embedding in particular, have gained popularity to ensure the termination of program analysis, specialisation and transformation techniques. In this paper we investigate and clarify for the first time, both intuitively and formally, the advantages of such an approach over one using well-founded orders. Notably we show that the homeomorphic embedding relation is strictly more powerful than a large class of involved well-founded approaches.
Keywords: program specialisation, partial evaluation, partial deduction, abstract interpretation
Summary: We clarify the relationship between abstract interpretation and program specialisation in the context of logic programming. We present a generic top-down abstract specialisation framework, along with a generic correctness result, into which many existing specialisation techniques can be cast. The framework also shows how these techniques can be further improved by moving to more refined abstract domains. However, it also highlights inherent limitations shared by all these approaches. In order to overcome them, and to fully unify program specialisation with abstract interpretation, we also develop a generic combined bottom-up/top-down framework, which allows specialisation and analysis outside the reach of existing techniques.
Keywords: Program analysis, type systems, efficiency, polymorphism, recursive types, polyvariance
Summary: A value flow analysis based on a new concept of higher-order value flow graphs is presented. The language analysed needs to be typed, but the type system may include polymorphism, sum types, recursive types and even dynamic types. The precision of the analysis is equivalent to Sestoft's closure analysis, yet the algorithm has quadratic complexity in the size of the explicitly typed program. A proof of correctness is given.
Keywords: partial deduction, tabling, partial evaluation, program specialisation, logic programming
Keywords: Lambda-Calculus, Type Theory, Weak Normalization, Strong Normalization
Summary: Available as DIKU Rapport 97/27
Keywords: Program transformation, termination, metric spaces
Summary: In recent years increasing consensus has emerged that program transformers, e.g., partial evaluation and unfold/fold transformations, should terminate; a compiler should stop even if it performs fancy optimizations! A number of techniques to ensure termination of program transformers have been invented, but their correctness proofs are sometimes long and involved. We present a framework for proving termination of program transformers, cast in the metric space of trees. We first introduce the notion of an abstract program transformer; a number of well-known program transformers can be viewed as instances of this notion. We then formalize what it means that an abstract program transformer terminates and give a general sufficient condition for an abstract program transformer to terminate. We also consider some specific techniques for satisfying the condition. As applications we show that termination of some well-known program transformers either follows directly from the specific techniques or is easy to establish using the general condition. Our framework facilitates simple termination proofs for program transformers. Also, since our framework is independent of the language being transformed, a single correctness proof can be given in our framework for program transformers using essentially the same technique in the context of different languages. Moreover, it is easy to extend termination proofs for program transformers to accommodate changes to these transformers. Finally, the framework may prove useful for designing new termination techniques for program transformers.
Keywords: The Curry-Howard isomorphism, Logic, Lambda-Calculus