- D-385.

**A Ben-Amram and N Jones**.

*A precise version of a time hierarchy theorem*.

Fundamenta Informaticae **38**, pp. 1-15. 1999. Special issue dedicated to Arto Salomaa. *Keywords*: time complexity, hierarchy theorem, universal program. *Summary*: Using a simple programming language as a computational model, Neil Jones has shown that a constant-factor time hierarchy exists: thus a constant-factor difference in time makes a difference in problem-solving power, unlike the situation with Turing machines. In this note we review this result and fill in the details of the proof, thus obtaining a *precise version* of the theorem with explicit constant factors. Specifically, multiplying the running time by 232 provably allows more decision problems to be solved.

- D-393.

**N H Christensen**.

*Levin, Blum and the Time Optimality of Programs*.

Master's Thesis. DIKU, University of Copenhagen. April, 1999. *Keywords*: speedup, Unlimp. *Summary*: The general topic of this thesis is optimality of running times of programs. In the first, and longest, part we focus directly on these issues. We provide a general discussion of the kind of computational problems for which some program solving a problem can be said to have optimal running time among all programs solving the problem. Classical results by Leonid A. Levin and Manuel Blum demonstrate that for some computational problems such a program does exist, and for others it does not. We present the two results and prove some new variations of them. We also consider the possibility of deciding whether a problem is solved by an optimal program and of obtaining optimal programs by use of automatic optimizers. The second part is a spin-off of our work with Levin's Optimal Search Theorem. In an earlier report we performed some experiments with an implementation of the algorithm associated with this theorem. A serious problem was the amount of space used by a particular part of the algorithm which was supposed to enumerate programs in some language efficiently. This problem made us look into ways of improving the space usage of such algorithms. One particularly interesting construction was Stefan Kahrs's so-called ``Unlimp'' interpreters. An ``Unlimp'' interpreter manages heap usage in a clever way that minimizes the number of cells allocated by the constructors of the interpreted language. Moreover, it allows very easy implementation of function memoization. We found this idea so interesting in itself that we dedicated the second part of this thesis to describing it more carefully and investigating an implementation of the scheme by a number of experiments.

- D-397.
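The ``Unlimp'' idea in the entry above can be sketched in a few lines. This is an illustrative reconstruction, not Kahrs's implementation (all names are ours, and heads are assumed to be hashable atoms): hash-consing makes heap sharing maximal, so structural equality coincides with pointer identity, and memoization reduces to a table keyed on cell identities.

```python
class Cons:
    __slots__ = ("head", "tail")
    def __init__(self, head, tail):
        self.head, self.tail = head, tail

_cells = {}  # (atom, id(tail)) -> the unique cell with that content

def cons(head, tail):
    """Allocate a cell only if an identical one does not already exist."""
    key = (head, id(tail))  # ids are stable: _cells keeps cells alive
    cell = _cells.get(key)
    if cell is None:
        cell = _cells[key] = Cons(head, tail)
    return cell

_memo = {}  # id(cell) -> memoized result

def length(xs):
    """Memoized on pointer identity: sound because sharing is maximal."""
    r = _memo.get(id(xs))
    if r is None:
        r = 0 if xs is None else 1 + length(xs.tail)
        _memo[id(xs)] = r
    return r
```

Building the "same" list twice yields the very same cells, so any memoized function computes each result at most once per distinct structure.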

**M V Christiansen, F Henglein, H Niss, and P Velschow**.

*Safe Region-Based Memory Management for Objects*.

Technical Report. DIKU, University of Copenhagen. October, 1998.

- D-370.

**D De Schreye, R Glück, J Jørgensen, M Leuschel, B Martens, and M H Sørensen**.

*Conjunctive Partial Deduction: Foundations, Control, Algorithms, and Experiments*.

Journal of Logic Programming **41**(2&3), pp. 231-277. 1999. *Keywords*: logic programming, partial deduction, termination, unfold/fold transformation. *Summary*: Partial deduction in the Lloyd-Shepherdson framework cannot achieve certain optimisations which are possible by unfold/fold transformations. We introduce conjunctive partial deduction, an extension of partial deduction accommodating such optimisations, e.g., tupling and deforestation. We first present a framework for conjunctive partial deduction, extending the Lloyd-Shepherdson framework by considering conjunctions of atoms (instead of individual atoms) for specialisation and renaming. Correctness results are given for the framework with respect to computed answer semantics, least Herbrand model semantics, and finite failure semantics. Maintaining the well-known distinction between local and global control, we describe a basic algorithm for conjunctive partial deduction, and refine it into a concrete algorithm for which we prove termination. The problem of finding suitable renamings which remove redundant arguments turns out to be important, so we give an independent technique for this. A fully automatic implementation has been undertaken, which always terminates. Differences between the abstract semantics and Prolog's left-to-right execution motivate deviations from the abstract technique in the actual implementation, which we discuss. The implementation has been tested on an extensive set of benchmarks which demonstrate that conjunctive partial deduction indeed pays off, surpassing conventional partial deduction on a range of small to medium-size programs, while remaining manageable in an automatic and terminating system.

- D-404.

**P H Eidorff, F Henglein, C Mossin, H Niss, M H Sørensen, and M Tofte**.

*AnnoDomini: From Type Theory to a Year 2000 Conversion Tool*.

ERCIM News **36**, pp. 12-13. January, 1999. European Research Consortium for Informatics and Mathematics (ERCIM), http://www.ercim.org. *Pointers*: BibTeX.

- D-384.

**P Eidorff, F Henglein, C Mossin, H Niss, M H B Sørensen, and M Tofte**.

*AnnoDomini in Practice: A Type-Theoretic Approach to the Year 2000 Problem*.

In *Proc. Symposium on Typed Lambda Calculus and Applications (TLCA)* (J-Y Girard, ed.), pp. 6-13. Volume 1581 of Lecture Notes in Computer Science. Springer-Verlag. L'Aquila, Italy. April, 1999. *Summary*: AnnoDomini is a commercially available source-to-source conversion tool for finding and fixing Year 2000 problems in COBOL programs. AnnoDomini uses type-based specification, analysis, and transformation to achieve its main design goals: flexibility, completeness, correctness, and a high degree of safe automation.

- D-383.

**P Eidorff, F Henglein, C Mossin, H Niss, M Sørensen, and M Tofte**.

*AnnoDomini: From Type Theory to Year 2000 Conversion Tool*.

In *ACM Symposium on Principles of Programming Languages*, pp. 1-14. ACM Press. 1999. Invited paper. *Summary*: AnnoDomini is a source-to-source conversion tool for making COBOL programs Year 2000 compliant. It is technically and conceptually built upon type-theoretic techniques and methods: type-based specification, program analysis by type inference, and type-directed transformation. These are combined into an integrated software reengineering tool and method for finding and fixing Year 2000 problems. AnnoDomini's primary goals have been *flexibility* (support for multiple year representations), *completeness* (identifying all potential Year 2000 problems), *correctness* (correct fixes for Year 2000 problems) and a high degree of *safe automation* in all phases (declarative specification of conversions, no second-guessing or dangerous heuristics). In this paper we present the type-theoretic foundations of AnnoDomini: type system, type inference, unification theory, semantic soundness, and correctness of conversion. We also describe how these foundations have been applied and extended to a common COBOL mainframe dialect, and how AnnoDomini is packaged with a graphical user interface and syntax-sensitive editor into a commercially available software tool.

- D-381.

**M Elsman**.

*Static Interpretation of Modules*.

In *Proceedings of the 1999 International Conference on Functional Programming (ICFP'99)*. September, 1999. *Keywords*: modules, Standard ML, separate compilation, region inference, ML Kit. *Summary*: This paper presents a technique for compiling Standard ML Modules into typed intermediate language fragments, which may be compiled separately and linked using traditional linking technology to form executable code. The technique is called static interpretation and allows compile-time implementation details to propagate across module boundaries. Static interpretation eliminates all module-level code at compile time. The technique scales to full Standard ML and is used in the ML Kit with Regions compiler. A framework for smart recompilation makes the technique useful for compiling large programs.

- D-401.

**A J Glenstrup**.

*Terminator II: Stopping Partial Evaluation of Fully Recursive Programs*.

Master's Thesis. DIKU, University of Copenhagen. Universitetsparken 1, DK-2100 Copenhagen Ø. June, 1999. *Keywords*: termination, partial evaluation, BSV, bounded static variation, specialisation point insertion, complexity. *Summary*: This paper presents a novel way of detecting termination for a large class of programs written in a functional language. The method includes, but is not restricted to, detection of decreasing values under lexicographic ordering, and can thus prove the termination of Ackermann's function. Assuming the existence of a well-founded ordering of a value domain, we present an algorithm, utilising efficient graph operations, that first detects variables of bounded variation, and then uses information about successively decreasing values assigned to some of these variables to establish termination. We show how an extension of this technique to partial evaluation can be used to ensure that specialisation terminates when all variables are of bounded static variation and appropriate specialisation points are inserted. This can pose a tricky problem when the result of nested calls cannot be classified as static due to specialisation points. We solve this in an elegant way by recording sufficient information to undo previous boundedness classifications without recomputing costly graph operations. In contrast to previous methods, our insertion of specialisation points is not based on a general heuristic (like, e.g., "insert specialisation points at all dynamic conditionals"), but rather on a safe approximation and an "insert-by-need" strategy. Experiments have shown that the method works well on interpreters, which are of prime interest for partial evaluation. The algorithm for detecting termination has a worst-case complexity of O(p^3), where p is the program size, but for typical programs it is expected to be at most O(p^2). The insertion of a small set of specialisation points during the binding-time analysis has exponential worst-case complexity, due to the need for considering all combinations of recursive call sites. This indicates that future research should address this problem in more depth. *Pointers*: BibTeX. dvi.gz. pdf.gz. ps.gz.

- D-390.
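As a toy illustration of the lexicographic-ordering criterion mentioned in the entry above (our own sketch, not Glenstrup's algorithm): every recursive call of Ackermann's function strictly decreases the argument pair (m, n) in the lexicographic order, and a checker needs only the statically visible call descriptions, not the computed values.

```python
def ackermann_call_sites(m, n):
    """The argument pairs of the immediate recursive calls of ack(m, n).
    The second argument of the outer call in ack(m-1, ack(m, n-1)) is
    unknown at analysis time, so it is recorded as None."""
    if m == 0:
        return []                       # base case: no recursive calls
    if n == 0:
        return [(m - 1, 1)]             # ack(m-1, 1)
    return [(m, n - 1), (m - 1, None)]  # ack(m, n-1) and ack(m-1, ?)

def lex_decreases(call, args):
    """True if `call` lies strictly below `args` in the lexicographic
    order, even when the call's second component is unknown (None)."""
    (m2, n2), (m, n) = call, args
    return m2 < m or (m2 == m and n2 is not None and n2 < n)

# Every call site decreases for all sampled arguments -- the kind of
# witness of termination the entry describes.
terminates = all(lex_decreases(c, (m, n))
                 for m in range(5) for n in range(5)
                 for c in ackermann_call_sites(m, n))
```

Note that the unknown inner result never matters: whenever it appears, the first component has already strictly decreased.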

**A Glenstrup, H Makholm, and J P Secher**.

*C-Mix -- Specialization of C Programs*.

In *Partial Evaluation: Practice and Theory. Proceedings of the 1998 DIKU International Summerschool* (J Hatcliff, T Mogensen, and P Thiemann, eds.), pp. 108-154. Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Keywords*: partial evaluation, binding-time analysis, generating extension. *Summary*: A bird's-eye view of several aspects of the C-Mix system is given: how it is used, a few examples of what it can do, and some information on how the binding-time and points-to analyses work together with the generating-extension generator. *Pointers*: BibTeX.

- D-363.
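To give a flavour of the generating-extension idea in the entry above, here is a hand-written hypothetical sketch in Python (C-Mix itself works on C and emits C): with the exponent classified static by binding-time analysis, a generating extension for `power` takes the static input and emits a residual program in which the loop over the exponent has been unrolled away.

```python
def power_gen(n):
    """Generating extension for power(x, n) with n static: emits and
    'links' the residual program specialized to this particular n."""
    body = "1" if n == 0 else " * ".join(["x"] * n)
    source = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(source, namespace)        # compile and link the residual code
    return namespace[f"power_{n}"]

power_3 = power_gen(3)             # residual body: return x * x * x
```

Running the generating extension once per static input replaces interpretation of the exponent by straight-line residual code.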

**R Glück and J Jørgensen**.

*Multi-Level Specialization (Extended Abstract)*.

In *Partial Evaluation: Practice and Theory* (J Hatcliff, T Mogensen, and P Thiemann, eds.), pp. 326-337. Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Keywords*: partial evaluation, binding-time analysis, compiler generator, generating extensions. *Summary*: Program specialization can divide a computation into several computation stages. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction of generation time and generator size compared to previous attempts of multiple self-application. This paper summarizes the key ingredients of our approach to multi-level specialization.

- D-380.

**R Glück, J Hatcliff, and J Jørgensen**.

*Generalization in Hierarchies of Online Program Specialization Systems*.

In *Logic-Based Program Synthesis and Transformation. Proceedings* (P Flener, ed.), pp. 179-198. Volume 1559 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Keywords*: program transformation, metasystem transition, self-application, generalization, program specialization. *Summary*: In recent work, we proposed a simple functional language S-graph-n to study meta-programming aspects of self-applicable online program specialization. The primitives of the language provide support for multiple encodings of programs. An important component of online program specialization is the termination strategy. In this paper we show that such a representation has the great advantage of simplifying generalization of multiply encoded data. We extend two basic methods to multiply encoded data: most specific generalization and the homeomorphic embedding relation. Examples illustrate their working in hierarchies of programs.

- D-230.

**R Glück and A V Klimov**.

*Reduction of Language Hierarchies*.

Volume 8 of The Evolution of Complexity. Kluwer Academic Publishers. 1999. *Keywords*: language hierarchies, metacomputation, program composition, program specialization, formal linguistic modeling. *Summary*: We study the structure of language hierarchies and their reduction by two forms of metacomputation in order to overcome the time and space complexity of language hierarchies. We show that program specialization and program composition are sufficient to reduce all forms of language hierarchies constructed from interpreters and translators. We argue that the reduction of language hierarchies is a prerequisite for effective formal linguistic modeling on a large scale.

- D-413.

**C K Gomard and N D Jones**.

*Partial evaluation of lambda calculus*.

In *Partial Evaluation: Practice and Theory. Proceedings of the 1998 DIKU International Summerschool* (J Hatcliff, T Mogensen, and P Thiemann, eds.), pp. 203-220. Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Keywords*: partial evaluation, lambda calculus, binding time analysis, type inference. *Summary*: A simple, self-applicable partial evaluator for the untyped lambda calculus is described. Binding time analysis is expressed as a question of type inference in the two-level lambda calculus. Two generated compilers are examined.

- D-408.

**N Hallenberg**.

*Combining Garbage Collection and Region Inference in The ML Kit*.

Master's Thesis. Department of Computer Science, University of Copenhagen. June, 1999. *Pointers*: BibTeX.

- D-409.

**E K Hansen**.

*Checkpointing Java Programs on Standard Java; Implementations using Program Transformation*.

Master's Thesis. University of Copenhagen. Universitetsparken 1, 2100 Copenhagen Ø. 1999. *Pointers*: BibTeX.

- D-406.

**J Hatcliff, T Mogensen, and P Thiemann, editors**.

*Partial Evaluation: Practice and Theory*.

Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Pointers*: BibTeX.

- D-407.

**L Haupt-Hansen**.

*Memory management in object-oriented systems*.

Master's Thesis. University of Copenhagen. Universitetsparken 1, 2100 Copenhagen Ø. 1999. *Pointers*: BibTeX.

- D-396.

**F Henglein**.

*Breaking Through the n^3 Barrier: Faster Object Type Inference*.

Theory and Practice of Object Systems (TAPOS) **5**(1), pp. 57-72. 1999. *Summary*: Abadi and Cardelli present a series of type systems for their object calculi, four of which are first-order. Palsberg has shown how typability in each one of these systems can be decided in time O(n^3) and space O(n^2), where n is the size of an untyped object expression, using an algorithm based on dynamic transitive closure. In this paper we improve each one of the four type inference problems from O(n^3) to the following time complexities:

- no subtyping/no recursive types: O(n);
- no subtyping/with recursive types: O(n log^2 n);
- with subtyping/no recursive types: O(n^2);
- with subtyping/with recursive types: O(n^2).

Furthermore, our algorithms improve the space complexity from O(n^2) to O(n) in each case. The key ingredient that lets us ``beat'' the worst-case time and space complexity induced by general dynamic transitive closure or similar algorithmic methods is that object subtyping, in contrast to record subtyping, is *invariant*: an object type is a subtype of a ``shorter'' type with a subset of the method names if and only if the common components have *equal* types.

- D-386.
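The invariance property behind these bounds fits in a few lines (an illustrative sketch with hypothetical names, not Henglein's algorithm): the subtype test needs only width subtyping plus *equality* on the shared methods, so no transitive-closure machinery over subtype constraints is required.

```python
def is_object_subtype(s, t):
    """Invariant object subtyping: s <= t iff every method of t is also a
    method of s with an *equal* type (width subtyping, no depth subtyping).
    s and t map method names to type representations."""
    return all(name in s and s[name] == t[name] for name in t)

# A "longer" object type is a subtype of a "shorter" one with equal types
# on the common methods.
point3d = {"x": "int", "y": "int", "z": "int"}
point2d = {"x": "int", "y": "int"}
```

With hash-consed type representations, each equality test is a pointer comparison, which is how the near-linear behaviour becomes plausible.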

**N Jones**.

*Computability and Complexity Revisited*.

Volume 259 of London Mathematical Society Lecture Note Series. Cambridge University Press. University of Oxford, 24-29 Giles Street, Oxford OX1 3TG, UK. 1999. *Pointers*: BibTeX.

- D-398.

**N D Jones**.

*LOGSPACE and PTIME characterized by Programming Languages*.

Theoretical Computer Science **228**, pp. 151-174. 1999. *Keywords*: partial evaluation, lambda calculus, binding time analysis, type inference. *Summary*: A programming approach to computability and complexity theory yields more natural definitions and proofs of central results than the classical approach. Further, some new results can be obtained using this viewpoint. This paper contains new *intrinsic* characterizations of the well-known complexity classes PTIME and LOGSPACE, with no externally imposed resource bounds on time or space. LOGSPACE is proven identical with the decision problems solvable by *read-only* imperative programs on Lisp-like lists; and PTIME is proven identical with the problems solvable by *recursive* read-only programs.

- D-400.
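A read-only program in the sense of the entry above manipulates only pointers into its Lisp-like input. The following toy decider is our own illustration (cons cells encoded as nested Python tuples, nil as `None`): it walks the input with cdr moves and constant control state, allocating no new cells, which is the discipline the paper relates to LOGSPACE.

```python
def has_even_length(xs):
    """Decide whether a cons-list such as ("a", ("b", None)) has even
    length, using only a read-only cursor into the input."""
    even = True
    p = xs              # cursor: points into the input, never allocates
    while p is not None:
        even = not even
        p = p[1]        # cdr
    return even
```

The cursor can only be moved with car/cdr and compared for equality, so the "memory" of the program is a constant number of pointers into the input, i.e. logarithmically many bits.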

**B Kjellerup**.

*Normalization of First-Order Logic Proofs in Isabelle*.

Master's Thesis. DIKU, University of Copenhagen, Denmark. 1999.

- D-414.

**J Lawall**.

*Faster Fourier transforms via automatic program specialization*.

In *Partial Evaluation: Practice and Theory. Proceedings of the 1998 DIKU International Summerschool* (J Hatcliff, T Mogensen, and P Thiemann, eds.), pp. 338-355. Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Pointers*: BibTeX.

- D-405.

**H Makholm**.

*Specializing C - an introduction to the principles behind C-Mix II*.

`http://www.diku.dk/~makholm/cmixintro.ps.gz`. DIKU, University of Copenhagen. 1999. *Pointers*: BibTeX.

- D-403.

**T Mogensen**.

*Gödelization in the untyped lambda calculus*.

In *Proc. Partial Evaluation and Semantics-Based Program Manipulation (PEPM), San Antonio, Texas*, pp. 19-24. BRICS Notes Series NS-99-1, Aarhus. Aarhus University. 1999. *Pointers*: BibTeX.

- D-412.

**T Mogensen**.

*Inherited Limits*.

In *Partial Evaluation: Practice and Theory. Proceedings of the 1998 DIKU International Summerschool* (J Hatcliff, T Mogensen, and P Thiemann, eds.), pp. 189-202. Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Pointers*: BibTeX.

- D-411.

**T Mogensen**.

*Partial evaluation: concepts and applications*.

In *Partial Evaluation: Practice and Theory. Proceedings of the 1998 DIKU International Summerschool* (J Hatcliff, T Mogensen, and P Thiemann, eds.), pp. 1-19. Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Pointers*: BibTeX.

- D-399.

**P Møller Neergaard**.

*Weak and Strong Normalization, K-redexes, and First-Order Logic*.

Technical Report 99/11. DIKU, University of Copenhagen, Denmark. Universitetsparken 1, DK-2100 København Ø. September, 1999. *Keywords*: lambda-calculus, erasing reductions, normalisation, first-order logic, CPS-transformation. *Summary*: The thesis studies certain aspects of normalisation in the lambda-calculus. It begins with a study of the notions of conservation and uniform normalisation. Conservation means that infinite reduction paths are preserved under reduction. Uniform normalisation means that either the term cannot be reduced to a normal form, or all ways of reducing the term will eventually lead to a normal form. The classical conservation theorem for Lambda I has been formulated using either of the notions conservation and uniform normalisation. The reason for the indistinctness in the formulation of the conservation theorem is that Lambda I is closed under beta-reduction, which in Lambda I does not erase subterms. In this situation uniform normalisation implies conservation, and conservation also implies uniform normalisation. However, when turning to erasing reductions the distinction becomes important, as conservation no longer implies uniform normalisation. This insight is elaborated in a new technique for finding uniformly normalising subsets of a lambda-calculus. The technique uses a combination of a syntactic and a semantic criterion, and it is applied in several ways: (i) The technique is used to find a new uniformly normalising subset of the basic lambda-calculus. This uniformly normalising subset includes some terms with erasing redexes, so-called **K**-redexes. (ii) The technique is also used in a typed setting to find a uniformly normalising subset of the typed lambda-calculus Lambda M corresponding to minimal first-order logic through the Curry-Howard isomorphism. (iii) This uniformly normalising subset is used to infer strong normalisation from weak normalisation for Lambda M. This is a non-trivial extension of techniques developed recently by Sørensen and Xi to infer strong normalisation from weak normalisation of the same notion of reduction. Furthermore, the thesis considers the type system of classical first-order logic, which allows typing of exception handling. An analysis concludes that it is not possible to make a direct extension of the technique of Sørensen and Xi to this system. *Pointers*: BibTeX. dvi.gz. html. ps.gz.

- D-382.

**F v Raamsdonk, P Severi, M Sørensen, and H Xi**.

*Perpetual Reductions in lambda-calculus*.

Information and Computation **149**(2), pp. 173-225. 1999. *Pointers*: BibTeX.

- D-410.

**P Rasmussen**.

*Theory and Practice of Tupling*.

Master's Thesis. University of Copenhagen. Universitetsparken 1, 2100 Copenhagen Ø. 1999. *Pointers*: BibTeX.

- D-376.

**J P Secher**.

*Perfect Supercompilation*.

DIKU-TR-99/1. Department of Computer Science (DIKU), University of Copenhagen. February, 1999. *Keywords*: program transformation, information propagation, negative information, termination.

- D-392.

**S C Skalberg**.

*A Mechanical Proof of the Optimality of a Partial Evaluator*.

Master's Thesis. University of Copenhagen. Universitetsparken 1, 2100 Copenhagen Ø. February, 1999. *Pointers*: BibTeX.

- D-364.

**M H Sørensen and R Glück**.

*Introduction to Supercompilation*.

In *Partial Evaluation: Practice and Theory* (J Hatcliff, T Mogensen, and P Thiemann, eds.), pp. 246-270. Volume 1706 of Lecture Notes in Computer Science. Springer-Verlag. 1999. *Keywords*: supercompilation, termination. *Summary*: The paper gives an introduction to Turchin's supercompiler, a program transformer for functional programs which performs optimizations beyond partial evaluation and deforestation. More precisely, the paper presents positive supercompilation.

Generated automatically from ftp'able BibTeX sources.

topps-www@diku.dk / October 27, 2010.