- D-355.

**S Alstrup, J P Secher, and M Spork**.

*Optimal On-line Decremental Connectivity in Trees*.

Information Processing Letters **64**(4), pp. 161-164. 1997. *Keywords*: algorithms, trees, connectivity. *Summary*: Let T be a tree with n nodes from which edges are deleted, interspersed with m on-line connectivity queries. An O(n log n + m) algorithm for this problem exists. We present an O(n + m) algorithm. - D-323.
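The decremental connectivity problem of D-355 above can be illustrated with a minimal sketch (our own toy interface, not the paper's O(n + m) algorithm): parent pointers with root-climbing answer queries correctly, but each query may cost O(n), which is exactly the overhead the paper's result removes.

```python
# Hypothetical illustration of the D-355 problem setting (not the paper's
# algorithm): maintain a rooted tree under edge deletions and answer
# connectivity queries by climbing to component roots.

class DecrementalTree:
    def __init__(self, parent):
        # parent[v] is v's parent, or None if v is a component root.
        self.parent = list(parent)

    def delete_edge(self, v):
        # Delete the edge between v and its parent; v becomes a new root.
        self.parent[v] = None

    def root(self, v):
        while self.parent[v] is not None:
            v = self.parent[v]
        return v

    def connected(self, u, v):
        # Two nodes are connected iff they reach the same component root.
        return self.root(u) == self.root(v)
```

Here a query costs time proportional to the node's depth, so n deletions and m queries cost O(n·m) in the worst case, against the paper's O(n + m) total.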

**N Andersen**.

*Linear Time Simulation of Invertible Non-Deterministic Stack Algorithms*.

Journal of Universal Computer Science **3**(3), pp. 148-171. March, 1997. *Keywords*: 2DPDA, Cook's transformation, memoization, non-deterministic programming, backtracking, program transformation. *Summary*: Via memoization, a program making use of a single stack may be transformed into an equivalent one running in time proportional to the sum of variabilities at certain points of the original program. It is suggested to express the source program in a non-deterministic language with invertible operations. The paper is directly available from the electronic journal, either in HTML (abstract only) or PostScript format; see http://www.jucs.org/jucs_3_3/linear_time_simulation. *Pointers*: BibTeX. - D-336.
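The flavor of the memoization idea in D-323 above can be sketched with a backtracking recognizer (a hypothetical example of ours, not the paper's construction): caching results per input position makes each (procedure, position) pair be computed at most once, in the spirit of Cook's transformation and of packrat parsing.

```python
from functools import lru_cache

# Hypothetical sketch: a backtracking recognizer for the grammar
#   S -> 'a' S 'b' | ''
# over a fixed input string. Memoizing on the input position turns the
# exponential backtracking search into a linear number of distinct calls.

def make_recognizer(s):
    @lru_cache(maxsize=None)
    def S(i):
        # Returns the set of positions reachable after parsing S from i.
        results = {i}                      # empty production
        if i < len(s) and s[i] == 'a':
            for j in S(i + 1):             # S -> 'a' S 'b'
                if j < len(s) and s[j] == 'b':
                    results.add(j + 1)
        return frozenset(results)
    return lambda: len(s) in S(0)
```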

**G Barthe, J Hatcliff, and M H Sørensen**.

*Reflections on reflections*.

In *Programming Languages: Implementations, Logics and Programs* (H Glaser, H Hartel, and H Kuchen, eds.), pp. 241-258. Volume 1292 of Lecture Notes in Computer Science. Springer-Verlag. 1997. *Keywords*: reflections, compiling, translations. *Summary*: In the functional programming literature, compiling is often expressed as a translation between source and target calculi. In recent work, Sabry and Wadler proposed the notion of a *reflection* as a basis for relating source and target calculi. A reflection elegantly describes the situation where there is a kernel of the source language that is isomorphic to the target language. However, we believe that the reflection criterion is so strong that it often excludes the usual situation in compiling where one is compiling from a higher-level to a lower-level language. We give a detailed analysis of several translations commonly used in compiling that fail to be reflections. We conclude that, in addition to the notion of reflection, there are several relations weaker than a reflection that are useful for characterizing translations. We show that several familiar translations (that are not naturally reflections) form what we call a *reduction correspondence*. We introduce the more general notion of an (*R*_1, *R*_2, *R*_3, *R*_4)-correspondence as a framework for describing relations between source and target calculi. - D-338.

**G Barthe and M H Sørensen**.

*Domain-free Pure Type Systems*.

In *Logical Foundations of Computer Science* (S Adian and A Nerode, eds.), pp. 9-20. Volume 1234 of Lecture Notes in Computer Science. Springer-Verlag. 1997. *Keywords*: pure type systems, Curry-style typing. *Summary*: Pure type systems feature domain-specified lambda-abstractions lambda x:A.M. We present a variant of pure type systems, which we call domain-free pure type systems, with domain-free lambda-abstractions lambda x.M. We study the basic properties of domain-free pure type systems and establish their formal relationship with pure type systems. - D-304.

**G Barthe, J Hatcliff, and M H Sørensen**.

*CPS Translations and Applications: the Cube and Beyond*.

In *ACM SIGPLAN Workshop on Continuations* (O Danvy, ed.), pp. 4:1-31. BRICS Notes Series. 1997. Proceedings to appear as BRICS Technical Report. *Keywords*: continuation passing style, the lambda-cube, pure type systems. *Summary*: Continuation passing style (CPS) translations of typed lambda-calculi have numerous applications. However, the range of these applications has been confined thus far by the fact that CPS translations are known only for non-dependent type systems, thus excluding a number of well-known systems, e.g., the calculus of constructions and LF. The need for more general CPS translations has appeared in several lines of recent work in which the authors are involved. We discuss the difficulties involved with CPS translating more general systems. We then introduce CPS translations of Barendregt's cube of typed lambda-calculi, including the calculus of constructions and the LF calculus. This is generalized further to what we call *strict logical* pure type systems. This class includes all the systems of the cube and the logical non-dependent systems of Coquand and Herbelin. Almost all known (call-by-name, Plotkin-style) CPS translations appear as special cases. Direct style (DS) (i.e., inverse CPS) translations have been used in both technical and implementation-oriented applications. Existing DS translations treat only untyped or simply-typed call-by-value languages. We introduce call-by-name DS translations for all the systems of the cube. Several applications are sketched. - D-340.
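The call-by-name, Plotkin-style translations mentioned in D-304 above can be sketched on a tiny untyped lambda-calculus (our own toy tuple encoding; the paper's contribution is extending such translations to the cube and to pure type systems):

```python
# Hypothetical sketch of a Plotkin-style call-by-name CPS translation:
#   [x]     = lambda k. x k
#   [\x.M]  = lambda k. k (\x. [M])
#   [M N]   = lambda k. [M] (\f. f [N] k)
# Terms are tuples ('var', x), ('lam', x, body), ('app', m, n).
# For simplicity we assume source variables are never named 'k' or 'f';
# a real translation would generate fresh names.

def cps(term):
    kind = term[0]
    if kind == 'var':
        _, x = term
        return ('lam', 'k', ('app', ('var', x), ('var', 'k')))
    if kind == 'lam':
        _, x, body = term
        return ('lam', 'k', ('app', ('var', 'k'), ('lam', x, cps(body))))
    if kind == 'app':
        _, m, n = term
        return ('lam', 'k',
                ('app', cps(m),
                 ('lam', 'f',
                  ('app', ('app', ('var', 'f'), cps(n)), ('var', 'k')))))
    raise ValueError("unknown term kind: " + kind)
```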

**G Barthe, J Hatcliff, and M H Sørensen**.

*A Notion of Classical Pure Type Systems*.

In *Mathematical Foundations of Programming Semantics* (S Brookes, M Main, A Melton, and M Mislove, eds.). Volume 6 of Electronic Notes in Theoretical Computer Science. Elsevier. 1997. *Keywords*: classical logic, control operators, pure type systems. *Summary*: We present a notion of classical pure type system, which extends the formalism of pure type systems with a double negation operator. - D-335.

**C Berline and K Grue**.

*A kappa-denotational semantics for Map Theory in ZFC+SI*.

Theoretical Computer Science **179**(1-2), pp. 137-202. June, 1997. *Summary*: Map theory, or MT for short, has been designed as an ``integrated'' foundation for mathematics, logic and computer science. By this we mean that most primitives and tools are designed from the beginning to bear the three intended meanings: logical, computational, and set-theoretic. MT was originally introduced in [grue92]. It is based on lambda-calculus instead of logic and sets, and it fulfills Church's original aim of introducing lambda-calculus. In particular, it embodies all of ZFC set theory, including classical propositional and classical first-order predicate calculus. MT also embodies the unrestricted, untyped lambda-calculus, including unrestricted abstraction and unrestricted use of the fixed-point operator. MT is an equational theory. We present here a semantic proof of the consistency of map theory within ZFC+SI, where SI asserts the existence of an inaccessible cardinal. The proof is in the spirit of denotational semantics and relies on mathematical tools which reflect faithfully, and in a transparent way, the intuitions behind map theory. This gives a consistency proof, but also for the first time gives a clear presentation of the semantics of map theory in a traditional framework. Furthermore, the proof seems to indicate that all ``big'' models of (a very weak extension of) lambda-calculus can be expanded to models of MT. From the metamathematical point of view the strength of MT lies somewhere between ZFC and ZFC+SI. The lower bound is proved in [grue92] by means of a syntactical translation of ZFC (including classical propositional and predicate calculus) into map theory, and the upper bound by building an (exceedingly complex) model of map theory within ZFC+SI. The present paper confirms the upper bound by providing much simpler models, the ``canonical models'' of the paper, which are in fact the paradigm of a large class of quite natural models of MT.
That all these models interpret a model of ZFC is a consequence of the syntactic translation, which is a difficult theorem of [grue92]. We can however give here a direct proof of a stronger result, namely that they interpret some (V_sigma, in), where sigma is an inaccessible cardinal. Finally we return to the ``canonical'' models and show that they are adequate for the notion of computation which underlies MT. *Pointers*: BibTeX. dvi.gz. ps.gz. - D-322.

**M Brandt and F Henglein**.

*Coinductive axiomatization of recursive type equality and subtyping*.

In *Proc. 3rd Int'l Conf. on Typed Lambda Calculi and Applications (TLCA), Nancy, France, April 2-4, 1997* (R Hindley, ed.), pp. 63-81. Volume 1210 of Lecture Notes in Computer Science (LNCS). Springer-Verlag. April, 1997. *Keywords*: recursive types, type equality, subtyping, axiomatization, coinduction. *Summary*: We present new sound and complete axiomatizations of type equality and subtype inequality for a first-order type language with regular recursive types. The rules are motivated by coinductive characterizations of type containment and type equality via simulation and bisimulation, respectively. The main novelty of the axiomatization is the fixpoint rule (or coinduction principle). The new axiomatizations give rise to a natural operational interpretation of proofs as coercions and to efficient algorithms for constructing explicit coercions, not only for deciding type equality and type containment. *Pointers*: BibTeX. ps.gz. dvi.gz. - D-325.
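The fixpoint (coinduction) rule of D-322 above has a direct operational reading, sketched here under our own toy representation of regular recursive types as cyclic dictionaries: to decide equality, assume the pair equal and check that the assumption is self-supporting, i.e., that it extends to a bisimulation.

```python
# Hypothetical sketch of coinductive equality of regular recursive types.
# A type is a dict: {'kind': 'base', 'name': ...} or
# {'kind': 'arrow', 'dom': ..., 'cod': ...}, possibly cyclic.

def type_equal(t1, t2, assumed=None):
    if assumed is None:
        assumed = set()
    if (id(t1), id(t2)) in assumed:
        return True                       # coinductive hypothesis
    assumed.add((id(t1), id(t2)))
    if t1['kind'] != t2['kind']:
        return False
    if t1['kind'] == 'base':
        return t1['name'] == t2['name']
    # Arrow type: compare domain and codomain under the extended assumptions.
    return (type_equal(t1['dom'], t2['dom'], assumed) and
            type_equal(t1['cod'], t2['cod'], assumed))
```

For example, the cyclic type mu t. int -> t compares equal to its own unrolling, even though the two cyclic graphs are structurally different.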

**R Glück and J Jørgensen**.

*An Automatic Program Generator for Multi-Level Specialization*.

Lisp and Symbolic Computation **10**(2), pp. 113-158. 1997. *Keywords*: programming languages, program transformation, partial evaluation, generating extensions, binding-time analysis, functional languages, Scheme. *Summary*: Program specialization can divide a computation into several computation stages. This paper investigates the theoretical limitations and practical problems of standard specialization tools, presents multi-level specialization, and demonstrates that, in combination with the cogen approach, it is far more practical than previously supposed. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization. Experimental results show a remarkable reduction of generation time and generator size compared to previous attempts at multi-level specialization by self-application. Our approach to multi-level specialization seems well suited for applications where generation time and program size are critical. - D-341.
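The generating-extension idea behind D-325 above can be illustrated with the standard textbook example (our own minimal sketch, far simpler than the paper's multi-level generator): specializing a two-argument power function with respect to a known exponent yields residual code with the recursion unfolded.

```python
# Hypothetical one-level sketch of a generating extension: given the static
# exponent n, emit residual code for the dynamic argument x.

def power(x, n):
    # The general, unspecialized program.
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    # Build residual source text with the recursion on n unfolded,
    # then compile it to a one-argument function.
    body = "1"
    for _ in range(n):
        body = "x * (" + body + ")"
    return eval("lambda x: " + body)
```

For n = 3 the residual program is `lambda x: x * (x * (x * (1)))`: the interpretive overhead of testing n at run time is gone, which is the point of staging.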

**R Glück and A V Klimov**.

*A regeneration scheme for generating extensions*.

Information Processing Letters **62**(3), pp. 127-134. 1997. *Keywords*: programming languages, program generation, partial evaluation, program composition, metacomputation. *Summary*: A regeneration scheme is presented which shows how to change the computation staging of a generating extension by a two-level metasystem structure using program specialization and program composition. From the results described in this paper we can see that program generation and program degeneration are two extremes of the same transformation dimension. This relates several well-known program transformations under a general scheme. *Pointers*: BibTeX. - D-334.

**K Grue**.

*Church Memorial Volume*.

Festschrift. Kluwer. To appear. *Summary*: Church introduced lambda-calculus at the beginning of the thirties as a foundation of mathematics, and map theory, from around 1992, fulfilled that primary aim. The present paper presents a new version of map theory whose axioms are simpler and better motivated than those of the original version from 1992. The paper focuses on the semantics of map theory and explains this semantics on the basis of kappa-Scott domains. The new version sheds some light on the difference between Russell's and Burali-Forti's paradoxes, and also on why it is consistent to allow non-well-founded sets in a ZF-style system. *Pointers*: BibTeX. dvi.gz. ps.gz. - D-317.

**F Henglein**.

*Breaking through the n^3 barrier: Faster object type inference*.

In *Proc. 4th Int'l Workshop on Foundations of Object-Oriented Languages (FOOL), Paris, France* (B Pierce, ed.). January, 1997. Published electronically at URL http://www.cs.williams.edu/~kim/FOOL/. *Keywords*: object, type, inference, algorithm, asymptotic complexity, Abadi/Cardelli type system. *Summary*: Abadi and Cardelli have presented and investigated object calculi that model most object-oriented features found in actual object-oriented programming languages. The calculi are innate object calculi in that they are *not* based on lambda-calculus. They present a series of type systems for their calculi, four of which are first-order. Palsberg has shown how typability in each one of these systems can be decided in time O(n^3), where n is the size of an untyped object expression, using an algorithm based on dynamic transitive closure. He also shows that each of the type inference problems is hard for polynomial time under log-space reductions. In this paper we show how we can break through the *(dynamic) transitive closure bottleneck* and improve each one of the four type inference problems from O(n^3). Furthermore, for each of the four calculi we present a principal typing characterization for typable object expressions which can be computed in the given times. The key ingredient that lets us ``beat'' the worst-case time complexity induced by using general dynamic transitive closure or similar algorithmic methods is that object subtyping is *invariant*: an object type is a subtype of a ``shorter'' type with a subset of the field names if and only if the common fields have *equal* types. *Pointers*: BibTeX. dvi.gz. ps.gz. - D-319.

**F Henglein and J Rehof**.

*The Complexity of Subtype Entailment for Simple Types*.

TOPPS D-report D-319. DIKU, University of Copenhagen. Universitetsparken 1, DK-2100 Copenhagen, Denmark. January, 1997. *Keywords*: computational complexity, simple type, subtyping, entailment, coNP-completeness, linear time algorithm, atomic constraints. *Summary*: We study the complexity of subtype entailment for simple types over lattices of base types. We show that deciding C |= T <= T' for subtype constraints C and simple types T, T' is coNP-complete, and that deciding C |= A <= B for consistent, atomic C and atomic A, B can be done in linear time. The structural lower (coNP-hardness) and upper (membership in coNP) bounds as well as the optimal algorithm for atomic entailment are new. The coNP-hardness result shows that entailment is strictly harder than (un)satisfiability (in contrast to, for example, propositional logic), which is known to be in PTIME for lattices of base types. The proof of coNP-completeness, on the other hand, gives an improved algorithm for deciding entailment and puts a precise complexity-theoretic marker on the intuitive ``exponential explosion'' in the algorithm. Central to our results is a novel characterization of C |= A <= B for atomic, consistent C. This is the basis for the correctness of the linear-time algorithm, and it shows how to axiomatize C |= A <= B completely by extending the usual proof rules for subtype inference. - D-332.

**N D Jones and M Rosendahl**.

*Higher-Order Minimal Function Graphs*.

Journal of Functional and Logic Programming **1997**(2). The MIT Press. February, 1997. Electronic journal, URL http://www.cs.tu-berlin.de/journal/jflp/articles/1997/1997.html. *Summary*: We present a minimal function graph semantics for a higher-order functional language with applicative evaluation order. The approach extends previous results on minimal function graphs to higher-order functions. The semantics captures the intermediate calls performed during the evaluation of a program. This information may be used in abstract interpretation as a basis for proving the soundness of program analyses. An example of this is the ``closure analysis'' of partial evaluation. *Pointers*: BibTeX. - D-333.

**N D Jones**.

*Combining Abstract Interpretation and Partial Evaluation (brief overview)*.

In *Static Analysis* (P Van Hentenryck, ed.), pp. 396-405. Volume 1302 of Lecture Notes in Computer Science. Springer-Verlag. 1997. *Summary*: Earlier work has shown that partial evaluation of a self-interpreter with respect to an input program can be used to effect program transformation. This talk shows that this technique, augmented by a preliminary abstract interpretation, can yield, among many other transformations: safe, efficient execution of programs in an unsafe language; and a change in program style, from one list representation to another. Further, partial evaluation can be viewed as an on-line polyvariant abstract interpretation, augmented by code generation. This view generalizes conventional projection-based partial evaluation in the direction of Turchin's supercompilation. - D-295.

**N D Jones**.

*Computability and Complexity from a Programming Perspective*.

Foundations of Computing series, 1st edition. MIT Press. Boston, London. 1997. *Keywords*: computability, complexity, programming language. *Summary*: Computability and complexity theory should be of central concern to practitioners as well as theorists. Unfortunately, however, the field is known for its impenetrability. This book's goal is to build a bridge between computability and complexity theory and other areas of computer science, especially programming. In a shift away from the Turing machine- and Gödel number-oriented classical approaches, it uses concepts familiar from programming languages to make computability and complexity more accessible to computer scientists and more applicable to practical programming problems. The fields of computability and complexity theory, and of programming languages and semantics, have a great deal to offer each other. Computability and complexity theory have a breadth, depth, and generality not often seen in programming languages. The programming language community, meanwhile, has a firm grasp of algorithm design, presentation, and implementation. In addition, programming languages sometimes provide computational models that are more realistic in certain crucial aspects than traditional models. New results in the book include a proof that constant time factors *do* matter for its programming-oriented model of computation. (In contrast, Turing machines have a counterintuitive ``constant speedup'' property: that almost any program can be made to run faster, by any amount. Its proof involves techniques irrelevant to practice.) Further results include simple characterizations in programming terms of the central complexity classes PTIME and LOGSPACE, and a new approach to complete problems for NLOGSPACE, PTIME, NPTIME and PSPACE, uniformly based on Boolean programs. This book can be used in a first course in computability and complexity at the upper-division undergraduate or beginning graduate level. It is also appropriate for self-study by experienced programmers who want to increase their understanding of the fundamental theory that underlies computation. *Pointers*: BibTeX. - D-347.

**N D Jones, editor**.

*Proceedings of the Symposium on Principles of Programming Languages (POPL), Paris, France*.

Volume 24, 497 pages. ACM Press. Sponsored by the ACM Special Interest Group SIGPLAN. January, 1997. ISBN 0-89791-853-3. *Keywords*: programming languages. *Summary*: Conference proceedings. *Pointers*: BibTeX. - D-346.

**R Milner, M Tofte, R Harper, and D MacQueen**.

*The Definition of Standard ML (Revised)*.

The MIT Press. 1997. *Pointers*: BibTeX. - D-331.

**T Æ Mogensen and P Sestoft**.

*Partial Evaluation*.

In *Encyclopedia of Computer Science and Technology* (A Kent and J G Williams, eds.), Volume 37, pp. 247-279. Marcel Dekker. 270 Madison Avenue, New York, New York 10016. 1997. *Keywords*: partial evaluation. *Summary*: An overview paper about partial evaluation. *Pointers*: BibTeX. - D-344.

**T Æ Mogensen**.

*2-stage fixed-point iteration in a modified Plotkin powerdomain*.

In *Proceedings of the 1997 Glasgow Functional Programming Workshop* (J O'Donnell, ed.). To appear. 1997. *Keywords*: powerdomains, fixed-points, abstract interpretation. *Summary*: We make a modification to Plotkin's powerdomain construction to obtain more distinctions. This requires a two-stage fixed-point iteration to find minimal fixed-points. *Pointers*: BibTeX. - D-345.

**T Æ Mogensen**.

*Types for 0, 1 or many uses*.

In *Proceedings of IFL'97*, to appear in Lecture Notes in Computer Science. Springer-Verlag. 1997. *Keywords*: usage analysis, types, constraints, functional programming. *Summary*: We extend the usage analysis of Turner, Wadler and Mossin to give a better treatment of data structures. The analysis is defined by a type system that generates a set of constraints which are solved off-line in linear time. - D-326.

**C Mossin**.

*Exact Flow Analysis*.

In *Fourth International Static Analysis Symposium (SAS)*, pp. 250-264. Lecture Notes in Computer Science (LNCS). Springer-Verlag. Paris, France. September, 1997. *Keywords*: data flow analysis, intersection types. *Summary*: We present a flow analysis based on annotated types. The analysis is exact in the following sense: if the analysis predicts a redex, then there exists a reduction sequence (using standard reduction plus context propagation rules) such that this redex will be reduced. The precision is accomplished using intersection typing. It follows that the analysis is non-elementary recursive -- more surprisingly, the analysis is decidable. We argue that the specification of such an analysis provides a good starting point for developing new flow analyses and an important benchmark against which other flow analyses can be compared. Furthermore, we believe that the methods employed for stating and proving exactness are of independent interest: they provide methods for reasoning about the precision of program analyses. - D-327.

**C Mossin**.

*Exact Flow Analysis*.

DIKU. August, 1997. Full version of D-326. *Keywords*: data flow analysis, intersection types. *Summary*: We present a flow analysis based on annotated types. The analysis is exact in the following sense: if the analysis predicts a redex, then there exists a reduction sequence (using standard reduction plus context propagation rules) such that this redex will be reduced. The precision is accomplished using intersection typing. It follows that the analysis is non-elementary recursive -- more surprisingly, the analysis is decidable. We argue that the specification of such an analysis provides a good starting point for developing new flow analyses and an important benchmark against which other flow analyses can be compared. Furthermore, we believe that the methods employed for stating and proving exactness are of independent interest: they provide methods for reasoning about the precision of program analyses. - D-330.

**C Mossin**.

*Flow Analysis of Typed Higher-Order Programs*.

PhD thesis. DIKU, University of Copenhagen. January (revised August), 1997. *Keywords*: data flow analysis, types. *Summary*: The thesis concerns flow analysis of typed higher-order programs. Flow analysis attempts at compile time to predict the creation, flow and use of values. We present a suite of analyses based on type systems: they work by adding extra information to the standard types of the analysed program. The analyses are proven to be sound under any evaluation order. The first two analyses, simple and subtype-based flow analysis, are known from the literature. We present a new modular formulation, allowing subexpressions to be analysed independently of their context without loss of precision. Modularity makes it possible to extend the analyses with polymorphism, which allows named functions to be used differently in different contexts. This leads to improved precision. We show that ML- and fix-polymorphic flow analysis is computable in polynomial time. Finally, we present a flow analysis based on intersection types. We show a completeness result for this analysis: the analysis is precise up to not knowing which branch of a conditional is taken and not discarding any computation. The subtype-based analysis can be given a very natural presentation using graphs. The graph formulation leads to an improvement over existing algorithms: single flow queries can be answered in linear time and full flow analysis can be performed in quadratic time (under the assumption that the size of standard types is bounded). We argue that the presented analyses are not restricted to a specific standard type system, but are applicable to any functional programming language, even dynamically typed languages. - D-328.

**C Mossin**.

*Higher-Order Value Flow Graphs*.

In *Ninth International Symposium on Programming Languages, Implementations, Logics, and Programs (PLILP)*, pp. 159-174. Lecture Notes in Computer Science (LNCS). Springer-Verlag. Southampton, UK. September, 1997. *Keywords*: data flow analysis, efficiency, polyvariance. *Summary*: The concepts of value- and control-flow graphs are important for program analysis of imperative programs. An imperative value flow graph can be constructed by a single pass over the program text. No similar concepts exist for declarative languages; we propose a method for constructing value flow graphs for typed higher-order functional languages. A higher-order value flow graph is constructed by a single pass over an explicitly typed program. By using standard methods, single-source and single-use value flow problems can be answered in linear time, and all-sources/all-uses can be answered in quadratic time (in the size of the flow graph, which is equivalent to the size of the explicitly typed program). The precision of the resulting analysis is equivalent to closure analysis. In practice, it is a reasonable assumption that typed programs are only bigger than their untyped equivalents by a constant factor, hence this is an asymptotic improvement over previous algorithms. We extend the analysis to handle polymorphism, sum types and recursive types. As a consequence, the analysis can handle (explicitly) dynamically typed programs. The analysis is polyvariant for polymorphic definitions. - D-329.

**C Mossin**.

*Higher-Order Value Flow Graphs*.

DIKU. August, 1997. Full version of D-328. *Keywords*: data flow analysis, efficiency, polyvariance. *Summary*: The concepts of value- and control-flow graphs are important for program analysis of imperative programs. An imperative value flow graph can be constructed by a single pass over the program text. No similar concepts exist for declarative languages; we propose a method for constructing value flow graphs for typed higher-order functional languages. A higher-order value flow graph is constructed by a single pass over an explicitly typed program. By using standard methods, single-source and single-use value flow problems can be answered in linear time, and all-sources/all-uses can be answered in quadratic time (in the size of the flow graph, which is equivalent to the size of the explicitly typed program). The precision of the resulting analysis is equivalent to closure analysis. In practice, it is a reasonable assumption that typed programs are only bigger than their untyped equivalents by a constant factor, hence this is an asymptotic improvement over previous algorithms. We extend the analysis to handle polymorphism, sum types and recursive types. As a consequence, the analysis can handle (explicitly) dynamically typed programs. The analysis is polyvariant for polymorphic definitions. - D-286.

**J Rehof**.

*Minimal Typings in Atomic Subtyping*.

In *Proceedings POPL '97, 24th Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Paris, France*, pp. 278-291. ACM Press. January, 1997. *Summary*: This paper studies the problem of simplifying typings and the size-complexity of most general typings in typed programming languages with atomic subtyping. We define a notion of minimal typings relating all typings which are equivalent with respect to instantiation. The notion of instance is the ``lazy'' instance relation defined by Fuh and Mishra, which supports many interesting simplifications. We prove that every typable term has a unique minimal typing, which is the logically most succinct among all equivalent typings. We study completeness properties, with respect to our notion of minimality, of well-known simplification techniques. Drawing upon these results, we prove a tight exponential lower bound for the worst-case dag-size of constraint sets as well as of types in most general typings. To the best of our knowledge, the best previously proven lower bound was linear. - D-337.

**H Seidl and M H Sørensen**.

*Constraints to Stop Deforestation*.

Science of Computer Programming **32**, pp. 73-107. 1998. *Keywords*: higher-order deforestation, termination, constraints. *Summary*: Wadler's deforestation algorithm eliminates intermediate data structures from functional programs. To be suitable for inclusion in a compiler, deforestation must terminate on all programs. Several techniques exist to ensure termination of deforestation on all first-order programs, but general techniques for higher-order programs were introduced only recently, first by Hamilton and then by Marlow. We present a new technique for ensuring termination of deforestation on all higher-order programs that allows useful transformation steps prohibited in Hamilton's and Marlow's techniques. The technique uses a constraint-based higher-order control-flow analysis. We also relate our technique in some detail to previous approaches to termination of first- and higher-order deforestation. - D-302.
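What deforestation (D-337 above) eliminates can be seen in a two-line sketch (a generic illustration of ours, not the paper's algorithm): the composed pipeline allocates an intermediate list that the fused version never builds.

```python
# Hypothetical illustration of deforestation: filter followed by map builds
# an intermediate list; the deforested version fuses both into one pass.

def squares_of_evens(xs):
    evens = [x for x in xs if x % 2 == 0]    # intermediate data structure
    return [x * x for x in evens]

def squares_of_evens_fused(xs):
    # Deforested: same result, single traversal, no intermediate list.
    return [x * x for x in xs if x % 2 == 0]
```

The transformation is semantics-preserving; the papers' concern is guaranteeing that the algorithm performing such fusions terminates on all higher-order programs.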

**H Seidl and M H Sørensen**.

*Constraints to Stop Higher-Order Deforestation*.

In *ACM Symposium on Principles of Programming Languages*, pp. 400-413. ACM Press. 1997. *Keywords*: higher-order deforestation, termination, constraints. *Summary*: Wadler's deforestation algorithm removes intermediate data structures from functional programs. To be appropriate for inclusion in a compiler, deforestation must terminate on all programs. Several techniques exist to ensure termination of deforestation on all first-order programs, but a technique for higher-order programs was introduced only recently, first by Hamilton and later by Marlow. We present a new technique for ensuring termination of deforestation on all higher-order programs that allows useful transformation steps prohibited in Hamilton's and Marlow's techniques. The technique uses a constraint-based higher-order control-flow analysis. - D-305.

**M H Sørensen**.

*Hilbert's Tenth Problem*.

In *Computability and Complexity from a Programming Perspective (D-295)*, pp. 167-185. Foundations of Computing series, 1st edition. MIT Press. Boston, London. 1997. *Keywords*: Hilbert's Tenth Problem, undecidability, Diophantine equations. *Summary*: A presentation is given of the proof that Hilbert's Tenth Problem is undecidable. *Pointers*: BibTeX. - D-339.

**M H Sørensen**.

*Strong Normalization from Weak Normalization in Typed lambda-Calculi*.

Information and Computation **133**(1), pp. 35-71. 1997. *Keywords*: weak normalization, strong normalization, lambda-I, CPS-translation. *Summary*: For some typed lambda-calculi it is easier to prove weak normalization than strong normalization. Techniques to infer the latter from the former have been invented over the last twenty years by Nederpelt, Klop, Khasidashvili, Karr, de Groote, and Kfoury and Wells. However, these techniques infer strong normalization of one notion of reduction from weak normalization of a *more complicated* notion of reduction. This paper presents a new technique to infer strong normalization of a notion of reduction in a typed lambda-calculus from weak normalization of the *same* notion of reduction. The technique is demonstrated to work on some well-known systems, including second-order lambda-calculus and the system of positive, recursive types. It gives hope for a positive answer to the Barendregt-Geuvers conjecture, stating that every pure type system which is weakly normalizing is also strongly normalizing. The paper also analyzes the relationship between the techniques mentioned above, and reviews, in less detail, other techniques for proving strong normalization. - D-342.

**M Tofte, L Birkedal, M Elsman, N Hallenberg, T H O Højfeld, P Sestoft, and P Bertelsen**.

*Programming with Regions in the ML Kit*.

Technical Report. Dept. of Computer Science, University of Copenhagen. 1997. *Pointers*: BibTeX. - D-343.

**M Tofte and J-P Talpin**.

*Region-Based Memory Management*.

Information and Computation **132**(2), pp. 109-176. 1997. *Pointers*: BibTeX. - D-374.

**M Welinder**.

*Partial Evaluation and Correctness*.

PhD thesis. Department of Computer Science, University of Copenhagen. 1997. Available as DIKU Rapport 98/13.

Generated automatically from ftp'able BibTeX sources.

topps-www@diku.dk / October 27, 2010.