- D-157.

**L O Andersen**.

*Binding-Time Analysis and the Taming of C Pointers*.

In *Proc. of ACM Symposium on Partial Evaluation and Semantics-Based Program Manipulation, PEPM'93* (D Schmidt, ed.), pp. 47-58. 1993. *Keywords*: binding-time analysis, C, pointers, partial evaluation. *Summary*: This paper describes an automatic binding-time analysis for the C programming language. *Pointers*: BibTeX. dvi.gz. ps.gz.

- D-187.

**N Andersen and N D Jones**.

*Generalizing Cook's Transformation to Imperative Stack Programs*.

In *Results and Trends in Theoretical Computer Science* (J Karhumäki, H Maurer, and G Rozenberg, eds.), pp. 1-18. Volume 812 of Lecture Notes in Computer Science. Springer-Verlag. 1994. **Obsolete!** Superseded by D-323. *Keywords*: program transformation, 2DPDA, memoization, stack program, push-down program. *Summary*: Cook's construction may be lifted from the world of automata (he showed in 1971 that any two-way deterministic push-down automaton can be simulated in time proportional to the length of its input string) to the world of imperative programs with a push-down stack. Under certain very liberal conditions such a program may be simulated in time proportional to the number of its ``surface configurations''.

- D-178.

**P H Andersen**.

*Partial Evaluation Applied to Ray Tracing*.

1993. Unpublished. *Keywords*: partial evaluation, C, ray tracing. *Summary*: C-mix is used to specialize a ray tracer. The paper investigates how partial evaluation can be used to optimize an already efficient program, and the problems that occur when partial evaluation is applied to large-scale programs. The experiments show that C-mix can optimize the ray tracer to run twice as fast. *Pointers*: BibTeX. dvi.Z. ps.Z.

- D-176.

**L Birkedal and M Welinder**.

*Partial Evaluation of Standard ML*.

Technical Report DIKU-report 93/22. DIKU, Department of Computer Science, University of Copenhagen. October, 1993. *Keywords*: partial evaluation, Standard ML, handwritten cogen. *Summary*: The paper describes off-line partial evaluation of the core of Standard ML. A new approach is used: we have handwritten a cogen which transforms a program into its generating extension. We deal with partially static structures and pattern matching. A binding-time analysis based on constraint solving has been developed. A complete system for partial evaluation of SML has been implemented and we report on some experiments with the system.

- D-181.

**L Birkedal, N Rothwell, M Tofte, and D N Turner**.

*The ML Kit (Version 1)*.

Technical Report DIKU-report 93/14. Department of Computer Science, University of Copenhagen, Universitetsparken 1, DK-2100 Copenhagen. 1993. *Summary*: The ML Kit is an implementation of ``The Definition of Standard ML.'' It provides a highly modular implementation of complete Standard ML. In addition, it contains a compiler from the Standard ML Core Language to an enriched lambda-calculus and a compiler for this language. This technical report explains how to use, read and modify the Kit. *Pointers*: BibTeX.

- D-184.

**L Birkedal**.

*Higher-order Functors and Principal Signatures in Standard ML*.

September, 1993. DIKU Student Project. *Keywords*: Standard ML, higher-order functors. *Summary*: Presents an extension of the Standard ML Modules Language in which specification of functors inside signatures and declaration of functors inside structures are allowed, thereby making it possible to parameterise modules on parameterised modules; in effect providing higher-order modules. Shows that principal signatures still exist and gives an algorithm which computes the principal signature for a signature expression if one exists. Describes how to detect illegal signature expressions. The algorithm has been implemented in the ML Kit (D-181). The work is an extension of Tofte's work on a skeletal higher-order modules language.

- D-167.

**A Bondorf**.

*Similix 5.0 Manual*.

DIKU, University of Copenhagen, Denmark. May, 1993. Included in Similix distribution, 82 pages. *Keywords*: partial evaluation, Scheme. *Summary*: The manual describes Similix, a self-applicable partial evaluator for a higher-order subset of Scheme. A section describes some often-used binding-time improvements. *Pointers*: BibTeX.

- D-169.

**A Bondorf and J Palsberg**.

*Compiling actions by partial evaluation*.

In *FPCA'93, Conference on Functional Programming and Computer Architecture, Copenhagen, Denmark*, pp. 308-317. ACM. June, 1993. **Obsolete!** Superseded by D-283. *Keywords*: action semantics, compiler generation, partial evaluation, binding-time improvements. *Summary*: An action compiler is obtained by partial evaluation of an action interpreter written in Scheme. The code produced by the action compiler is as fast as that produced by previous action compilers. *Pointers*: BibTeX.

- D-153.

**A Bondorf and J Jørgensen**.

*Efficient analyses for realistic off-line partial evaluation*.

Journal of Functional Programming **3**(3), pp. 315-346. July, 1993. *Keywords*: partial evaluation, program analysis, flow analysis, binding-time analysis, poor man's generalization. *Summary*: Three efficient analyses for partial evaluation preprocessing are described. The analyses are based on constraint solving and are shown to have almost-linear complexity. Benchmarks are given. *Pointers*: BibTeX.

- D-164.

**A Bondorf and J Jørgensen**.

*Efficient analyses for realistic off-line partial evaluation: extended version*.

Technical Report 93/4. DIKU, University of Copenhagen, Denmark. 1993. *Keywords*: partial evaluation, program analysis, flow analysis, binding-time analysis, evaluation-order dependency analysis, poor man's generalization. *Summary*: Four efficient analyses for partial evaluation preprocessing are described. The analyses are based on constraint solving and are shown to have almost-linear complexity. Benchmarks are given. Extended version of D-153. *Pointers*: BibTeX. part-1-and-3.dvi.Z. part-2.ps.Z.

- D-170.

**R Glück and A V Klimov**.

*Occam's Razor in Metacomputation: the Notion of a Perfect Process Tree*.

In *Static Analysis. Proceedings* (P Cousot, M Falaschi, G Filé, and A Rauzy, eds.), pp. 112-123. Volume 724 of Lecture Notes in Computer Science. Springer-Verlag. 1993. *Keywords*: partial evaluation, partially-static structures, driving, supercompilation, metacomputation. *Summary*: We introduce the notion of a perfect process tree as a model for the full propagation of information in metacomputation. Starting with constant propagation, we construct step-by-step the driving mechanism used in supercompilation, which ensures the perfect propagation of information. The concept of a simple supercompiler based on perfect driving coupled with a simple folding strategy is explained. As an example we demonstrate that specializing a naive pattern matcher with respect to a fixed pattern obtains the efficiency of a matcher generated by the Knuth-Morris-Pratt algorithm.

- D-165.

**C Hankin, D L Métayer, and D Sands**.

*A Parallel Programming Style and its Algebra of Programs*.

In *Proceedings of Parallel Architectures and Languages Europe (PARLE)*, pp. 367-378. Volume 694 of Lecture Notes in Computer Science. Springer-Verlag. 1993. *Keywords*: parallel programming languages, language constructs, semantics. *Summary*: We present a set of primitive program schemes called tropes, which provide a parallel programming language based on a variant of the Gamma model. Some properties and simple program transformations are studied.

- D-163.

**F Henglein**.

*Dynamic Typing: Syntax and Proof Theory*.

Science of Computer Programming. 1993. Special Issue on European Symposium on Programming 1992 (to appear). *Keywords*: dynamic typing, coercions, dynamically typed lambda-calculus, type inference, coherence, completions, safety, minimality. *Summary*: The dynamically typed lambda-calculus, a type system modeling dynamic type operations in run-time typed languages, is presented, and its equational and reduction-theoretic properties are investigated.

- D-168.

**T P Jensen**.

*Abstract Interpretation in Logical Form*.

PhD thesis. Imperial College, University of London. November, 1992. Available as DIKU Report 93/11 from DIKU, University of Copenhagen. *Pointers*: BibTeX.

- D-160.

**N D Jones**.

*Constant Time Factors Do Matter*.

In *STOC '93, Symposium on Theory of Computing* (S Homer, ed.), pp. 602-611. ACM Press. 1993. Accepted for publication. *Keywords*: complexity, programming languages, constant time factors. *Summary*: It has long been disconcerting that partial evaluation only gives linear speedups, whereas the *constant speedup theorem* from Turing machine based complexity theory says that constant factors make no difference at all in computing power. In this paper the constant speedup theorem is shown false for a simple imperative programming language manipulating tree-structured data: DLIN, the collection of all sets recognizable in linear time by *L*-programs, contains an infinite hierarchy ordered by constant coefficients. Second, a problem is exhibited which is complete for the nondeterministic linear time sets NLIN. Third, a reflexive language *L*↑ is defined, allowing calls to the language's own interpretation function, and even to its running time function. It is proven that any *L*↑ program can be translated into an *L* program slower by at most a constant factor, and that a quite efficient version of Kleene's Second Recursion Theorem holds. The results are robust over a variety of programming languages: DLIN and NLIN are identical for *L*↑, for Schönhage's SMMs (Storage Modification Machines), Knuth/Tarjan's pointer machines, and for successor RAMs. *Pointers*: BibTeX. dvi.Z. ps.Z.

- D-135.

**N D Jones, C K Gomard, and P Sestoft**.

*Partial Evaluation and Automatic Program Generation*.

Prentice Hall International. International Series in Computer Science. June, 1993. ISBN 0-13-020249-5 (pbk). *Keywords*: partial evaluation, binding-time analysis, automatic program transformation and generation, compiler generation, self-application. *Summary*: This book provides a broad coverage of basic and advanced topics in partial evaluation. A wide spectrum of languages is treated, including imperative, functional (first-order, higher-order), and logic programming languages. *Pointers*: BibTeX.

- D-162.

**N D Jones, editor**.

*Special Issue on Partial Evaluation, 1993 (Journal of Functional Programming, vol. 3, no. 4)*.

Cambridge University Press. 1993. 5 accepted papers being revised; to be printed in 1993. *Keywords*: partial evaluation, functional programming, online specialization, binding-time analysis. *Summary*: Contents: *On the Specialization of Online Program Specializers*, E. Ruf and D. Weise; *Program Transformation by Metasystem Transition*, V.F. Turchin; *Efficient Analyses for Realistic Off-Line Partial Evaluation*, A. Bondorf and J. Jørgensen; *Correctness of Binding Time Analysis*, J. Palsberg; *Specifying the Correctness of Binding-Time Analysis*, M. Wand. *Pointers*: BibTeX.

- D-188.

**N D Jones**.

*The Essence of Program Transformation by Partial Evaluation and Driving*.

In *Logic, Language and Computation, a Festschrift in honor of Satoru Takasu* (N D Jones, M Hagiya, and M Sato, eds.), pp. 206-224. Springer-Verlag. April, 1994. *Keywords*: partial evaluation, driving, supercompilation. *Summary*: A new abstract framework is developed for program specialization algorithms that is more general than the projection-based partial evaluation methods formalised in the author's earlier `Re-examination' paper. Turchin's `driving' transformation, known to yield more efficient specialised programs, is in this paper for the first time put on a solid semantic foundation which is not tied to any particular programming language or data structure. The new approach includes both projection-based methods and driving in a natural and simple way, and eases formulating correctness of residual code generation.

- D-161.

**K Malmkjær**.

*Towards Efficient Partial Evaluation*.

In *ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation* (D Schmidt, ed.), pp. 33-43. June, 1993. *Keywords*: partial evaluation, abstract interpretation. *Summary*: In general, a partial evaluator needs to keep track of the tasks that have already been completed or initiated, so that it can recognize when to stop unfolding. In the mix-style polyvariant specialization algorithm this is accomplished by a global log. We consider the purpose of the global log and identify some classes of special cases in which a simpler technique would suffice. We outline how a partial evaluator can take advantage of these special cases and propose analyses to detect them automatically. We discuss examples to illustrate the effect on specialization and to demonstrate that we can even obtain better residual programs.

- D-159.

**T Æ Mogensen**.

*Constructor Specialization*.

In *ACM Symposium on Partial Evaluation and Semantics-Based Program Manipulation* (D Schmidt, ed.), pp. 22-32. June, 1993. *Keywords*: partial evaluation, specialization, data types, constructors. *Summary*: Constructors are specialized with respect to the static part of their arguments, similarly to specialization of functions. This allows specialized data types to be invented and in general improves binding times. *Pointers*: BibTeX. dvi.Z. ps.Z.

- D-158.

**C Mossin**.

*Partial evaluation of General Parsers (Extended abstract)*.

In *ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation* (D Schmidt, ed.), pp. 13-21. June, 1993. *Keywords*: partial evaluation, general parsers, LR. *Summary*: We partially evaluate a simple general LR(k) parsing algorithm. To obtain good results, we rewrite the algorithm using a number of binding-time improvements. The final LR(1) parser has been specialized using Similix-2. The obtained specialized parsers are efficient and compact.

- D-174.

**C Mossin**.

*Polymorphic Binding Time Analysis*.

Master's Thesis. DIKU, University of Copenhagen, Denmark. July, 1993. *Keywords*: binding-time analysis, type inference, polymorphism, constraints. *Summary*: A binding-time analysis with explicit polymorphism in binding-time values is presented. The analysis is proved correct w.r.t. a given specializer. A variant of algorithm W is used to implement the analysis. Constraint-set reductions à la Fuh and Mishra are used to limit the number of binding-time parameters.

- D-175.

**J Rehof and M H Sørensen**.

*The lambda_Delta calculus*.

1993. Unabridged version of D-189. *Keywords*: lambda calculus, control operators, Curry-Howard isomorphism, classical proof theory. *Summary*: By restriction of Felleisen's control operator *F* we obtain an operator Delta and a fully compatible, Church-Rosser control calculus lambda_Delta enjoying a number of desirable properties. It is shown that lambda_Delta contains a strongly normalizing typed subcalculus with a reduction corresponding closely to systems of proof normalization for classical logic. The calculus is more than strong enough to express a call-by-name catch-throw programming paradigm.

- D-146.

**K H Rose**.

*Graph-based Operational Semantics of a Lazy Functional Language*.

In *Term Graph Rewriting: Theory and Practice* (M R Sleep, M J Plasmeijer, and M C D J van Eekelen, eds.), chapter 22, pp. 234-247. John Wiley & Sons. 1992. *Keywords*: term graph rewriting, operational semantics, graph reduction, lazy functional programming languages. *Summary*: Presents Graph Operational Semantics (GOS): a semantic specification formalism based on structural operational semantics and term graph rewriting. Demonstrates the method by specifying the dynamic part of a generic lazy functional programming language.

- D-166.

**K H Rose**.

*Explicit Cyclic Substitution*.

March, 1993. Unpublished. *Keywords*: lambda-calculus, substitutions, recursion. *Summary*: The untyped lambda-calculus enriched with local, possibly recursive definitions, e.g., the let and letrec of functional programming languages, is discussed. It is shown how it may be modeled using conditional term rewriting systems defining ``explicit acyclic and cyclic substitution'' calculi that generalise the explicit substitution calculus of Abadi, Cardelli, Curien, and Lévy in two directions: the systems model traditional lambda-calculus with variables (thus not restricted to closed terms), and the cyclic calculus may model recursion directly.

- D-147.

**K H Rose**.

*Explicit Recursion*.

1993. To appear. *Keywords*: recursion, lambda-calculus, term graph rewriting. *Summary*: Discusses recursion by considering lambda-mu, a lambda-calculus with an explicit fixed-point operator. Shows how theories of this calculus based on substitution are equivalent to theories based on term graph rewriting. *Pointers*: BibTeX.

- D-180.

**M Rosendahl**.

*Higher-Order Chaotic Iteration Sequences*.

In *PLILP'93, Tallinn, Estonia*, pp. 332-345. Lecture Notes in Computer Science. Springer-Verlag. 1993. *Summary*: Chaotic iteration sequences are a method for approximating fixpoints of monotonic functions proposed by Patrick and Radhia Cousot. They may be used in specialisation algorithms for Prolog programs and in abstract interpretation when only parts of a fixpoint result are needed to perform program optimisations. In the first part of this paper we reexamine the definition of chaotic iteration sequences and show how a number of other methods to compute fixpoints may be formulated in this framework. In the second part we extend the technique to higher-order functions.

- D-173.

**D Sands**.

*A Naïve Time Analysis and its Theory of Cost Equivalence*.

DIKU, University of Copenhagen. Submitted for publication, 1993. **Obsolete!** Superseded by D-253. *Summary*: A simple calculus for time analysis of non-strict functional programs is introduced, and a supporting theory of *cost equivalence* is developed. An interface to more compositional methods is given. *Pointers*: BibTeX. dvi.Z. ps.Z.

- D-171.

**D Sands**.

*A Compositional Semantics of Combining Forms for Gamma Programs*.

In *International Conference on Formal Methods in Programming and Their Applications*. Lecture Notes in Computer Science. 1993. *Summary*: A compositional semantics for Gamma is defined, and it is shown how this can be used to verify substitutive refinement laws, and to explore the semantics and properties of a number of new combining forms for the Gamma model. *Pointers*: BibTeX. dvi.Z. ps.Z.

- D-172.

**D Sands**.

*Laws of Parallel Synchronised Termination*.

In *Theory and Formal Methods 1993: Proceedings of the First Imperial College, Department of Computing, Workshop on Theory and Formal Methods* (G L Burn, S J Gay, and M D Ryan, eds.). Springer-Verlag Workshops in Computer Science. Isle of Thorns, UK. 29-31 March, 1993. *Summary*: The inequational partial correctness properties of the combination of sequential and parallel composition operators for Gamma programs are studied, including the ``residual program'' input-output laws originally described by Hankin *et al.* *Pointers*: BibTeX. dvi.Z. ps.Z.

- D-296.

**M H Sørensen**.

*A New Means of Ensuring Termination of Deforestation*.

1993. Unabridged version of D-191. *Keywords*: deforestation, termination, tree grammars. *Summary*: Wadler's *deforestation* algorithm removes intermediate data structures from functional programs, but is only guaranteed to terminate for *treeless* programs. Chin has shown how one can apply deforestation to all first-order programs: *annotate* non-treeless subterms and apply the *extended* deforestation algorithm, which essentially leaves annotated subterms untransformed. We develop a new technique of putting annotations on programs. The basic idea is to compute a finite grammar which approximates the set of terms that the deforestation algorithm encounters. The technique extends Wadler's and Chin's in a certain sense.

- D-177.

**M H Sørensen**.

*A New Means of Ensuring Termination of Deforestation With an Application to Logic Programming*.

In *Global Compilation Workshop*. 1993. Proceedings appeared as a Penn State technical report. *Keywords*: deforestation, termination, tree grammars. *Summary*: Wadler's deforestation algorithm eliminates intermediate data structures from functional programs, but is only guaranteed to terminate for a certain class of programs. Chin has shown how one can apply deforestation to all first-order programs. We develop a new technique of ensuring termination of deforestation for all first-order programs which strictly extends Chin's technique in a sense we make precise. We also show how suitable modifications may render our technique applicable to ensure termination of transformers of logic programs, such as *partial evaluation* as studied by Gallagher and others and the *elimination procedure* studied by Proietti and Pettorossi.

- D-182.

**M Tofte and J-P Talpin**.

*A Theory of Stack Allocation in Polymorphically Typed Languages*.

Technical Report DIKU-report 93/15. Department of Computer Science, University of Copenhagen. 1993. *Summary*: We present a translation scheme for the polymorphically typed call-by-value lambda-calculus. All runtime values, including function closures, are put into *regions*. The store consists of a stack of regions. Region inference and effect inference are used to infer where regions can be allocated and deallocated. Recursive functions are handled using a limited form of polymorphic recursion. The translation is proved correct with respect to a store semantics, which models a region-based run-time system. Experimental results suggest that regions tend to be small, that region allocation is frequent, and that overall memory demands are usually modest, even without garbage collection. *Pointers*: BibTeX.

- D-183.

**M Tofte and J-P Talpin**.

*Implementation of the Typed Call-by-value lambda-calculus using a Stack of Regions*.

In *Proceedings of the 21st Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages*. 1994. (Accepted for publication.) *Summary*: We present a translation scheme for the polymorphically typed call-by-value lambda-calculus. All runtime values, including function closures, are put into *regions*. The store consists of a stack of regions. Region inference and effect inference are used to infer where regions can be allocated and deallocated. Recursive functions are handled using a limited form of polymorphic recursion. The translation is proved correct with respect to a store semantics, which models a region-based run-time system. Experimental results suggest that regions tend to be small, that region allocation is frequent, and that overall memory demands are usually modest, even without garbage collection. *Pointers*: BibTeX.
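Many of the entries above revolve around one idea: specializing a general program with respect to the known ("static") part of its input, yielding a faster residual program. As a concrete illustration only (a hypothetical Python sketch, not drawn from any entry; the names `power` and `specialize_power` are invented), here is the classic exponentiation example, where a hand-written generating extension unfolds the recursion at specialization time:

```python
def power(x, n):
    """General program: x**n by repeated multiplication."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Hand-written generating extension for power: given the static
    exponent n, emit a residual function with the recursion fully
    unfolded, then compile it."""
    body = "1"
    for _ in range(n):
        body = f"x * ({body})"          # unfold one multiplication
    src = f"def power_{n}(x):\n    return {body}\n"
    env = {}
    exec(src, env)                      # compile the residual program
    return env[f"power_{n}"], src

power_3, residual_src = specialize_power(3)
assert power_3(2) == power(2, 3) == 8
print(residual_src)                     # shows the unfolded residual source
```

The residual `power_3` performs no test on `n` and no recursive calls, which is the linear-constant-factor speedup discussed in several of the summaries above; real partial evaluators such as Similix and C-mix automate the binding-time separation that is done by hand here.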

Generated automatically from ftp'able BibTeX sources.

topps-www@diku.dk / October 27, 2010.