- D-232.

**A M Ben-Amram**.

*Lower Bounds on Algebraic Random Access Machines*.

In *Proceedings, 22nd Int. Colloquium on Automata, Languages and Programming (ICALP)* (Z Fülöp and F Gécseg, eds.), pp. 360-371. Volume 944 of LNCS. Springer. 1995.

*Keywords*: lower bounds, algebraic random access machines, real-number RAM, algebraic computation tree.

*Summary*: The method of proving lower bounds for decision problems by topological component counting, previously used with respect to algebraic computation trees, is extended in this work to random access machines with operations {+, −, ×, /} or {+, −, ×, ⌊·⌋}.

*Pointers*: BibTeX.

- D-252.

**A M Ben-Amram**.

*Pointer Machines and Pointer Algorithms: an Annotated Bibliography*.

95/21. DIKU (Department of Computer Science). University of Copenhagen. September, 1995.

*Summary*: The term ``pointer machine'' has been used in the Computer Science literature with various incompatible denotations. To help in disambiguating references, this report explains the different concepts and lists publications referring to each.

- D-233.

**A M Ben-Amram**.

*What is a ``Pointer Machine''?*.

SIGACT News **26**(2), pp. 88-95. June, 1995.

*Keywords*: pointer machine, pointer algorithm.

*Summary*: A note that describes the various contexts where the term ``pointer machine'' has been used, and clarifies the differences.

- D-272.

**L Birkedal and M Welinder**.

*Binding-Time Analysis for Standard ML*.

Journal of Lisp and Symbolic Computation **8**, pp. 191-208. 1995.

*Keywords*: BTA, SML, partial evaluation, constraint, normalization.

*Summary*: Binding-time analysis for Standard ML using constraint normalization.

*Pointers*: BibTeX.

- D-246.

**R Bloo and K H Rose**.

*Preservation of Strong Normalisation in Named Lambda Calculi with Explicit Substitution and Garbage Collection*.

In *CSN-95: Computer Science in the Netherlands*. November, 1995.

*Keywords*: lambda calculus, explicit substitution, strong normalisation, garbage collection.

*Summary*: We show that preservation of strong normalisation of lambda-terms (PSN) in explicit substitution calculi can be analysed in the simpler setting of a named calculus based directly on the definition of substitution (instead of complex de Bruijn-based approaches). We analyse the role of garbage collection and give an exposition of some families of counterexamples to PSN (which include Melliès's), providing insight into when PSN can be expected to hold.

- D-243.

**D Dussart, F Henglein, and C Mossin**.

*Polymorphic Recursion and Subtype Qualifications: Polymorphic Binding-Time Analysis in Polynomial Time*.

In *Proc. 2nd Int'l Static Analysis Symposium (SAS), Glasgow, Scotland* (A Mycroft, ed.), pp. 118-135. Volume 983 of Lecture Notes in Computer Science. Springer-Verlag. September, 1995.

*Keywords*: polymorphism, binding-time analysis, subtyping, qualified types, polymorphic recursion, Kleene-Mycroft iteration.

*Summary*: In a program analysis context we show how Mycroft's iterative method of computing principal types for a type system with polymorphic recursion can be generalized and adapted to work in a setting with subtyping. This not only yields a proof of existence of *principal types* (most general properties), but also an algorithm for computing them. The punch-line of the development is that a very simple modification of the basic algorithm reduces its computational complexity from exponential time to polynomial time relative to the size of the given, explicitly typed program.

- D-229.

**R Glück and J Jørgensen**.

*Efficient Multi-Level Generating Extensions for Program Specialization*.

In *Programming Languages: Implementations, Logics and Programs (PLILP'95)* (S Swierstra and M Hermenegildo, eds.), pp. 259-278. Volume 982 of Lecture Notes in Computer Science. Springer-Verlag. 1995.

*Keywords*: generating extension, partial evaluation, cogen approach, multi-level binding-time analysis, multiple specialization, incremental self-application, multiple self-application.

*Summary*: Multiple program specialization can stage a computation into several computation phases. This paper presents an effective solution for multiple program specialization by generalizing conventional off-line partial evaluation and integrating the ``cogen approach'' with a multi-level binding-time analysis. This novel ``multi-cogen approach'' solves two fundamental problems of self-applicable partial evaluation: the generation-time problem and the generator-size problem. The multi-level program generator has been implemented for a higher-order subset of Scheme. Experimental results show a remarkable reduction of generation time and generator size compared to previous attempts at multiple self-application.

- D-244.

**R Glück, R Nakashige, and R Zöchling**.

*Binding-Time Analysis Applied to Mathematical Algorithms*.

In *System Modelling and Optimization* (J Doležal and J Fidler, eds.), pp. 137-146. Chapman & Hall. 1995.

*Keywords*: partial evaluation, numerical algorithms, scientific computing, Fortran.

*Summary*: This paper shows how a binding-time analysis can be used to identify potential sources for specialization in mathematical algorithms. The method is surprisingly simple and effective. To demonstrate the effectiveness of this approach we used an automatic partial evaluator for Fortran that we developed. Results for five well-known algorithms (Romberg integration, cubic splines interpolation, partial differential equations, fast Fourier transformation, Chebyshev approximation) show that some remarkable speedup factors can be obtained on a uniprocessor architecture.

- D-258.

**R Glück and A V Klimov**.

*Metasystem transition schemes in computer science and mathematics*.

World Futures **45**, pp. 213-243. 1995.

*Keywords*: linguistic modeling, metacomputation, metasystem transition, self-application, theorem proving, program composition, program inversion, program specialization.

*Summary*: We analyze metasystem transitions, which may be observed, or are intentionally used, in computer science and mathematics. Various metasystem structures are present in their activities of executing, creating and manipulating formal linguistic models. The crucial role in automating the creation and manipulation of linguistic models is played by metacomputation, that is, computation over formal models. The manipulation of languages is one of the most essential problems of linguistic modeling. In this paper we analyze different schemes for transforming language definitions by metasystem transition and metacomputation, and present an example of ultra-metasystem transition. We show that self-application of metacomputation, a special case of metasystem transition, plays a central role in linguistic modeling. Finally, we discuss a direct approach to theorem proving using a constructive representation of mathematical knowledge. We derive the basic requirements for metacomputation by a structural analysis of different model definitions using a single concept, namely formal linguistic modeling, and show that three operations must be performed effectively and efficiently by metacomputation: composition, inversion, and specialization of linguistic models.

*Pointers*: BibTeX.

- D-239.

**J Hatcliff**.

*Mechanically Verifying the Correctness of an Offline Partial Evaluator*.

In *Programming Languages: Implementations, Logics and Programs (PLILP'95)* (S Swierstra and M Hermenegildo, eds.), pp. 279-298. Volume 982 of Lecture Notes in Computer Science. Springer-Verlag. 1995.

**Obsolete!** Superseded by D-316.

*Keywords*: partial evaluation, binding-time analysis, executable specifications, automatic theorem proving.

*Summary*: We show that using deductive systems to specify an offline partial evaluator allows its correctness to be mechanically verified. For a lambda-mix-style partial evaluator, we specify binding-time constraints using a natural-deduction logic, and the associated program specializer using natural (aka ``deductive'') semantics. These deductive systems can be directly encoded in the Elf programming language -- a logic programming language based on the LF logical framework. The specifications are then executable as logic programs. This provides a prototype implementation of the partial evaluator. Moreover, since deductive system proofs are accessible as objects in Elf, many aspects of the partial evaluation correctness proofs (e.g., the correctness of binding-time analysis) can be coded in Elf and mechanically verified. This work illustrates the utility of declarative programming and of using deductive systems for defining program specialization systems: by exploiting the logical character of definitions, one may specify, prototype, and mechanically verify correctness via meta-programming -- all within a single framework.

- D-240.

**J Hatcliff and O Danvy**.

*Thunks and the lambda-calculus*.

95/3. DIKU. Copenhagen, Denmark. February, 1995.

*Keywords*: thunks, lambda-calculus, continuations, operational semantics.

*Summary*: In his paper ``Call-by-name, call-by-value and the lambda-calculus,'' Plotkin formalized evaluation strategies and simulations using operational semantics and continuations. In particular, he showed how call-by-name evaluation could be simulated under call-by-value evaluation and vice versa. Since Algol 60, however, call-by-name has been both implemented and simulated with thunks rather than with continuations. We recast this folk theorem in Plotkin's setting, and show that thunks, even though they are simpler than continuations, are sufficient for establishing all the correctness properties of Plotkin's call-by-name simulation. Furthermore, we establish a new relationship between Plotkin's two continuation-based simulations Cn and Cv, by deriving Cn as the composition of our thunk-based simulation T and of Cv. Almost all of the correctness properties of Cn follow from the properties of T and Cv. This simplifies reasoning about call-by-name continuation-passing style. We also give several applications involving factoring continuation-based transformations using thunks.

- D-241.
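The thunk-based simulation studied in entry D-240 above can be illustrated with a small sketch (in Python, not taken from the paper; all names here are hypothetical): an argument is delayed by wrapping it in a nullary function at the call site, and forced by calling it at each use site, recovering call-by-name behaviour under a call-by-value language.

```python
# Hypothetical sketch of the thunk-based simulation of call-by-name
# under call-by-value: arguments are passed as nullary functions
# ("thunks") and evaluated only when used.

def force(thunk):
    return thunk()               # evaluate the delayed argument

def const_one(x):
    return 1                     # x is never forced, so never evaluated

def diverge():
    while True:                  # a computation that never terminates
        pass

def add_forced(x, y):
    return force(x) + force(y)   # each use of an argument forces it

# Under true call-by-value, const_one(diverge()) would loop forever;
# thunking the argument makes the unused computation harmless.
result = const_one(lambda: diverge())
print(result)                          # 1
print(add_forced(lambda: 20, lambda: 22))  # 42
```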

**F Henglein and J Rehof**.

*Safe Polymorphic Type Inference for a Dynamically Typed Language: Translating Scheme to ML*.

In *Proc. ACM Conf. on Functional Programming Languages and Computer Architecture (FPCA), La Jolla, California*. ACM Press. June, 1995.

*Keywords*: polymorphism, dynamic typing, Scheme, ML, translation, coercion, type inference.

*Summary*: We describe a new method for polymorphic type inference for the dynamically typed language Scheme. The method infers both types and explicit *run-time type operations (coercions)* for a given program. It can be used to statically debug Scheme programs and to give a high-level translation to ML, in essence providing an ``embedding'' of Scheme into ML.

- D-242.

**F Henglein and D Sands**.

*A Semantic Model of Binding Times for Safe Partial Evaluation*.

In *Programming Languages: Implementations, Logics and Programs (PLILP'95)* (S Swierstra and M Hermenegildo, eds.), pp. 299-320. Volume 982 of Lecture Notes in Computer Science. Springer-Verlag. 1995.

*Keywords*: binding time, ideal model, partial evaluation, safety, topped domains, monovariance.

*Summary*: We develop a semantic model of binding times that is sufficient to guarantee *safety* of a large class of partial evaluators. In particular, we

- identify problems of existing models of binding-time properties based on projections and partial equivalence relations (PERs), which imply that they are not adequate to prove the safety of simple off-line partial evaluators;
- propose a new model for binding times that avoids the potential pitfalls of projections and PERs;
- specify binding-time annotations justified by a ``collecting'' semantics, and clarify the connection between extensional properties (local analysis) and program annotations (global analysis) necessary to support binding-time analysis;
- prove the safety of a simple but liberal class of monovariant partial evaluators for a higher-order functional language with recursive types, based on annotations justified by the model.

This is the extended version of the PLILP conference article, including all relevant proofs.

*Pointers*: BibTeX. dvi.gz. ps.gz.

- D-256.

**N D Jones**.

*Computability and Complexity from a Programming Perspective: MFPS Preview*.

In *Mathematical Foundations of Programming Semantics* (S Brookes, M Main, A Melton, and M Mislove, eds.), pp. 1-20. Electronic Notes in Theoretical Computer Science. Elsevier. 1995.

*Keywords*: computability, complexity, programming languages.

*Summary*: The author's forthcoming book (re-)proves central results in computability and complexity theory from a programmer-oriented perspective. Some new results have come from an alternative approach using programming language concepts. One: for a computation model more natural than the Turing machine, multiplying the available problem-solving time provably increases problem-solving power (in general *not true* for Turing machines). Another: the class of decision problems solvable by Wadler's ``treeless'' programs, or by *cons-free* programs on Lisp-like lists, is identical to the well-studied complexity class LOGSPACE. A third is that cons-free programs augmented with recursion can solve *all and only* PTIME problems. Paradoxically, these programs often run in exponential time (not a contradiction, since they can be simulated in polynomial time by memoization). This tradeoff indicates a tension between running time and memory space which seems worth further investigation.

*Pointers*: BibTeX.

- D-255.
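The memoization point in entry D-256 above can be illustrated with a standard example (a hypothetical sketch in Python, not from the book): a recursive program whose naive evaluation takes exponential time, but which has only linearly many distinct call arguments, so caching collapses the cost to polynomial.

```python
from functools import lru_cache

# Naively exponential recursion: subproblems are recomputed many
# times, so the call tree grows exponentially in n.
def fib_slow(n):
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

# Memoization caches each distinct argument; only n + 1 calls are
# ever evaluated, so exponential time collapses to linear.
@lru_cache(maxsize=None)
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(90))  # finishes instantly; fib_slow(90) would not
```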

**N D Jones**.

*MIX Ten Years Later*.

In *Proceedings of PEPM '95* (W L Scherlis, ed.), pp. 24-38. ACM Press. 1995.

*Keywords*: partial evaluation, program specialisation, program specialization, self-application, online specialisation, offline specialisation.

*Summary*: This retrospective article discusses developments since 1985, when the first self-applicable partial evaluator was announced. Emphases include the value of equational reasoning even when dealing with entire programs as data objects; historical developments, including online and offline specialisation; advances concerning higher-order functions on the one hand, and machine-near imperative languages on the other; and explicitly keeping track of staging levels and evaluation times. An assessment is given of central problems and progress made towards solving them, and the article ends by discussing criteria for ``what next'' and giving a list of open problems.

- D-228.

**P Kleinrubatscher, A Kriegshaber, R Zöchling, and R Glück**.

*Fortran Program Specialization*.

SIGPLAN Notices **30**(4), pp. 61-70. 1995.

*Keywords*: partial evaluation, imperative languages, Fortran.

*Summary*: We developed a partial evaluator for a subset of Fortran 77. The partial evaluator is based on the off-line approach and uses a binding-time analysis prior to the specialization phase. The source language includes multi-dimensional arrays, procedures and functions, as well as global storage. The system is presented and experimental results for numerical and non-numerical applications are given.

- D-224.

**T Æ Mogensen**.

*Self-Applicable Online Partial Evaluation of the Pure Lambda Calculus*.

In *Proceedings of PEPM '95* (W L Scherlis, ed.), pp. 39-44. ACM Press. 1995.

*Summary*: An online partial evaluator for the pure lambda calculus is shown. This is refined until self-application is possible. Reasonable speedups are obtained.

*Pointers*: BibTeX.

- D-234.

**K Nielsen and M H Sørensen**.

*Call-by-Name CPS-Translation as a Binding-Time Improvement*.

In *Static Analysis Symposium* (A Mycroft, ed.), pp. 296-313. Volume 983 of Lecture Notes in Computer Science. Springer-Verlag. 1995.

*Keywords*: partial evaluation, deforestation, binding-time improvement, evaluation order, program transformation, call-by-name, continuation passing style.

*Summary*: Much attention has been given to the call-by-value continuation passing style (CBV CPS) translation as a tool in partial evaluation, but the call-by-name (CBN) CPS translation has not been investigated. We undertake a systematic investigation of the effect of CBN CPS in connection with partial evaluation and deforestation. First, we give an example where the CBN CPS translation acts as a binding-time improvement to achieve the effects of deforestation using partial evaluation. The same effect cannot be achieved with CBV CPS. Second, we prove formally that the CBN CPS translation together with partial evaluation has the power to achieve all the effects of deforestation. The consequence of these results is a practical tool (the CBN CPS) for improving the results of partial evaluation, as well as an improved understanding of the relation between partial evaluation and deforestation.

- D-249.
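Deforestation, the transformation whose effects entry D-234 above obtains via CBN CPS plus partial evaluation, is the elimination of intermediate data structures between a producer and a consumer. A hypothetical before/after illustration (in Python rather than the paper's functional setting):

```python
# Hypothetical illustration of what deforestation achieves: the
# composed pipeline allocates an intermediate list; the "deforested"
# version fuses producer and consumer into one loop.

def sum_of_squares_naive(n):
    squares = [i * i for i in range(n)]   # intermediate structure
    return sum(squares)

def sum_of_squares_fused(n):
    total = 0
    for i in range(n):                    # producer and consumer fused
        total += i * i
    return total

# Both compute the same result; only the allocation behaviour differs.
assert sum_of_squares_naive(10) == sum_of_squares_fused(10) == 285
```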

**J Rehof**.

*Polymorphic Dynamic Typing. Aspects of Inference and Proof Theory*.

Master's Thesis. DIKU, Department of Computer Science, University of Copenhagen, Denmark. August, 1995.

*Keywords*: dynamic typing, coercion calculi, proof theory, type inference.

*Summary*: We study Henglein's proof-theoretic calculus of dynamic typing, in particular problems of confluence and termination of coercion minimization. We extend Henglein's monomorphic framework to include polymorphism, regular recursive types and discriminative sum types, arriving at a dynamic type system comparable to systems of soft typing in expressive power. We study the problem of completion inference in our generalized setting.

- D-250.

**E Rose**.

*Linear Time Hierarchies and Functional Languages*.

September, 1995. Accepted for publication, ESOP '96.

**Obsolete!** Superseded by D-270.

*Keywords*: linear complexity, problem hierarchy, time-decidable sets, functional languages, the categorical abstract machine, computation model, data structures, selective updating.

*Summary*: In STOC 93, Jones sketched the existence of a hierarchy within linear time-decidable problems in the case of a canonical, first-order functional language based on tree-structured data (F), as well as for the first-order functional language based on graph-structured data (Fsu). We consider the Categorical Abstract Machine (CAM), a canonical machine model for implementing higher-order functional languages. We show the existence of such a hierarchy in the case of the CAM based on tree-structured data (in particular without explicit recursion facilities), as well as in the case of graph-structured data (in particular with explicit recursion facilities). Conclusion: first-order and higher-order functional programs all define the same class of linear-time decidable problems. Establishing the hierarchies for the two CAM versions, however, reveals a tension with respect to complexity hierarchies within linear time-decidable sets when using a computation model for higher-order functional languages with selective update instructions, and one without. Motto: programming with recursive data structures changes the linear time hierarchy.

- D-238.

**K H Rose and R R Moore**.

*Xy-pic Reference Manual*.

3.0 edition. DIKU, University of Copenhagen. Universitetsparken 1, DK-2100 København Ø. June, 1995.

*Keywords*: diagram, picture, graph, arrow, curve, commutative diagram.

*Summary*: Xy-pic is a package for typesetting graphs and diagrams. It is structured as several modules, each defining a custom notation for a particular kind of graphical object or structure. Example objects are arrows, curves, frames, and colouring/rotation on drivers that support it; these can be organised in matrix, directed graph, path, polygon, knot, and 2-cell structures. Xy-pic works with most formats, including LaTeX, AMS-LaTeX, AMS-TeX, and plain TeX, and has been used to typeset complicated diagrams from many application areas including category theory, automata theory, algebra, neural networks, and database theory.

- D-237.
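As a small illustration of the notation documented in entry D-238 above, a commutative square in Xy-pic's matrix-style input might look like the following (a minimal sketch, not an excerpt from the manual; the labels A-D and f, g, h, k are made up):

```latex
% Minimal Xy-pic example: a commutative square typeset with the
% xymatrix notation. \ar[r] draws an arrow one cell to the right,
% \ar[d] one cell down; ^ and _ place labels above/below the arrow.
\documentclass{article}
\usepackage[all]{xy}
\begin{document}
\[
\xymatrix{
  A \ar[r]^{f} \ar[d]_{g} & B \ar[d]^{h} \\
  C \ar[r]_{k}            & D
}
\]
\end{document}
```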

**K H Rose**.

*Xy-pic User's Guide*.

3.0 edition. DIKU, University of Copenhagen. Universitetsparken 1, DK-2100 København Ø. June, 1995.

*Keywords*: diagram, picture, graph, arrow, curve, commutative diagram.

*Summary*: Xy-pic is a package for typesetting graphs and diagrams using Knuth's TeX typesetting system. Xy-pic works with most of the many formats available, e.g., plain TeX, LaTeX, AMS-LaTeX, and AMS-TeX. Several styles of input for various diagram types are supported; they all share a mnemonic notation based on the logical composition of the visual components used.

- D-247.

**K H Rose**.

*Combinatory Reduction Systems with Explicit Substitution*.

In *HOA '95 -- Second International Workshop on Higher-Order Algebra, Logic and Term Rewriting*. Paderborn, Germany. September, 1995.

*Keywords*: combinatory reduction systems, explicit substitution.

*Summary*: We show how the idea of explicit substitution can be applied to give a measure of the computational complexity of rewriting in combinatory reduction systems (CRS) by constructing an extension Rx with explicit substitution for any CRS R. We prove that Rx is confluent if R is orthogonal.

- D-251.

**K H Rose**.

*Deriving Abstract Machines with Sharing*.

October, 1995. Submitted.

*Keywords*: lambda calculus, functional programming, explicit substitution, computational models, abstract machines, sharing, graph reduction.

*Summary*: It is known that abstract machines can be derived from calculi with explicit substitution or environments. In this paper we first show how an abstract machine can be derived from a lambda-calculus with explicit substitution using names. We then show how we can enrich this with *sharing information*, which then ripples through the derivation, to reappear in the abstract machine as a form of `addressing' very similar to that used in graph reduction for functional language implementations.

- D-223.

**D Sands**.

*Higher-Order Expression Procedures*.

In *Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM)*, pp. 178-189. ACM Press. 1995.

*Summary*: We investigate the soundness of a specialisation technique due to Scherlis, expression procedures, in the context of a higher-order non-strict language. An expression procedure is a generalised procedure construct which provides a means of expressing contextual properties of functions, and thereby facilitates the manipulation and contextual specialisation of programs. Generalised programs provide a route to the correct specialisation of recursive functions, and are transformed by means of three basic transformation rules: composition, application and abstraction. In this paper we show that the expression procedure approach is correct for call-by-name evaluation, including lazy data structures and higher-order functions.

- D-253.

**D Sands**.

*A Naïve Time Analysis and its Theory of Cost Equivalence*.

Journal of Logic and Computation **5**(4), pp. 495-541. 1995.

*Keywords*: time analysis, lazy evaluation, operational semantics.

*Summary*: A simple calculus for time analysis of non-strict functional programs is introduced, and a supporting theory of cost equivalence is developed. An interface to more compositional methods is given.

*Pointers*: BibTeX.

- D-222.

**D Sands**.

*Proving the Correctness of Recursion-Based Automatic Program Transformations*.

In *Sixth International Joint Conference on Theory and Practice of Software Development (TAPSOFT)* (P Mosses, M Nielsen, and M Schwartzbach, eds.), pp. 681-695. Volume 915 of Lecture Notes in Computer Science. Springer-Verlag. 1995.

*Summary*: This paper shows how the Improvement Theorem, a semantic condition for the total correctness of program transformation on higher-order functional programs, has practical value in proving the correctness of automatic techniques, including deforestation and supercompilation.

- D-221.

**D Sands**.

*Total Correctness by Local Improvement in Program Transformation*.

In *Proceedings of the 22nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL)*, pp. 221-232. ACM Press. 1995.

*Summary*: This paper presents a condition for the total correctness of transformations on recursive programs which, for the first time, deals with higher-order functional languages (both strict and non-strict) including lazy data structures. The main technical result is an improvement theorem which says that if the local transformation steps are guided by certain optimisation concerns (a fairly natural condition for a transformation), then correctness of the transformation follows. The improvement theorem makes essential use of a formalised improvement theory; as a rather pleasing corollary it also guarantees that the transformed program is a formal improvement over the original. The theorem has immediate practical consequences: (i) it is a powerful tool for proving the correctness of existing transformation methods for higher-order functional programs, without having to ignore crucial factors such as memoization or folding. We have applied the theorem to obtain a particularly simple proof of correctness for a higher-order variant of deforestation. (ii) It yields a simple syntactic method for guiding and constraining the unfold/fold method in the general case so that total correctness (and improvement) is always guaranteed.

*Pointers*: BibTeX. dvi.gz. ps.gz.

- D-254.

**M H Sørensen and R Glück**.

*An Algorithm of Generalization in Positive Supercompilation*.

In *Logic Programming: Proceedings of the 1995 International Symposium* (J Lloyd, ed.), pp. 465-479. MIT Press. 1995.

*Keywords*: positive supercompilation, generalization, termination, partial deduction.

*Summary*: This paper presents a termination technique for positive supercompilation, based on notions from term algebra. The technique is not particularly biased towards positive supercompilation, but also works for deforestation and partial evaluation. It appears to be well suited for partial deduction too. The technique guarantees termination, yet it is not overly conservative. Our technique can be viewed as an instance of Martens' and Gallagher's recent framework for global termination of partial deduction, but it is more general in some important respects, e.g., it uses well-quasi orderings rather than well-founded orderings. Its merits are illustrated on several examples.

- D-299.
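A well-quasi-ordering commonly used for termination checks of the kind described in entry D-254 above is the homeomorphic embedding. A hypothetical sketch (not taken from the paper) on first-order terms represented as nested tuples:

```python
# Hypothetical sketch of the homeomorphic embedding test, a
# well-quasi-ordering often used as a termination "whistle" in
# supercompilation and partial deduction. Terms are
# (function_symbol, arg, ...) tuples; strings are leaves.

def embedded(s, t):
    """True if term s is homeomorphically embedded in term t."""
    # Coupling: same head symbol and each argument embeds pointwise.
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        if all(embedded(a, b) for a, b in zip(s[1:], t[1:])):
            return True
    # Diving: s embeds in some immediate subterm of t.
    if isinstance(t, tuple) and any(embedded(s, sub) for sub in t[1:]):
        return True
    return s == t  # identical leaves embed in each other

# f(x) embeds in f(g(x)) by diving into g's argument.
assert embedded(("f", "x"), ("f", ("g", "x")))
# g(x) does not embed in f(x): different head, no matching subterm.
assert not embedded(("g", "x"), ("f", "x"))
```

When some earlier term in a sequence of specialized terms embeds in the current one, the whistle blows and generalization is triggered, which is what guarantees global termination.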

**M H Sørensen**.

*Embeddings and Infinite Reduction Paths in Untyped lambda-calculus*.

1995. Presented at the 2nd International Workshop on Termination, Le Bresse, France.

*Keywords*: lambda-calculus, strong normalization, term embeddings.

*Summary*: The term Ω ≡ ω ω with ω ≡ λx. x x is the simplest term in untyped lambda-calculus with an infinite reduction path: Ω → Ω → … In every step of the reduction path the term reduces to itself. This characterization does not apply to all infinite reduction paths. In this paper we develop two characterizations which are true for all infinite reduction paths. The first states that in any infinite reduction path, some term must be embedded (in a certain sense) in a subsequent term. The second states that Ω is embedded (in a certain sense) in every term of every infinite reduction path.

- D-245.

**P Thiemann and R Glück**.

*The generation of a higher-order online partial evaluator*.

In *Fuji International Workshop on Functional and Logic Programming* (M Takeichi and T Ida, eds.), pp. 239-253. World Scientific. 1995.

*Keywords*: partial evaluation, program generation, specializer projections, self-application, interpreter, termination.

*Summary*: We address the problem of generating an online partial evaluator for a higher-order, functional language from an appropriate interpreter using a state-of-the-art offline partial evaluator. To ensure termination of the generated online specializer, the interpreter computes a self-embedding property on closure values and data structures. This guarantees termination whenever there is no static loop in the program to be specialized. We obtain a transformer for higher-order removal and higher-order arity raising (redundancy elimination) for free, by running the online specializer on a program with completely dynamic inputs.

*Pointers*: BibTeX.

- D-231.

**D N Turner, P Wadler, and C Mossin**.

*Once upon a type*.

In *7th International Conference on Functional Programming and Computer Architecture*, pp. 1-11. ACM Press. La Jolla, California. June, 1995.

*Keywords*: usage analysis, call-by-need, linear logic.

*Summary*: We extend a Hindley-Milner type system with *uses*, yielding a type-inference based program analysis which determines when values are accessed at most once. The analysis is proven sound with respect to call-by-need reduction.

*Pointers*: BibTeX. dvi.Z. dvi.gz. ps.Z.

- D-248.

**M Welinder**.

*Very Efficient Conversions*.

In *The 8th International Workshop on Higher Order Logic Theorem Proving and Its Applications, Aspen Grove, Utah* (E T Schubert, P J Windley, and J Alves-Foss, eds.), pp. 340-352. Volume 971 of Lecture Notes in Computer Science. Springer-Verlag. September, 1995.

*Keywords*: theorem proving, conversions, generating extension.

*Summary*: Using program transformation techniques from the field of partial evaluation, an automatic tool for generating very efficient conversions from equality-stating theorems has been implemented. In the situation where a HOL user would normally employ the built-in function GEN_REWRITE_CONV, a function that directly produces a conversion of the desired functionality, this article demonstrates how producing the conversion in the form of a program text instead of as a closure can lead to significant speed-ups. The HOL system uses a set of 31 simplifying equations on a very large number of intermediate terms derived, e.g., during backwards proofs. For this set, the conversion generated by the two-step method is about twice as fast as the method currently used. When the new conversion is installed, tests show that the overall running times of HOL proofs are reduced by about 10%. Apart from the speed-up this is completely invisible to the user. With cooperation from the user further speed-up is possible.

Generated automatically from ftp'able BibTeX sources.

topps-www@diku.dk / October 27, 2010.