link.springer.com978-3-540-44846-4/1.pdf · Bibliography

Bibliography

Aho, A. V., Sethi, R., & Ullman, J. D. (1986). Compilers: Principles, Techniques, and Tools. Addison-Wesley, Reading, MA.

Allen, J., Hendler, J., & Tate, A. (Eds.). (1990). Readings in Planning. Morgan Kaufmann, San Mateo, CA.

Allouche, J.-P. (1994). Note on the cyclic Towers of Hanoi. Theoretical Computer Science, 123, 3–7.

Anderson, J. R. (1983). The Architecture of Cognition. Harvard University Press, Cambridge, MA.

Anderson, J. R. (1995). Cognitive Psychology and Its Implications. Freeman, New York.

Anderson, J. R., Conrad, F. G., & Corbett, A. T. (1989). Skill acquisition and the LISP tutor. Cognitive Science, 13, 467–505.

Anderson, J. R., & Lebière, C. (1998). The atomic components of thought. Erlbaum, Mahwah, NJ.

Anderson, J., & Thompson, R. (1989). Use of analogy in a production system architecture. In Vosniadou, S., & Ortony, A. (Eds.), Similarity and Analogical Reasoning, pp. 267–297. Cambridge University Press.

Angluin, D. (1981). A note on the number of queries needed to identify regular languages. Information and Control, 51 (1), 76–87.

Angluin, D. (1984). Inductive inference: Efficient algorithms. In Biermann, A., Guiho, G., & Kodratoff, Y. (Eds.), Automatic Program Construction Techniques, pp. 503–516. Macmillan, New York.

Anzai, Y., & Simon, H. (1979). The theory of learning by doing. Psychological Review, 86, 124–140.

Atkinson, M. D. (1981). The cyclic Towers of Hanoi. Information Processing Letters, 13 (3), 118–119.

Atwood, M. E., & Polson, P. G. (1976). A process model for water jug problems. Cognitive Psychology, 8, 191–216.

Bacchus, F., & Kabanza, F. (1996). Using temporal logic to control search in a forward chaining planner. In New Directions in AI-planning, pp. 141–153. IOS Press, Amsterdam.

Bacchus, F., Kautz, H., Smith, D. E., Long, D., Geffner, H., & Koehler, J. (2000). AIPS-00 Planning Competition, Breckenridge, CO. http://www.cs.toronto.edu/aips2000/.

Barr, A., & Feigenbaum, E. A. (Eds.). (1982). The Handbook of Artificial Intelligence, Vol. 2. William Kaufmann, Los Altos, CA.

Bates, J., & Constable, R. (1985). Proofs as programs. ACM Transactions on Programming Languages and Systems, 7 (1), 113–136.

Bauer, F., Broy, M., Möller, B., Pepper, P., Wirsing, M., et al. (1985). The Munich Project CIP. Vol. I: The Wide Spectrum Language CIP-L. Springer.

Bauer, F., Ehler, H., Horsch, R., Möller, B., Partsch, H., Paukner, O., & Pepper, P. (1987). The Munich Project CIP. Vol. II: The Transformation System CIP-S, LNCS 292. Springer.

Bauer, M. (1975). A basis for the acquisition of procedures from protocols. In Proc. 4th International Joint Conference on Artificial Intelligence (4th IJCAI), pp. 226–231.

Bibel, W. (1980). Syntax-directed, semantic-supported program synthesis. Artificial Intelligence, 14 (3), 243–261.

Bibel, W. (1986). A deductive solution for plan generation. New Generation Computing, 4 (2), 115–132.

Bibel, W., Korn, D., Kreitz, C., & Kurucz, F. (1998). A multi-level approach to program synthesis. In Proc. 7th International Workshop on Logic Program Synthesis and Transformation, Vol. 1463 of LNCS, pp. 1–27. Springer.

Biermann, A. W. (1978). The inference of regular LISP programs from examples. IEEE Transactions on Systems, Man, and Cybernetics, 8 (8), 585–600.

Biermann, A. W., Guiho, G., & Kodratoff, Y. (Eds.). (1984). Automatic Program Construction Techniques. Macmillan, New York.

Blum, A., & Furst, M. (1997). Fast planning through planning graph analysis. Artificial Intelligence, 90 (1-2), 281–300.

Bonet, B., & Geffner, H. (1999). Planning as heuristic search: New results. In Proc. European Conference on Planning (ECP-99), Durham, UK. Springer.

Boström, H. (1996). Theory-guided induction of logic programs by inference of regular languages. In Saitta, L. (Ed.), Proc. 13th International Conference on Machine Learning (ICML-96), pp. 46–53. Morgan Kaufmann.

Boström, H. (1998). Predicate invention and learning from positive examples only. In Proc. 10th European Conference on Machine Learning (ECML-98), No. 1398 in LNAI, pp. 226–237. Springer.

Boyer, R., & Moore, J. (1975). Proving theorems about LISP functions. Journal of the ACM, 22 (1), 129–144.

Briesemeister, L., Scheffer, T., & Wysotzki, F. (1996). A concept based algorithmic model for skill acquisition. In Schmid, U., Krems, J., & Wysotzki, F. (Eds.), Proc. 1st European Workshop on Cognitive Modeling (14.-16.11.96), TU Berlin. http://ki.cs.tu-berlin.de/EuroCog/cm-program.html.

Broy, M., & Pepper, P. (1981). Program development as formal activity. IEEE Transactions on Software Engineering, SE-7 (1), 14–22.

Bryant, R. E. (1986). Graph-based algorithms for boolean function manipulation. IEEE Transactions on Computers, 35, 677–691.

Buchanan, B. G., & Wilkins, D. C. (Eds.). (1993). Readings in Knowledge Acquisition and Learning. Morgan Kaufmann, San Mateo, CA.

Bundy, A. (1988). The use of explicit plans to guide inductive proofs. In Proc. 9th Conference on Automated Deduction (CADE), pp. 111–120. Springer.

Bunke, H., & Messmer, B. T. (1994). Similarity measures for structured representations. In Wess, S., Althoff, K.-D., & Richter, M. (Eds.), Proc. 1st European Workshop on Topics in Case-Based Reasoning, Vol. 837 of LNAI, pp. 106–118. Springer.

Burch, J. R., Clarke, E. M., McMillan, K. L., & Hwang, J. (1992). Symbolic model checking: 10^20 states and beyond. Information and Computation, 98 (2), 142–170.

Burghardt, J., & Heinz, B. (1996). Implementing anti-unification modulo equational theory. Tech. rep., Arbeitspapiere der GMD. ftp://ftp.first.gmd.de/pub/jochen/Papers/ap1006.dvi.gz.

Burns, B. (1996). Meta-analogical transfer: Transfer between episodes of analogical reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22 (4), 1032–1048.

Burstall, R., & Darlington, J. (1977). A transformation system for developing recursive programs. JACM, 24 (1), 44–67.

Burton, C. T. P. (1992). Program morphisms. Formal Aspects of Computing, 4, 693–726.

Bylander, T. (1994). The computational complexity of Strips planning. Artificial Intelligence, 69, 165–204.

Carbonell, J. (1986). Derivational analogy: A theory of reconstructive problem solving and expertise acquisition. In Michalski, R., Carbonell, J., & Mitchell, T. (Eds.), Machine Learning – An Artificial Intelligence Approach, Vol. 2, pp. 371–392. Morgan Kaufmann, Los Altos, CA.

Cheatham, T. E. (1984). Reusability through program transformation. IEEE Transactions on Software Engineering, 10 (5), 589–594.

Chomsky, N. (1959). Review of Skinner’s ‘Verbal Behavior’. Language, 35, 26–58.

Christofides, N. (1975). Graph Theory – An Algorithmic Approach. Academic Press, London.

Cimatti, A., Roveri, M., & Traverso, P. (1998). Automatic OBDD-based generation of universal plans in non-deterministic domains. In Proc. 15th National Conference on Artificial Intelligence (AAAI-98), pp. 875–881, Menlo Park. AAAI Press.

Clement, E., & Richard, J.-F. (1997). Knowledge of domain effects in problem representation: The case of Tower of Hanoi isomorphs. Thinking and Reasoning, 3 (2), 133–157.

Cohen, W. W. (1992). Desiderata for generalization-to-n algorithms. In Proc. Int. Workshop on Analogical and Inductive Inference (AII ’92), Dagstuhl Castle, Germany, Vol. 642 of LNAI, pp. 140–150. Springer.

Cohen, W. W. (1995). Pac-learning recursive logic programs: Efficient algorithms. Journal of Artificial Intelligence Research, 2, 501–539.

Cohen, W. (1998). Hardness results for learning first-order representations and programming by demonstration. Machine Learning, 30, 57–97.

Cormen, T., Leiserson, C. E., & Rivest, R. (1990). Introduction to Algorithms. MIT Press.

Courcelle, B. (1990). Recursive applicative program schemes. In van Leeuwen, J. (Ed.), Handbook of Theoretical Computer Science, Vol. A, pp. 459–492. Elsevier.

Courcelle, B., & Nivat, M. (1978). The algebraic semantics of recursive program schemes. In Winkowski, J. (Ed.), Proc. 7th Symposium on Mathematical Foundations of Computer Science, Vol. 64 of LNCS, pp. 16–30. Springer.

Cypher, A. (Ed.). (1993). Watch What I Do: Programming by Demonstration. MIT Press, Cambridge, MA.

Davey, B. A., & Priestley, H. A. (1990). Introduction to Lattices and Order. Cambridge University Press.

Dean, T., Basye, K., & Shewchuk, J. (1993). Reinforcement learning for planning and control. In Minton, S. (Ed.), Machine Learning Methods for Planning, chap. 3, pp. 67–92. Morgan Kaufmann.

Dershowitz, N. (1986). Programming by analogy. In Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.), Machine Learning – An Artificial Intelligence Approach, Vol. 2, pp. 393–422. Morgan Kaufmann, Los Altos, CA.

Dershowitz, N., & Jouannaud, J.-P. (1990). Rewrite systems. In van Leeuwen, J. (Ed.), Handbook of Theoretical Computer Science, Vol. B, pp. 243–320. Elsevier.

Do, M. B., & Kambhampati, S. (2000). Solving planning-graph by compiling it into CSP. In Proc. 5th International Conference on Artificial Intelligence Planning and Scheduling (AIPS 2000), pp. 82–91. AAAI Press.

Dold, A. (1995). Representing, verifying and applying software development steps using the PVS system. In Alagar, V. S., & Nivat, M. (Eds.), Proc. Algebraic Methodology and Software Technology (AMAST’95), Vol. 936 of LNCS, pp. 431–460. Springer.

Duncker, K. (1945). On problem solving. Psychological Monographs, 58.

Eckes, T., & Roßbach, H. (1980). Clusteranalysen (Cluster analyses). Kohlhammer, Stuttgart.

Edelkamp, S. (2000). Heuristic search planning with BDDs. In Proc. 14th Workshop ”New Results in Planning, Scheduling and Design” (PuK2000), Berlin. http://www-is.informatik.uni-oldenburg.de/~sauer/puk2000/paper.html.

Ehrig, H., & Mahr, B. (1985). Fundamentals of Algebraic Specification 1 – Equations and Initial Semantics. Springer, Berlin.

Evans, T. G. (1968). A program for the solution of a class of geometric-analogy intelligence-questions. In Minsky, M. (Ed.), Semantic Information Processing, pp. 271–353. MIT Press.

Falkenhainer, B., Forbus, K., & Gentner, D. (1989). The structure mapping engine: Algorithm and example. Artificial Intelligence, 41, 1–63.

Feather, M. S. (1987). A survey and classification of some program transformation approaches and techniques. In Meertens, L. G. L. T. (Ed.), Program Specification and Transformation, pp. 165–198. North-Holland.

Field, A. J., & Harrison, P. G. (1988). Functional Programming. Addison-Wesley, Reading, MA.

Fikes, R., & Nilsson, N. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2 (3), 189–208.

Fink, E., & Yang, Q. (1997). Automatically selecting and using primary effects in planning: Theory and experiments. Artificial Intelligence Journal, 89, 285–315.

Flener, P. (1995). Logic Program Synthesis from Incomplete Information. Kluwer Academic Press, Boston.

Flener, P., Popelinsky, L., & Stepankova, O. (1994). ILP and automatic programming: Towards three approaches. In Proc. of ILP-94, pp. 351–364.

Flener, P., & Yilmaz, S. (1999). Inductive synthesis of recursive logic programs: Achievements and prospects. Journal of Logic Programming, 41 (2–3), 141–195.

Fox, M., & Long, D. (1998). The automatic inference of state invariants in TIM. Journal of Artificial Intelligence Research, 9, 367–421.

Frühwirth, T., & Abdennadher, S. (1997). Constraint-Programming. Springer, Berlin.

Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1996). Design Patterns: Elements of Reusable Object-oriented Software. Addison-Wesley, Reading, MA.

Garey, M. R., & Johnson, D. S. (1979). Computers and Intractability – A Guide to the Theory of NP-completeness. Freeman.

Geffner, H. (2000). Functional Strips: A more flexible language for planning and problem solving. In Minker, J. (Ed.), Logic-Based Artificial Intelligence. Kluwer, Dordrecht. Earlier version presented at ”Logic-based AI Workshop”, Washington D.C., June 1999.

Gelfond, M., & Lifschitz, V. (1993). Representing action and change by logic programs. Journal of Logic Programming, 17, 301–322.

Gentner, D. (1980). The Structure of Analogical Models in Science. Bolt Beranek and Newman Inc., Cambridge, MA.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, 155–170.

Gentner, D., Brem, S., Ferguson, R., Markman, A. B., Levidow, B., Wolff, P., & Forbus, K. (1997). Analogical reasoning and conceptual change: A case study of Johannes Kepler. Journal of the Learning Sciences, 6 (1), 3–40.

Gentner, D., & Landers, R. (1985). Analogical access: A good match is hard to find. In Proc. Annual Meeting of the Psychonomic Society, Boston.

Gentner, D., Ratterman, M. J., & Forbus, K. D. (1993). The roles of similarity in transfer: Separating retrievability from inferential soundness. Cognitive Psychology, 25 (4), 524–575.

Georgeff, M. P. (1987). Planning. Annual Review of Computer Science, 2, 359–400. Also in J. Allen, J. Hendler and A. Tate (Eds.), Readings in Planning, Morgan Kaufmann, 1990.

Gholson, B., Smither, D., Buhrman, A., Duncan, M. K., & Pierce, K. (1996). The sources of children’s reasoning errors during analogical problem solving. Applied Cognitive Psychology, Special Issue: ”Reasoning Processes”, 10, 85–97.

Gick, M., & Holyoak, K. (1980). Analogical problem solving. Cognitive Psychology, 12, 306–355.

Gick, M., & Holyoak, K. (1983). Schema induction and analogical transfer. Cognitive Psychology, 14, 1–38.

Giesl, J. (2000). Context-moving transformations for function verification. In Proc. 9th International Workshop on Logic-based Program Synthesis and Transformation (LOPSTR ’99). Springer.

Ginsberg, M. (1989). Universal planning: An (almost) universally bad idea. AI Magazine, 10 (4), 40–44.

Giunchiglia, F., & Traverso, P. (1999). Planning as model checking. In Biundo, S., & Fox, M. (Eds.), Proc. 5th European Conference on Planning (ECP’99), Durham, UK, LNCS. Springer.

Gold, E. M. (1978). Complexity of automaton identification from given data. Information and Control, 37, 302–320.

Gold, E. (1967). Language identification in the limit. Information and Control, 10, 447–474.

Green, C. (1969). Application of theorem proving to problem solving. In Proc. 1st International Joint Conference on Artificial Intelligence (IJCAI-69), pp. 342–344.

Green, C., Luckham, D., Balzer, R., Cheatham, T., & Rich, C. (1983). Report on a knowledge-based software assistant. Tech. rep. KES.U.83.2, Kestrel Institute, Palo Alto, CA.

Green, C., & Barstow, D. (1978). On program synthesis knowledge. Artificial Intelligence, 10, 241–279.

Guessarian, I. (1992). Trees and algebraic semantics. In Nivat, M., & Podelski, A. (Eds.), Tree Automata and Languages, pp. 291–310. Elsevier.

Hasker, R. W. (1995). The Replay of Program Derivations. Ph.D. thesis, Univ. of Illinois at Urbana-Champaign.

Haslum, P., & Geffner, H. (2000). Admissible heuristics for optimal planning. In Proc. 5th International Conference on Artificial Intelligence Planning and Scheduling (AIPS 2000), pp. 150–158. AAAI Press.

Hickman, A. K., & Larkin, J. H. (1990). Internal analogy: A model of transfer within problems. In Proc. 12th Annual Conference of the Cognitive Science Society (CogSci-90), pp. 53–60, Hillsdale, NJ. Lawrence Erlbaum.

Hinman, P. (1978). Recursion-Theoretic Hierarchies. Springer, New York.

Hinz, A. M. (1996). Square-free Tower of Hanoi sequences. Enseign. Math., 2 (42), 257–264.

Hofstadter, D., & The Fluid Analogies Research Group (1995). Fluid Concepts and Creative Analogies. Basic Books, New York.

Holland, J., Holyoak, K., Nisbett, R., & Thagard, P. (1986). Induction – Processes of Inference, Learning, and Discovery. MIT Press, Cambridge, MA.

Holyoak, K. J., & Thagard, P. (1989). Analogical mapping by constraint satisfaction. Cognitive Science, 13, 295–355.

Holyoak, K., & Koh, K. (1987). Surface and structural similarity in analogical transfer. Memory and Cognition, 15, 332–440.

Hopcroft, J., & Ullman, J. (1980). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley.

Huang, Y., Selman, B., & Kautz, H. (2000). Learning declarative control rules for constraint-based planning. In Proc. 17th International Conference on Machine Learning (ICML-2000), Stanford, CA, pp. 415–422. Morgan Kaufmann.

Huet, G. (1976). Résolution d’équations dans les langages d’ordre 1, 2, . . . , ω. Ph.D. thesis, Université de Paris VII.

Hummel, J., & Holyoak, K. (1997). Distributed representation of structure: A theory of analogical access and mapping. Psychological Review, 104 (3), 427–466.

Idestam-Almquist, P. (1993). Generalization under implication by recursive anti-unification. In Utgoff, P. (Ed.), Proc. 10th International Conference on Machine Learning (ICML-93), pp. 151–158. Morgan Kaufmann.

Idestam-Almquist, P. (1995). Efficient induction of recursive definitions by structural analysis of saturations. In De Raedt, L. (Ed.), Proc. 5th International Workshop on Inductive Logic Programming (ILP-95), Leuven, pp. 77–94.

Indurkhya, B. (1992). Metaphor and Cognition. Kluwer, Dordrecht, The Netherlands.

Jamnik, M., Kerber, M., & Benzmüller, C. (2000). Towards learning new methods in proof planning. In Proc. of the Calculemus Symposium: Systems for Integrated Computation and Deduction, St. Andrews, Scotland, UK. ftp://ftp.cs.bham.ac.uk/pub/authors/M.Kerber/TR/CSRP-00-09.ps.gz.

Jensen, R. M., & Veloso, M. M. (2000). OBDD-based universal planning for synchronized agents in non-deterministic domains. Journal of Artificial Intelligence Research, 13, 189–226.

Johnson, W. L. (1988). Modelling programmers’ intentions. In Self, J. (Ed.), Artificial Intelligence and Human Learning – Intelligent Computer-Aided Instruction, pp. 374–390. Chapman and Hall, London.

Johnson, W., & Feather, M. (1991). Using evolution transformations to construct specifications. In Lowry, M., & McCartney, R. (Eds.), Automating Software Design, chap. 4, pp. 65–92. AAAI Press/MIT Press, Menlo Park.

Jouannaud, J. P., & Kodratoff, Y. (1979). Characterization of a class of functions synthesized from examples by a Summers-like method using a ‘B.M.W.’ matching technique. In Proc. International Joint Conference on Artificial Intelligence (IJCAI-79), pp. 440–447. Morgan Kaufmann.

Kalmár, Z., & Szepesvári, C. (1999). An evaluation criterion for macro learning and some results. Tech. rep. TR99-01, Mindmaker Ltd., Budapest. http://victoria.mindmaker.hu/~szepes/papers/macro-tr99-01.ps.gz.

Kambhampati, S. (2000). Planning graph as a (dynamic) CSP: Exploiting EBL, DDB and other CSP search techniques in Graphplan. Journal of Artificial Intelligence Research, 12, 1–34.

Kautz, H., & Selman, B. (1996). Pushing the envelope: Planning, propositional logic and stochastic search. In Proc. 13th National Conference on Artificial Intelligence and 8th Innovative Applications of Artificial Intelligence Conference, pp. 1194–1201, Menlo Park. AAAI Press/MIT Press.

Keane, M. (1996). On adaptation in analogy: Tests of pragmatic importance and adaptability in analogical problem solving. The Quarterly Journal of Experimental Psychology, 49A (4), 1062–1085.

Keane, M., Ledgeway, T., & Duff, S. (1994). Constraints on analogical mapping: A comparison of three models. Cognitive Science, 18, 387–438.

Knuth, D. E. (1992). Convolution polynomials. The Mathematica Journal, 2 (4), 67–78.

Kodratoff, Y., & Fargues, J. (1978). A sane algorithm for the synthesis of LISP functions from example problems: The Boyer and Moore algorithm. In Proc. AISE Meeting, Hambourg, pp. 169–175.

Kodratoff, Y., Franova, M., & Partridge, D. (1989). Why and how program synthesis?. In Proc. International Workshop on Analogical and Inductive Inference (AII ’89), LNCS, pp. 45–59. Springer.

Koedinger, K., & Anderson, J. (1990). Abstract planning and perceptual chunks: Elements of expertise in geometry. Cognitive Science, 14, 511–550.

Koehler, J. (1998). Planning under resource constraints. In Prade, H. (Ed.), Proc. 13th European Conference on Artificial Intelligence (ECAI-98), pp. 489–493. Wiley.

Koehler, J., Nebel, B., & Hoffmann, J. (1997). Extending planning graphs to an ADL subset. In Proc. European Conference on Planning (ECP-97), pp. 273–285. Springer. Extended version as Technical Report No. 88/1997, University of Freiburg.

Kolbe, T., & Walther, C. (1995). Second-order matching modulo evaluation: A technique for reusing proofs. In Mellish, C. S. (Ed.), Proc. 14th International Joint Conference on Artificial Intelligence (IJCAI-95), pp. 190–195. Morgan Kaufmann.

Kolodner, J. (1993). Case-based Reasoning. Morgan Kaufmann, San Mateo, CA.

Korf, R. E. (1985). Macro-operators: A weak method for learning. Artificial Intelligence, 26, 35–77.

Koza, J. (1992). Genetic programming: On the programming of computers by means of natural selection. MIT Press, Cambridge, MA.

Kreitz, C. (1998). Program synthesis. In Bibel, W., & Schmitt, P. (Eds.), Automated Deduction – A Basis for Applications, Vol. III, chap. III.2.5, pp. 105–134. Kluwer.

Laborie, P., & Ghallab, M. (1995). Planning with sharable resource constraints. In Proc. of the 14th IJCAI, pp. 1643–1649. Morgan Kaufmann.

Lassez, J.-L., Maher, M. J., & Marriott, K. (1988). Unification revisited. In Boscarol, M., Aiello, L. C., & Levi, G. (Eds.), Proc. Workshop Foundations of Logic and Functional Programming, Vol. 306 of LNCS, pp. 67–113. Springer.

Lavrač, N., & Džeroski, S. (1994). Inductive Logic Programming: Techniques and Applications. Ellis Horwood, London.

Le Blanc, G. (1994). BMWk revisited: Generalization and formalization of an algorithm for detecting recursive relations in term sequences. In Bergadano, F., & de Raedt, L. (Eds.), Machine Learning, Proc. of ECML-94, pp. 183–197. Springer.

Leeuwenberg, E. (1971). A perceptual encoding language for visual and auditory patterns. American Journal of Psychology, 84, 307–349.

Lieberman, H. (1993). Tinker: A programming by demonstration system for beginning programmers. In Cypher, A. (Ed.), Watch What I Do: Programming by Demonstration, chap. 2. MIT Press, Cambridge, MA. See http://www.acypher.com/wwid/Chapters/02Tinker.html.

Lifschitz, V. (1987). On the semantics of STRIPS. In Georgeff, M. P., & Lansky, A. L. (Eds.), Reasoning about Actions and Plans, pp. 1–9. Kaufmann, Los Altos, CA.

Long, D., & Fox, M. (1999). The efficient implementation of the plan-graph in STAN. Journal of Artificial Intelligence Research, 10, 87–115.

Lowry, M., & Duran, R. (1989). Knowledge-based software engineering. In Barr, A., Cohen, P. R., & Feigenbaum, E. A. (Eds.), Handbook of Artificial Intelligence, Vol. IV, pp. 241–322. Addison-Wesley, Reading, MA.

Lowry, M. L., & McCartney, R. D. (Eds.). (1991). Automating Software Design. MIT Press, Cambridge, MA.

Lu, S. (1979). A tree-to-tree distance and its application to cluster analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-1 (2), 219–224.

Manna, Z., & Waldinger, R. (1975). Knowledge and reasoning in program synthesis. Artificial Intelligence, 6, 175–208.

Manna, Z., & Waldinger, R. (1979). Synthesis: Dreams → programs. IEEE Transactions on Software Engineering, 5 (4), 294–328.

Manna, Z., & Waldinger, R. (1980). A deductive approach to program synthesis. ACM Transactions on Programming Languages and Systems, 2 (1), 90–121. Also in: C. Rich and R. C. Waters (1986). Readings in AI and Software Engineering, Chap. 1. Morgan Kaufmann.

Manna, Z., & Waldinger, R. (1987). How to clear a block: A theory of plans. Journal of Automated Reasoning, 3 (4), 343–378.

Manna, Z., & Waldinger, R. (1992). Fundamentals of deductive program synthesis. IEEE Transactions on Software Engineering, 18 (8), 674–704.

Martín, M., & Geffner, H. (2000). Learning generalized policies in planning using concept languages. In Proc. 7th International Conference on Knowledge Representation and Reasoning (KR 2000), pp. 667–677. Morgan Kaufmann.

Mayer, R. E. (1983). Thinking, Problem Solving, Cognition. Freeman, New York.

Mayer, R. E. (1988). Teaching and Learning Computer Programming: Multiple Research Perspectives. Erlbaum, Hillsdale, NJ.

McCarthy, J., & Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. Machine Intelligence, 4, 463–502.

McCarthy, J. (1963). Situations, actions, and causal laws. Memo 2, Stanford University Artificial Intelligence Project, Stanford, California.

McDermott, D. (1998a). AIPS-98 Planning Competition Results. http://ftp.cs.yale.edu/pub/mcdermott/aipscomp-results.html.

McDermott, D. (1998b). PDDL – the planning domain definition language. http://ftp.cs.yale.edu/pub/mcdermott.

Mercy, R. (1998). Programmieren durch analoges Schließen: Zwei Algorithmen zur Adaptation rekursiver Programmschemata (Programming by analogy: Two algorithms for the adaptation of recursive program schemes). Master’s thesis, Fachbereich Informatik, Technische Universität Berlin.

Messing, L. S., & Campbell, R. (Eds.). (1999). Advances in Artificial Intelligence in Software Engineering. JAI Press, Greenwich, Connecticut.

Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.). (1983). Machine Learning – An Artificial Intelligence Approach. Tioga.

Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.). (1986). Machine Learning – An Artificial Intelligence Approach, Vol. 2. Morgan Kaufmann.

Minton, S. (1988). Learning effective search control knowledge: An explanation-based approach. Kluwer.

Minton, S. (1985). Selectively generalizing plans for problem-solving. In Proc.International Joint Conference on Artificial Intelligence (IJCAI-85), pp. 596–599. Morgan Kaufmann.

Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. (1986). Explanation-basedgeneralization: A unifying view. Machine Learning, 1, 47–80.

Mitchell, T. M. (1997). Machine Learning. McGraw-Hill.Muggleton, S. (1994). Predicate invention and utility. Journal of Experimental and

Theoretical Artificial Intelligence, 6 (1), 121–130.Muggleton, S., & De Raedt, L. (1994). Inductive logic programming: Theory and

methods. Journal of Logic Programming, Special Issue on 10 Years of LogicProgramming, 19-20, 629–679.

Muggleton, S., & Feng, C. (1990). Efficient induction of logic programs. In Proc. ofthe Workshop on Algorithmic Learning Theory by the Japanese Society forAI, pp. 368–381.

Muggleton, S. (1991). Inductive logic programming. New Generation Computing,8, 295–318.

Muhlpfordt, M. (2000). Syntaktische Inferenz Rekursiver Programm Schemata(Syntactical inference of recursive program schemes). Master’s thesis, Dept.of Computer Science, TU Berlin.

Muhlpfordt, M., & Schmid, U. (1998). Synthesis of recursive functions with inter-dependent parameters. In Proc. of the Annual Meeting of the GI MachineLearning Group, FGML-98, pp. 132–139 TU Berlin.

Muller, W., & Wysotzki, F. (1995). Automatic synthesis of control programs bycombination of learning and problem solving methods. In Lavrac, N., &Wrobel, S. (Eds.), Proc. European Conference on Machine Learning (ECML-95), Vol. 912 of LNAI, pp. 323–326. Springer.

Nebel, B. (2000). On the compilability and expressive power of propositional plan-ning formalisms. Journal of Artificial Intelligence Research, 12, 271–315.

Nebel, B., & Koehler, J. (1995). Plan reuse versus plan generation: A theoreticaland empirical analysis. Artificial Intelligence, 76, 427–454.

Newell, A. (1990). Unifyied Theories of Cognition. Harvard University Press, Cam-bridge, MA.

Newell, A., Shaw, J. C., & Simon, H. A. (1958). Elements of a theory of humanproblem solving. Psychological Review, 65, 151–166.

Newell, A., & Simon, H. A. (1972). Human Problem Solving. Prentice Hall, Engle-wood Cliffs, NJ.

Newell, A., & Simon, H. (1961). GPS, A program that simulates human thought. InBilling, H. (Ed.), Lernende Automaten, pp. 109–124. Oldenbourg, Munchen.

Nilsson, N. J. (1980). Principles of Artificial Intelligence. Springer, New York.

Nilsson, N. (1971). Problem-solving Methods in Artificial Intelligence. McGraw-Hill.

Novick, L. R. (1988). Analogical transfer, problem similarity, and expertise. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 510–520.

Novick, L. R., & Hmelo, C. E. (1994). Transferring symbolic representations across nonisomorphic problems. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20 (6), 1296–1321.

Novick, L. R., & Holyoak, K. J. (1991). Mathematical problem solving by analogy. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17 (3), 398–415.

Odifreddi, P. (1989). Classical Recursion Theory, Vol. 125 of Studies in Logic. North-Holland, Amsterdam.

Page 10: link.springer.com978-3-540-44846-4/1.pdf · Bibliography Aho, A. V., Sethi, R., & Ullman, J. D. (1986). Compilers Principles, Techniques, and Tools. Addison Wesley, Reading, MA. Allen,


O’Hara, S. (1992). A model of the redescription process in the context of geometric proportional analogy problems. In Proc. International Workshop on Analogical and Inductive Inference (AII ’92), Dagstuhl Castle, Germany, Vol. 642 of LNAI, pp. 268–293. Springer.

Olsson, R. (1995). Inductive functional programming using incremental program transformation. Artificial Intelligence, 74 (1).

Osherson, D., Stob, M., & Weinstein, S. (1986). Systems that Learn. MIT Press, Cambridge, MA.

Parandian, B., Schmid, U., & Wysotzki, F. (1995). Program synthesis with a generalized planning approach. In Program Synthesis by Learning and Planning. Technical Report 95-20, Department of Computer Science, TU Berlin.

Partridge, D. (Ed.). (1991). Artificial Intelligence and Software Engineering. Ablex, Norwood, NJ.

Pednault, E. P. D. (1987). Formulating multiagent, dynamic-world problems in the classical planning framework. In Georgeff, M. P., & Lansky, A. L. (Eds.), Proc. Workshop on Reasoning About Actions and Plans, pp. 47–82. Morgan Kaufmann.

Pednault, E. P. D. (1994). ADL and the state-transition model of action. Journal of Logic and Computation, 4 (5), 467–512.

Penberthy, J. S., & Weld, D. S. (1992). UCPOP: A sound, complete partial order planner for ADL. In Proc. 3rd International Conference on Principles of Knowledge Representation and Reasoning (KR-92), pp. 103–114.

Pettorossi, A. (1984). Towers of Hanoi problems: Deriving the iterative solutions using the program transformation technique. Tech. rep. R.82, Istituto di Analisi dei Sistemi ed Informatica, Roma.

Pirolli, P., & Anderson, J. (1985). The role of learning from examples in the acquisition of recursive programming skills. Canadian Journal of Psychology, 39, 240–272.

Pisch, H. (2000). Ein effizienter Pattern-Matcher zur Synthese rekursiver Programme aus endlichen Termen (An efficient pattern matcher for the synthesis of recursive programs from finite terms). Master’s thesis, Dept. of Computer Science, TU Berlin.

Plaza, E. (1995). Cases as terms: A feature term approach to the structured representation of cases. In Proc. 1st International Conference on Case-Based Reasoning (ICCBR-95), Vol. 1010 of LNCS, pp. 265–276. Springer.

Plotkin, G. D. (1969). A note on inductive generalization. In Machine Intelligence, Vol. 5, pp. 153–163. Edinburgh University Press.

Polya, G. (1957). How to Solve It. Princeton University Press, Princeton, NJ.

Potter, B., Sinclair, J., & Till, D. (1991). An Introduction to Formal Specification and Z. Prentice Hall.

Quinlan, J. R. (1990). Learning logical definitions from relations. Machine Learning, 5 (3), 239–266.

Quinlan, J. (1986). Induction of decision trees. Machine Learning, 1 (1), 81–106.

Reed, S. K., Ackinclose, C. C., & Voss, A. A. (1990). Selecting analogous problems: Similarity versus inclusiveness. Memory & Cognition, 18 (1), 83–98.

Reed, S. K., & Bolstad, C. (1991). Use of examples and procedures in problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17 (4), 753–766.

Reed, S., Ernst, G., & Banerji, R. (1974). The role of analogy in transfer between similar problem states. Cognitive Psychology, 6, 436–450.

Rich, C., & Waters, R. C. (1988). Automatic programming: Myths and prospects. IEEE Computer, 21 (11), 10–25.



Rich, C., & Waters, R. (1986a). Introduction. In Rich, C., & Waters, R. (Eds.), Readings in Artificial Intelligence and Software Engineering. Morgan Kaufmann, Los Altos, CA.

Rich, C., & Waters, R. (Eds.). (1986b). Readings in Artificial Intelligence and Software Engineering. Morgan Kaufmann, Los Altos, CA.

Riesbeck, C., & Schank, R. (1989). Inside Case-based Reasoning. Lawrence Erlbaum, Hillsdale, NJ.

Rivest, R. (1987). Learning decision lists. Machine Learning, 2 (3), 229–246.

Robinson, J. A. (1965). A machine-oriented logic based on the resolution principle. Journal of the ACM, 12 (1), 23–41.

Rosenbloom, P. S., & Newell, A. (1986). The chunking of goal hierarchies: A generalized model of practice. In Michalski, R. S., Carbonell, J. G., & Mitchell, T. M. (Eds.), Machine Learning – An Artificial Intelligence Approach, Vol. 2, pp. 247–288. Morgan Kaufmann.

Ross, B. H. (1989). Distinguishing types of superficial similarities: Different effects on the access and use of earlier problems. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15 (3), 456–468.

Rouveirol, C. (1991). Semantic model for induction of first order theories. In Mylopoulos, J., & Reiter, R. (Eds.), Proc. 12th International Joint Conference on Artificial Intelligence (IJCAI-91), pp. 685–691. Morgan Kaufmann.

Rumelhart, D. E., & Norman, D. A. (1981). Analogical processes in learning. In Anderson, J. R. (Ed.), Cognitive Skills and Their Acquisition, pp. 335–360. Lawrence Erlbaum, Hillsdale, NJ.

Russell, S. J., & Norvig, P. (1995). Artificial Intelligence. A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ.

Sacerdoti, E. (1975). The nonlinear nature of plans. In Proc. International Joint Conference on Artificial Intelligence (IJCAI-75), pp. 206–214. Morgan Kaufmann.

Sakakibara, Y. (1997). Recent advances of grammatical inference. Theoretical Computer Science, 185, 15–45.

Sammut, C., & Banerji, R. (1986). Learning concepts by asking questions. In Michalski, R., Carbonell, J., & Mitchell, T. (Eds.), Machine Learning – An Artificial Intelligence Approach, Vol. 2, pp. 167–192. Morgan Kaufmann, Los Altos, CA.

Schädler, K., Scheffer, T., Schmid, U., & Wysotzki, F. (1995). Program synthesis by learning and planning. Tech. rep. 95-20, Technische Universität Berlin, Computer Science Department.

Schädler, K., & Wysotzki, F. (1999). Comparing structures using a Hopfield-style neural network. Applied Intelligence, 11, 15–30.

Schmid, U. (1994). Programmieren lernen: Unterstützung des Erwerbs rekursiver Programmiertechniken durch Beispielfunktionen und Erklärungstexte (Learning programming: Acquisition of recursive programming skills from examples and explanations). Kognitionswissenschaft, 4 (1), 47–54.

Schmid, U., & Carbonell, J. (1999). Empirical evidence for derivational analogy. In Wachsmuth, I., & Jung, B. (Eds.), Proc. Jahrestagung der Deutschen Gesellschaft f. Kognitionswissenschaft, Sept. 1999, Bielefeld, pp. 116–121. infix.

Schmid, U., & Kaup, B. (1995). Analoges Lernen beim rekursiven Programmieren (Analogical learning in recursive programming). Kognitionswissenschaft, 5, 31–41.

Schmid, U., Mercy, R., & Wysotzki, F. (1998). Programming by analogy: Retrieval, mapping, adaptation and generalization of recursive program schemes. In



Proc. of the Annual Meeting of the GI Machine Learning Group, FGML-98, pp. 140–147. TU Berlin.

Schmid, U., Müller, M., & Wysotzki, F. (2000). Integrating function application in state-based planning. Submitted manuscript.

Schmid, U., Sinha, U., & Wysotzki, F. (2001). Program reuse and abstraction by anti-unification. In Professionelles Wissensmanagement – Erfahrungen und Visionen, pp. 183–185. Shaker. Long version: http://ki.cs.tu-berlin.de/~schmid/pub-ps/pba-wm01-3a.ps.

Schmid, U., Wirth, J., & Polkehn, K. (1999). Analogical transfer of non-isomorphic source problems. In Proc. 21st Annual Meeting of the Cognitive Science Society (CogSci-99), August 19–21, 1999, Simon Fraser University, Vancouver, British Columbia, pp. 631–636. Morgan Kaufmann.

Schmid, U., Wirth, J., & Polkehn, K. (2001). A closer look on structural similarity in analogical transfer. Submitted manuscript.

Schmid, U., & Wysotzki, F. (1996). Skill acquisition can be regarded as program synthesis. In Schmid, U., Krems, J., & Wysotzki, F. (Eds.), Proc. 1st European Workshop on Cognitive Modeling (14.–16.11.96), pp. 39–45. Technical University Berlin. http://ki.cs.tu-berlin.de/EuroCog/cm-program.html.

Schmid, U., & Wysotzki, F. (1998). Induction of recursive program schemes. In Proc. 10th European Conference on Machine Learning (ECML-98), Vol. 1398 of LNAI, pp. 214–225. Springer.

Schmid, U., & Wysotzki, F. (2000a). Applying inductive program synthesis to learning domain-dependent control knowledge – Transforming plans into programs. Tech. rep. CMU-CS-143, Computer Science Department, Carnegie Mellon University.

Schmid, U., & Wysotzki, F. (2000b). Applying inductive program synthesis to macro learning. In Proc. 5th International Conference on Artificial Intelligence Planning and Scheduling (AIPS 2000), pp. 371–378. AAAI Press.

Schmid, U., & Wysotzki, F. (2000c). A unifying approach to learning by doing and learning by analogy. In Callaos, N. (Ed.), 4th World Multiconference on Systemics, Cybernetics and Informatics (SCI 2000), Vol. 1, pp. 379–384. IIIS.

Schmid, U. (1999). Iterative macro-operators revisited: Applying program synthesis to learning in planning. Tech. rep. CMU-CS-99-114, Computer Science Department, Carnegie Mellon University.

Schoppers, M. (1987). Universal plans for reactive robots in unpredictable environments. In IJCAI ’87, pp. 1039–1046. Morgan Kaufmann.

Schrödl, S., & Edelkamp, S. (1999). Inferring flow of control in program synthesis by example. In Proc. Annual German Conference on Artificial Intelligence (KI’99), Bonn, LNAI, pp. 171–182. Springer.

Shapiro, E. Y. (1983). Algorithmic Program Debugging. MIT Press.

Shavlik, J. W. (1990). Acquiring recursive and iterative concepts with explanation-based learning. Machine Learning, 5, 39–70.

Shell, P., & Carbonell, J. (1989). Towards a general framework for composing disjunctive and iterative macro-operators. In Proc. 11th International Joint Conference on Artificial Intelligence (IJCAI-89), Detroit, MI.

Siekmann, J. H. (1989). Unification theory. Journal of Symbolic Computation, 7, 207–274.

Simon, H. (1958). Rational choice and the structure of the environment. In Models of Bounded Rationality, Vol. 2. MIT Press, Cambridge, MA.

Simon, H. A., & Hayes, J. R. (1976). The understanding process: Problem isomorphs. Cognitive Psychology, 8, 165–190.



Sinha, U. (2000). Gedächtnisorganisation und Abruf von rekursiven Programmschemata beim analogen Programmieren durch typindizierte Anti-Unifikation (Memory organization and retrieval of recursive program schemes in programming by analogy with type-indexed anti-unification). Master’s thesis, Fachbereich Informatik, TU Berlin.

Smith, D. R. (1990). KIDS: A semiautomatic program development system. IEEE Transactions on Software Engineering, 16 (9), 1024–1043.

Smith, D. R. (1991). KIDS: A knowledge-based software development system. In Lowry, M., & McCartney, R. (Eds.), Automating Software Design, chap. 4, pp. 483–514. MIT Press, Menlo Park.

Smith, D. R. (1993). Constructing specification morphisms. Journal of Symbolic Computation, 15 (5–6), 571–606.

Smith, D. R. (1985). Top-down synthesis of divide-and-conquer algorithms. Artificial Intelligence, 27, 43–96.

Smith, D. (1984). The synthesis of LISP programs from examples: A survey. In Biermann, A., Guiho, G., & Kodratoff, Y. (Eds.), Automatic Program Construction Techniques, pp. 307–324. Macmillan.

Soloway, E., & Spohrer, J. (Eds.). (1989). Studying the Novice Programmer. Lawrence Erlbaum, Hillsdale, NJ.

Sommerville, I. (1996). Software Engineering (5th edition). Addison-Wesley.

Spellman, B. A., & Holyoak, K. (1996). Pragmatics in analogical mapping. Cognitive Psychology, 31, 307–346.

Spivey, J. M. (1992). The Z Notation: A Reference Manual (2nd edition). Prentice-Hall.

Sterling, L., & Shapiro, E. (1986). The Art of Prolog: Advanced Programming Techniques. MIT Press.

Summers, P. D. (1977). A methodology for LISP program construction from examples. Journal of the ACM, 24 (1), 162–175.

Sun, R., & Sessions, C. (1999). Self-segmentation of sequences: Automatic formation of hierarchies of sequential behaviors. Tech. rep. 609-951-2781, NEC Research Institute, Princeton. http://cs.ua.edu/~rsun/sun.sss.ps.

Sutton, R. S., & Barto, A. G. (1998). Reinforcement Learning – An Introduction. MIT Press.

Thompson, C. A., Califf, M. E., & Mooney, R. J. (1999). Active learning for natural language parsing and information extraction. In Bratko, I., & Dzeroski, S. (Eds.), Proc. 16th International Conference on Machine Learning (ICML-99), pp. 406–414. Morgan Kaufmann.

Thompson, S. (1991). Type Theory and Functional Programming. Addison-Wesley, Wokingham, England.

Tversky, A. (1977). Features of similarity. Psychological Review, 85, 327–352.

Unger, S., & Wysotzki, F. (1981). Lernfähige Klassifizierungssysteme (Classifier Systems). Akademie-Verlag, Berlin.

Valiant, L. (1984). A theory of the learnable. Communications of the ACM, 27 (11), 1134–1142.

Veloso, M. (1994). Planning and Learning by Analogical Reasoning. Springer.

Veloso, M., Carbonell, J., Perez, M. A., Borrajo, D., Fink, E., & Blythe, J. (1995). Integrating planning and learning: The Prodigy architecture. Journal of Experimental and Theoretical Artificial Intelligence, 7 (1), 81–120.

Veloso, M. M., & Carbonell, J. G. (1993). Derivational analogy in Prodigy: Automating case acquisition, storage, and utilization. Machine Learning, 10, 249–278.

Vosniadou, S., & Ortony, A. (Eds.). (1989). Similarity and Analogical Reasoning. Cambridge University Press.



Vosniadou, S., & Ortony, A. (1989). Similarity and analogical reasoning: A synthesis. In Vosniadou, S., & Ortony, A. (Eds.), Similarity and Analogical Reasoning, pp. 1–17. Cambridge University Press, Cambridge.

Waldinger, R. (1977). Achieving several goals simultaneously. In Elcock, E. W., & Michie, D. (Eds.), Machine Intelligence, Vol. 8, pp. 94–136. Wiley.

Walsh, T. (1983). Iteration strikes back – at the cyclic Towers of Hanoi. Information Processing Letters, 16, 91–93.

Weber, G. (1996). Episodic learner modeling. Cognitive Science, 20, 195–236.

Weld, D. (1999). Recent advances in AI planning. AI Magazine, 20 (2), 93–123.

Widowski, D., & Eyferth, K. (1986). Comprehending and recalling computer programs of different structural and semantic complexity by experts and novices. In Willumeit, H. (Ed.), Human Decision Making and Manual Control. Elsevier, North-Holland.

Winston, P. (1980). Learning and reasoning by analogy. Communications of the ACM, 23, 689–703.

Winston, P. (1992). Artificial Intelligence, 3rd Edition. Addison-Wesley, Reading, MA.

Winston, P., & Horn, B. (1989). LISP. Addison-Wesley, Reading, MA.

Wolf, B. (2000). Automatic generation of state sets from domain specifications. Master’s thesis, Fachbereich Informatik, TU Berlin.

Wysotzki, F. (1983). Representation and induction of infinite concepts and recursive action sequences. In Proc. 8th International Joint Conference on Artificial Intelligence (IJCAI-83), Karlsruhe, pp. 409–414. William Kaufmann.

Wysotzki, F. (1987). Program synthesis by hierarchical planning. In Jorrand, P., & Sgurev, V. (Eds.), Artificial Intelligence: Methodology, Systems, Applications, pp. 3–11. Elsevier, Amsterdam.

Wysotzki, F., & Schmid, U. (2001). Synthesis of recursive programs from finite examples by detection of macro-functions. Tech. rep. 01-2, Dept. of Computer Science, TU Berlin, Germany.

Yu, T., & Clark, C. (1998). Recursion, lambda abstraction and genetic programming. In Late Breaking Papers at EuroGP’98, pp. 26–30. University of Birmingham, UK.

Zelinka, B. (1975). On a certain distance between isomorphism classes of graphs. Časopis pro pěstování matematiky, 100, 371–372.

Zeugmann, T., & Lange, S. (1995). A guided tour across the boundaries of learning recursive languages. Lecture Notes in Computer Science, 961, 193–262.

Zweben, M., & Fox, M. S. (1994). Intelligent Scheduling. Morgan Kaufmann.


Appendices


A. Implementation Details

A.1 Short History of DPlan

The original idea of using universal planning as an initial step for inductive program synthesis was presented in Wysotzki (1987): sets of basic programs and axioms are used to construct a kind of conditional plan with the top-level goals as root node; the ordering of dependent goals was realized by backtracking over plan construction. From 1995 to 1997 we explored (implemented and tested) several strategies for generating finite programs by planning:

– The approach proposed in Wysotzki (1987) was implemented in different versions by Ute Schmid, Baback Parandian, and Olaf Brandes (see Parandian, Schmid, & Wysotzki, 1995).

– Furthermore, we investigated combining forward search with decision tree learning: As a first step, optimal plans are generated for each possible initial state of a domain of small complexity. Initial states are then represented as feature vectors, where the set of all different literals occurring over states is used as the feature set, with value 1 if a literal occurs in a state description and value 0 otherwise. Each initial state is associated with the (or an) optimal action sequence for transforming it into a goal state. A decision tree algorithm (CAL2; see Unger & Wysotzki, 1981) is used to generate a classification program.
Transforming such a program into a finite program fit for generalization-to-n involves the same problems as discussed in chapter 8, along with some additional ones: First, the decision tree does not necessarily represent an order over the number of transformations involved: it is possible that the first attribute already branches to a leaf representing the most complex transformation sequence. Second, there is no interleaving between actions and the conditions which have to be fulfilled before executing these actions, which is typical for recursion (see for example the finite program for unload-all in chap. 8). Instead, all conditions necessary to execute the complete transformation sequence are checked first (along a path in the decision tree), and the transformation sequence is executed completely afterwards.

U. Schmid: Inductive Synthesis of Functional Programs, LNAI 2654, pp. 343-356, 2003. Springer-Verlag Berlin Heidelberg 2003



A possible way to split such compound actions is discussed in Wysotzki (1983).1

A second forward-search approach is presented in Briesemeister et al. (1996) (see sect. 2.5.2).

– In 1998 we came up with the first version of DPlan as a state-based nonlinear backward planner constructing universal plans (implemented by Ute Schmid). This first algorithm, its formalization, and proofs of completeness and correctness are presented in (Schmid, 1999). DPlan1.0 works for STRIPS-like domain specifications extended to binary conditioned effects. The generated universal plan is a minimal spanning tree; that is, for domains with sets of optimal solutions, only one alternative is calculated for each possible state. DPlan1.0 was extended over the last year by several people in several ways:

  – Domain specification in PDDL, STRIPS + general conditioned effects (see report of the student project “Extending DPlan To an ADL-subset” by Janin Toussaint, Ulrich Wagner, and Hakan Irican, 1999, http://ki.cs.tu-berlin.de/~schmid/mlpj2/ss99/mlpj2_99.html).

  – Constructing universal plans without a predefined set of states (see report of the student project “Universal Planning without state sets” by Michael Christmann and Stefan Ronnecke, 1999, URL as above).

  – Preliminary work on plan construction using control knowledge represented as recursive macros (see report of the student project “Planning with recursive macros” by Mischa Neumann and Ralf Ansorg, 1999, URL as above).

  – Calculating DAGs instead of minimal spanning trees (Schmid, 1999).
  – Extending DPlan to function application, as reported in chapter 4. This work is based on a diploma thesis by Marina Müller, May 2000 (http://ki.cs.tu-berlin.de/projects/dipl.html).

– Current work in progress is to extend DPlan to further features of PDDL – universal quantification and negation – and to investigate soundness and completeness of planning for these more general operator definitions (diploma thesis by Bernhard Wolf, Sept. 2000, http://ki.cs.tu-berlin.de/projects/dipl.html).

– DPlan was extended (January 2001) to include recursive program schemes fulfilling a certain class of goals in the domain specification. If the top-level goals of a new planning problem match the goals of a recursive program scheme, this scheme is applied and an optimal transformation sequence is constructed directly, without planning. This extension was recently realized by Janin Toussaint as part of her diploma thesis on plan transformation for program synthesis (see below).

1 This work is not documented in a paper. The documented program together with some examples can be obtained from Ute Schmid.
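The state encoding used in the forward-search-plus-decision-tree strategy above can be made concrete with a small sketch. The original implementations were in Common Lisp with the CAL2 algorithm; the following is only a hypothetical Python illustration of the feature-vector step, with invented literal strings and function names:

```python
def literal_features(states):
    # collect all distinct literals occurring in any state, in a fixed order
    return sorted({lit for s in states for lit in s})

def encode(state, feats):
    # value 1 if the literal occurs in the state description, 0 otherwise
    return [1 if f in state else 0 for f in feats]

# invented example states of a tiny blocks domain
states = [
    frozenset({"(on a b)", "(clear a)"}),
    frozenset({"(on b a)", "(clear b)"}),
]
feats = literal_features(states)
vectors = [encode(s, feats) for s in states]
```

Each vector, paired with its optimal action sequence as class label, would then be fed to the decision tree learner.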



A.2 Modules of DPlan

DPlan is implemented in Common Lisp. The program together with example domains and further information can be obtained via http://ki.cs.tu-berlin.de/~schmid/IPAL/dplan.html.

– dplan.lsp: main program. Construction of the universal plan, saved in plan-steps as a list of pstep structures; recursive calculation of pre-images, as described in table 3.1.
– plan-dstruc.lsp: global data structure pstep, see below.
– ps-back.lsp: calculating the set of legal predecessors of a state (match and backward operator application). The function (apply-rules <current-state>) is called from dplan and returns a list of new psteps; the new psteps are pruned with respect to the current plan.
– showplan.lsp: graphical and term output of plans. The function (show plan-steps) is called from dplan; graphs can be displayed with graphlet, trees can be displayed with xterm.
– parse.lsp, selectors.lsp: handling the PDDL input (implemented by Marina Müller).
– generate.lsp, range.lsp, update.lsp: handling update effects for planning with function application (implemented by Marina Müller).

A plan step is a pair 〈instop, child〉. Additional information is given for the construction of graphical outputs and for plan transformation:

(defstruct pstep
  instop  ; instantiated operator, cf. puttable(A)
  parent  ; parent node: in backward planning successor of instop
          ; input in ps-back "state"
  child   ; child node: in backward planning predecessor of instop
          ; constructed in ps-back
  prec    ; instantiated preconditions of operator
          ; for conditioned operators: union of global and specific
          ; precondition
  add     ; instantiated add-list
  del     ; instantiated del-list
  enodeid ; identifier
  level   ; level in the ms-dag
)

A.3 DPlan Specifications

DPlan specifications are given in standard PDDL. For a complete description of the syntax of PDDL see (McDermott, 1998b). We give a Tower of Hanoi problem as an example:



(define (domain disk-world-domain)
  (:action move
    :parameters (?d ?x ?y)
    :precondition (and (on ?d ?x) (clear ?d) (clear ?y) (smaller ?d ?y))
    :effect ((and ((on ?d ?y) (clear ?x))
                  (not ((on ?d ?x) (clear ?y)))))
))

(define (problem towers-of-hanoi)
  :domain 'disk-world-domain
  :goal ((on d3 p1) (on d2 d3) (on d1 d2))
)

(define (states states-in-disk-world-domain) ; domain restriction
  :states                                    ; complete goal state
  (((on d3 p3) (on d2 d3) (on d1 d2) (clear d1) (clear p2) (clear p3)
    (smaller d1 p1) (smaller d2 p1) (smaller d3 p1) (smaller d4 p1)
    (smaller d1 p2) (smaller d2 p2) (smaller d3 p2) (smaller d4 p2)
    (smaller d1 p3) (smaller d2 p3) (smaller d3 p3) (smaller d4 p3)
    (smaller d1 d2) (smaller d1 d3) (smaller d2 d3) (smaller d1 d4)
    (smaller d2 d4) (smaller d3 d4))
))

For planning with update effects we extended PDDL as described in chapter 4. An example of a Tower of Hanoi problem is:

(define (domain disk-world-domain)
  (:action move
    :parameters (?x ?y)
    :precondition ()
    :effect ((change (?L2 in (on ?y ?L2) (to (cdr ?L2)))
                     (?L1 in (on ?x ?L1) (to (cons-car ?L1 ?L2)))))
    :post (((on ?x ?L1) (on ?y ?L2))
           (f1 ?L2)       ; L2 is not empty
           (f6 ?L1 ?L2))  ; (car L2) < (car L1)
))

(defun cons-car (l1 l2)
  ;(print `(cons-car ,l1 ,l2 ,(cons (car l2) l1)))
  (cons (car l2) l1))

(defun f6 (l1 l2)
  ;(print `(f6 ,l1 ,l2))
  (cond ((null l1) T)
        (T (> (car l1) (car l2)))))



(define (problem towers-of-hanoi)
  :domain 'disk-world-domain
  :goal ((on p1 [ ]) (on p2 [ ]) (on p3 [ 1 2 3 ]))
)
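To clarify the semantics of the update-effect move operator: pegs hold lists of disk numbers with the smallest disk at the head, and a move takes the head of the list on peg ?y and conses it onto the list on peg ?x, guarded by f1 and f6. The following Python rendering is only an illustration; the dictionary representation and the function name are assumptions, not part of DPlan:

```python
def move(pegs, x, y):
    # move the top disk of peg y onto peg x (cf. the :effect change clauses)
    l1, l2 = pegs[x], pegs[y]
    assert l2, "source peg must not be empty"            # (f1 ?L2)
    assert not l1 or l1[0] > l2[0], "smaller on larger"  # (f6 ?L1 ?L2)
    pegs[x] = [l2[0]] + l1  # (cons-car ?L1 ?L2)
    pegs[y] = l2[1:]        # (cdr ?L2)
    return pegs

pegs = {"p1": [], "p2": [], "p3": [1, 2, 3]}
move(pegs, "p1", "p3")  # disk 1 moves from p3 to p1
```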

A.4 Development of Folding Algorithms

The starting point for our development of a folding algorithm was the approach by Summers (1977) and its extension from a subset of Lisp to term algebras by Wysotzki (1983). Both approaches rely on the identification of differences between pairs of traces in a given finite program. Since the original algorithm of Summers was no longer available and no implementation of Wysotzki's approach existed, we started from scratch:

– In the beginning (1997), a simple string pattern matcher was realized in Common Lisp by Imre Szabo in his diploma thesis (http://ki.cs.tu-berlin.de/projects/dipl.html). This algorithm only worked for detecting linear recursions with independent variable substitutions in the recursive call.

– At the same time, an algorithm based on the identification of unifiable sub-terms of a given finite program term was implemented in Common Lisp by Dirk Matzke in his diploma thesis (http://ki.cs.tu-berlin.de/projects/dipl.html). He used the tree-transformation algorithm of Lu (1979), which was originally implemented in a student project where it was applied to the matching of different finite programs in the context of programming by analogy (Schädler, Scheffer, Schmid, & Wysotzki, 1995).

– In a student project in 1997, Martin Mühlpfordt and Markus Jurinka worked on a formalization of folding based on induction of context-free tree grammars (Mühlpfordt & Schmid, 1998), and Martin Mühlpfordt implemented a pattern matcher in Prolog based on this formalization. This implementation already works for recursive equations realizing arbitrary recursion forms (linear, tree, combinations) and for interdependent variable substitutions.

– Heike Pisch (2000) provided an efficient implementation of this approach in her diploma thesis (http://ki.cs.tu-berlin.de/projects/dipl.html), using a different approach for generating valid hypotheses for segmentations.

– The preceding two projects were the starting point for the powerful folding algorithm realized in Common Lisp by Martin Mühlpfordt in his diploma thesis in 2000 (http://ki.cs.tu-berlin.de/projects/dipl.html), which is presented in detail in chapter 7: the original approach was extended to RPSs with sets of recursive equations and to the identification of hidden variables in substitution terms.



A.5 Modules of TFold

TFold is implemented in Common Lisp (by Martin Mühlpfordt). The program together with example domains and further information can be obtained via http://ki.cs.tu-berlin.de/~schmid/IPAL/fold.html.

– folder.lsp: main program. The outer control loop for search, as given in figure 7.13 in chapter 7.
  – Main function search-RPS, with an initial tree as input and an RPS or nil as output. First it tries to find a sub-program for the tree; if this fails, it tries to find an RPS for each argument of the top-level symbol of the tree.
  – Function search-SubProg-Backtrack implements the search for a sub-program by backtracking, as described in figure 7.12 in chapter 7. The idea is to incrementally build skeletons of the sub-program body until a valid segmentation is found (a) which can be extended to the body and (b) such that the resulting instances of the variables can be explained by substitutions.
  – Some test functions (converting, folding, and printing the result). The initial tree is first converted into a special representation: all (maximal) sub-trees which do not contain “Omegas” are converted to a vector with 2 elements.
– base-fkt-rps.lsp: functions for constructing an RPS from the induced components (signature, head, parameters, body, substitutions).
– base-fkt-tree.lsp: functions for searching and partitioning trees, anti-unification of terms.
– skeleton.lsp: constructing a skeleton and checking whether it is valid.
– substitutions.lsp: calculating the substitutions for variables in recursive calls.
– term2gml.lsp: transformation of terms into gml format for graphlet, which provides a graphical output.
– read-gml.lsp: transformation of gml-represented terms into terms as lists of lists.
– pretty-rps-print.lsp: functions for the output of RPSs.
– unfolder.lsp: unfolding RPSs into finite programs.
– rps-example.lsp: collection of RPSs.
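As a rough illustration of what the anti-unification of terms in base-fkt-tree.lsp computes (the least general generalization of two terms), here is a minimal Python sketch over terms represented as nested tuples; the variable-naming scheme is invented, and the actual Lisp implementation differs:

```python
def antiunify(s, t, table=None):
    # least general generalization of two first-order terms (nested tuples);
    # the same disagreement pair is always mapped to the same variable
    if table is None:
        table = {}
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s and t and s[0] == t[0] and len(s) == len(t)):
        # same head symbol and arity: descend into the arguments
        return (s[0],) + tuple(antiunify(a, b, table)
                               for a, b in zip(s[1:], t[1:]))
    # disagreement: replace by a variable, reusing one for a repeated pair
    if (s, t) not in table:
        table[(s, t)] = "?x%d" % len(table)
    return table[(s, t)]
```

For example, the generalization of f(a, g(b)) and f(c, g(b)) keeps the common structure and abstracts the disagreeing argument into a variable.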

Currently, a graphical user interface (GUI) for the folder, realized in Java, is under construction.

RPSs are represented as a collection of sub-programs and a main program. Sub-programs are represented as structures. An example for ModList, presented in chapter 7, is:

(defun ModList ()
  (setq Sub1 (make-sub :name 'ModList
                       :params '(n l)
                       :body '(if (empty l)
                                  nill
                                  (cons (if (eq0 (Mod n (head l)))
                                            true
                                            false)
                                        (ModList n (tail l))))))
  (setq Sub2 (make-sub :name 'Mod
                       :params '(n k)
                       :if '(g (< n k) n (Mod (- n k) k))))
  (make-rps :main '(ModList 8 (list 9 4 7))
            :sigma (list Sub1 Sub2)))

A.6 Time Effort of Folding

The time measures were performed on a SUN Sparc 20 workstation. For measurement, the Common Lisp time macro was used for unfolding, the complete folding, and the subprocesses calculating valid segmentations and determining substitutions. Because CPU-time measures are not overly precise, the average of three runs for each problem was calculated.
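The measurement procedure — CPU time, averaged over three runs — can be sketched as follows (a Python illustration, not the original Lisp measurement code; the dummy workload merely stands in for a folding run):

```python
import time

def average_cpu_time(f, runs=3):
    """Run f several times and return the mean CPU time in seconds.

    Mirrors the procedure described above: because single CPU-time
    measures are noisy, the average over three runs is reported.
    """
    samples = []
    for _ in range(runs):
        start = time.process_time()  # CPU time, analogous to Lisp's time macro
        f()
        samples.append(time.process_time() - start)
    return sum(samples) / len(samples)

# Example with a dummy workload standing in for a folding run:
t = average_cpu_time(lambda: sum(i * i for i in range(100000)))
```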

A.7 Main Components of Plan-Transformation

Plan transformation is work in progress. The current implementation was realized in Common Lisp by Ute Schmid, 2000 (plan-transform.lsp). Currently, the implementation is being extended, and the concept of linearization based on the assumption of sub-goal independence proposed in Wysotzki and Schmid (2001) is included by Janin Toussaint in her diploma thesis.

The main function of the current implementation is (plantransform <plan>):

– Input: plan as list of psteps from dplan.lsp

– (decompose plan): initial decomposition; generation and initialization of the global variables subplanlist (list of tplan structures) and planstruc (sub-plan structure as term of sub-plan names)

– (intro-type subplanlist): data type inference for each sub-plan, successively filling the slots of tplan

– (ptransform subplanlist): introducing situation variables and rewriting into a conditional expression

– Output: transformation information as given in subplanlist and planstruc; tplan.term is passed to the generalization-to-n algorithm for each sub-plan; planstruc is extended from sub-plan names to arguments and rewritten into a “main” program
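The control flow of plantransform can be outlined roughly as follows (a heavily simplified Python sketch of the pipeline described above; all structures, slot contents, and return values are illustrative placeholders, not the original Lisp):

```python
from dataclasses import dataclass

@dataclass
class TPlan:
    """Minimal stand-in for the tplan structure: one transformed sub-plan."""
    pname: str
    suplan: list            # plan as structure (list of psteps)
    term: object = None     # plan rewritten as a conditional term

def decompose(plan):
    # Initial decomposition: here the whole plan becomes a single sub-plan.
    return [TPlan(pname="P1", suplan=plan)], "P1"

def intro_type(subplanlist):
    # Data type inference per sub-plan (stubbed out in this sketch).
    return subplanlist

def ptransform(subplanlist):
    # Introduce a situation variable s and rewrite into a conditional term.
    for tp in subplanlist:
        tp.term = ("if", ("goal", "s"), "s", ("op", "s"))
    return subplanlist

def plantransform(plan):
    subplanlist, planstruc = decompose(plan)
    subplanlist = intro_type(subplanlist)
    subplanlist = ptransform(subplanlist)
    return subplanlist, planstruc

subs, struc = plantransform([("pstep", 0)])
```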


Global data-structure:

; input from dplan.lsp is a plan saved in plan-steps as a list of psteps
; global variables
(setq subplanlist nil) ; list of all transformed subplans
                       ; (special case: one subplan)
(setq planstruc nil)   ; structure of global plan (subtrees replaced by
                       ; subplan names)
(setq mstlist nil)     ; list of possible minimal spanning trees
                       ; (backtrack-point for dags which are not sets!)

; .....................................................................
; new structure for transformed plan (one for each subplan)
; initial generation in decompose
(defstruct tplan
  pname    ; name of plan (decompose)
  suplan   ; plan as structure (plan-steps) (decompose)
  coplan   ; plan with data types and relevant predicates
  term     ; plan as term
  ptype    ; type of the plan (singleop, seq, set, list)
  newdat   ; *newly constructed datastructure (a list/set of objects)
  newpred  ; *newly constructed predicate (if newdat =/= nil) as pattern
  npfct    ; *function definition for new predicate
  goalpred ; goal-predicate (might be newpred)
  bottom   ; bottom-element (might be newdat)
  passocs  ; const/constr rewrite pairs
  pafct    ; function definition for the rewriting
  pparams  ; input parameters
  pvalues  ; initial values
)

;; newdat, newpred, npfct are not filled for ptype = singleop and ptype = set
;; newpred: p* (... CO ...) with CO as place-holder for complex object;
;;          maybe additional constant arguments, f.e.: (at* CO B) for rocket
;; bottom == newdat (-> newdat might be superfluous)

A.8 Plan Decomposition

Please note: Extracts from the program are given in an abbreviated pseudo-Lisp notation, omitting implementation details!

; call decompose with complete plan and level = 0 (root)
(decompose plan level) ==
  (mapcar λx.(r-dec-p (get-first-op plan lv) x lv)
          (partition plan))

(get-first-op plan level) ==
  get the first operator-symbol at the upper-most level of the plan

(partition plan) == for each root of ‘‘plan’’, return its subplan

(r-dec-p op plan lv) ==
  (let ((dlv (disag-level op plan))
        (pname (gensym "P")))
    (cond ((not dlv) (save-subplan pname plan))   ; single plan
          ((> dlv lv)                             ; subplan
           (save-subplan pname (get-suplan dlv plan))
           (decompose (append (make-root (1- dlv) plan)
                              (rest-suplan dlv plan))
                      dlv))
          (T ; (= dlv lv) ; different operators at one level
           (save-subplan pname plan)
           ; currently not treated by decomposition
          )))

(disag-level op plan) ==
  returns the level in ``plan'' on which a different operator-name
  than ``op'' appears for the first time

(save-subplan pname plan) ==
  generate a new tplan-structure and instantiate the slots pname with
  ``pname'' and suplan with ``plan'' (see A.7 for the tplan structure)

(get-suplan dlv plan) ==
  return a partial plan from the current top-level to level ``dlv''

(make-root lv plan) == get the leaf of the newly generated subplan

(rest-suplan dlv plan) ==
  return the remaining plan with level ``dlv'' as new top-level

A.9 Introduction of Situation Variables

For each subplan:

(rewrite-to-term coplan) ==
  (list 'if (intro-s (first-state coplan))
        's (r-rewrite (rest coplan)))

(intro-s e) == extend e by an argument ``s''

(r-rewrite coplan) ==
  (cond ((null coplan) 'omega)
        (T (append (intro-s (first-op coplan))
                   (list (list 'if
                               (intro-s (first-state coplan))
                               's (r-rewrite (rest coplan)))))))

A.10 Number of MSTs in a DAG

To calculate the number of minimal spanning trees in a DAG, we can use the formula ∏i=0..n ci, multiplying the number of alternative choices ci on each level i. For the sorting of three elements, we have 1 · 1 · 9 = 9, with a single option for the root and the first level and 9 different options for the third level. To calculate the number of options on a level, the connective structure of the DAG has to be known. The different alternatives on one level can be calculated as described for the construction of minimal spanning trees. For sorting lists with 4 elements we have 1 · 1 · 52488 · 46656 = 2,448,880,128!
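The arithmetic above can be checked directly (a small Python illustration; choices_per_level is the list of the ci):

```python
from math import prod

def number_of_msts(choices_per_level):
    """Number of minimal spanning trees in a levelled DAG,
    computed as the product of the alternative choices c_i per level i."""
    return prod(choices_per_level)

# Sorting three elements: one option for root and first level, 9 for the third.
three = number_of_msts([1, 1, 9])            # 9
# Sorting lists with 4 elements:
four = number_of_msts([1, 1, 52488, 46656])  # 2448880128
```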

To omit the explicit calculation of options per level, an upper bound estimate is the binomial coefficient

   ci = (ai choose ni)

with ai as the number of arcs from level i−1 to level i and ni as the number of nodes on level i.
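The bound is a product of binomial coefficients and is cheap to evaluate (a Python sketch; the arc and node counts below are made-up example values, not from the sorting domain):

```python
from math import comb

def mst_upper_bound(arcs, nodes):
    """Upper bound on the number of MSTs:
    per level, c_i <= C(a_i, n_i) with a_i arcs entering level i
    and n_i nodes on level i; the total bound is the product."""
    bound = 1
    for a, n in zip(arcs, nodes):
        bound *= comb(a, n)
    return bound

# Hypothetical 3-level DAG: arcs entering each level, nodes on each level.
b = mst_upper_bound(arcs=[1, 4, 6], nodes=[1, 2, 3])  # 1 * 6 * 20 = 120
```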

We cannot provide a formula for calculating the proportion of regularizable trees to all trees, because regularizability depends on the identity of edge-labels, which varies between domains.

A.11 Extracting Minimal Spanning Trees from a DAG

; lst is tree until lv (initially: root)
; sp is all plan-steps on levels > lv
(defun extract-trees (lst sp lv)
  (let* ((tvec (number-of-msts 1 sp (1+ lv)))
         (tcnt (reduce '* tvec)))
    (print `(There are * ,tvec = ,tcnt minimal spanning trees))
    (fresh-line)
    (cond ((> tcnt 576)
           (write-string "How many trees? <number> ")
           (setq k (read-number))
           (fresh-line)
           (extract-msts (list lst) sp (1+ lv) k))
          (T
           (write-string "Generate all alternatives")
           (fresh-line)
           (extract-msts (list lst) sp (1+ lv) tcnt)))))

(defun extract-msts (lst sp lv k)
  (print `(Include next level ,lv))
  (let ((lp (remove-if #'(lambda (x) (/= lv (pstep-level x))) sp))
        (rp (remove-if #'(lambda (x) (= lv (pstep-level x))) sp)))
    (cond ((null lp) nil)
          ((null rp)
           (first-k k (lift (mapcar #'(lambda (x) (pmerge lst x))
                                    (all-combs lp k))))
           ; in the last step this is only a "throw-away" of
           ; already calculated trees!
          )
          (T (extract-msts (first-k k
                             (lift (mapcar #'(lambda (x) (pmerge lst x))
                                           (all-combs lp k))))
                           rp (1+ lv) k)))))

(defun pmerge (lst e)
  (cond ((null lst) (list e)) ; should never occur
        ((flatlst lst) (join lst e))
        (T (mapcar #'(lambda (x) (join x e)) lst))))

(defun all-combs (lp k)
  (let* ((dc (make-sset (mapcar #'(lambda (x) (pstep-child x)) lp)))
         (lcs (child-split lp dc k))
         (tlcs (mapcar #'(lambda (x) (generate-trees x (1- (length x)))) lcs)))
    (cart-product (car tlcs) (cdr tlcs) k)))


(defun child-split (lp dc k)
  (cond ((null dc) nil)
        (T (cons (first-k k (remove-if
                              #'(lambda (x)
                                  (not (setequal (pstep-child x) (car dc))))
                              lp))
                 (child-split lp (cdr dc) k)))))

(defun generate-trees (lp cnt)
  (cond ((< cnt 0) nil)
        (T (setq cur-lp (copy-plan lp))
           (cons (nth cnt cur-lp)
                 (generate-trees lp (1- cnt))))))

(defun cart-product (fst rst k)
  (cond ((null rst) fst)
        (T (cart-product (first-k k (comb fst (car rst))) (cdr rst) k))))

(defun comb (f r)
  (cond ((null f) nil)
        (T (append (mapcar #'(lambda (x) (join (car f) x)) r)
                   (comb (cdr f) r)))))

A.12 Regularizing a Tree

(defun regularize-tree (mst lv)
  (let ((cur (remove-if #'(lambda (y) (/= lv (pstep-level y))) mst))
        (nxt (remove-if #'(lambda (y) (/= (1+ lv) (pstep-level y))) mst)))
    (cond ((null cur) nil)
          ((null nxt) mst)
          (T (let ((opsetcl (mapcar #'(lambda (x) (pstep-instop x)) cur))
                   (opsetnl (mapcar #'(lambda (x) (pstep-instop x)) nxt)))
               (cond ((subsetp opsetnl opsetcl :test 'equal)
                      (regularize-tree (smerge (lv-shift cur opsetnl) mst)
                                       (1+ lv)))
                     (T (regularize-tree mst (1+ lv)))))))))

(defun lv-shift (cur opsetnl)
  (cond ((null cur) nil)
        ((member (pstep-instop (car cur)) opsetnl :test 'equal)
         (setf nc (copy-pstep (car cur)))
         (setf (pstep-parent nc)
               (id-insert (pstep-parent (car cur))))
         (setf (pstep-level nc) (1+ (pstep-level (car cur))))
         (cons (make-pstep
                 :instop 'id
                 :parent (pstep-parent (car cur))
                 :child (id-insert (pstep-parent (car cur)))
                 :nodeid (+ 1000 (pstep-nodeid (car cur)))
                 :level (pstep-level (car cur)))
               (cons nc (lv-shift (cdr cur) opsetnl))))
        (T (cons (car cur) (lv-shift (cdr cur) opsetnl)))))


(defun id-insert (s)
  (cond ((null s) nil)
        ((idlist (car s)) (cons (cons 'id (car s)) (cdr s)))
        (T (cons '(id) s))))

(defun idlist (l)
  (cond ((null l) T)
        ((equal 'id (car l)) (idlist (cdr l)))
        (T nil)))

; if pstep-child of new is equal to a pstep-child in old -> keep new
;   (has higher level)
; if pstep-child without "idlist" is equal to a pstep-child in old
;   and both are on the same level -> keep old (the one which has
;   no or a shorter "idlist")
; ==> parent-node for the nodes in nw with pstep-child of new as
;     parent has to be set to "old" parent!
;     these nodes are still in new because of the sequence of
;     node construction in lv-shift!
(defun smerge (nw old)
  (cond ((null nw) old)
        ((member (car nw) old :test 'node-equal)
         (smerge (cdr nw)
                 (cons (car nw)
                       (remove-if #'(lambda (x) (node-equal (car nw) x))
                                  old))))
        ((member (car nw) old :test 'level-equal)
         (smerge (old-parent (car nw) (cdr nw) old) old))
        (T (smerge (cdr nw) (cons (car nw) old)))))

(defun node-equal (s1 s2)
  (and (= (length (find-if #'(lambda (x) (idlist x)) (pstep-child s1)))
          (length (find-if #'(lambda (x) (idlist x)) (pstep-child s2))))
       (setequal (remove-if #'(lambda (x) (idlist x)) (pstep-child s1))
                 (remove-if #'(lambda (x) (idlist x)) (pstep-child s2)))))

(defun level-equal (s1 s2)
  (and (setequal (remove-if #'(lambda (x) (idlist x)) (pstep-child s1))
                 (remove-if #'(lambda (x) (idlist x)) (pstep-child s2)))
       (= (pstep-level s1) (pstep-level s2))))

; if smerge keeps the old state, then the new state with cld as parent
; has to keep the corresponding old parent
(defun old-parent (cld sl old)
  (cond ((null sl) nil)
        ((setequal (pstep-child cld) (pstep-parent (car sl)))
         (cons (update-par (copy-pstep (car sl))
                           (pstep-child
                             (find-if #'(lambda (x) (level-equal cld x))
                                      old)))
               (cdr sl)))
        (T (old-parent cld (cdr sl) old))))

(defun update-par (sl ud) (setf (pstep-parent sl) ud) sl)


A.13 Programming by Analogy Algorithms

In contrast to other approaches to programming by analogy (in computer science) and analogical problem solving (in cognitive science), our approach can deal with both analogy and abstraction: Recursive program schemes can be defined over a signature consisting of a set of primitive functions with fixed interpretation, or over a signature containing (second-order) function variables. Analogy is realized by mapping concrete programs, while abstraction is realized by mapping a second-order scheme with a concrete program. We use the same similarity criterion for retrieval and mapping.

Originally, we investigated tree-transformation; currently we are investigating anti-unification as an approach to mapping. Tree-transformation returns a quantitative measure of structural similarity based on the necessary number of substitutions, deletions, and insertions to make a source program identical to a target candidate. Tree-transformation is introduced in chapter 10 as an example of a measure of structural similarity. Anti-unification is described in chapter 12.
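For first-order terms, anti-unification — computing the least general generalization of two terms — can be sketched in a few lines (a Python toy illustration; the actual system works on program terms and also handles second-order function variables, which this sketch does not):

```python
from itertools import count

def anti_unify(s, t, _vars=None, _fresh=None):
    """First-order anti-unification: least general generalization of two terms.

    Terms are nested tuples (f, arg1, ..., argn) or atoms. Mismatching
    subterm pairs are consistently replaced by the same fresh variable.
    """
    if _vars is None:
        _vars, _fresh = {}, count()
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and len(s) == len(t) and s[0] == t[0]):
        # Same head symbol and arity: generalize argument-wise.
        return (s[0],) + tuple(anti_unify(a, b, _vars, _fresh)
                               for a, b in zip(s[1:], t[1:]))
    # Mismatch: introduce (or reuse) a variable for this pair of subterms.
    if (s, t) not in _vars:
        _vars[(s, t)] = f"X{next(_fresh)}"
    return _vars[(s, t)]

g = anti_unify(("f", "a", ("g", "b")), ("f", "c", ("g", "b")))
# g == ("f", "X0", ("g", "b"))
```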

We did some preliminary work on the adaptation of non-isomorphic programs. Again in contrast to other approaches, our focus is on generalization over programs, that is, on constructing a hierarchy of recursive program schemes. An overview is given at http://ki.cs.tu-berlin.de/~schmid/IPAL/aprog.html.

Several aspects of programming by analogy and analogical problem solving were investigated:

– Memory organization and retrieval using tree-transformation was investigated by Mark Muller in his diploma thesis (Aug. 1997, http://ki.cs.tu-berlin.de/projects/dipl.html). He used hierarchical cluster analysis to build a hierarchical memory and could show that memory organization resulted in a grouping of programs sharing recursive structures (tail recursion, linear recursion, tree recursion).

– In two psychological experiments, Knuth Polkehn and Joachim Wirth investigated the use of non-isomorphic base problems by human problem solvers (Sept. 1997 and May 1998, http://ki.cs.tu-berlin.de/projects/dipl.html). They could show that human problem solvers can solve new problems if base and target have a structural overlap of at least fifty percent. For our automatic programming system, we take this result as a hint that non-isomorphic sources should be considered for adaptation. But we need a (qualitative) criterion for deciding whether the source candidate is “similar enough” to prefer adaptation over folding from scratch.

– Adaptation of program schemes to new problems was researched by Rene Mercy in his diploma thesis (April 1998, http://ki.cs.tu-berlin.de/projects/dipl.html). He used tree-transformation to construct a mapping. If the mapping only contains unique substitutions, the programs are isomorphic and adaptation is realized simply by replacing the symbols in the base program by the symbols they are mapped onto in the target. For non-unique substitutions, deletions, and insertions, context in the program tree is used to determine the adaptation of the source program (see chapter 12).

– If base and target share some similarity but adaptation fails, one could try to rewrite the base problem to obtain a better mapping. That is, structure mapping is enriched by a background theory. This idea is currently discussed in cognitive science as “re-representation”, and Martin Muhlpfordt investigated re-representation in a psychological experiment in his diploma thesis (June 1999, http://ki.cs.tu-berlin.de/projects/dipl.html).

– In 2000 we replaced tree-transformation by (second-order) anti-unification as the backbone of programming by analogy. Thereby we gained a parameter-free, qualitative approach to structural similarity, which furthermore allows dealing with mapping and generalization in a uniform way. Adaptation can be modeled by the inverse process, that is, (second-order) unification. Mapping and generalization was researched by Uwe Sinha in his diploma thesis (Aug. 2000, http://ki.cs.tu-berlin.de/projects/dipl.html) and is described in chapter 12.

– Currently, Ulrich Wagner is investigating retrieval and adaptation.


B. Concepts and Proofs

B.1 Fixpoint Semantics

Fixpoint theory in general concerns the following question: Given an ordered set P and an order-preserving map Φ : P → P, does there exist a fixpoint for Φ? A point x ∈ P is called a fixpoint if Φ(x) = x.

The following introduction of fixpoint semantics closely follows Davey and Priestley (1990). We first introduce some basic concepts and notations. Then we present the fixpoint theorem for complete partial orders. Finally, we give an illustration for the factorial function.

Definition B.1.1 (Map). (Davey & Priestley, 1990, 1.6) Let X be a set and consider a map f : X → X. f assigns a member f(x) ∈ X to each member x ∈ X and is determined by its graph, graph f = {(x, f(x)) | x ∈ X} with graph f ⊆ X × X.
A partial map is a map σ : S → X where S ⊂ X. With dom σ = S we denote the domain of σ. The set of partial maps on X is denoted (X ◦→ X) and is ordered in the following way: given σ, τ ∈ (X ◦→ X), define σ ≤ τ iff dom σ ⊆ dom τ and σ(x) = τ(x) for all x ∈ dom σ. A subset G of X × X is the graph of a partial map iff ∀s ∈ X: (s, x) ∈ G and (s, x′) ∈ G ⇒ x = x′.

Definition B.1.2 (Order-Preserving Map). (Davey & Priestley, 1990, 1.11) Let P and Q be ordered sets. A map φ : P → Q is order-preserving (or monotone) if x ≤ y in P implies φ(x) ≤ φ(y) in Q.

Definition B.1.3 (Bottom Element). (Davey & Priestley, 1990, 1.23, 1.24) Let P be an ordered set. The least element of P, if such exists, is called the bottom element and denoted ⊥.
Given an ordered set P (with or without ⊥), we form P⊥ (called “P lifted”) as follows: Take an element 0 ∉ P, construct P⊥ = P ∪ {0} and define ≤ on P⊥ by x ≤ y iff x = 0 or x ≤ y in P.

Definition B.1.4 (Order-Isomorphism between Partial and Strict Maps). (Davey & Priestley, 1990, 1.29) With each π ∈ (S ◦→ S) we associate a map ψ(π) : S → S⊥, given by ψ(π) = π⊥ where

   π⊥(x) = π(x) if x ∈ dom π,  and  π⊥(x) = 0 otherwise.

U. Schmid: Inductive Synthesis of Functional Programs, LNAI 2654, pp. 357-367, 2003. Springer-Verlag Berlin Heidelberg 2003


Thus ψ is a map from (S ◦→ S) to (S → S⊥). We have used the extra element 0 to convert a partial map on S into a total map. ψ sets up an order-isomorphism between (S ◦→ S) and (S → S⊥) (Davey & Priestley, 1990, p. 21).

Definition B.1.5 (Supremum). (Davey & Priestley, 1990, 2.1) Let P be an ordered set and let S ⊆ P. An element x ∈ P is an upper bound of S if s ≤ x for all s ∈ S. x is called least upper bound of S if

1. x is an upper bound of S, and
2. x ≤ y for all upper bounds y of S.

Since least elements are unique, the least upper bound is unique if it exists. The least upper bound is also called the supremum of S and is denoted sup S, or ∨S (“join of S”), or ⊔S if S is directed.

Definition B.1.6 (CPO). (Davey & Priestley, 1990, 3.9) An ordered set P is a CPO (complete partial order) if

1. P has a bottom element ⊥,
2. sup D exists for each directed sub-set D of P.

The simplest example of a directed set is a chain, such as 0 ≤ succ(0) ≤ succ(succ(0)) ≤ succ(succ(succ(0))) ≤ .... Fixpoint theorems are based on ascending chains.

Definition B.1.7 (Continuous and Strict Maps). (Davey & Priestley, 1990, 3.14) A map φ : P → Q is continuous if for every directed set D in P: φ(⊔D) = ⊔(φ(D)). Every continuous map is order-preserving (Davey & Priestley, 1990, 3.15).
A map φ : P → Q such that φ(⊥) = ⊥ is called strict.

Definition B.1.8 (Fixpoint). (Davey & Priestley, 1990, 4.4) Let P be an ordered set and let Φ : P → P be a map. We say x ∈ P is a fixpoint of Φ if Φ(x) = x. The set of all fixpoints of Φ is denoted by fix(Φ); it carries the induced order. The least element of fix(Φ), when it exists, is denoted by µ(Φ). The n-fold composite, Φn, of a map Φ : P → P is defined as follows: Φn is the identity map if n = 0 and Φn = Φ ◦ Φn−1 for n ≥ 1. If Φ is order-preserving, so is Φn.

Theorem B.1.1 (CPO Fixpoint). (Davey & Priestley, 1990, 4.5) Let P be a complete partial order (CPO), let Φ : P → P be an order-preserving map, and define α = ⊔n≥0 Φn(⊥).

1. If α ∈ fix(Φ), then α = µ(Φ).
2. If Φ is continuous, then the least fixpoint µ(Φ) exists and equals α.


Proof (CPO Fixpoint). (Davey & Priestley, 1990, 4.5)

1. Certainly ⊥ ≤ Φ(⊥). Applying the order-preserving map Φn, we have Φn(⊥) ≤ Φn+1(⊥) for all n. Hence we have a chain ⊥ ≤ Φ(⊥) ≤ ... ≤ Φn(⊥) ≤ Φn+1(⊥) ≤ ... in P. Since P is a CPO, α = ⊔n≥0 Φn(⊥) exists. Let β be any fixpoint of Φ. By induction, Φn(β) = β for all n. We have ⊥ ≤ β, hence we obtain Φn(⊥) ≤ Φn(β) = β by applying Φn. The definition of α forces α ≤ β. Hence if α is a fixpoint then it is the least fixpoint.

2. It will be enough to show that α ∈ fix(Φ). We have

   Φ(⊔n≥0 Φn(⊥)) = ⊔n≥0 Φ(Φn(⊥))   (since Φ is continuous)
                 = ⊔n≥1 Φn(⊥)
                 = ⊔n≥0 Φn(⊥)      (since ⊥ ≤ Φn(⊥) for all n).

Consider the factorial function facu(x) = x! with its recursive definition:

   facu(x) = 1                 if x = 0,
   facu(x) = x · facu(x − 1)   if x ≥ 1.

To each map f : N0 → N0 we may associate a new map f̄ given by:

   f̄(x) = 1               if x = 0,
   f̄(x) = x · f(x − 1)    if x ≥ 1.

The equation satisfied by facu can be reformulated as Φ(f) = f, where Φ(f) = f̄. The entire factorial function cannot be unwound from its recursive specification in a finite number of steps. However, we can, for each n = 0, 1, ..., determine in a finite number of steps the partial map fn which is a restriction of facu to {0, 1, ..., n}. The graph of fn is {(0, 1), (1, 1), ..., (n, n!)}. Therefore, to accommodate approximations of facu we can consider all partial maps on N0 and regard f̄ as having the domain {0} ∪ {k | k − 1 ∈ dom f}. Similarly, we can work with all maps from N0 to (N0)⊥ and take f̄(k) = ⊥ precisely when f(k − 1) = ⊥. The factorial function can be regarded as a solution of a recursive equation Φ(f) = f, where f ∈ (N0 ◦→ N0), or equivalently as a solution of a recursive equation Φ(f) = f, where f ∈ (N0 → (N0)⊥):

   Φ(f)(x) = 1               if x = 0,
   Φ(f)(x) = x · f(x − 1)    if x ≥ 1.

The domain P of Φ, as given in the fixpoint theorem above, is (N0 ◦→ N0). That is, Φ maps partial maps to partial maps. The bottom element of (N0 ◦→ N0) is ∅. This corresponds to that map in (N0 → (N0)⊥) which sends every element to ⊥. We denote this map by ⊥.

It is obvious that Φ is order-preserving: We have graph Φ(⊥) = {(0, 1)}, graph Φ2(⊥) = {(0, 1), (1, 1)}, and so on. An easy induction confirms that fn = Φn(⊥) for all n, where {fn} is the sequence of partial maps (the “Kleene sequence”) which approximate facu. Taking the union of all graphs, we see that ⊔n≥0 Φn(⊥) is the map facu : x → x! on N0. This is the least fixpoint of Φ in (N0 ◦→ N0).
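The Kleene sequence fn = Φn(⊥) for facu can be made concrete by representing partial maps as finite dictionaries, with the empty dictionary as ⊥ (a Python illustration):

```python
def phi(f):
    """One application of Φ: extend a partial map f (a dict on N0)
    by the recursive equation for the factorial function."""
    g = {0: 1}  # Φ(f)(0) = 1 for every f, so dom Φ(f) always contains 0
    for x in range(1, max(f, default=-1) + 2):
        if x - 1 in f:
            g[x] = x * f[x - 1]  # Φ(f)(x) = x · f(x-1) where f is defined
    return g

bottom = {}        # ⊥: the partial map with empty domain
f1 = phi(bottom)   # graph {(0, 1)}
f2 = phi(f1)       # graph {(0, 1), (1, 1)}
f4 = phi(phi(f2))  # restriction of facu to {0, 1, 2, 3}
```

Each application adds one point of the graph of facu, mirroring the chain ⊥ ≤ Φ(⊥) ≤ Φ2(⊥) ≤ ... whose supremum is the factorial function.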

B.2 Proof: Maximal Subprogram Body

Theorem 7.3.1 states that if a recursive subprogram G exists for a set of initial trees Tinit = {te | e ∈ E} which can generate all te ∈ Tinit for a given initial instantiation βe, then there also exists a subprogram G′ such that the instantiations (over all segments and all examples) do not share a non-trivial pattern (a common prefix). Therefore, it is feasible to generate a hypothesis about the subprogram body for a given segmentation with recursion points Urec by calculating the maximal pattern of the segments.

Let Urec = pos(tG, G) be the set of recursion points in G. Because G is a recursive subprogram, Urec ≠ ∅ holds. Let Urec be indexed over R = {1, ..., |Urec|} and let W be the set of unfolding indices over R (def. 7.1.28). Let the body of the subprogram G be tG = tG[Urec ← G0]. That is, the recursive calls are replaced by the function variable G with arity 0, and therefore the substitutions in the recursive calls are not given in tG. Let sub : X × R → TΣ(X) be the substitution term of variable xi ∈ X in the r-th recursive call (r ∈ R) with sub(xi, r) = tG|ur◦i (def. 7.1.27).

For better readability we introduce unfolding indices w ∈ W and examples e ∈ E as parameters of the instantiation β : X × W × E → TΣ. We will use variable names with indices to indicate their position. If the position is not relevant, variables are written without indices.

The instantiation β : X × W × E → TΣ of a variable x ∈ X in an unfolding w ∈ W in example e ∈ E is defined by the initial instantiation of this variable and by the substitution terms for this variable. In the first unfolding w = λ, β is the initial instantiation for the given example. Further instantiations can be generated by substitutions:

Definition B.2.1 (Generating Instantiations). From the instantiations in unfolding w of an example e, the instantiation in an unfolding w ◦ r can be generated by application of the substitution:

1. β(x, λ, e) = βe(x).
2. For all w ∈ W, r ∈ R, e ∈ E:
   β(x, w ◦ r, e) = sub(x, r){x1 ← β(x1, w, e), ..., xn ← β(xn, w, e)}.

There can exist substitutions for x with sub(x, r) ∈ X.

Lemma B.2.1 (Identical Instantiations). If for x ∈ X there exists a substitution sub(x, r) = x′, r ∈ R, x′ ∈ X, then it holds for all e ∈ E: ∀w ∈ W ∃v ∈ W with β(x′, w, e) = β(x, v, e).

Page 34: link.springer.com978-3-540-44846-4/1.pdf · Bibliography Aho, A. V., Sethi, R., & Ullman, J. D. (1986). Compilers Principles, Techniques, and Tools. Addison Wesley, Reading, MA. Allen,

B.2 Proof: Maximal Subprogram Body 361

Proof (Identical Instantiations). If for an r ∈ R it holds that sub(x, r) = x′ with x′ ∈ X, then it follows for all e ∈ E and w ∈ W:

   β(x, w ◦ r, e) = sub(x, r){x1 ← β(x1, w, e), ..., xn ← β(xn, w, e)}
                  = x′{x1 ← β(x1, w, e), ..., xn ← β(xn, w, e)}
                  = β(x′, w, e),

and therefore for each e ∈ E and w ∈ W there exists v ∈ W (v = w ◦ r) such that β(x′, w, e) = β(x, v, e).

From lemma B.2.1 follows: If a substitution of x is a replacement of x by another variable x′, then in each example the instantiation of x′ is also an instantiation of x. In general, for x there can exist several substitutions where x is replaced by x′, and additionally, for x′ there can be further substitutions sub(x′, r) ∈ X. We define the set of instantiation generating variables for x and generalize lemma B.2.1.

Definition B.2.2 (Instantiation Generating Variables). We define Xc(x) as the minimal set of variables which directly generate instantiations of x, for which holds:

1. x ∈ Xc(x).
2. If x′ ∈ Xc(x) then {sub(x′, r) | r ∈ R, sub(x′, r) ∈ X} ⊆ Xc(x).

Lemma B.2.2 (Continuation of Identical Instantiations). Let Xc(x) be the set of variables which directly generate instantiations of x. For all x′ ∈ Xc(x) and all e ∈ E holds: ∀w ∈ W ∃v ∈ W with β(x′, w, e) = β(x, v, e).

Proof (Continuation of Identical Instantiations). For the given definition of Xc(x) it must be shown that (1) lemma B.2.2 holds for x, and (2) if x′ ∈ Xc(x) then lemma B.2.2 holds for all x′′ ∈ {sub(x′, r) | r ∈ R, sub(x′, r) ∈ X}.

1. For x ∈ Xc(x) holds: for all e ∈ E and w ∈ W there exists v ∈ W with v = w such that β(x, w, e) = β(x, v, e).

2. Let x′ ∈ Xc(x). By lemma B.2.1 it holds for each x′′ ∈ {sub(x′, r) | r ∈ R, sub(x′, r) ∈ X} that for each e ∈ E and w ∈ W there exists a v′ ∈ W with β(x′′, w, e) = β(x′, v′, e). Because x′ ∈ Xc(x), for each β(x′, v′, e) there must exist a v ∈ W with β(x′, v′, e) = β(x, v, e). Therefore, for each e ∈ E and w ∈ W there exists v ∈ W with β(x′′, w, e) = β(x, v, e).

From lemma B.2.2 follows: In each example, each instantiation of a variable from Xc(x) is an instantiation of x. If there exists a non-trivial pattern p of instantiations of x, then it must hold that p ≤ β(x, w, e) for all w ∈ W and e ∈ E. Based on lemma B.2.2, for all x′ ∈ Xc(x) holds:

Definition B.2.3 (Non-Trivial Pattern of an Instantiation). For all x′ ∈ Xc(x) holds for a non-trivial pattern p of instantiations of x:

1. p ≤ βe(x′) for all e ∈ E, and
2. p ≤ sub(x′, r){x1 ← β(x1, w, e), ..., xn ← β(xn, w, e)} for all e ∈ E, w ∈ W, and r ∈ R with sub(x′, r) ∉ X.

Furthermore, there must exist a non-trivial pattern p′ such that for all x′ ∈ Xc(x) holds:

3. p′ ≤ βe(x′) for all e ∈ E, and
4. p′ ≤ sub(x′, r) for all r ∈ R with sub(x′, r) ∉ X.

The proof of theorem 7.3.1 is divided into the two cases (1) p′ does not contain variables, and (2) p′ does contain variables.

(1) p′ does not contain variables. From var(p′) = ∅ follows:

– p′ = βe(x′) for all e ∈ E, and
– p′ = sub(x′, r) for all r ∈ R with sub(x′, r) ∉ X, and therefore
– p′ = β(x′, w, e) for all x′ ∈ Xc(x), e ∈ E, w ∈ W.

This means that the instantiations of x′ ∈ Xc(x) are identical in all unfoldings over all examples. Therefore, it holds for all w ∈ W and e ∈ E: tG{x′ ← β(x′, w, e)} = tG{x′ ← p′}. Consequently, variables x′ ∈ Xc(x) can be replaced by p′ in the program body.

Furthermore, for all variables x′′ ∈ X, e ∈ E, w ∈ W, and r ∈ R holds:

   β(x′′, w ◦ r, e) = sub(x′′, r) {x′ ← β(x′, w, e) | x′ ∈ Xc(x)}
                                  {x′ ← β(x′, w, e) | x′ ∈ X \ Xc(x)}
                    = sub(x′′, r) {x′ ← p′ | x′ ∈ Xc(x)}
                                  {x′ ← β(x′, w, e) | x′ ∈ X \ Xc(x)}.

That is, in all substitutions the variables from Xc(x) can be replaced by p′, and these new substitutions generate identical instantiations in all unfoldings over all examples.

Let X′ = X \ Xc(x) and m = |X′|. Because |Xc(x)| > 0, it holds that |X′| < |X|. Let i1, ..., im be the indices of the variables in X′. A new subprogram G′ can be constructed which does not need the variables from Xc(x):

Head: G′(xi1, ..., xim).
Recursive Calls: Each recursive call uses the new program name G′ and the new substitutions. The call at position ur ∈ Urec, r ∈ R is:

   G′( sub(xi1, r){x′ ← p′ | x′ ∈ Xc(x)},
       ...
       sub(xim, r){x′ ← p′ | x′ ∈ Xc(x)} ).

Maximal Pattern of Body: The maximal patterns of the instantiations can be included in the body: tG′ = tG{x′ ← p′ | x′ ∈ Xc(x)}.


Body: The program body can be constructed as:

   tG′ = tG {x′ ← p′ | x′ ∈ Xc(x)}
            [ur ← G′( sub(xi1, r){x′ ← p′ | x′ ∈ Xc(x)},
                      ...
                      sub(xim, r){x′ ← p′ | x′ ∈ Xc(x)} ) | ur ∈ Urec].

The initial instantiations of variables x′ ∈ Xc(x) are no longer used in the examples, because these variables do not belong to the new subprogram. The initial instantiations of the remaining variables in X′ are unaffected. With the reduced initial instantiations the new subprogram still has the same language for each example.

(2) p′ does contain variables. Let |var(p′)| = m the number of variablesin p′. For each variable xj ∈ Xc(x) new variables xji, . . . , xjm are con-structed. The new variable set is X ′ = {xjk | xj ∈ Xc(x), k = 1, . . . ,m} ∪X \ Xc(x). For each xj ∈ Xc(x) for all e ∈ E and w ∈ W theremust exist terms β(xj1, w, e), . . . , β(xjm, w, e) ∈ TΣ such that β(xj , w, e) =p′[β(xj1 , w, e), . . . , β(xjm, w, e)] holds – because p′ is a non-trivial pattern ofeach β(xj , w, e).

Furthermore, for each xj ∈ Xc(x) and each r ∈ R with sub(xj , r) ∉ X there must exist terms sub(xj1, r), . . . , sub(xjm, r) ∈ TΣ such that sub(xj , r) = p′[sub(xj1, r), . . . , sub(xjm, r)] holds – because p′ is a non-trivial pattern of each sub(xj , r).

For all w ∈W , e ∈ E, and r ∈ R holds

β(xj , w ◦ r, e) = p′[β(xj1, w ◦ r, e), . . . , β(xjm, w ◦ r, e)]
= sub(xj , r){x1 ← β(x1, w, e), . . . , xn ← β(xn, w, e)}
= p′[sub(xj1, r), . . . , sub(xjm, r)]{x1 ← β(x1, w, e), . . . , xn ← β(xn, w, e)}
= p′[sub(xj1, r){x1 ← β(x1, w, e), . . . , xn ← β(xn, w, e)}, . . . , sub(xjm, r){x1 ← β(x1, w, e), . . . , xn ← β(xn, w, e)}].

It follows for k = 1, . . . ,m, w ∈W , e ∈ E, and r ∈ R:

β(xjk, w ◦ r, e) = sub(xjk , r){x1 ← β(x1, w, e), . . . , xn ← β(xn, w, e)}.

Consequently, the instantiations of all variables xjk can be generated by the (new) substitutions sub(xjk , r) in all unfoldings for all examples.


For all x′ ∈ X ′, w ∈ W , e ∈ E, and r ∈ R holds:

β(x′, w ◦ r, e) = sub(x′, r){xj ← β(xj , w, e) | xj ∈ Xc(x)}{xj ← β(xj , w, e) | xj ∈ X \ Xc(x)}

= sub(x′, r){xj ← p′[β(xj1, w, e), . . . , β(xjm, w, e)] | xj ∈ Xc(x)}{xj ← β(xj , w, e) | xj ∈ X \ Xc(x)}

= sub(x′, r){xj ← p′[xj1, . . . , xjm] | xj ∈ Xc(x)}{xjk ← β(xjk, w, e) | xj ∈ Xc(x), k = 1, . . . , m}{xj ← β(xj , w, e) | xj ∈ X \ Xc(x)}

= sub(x′, r){xj ← p′[xj1, . . . , xjm] | xj ∈ Xc(x)}{x′′ ← β(x′′, w, e) | x′′ ∈ X′}.

Consequently, substitutions can be modified in the following way: Each variable xj ∈ Xc(x) is replaced by the pattern p′, in which variables are replaced by the new variables xj1, . . . , xjm. All substitutions are terms in TΣ(X′) and do not refer to variables in Xc(x).

Furthermore, for all w ∈W , e ∈ E holds:

tG{xj ← β(xj , w, e) | xj ∈ Xc(x)}{xj ← β(xj , w, e) | xj ∈ X \ Xc(x)}

= tG{xj ← p′[β(xj1, w, e), . . . , β(xjm, w, e)] | xj ∈ Xc(x)}{xj ← β(xj , w, e) | xj ∈ X \ Xc(x)}

= tG{xj ← p′[xj1, . . . , xjm] | xj ∈ Xc(x)}{xjk ← β(xjk, w, e) | xj ∈ Xc(x), k = 1, . . . , m}{xj ← β(xj , w, e) | xj ∈ X \ Xc(x)}

= tG{xj ← p′[xj1, . . . , xjm] | xj ∈ Xc(x)}{x′ ← β(x′, w, e) | x′ ∈ X′}.

If in the body tG the variables xj ∈ Xc(x) are replaced by pattern p′, which is instantiated with the new variables xj1, . . . , xjm, then the same unfoldings are generated with instantiations β(x′, w, e) of variables x′ ∈ X′.

Consequently, a new subprogram can be constructed in an analogous way as in case 1:

– Replace variables xj ∈ Xc(x) by pattern p′, instantiated with the variables xj1, . . . , xjm, in the body and in all substitution terms.
– Delete the substitutions for the obsolete variables from Xc(x) in all calls.
– Introduce the substitutions for the new variables in all calls.
– Construct the parameter list of the program head consistent with the variables X′ used in the recursive calls.

The initial instantiations for the new variables result from β(xj , w, e) = p′[β(xj1 , w, e), . . . , β(xjm, w, e)] (introduced above). The initial instantiations for variables from Xc(x) are no longer necessary. With the reduced initial instantiations the new subprogram still has the same language for each example.


Consequences. If instantiations of a variable x ∈ X in a subprogram share a non-trivial pattern p′, then a new – maximized – subprogram with new initial instantiations can be constructed. By moving p′ to the program body, the number of nodes (symbols) in the initial instantiations gets reduced (case 1). If a variable of the new subprogram has instantiations which share a non-trivial pattern, a new subprogram can be constructed iteratively (case 2). Because the number of nodes in the initial instantiations is finite and is reduced in each iteration, after a finite number of steps a subprogram is constructed together with initial instantiations such that the instantiations of each variable do not share a non-trivial pattern.
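The termination argument can be stated as a decreasing measure (a sketch in our own notation, not from the text; μ counts nodes, and λ is used here for the initial, empty unfolding word):

```latex
% Sketch: measure for the iterated construction of subprograms.
% \mu(G,\beta) counts the nodes of all initial instantiations.
\mu(G,\beta) \;=\; \sum_{x \in X}\; \sum_{e \in E}
  \bigl|\,\mathrm{pos}\bigl(\beta(x,\lambda,e)\bigr)\bigr|

% Extracting a non-trivial pattern p' (case 1 or case 2) removes at
% least one node from each affected initial instantiation, hence
\mu(G',\beta') \;<\; \mu(G,\beta),

% and the iteration terminates after finitely many steps.
```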

B.3 Proof: Uniqueness of Substitutions

For the proof of theorem 7.3.4, the following lemma is needed:

Lemma B.3.1 (No Common Non-Trivial Pattern for Variables). Let ts ∈ TΣ(Xi) be a term such that for xj ∈ Xi and r ∈ R holds: ∀w ∈ W: βw◦r(xj) = ts{x1 ← βw(x1), . . . , xmi ← βw(xmi)}.

1. For all u with u ∈ ⋂w∈W pos(βw◦r(xj)),
2. if there exists a variable xk ∈ Xi with ∀w ∈ W: βw◦r(xj)|u = βw(xk), and
3. ∀u′ < u ∀w1, w2 ∈ W: node(βw1◦r(xj), u′) = node(βw2◦r(xj), u′),

then ts|u = xk.

Proof (No Common Non-Trivial Pattern for Variables). Assume there exists a position u which together with variable xk ∈ X fulfills properties (1), (2), and (3) stated in lemma B.3.1, but for which ts|u = xk does not hold. We must consider two cases: u ∈ pos(ts) and u ∉ pos(ts).

Case 1: u ∈ pos(ts)

1.a: Let ts|u = xh with xh ≠ xk. Then holds for all w ∈ W:

βw◦r(xj)|u = βw(xk) (because of lemma B.3.1, 2)
βw◦r(xj)|u = βw(xh) (by assumption)

and therefore, for all w ∈ W: βw(xk) = βw(xh). Because ti (that is, the body of subprogram Gi, see theorem 7.3.4) is a maximal pattern, it must hold that xk = xh, which is a contradiction to the assumption.

1.b: Let ts|u ∉ Xi. Then holds for all w ∈ W:

βw◦r(xj)|u = βw(xk) (because of lemma B.3.1, 2)

and, because βw◦r(xj) = ts{x1 ← βw(x1), . . . , xmi ← βw(xmi)}:

βw(xk) = ts{x1 ← βw(x1), . . . , xmi ← βw(xmi)}|u
βw(xk) = ts|u{x1 ← βw(x1), . . . , xmi ← βw(xmi)} (because u ∈ pos(ts)).


Because by assumption ts|u is not in Xi, ts|u must be a non-trivial pattern of the instantiations βw(xk) of xk. This is a contradiction to the assumption that ti is a maximal pattern of unfoldings over β.

Case 2: u ∉ pos(ts). Because u is a position of all βw◦r(xj) and by assumption

βw◦r(xj) = ts{x1 ← βw(x1), . . . , xmi ← βw(xmi)}

there must exist a position us < u with us ∈ pos(ts) and ts|us ∈ Xi. Let ts|us = xh. Then for all w ∈ W holds:

βw◦r(xj)|us = ts{x1 ← βw(x1), . . . , xmi ← βw(xmi)}|us
= ts|us{x1 ← βw(x1), . . . , xmi ← βw(xmi)} (because us ∈ pos(ts))
= xh{x1 ← βw(x1), . . . , xmi ← βw(xmi)} (because ts|us = xh)
= βw(xh).

By assumption, equation (3) in lemma B.3.1 holds for all u′ < u and therefore also for us < u: for all w1, w2 ∈ W

node(βw1◦r(xj), us) = node(βw2◦r(xj), us)
node(βw1(xh), λ) = node(βw2(xh), λ).

Consequently, the instantiations of variable xh share a non-trivial pattern, which is a contradiction to the assumption that ti is a maximal pattern of unfoldings over β.

Now we can prove theorem 7.3.4:

Proof (Uniqueness of Substitutions). Let ts ∈ TΣ(X) be a term which fulfills equation (*) ∀w ∈ W: βw◦r(xj) = ts{x1 ← βw(x1), . . . , xmi ← βw(xmi)} given in theorem 7.3.4. We must distinguish the cases that (1) ts has a variable at position u and (2) the subtree in ts at position u does not contain a variable.

Case 1: Let u ∈ pos(ts) with ts|u ∈ Xi and let ts|u = xk. Then holds: βw◦r(xj)|u = βw(xk) and, because of (*), for all u′ < u and all w1, w2 ∈ W: node(βw1◦r(xj), u′) = node(βw2◦r(xj), u′). Because of lemma B.3.1 holds sub(xj , r)|u = ts|u and for all u′ ≤ u: node(sub(xj , r), u′) = node(ts, u′).

Case 2: Let u ∈ pos(ts) with ts|u ∈ TΣ. Then holds:

βw◦r(xj)|u = ts{x1 ← βw(x1), . . . , xmi ← βw(xmi)}|u
= ts|u{x1 ← βw(x1), . . . , xmi ← βw(xmi)}
= ts|u.


2.a: Let u ∈ pos(sub(xj , r)). Let u′ ∈ pos(sub(xj , r)) be a position with u ≤ u′ and let sub(xj , r)|u′ ∈ Xi with sub(xj , r)|u′ = xk. Then holds for all w ∈ W: βw◦r(xj)|u′ = βw(xk), and because βw◦r(xj)|u = ts|u it holds that βw◦r(xj)|u′ = ts|u′ and therefore βw(xk) = ts|u′. The instantiations of xk then must all be identical and share the non-trivial pattern ts|u′. This is a contradiction to the assumption that ti is a maximal pattern of unfoldings over β. It follows: if u ∈ pos(sub(xj , r)) then the term sub(xj , r)|u cannot contain variables and it must hold:

βw◦r(xj)|u = sub(xj , r)|u{x1 ← βw(x1), . . . , xmi ← βw(xmi)}
= sub(xj , r)|u = ts|u.

2.b: Let u ∉ pos(sub(xj , r)). Because u is a position in all βw◦r(xj), in sub(xj , r) at a position u′ < u there must be a variable xk ∈ Xi. Because then holds for all w ∈ W: βw◦r(xj)|u′ = βw(xk), and for all u′′ < u and w1, w2 ∈ W: node(βw1◦r(xj), u′′) = node(βw2◦r(xj), u′′), it must hold for ts because of lemma B.3.1 that ts|u′ = xk. Because u′ < u this is a contradiction to the assumption u ∈ pos(ts).

Because of case 1 holds for all positions u ∈ pos(ts) with ∃u′: ts|u′ ∈ Xi and u ≤ u′ that node(ts, u) = node(sub(xj , r), u). Because of case 2 holds for all positions u ∈ pos(ts) with ¬∃u′: ts|u′ ∈ Xi and u ≤ u′ that node(ts, u) = node(sub(xj , r), u). Therefore, for all u ∈ pos(ts) holds node(ts, u) = node(sub(xj , r), u) and therefore ts = sub(xj , r).


C. Sample Programs and Problems

C.1 Fibonacci with Sequence Referencing Function

The Fibonacci function as inferred with a genetic programming algorithm by Koza (1992, chap. 18.3):

(+ (SRF (- J 2) 0)
   (SRF (+ (+ (- J 2) 0) (SRF (- J J) 0))
        (SRF (SRF 3 1) 1)))

A realization in Lisp:

(defun kfib (x)
  (+ (srf x (- x 2) 0)
     (srf x
          (+ (+ (- x 2) 0) (srf x (- x x) 0))
          (srf x (srf x 3 1) 1))))

; sequence referencing function (srf cur x def)
; if 0 < x <= cur then fib(x), else default def
(defun srf (cur x def)
  (cond ((<= x 0) def)
        ((<= x cur) (fib x))
        (T def)))

; Koza's SRF calculates the fibonacci numbers for x = 0..20
; in ascending order and saves them in a global table;
; we just calculate it by standard fibonacci
(defun fib (x)
  (cond ((= 0 x) 1)
        ((= 1 x) 1)
        (T (+ (fib (- x 1)) (fib (- x 2))))))

A simplification of the synthesized function:

(defun skfib (x)
  (+ (srf x (- x 2) 0)
     (srf x (- x 1)
          (srf x (srf x 3 1) 1))))
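Reading srf as a table lookup, SRF(k, d) = F(k) for 0 < k ≤ x and d otherwise, the simplified function can be seen to compute the standard recurrence (our reading, not from the source; F denotes the sequence computed by fib above, with F(0) = F(1) = 1):

```latex
% For x >= 3 both indices x-2 and x-1 lie in (0, x], so the default
% arguments are never used and skfib reduces to the recurrence:
\mathrm{skfib}(x) \;=\; \mathrm{SRF}(x-2,\,0) + \mathrm{SRF}(x-1,\,d)
\;=\; F(x-2) + F(x-1) \;=\; F(x).
```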

U. Schmid: Inductive Synthesis of Functional Programs, LNAI 2654, pp. 369-389, 2003. Springer-Verlag Berlin Heidelberg 2003


C.2 Inducing ‘Reverse’ with Golem

Trace of the ILP system Golem (Muggleton & Feng, 1990) for inducing the recursive clause for reverse.

Positive Examples:

rev([],[]).
rev([1],[1]).
rev([2],[2]).
rev([3],[3]).
rev([4],[4]).
rev([1,2],[2,1]).
rev([1,3],[3,1]).
rev([1,4],[4,1]).
rev([2,2],[2,2]).
rev([2,3],[3,2]).
rev([2,4],[4,2]).
rev([0,1,2],[2,1,0]).
rev([1,2,3],[3,2,1]).

Negative Examples:

rev([1],[]).
rev([0,1],[0,1]).
rev([0,1,2],[2,0,1]).
app([1],[0],[0,1]).

Background Knowledge:

!- mode(rev(+,-)).
!- mode(app(+,+,-)).
rev([],[]).
rev([1],[1]).
rev([2],[2]).
rev([3],[3]).
rev([4],[4]).
rev([1,2],[2,1]).
rev([1,3],[3,1]).
rev([1,4],[4,1]).
rev([2,2],[2,2]).
rev([2,3],[3,2]).
rev([2,4],[4,2]).
rev([0,1,2],[2,1,0]).
rev([1,2,3],[3,2,1]).
app([],[],[]).
app([1],[],[1]).
app([2],[],[2]).
app([3],[],[3]).
app([4],[],[4]).
app([],[1],[1]).
app([],[2],[2]).
app([],[3],[3]).
app([],[4],[4]).
app([1],[0],[1,0]).
app([2],[1],[2,1]).


app([2],[1],[2,1]).
app([3],[1],[3,1]).
app([4],[1],[4,1]).
app([2],[2],[2,2]).
app([3],[2],[3,2]).
app([4],[2],[4,2]).
app([2,1],[0],[2,1,0]).
app([3,2],[1],[3,2,1]).
nat(0).
nat(1).
nat(2).
nat(3).
nat(4).
nat(5).
nat(6).
nat(7).
nat(8).
nat(9).

Trace:

[:- mode(rev(+,-)). - Time taken 0ms]
[:- mode(app(+,+,-)). - Time taken 0ms]
[Rlgg of pair:]
rev([],[]).
rev([2],[2]).
[is]

rev(A,A) :- app(A,A,B).
[Number of negatives covered=0]
[Number of positives covered=2]
[Rlgg of pair:]
rev([],[]).
rev([1,2],[2,1]).
[is]

rev(A,B) :- app(C,D,B).
[Number of negatives covered=1]
[Rlgg of pair:]
rev([],[]).
rev([1,3],[3,1]).
[is]

rev(A,B) :- app(C,D,B).
[Number of negatives covered=1]
[Rlgg of pair:]
rev([1],[1]).
rev([1,4],[4,1]).
[is]

rev([1|A],[B|C]) :- rev(A,A), rev(C,C), app([2,1],[0],[2,1,0]), app(A,[1],[B|C]), app(D,A,[B]), nat(1), nat(B), rev([],[]), app([1],[0],[1,0]), app([1],[],[1]), app([2],[1],[2,1]), app([2],[2],[2,2]), app([3,2],[1],[3,2,1]), app([B],[],[B]), app([B],C,[B|C]), app([],[1],[1]), app([],[B],[B]), app([],[],[]), nat(0), nat(2).

[Number of negatives covered=0]


[Number of positives covered=4]
[Rlgg of pair:]
rev([2],[2]).
rev([1,3],[3,1]).
[is]

rev([A|B],[C|D]) :- rev([1|E],[F|D]), rev(B,B), rev(D,D), app([G,A],[H],[G,A,H]), app(B,[A],[C|D]), app(I,B,[C]), nat(A), nat(C), rev([1],[1]), rev([F],[F]), rev([],[]), rev(E,E), app([1],[0],[1,0]), app([2,1],[0],[2,1,0]), app([3,2],[1],[3,2,1]), app([A],[H],[A,H]), app([A],[],[A]), app([A],I,[A|I]), app([C,F],[J],[C,F,J]), app([C],[1],[C,1]), app([C],[A],[C,A]), app([C],[F],[C,F]), app([C],[],[C]), app([C],D,[C|D]), app([C],E,[C|E]), app([C],I,[C|I]), app([F],[],[F]), app([F],D,[F|D]), app([F],E,[F|E]), app([G],[A],[G,A]), app([],[A],[A]), app([],[C],[C]), app([],[F],[F]), app([],[],[]), app(B,[F],[K|E]), nat(1), nat(F), nat(G), nat(H).

[Number of negatives covered=0]
[Number of positives covered=2]
[Rlgg of pair:]
rev([2],[2]).
rev([2,3],[3,2]).
[is]

rev([2|A],[B|C]) :- rev([1|A],[D|E]), rev([F|C],[F|C]), rev(A,A), rev(C,C), app([2],[2],[2,2]), app([2],C,[2|C]), app([3,2],[1],[3,2,1]), app(A,[2],[B|C]), app(G,A,[B]), nat(2), nat(B), rev([1],[1]), rev([D],[D]), rev([F],[F]), rev([],[]), rev(E,E), app([1],[0],[1,0]), app([2,1],[0],[2,1,0]), app([2],[1],[2,1]), app([2],[],[2]), app([2],E,[2|E]), app([2],G,[2|G]), app([3],[1],[3,1]), app([3],[2],[3,2]), app([B,F],[H],[B,F,H]), app([B],[1],[B,1]), app([B],[2],[B,2]), app([B],[F],[B,F]), app([B],[],[B]), app([B],C,[B|C]), app([B],E,[B|E]), app([B],G,[B|G]), app([D],[],[D]), app([D],C,[D|C]), app([F],E,[F|E]), app([],[2],[2]), app([],[B],[B]), app([],[D],[D]), app([],[],[]), app(A,E,I), app(E,J,[1]), app(G,[D],[B|J]), app(J,E,[1]), app(K,[B],[3|G]), app(K,A,[3]), nat(1), nat(3), nat(D), nat(F).

[Number of negatives covered=0]
[Number of positives covered=2]
[Rlgg of pair:]
rev([3],[3]).
rev([1,4],[4,1]).
[is]

rev([A|B],[C|D]) :- rev(B,B), rev(D,D), app(B,[A],[C|D]), app(E,B,[C]), nat(A), nat(C), rev([],[]), app([A],[],[A]), app([C],[],[C]), app([C],D,[C|D]), app([],[A],[A]), app([],[C],[C]), app([],[],[]).

[Number of negatives covered=0]
[Number of positives covered=10]
[Rlgg of pair:]
rev([1,4],[4,1]).
rev([2,3],[3,2]).


[is]
rev([A,B],[B,A]) :- rev([A],[A]), rev([B],[B]), rev([],[]), app([A],[],[A]), app([B],[A],[B,A]), app([B],[],[B]), app([C,A],[D],[C,A,D]), app([],[A],[A]), app([],[B],[B]), app([],[],[]), nat(A), nat(B), rev([C],[C]), app([A],[D],[A,D]), app([C],[A],[C,A]), app([C],[],[C]), app([],[C],[C]), nat(C), nat(D).

[Number of negatives covered=0]
[Number of positives covered=6]
[Pos-neg cover=10,potential-examples=3]
[Rlgg of :]
rev([],[]).
rev([3],[3]).
rev([1,4],[4,1]).
[is]

rev(A,B).
[Number of negatives covered=3]
[Overgeneral]
[Rlgg of :]
rev([0,1,2],[2,1,0]).
rev([3],[3]).
rev([1,4],[4,1]).
[is]

rev([A|B],[C|D]) :- rev(B,E), app(E,[A],[C|D]), nat(A), nat(C), rev([],[]), app([],[],[]).

[Number of negatives covered=0]
[OK]
[Rlgg of :]
rev([1,2,3],[3,2,1]).
rev([3],[3]).
rev([1,4],[4,1]).
[is]

rev([A|B],[C|D]) :- rev(B,E), rev(F,D), app(E,[A],[C|D]), nat(A), nat(C), rev([],[]), app([A],[],[A]), app([],[A],[A]), app([],[],[]).

[Number of negatives covered=0]
[OK]
[Best-atom rev([0,1,2],[2,1,0])]
[Pos-neg cover=12,potential-examples=0]
[Cover=12]
[Reducing clause]
[Adding clause rev([A|B],[C|D]) :- rev(B,E), app(E,[A],[C|D]).]
[REMOVED: 12 FORES, 0 NEGS]
[FORES: 1, BACKS: 41, NEGS: 4, RULES: 1]
[Induction time 498ms]

C.3 Finite Program for ‘Unstack’

#S(TPLAN :PNAME P1
   :SUPLAN
   (#S(PSTEP :INSTOP NIL :PARENT NIL
       :CHILD ((CLEAR (SUCC (SUCC O3))) (CLEAR (SUCC O3)) (CLEAR O3))
       :PREC NIL :ADD NIL :DEL NIL :NODEID 0 :LEVEL 0)
    #S(PSTEP :INSTOP (UNSTACK (SUCC O3))
       :PARENT ((CLEAR (SUCC (SUCC O3))) (CLEAR (SUCC O3)) (CLEAR O3))
       :CHILD ((ON O2 O3) (CLEAR (SUCC (SUCC O3))) (CLEAR (SUCC O3)))
       :PREC ((ON O2 O3) (CLEAR O2)) :ADD ((CLEAR O3))
       :DEL ((CLEAR O2) (ON O2 O3)) :NODEID 1 :LEVEL 1)
    #S(PSTEP :INSTOP (UNSTACK (SUCC (SUCC O3)))
       :PARENT ((ON O2 O3) (CLEAR (SUCC (SUCC O3))) (CLEAR (SUCC O3)))
       :CHILD ((ON O1 O2) (ON O2 O3) (CLEAR (SUCC (SUCC O3))))
       :PREC ((ON O1 O2) (CLEAR O1)) :ADD ((CLEAR O2))
       :DEL ((CLEAR O1) (ON O1 O2)) :NODEID 2 :LEVEL 2))

:COPLAN
(#S(PSTEP :INSTOP NIL :PARENT NIL :CHILD ((CLEAR O3))
    :PREC NIL :ADD NIL :DEL NIL :NODEID 0 :LEVEL 0)
 #S(PSTEP :INSTOP (UNSTACK (SUCC O3)) :PARENT ((CLEAR O3))
    :CHILD ((CLEAR (SUCC O3)))
    :PREC ((ON O2 O3) (CLEAR O2)) :ADD ((CLEAR O3))
    :DEL ((CLEAR O2) (ON O2 O3)) :NODEID 1 :LEVEL 1)
 #S(PSTEP :INSTOP (UNSTACK (SUCC (SUCC O3)))
    :PARENT ((CLEAR (SUCC O3)))
    :CHILD ((CLEAR (SUCC (SUCC O3))))
    :PREC ((ON O1 O2) (CLEAR O1)) :ADD ((CLEAR O2))
    :DEL ((CLEAR O1) (ON O1 O2)) :NODEID 2 :LEVEL 2))

:TERM
(IF (CLEAR O3 S)
    S
    (UNSTACK (SUCC O3) S
      (IF (CLEAR (SUCC O3) S)
          S
          (UNSTACK (SUCC (SUCC O3)) S
            (IF (CLEAR (SUCC (SUCC O3)) S) S OMEGA)))))

:PTYPE SEQ :NEWDAT NIL :NEWPRED NIL :NPFCT NIL
:GOALPRED (CLEAR O3) :BOTTOM O3
:PASSOCS ((O2 (SUCC O3)) (O1 (SUCC (SUCC O3))))
:PAFCT
(DEFUN SUCC (X S)
  (COND ((NULL S) NIL)
        ((AND (EQUAL (FIRST (CAR S)) 'ON)
              (EQUAL (NTH 2 (CAR S)) X))
         (NTH 1 (CAR S)))
        (T (SUCC X (CDR S)))))
:PPARAMS NIL :PVALUES NIL)


For sequences, the slots newdat, newpred, and npfct are not required. The slots pparams and pvalues are filled after folding.

C.4 Recursive Control Rules for the ‘Rocket’ Domain

; Control Knowledge for ROCKET
; ----------------------------
; call for example (rocket '(o1 o2 o3) '((at o1 a) (at o2 a) (at o3 a) (at rocket a)))
; or, including generation of oset
; (start-r '((at o1 b) (at o2 b) (at o3 b)) '((at o1 a) (at o2 a) (at o3 a) (at rocket a)))

; predefined set-selectors
(defun pick (oset) (car oset))
(defun rst (oset) (cdr oset))

; generalized predicates inferred during plan transformation
(DEFUN AT* (ARGS S)
  (COND ((NULL ARGS) T)
        ((AND (NULL S) (NOT (NULL ARGS))) NIL)
        ((AND (EQUAL (CAAR S) 'AT)
              (INTERSECTION ARGS (CDAR S)))
         (AT* (SET-DIFFERENCE ARGS (CDAR S)) (CDR S)))
        (T (AT* ARGS (CDR S)))))

(DEFUN INSIDE* (ARGS S)
  (COND ((NULL ARGS) T)
        ((AND (NULL S) (NOT (NULL ARGS))) NIL)
        ((AND (EQUAL (CAAR S) 'INSIDE)
              (INTERSECTION ARGS (CDAR S)))
         (INSIDE* (SET-DIFFERENCE ARGS (CDAR S)) (CDR S)))
        (T (INSIDE* ARGS (CDR S)))))

; explicit operator application
; in combination with DPlan, the add-del-lists are applied to s
(defun unload (o s)
  (print `(unload ,o ,s))
  (cond ((null s) nil)
        ((member o (car s) :test 'equal) (cons (list 'at o 'b) (cdr s)))
        (T (cons (car s) (unload o (cdr s))))))

(defun loadr (o s)
  (print `(load ,o ,s))
  (cond ((null s) nil)
        ((member o (car s) :test 'equal)
         (cons (list 'inside o 'rocket) (cdr s)))
        (T (cons (car s) (loadr o (cdr s))))))

(defun move-rocket (s)
  (print `(move-rocket ,s))
  (cond ((null s) nil)
        ((equal (car s) '(at rocket a)) (cons (list 'at 'rocket 'b) (cdr s)))
        (T (cons (car s) (move-rocket (cdr s))))))

; generalized control rules
; abstraction from destination (B) (for at*) and vehicle (Rocket) (for inside*)
(defun unload-all (oset s)
  (if (at* oset s)
      s
      (unload (pick oset) (unload-all (rst oset) s))))

(defun load-all (oset s)
  (if (inside* oset s)
      s
      (loadr (pick oset) (load-all (rst oset) s))))

(defun rocket (oset s)
  (unload-all oset (move-rocket (load-all oset s))))


; "meta"-function, generating the set of objects to be transported
; from the top-level goals
(defun start-r (g s) (rocket (make-co g '(at* CO x)) s))

(defun make-co (goal newpat)
  (cond ((null goal) nil)
        ((string< (string (caar goal)) (string (car newpat)))
         (cons (nth (position 'CO newpat) (car goal))
               (make-co (cdr goal) newpat)))))

Interleaving ‘at’ and ‘inside’. The generalized predicates (at* oset B) and (inside* oset Rocket) are complements. If all objects are at a location, no object is inside the vehicle, and vice versa. The unload-all function presupposes that all objects in oset are inside the rocket, and the load-all function presupposes that all objects in oset are at the current location. The construction of a complex object from the plan is driven by the top-level goals (or, for a sub-plan, by the predicates in its root node). After transforming both sub-plans into finite programs and generalizing over them, it becomes clear that both sub-plans share the parameter oset with initial value (o1 o2 o3).

Analyzing the relationship between the objects in the at*-set and the inside*-set could lead to an alternative implementation of these generalized predicates: (at* oset l) is true if no object is inside the rocket, that means, if for (inside* oset rocket) the oset is empty (s contains no literal (inside o rocket)); and analogously for (inside* oset rocket).
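The complement relation described above can be stated as an invariant over reachable states (our formalization, not from the source; O denotes the set of objects to be transported):

```latex
% In every reachable state s, each object is either at a location or
% inside the rocket, never both:
O \;=\; \{\, o \mid (\mathrm{at}\;o\;l) \in s \,\}
  \;\uplus\;
  \{\, o \mid (\mathrm{inside}\;o\;\mathrm{rocket}) \in s \,\}

% Hence, for any subset oset \subseteq O:
\mathrm{at}^{*}(oset, s)
\iff
\{\, o \in oset \mid (\mathrm{inside}\;o\;\mathrm{rocket}) \in s \,\} = \emptyset .
```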

C.5 The ‘Selection Sort’ Domain

Sorting Lists with Three Elements. The universal plan for sorting lists with three elements is given in figure C.1. Note that, in constructing the universal plan, it is random whether (swap p q) or (swap q p) is the first instantiation. Because the operator is symmetric, both applications result in an identical state. Only the first application is integrated in the plan. Restricting swapping to go only from smaller positions to larger ones (or the other way round) can be done by extending the state specifications by ((gt P3 P2) (gt P3 P1) (gt P2 P1)) and the application condition of the swap operator to ((isc p n1) (isc q n2) (gt n1 n2) (gt q p)).

[Figure: the universal plan as a tree over the six permutation states of ((ISC P1 .) (ISC P2 .) (ISC P3 .)), with edges labeled (SWAP P1 P2), (SWAP P1 P3), and (SWAP P2 P3) leading to the sorted goal state ((ISC P1 1) (ISC P2 2) (ISC P3 3)).]

Fig. C.1. Universal Plan for Sorting Lists with Three Elements


Minimal Spanning Trees. The nine minimal spanning trees of the 3-sort problem are given in figures C.2, C.3 and C.4. Three of these trees (the last three given) are generalizable to the selsort program: namely, all trees where the branching factor is as regular as possible (i.e., 3 to 2 vs. 3 to 1).

[Figure: three of the nine minimal spanning trees over the six permutation states of the 3-sort problem, each edge labeled with a SWAP operator.]

Fig. C.2. Minimal Spanning Trees for Sorting Lists with Three Elements

C.6 Recursive Control Rules for the ‘Tower’ Domain

; TOWER -- assuming subgoal independence
; abstract representation of control knowledge
; G(n,n,s) = tow(n,n)(s, putt1(n,s))
; G(i,n,s) = tow(i,n)(s, put1(i,s(i),G(s(i),s,n)))
; put1(i,s(i),s) = put(i,s(i),clear(i,clear(s(i),s)))
; putt1(i,s) = putt(i,clear(i,s))
; clear(x,s) = ct(x,s)(s, putt(f(x),clear(f(x),s)))

; main: G(1,n,s)
; ------------------------------------------------------------------
; call (G 1 maxblock s)
; s for maxblock = 3
; ((on 1 2) (on 2 3) (ct 1) (ont 3))
; ((on 1 3) (on 3 2) (ct 1) (ont 2))
; ((on 2 1) (on 1 3) (ct 2) (ont 3))
; ((on 2 3) (on 3 1) (ct 2) (ont 1))
; ((on 3 1) (on 1 2) (ct 3) (ont 2))
; ((on 3 2) (on 2 1) (ct 3) (ont 1))
; ((on 1 2) (ct 1) (ct 3) (ont 2) (ont 3))
; ((on 1 3) (ct 1) (ct 2) (ont 2) (ont 3))
; ((on 2 1) (ct 2) (ct 3) (ont 1) (ont 3))
; ((on 2 3) (ct 2) (ct 1) (ont 1) (ont 3))
; ((on 3 1) (ct 3) (ct 2) (ont 1) (ont 2))
; ((on 3 2) (ct 3) (ct 1) (ont 1) (ont 2))


[Figure: three further minimal spanning trees over the six permutation states of the 3-sort problem.]

Fig. C.3. Minimal Spanning Trees for Sorting Lists with Three Elements

[Figure: the remaining three minimal spanning trees of the 3-sort problem.]

Fig. C.4. Minimal Spanning Trees for Sorting Lists with Three Elements


; ((ct 1) (ct 2) (ct 3) (ont 1) (ont 2) (ont 3))
; some s for maxblock > 3
; ((on 4 2) (on 2 1) (on 1 3) (ct 4) (ont 3))
; ((on 2 5) (on 5 1) (on 4 3) (ct 2) (ct 4) (ont 1) (ont 3))
; ------------------------------------------------------------------
; this would also give true if block n is ont and there are
; unsorted blocks above it!
(defun tow (i n s)
  (cond ((eq i n) (cond ((member (cons 'ont (list n)) s :test 'equal)
                         (print `(tower ,@s)) T)
                        (T nil)))
        (T (cond ((and (member (cons 'on (list i (1+ i))) s :test 'equal)
                       (tow (1+ i) n s)) T)
                 (T nil)))))

(defun putt1 (i s) (puttable i (clear i s)))
(defun put1 (i s) (put i (1+ i) (clear i (clear (1+ i) s))))

(defun clear (x s)
  (cond ((member (cons 'ct (list x)) s :test 'equal)
         (print `(block ,@(list x) is clear now)) s)
        (T (print 'puttable-call-by-clear)
           (puttable (f x s) (clear (f x s) s)))))

(defun put (x y s)
  (print s)
  (cond ((null s) (print 'error-in-put) nil)
        ((and (member (cons 'ct (list x)) s :test 'equal)
              (member (cons 'ct (list y)) s :test 'equal))
         (print `(put ,@(list x) on ,@(list y) in ,@s))
         (exec-put x y s))
        (T (print 'error-in-put-xy-not-clear) nil)))

(defun exec-put (x y s)
  (cond ((null s) nil)
        ((and (equal (first (car s)) 'ont) (equal (second (car s)) x))
         (exec-put x y (cdr s)))
        ((and (equal (first (car s)) 'on) (equal (second (car s)) x))
         (cons (cons 'ct (list (third (car s)))) (exec-put x y (cdr s))))
        ((and (equal (first (car s)) 'ct) (equal (second (car s)) y))
         (cons (cons 'on (list x y)) (exec-put x y (cdr s))))
        (T (cons (car s) (exec-put x y (cdr s))))))

(defun puttable (x s)
  (cond ((null s) (print 'error-in-puttable) nil)
        ((member (cons 'ct (list x)) s :test 'equal)
         (print `(puttable ,@(list x) in ,@s)) (exec-puttable x s))
        (T (print 'error-in-puttable-x-not-clear) nil)))

(defun exec-puttable (x s)
  (cond ((null s) (print 'x-maybe-already-on-table) nil)
        ((and (equal (first (car s)) 'on) (equal (second (car s)) x))
         (cons (cons 'ct (list (third (car s))))
               (cons (cons 'ont (list (second (car s))))
                     (cdr s))))
        (T (cons (car s) (exec-puttable x (cdr s))))))

(defun f (x s)
  (cond ((null s) nil)
        ((and (equal (first (car s)) 'on) (equal (third (car s)) x))
         (second (car s)))
        (T (f x (cdr s)))))

(defun G (i n s)
  (print `(G ,@(list i) ,@(list n) ,@s))
  (cond ((eq i n) (cond ((tow i n s) s)          ; G(n,n,s)
                        (T (putt1 n s))))
        (T (cond ((tow i n s) s)                 ; G(i,n,s), i < n
                 (T (put1 i (G (1+ i) n s)))))))

; TOWER-1
; call e.g. (tower '(a b c d) '((on a d) (on d c) (on c b) (ct a)))

(defun tower (olist s)
  (if (subtow olist s)
      s
      (if (and (ct (first olist) s) (subtow (cdr olist) s))
          (put (first olist) (second olist) s)
          (if (and (singleblock olist) (ot (first olist) s))
              (clear-all (first olist) s)
              (if (and (singleblock olist) (ct (first olist) s))
                  (puttable (first olist) s)
                  (if (singleblock olist)
                      (puttable (first olist) (clear-all (first olist) s))
                      (if (and (ct (first olist) s) (on* (cdr olist) s))
                          (put (first olist) (second olist)
                               (clear-all (second olist) s))
                          (if (ct (first olist) s)
                              (put (first olist) (second olist)
                                   (tower (cdr olist) s))
                              (put (first olist) (second olist)
                                   (clear-all (first olist)
                                              (tower (cdr olist) s)))))))))))

; clear-all macro
(defun clear-all (o s)
  (if (ct o s)
      s
      (puttable (succ o s) (clear-all (succ o s) s))))

(defun succ (x s)
  (cond ((null s) nil)
        ((and (equal (first (car s)) 'on)
              (equal (nth 2 (car s)) x))
         (nth 1 (car s)))
        (T (succ x (cdr s)))))

(defun singleblock (l) (null (cdr l)))

; correct subtower?
(defun subtow (olist s) (and (ct (car olist) s) (on* olist s)))

; tower contains a correct subtower?
(defun on* (olist s)
  (cond ((null olist) T) ; should not happen
        ((and (null (cdr olist)) (ot (car olist) s)) T)
        ((member (list 'on (first olist) (second olist)) s :test 'equal)
         (on* (cdr olist) s))
        (T nil) ))

; given
(defun ct (o s) (member (list 'ct o) s :test 'equal))
; given as (ontable x) OR (here) inferred from state
(defun ot (o s)
  (null (mapcan #'(lambda (x) (and (equal 'on (first x))
                                   (equal o (second x))
                                   (list x)))
                s) ))


C.6 Recursive Control Rules for the ‘Tower’ Domain 381

; explicit application of put-operator
(defun put (x y s)
  (cond ((null s) (print `(put ,x ,y)) nil)
        ((equal (car s) (list 'ct y))
         (cons (list 'on x y) (put x y (cdr s))))
        ((and (equal (first (car s)) 'on) (equal (second (car s)) x))
         (cons (list 'ct (third (car s))) (put x y (cdr s))))
        (T (cons (car s) (put x y (cdr s))))))

; explicit application of puttable-operator
(defun puttable (x s)
  (cond ((null s) (print `(puttable ,x)) nil)
        ((and (equal (first (car s)) 'on) (equal (second (car s)) x))
         (cons (list 'ct (third (car s))) (puttable x (cdr s))))
        (T (cons (car s) (puttable x (cdr s))))
        ))
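For experimenting with the state-update semantics of `put` and `puttable` above outside of Lisp, the following is a minimal Python mirror (our own sketch, not part of the book's code; tuples such as `('on', x, y)` stand for the state literals `(on x y)` and `(ct x)`, and the trace printing of the Lisp version is omitted):

```python
# Sketch of the TOWER-1 state updates: a state is a list of fact tuples.
def puttable(x, s):
    """Put clear block x on the table: (on x z) is replaced by (ct z)."""
    out = []
    for f in s:
        if f[0] == 'on' and f[1] == x:
            out.append(('ct', f[2]))   # the block below x becomes clear-top
        else:
            out.append(f)
    return out

def put(x, y, s):
    """Put clear block x on clear block y."""
    out = []
    for f in s:
        if f == ('ct', y):
            out.append(('on', x, y))   # x now sits on y
        elif f[0] == 'on' and f[1] == x:
            out.append(('ct', f[2]))   # the block below x becomes clear-top
        else:
            out.append(f)
    return out

s = [('on', 'a', 'b'), ('ct', 'a'), ('ct', 'c')]
s = put('a', 'c', puttable('a', s))
print(s)  # → [('ct', 'b'), ('ct', 'a'), ('on', 'a', 'c')]
```

As in the Lisp version, `(ot x)` (on table) is left implicit: a block is on the table exactly when no `('on', x, _)` fact mentions it.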

; TOWER-2
; Building a list (tower) of sorted numbers (blocks)
; -----------------------------------------------------------------
; Representation: each partial tower as list
; Input: list of lists
; Examples for the three-blocks world
; ((1 2 3))
; ((2 3) (1))
; ((1) (2) (3))
; ((2 1) (3))
; ((3 2 1))
; ((1 2) (3))
; ((1 3) (2))
; ((3 1) (2))
; ((3 2) (1))
; ((1 3 2))
; ((2 3 1))
; ((2 1 3))
; ((3 1 2))
; -----------------------------------------------------------------

; help functions
; --------------
; flattens a list l
(defun flatten (l)
  (cond ((null l) nil)
        (T (append (car l) (flatten (cdr l))))
        ))

; x+1 = y?
(defun onedif (x y) (= (1+ x) y))

; blocks world selectors
; ----------------------
; topmost block of a tower
(defun topof (tw) (car tw))

; bottom block (base) of a tower
(defun bottom (tw) (car (last tw)))

; next tower, e.g. ((2 1) (3)) -> (2 1)
(defun get-tower (l) (car l))

; tops of all current towers
(defun topelements (l) (sort (map 'list #'car l) #'>))

; top block with highest number
(defun greatest (l) (car (topelements l)))

; top block with second highest number
(defun scndgreatest (l) (cadr (topelements l)))

; label of the block with the highest number
(defun maxblock (l)
  (cond ((null l) 0)
        (T (car (sort (flatten l) #'>)))
        ))

(defun get-all-no-towers (l max)
  (cond ((null l) nil)
        ((and (equal (bottom (car l)) max) (sorted (get-tower l)))
         (get-all-no-towers (cdr l) max))
        ((single-block (get-tower l)) (get-all-no-towers (cdr l) max))
        (T (cons (car l) (get-all-no-towers (cdr l) max)))
        ))

(defun find-greatest (max l)
  (cond ((null l) max)
        ((> (topof max) (topof (car l))) (find-greatest max (cdr l)))
        (T (find-greatest (car l) (cdr l)))
        ))

; find incorrect tower containing highest element
(defun greatest-no-tower (l)
  (cond ((null l) nil)
        (T (find-greatest (car (get-all-no-towers l (maxblock l)))
                          (cdr (get-all-no-towers l (maxblock l)))))
        ))

; blocksworld predicates
; ----------------------
; is tower only a single block?
(defun single-block (tw) (= (length tw) 1))

; do there exist two partial towers whose top elements differ only by one?
(defun exist-free-neighbours (l) (onedif (scndgreatest l) (greatest l)))

; exists a correct partial tower? e.g. (2 3) or (B C)
(defun exists-tower (l)
  (cond ((null l) nil)
        ((and (equal (bottom (get-tower l)) (maxblock l))
              (sorted (get-tower l))) T)
        (T (exists-tower (cdr l)))
        ))

; is block x predecessor to top of a tower?
(defun successor (x tw)
  (cond ((null tw) T)
        ((onedif x (car tw)) T) ; (successor x (cdr tw)))
        (T nil)
        ))

; is tower sorted?
(defun sorted (tw)
  (cond ((null tw) T)
        ((successor (car tw) (cdr tw)) (sorted (cdr tw)))
        (T nil)
        ))

; exists only one tower?
(defun single-tower (l) (null (cdr l)))


; goal state?
(defun is-tower (l) (and (single-tower l) (sorted (get-tower l))))
; ------------------------------------------------------------------
; blocksworld operators
; ---------------------
; put x on y
(defun put (x y l)
  (cond ((null l) (print 'put) (print x) (print y) nil)
        ((equal (caar l) x)
         (cond ((not (null (cdar l)))
                (append (list (cdar l)) (put x y (cdr l))))
               (T (put x y (cdr l)))))
        ((equal (caar l) y) (cons (cons x (car l)) (put x y (cdr l))))
        (T (cons (car l) (put x y (cdr l))))
        ))

; puttable x
(defun puttable (x l)
  (cond ((null l) nil)
        ((equal (caar l) x) (print 'puttable) (print x)
         (cons (list x) (cons (cdar l) (cdr l))))
        (T (cons (car l) (puttable x (cdr l))))
        ))
; ------------------------------------------------------------------
; main function
; -------------
(defun tower (l)
  (cond ((is-tower l) l)
        ((and (exists-tower l) (exist-free-neighbours l))
         (tower (put (scndgreatest l) (greatest l) l)))
        (T (tower (puttable (topof (greatest-no-tower l)) l)))))
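The two TOWER-2 operators can be checked on single transitions with a small Python mirror of the list-of-lists representation (our own sketch; each tower is listed top block first, exactly as in the examples above, and an emptied tower is simply dropped rather than kept as a nil entry):

```python
# Sketch of the TOWER-2 operators; towers are lists with the top block first.
def puttable(x, towers):
    """Move clear block x onto the table, i.e. make it a singleton tower."""
    out = []
    for tw in towers:
        if tw[0] == x:
            out.append([x])
            if tw[1:]:
                out.append(tw[1:])   # the remaining tower below x
        else:
            out.append(tw)
    return out

def put(x, y, towers):
    """Move clear block x onto clear block y."""
    out = []
    for tw in towers:
        if tw[0] == x:
            if tw[1:]:
                out.append(tw[1:])   # x leaves its old tower
        elif tw[0] == y:
            out.append([x] + tw)     # x goes on top of y's tower
        else:
            out.append(tw)
    return out

s = [[2, 1], [3]]
s = puttable(2, s)   # → [[2], [1], [3]]
s = put(2, 3, s)     # → [[1], [2, 3]]
s = put(1, 2, s)     # → [[1, 2, 3]], the goal state
print(s)
```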

C.7 Water Jug Problems

In the following we present all problems used in experiments 1 and 2. Because the graphs of the problem structures are large, we give the equations and inequations (terms) instead. For each problem, a graph can be constructed by representing the terms as demonstrated in figure 11.3. The terms are integrated into a single structure by using each parameter (cA, cB . . ., qA, qB . . .) as a unique node.

Problems for Experiment 1

Initial Problem

Jug               A   B   C  (D)
Capacity         28  35  42   -
Initial quantity 12  21  26   -
Goal quantity    19   0  40   -

Operator sequence: pour(C,B), pour(B,A), pour(A,C), pour(B,A)
Relevant Procedural Information/Relevant Declarative Information (Constraints): see Source problem
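The operator sequence can be verified by simulating the usual water jug pour semantics, pouring from x into y until y is full or x is empty (a hedged sketch; the function and variable names are ours):

```python
# Simulate pour(x, y): transfer as much as fits from jug x into jug y.
def pour(q, cap, x, y):
    amount = min(q[x], cap[y] - q[y])   # limited by x's content and y's free space
    q = dict(q)
    q[x] -= amount
    q[y] += amount
    return q

cap = {'A': 28, 'B': 35, 'C': 42}       # capacities of the initial problem
q   = {'A': 12, 'B': 21, 'C': 26}       # initial quantities
for x, y in [('C', 'B'), ('B', 'A'), ('A', 'C'), ('B', 'A')]:
    q = pour(q, cap, x, y)
print(q)  # → {'A': 19, 'B': 0, 'C': 40}, the stated goal quantities
```

The same four-step simulation reproduces the goal quantities of the source problem and of the isomorphic problems 1–3 below.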


Source Problem

Jug               A   B   C  (D)
Capacity         36  45  54   -
Initial quantity 16  27  34   -
Goal quantity    25   0  52   -

Operator sequence: pour(C,B), pour(B,A), pour(A,C), pour(B,A)
Relevant Procedural Information
qA(4) = qA(0) + (cA − qA(0)) − cA + (cB − (cA − qA(0)))
qB(4) = qB(0) + (cB − qB(0)) − (cA − qA(0)) − (cB − (cA − qA(0)))
qC(4) = qC(0) − (cB − qB(0)) + cA

Relevant Declarative Information (Constraints)
cA = 2 · (cB − qB(0))
cB ≥ (cA − qA(0))
qC(0) ≥ (cB − qB(0))
cC < qC(0) + cA
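The procedural equations and the constraints can be checked numerically against the source problem's data (a simple sketch; names like qA0 are ours and stand for qA(0)):

```python
cA, cB, cC = 36, 45, 54        # capacities of the source problem
qA0, qB0, qC0 = 16, 27, 34     # initial quantities

# Evaluate the three relevant procedural equations directly.
qA4 = qA0 + (cA - qA0) - cA + (cB - (cA - qA0))
qB4 = qB0 + (cB - qB0) - (cA - qA0) - (cB - (cA - qA0))
qC4 = qC0 - (cB - qB0) + cA
print(qA4, qB4, qC4)           # → 25 0 52, the stated goal quantities

# The declarative constraints all hold for these values.
assert cA == 2 * (cB - qB0)
assert cB >= cA - qA0
assert qC0 >= cB - qB0
assert cC < qC0 + cA
```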

Problem 1 (Isomorph/No Surface Change)

Jug               A   B   C  (D)
Capacity         48  60  72   -
Initial quantity 21  36  45   -
Goal quantity    33   0  69   -

Operator sequence: pour(C,B), pour(B,A), pour(A,C), pour(B,A)
Structural distance to source: dS1 = 0
Relevant Procedural Information/Relevant Declarative Information (Constraints): see Source problem

Problem 2 (Isomorph/Small Surface Change) (Source/Target Mapping: A/B, B/A, C/C)

Jug               A   B   C  (D)
Capacity         60  48  72   -
Initial quantity 36  21  45   -
Goal quantity     0  33  69   -

Operator sequence: pour(C,A), pour(A,B), pour(B,C), pour(A,B)
Structural distance to source: dS2 = 0
Relevant Procedural Information
qA(4) = qA(0) + (cA − qA(0)) − (cB − qB(0)) − (cA − (cB − qB(0)))
qB(4) = qB(0) + (cB − qB(0)) − cB + (cA − (cB − qB(0)))
qC(4) = qC(0) − (cA − qA(0)) + cB

Relevant Declarative Information (Constraints)
cB = 2 · (cA − qA(0))
cA ≥ (cB − qB(0))
qC(0) ≥ (cA − qA(0))
cC < qC(0) + cB


Problem 3 (Isomorph/Large Surface Change) (Source/Target Mapping: A/B, B/C, C/A)

Jug               A   B   C  (D)
Capacity         72  48  60   -
Initial quantity 45  21  36   -
Goal quantity    69  33   0   -

Operator sequence: pour(A,C), pour(C,B), pour(B,A), pour(C,B)
Structural distance to source: dS3 = 0
Relevant Procedural Information
qA(4) = qA(0) − (cC − qC(0)) + cB
qB(4) = qB(0) + (cB − qB(0)) − cB + (cC − (cB − qB(0)))
qC(4) = qC(0) + (cC − qC(0)) − (cB − qB(0)) − (cC − (cB − qB(0)))
Relevant Declarative Information (Constraints)
cB = 2 · (cC − qC(0))
cC ≥ (cB − qB(0))
qA(0) ≥ (cC − qC(0))
cA < qA(0) + cB

Problem 4 (Partial Isomorph/No Surface Change) (additional jug: A; Source/Target Mapping: A/B, B/C, C/D)

Jug               A   B   C   D
Capacity         16  20  25  31
Initial quantity  3   8  15  18
Goal quantity     0  13   3  28

Operator sequence: pour(D,C), pour(C,B), pour(B,D), pour(C,B), pour(A,C)
Structural distance to source: dS4 = 0.37
Relevant Procedural Information
qA(5) = qA(0) − qA(0)
qB(5) = qB(0) + (cB − qB(0)) − cB + (cC − (cB − qB(0)))
qC(5) = qC(0) + (cC − qC(0)) − (cB − qB(0)) − (cC − (cB − qB(0)))
qD(5) = qD(0) − (cC − qC(0)) + cB

Relevant Declarative Information (Constraints)
cB = 2 · (cC − qC(0))
cC ≥ (cB − qB(0))
cC ≥ qA(0)
qD(0) ≥ (cC − qC(0))
cD < qD(0) + cB

Problem 5 (Partial Isomorph/Small Surface Change) (additional jug: A; Source/Target Mapping: A/C, B/B, C/D)

Jug               A   B   C   D
Capacity         16  25  20  31
Initial quantity  3  15   8  18
Goal quantity     0   3  13  28


Operator sequence: pour(D,B), pour(B,C), pour(C,D), pour(B,C), pour(A,B)
Structural distance to source: dS5 = 0.37
Relevant Procedural Information
qA(5) = qA(0) − qA(0)
qC(5) = qC(0) + (cC − qC(0)) − cC + (cB − (cC − qC(0)))
qB(5) = qB(0) + (cB − qB(0)) − (cC − qC(0)) − (cB − (cC − qC(0)))
qD(5) = qD(0) − (cB − qB(0)) + cC

Relevant Declarative Information (Constraints)
cC = 2 · (cB − qB(0))
cB ≥ (cC − qC(0))
cB ≥ qA(0)
qD(0) ≥ (cB − qB(0))
cD < qD(0) + cC

Problems for Experiment 2

(Initial problem and source problem are identical to experiment 1.)

Problem 1 (Target Exhaustiveness) (Source/Target Mapping: A/A, B/B, C/C)

Jug               A   B   C  (D)
Capacity         48  60  72   -
Initial quantity 21  36  45   -
Goal quantity     0  33  69   -

Operator sequence: pour(C,B), pour(B,A), pour(A,C)
Structural distance to source: dS1 = 0.16
Relevant Procedural Information
qA(3) = qA(0) + (cA − qA(0)) − cA
qB(3) = qB(0) + (cB − qB(0)) − (cA − qA(0))
qC(3) = qC(0) − (cB − qB(0)) + cA

Relevant Declarative Information (Constraints)
cA = 2 · (cB − qB(0))
cB ≥ (cA − qA(0))
qC(0) ≥ (cB − qB(0))
cC < qC(0) + cA

Problem 2 (Source Inclusiveness) (Source/Target Mapping: A/A, B/B, C/C)

Jug               A   B   C  (D)
Capacity         48  60  72   -
Initial quantity 21  36  45   -
Goal quantity    33  60   9   -


Operator sequence: pour(C,B), pour(B,A), pour(A,C), pour(B,A), pour(C,B)
Structural distance to source: dS2 = 0.17
Relevant Procedural Information
qA(5) = qA(0) + (cA − qA(0)) − cA + (cB − (cA − qA(0)))
qB(5) = qB(0) + (cB − qB(0)) − (cA − qA(0)) − (cB − (cA − qA(0))) + cB
qC(5) = qC(0) − (cB − qB(0)) + cA − cB

Relevant Declarative Information (Constraints)
cA = 2 · (cB − qB(0))
cB ≥ (cA − qA(0))
qC(0) ≥ (cB − qB(0))
qC(0) < cB
cC < qC(0) + cA

Problem 3 (“High” Structural Overlap) (identical to problem 4 in experiment 1)

Problem 4 (“Medium” Structural Overlap) (additional jug: A; Source/Target Mapping: A/B, B/C, C/D)

Jug               A   B   C   D
Capacity         16  20  25  31
Initial quantity  6   9  15  18
Goal quantity    14  14   0  20

Operator sequence: pour(D,C), pour(C,B), pour(D,A), pour(B,D), pour(C,B)
Structural distance to source: dS4 = 0.55
Relevant Procedural Information
qA(5) = qA(0) + (qD(0) − (cC − qC(0)))
qB(5) = qB(0) + (cB − qB(0)) − cB + (cC − (cB − qB(0)))
qC(5) = qC(0) + (cC − qC(0)) − (cB − qB(0)) − (cC − (cB − qB(0)))
qD(5) = qD(0) − (cC − qC(0)) − (qD(0) − (cC − qC(0))) + cB

Relevant Declarative Information (Constraints)
cA < (qA(0) + qD(0))
qA(0) < (cC − qC(0))
cC > (cB − qB(0))
cD < qD(0) + cB

Problem 5 (“Low” Structural Overlap) (additional jug: A; Source/Target Mapping: A/B, B/C, C/D)

Jug               A   B   C   D
Capacity         17  20  25  31
Initial quantity  7   9  15  18
Goal quantity    16   5   0  28


Operator sequence: pour(B,A), pour(D,C), pour(C,B), pour(B,D), pour(C,B)
Structural distance to source: dS5 = 0.59
Relevant Procedural Information
qA(5) = qA(0) + qB(0)
qB(5) = qB(0) − qB(0) + cB − cB + (cC − cB)
qC(5) = qC(0) + (cC − qC(0)) − cB − (cC − cB)
qD(5) = qD(0) − (cC − qC(0)) + cB

Relevant Declarative Information (Constraints)
cB = 2 · (cC − qC(0))
cA ≥ qA(0) + qB(0)
cC ≥ cB
qD(0) ≥ (cC − qC(0))
cD < qD(0) + cB

C.8 Example RPSs

In the following, the ten example programs used in the empirical evaluation of retrieval in chapter 12 are given. Note that we give only the subprograms; the complete RPSs additionally contain a call of the corresponding subprogram.

#S(sub
   :NAME REVERT
   :PARAMS (L1 L2)
   :BODY (IF (NULL L1) L2 (REVERT (TL L1) (++ (HD L1) L2))))
#S(sub
   :NAME SUM
   :PARAMS (X)
   :BODY (IF (EQ0 X) 0 (+ X (SUM (PRE X)))))
#S(sub
   :NAME CB
   :PARAMS (X SIT)
   :BODY (IF (CT X) SIT (PT (TO X) (CB (TO X) SIT))))
#S(sub
   :NAME GGT
   :PARAMS (X Y)
   :BODY (IF (EQ0 Y) X (GGT Y (MOD X Y))))
#S(sub
   :NAME MOD
   :PARAMS (X Y)
   :BODY (IF (< X Y) X (MOD (- X Y) Y)))
#S(sub
   :NAME FAC
   :PARAMS (X)
   :BODY (IF (EQ0 X) 1 (* X (FAC (PRE X)))))
#S(sub
   :NAME APPEND
   :PARAMS (L1 L2)


   :BODY (IF (NULL L1) L2 (++ (HD L1) (APPEND (TL L1) L2))))
#S(sub
   :NAME MEMBER
   :PARAMS (X L)
   :BODY (IF (NULL L) F (IF (== X (HD L)) T (MEMBER X (TL L)))))
#S(sub
   :NAME LAH
   :PARAMS (X)
   :BODY (IF (EQ0 X) 1
             (IF (EQ0 (PRE X)) 1
                 (- (* (PRE (* 2 X)) (LAH (PRE X)))
                    (* (* (PRE X) (PRE (PRE X))) (LAH (PRE (PRE X))))) ) ))
#S(sub
   :NAME BINOM
   :PARAMS (X Y)
   :BODY (IF (EQ0 X) 1
             (IF (== X Y) 1 (+ (BINOM (PRE X) (PRE Y)) (BINOM (PRE X) Y)))))
#S(sub
   :NAME FIBO
   :PARAMS (X)
   :BODY (IF (EQ0 X) 0
             (IF (EQ0 (PRE X)) 1 (+ (FIBO (PRE X)) (FIBO (PRE (PRE X)))))))
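Read with HD/TL as head and tail of a list, ++ as cons, PRE as the predecessor function, and EQ0 as a zero test, each body translates directly into an executable function. As an illustration, here is a Python rendering of the FIBO subprogram (our own translation, not part of the book's code):

```python
# Direct rendering of the FIBO RPS body:
# (IF (EQ0 X) 0 (IF (EQ0 (PRE X)) 1 (+ (FIBO (PRE X)) (FIBO (PRE (PRE X))))))
def fibo(x):
    if x == 0:          # (EQ0 X)
        return 0
    if x - 1 == 0:      # (EQ0 (PRE X))
        return 1
    return fibo(x - 1) + fibo(x - 2)

print([fibo(i) for i in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
```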


Index

A∗, 32, 59
θ-subsumption, 141

absorption, 144
abstraction, 116, 280, 312
ACME, 294
ACT, 52, 273
action, 16, 24, 29, 72
– minimal number, 19
– selection, 19
– semantics, 73
ADL, 20, 72, 73
airplane domain, 83
analogical reasoning, 275, 293
analogy, 280
– adaptation, 282, 293, 312, 320
– between-domain, 280
– causal, 286
– cognitive model, 294
– derivational, 282, 287, 295
– explanatory, 286, 293
– first-order mapping, 282
– generalization, 281, 282
– inference, 282
– internal, 309
– mapping, 280, 281
– mapping constraints, 281
– problem solving, 286, 290
– programming, 288
– proportional, 284
– re-description, 284
– replay, 282
– retrieval, 281
– structural, 283
– transfer, 281, 293
– transformational, 282, 295
– within-domain, 280
anti-instance, 314
– most specific, 314
anti-unification, 171, 198, 206, 283, 289
– algorithm, 315

– first order, 314
– higher order, 314
– second order, 289, 315
– sets of terms, 172
anti-unificator, 171
approximation chain, 157
Aries, 101
assertion, 113, 120
attribute, 280
automatic programming, 99–103, 123
automaton, 130
– deterministic finite, 133, 150, 163
axiom
– domain, 44, 122, 271
– effect, 25
– frame, 25
– induction, 111, 112

background knowledge, 105, 108, 126, 141, 147, 164

bias
– semantic, 147
– syntactic, 146
Blackbox, 43
blocks-world, 14, 17, 18, 20, 22, 26, 30, 35, 59, 70, 88
– clearblock, 59, 114, 160, 163, 222, 237, 273
– complexity, 32
– tower, 63, 137, 260, 264
BMWk, 160
bottom test, 235, 240, 241, 257
breadth-first search, 35, 46, 58

canonical combinatorial problem, 43
case-based reasoning, 280–282
CIP, 115, 119, 121, 123
clause
– ij-determinate, 148
– determinate, 147
– recursive, 150


closed world assumption, 13, 15, 20, 25
cognition
– adaptive, 272
cognitive psychology, 48, 274
concept learning, 124, 128, 272
constraint
– free variable, 75, 76
constraint programming, 90
Constraint Prolog, 90
constraint satisfaction, 43, 49, 73, 90
constructor, 234
control knowledge
– rocket, 246
– unstack, 237
control rule
– completeness, 238
– correctness, 238
– learning, 222, 227, 273
– recursive, 53, 160, 222, 227
Curry-Howard isomorphism, 110
cycle, 30
Cypress, 123

DAG, 47, 58, 65, 251
data type, 147, 233
– abstract, 117, 120
– binary tree, 118
– change, 117
– inference, 228, 229, 233
– list, 234, 247
– pre-defined, 271
– refinement, 123
– sequence, 234, 235
– set, 234, 240
decision tree, 45, 105, 128
Dedalus, 113
deductive planning, 20
deductive tableau, 113
depth-first search, 28, 58
– backtracking, 28, 30
design pattern, 323
design strategy, 108
design tactic, 122
Dijkstra-algorithm, 55
DPlan, 42, 47, 55, 57, 59, 64–66, 70, 273
– algorithm, 57
– completeness, 70
– goal, 56
– operator, 56
– planning problem, 57
– soundness, 70
– state, 56
– termination, 66
– universal plan, 56, 58
DPlan, universal plan, 59

effect, 16
– context dependent, 23
– mixed ADD/DEL
– – update, 88
– secondary, 23
elevator domain, 71
eureka, 116, 117, 119, 121
evaluation
– expression, 75, 76
– ground term, 76
execution monitoring, 45
expression simplification, 122

factorial, 196
– folding, 220
– unfolding, 196, 220
Fibonacci function, 116, 138, 178, 198, 259
– folding, 220
– unfolding, 220
filter, 246, 256, 258
finite differencing, 122
fitness measure, 134
fixpoint semantics, 157, 182
fluents, 16, 24, 72
Foil, 147, 148
folding, 116, 157, 167, 177, 182, 271, 273
– fault tolerance, 219
formal language, 130
FPlan
– assignment, 77
– goal representation, 78
– language, 74
– performance gain, 88, 90
– planning problem, 79
– state representation, 76
– user-defined functions, 82
FPlan, matching, 77
function approximation, 155
– algorithms, 128
– order, 157
function merging, 161

generalization
– analogy, 281, 282
– bottom-up, 142, 144, 148
– explanation-based, 52, 163
– recursive, 150, 274
– RPS, 319


generalization-to-n, 163
generate and test, 108, 263
genetic operations, 135
genetic programming, 53, 108, 129, 134, 164, 266
goal, 113
– ordering, 38
– regression, 36
goal regression, 56
goals
– independence, 38
– interleaving, 38
Golem, 147, 148
GPS, 40, 48, 52
grammar
– context-free, 133
– context-free tree, 176
– regular, 133
grammar inference, 108, 124, 129, 150, 165
graph, 280
graph distance, 300
Graphplan, 34, 42, 58, 71

hanoi domain, 62, 72, 87
hat-axiom, 114, 229
heuristic, 50
– function, 44, 48
– search, 44, 272
hidden variable, 212, 222
– identification, 212
– identifying, 213
higher-order function, 246
HSP, 43
HSPr, 42
hypothesis, 105, 127, 140, 141, 162
– complete, 141
– consistent, 141
– non-recursive, 128
hypothesis language, 106, 129

IAM, 294
ID, 133, 150, 163
identification
– by enumeration, 131, 163
– in the limit, 128, 130
– probably approximately correct, 128
indexing, 54
induction, 114, 156, 272
– structural, 113
induction bias, 129
– syntactic, 129
induction scheme, 112

inductive functional programming, 108, 129, 150, 164, 167
inductive logic programming, 53, 108, 129, 140, 164
inference
– deductive, 104, 106, 122
– inductive, 104–106, 124
information
– incomplete, 45
– incorrect, 45
initial instantiation, 197, 203, 215
initial program, 133, 177
– order, 177
– recursive explanation, 181
– segments, 192
initial tree
– decomposition, 202
– reduction, 205
input/output examples, 104, 108
– rewriting, 150, 153
instantiation
– main program, 215
– partial, 38
IPP, 43
Isabelle, 115

KIDS, 101, 122, 165, 288, 311
knowledge
– declarative, 272
– procedural, 272, 275

Lah-number, 33, 317
language
– context-free tree, 176
– generator, 131
– learnability, 132, 133
– primitive recursive, 133
– regular, 132, 133
– tester, 131
learning, 124
– algorithm, 128
– analytical, 164
– approximation-driven, 129
– batch, 126, 129
– control programs, 53
– control rule, 52
– data-driven, 129
– decision list, 52
– decision tree, 52
– experience, 125
– from observation, 125
– incremental, 126, 129
– language, 130


– model-driven, 129, 140
– policy, 52
– simultaneous composite, 262
– speed-up, 52
– speed-up effects, 273
– supervised, 125, 130
– task, 124
– unsupervised, 125
– with informant, 130
least general generalization, 142
– relative, 143, 148
linked clause, 146
LISA, 286, 294
Lisp, 106, 108, 110–112, 118, 122, 123, 129, 134, 151, 160, 163
Lisp tutor, 101
list
– introduction, 248
– order, 246
list problem
– semantic, 246, 249
– structural, 246, 271
logistics domain, 71, 244

machine learning, 45, 50, 105, 124, 272
macro operator
– iterative, 53
– linear, 51, 273
map, 246
match-select-apply cycle, 28, 48
MBP, 46
McCarthy-conditional, 151
means-end analysis, 49, 309
memory
– hierarchical, 318
– linear, 316
Merlin, 150
meta-programming, 115, 119, 123
metaphor, 280
minimal spanning tree, 66
– extraction, 251, 253
MIPS, 46
MIS, 145, 148
ML, 134
mode, 147
modlist, 174, 189, 199, 204, 221
mutex relation, 42

network flow problem, 42
NOAH, 41
novice programmer, 102
Nuprl, 115

OBDD, 46, 59
operator
– abstract, 44
– effect
– – ADD/DEL, 16, 81
– – update, 81
– FPlan, 78
– primitive, 44
– Strips, 16
operator application
– backward, 19, 56
– – completeness, 69
– – FPlan, 79
– – soundness, 68
– conditional effect, 24
– forward, 17, 56
– – FPlan, 79
operator schemes, 16

pattern
– first order, 171
– maximal, 171, 196, 198, 215
pattern matching, 16, 22, 23
PDDL, 17, 20, 27, 43, 56, 73
– conditional effect, 21, 56
– equality constraints, 21
– typing, 21
perceptual chunk, 274
plan, 13, 18, 28
– as program, 230
– extraction, 29, 42
– levelwise decomposition, 231
– linearization, 29, 246, 254, 259
– optimal, 19, 59
– optimality, 35, 46
– partial, 28
– partial order, 42
– partially ordered, 29, 41
– recursive, 114
– reuse, 54
– totally ordered, 29, 41
– universal, 228
plan construction, 19, 20, 259
plan decomposition, 229, 241
plan execution, 20, 229
plan transformation, 271
– completeness, 231
– correctness, 231
planner
– partial order, 41
– special purpose, 44
planning, 160, 272
– backward, 55


– competition, 20
– compilation approach, 43
– completeness, 34
– conditional, 45
– control knowledge, 45, 51
– correctness, 34
– deductive, 41
– forward, 29, 30, 44
– function application, 71
– heuristic, 71
– hierarchical, 44, 50
– infinite domain, 72
– least commitment, 40, 41
– linear, 34, 38
– logical inference, 20
– non-linear, 39, 41, 55
– NP-completeness, 32, 58
– number problems, 85, 272
– progression, 29
– PSPACE-completeness, 32, 46, 58
– reactive, 45
– regression, 35
– resource constraints, 83
– resource variable, 71
– soundness, 34
– state-based, 19, 55
– Strips, 35, 48
– strong commitment, 40
– termination, 34
– total order, 55
– types, 44
– universal, 43, 45, 55, 227, 271
planning graph, 42
policy learning, 45
position
– composition, 169
– next right, 195
– node, 169
– order, 169
– term, 169
pre-image, 46, 56
pre-planning analysis, 45
precondition
– secondary, 23
– standard, 75
predicate construction, 154
predicate invention, 144, 150
prefix, 171
problem decomposition, 44
problem scheme, 296
problem solving, 48
– analogical, 292, 293
– human, 48, 53
– scheme, 275
– strategy, 273
problem solving constraints, 295, 299
Prodigy, 41, 52, 283, 287
production rule, 48, 101
production system, 48, 52, 272
program, 313
– functional, 105, 114, 116, 128, 134, 173
– morphism, 288
– optimization, 122, 123, 158
– regular, 161
– validation, 100
– verification, 99, 109, 111
program body, 173
– maximal, 197
– maximization, 197
program optimization, 122
program reuse, 108, 311, 323
program scheme, 108, 115, 165, 290, 311, 313
– binary search, 288
– divide-and-conquer, 122, 150
– extension of Summers’, 159
– global search, 122
– local search, 122
– second order, 146
– semi-regular, 160
– Summers, 151
program synthesis, 45, 99, 102, 109, 111, 115, 271
– deductive, 41, 99, 104, 107, 109, 110, 115, 164
– inductive, 99, 104, 107, 109, 124, 128, 164
– knowledge, 108
– schema-based, 108
program transformation, 107, 115, 123, 164, 311
program verification, 101
programming
– abstract, 117
– by analogy, 108, 288
– by demonstration, 163, 167
– declarative, 104
programming tutor, 101, 109
Prolog, 20, 27, 108, 129, 140, 146, 150, 164
proof method, 49
proof planning, 49
proof tactic, 115
proof-as-program, 113


Proust, 101
PVS, 101

QA3, 24, 40, 110

re-planning, 45
recurrence detection, 150, 155
recurrence relation, 155, 156
recursion
– introduction, 121
– lists, 111
recursion point, 177, 189
– valid, 191
recursive
– linear, 115, 124, 152, 158, 163, 222, 237, 249, 257, 317
– tail, 115, 124, 158, 249, 257, 317
– tree, 317
recursive call
– variable substitution, 206
recursive clause, 140
recursive function, 259
recursive program, 99, 111, 112, 133
recursive subprogram, 173, 215
reduce, 246
Refine, 122
refinement, 116, 121
– operator, 144
– top-down, 142, 144
reinforcement learning, 45, 52, 53, 125, 272
relation
– first order, 280
– n-ary, 280
– second order, 280
– unary, 280
resolution, 24, 27, 110, 113, 141
– inverse, 143, 182
resource constraint, 49
retrieval, 312, 318
– algorithm, 317
reverse, 148, 158, 246
rewrite rule
– context-free, 167
rocket, 40, 51, 61, 65, 70, 241, 243, 248, 287
– recursive loading, 243
– recursive unloading, 243
RPS, 173, 271, 273, 274, 312
– CFTG, 177
– existence, 217
– induction, 168, 227
– interpretation, 174
– language, 175, 320, 321
– minimality, 183
– restrictions, 173, 183
– rewrite system, 175
– substitution uniqueness, 184
Rutherford analogy, 280, 282, 286, 292

S-expression, 151
– complete partial order, 154
– form, 154
SAT-solver, 43
saturation, 143
scheduling, 49, 85
schema induction, 293
search
– algorithm, 27
– backward, 48
– forward, 48
– tree, 28–30
segment indices, 193
segmentation, 177
– algorithm, 193
– recurrent, 188
– strategy, 194
– valid, 191
– valid recurrent, 193, 215
selection sort, 88
selector function, 234, 249, 257, 274
sequence
– introduction, 235
set
– introduction, 241
sets of list problem, 33, 264
signature, 168
similarity
– attribute, 283
– literal, 280
– structural, 281, 283
– superficial, 281
single-source shortest-path algorithm, 55
situation calculus, 20, 24, 40, 105
situation variable, 229
– introduction, 229, 234
skeleton, 188
skill
– acquisition, 272
– cognitive, 272
skill acquisition, 52, 124
SME, 286, 294
SOAR, 52
Soar, 273
software engineering, 99, 100, 311


– computer-aided, 100
– knowledge-based, 100, 109, 123
solution, 28
sorting, 71, 88, 248, 271
– bubble-sort, 163, 249
– domain, 40, 61
– program, 112
– selection sort, 248, 249, 251
source inclusiveness, 294
source program, 312
specification, 99–101, 119, 311
– as theorem, 107
– by examples, 104
– complete, 103
– formal, 103, 123
– incomplete, 99, 104
– language, 103
– natural language, 102, 109
– non-executable, 121
specification language, 104
stack, 30
STAN, 43, 262
state description
– complete, 37, 56
– consistent, 38
– inconsistent, 37, 38
– partial, 37, 56
state invariant, 51
state representation, 15
– complete, 48
– partial, 36, 48
state space, 17, 33, 70
state transformation, 15
state transition, 46
state variable, 73
state-action rule, 45
statics, 16, 17, 23, 72
Strips, 24, 27, 56
– domain, 17
– Functional, 73
– goal, 16
– language, 14, 43
– operator, 16
– planner, 40
– planning problem, 17
– state, 15
structural similarity, 292, 301, 305, 309
structure mapping, 286, 292, 312
– theory, 292
sub-plan
– uniform, 231
sub-schema
– equivalence, 203

sub-schema position, 189
– valid, 191
sub-term, 169
subprogram
– body, 197
– calculation steps, 216
– calculation strategy, 217
– dependency relation, 182
– equality, 219
– language, 175
substitution, 16, 170, 177, 216
– algorithm, 214
– consistency, 212
– inverse, 143
– necessary condition, 211
– product, 77
– testing recurrence, 210
– testing uniqueness, 209
– uniqueness, 208
substitution term, 177
– calculation, 214
– characteristics, 209
– context, 321
subsumption, 283
successor identification, 236
Sussman anomaly, 34, 38
symbolic model checking, 43, 46
Synapse, 150
synthesis problem, 185
synthesis theorem, 155
– basic, 157

tactic, 49
target exhaustiveness, 294
target problem, 312
term, 168
– depth, 147
– level, 147
– order, 171
– replacement, 170
– substitution, 314, 320
– subsumption, 316
term algebra, 160
term rewrite system, 170
term rewriting, 171
theorem prover, 25
– Boyer-Moore, 50
theorem proving, 37, 40, 49
– constructive, 110, 113, 123, 164
– drawbacks, 112
Thesys, 151
TIM, 51, 150
Tinker, 163


Tower of Hanoi, 32, 62, 125, 126, 222, 260, 267, 274, 295, 296

trace, 160
traces, 104, 108, 150, 152, 160, 163, 167
– user-generated, 133
training examples, 125
– positive/negative, 141
transformation
– correctness, 119, 120
– distance measure, 283
– lateral, 115
– plan to program, 227
– vertical, 115, 311
transformation rule, 107, 113, 119
transformational semantics, 119
tree
– regular, 253
– regularization, 253
type inference, 51

UMOP, 46
unfolding, 116, 133, 157, 163, 180, 312, 313
– depth, 150
– indices, 178
unification
– higher order, 314
universal plan, 65
– full, 65
– optimal, 65
– order over states, 65
unpack, 152, 155
unstack, 237
update, 72, 75, 78
– backward, 81
user-modeling, 101
utility problem, 52

variable addition, 158
variable instantiation, 174, 216
– maximal pattern, 198
– segment, 210
variables
– range, 76
– sufficient instances, 211

Warshall-algorithm, 120
water jug domain, 85, 296