This thesis puts forward a formal theory of levels and algorithms to provide a foundation for those terms as they are used in much of cognitive science and computer science. Abstraction with respect to concreteness is distinguished from abstraction with respect to detail, resulting in three levels of concreteness and a large number of algorithmic levels, which are levels of detail and the primary focus of the theory. An algorithm or ideal machine is a set of sequences of states defining a particular level of detail. Instead of positing one fundamental ideal machine to describe the behaviour of a complex system, the theory admits many possible ideal machines, extending Turing's approach to reflect the multiplicity of system descriptions required to express more than weak input-output equivalence of systems. Cognitive science is concerned with stronger equivalence; e.g., do two models go through the same states at some level of description? The state-based definition of algorithms serves as a basis for such strong equivalence and facilitates formal renditions of abstraction and implementation as relations between algorithms. Within the new framework it is possible to prove, for example, whether one given algorithm is a valid implementation of another, or whether two unequal algorithms have a common abstraction. Some implications of the theory are discussed, notably a characterisation of connectionist versus classical models.
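The state-based picture above can be sketched in a few lines of code. This is a minimal, hypothetical rendition, not the thesis's actual formalism: it assumes an algorithm is represented as a set of state sequences, and that implementation is checked through an illustrative state-abstraction map `h` with consecutive duplicate states collapsed (so that one abstract step may span several concrete steps). All state names and the helper functions are invented for illustration.

```python
# Toy sketch: algorithms as sets of state sequences, with a hedged
# notion of "B implements A" via a state-abstraction map h.
# The map h, the duplicate-collapsing rule, and the state names are
# illustrative assumptions, not definitions taken from the thesis.

def collapse(seq):
    """Merge consecutive repeated states: one abstract step may
    correspond to several concrete steps."""
    out = []
    for s in seq:
        if not out or out[-1] != s:
            out.append(s)
    return tuple(out)

def implements(concrete, abstract, h):
    """True if every concrete run, viewed through h, is an abstract run."""
    return all(collapse(tuple(h[s] for s in run)) in abstract
               for run in concrete)

# Abstract algorithm: a single run over coarse states A, B, C.
abstract = {("A", "B", "C")}
# Concrete algorithm: finer-grained states a1, a2 both refine A, etc.
concrete = {("a1", "a2", "b1", "c1")}
h = {"a1": "A", "a2": "A", "b1": "B", "c1": "C"}

print(implements(concrete, abstract, h))  # True
```

On this toy reading, strong equivalence of two models would amount to their sets of runs coinciding at some shared level of description, whereas weak input-output equivalence ignores the intermediate states entirely.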
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:567942 |
Date | January 1991 |
Creators | Foster, Carol Lynn |
Contributors | Stenning, Keith; Oberlander, Jon |
Publisher | University of Edinburgh |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | http://hdl.handle.net/1842/6591 |