1 | Machines cannot think
Gell, Robert George, January 1966
This paper is a critical essay on the question "Can machines think?", with particular attention paid to the articles appearing in the anthology Minds and Machines, edited by A. R. Anderson. The general conclusion of this paper is that the arguments which have been advanced to show that machines can think are inconclusive.
I begin by examining rather closely a paper by Hilary Putnam called "Minds and Machines", in which he argues that the traditional mind-body problem can arise with a complex cybernetic machine. My argument against Putnam is that either there are no problems with computers analogous to the ones raised by mental states, or, where there are problems with machines, these problems do not have at bottom the same difficulties that human experience raises.
I then continue by showing that a cybernetic machine is an instantiation of a formal system. This leads to a discussion of the relationship between formality and predictability, in which I try to show that some types of machine are in principle predictable. In the next section I attempt to prove that any discussion of outward signs of imitative behavior presupposes that some linguistic theory, such as a type reduction, has been substantiated. The force of this argument is that no such theory has in fact been substantiated. I offer a general account of the complexity of concept-property relations.
Finally I give a demonstration that no test or set of tests can be found that will be logically sufficient for the ascription of the concept "capable of thought." If this is successful, then I have shown that no test can be found which, when a machine is built to pass it, is logically adequate for saying that that machine can think. This argument is offered as further criticism of the Imitation Game, which A. M. Turing proposed as an adequate test for thinking subjects. Besides the specific conclusion that insufficient evidence has been offered to say that machines can think, this paper offers a more general conclusion: that most standard problems have at bottom a linguistic difficulty. However, this general conclusion is a broad, speculative one of which the work in this paper is only a small exemplification, and as such it reflects mainly the further ambitions of the author. / Faculty of Arts / Department of Philosophy / Graduate
2 | A computational theory of physical skill
Austin, Howard Allen, January 1976
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1976. / Microfiche copy available in Archives and Engineering. / Bibliography: leaves 414-421. / Ph.D.
3 | On the evolution of autonomous decision-making and communication in collective robotics
Ampatzis, Christos, 10 November 2008
In this thesis, we use evolutionary robotics techniques to automatically design and synthesise behaviour for groups of simulated and real robots. Our contribution is the design of non-trivial individual and collective behaviour; decisions about solitary or social behaviour are temporal and interdependent with communicative acts. In particular, we study time-based decision-making in a social context: how the experiences of robots unfold in time and how these experiences influence their interaction with the rest of the group. We propose three experiments based on non-trivial real-world cooperative scenarios. First, we study social cooperative categorisation; signalling and communication evolve in a task where cooperation among robots is not a priori required. The communication and categorisation skills of the robots are co-evolved from scratch, and the emerging time-dependent individual and social behaviours are successfully tested on real robots. Second, we show on real hardware evidence of the success of evolved neuro-controllers in controlling two autonomous robots that have to grip each other (autonomously self-assemble). Our experiment constitutes the first fully evolved approach to such a task, which requires sophisticated and fine sensory-motor coordination, and it highlights the minimal conditions for achieving assembly in autonomous robots by reducing the assumptions made a priori by the experimenter to a functional minimum. Third, we present the first work in the literature to deal with the design of homogeneous control mechanisms for morphologically heterogeneous robots, that is, robots that do not share the same hardware characteristics. We show how artificial evolution designs individual behaviours and communication protocols that allow cooperation between robots of different types, using dynamical neural networks that specialise on-line, depending on the morphology of each robot. The experiments briefly described above contribute to the advancement of the state of the art in evolving neuro-controllers for collective robotics, both from an application-oriented, engineering point of view and from a more theoretical point of view. / Doctorat en Sciences de l'ingénieur (Doctorate in Engineering Sciences)
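For readers unfamiliar with the approach named in this abstract, the sketch below shows the basic shape of an evolutionary-robotics pipeline: a population of genomes encoding neural-network controller weights is repeatedly evaluated, selected, and mutated. It is a minimal illustration only, not the controllers, simulator, or fitness measures used in the thesis; the network dimensions (N_SENSORS, N_HIDDEN, N_MOTORS), the elitist mutation scheme, and the toy evaluate() function are assumptions introduced purely for the example.

```python
# A minimal, illustrative sketch of an evolutionary-robotics loop: evolving the
# weights of a small recurrent controller against a toy fitness function.
# NOT the setup used in the thesis: the controller shape, mutation scheme, and
# the stand-in "simulation" inside evaluate() are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_SENSORS, N_HIDDEN, N_MOTORS = 4, 3, 2          # assumed toy controller size
GENOME_LEN = (N_SENSORS + N_HIDDEN) * N_HIDDEN + N_HIDDEN * N_MOTORS


def decode(genome):
    """Split a flat genome into recurrent-layer and motor-layer weight matrices."""
    cut = (N_SENSORS + N_HIDDEN) * N_HIDDEN
    w_rec = genome[:cut].reshape(N_SENSORS + N_HIDDEN, N_HIDDEN)
    w_out = genome[cut:].reshape(N_HIDDEN, N_MOTORS)
    return w_rec, w_out


def step(state, sensors, w_rec, w_out):
    """One update of a small discrete-time recurrent controller."""
    x = np.concatenate([sensors, state])
    new_state = np.tanh(x @ w_rec)
    motors = np.tanh(new_state @ w_out)
    return new_state, motors


def evaluate(genome, steps=200):
    """Placeholder fitness: reward a motor output that tracks a toy sensor signal."""
    w_rec, w_out = decode(genome)
    state = np.zeros(N_HIDDEN)
    fitness = 0.0
    for t in range(steps):
        sensors = np.sin(0.1 * t + np.arange(N_SENSORS))   # stand-in for robot sensors
        state, motors = step(state, sensors, w_rec, w_out)
        fitness -= (motors[0] - np.sin(0.1 * t)) ** 2      # toy tracking objective
    return fitness / steps


def evolve(pop_size=30, elite=5, generations=100, sigma=0.2):
    """Simple elitist evolutionary loop over controller weights."""
    pop = rng.normal(0.0, 1.0, size=(pop_size, GENOME_LEN))
    for _ in range(generations):
        scores = np.array([evaluate(g) for g in pop])
        parents = pop[np.argsort(scores)[::-1][:elite]]     # keep the best genomes
        children = [parents[i % elite] + rng.normal(0.0, sigma, GENOME_LEN)
                    for i in range(pop_size - elite)]       # mutated copies refill the population
        pop = np.vstack([parents] + children)
    scores = np.array([evaluate(g) for g in pop])
    best = pop[int(np.argmax(scores))]
    return best, float(scores.max())


if __name__ == "__main__":
    best_genome, best_fitness = evolve()
    print(f"best toy fitness: {best_fitness:.4f}")
```

In practice, the evaluation step would run the decoded controller on simulated (and eventually real) robots and score the resulting group behaviour; that is where the thesis's co-evolution of communication, categorisation, and self-assembly skills would enter.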