1 |
Wittgenstein and the Chinese Room (Palmlöf, Otto, January 2023)
No description available.
|
2 |
Intencionalidade e inteligência artificial no pensamento de Dennett / Intentionality and artificial intelligence in Dennett's thought (Santos, Guilherme Silveira de Almeida, 5 September 2013)
The concept of the intentional stance, a concept of central importance in Daniel Dennett's philosophical thought, is a key element of a naturalized view of intentionality. There are three distinct stances from which to predict the behavior of an object:
1) The physical stance, at which predictions are based on the object's physical properties and on physical laws.
2) The design stance, at which predictions are based on the object's function.
3) The intentional stance, at which the object is treated as an intentional agent endowed with beliefs, thoughts, and intentions.
Moreover, from Dennett's point of view, the naturalization of intentionality opens the way to the possible construction of computers that exhibit intentional behavior, and the development of intelligent computers is the goal of Artificial Intelligence (AI) research. It is also important to present refutations of two skeptical arguments that attempt to prove the impossibility of machine intentionality; two such arguments directed at aspects of AI research are the argument from Gödel's theorem and the Chinese room argument. The objective of the dissertation is to show that the concept of the intentional stance offers a possible route to the construction of artificial intentional agents.
|
3 |
A crítica de John Searle à inteligência artificial: uma abordagem em filosofia da mente [John Searle's critique of artificial intelligence: an approach in philosophy of mind] (Amorim, Paula Fernanda Patrício de, 27 February 2014)
This text presents the philosopher John Searle's critique of Artificial Intelligence, more specifically of what Searle calls Strong Artificial Intelligence. The main text examined is his 1980 article "Minds, brains and programs", in which he presents the Chinese room argument. The argument seeks to show that the mind cannot be duplicated by purely formal processes, namely the manipulation of 0s and 1s in a computer program that could be executed on any computer whose hardware is capable of running it. To this end, Searle proposes a thought experiment involving what he calls the "Chinese room", intended to demonstrate that even a computer running a program said to involve the capacity for understanding can never understand anything: in the thought experiment, the same computational processes are emulated in a different setting (the room, equivalent to the computer) by a human being (equivalent to the program running on the computer), and yet, according to Searle, no understanding (in this case, of the Chinese language) can be ascribed either to the room or to the man inside it carrying out the same processes that a computer running a program would carry out. Through the exposition of this argument and of Searle's criticisms of Strong Artificial Intelligence, as well as of other theories of mind such as computationalism and functionalism, the dissertation seeks to assess Searle's contribution to the Philosophy of Mind with regard to the debates surrounding Artificial Intelligence, analyzing both the argument and the criticisms of it, along with the strengths and weaknesses of Searle's argumentation.
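To make the target of Searle's critique concrete, here is a deliberately trivial sketch (not drawn from the dissertation, and far cruder than the programs Searle discusses) of purely formal symbol manipulation: input strings are matched against a rule book and replies are handed back, with nothing that could count as understanding Chinese on either side. The rule entries and the default reply are invented for the example.

```python
# Toy "rule book" room: syntactic lookup only - an illustration, not Searle's own example.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # if you receive this squiggle, hand back that squiggle
    "你叫什么名字？": "我没有名字。",
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever the rule book pairs with the input; nothing is 'understood'."""
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # default squiggle for unknown input

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # fluent-looking output produced by pure rule-following
```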
|
4 |
The Morse Code Room: Applicability of the Chinese Room Argument to Spiking Neural Networks (Brinz, Johannes, 24 February 2023)
The Chinese room argument (CRA) was first stated in 1980. Since then computer technologies have improved, and today spiking neural networks (SNNs) are "arguably the only viable option if one wants to understand how the brain computes" (Tavanei et al. 2019: 47). SNNs differ in various important respects from the digital computers the CRA was directed against. The objective of the present work is to explore whether the CRA applies to SNNs. In the first chapter I discuss computationalism and the Chinese room argument and give a brief overview of spiking neural networks. The second chapter is concerned with five important differences between SNNs and digital computers: (1) massive parallelism, (2) subsymbolic computation, (3) machine learning, (4) analogue representation and (5) temporal encoding. I conclude that, apart from minor limitations, the Chinese room argument can be applied to spiking neural networks. (A minimal spiking-neuron sketch illustrating two of these differences follows the contents list below.)

Contents:
1 Introduction
2 Theoretical background
2.I Strong AI: Computationalism
2.II The Chinese room argument
2.III Spiking neural networks
3 Applicability to spiking neural networks
3.I Massive parallelism
3.II Subsymbolic computation
3.III Machine learning
3.IV Analogue representation
3.V Temporal encoding
3.VI The Morse code room and its replies
3.VII Some more general considerations regarding hardware and software
4 Conclusion
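To illustrate two of the differences listed above, analogue representation and temporal encoding, the following minimal leaky integrate-and-fire (LIF) neuron sketch may help; it is not taken from the thesis, and all parameter values and names are illustrative assumptions. The neuron's state is a continuously varying membrane potential, and its output is a train of spike times rather than a string of symbols.

```python
import numpy as np

def lif_spike_times(input_current, dt=1e-3, tau=0.02,
                    v_rest=0.0, v_reset=0.0, v_threshold=1.0, resistance=1.0):
    """Simulate one leaky integrate-and-fire neuron; return spike times in seconds."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        # Analogue representation: the potential leaks toward rest while
        # continuously integrating the input current (forward-Euler step).
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        # Temporal encoding: the output is *when* the threshold is crossed.
        if v >= v_threshold:
            spike_times.append(round(step * dt, 4))
            v = v_reset
    return spike_times

# A stronger constant drive yields earlier, denser spikes than a weaker one;
# the difference between the two inputs is carried entirely by spike timing.
print(lif_spike_times(np.full(200, 1.5)))   # spikes roughly every 22 ms
print(lif_spike_times(np.full(200, 1.05)))  # fewer, later spikes
```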
|
5 |
Processreliabilistiska rättfärdigande som funktionalistiska förlopp: Är generalitetsproblemet ett frameproblem? [Process-reliabilist justification as functionalist processes: is the generality problem a frame problem?] (Lundqvist, Johan, January 2013)
Metaphysical functionalism is presented first. A Ramsey sentence for pain plays a central role as an implicit definition of a mental state over sensory input and behavioural output. Reliabilism is then presented as a theory of knowledge, together with some general epistemological problems and some problems specific to reliabilism; the most relevant are the evil-demon case, clairvoyance and Mr. Truetemp, and the generality problem. A formal and schematic presentation is given of reliabilism as a theory of knowledge, or possibly of knowledge attribution, and of process reliabilism as a theory of epistemic justification. Structural similarities between functionalism and process reliabilism are then exposed. A close kinship between the two theories appears plausible once Ramsey sentences for justified beliefs are presented; these are then analysed against possible cases. With this new theoretical background, the problems for reliabilism are examined anew within a functionalist framework. New ways of countering them are presented through an analysis of the Chinese Room. The elusive generality problem can be seen as a frame problem and handled by means of enveloping.
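As a schematic illustration of the Ramsey-sentence technique mentioned above (the particular clauses are invented for illustration, not quoted from the thesis): start from a mini-theory stating that pain is the state caused by tissue damage (sensory input) and causing wincing (behavioural output), then replace the mental-state term with an existentially bound variable:

$$\exists x \,\big[\, \mathrm{CausedByTissueDamage}(x) \;\wedge\; \mathrm{CausesWincing}(x) \,\big]$$

Pain is thereby defined implicitly as whatever state occupies this causal role; the thesis extends the same move to Ramsey sentences for justified beliefs.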
|
6 |
Minds, Machines & Metaphors: Limits of AI Understanding (Másson, Mímir, January 2024)
This essay critically examines the limitations of artificial intelligence (AI) in achieving human-like understanding and intelligence. Despite significant advancements in AI, such as the development of sophisticated machine learning algorithms and neural networks, current systems fall short of comprehending the cognitive depth and flexibility inherent in human intelligence. Through an exploration of historical and contemporary arguments, including Searle's Chinese Room thought experiment and Dennett's Frame Problem, this essay highlights the inherent differences between human cognition and AI. Central to this analysis is the role of metaphorical thinking and embodied cognition, as articulated by Lakoff and Johnson, which are fundamental to human understanding but absent in AI. Proponents of AGI, like Kurzweil and Bostrom, argue for the potential of AI to surpass human intelligence through recursive self-improvement and technological integration. However, this essay contends that these approaches do not address the core issues of experiential knowledge and contextual awareness. By integrating insights from contemporary scholars like Bender, Koller, Buckner, Thorstad, and Hoffmann, the essay ultimately concludes that AI, while a powerful computational framework, is fundamentally incapable of replicating the true intelligence and understanding unique to humans.
|