1 |
The recreation of consciousness: Artificial intelligence and human individuation. Loghry, John Brendan, 25 January 2014
<p> Starting from Edward Edinger's portrayal of Jung's process of individuation as the creation of consciousness, this dissertation asks in what ways the creation of artificial intelligence (AI) can be seen as the recreation of consciousness, and specifically whether AI's maturation from nonconsciousness to something equivalent to consciousness will have an analogous effect on humanity's development out of unconsciousness toward a greater state of cognitive freedom. Taking a functional perspective, the dissertation asks whether B. F. Skinner's metaphor of the human psyche as a black box, normally read as expressing the belief that humans are mechanistic and determined, is in fact an attempt to insulate the most intimate of human experiences (the soul) from the intrusive gaze of the scientific mindset. Juxtaposing this black box metaphor with two other metaphors, the box that holds Schrödinger's cat and Pandora's box, the dissertation asks whether an entirely constructed entity that displays all the signs of soul will act as a mirror, reflecting humanity's gaze past our inner defenses to an inner absence where a metaphysical soul was once surmised to be. Although such a change in self-image would initially entail an apparent loss of meaning, the dissertation notes that such a lacuna of meaning is already growing in society and concludes that the loss of this concept would eventually result in a new concept of self, one that would represent an important milestone for the collective individuation of the species.</p>
|
2 |
Dynamics in interactions with digital technology: A depth psychological/theoretical exploration of the evolutionary-biological, symbolic, and emotional psyche in the digital age. Ziv, Ary, 16 October 2014
<p> The intention of this exploratory research is to shed light on the psychological impact of interactions with digital technology, which is increasingly pervasive in our culture. This dissertation asks what psychological phenomena are generated by human interactions with digital technology, in general, and with complex recommendation systems, in particular. Nondigital technology is contrasted with digital technology, which achieves new levels of interactivity through its artificial and virtual capabilities. It is proposed that the degree of increased interactivity made possible by digital technology crosses a threshold impacting the psyche in new ways. </p><p> A theoretical framework for understanding human-digital technology interactions is introduced and developed. The psyche is conceptualized as evolutionarily and biologically based, functioning symbolically and emotionally both consciously and unconsciously. Ramifications of this conceptualization are explored in the context of interactions with digital/algorithmic technology, using recommendation systems as illustrations. </p><p> The theoretical investigation concludes that psyche-digital technology interactions are new phenomena. Psychic processes—by nature evolutionarily and biologically symbolic and largely unconscious—interact with nonbiological digital/algorithmic technology. Because of the incongruence of value systems between biological phenomena and digital/algorithmic logic, unconscious psychic processes resulting from interactions between <i> the biological feeling psyche and nonbiological digital technology</i> are likely to significantly impact both psychic development of individuals, in the short term, and quite possibly the human species at large, in the long term. </p><p> The method of exploratory research is interpretive and theoretically oriented, while employing a depth psychological lens. 
Contemporary depth psychology is described as an integrative field that is receptive to insights from all other fields; it considers unconscious phenomena as vital to human psychological makeup. This study brings together depth psychological and neurobiological theory and is grounded in the work of depth psychologist Erich Neumann, who describes biological-evolutionary-symbolic unconscious and conscious dynamics of the psyche. </p><p> As background, social psychology's discoveries of unconscious social behaviors triggered by interacting with new media are highlighted as fundamental to interactions with computing technology. From a depth psychological point of view, conscious and unconscious relationships to and with technology are explored historically as precursors to interactions with digital technology. </p><p> Keywords: human-computer interactions, depth psychology, big data, recommendation systems, digital technology, emotions, affect, feeling, neurobiology, Carl Jung, Erich Neumann.</p>
|
3 |
Hierarchical Temporal Memory Software Agent: In the light of general artificial intelligence criteria. Heyder, Jakob, January 2018
Artificial general intelligence is not well defined, but attempts such as the recent list of “Ingredients for building machines that think and learn like humans” are a starting point for building a system considered as such [1]. Numenta is attempting to lead the new era of machine intelligence with its research to re-engineer principles of the neocortex. It is to be explored how the ingredients are in line with the design principles of its algorithms. Inspired by DeepMind's commentary about an autonomy ingredient, this project created a combination of Numenta's Hierarchical Temporal Memory (HTM) theory and Temporal Difference learning to solve simple tasks defined in a browser environment. An open-source software package, based on Numenta's intelligent computing platform NuPIC and OpenAI's framework Universe, was developed to allow further research of HTM-based agents on customized browser tasks. The analysis and evaluation of the results show that the agent is capable of learning simple tasks and that there is potential for generalization inherent to sparse representations. However, the results also reveal the infancy of the algorithms, which are not capable of learning dynamically complex problems, and show that much future research is needed to explore whether they can create scalable solutions towards a more general intelligent system.
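The abstract above describes pairing HTM's sparse representations with Temporal Difference learning. As a rough illustration of that combination only (not the thesis's actual implementation, and without NuPIC), the sketch below runs tabular TD(0) value learning over a hypothetical sparse encoding of states; the `encode` function is a stand-in for an HTM SDR encoder:

```python
import random

def encode(state, n_bits=64, active=4):
    """Hypothetical stand-in for an HTM SDR encoder: deterministically
    map a state to a small, fixed set of active bits."""
    rng = random.Random(state)              # seed by state for determinism
    return frozenset(rng.sample(range(n_bits), active))

def td0_update(values, s, reward, s_next, alpha=0.1, gamma=0.9):
    """One TD(0) update: V(s) <- V(s) + alpha * (r + gamma*V(s') - V(s))."""
    v_s = values.get(s, 0.0)
    v_next = values.get(s_next, 0.0)
    values[s] = v_s + alpha * (reward + gamma * v_next - v_s)
    return values[s]

# Toy chain task: states 0..3, reward 1.0 on reaching the terminal state 3.
values = {}
for _ in range(200):                        # repeated sweeps over the chain
    for s in range(3):
        r = 1.0 if s + 1 == 3 else 0.0
        td0_update(values, encode(s), r, encode(s + 1))

# Value propagates backward from the rewarded transition:
assert values[encode(2)] > values[encode(1)] > values[encode(0)]
```

In the thesis's actual agent the sparse state codes come from HTM's temporal memory rather than a hash-like encoder, but the TD bookkeeping over sparse keys is the same idea.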
|
4 |
AI-paradoxen / The AI Paradox. Ytterström, Jonas, January 2022
Derek Parfit is perhaps one of the most famous moral philosophers of our time. Parfit begins his first book Reasons and Persons by asking the question: what do we have most reason to do? His question touches upon what really matters, a question he continues to touch upon in his second book On What Matters. The philosopher Toby Ord argues in his book The Precipice that the challenge that defines our time, and should have a central priority, is the challenge of safeguarding humanity from so-called existential risks. An existential risk is a type of risk that threatens to destroy, or prevent, humanity's long-term potential.
Ord argues that today we are at a critical time in the history of humanity that can be absolutely decisive for whether there will even exist a future for humanity. But if we are to safeguard humanity from existential risks, then an appropriate question may be in what order we should prioritize different existential risks. The Swedish philosopher Nick Bostrom, like Ord, has long advocated that existential risks should be taken seriously. He believes that preventive measures should be taken. In his book Superintelligence Bostrom argues, both extensively and well, that the existential risk that may seem most urgent, and perhaps most severe, is artificial intelligence. Bostrom believes that we have good reason to think that the development of artificial intelligence can escalate to the point that the fate of humanity ends up beyond our own control. What he is referring to is that humans are currently the dominant agent on Earth and therefore have great control, but that this does not always have to be the case. Bostrom's thesis may have seemed unconventional when it was presented, and it can also seem so today at first glance. However, he has been explicitly supported by people like Bill Gates, Stephen Hawking, Elon Musk, Yuval Noah Harari and Max Tegmark, who either agree or reason along similar lines. I myself also find Bostrom's assumptions well-founded. The conclusion that many draw is therefore that we should regard artificial intelligence as an existential risk that should be given a high priority. However, in this text I will argue for the thesis that we should not regard artificial intelligence as an existential risk. The thesis follows from an objection of my own, which I call the AI paradox. According to the objection, it seems that artificial intelligence cannot lead to an existential catastrophe given certain premises that many in the debate about artificial intelligence as a threat seem to accept. The text of the essay is structured as follows.
In section 2 I present the main argument circulating in the debate about artificial intelligence as a threat, and explain some important terms and concepts. In section 3 I examine the first premise of the argument and assess its plausibility. In section 4 I proceed to the second premise and examine it similarly. In section 5 I present my own idea, which I call the AI paradox, an objection to the argument. In section 6 I discuss the implications of the AI paradox. Finally, in section 7, I give an overall summary and a conclusion, as well as some last reflections.
|