1. Ballistisk missilforsvar - og nuklear magtbalance / Ballistic Missile Defense and the Nuclear Balance of Power. Dyvad, Peter, January 2002.
A dissertation on the nuclear balance of power, examined through a neorealist analysis of Russian and Chinese perceptions of the United States' intentions to deploy a ballistic missile defense system, with the aim of determining what consequences these intentions and perceptions may have for the stability of the nuclear balance of power between the United States and Russia, and between the United States and China. / The aim of this dissertation is to examine how the United States' deployment of a National Missile Defense will affect the stability of the nuclear balance of power relative to Russia and China. The theories used are, first, the balance-of-threat theory of Stephen M. Walt, which rests on Kenneth N. Waltz's neorealism and balance-of-power theory, and, second, Robert Jervis's notable "Four Worlds Under the Security Dilemma" theory, which explores the power-security dilemma and the consequences of the ambiguity between offensive and defensive military power. The method is a qualitative empirical analysis of the literature as well as of official statements from the governments and governmental institutions concerned. First, the United States' intentions in deploying a National Missile Defense will be examined. Second, the Russian and Chinese perceptions of threat, as well as their possible options for balancing the National Missile Defense, will be explored. Finally, the effect of the National Missile Defense on strategic stability is examined. My conclusions are:

Russia and China perceive National Missile Defense as a threat: Russia's nuclear deterrence vis-à-vis the United States will be undermined in its present form towards 2015. China's current "minimal" nuclear deterrence will be nullified instantly when the National Missile Defense is deployed. Furthermore, China's ongoing modernisation of its nuclear forces will be continuously undermined towards 2015. In addition to existing and potential areas of conflict with the United States, the deployment of the National Missile Defense might contribute to Russian and Chinese perceptions of malign American intentions.

Russia and China will try to balance the perceived threat by increasing the offensive capabilities of their strategic nuclear weapons: Russia will try to balance the threat from the National Missile Defense through strategic arms control agreements with the United States. Furthermore, Russia will prepare its nuclear infrastructure to be able to balance the National Missile Defense if the system is deployed. The National Missile Defense will require a further expansion of the Chinese nuclear force, perhaps significantly beyond its current plans. The methods of balancing will be an increase in the number of missiles and warheads, and probably also the deployment of sophisticated countermeasures.

The deployment of the National Missile Defense can cause strategic instability and an arms race: As offensive nuclear weapons, i.e. ballistic missiles, have several obvious advantages over defensive systems, i.e. the National Missile Defense, and as offensive postures cannot be distinguished from defensive ones, the power-security dilemma will operate unrestrained. All three states will be confronted with a very unstable situation, and an offensive strategic arms race will be likely. The deployment of a National Missile Defense will furthermore increase the difficulty of achieving very low warhead levels in forthcoming START negotiations.
2. AI-paradoxen / The AI Paradox. Ytterström, Jonas, January 2022.
Derek Parfit is perhaps one of the most famous moral philosophers of our time. Parfit begins his first book Reasons and Persons by asking the question: what do we have most reason to do?
His question concerns what really matters, a theme he returns to in his second book On What Matters. The philosopher Toby Ord argues in his book The Precipice that the challenge that defines our time, and that should be a central priority, is safeguarding humanity from so-called existential risks. An existential risk is a type of risk that threatens to destroy, or prevent, humanity's long-term potential. Ord argues that we are at a critical point in the history of humanity, one that may prove decisive for whether humanity has a future at all. But if we are to safeguard humanity from existential risks, a natural follow-up question is in what order we should prioritize different existential risks. The Swedish philosopher Nick Bostrom, like Ord, has long advocated that existential risks be taken seriously, and he believes that preventive measures should be taken. In his book Superintelligence, Bostrom argues, both extensively and well, that the existential risk that may seem most urgent, and perhaps most severe, is artificial intelligence. Bostrom believes we have good reason to think that the development of artificial intelligence may escalate to the point where the fate of humanity ends up beyond our own control. What he is referring to is that humans are currently the dominant agents on earth and therefore hold great control, but that this need not always be the case. Bostrom's thesis may have seemed unconventional when it was first presented, and it can still seem so today at first glance. However, he has been explicitly supported by people such as Bill Gates, Stephen Hawking, Elon Musk, Yuval Noah Harari and Max Tegmark, who either agree or reason along similar lines. I myself also find Bostrom's assumptions well-founded. The conclusion many draw is therefore that we should regard artificial intelligence as an existential risk and give it high priority. In this essay, however, I argue for the thesis that we should not regard artificial intelligence as an existential risk. The thesis follows from an objection of my own, which I call the AI paradox. According to the objection, artificial intelligence seemingly cannot lead to an existential catastrophe, given certain premises that many in the debate about artificial intelligence as a threat appear to accept. The essay is structured as follows. In section 2 I present the main argument circulating in the debate about artificial intelligence as a threat, and explain some important terms and concepts. In section 3 I examine the first premise of the argument and assess its plausibility. In section 4 I do the same for the second premise. In section 5 I present my own idea, the AI paradox, as an objection to the argument. In section 6 I discuss the implications of the AI paradox. Finally, in section 7, I give an overall summary and conclusion, together with some final reflections.