1. Kampaň pro zákaz "bojových robotů": vyhlídky regulace autonomních zbraňových systémů / Campaign to stop 'killer robots': prospects of a preemptive ban on autonomous weapons systems. Rosendorf, Ondřej (January 2018)
This thesis addresses the issue of autonomous weapons systems and their potential preventive prohibition with regard to current international discussions at multilateral forums such as the Human Rights Council, the First Committee of the General Assembly, and the Convention on Certain Conventional Weapons at the UN. The aim of the thesis is to provide an extensive empirical account of the substance of those discussions and their most likely outcome, estimating state preferences through content analysis and predicting the likely outcome with the median voter theorem. From a theoretical standpoint, the thesis draws on defensive realism and on the arms control, arms trade and institutionalist literatures, from which it takes the concept of legalization. From a methodological standpoint, it relies on quantitative methods, in particular content analysis for data collection and the median voter theorem for prediction of the likely outcome. In addition, the thesis uses regression analysis to examine states' activity at the aforementioned fora. In conclusion, the thesis finds that the most likely outcome of discussions on autonomous weapons systems is a moderate-obligation form of hybrid regulation, which includes solutions such as a framework convention and a moratorium. Further finding of...
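As a minimal sketch of the median voter prediction mentioned in this abstract: once state positions have been coded onto an ordinal regulation scale (for example through content analysis of statements), the predicted negotiated outcome is the position of the median state. The scale points and state codings below are purely hypothetical illustrations, not the thesis's actual data or coding scheme.

```python
# Illustrative median-voter prediction over coded state preferences.
# Scale points and codings are hypothetical, not drawn from the thesis.
import statistics

# Ordinal scale of possible regulatory outcomes (0 = no regulation ... 4 = preemptive ban).
SCALE = {
    0: "no regulation",
    1: "political declaration",
    2: "moratorium / framework convention",
    3: "binding protocol",
    4: "preemptive ban",
}

# Hypothetical preferences, one coded value per state.
coded_preferences = {
    "State A": 0,
    "State B": 1,
    "State C": 2,
    "State D": 2,
    "State E": 3,
    "State F": 4,
    "State G": 4,
}

# With single-peaked preferences on one dimension, the median voter theorem
# predicts the median state's position as the outcome.
median_position = statistics.median_low(coded_preferences.values())
print(f"Predicted outcome: {SCALE[median_position]} (scale point {median_position})")
```

With this toy coding the median falls at the middle of the scale, which illustrates the intuition behind the abstract's prediction of a moderate-obligation hybrid outcome.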
2. Artificial Intelligence in Lethal Automated Weapon Systems - What's the Problem? : Analysing the framing of LAWS in the EU ethics guidelines for trustworthy AI, the European Parliament Resolution on autonomous weapon systems and the CCW GGE guiding principles. Beltran, Nicole (January 2020)
Lethal automated weapon systems (LAWS) are being developed and deployed by a growing number of state and non-state actors, although no internationally legally binding framework exists as of yet. As a first attempt to regulate LAWS, the UN appointed a Group of Governmental Experts (GGE) to create guiding principles on the issue of LAWS. A few years later, the EU appointed an expert group to create the Ethics guidelines for trustworthy AI, and the European Parliament passed a resolution on the issue of LAWS. This thesis attempts to make visible the underlying norms and discourses that have shaped these guiding principles and guidelines. By scrutinizing the documents through the 'What's the problem represented to be?' approach, the discursive practices that enable the framing are illuminated. The obscured problems not spoken of in the EU and UN documents are emphasised, suggesting that both documents oversimplify and downplay the dangers of LAWS, leaving issues such as gender repercussions, human dignity and the dangers of the sophisticated weapons systems themselves largely unproblematised and hidden behind their suggested dichotomised and anthropocentric solutions, which largely amount to a simple "add human and stir" kind of solution. The underlying cause of this tendency seems to stem from a general unwillingness of states to regulate, as LAWS are quickly becoming a matter of haves and have-nots and may potentially change warfare as we know it. A case can also be made for AI's 'Hollywood problem' influencing the framing of LAWS, where the dystopian Terminator-like depiction in popular culture can be seen reflected in international policy papers and statements.
3. Conceptualizing lethal autonomous weapon systems and their impact on the conduct of war - A study on the incentives, implementation and implications of weapons independent of human control. Simon, Sascha (January 2019)
The thesis aims to study the emergence of a new weapons technology, also known as 'killer robots' or lethal autonomous weapon systems (LAWS). It seeks to answer what factors drive the development and deployment of these weapon systems without 'meaningful human control', that is, with the decision to kill delegated to machines. The research question focuses on the motivations to develop and deploy LAWS, as well as the consequences this would have for military conduct and conflict characteristics. The incentives LAWS offer and the way they are adopted have been studied by synthesizing antinomic democratic peace theory and adoption capacity theory, respectively. The findings of this qualitative content analysis lead to two major conclusions. (1) LAWS present substantial risk-avoidance and cost-reduction potential for the user. These factors exert a stronger pull on democracies than on autocracies, since democracies stand to benefit comparatively more from LAWS' specific capabilities. (2) Their adoption is aided by the low financial intensity needed to adopt them, owing to the high commercial profitability and applicability of AI technology and the ease of spillover to the military sphere; it is hindered by the high organizational capital needed to implement the drastic changes LAWS bring. All of this leads to the prediction that LAWS are likely to proliferate further, at a medium speed, and potentially upset the balance of power.
4. Artificiell Intelligens och krigets lagar : Kan skyddet i internationell humanitärrätt garanteras? / Artificial intelligence and the laws of war: Can the protection under international humanitarian law be guaranteed? Öholm, Emma (January 2023)
Artificial intelligence (AI) is one of the fastest developing technologies globally. AI has recently entered warfare and thus taken a place in international law. Today, AI is used in warfare through machine learning and autonomous weapon systems. Autonomous weapons are expected to play a decisive role in future warfare and will therefore have a major impact on both civilians and combatants. This gives rise to an examination of the role of artificial intelligence, machine learning and autonomous weapon systems in international law, specifically international humanitarian law (IHL). The purpose and main research question of the thesis is to examine how the use of AI, machine learning and autonomous weapon systems is regulated within international law. Further, the thesis examines whether those regulations can sufficiently ensure the protection guaranteed within IHL or whether additional regulation is needed. The research question is answered by examining the relevant rules of IHL, assessing compliance with the protection laid down in the principles of distinction, proportionality and precautions in attack, and lastly analyzing the consequences for civilians and combatants. The conclusion is that the rules of IHL are both applicable and, in theory, sufficient to regulate autonomous weapon systems. However, the weapon itself must be capable of complying with IHL, and in order to guarantee this, additional regulation of the use of autonomous weapons is needed. The use of autonomous weapon systems does not necessarily violate the principles of distinction, proportionality and precautions in attack. On the contrary, the use of autonomous weapons could possibly ensure that the principles are respected even further. This, however, depends on the actual capabilities of autonomous weapon systems and whether they can make the complex judgments required by each principle. It is nevertheless still important to ensure that the element of human control is never completely lost. The issue that keeps returning is the potential loss of human control. Human control must be guaranteed at all times so that the final decision always remains with a human. If humanity in warfare is lost, the consequences and risks for civilians will increase. Not only is there a possibility of an increase in the use of violence, but also of an increase in indiscriminate attacks. The rules of IHL aim to protect civilians as well as combatants, and the use of this new weapon will make it harder for combatants to navigate armed situations. This will increase the suffering of civilians, but also risk undermining the protection of combatants that IHL ensures.