11

Deepfakes - En risk för samhället? / Deepfakes - A risk to society?

Wardh, Eric, Wirstam, Victor January 2021 (has links)
A deepfake can be anything from an image or a video to an audio clip manipulated with the help of AI technology. Deepfakes have legitimate uses in, for example, the gaming and film industries, but the most common use of deepfakes is to create manipulated images, videos or audio clips in order to spread false information. Another use is to make it appear as though people who never actually took part in a given image, video or audio clip in fact did so. This thesis focuses on examining how deepfakes are used and how they can be used to influence society, now and within the next five years. This is done by means of a literature review and semi-structured interviews. At present, deepfakes are not used to any great extent to try to influence society. What is used instead is a simpler variant of deepfakes known as cheapfakes or shallowfakes, which are faster, easier and cheaper to produce. As long as deepfakes remain harder and more expensive to produce than cheapfakes and shallowfakes, they will not be used to influence society to a greater extent than they are today. As the technology advances, however, the use of deepfakes will also increase.
12

Empowerment or exploitation: A qualitative analysis of online feminist communities’ discussions of deepfake pornography

Brieger, Alexandra Rose January 2024 (has links)
This thesis provides insight into the textually constructed identities that online feminist groups create when discussing deepfake pornography, as well as the positions feminist users take regarding their ability to change dominant uses of deepfake pornography. Deepfakes, powered by artificial intelligence and deep learning, involve taking individuals' faces and placing them onto images and videos for various purposes, including but not limited to pornography. Much is known about the potential ramifications of deepfake technology in general; however, little is known about social groups and their perceptions of deepfake pornography, and there is no data on feminist perspectives on deepfakes in online communities. To interpret the empirical data, this thesis employs theoretical concepts connected with technofeminism (Wajcman, 2004) to understand participants' perceptions of and attitudes towards deepfakes. It also utilizes moral foundations theory (Graham et al., 2013) to unpack the moral concerns that communities may have regarding deepfake pornography. Based on a discourse analysis of three feminist Reddit communities, r/PornIsMisogyny, r/fourthwavewomen, and r/TwoXChromosomes, this thesis finds that through their discussions, feminist communities construct a multitude of identities in relation to deepfake pornography, all of which are directly tied to their sense of moral principles. These identities contrast victims with perpetrators and hold governments and parents accountable for the spread of deepfake pornography. Additionally, many feminist users point to vigilante justice and criminalization as potential catalysts for change, a stance reflecting social constructionism, while other users express feelings of hopelessness and take positions more consistent with technological determinism. This emphasizes that collaborative efforts from governments, those in power, and private citizens are needed to address the challenges that deepfake pornography poses to society as a whole.
13

Deepfakes: ett upphovsrättsligt problem : En undersökning av det upphovsrättsliga skyddet och parodiundantagets samspel med AI-assisterade skapandeprocesser / Deepfakes: A Copyright Issue : An Inquiry of the Copyright Protection and Parody Exception's Interplay with AI-assisted Creative Processes

Atala Labbé, Daniel Antonio January 2022 (has links)
In the age of digitalization, several new ways of creating intellectual property have sprung up due to the resurgence of artificial intelligence (AI). This has paved the way for different kinds of technology, including AI assistance in a more normalized form. A prominent example of this technology is the "deepfake". Deepfakes are a technology that essentially places a person's face, likeness, mannerisms, and voice into new situations, which the creator then steers to make the deepfake do or say things that the person it is based on has not done or said. This technology has been used in a myriad of ways, from humorous content to extortion and revenge porn. The aim of this master's thesis is to analyse how intellectual property protection is achieved under current Swedish intellectual property principles and how these fit within the context of heavily AI-based technology such as deepfakes. This is done through a legal dogmatic lens, meaning that both Swedish and EU legislation and case law are systematized and mapped, and that current thinking on AI assistance throughout the creative process is discussed. Another subject touched upon is the parody exception in intellectual property law and the concept of adaptation, and how these work with and apply to AI-based creations. Part of the problem we face right now is that no existing legal parameters address greater AI involvement in creative processes, and this is certainly going to change how we view copyright law today. When EU and Swedish case law are compared and used to analyse the AI problem, a common denominator is that all copyright law and practice rest on the presumption that a human must be involved in the majority of the creative process. AI is already part of many creative processes today without any questions asked; however, when the AI component is more significant, the question becomes complicated when paired with traditional copyright law perspectives. Nevertheless, some discussions have been going on in both the Swedish and EU legal spheres, mostly in the EU, which is going to legislate further in the field of AI. In Sweden there have been no legislative processes concerning AI in copyright law, although some governmental bodies and papers have shed light on the matter. I conclude this master's thesis by presenting the findings on each of the questions mentioned above, namely that AI involvement becomes a significant factor in deciding whether or not a deepfake achieves copyright protection, and the same can be said of parodies. After this, I make a concluding analysis of the urgency of laws that tackle AI in the area of intellectual property, listing other areas that may need such legislation even more, and argue that Sweden needs to take part in every discussion on this subject in order to form a sustainable legal framework for AI in the context of intellectual property law. This will also open up a clear framework for assessing different technologies that use AI, such as deepfakes.
15

AI - ett framtida verktyg för terrorism och organiserad brottslighet? : En framtidsstudie / AI - a future tool for terrorism and organised crime? : A futures study

Lindström, Gustav, Lerbom, Ludvig January 2021 (has links)
This paper explores the future of Artificial Intelligence (AI) and how it can be used by organised crime or terrorist organisations. It explores the fundamentals of AI, its history, and how its use is affecting the way police operate. The paper shows how the development rate of AI is increasing and predicts how it will continue to evolve based on different parameters. A study of different types of AI shows the different uses these systems have, and their potential misuse in the near future. Using the six pillars approach, a prediction concerning AI and the development of Artificial General Intelligence (AGI) is explored, along with its ramifications for our society. The results show that in a world with AGI, AI-enabled crime as we know it would cease to exist, but up until that point, the use of AI in crime will continue to impact our daily lives and security.
16

Mediální reprezentace fenoménu deepfakes / Media representation of deepfake

Janjić, Saška January 2022 (has links)
This master's thesis explores the media representation of deepfakes. The first part summarizes previous research, followed by a comprehensive review of deepfakes, including the technology enabling their emergence, current uses, and methods of regulation and detection. The second part connects the phenomenon with important theoretical concepts such as the social construction of reality and the crucial role of the media in this process. The empirical part consists of research combining two methods: quantitative content analysis and qualitative critical discourse analysis. The analysis focuses on media articles dealing with deepfakes in order to find out how the media represent this phenomenon. The results show that the current media discourse on deepfakes is strongly negative, as the media frame them as a security threat. This negative representation is highly speculative, since journalists, lacking current examples, often invent their own stories of future disastrous consequences of the technology for national security. The findings also show an apparent hierarchy of the harms posed by deepfakes in media coverage, one that reflects gender stereotypes and inequality in current society. Harm in the form of non-consensual fake pornography targeting women is neglected in the media...
17

Intimt eller sexuellt deepfakematerial? : En analys av fenomenet ‘deepfake pornografi’ som digitalt sexuellt övergrepp inom det EU-rättsliga området / Intimate or sexual deepfake material? : An analysis of the phenomenon ’deepfake pornography’ as virtual sexual abuse in the legal framework of the European Union

Skoghag, Emelie January 2023 (has links)
No description available.
