  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Skenet bedrar: Fake porn i svensk straffrätt / Looks are deceiving: Fake porn in Swedish criminal law

Inevik, Mina January 2021
Sedan år 2017 har många internetanvändare fått bekanta sig med fenomenet deepfakes som är en slags digital imitation där en persons ansikte kan fogas samman med en annan persons kropp. Så kallade face swap-appar och filter är inget nytt men det som särskiljer deepfakes är att de skapas med hjälp av artificiell intelligens som kan ge extremt verkliga resultat. Dessutom blir tekniken allt mer avancerad och lättillgänglig för gemene man, vilket innebär större risk för att den missbrukas t.ex. för att skapa fake porn där en persons ansikte infogas i en existerande bild med sexuellt innehåll. Syftet med uppsatsen är att utreda om det genom befintlig svensk straffrätt går att lagföra den som skapar eller sprider fake porn. Detta innefattar en utredning av huruvida skyddet för den personliga integriteten omfattar sådant sexuellt material som inte visar en persons faktiska kropp men ändock dennes ansikte, eller om fake porn avslöjar luckor i det straffrättsliga skyddet för den personliga integriteten. För att uppnå syftet utreds brotten sexuellt ofredande och ofredande när det gäller skapande av fake porn samt förtal och olaga integritetsintrång när det gäller spridande av sådant material. De slutsatser som dras är att det i allmänhet inte går att lagföra den som skapat fake porn men att det i många fall troligtvis går att bedöma spridande av materialet som förtal. Detta är dock inte ett önskvärt sätt att angripa fake porn på då det inte fokuserar på skyddet för den personliga integriteten utan istället skyddet för en persons ära, vilket kan ge en del märkliga resultat vid tillämpningen på fake porn. Detta visar på ett behov av att lagstiftaren agerar proaktivt mot fake porn, innan det blir ett utbrett problem som är svårt för rättsväsendet att komma åt. / Since 2017 many internet users have gotten acquainted with deepfakes which is a type of digital imitation that allows you to attach a person’s face onto a picture or video of somebody else’s body. 
So called face swap apps and filters are hardly a novelty anymore but what distinguishes deepfakes is the fact that they are created using artificial intelligence which can provide extremely realistic results. This technology is becoming increasingly more advanced and easily available even for those lacking more than average skills in technology and computers. This increases the risk of deepfakes being abused e.g. for the purpose of creating fake porn where a person’s face is inserted into existing sexual content. The purpose of the thesis is to examine whether creating and spreading fake porn constitutes a crime according to Swedish criminal law. This includes investigating whether the protection of personal integrity extends to this sort of sexual material that portrays a person’s face but not their own body, or if fake porn has revealed blind spots in the protection of personal integrity in Swedish criminal law. For this purpose, the crimes sexual molestation and molestation will be tried regarding the creation of fake porn while defamation and unlawful intrusion of integrity will be tried regarding the spread of such content. It is concluded that, in general, creating fake porn is not punishable by criminal law, although it is likely that spreading it in many cases could constitute defamation. However, this is not a desirable way of managing fake porn since defamation is a crime designed to protect a person’s honor or reputation, not a person’s personal integrity. Applying the defamation provision on fake porn can therefore make for odd results in some cases. This highlights the need for proactivity from the legislator before fake porn becomes a widespread problem that the criminal justice system cannot handle.
2

Stronger Together? An Ensemble of CNNs for Deepfakes Detection / Starkare Tillsammans? En Ensemble av CNNs för att Identifiera Deepfakes

Gardner, Angelica January 2020
Deepfakes technology is a face swap technique that enables anyone to replace faces in a video, with highly realistic results. Despite its usefulness, if used maliciously, this technique can have a significant impact on society, for instance, through the spreading of fake news or cyberbullying. This makes deepfakes detection a problem of utmost importance. In this paper, I tackle the problem of deepfakes detection by identifying deepfakes forgeries in video sequences. Inspired by the state of the art, I study the ensembling of different machine learning solutions built on convolutional neural networks (CNNs) and use these models as objects for comparison between ensemble and single-model performance. Existing work in the research field of deepfakes detection suggests that the escalated challenges posed by modern deepfake videos make them increasingly difficult for detection methods. I evaluate that claim by testing the detection performance of four single CNN models as well as six stacked ensembles on three modern deepfakes datasets. I compare various approaches to combining single models and to how their predictions should be incorporated into the ensemble output. The results show that the best approach for deepfakes detection is to create an ensemble, although the choice of ensemble approach plays a crucial role in detection performance. The final proposed solution is an ensemble of all available single models, which uses soft (weighted) voting to combine its base learners’ predictions. Results show that this proposed solution significantly improved deepfakes detection performance and substantially outperformed all single models.
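The soft (weighted) voting that the proposed ensemble relies on can be sketched in a few lines. The sketch below is not the thesis's implementation: the per-model probabilities and accuracy weights are invented for illustration, and in practice each probability would come from a CNN base learner.

```python
def soft_voting(probabilities, weights=None):
    """Soft voting: a (weighted) average of the base learners'
    'fake' probabilities, thresholded at 0.5."""
    if weights is None:
        weights = [1.0] * len(probabilities)
    score = sum(p * w for p, w in zip(probabilities, weights)) / sum(weights)
    return score, ("fake" if score >= 0.5 else "real")

# Hypothetical per-video 'fake' probabilities from four CNN base learners,
# weighted by each model's (assumed) validation accuracy:
preds = [0.91, 0.60, 0.45, 0.80]
acc = [0.88, 0.75, 0.70, 0.85]
score, label = soft_voting(preds, acc)   # score ≈ 0.706 → "fake"
```

Weighting by validation accuracy lets stronger base learners dominate the vote; with equal weights this reduces to plain probability averaging.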
3

Deepfake detection by humans : Face swap versus lip sync / Människors förmåga att upptäcka deepfakes : Face swap mot lipsync

Sundström, Isak January 2023
The term “deepfakes” refers to media content that has been manipulated using deep learning. This thesis project seeks to answer the question of how well humans are able to detect deepfakes. In particular, the project compares people’s ability to detect deepfakes between two different deepfake categories: face swap and lip sync. To achieve this, a perceptual user test was performed, in which 30 participants were given a number of lip sync, face swap and unaltered videos and were asked to classify which of them were unaltered and which had been manipulated using deepfake technology. These results help fill the gap in knowledge regarding perceptual user tests on deepfakes, for which only a small amount of research has been conducted. The results also shed light on which types of deepfakes pose the biggest threat with regard to malicious impersonation. The main conclusion from this study was that lip sync is likely harder for humans to detect than face swap. The percentage of correct classifications of lip sync videos was 52.7%, and the percentage of correct classifications of face swap videos was 91.3%. / Deepfakes är videor som har blivit manipulerade med hjälp av deep learning. Detta examensarbete utforskar huvudsakligen två olika kategorier av deepfakes: face swap och lip sync. Syftet med projektet är att svara på frågan: Hur bra är människor på att se om en video innehåller deepfakes eller inte? Dessutom ställs frågan: Vilken typ av deepfake, face swap eller lip sync, är svårare för människor att upptäcka? För att svara på dessa frågor genomfördes en användarstudie där 30 deltagare fick titta på ett antal lip sync-, face swap- och icke-manipulerade videor, och sedan försöka avgöra vilka av dem som var manipulerade och vilka som inte var det. 
Resultaten från den här studien hjälper till att fylla den kunskapslucka som finns angående människors förmåga att upptäcka deepfakes, där bara en väldigt begränsad mängd studier finns. Resultaten kan också användas för att peka ut vilka typer av deepfakes som utgör större hot angående lurendrejeri. Slutsatsen från studien var att lip sync troligtvis är svårare för människor att upptäcka än face swap, åtminstone för datasetet FakeAVCeleb. Andelen korrekta gissningar för lip sync-videorna i studien var 52.7%, medan andelen korrekta gissningar för face swap var 91.3%.
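The study's headline numbers are per-category detection rates: correct classifications over total judgments. The snippet below sketches that bookkeeping; the counts are illustrative, not the study's raw data, chosen only so the rates reproduce the reported 52.7% and 91.3% (assuming 150 fake-video judgments per category, e.g. 30 participants × 5 videos, which is an assumption).

```python
from collections import defaultdict

def accuracy_by_category(responses):
    """responses: iterable of (category, said_fake, is_fake) tuples.
    Returns the fraction of correct judgments per category."""
    correct, total = defaultdict(int), defaultdict(int)
    for category, said_fake, is_fake in responses:
        total[category] += 1
        correct[category] += (said_fake == is_fake)  # bool counts as 0/1
    return {c: correct[c] / total[c] for c in total}

# Illustrative fake-video trials mirroring the reported detection rates:
demo = (
    [("lip sync", True, True)] * 79 + [("lip sync", False, True)] * 71
    + [("face swap", True, True)] * 137 + [("face swap", False, True)] * 13
)
rates = accuracy_by_category(demo)   # lip sync ≈ 0.527, face swap ≈ 0.913
```

On fake-only trials this accuracy is exactly the detection rate; unaltered control videos would be tallied the same way under their own category.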
4

The Contribution of Visual Explanations in Forensic Investigations of Deepfake Video : An Evaluation

Fjellström, Lisa January 2021
Videos manipulated by machine learning have increased rapidly online in the past years. So-called deepfakes can depict people who never participated in a video recording by transposing their faces onto others in it. This raises concerns about the authenticity of media, which demands higher-performing detection methods in forensics. The introduction of AI detectors has been of interest, but is held back today by their lack of interpretability. The objective of this thesis was therefore to examine what the explainable AI method local interpretable model-agnostic explanations (LIME) could contribute to forensic investigations of deepfake video. An evaluation was conducted in which three multimedia forensics practitioners evaluated the contribution of visual explanations of classifications when investigating deepfake video frames. The estimated contribution was not significant, yet answers showed that LIME may be used to indicate areas in which to start examining. LIME was, however, not considered to provide sufficient proof of why a frame was classified as 'fake', and would, if introduced, be used as one of several methods in the process. Issues were apparent regarding the interpretability of the explanations, as well as LIME's ability to indicate features of manipulation with superpixels.
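The perturbation idea behind LIME can be illustrated without the library itself. The sketch below is not from the thesis and deliberately simplifies LIME: it replaces the weighted linear regression over perturbed samples with a difference of means per segment, and the segment names and toy detector are invented for illustration.

```python
import random

def lime_style_importance(image, classify, segments, num_samples=200, seed=0):
    """LIME-flavoured perturbation sketch: randomly switch superpixel
    segments off, then score each segment as (mean classifier output with
    the segment on) minus (mean output with it off). Real LIME fits a
    weighted linear model instead; this is a deliberate simplification."""
    rng = random.Random(seed)
    on_scores = {s: [] for s in segments}
    off_scores = {s: [] for s in segments}
    for _ in range(num_samples):
        mask = {s: rng.random() < 0.5 for s in segments}  # random on/off pattern
        score = classify(image, mask)                     # 'fake' probability
        for s in segments:
            (on_scores if mask[s] else off_scores)[s].append(score)
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    return {s: mean(on_scores[s]) - mean(off_scores[s]) for s in segments}

# Toy stand-in detector: its 'fake' score depends only on the
# (hypothetical) mouth region being visible.
def toy_classify(image, mask):
    return 0.9 if mask["mouth"] else 0.1

importance = lime_style_importance(None, toy_classify, ["eyes", "mouth", "cheek"])
# importance["mouth"] comes out near 0.8 while the other segments stay
# near zero, pointing an examiner at the mouth region first.
```

This mirrors the thesis's use case: the importance map does not prove why a frame is fake, but it suggests where to start examining.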
5

How fake is fake enough? : Deepfakes potential effect on the way news media is used and experienced today and in the near future

Lundberg, Ebba January 2023
Deepfakes are synthetic content, such as pictures, videos, and sounds, generated with advanced deep learning and AI technology. Anyone can create deepfakes, and they therefore pose threats to individuals, to finances, and to society: for example bullying, defamation, fraud, damage to democracy, and news media manipulation. The aim of the study is to analyze and discuss whether deepfakes can potentially affect the way society views and uses news media, positively or negatively. The research methodology chosen was a cross-sectional study, which measures a particular aspect of a social phenomenon or trend. In this study the phenomenon is deepfakes and how they might affect the way news media is used: what are the affordances and constraints of this phenomenon? The study was conducted in two steps. The first step was to study discussion forums and blogs where AI and deepfakes were discussed. The second step was to conduct semi-structured interviews with people selected through purposive sampling. The results show that deepfakes have both positive and negative aspects. Deepfakes could have a great impact on the entertainment business and within news media and journalism. They can be used to show things that are only speculative or very hard to capture on video or in a picture. They can be used within news media or journalism to protect people and gain a broader following. The results also show that deepfakes can lead to psychological, financial and social harm. Source criticism is a topic that needs to be discussed more and given a bigger part in education. Without source criticism, and with a larger use of deepfakes, people could experience a lot of fear and confusion, since it might become harder to know which sources to trust and to determine whether published content is fake or not.
6

THE CONSTRUCTION OF IDENTITY AND EVOLUTION OF DESIRE THROUGH SYNTHETIC MEDIA

Schenker, Dylan, 0009-0005-9499-760X January 2023
The specter of deepfakes and artificial-intelligence-enabled media production continues to exacerbate the fear brought on by a degraded ability to discern the real from the fake, synthetic, or fabricated in a networked society. While these fears are well-founded, especially as they pertain to issues of involuntary pornography, their introduction into an already oversaturated media landscape, if anything, extended trends in mediated indeterminacy already being fostered by the universalization of social media platforms. Sites such as Facebook, Instagram, Snapchat and TikTok made more explicit the contingent and performative nature of identity. As younger generations came of age through social media, they learned how to navigate and present themselves through it in novel ways unique to each platform. Oftentimes these strategies were harmful to people’s perception of themselves and their mental health. Other times, however, they gave people the ability to experiment with new forms of identity more in line with how they actually felt. Further, more experimentation through ubiquitous mediation extended what kinds of identities are possible in general as well. In turn, the discovery and extension of identity has led to the evolution of desire. Identities and desires hitherto not possible in a physical space precipitated the creation of new objects of desire that can be pursued and materially experienced regardless of their virtual nature. Deepfakes, and now generative AI, anticipate a further, exponentially more complicated relationship with identity and desire formation through the adoption of increasingly unreal presentations of each. / Media Studies & Production
7

Rätten till ditt eget ansikte? : En rättsutredning av fenomenet deepfakes

Westberg, Frida January 2021
This thesis aims to investigate the phenomenon of deepfakes and the right to one’s own face. The phenomenon is only a couple of years old as this essay is written. A deepfake is an AI-generated video that often depicts a natural person. A deepfake can have different purposes and can violate a person’s privacy to varying degrees. As the technology for generating deepfakes becomes more common, the pros and cons become increasingly clear. In order to create an understanding of what the right to privacy really means, the essay gives an account of personality law. Personality law is a broad concept that encompasses physical and personal privacy as well as other rights linked to the person, such as copyright. Within personality law there is scope for different perspectives and theories about the right to privacy. The perspectives that this essay highlights are the right to be left alone, a moral ownership of the self, the right to classified information, control over one’s own data, and intimacy. An analysis of the right to privacy guaranteed by Article 8 ECHR then follows, together with a review of Swedish law, in order to answer the question of which legal provisions a deepfake may bring into play. In Swedish law there is no general provision guaranteeing the right to privacy between individuals. Instead, the Swedish legislator has chosen to regulate specific situations, in order to guarantee the right without restricting freedom of expression in an unjustified way. Copyright is central to the account and is followed by a review of the Law on Name and Image in Advertising, the Marketing Act, the Trademark Act and the Criminal Code. It can be concluded that Swedish law is not comprehensive; the legislation is something of a patchwork. The nature of the deepfake determines which law becomes applicable, if any. The question is whether this is compatible with Article 8 ECHR. 
The European Court of Human Rights has ruled that the protection of privacy is extensive and, in any case, guarantees individuals the right not to have their image published unless there is a public interest. Swedish law cannot be said with certainty to guarantee the right to privacy in regard to the relation between individuals.
8

Deepfakes inom social engineering och brottsutredningar

Björklund, Christoffer January 2020
”Deepfake” är en förkortning av ”deep learning” och ”fake”. Deepfakes är syntetisk audiovisuell media som använder sig av maskininlärning för att generera falska videoklipp, bilder och/eller ljudklipp. Detta projekt fokuserar på deepfakes inom förfalskat videomaterial, där en persons ansikte i en video är utbytt mot en annan persons ansikte. Fokuset för den här rapporten är att undersöka hur enkelt det är att göra en egen deepfake med grundläggande kunskap. Detta är gjort med ett experiment som avser att mäta kvantitativa och kvalitativa resultat från intervjuer. Intervjuobjekten har tittat på två videor där de försökt identifiera författarens egna förfalskade videoklipp blandade med legitima videoklipp. Experimentet visar på att det är möjligt och relativt enkelt att skapa övertygande högkvalitativa deepfakes gjorda för social engineering. Det är däremot svårare, men fortfarande möjligt, att förfalska audiovisuellt material i bildbevis. Vidare undersöks vad det finns för typer av preventiva forensiska verktyg och metoder som utvecklas till att upptäcka deepfakes inom förfalskat videomaterial. I nuläget finns det många tekniker som föreslagits som metoder för att identifiera deepfakes. Denna rapport granskar även deepfakes gjorda för social engineering. Deepfakes anses bli ett av de större hoten i framtiden där de kan användas till att effektivt sprida propaganda och desinformation. Nyhetsmedia står inför stora utmaningar framöver på grund av misstro från konsumenter av audiovisuellt nyhetsmaterial. Utifrån de kvantitativa och kvalitativa resultaten, föreslår författaren att nyhetsmedia och social media kan informera om vad deepfakes är och hur sådana förfalskade videoklipp typiskt ser ut. / ’Deepfake’ is an abbreviation of ’deep learning’ and ’fake’. Deepfakes are synthetic audiovisual media that use machine learning to create fake videos, images and/or audio clips. 
This project is focused on deepfakes within forged videos, where one person’s face is swapped with another person’s face. This technique is usually referred to as ’face swapping’. However, deepfakes go beyond what usual face swaps can achieve. The focus of this project is to investigate how easy it is to forge your own deepfakes with basic technical knowledge. This is achieved through an experiment that measures results from fourteen interviews. The interviewees watched two different videos, in which each person tried to identify the writer’s own deepfaked video clips, mixed with legitimate video clips. The experiment shows that it is possible and relatively easy to create convincing deepfakes aimed at social engineering. It is also possible, but harder, to create deepfakes to forge videos within criminal investigations. This report examines the potential forensic techniques and tools that exist and are being developed to identify deepfakes. Furthermore, this report also examines deepfakes made for social engineering. Deepfakes are considered to be one of the more significant future threats and could be used to effectively spread propaganda and misinformation. The results generated from the experiment in this report lead to a proposition from the writer that news outlets and social media platforms could take an informative approach towards deepfakes: informing their consumers about what deepfakes are, how they typically look, and what consumers can do themselves to identify them.
9

Deepfakes - En risk för samhället?

Wardh, Eric, Wirstam, Victor January 2021
En deepfake kan vara allt från en bild, video eller ljudklipp, manipulerad med hjälp av AI-teknologi. Deepfakes används legitimt i exempelvis spel- och filmindustrin, men det vanligaste användningsområdet för deepfakes är att skapa manipulerade bilder, videor eller ljudklipp för att sprida felaktig information. Ett annat användningsområde är för att få det att se ut som att personer som egentligen inte medverkat i den aktuella bilden, videon eller ljudklippet faktiskt har gjort det. Denna uppsats fokuserar på att undersöka hur deepfakes används och hur de kan användas för att påverka samhället nu och inom de kommande fem åren. Detta görs med hjälp av en litteraturstudie samt semi-strukturerade intervjuer. I dagsläget används inte deepfakes i större grad för att försöka påverka samhället. Det som istället används för detta är en enklare variant av deepfakes som kallas för cheapfakes eller shallowfakes, som är snabbare, enklare och billigare att ta fram. Så länge deepfakes kommer vara svårare och dyrare att ta fram än cheapfakes och shallowfakes kommer inte deepfakes att användas i större grad än vad det gör idag för att påverka samhället. I takt med att utvecklingen går framåt kommer också användandet av deepfakes öka. / A deepfake can be anything from an image to a video or audio clip, manipulated with the help of AI technology. Deepfakes are used legitimately in, for example, the gaming and film industries, but the most common use of deepfakes is to create manipulated images, videos or audio clips in order to spread false information. Another use is to make it look as though people who did not actually appear in the image, video or audio clip in question in fact did. This thesis focuses on examining how deepfakes are used and how they can be used to influence society, now and within the next five years. This is done by means of a literature study and semi-structured interviews. At present, deepfakes are not used to any great extent to try to influence society. What is used instead is a simpler variant of deepfakes known as cheapfakes or shallowfakes, which are faster, simpler and cheaper to produce. As long as deepfakes remain harder and more expensive to produce than cheapfakes and shallowfakes, they will not be used to influence society to a greater extent than they are today. As the technology advances, the use of deepfakes will also increase.
10

Deepfakes: ett upphovsrättsligt problem : En undersökning av det upphovsrättsliga skyddet och parodiundantagets samspel med AI-assisterade skapandeprocesser / Deepfakes: A Copyright Issue : An Inquiry of the Copyright Protection and Parody Exception's Interplay with AI-assisted Creative Processes

Atala Labbé, Daniel Antonio January 2022
In the age of digitalization, several new ways of creating intellectual property have sprung up due to the resurgence of artificial intelligence (AI). This has paved the way for different kinds of technology, including AI assistance in a more normalized way. A prominent variant of this technology is the "deepfake". Deepfakes are a technology that essentially places your face, likeness, mannerisms, and voice into new situations, which the creator then steers to make the deepfake do or say things that the person it is based on has not done or said. This technology has been used in myriad ways, from humorous content to extortion and revenge porn. The aim of this master thesis is to analyse how intellectual property protection is achieved under current Swedish intellectual property law principles and how these fit within the context of heavily AI-based technology such as deepfakes. This is done through a legal-dogmatic lens, meaning a systematization and mapping of both Swedish and EU legislation and case law, together with a discussion of current thinking on AI assistance throughout the creative process. Other subjects touched upon are the parody exception in intellectual property law, the concept of adaptation, and how these work with and apply to AI-based creations. Part of the problem we face right now is that there are no existing legal parameters for resolving the question of larger AI involvement in creative processes; this is certainly going to change how we view copyright law today. When comparing and using EU as well as Swedish case law to analyse the AI problem, a common denominator is that all copyright law and case law rests on the presumption that a human needs to be involved in the majority of the creative process. AI already exists as a part of many creative processes today without any questions asked; however, when the AI part of the process is more significant, the question becomes complicated when paired with traditional copyright law perspectives. 
However, some discussions have been going on in both Swedish and EU legal spheres, mostly in the EU, which is going to legislate more in the field of AI. In Sweden there have been no legislative processes when it comes to AI in copyright law, although some governmental organisations and essays have shed light on the matter. I conclude this master thesis by presenting the findings for each question mentioned above, namely that AI becomes a significant factor in deciding whether a deepfake achieves copyright protection or not, and the same can be said about parodies. After this I make a concluding analysis of the urgency of the need for laws that tackle AI in the area of intellectual property, listing other areas that might need them even more, and argue that Sweden needs to take part in every discussion on this in order to form a sustainable legal framework for AI in the context of intellectual property law. This will also open up a clear framework for assessing different technologies that use AI, such as deepfakes.
