  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Defending Against Misuse of Synthetic Media: Characterizing Real-world Challenges and Building Robust Defenses

Pu, Jiameng 07 October 2022 (has links)
Recent advances in deep generative models have enabled the generation of realistic synthetic media or deepfakes, including synthetic images, videos, and text. However, synthetic media can be misused for malicious purposes and damage users' trust in online content. This dissertation aims to address several key challenges in defending against the misuse of synthetic media. Key contributions of this dissertation include the following: (1) Understanding challenges with the real-world applicability of existing synthetic media defenses. We curate synthetic videos and text from the wild, i.e., the Internet community, and assess the effectiveness of state-of-the-art defenses on synthetic content in the wild. In addition, we propose practical low-cost adversarial attacks, and systematically measure the adversarial robustness of existing defenses. Our findings reveal that most defenses show significant degradation in performance under real-world detection scenarios, which leads to the second thread of my work: (2) Building detection schemes with improved generalization performance and robustness for synthetic content. Most existing synthetic image detection schemes are highly content-specific, e.g., designed for only human faces, thus limiting their applicability. I propose an unsupervised content-agnostic detection scheme called NoiseScope, which does not require a priori access to synthetic images and is applicable to a wide variety of generative models, i.e., GANs. NoiseScope is also resilient against a range of countermeasures conducted by a knowledgeable attacker. For the text modality, our study reveals that state-of-the-art defenses that mine sequential patterns in the text using Transformer models are vulnerable to simple evasion schemes. We conduct further exploration towards enhancing the robustness of synthetic text detection by leveraging semantic features. 
/ Doctor of Philosophy / Recent advances in deep generative models have enabled the generation of realistic synthetic media or deepfakes, including synthetic images, videos, and text. However, synthetic media can be misused for malicious purposes and damage users' trust in online content. This dissertation aims to address several key challenges in defending against the misuse of synthetic media. Key contributions of this dissertation include the following: (1) Understanding challenges with the real-world applicability of existing synthetic media defenses. We curate synthetic videos and text from the Internet community, and assess the effectiveness of state-of-the-art defenses on the collected datasets. In addition, we systematically measure the robustness of existing defenses by designing practical low-cost attacks, such as changing the configuration of generative models. Our findings reveal that most defenses show significant degradation in performance under real-world detection scenarios, which leads to the second thread of my work: (2) Building detection schemes with improved generalization performance and robustness for synthetic content. Many existing synthetic image detection schemes make decisions by looking for anomalous patterns in a specific type of high-level content, e.g., human faces, thus limiting their applicability. I propose a blind content-agnostic detection scheme called NoiseScope, which does not require synthetic images for training, and is applicable to a wide variety of generative models. For the text modality, our study reveals that state-of-the-art defenses that mine sequential patterns in the text using Transformer models are not robust against simple attacks. We conduct further exploration towards enhancing the robustness of synthetic text detection by leveraging semantic features.
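The fingerprint-based idea behind NoiseScope — grouping images whose low-level noise residuals share a common pattern, without ever training on labelled fakes — can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (a median-filter denoiser and greedy correlation grouping stand in for the dissertation's actual pipeline), and all function names are hypothetical:

```python
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(img):
    # High-frequency residual: image minus a denoised (median-filtered)
    # copy. Generative pipelines tend to leave a model-specific pattern here.
    return img - median_filter(img, size=3)

def flag_shared_fingerprints(images, corr_threshold=0.3, min_cluster=3):
    # Standardize each flattened residual so the dot product below
    # acts as a Pearson correlation.
    res = []
    for im in images:
        r = noise_residual(im).ravel()
        res.append((r - r.mean()) / (r.std() + 1e-8))
    flagged = set()
    n = len(res)
    for i in range(n):
        # Group every image whose residual correlates strongly with image i.
        group = [j for j in range(n)
                 if j == i or float(res[i] @ res[j]) / res[i].size > corr_threshold]
        # A sizable group sharing one residual pattern is unlikely for
        # independent camera images; flag it as the likely output of a
        # common generative model.
        if len(group) >= min_cluster:
            flagged.update(group)
    return flagged
```

On a mixed batch, images produced by the same model become mutually correlated through the shared fingerprint and are flagged as a group, while real images, whose residuals are independent, are not — which is what makes the approach unsupervised and content-agnostic.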
2

Generativní adversariální sítě zdivočely: Veřejné vnímání Deepfake technologií na YouTube / GANs gone wild: Public perceptions of Deepfake technologies on YouTube

Poon, Jessica January 2021 (has links)
Deepfake technologies are a form of artificial intelligence (AI) based on generative adversarial networks (GANs), a development that has emerged out of deep learning (DL) and machine learning (ML) models. Using data spanning 2018–2021, this research explores public perceptions of deepfake technologies at scale by closely examining commentary found on the social video-sharing platform YouTube. This open-source, ground-level data documents civilian responses to a selection of user-produced, labelled deepfake content. This research fills a gap regarding public perception of this emerging technology at scale. It gauges an underrepresented set of responses in discourse, finding that users demonstrate a spectrum of responses veering between irony and concern, with greater volumes of commentary skewed towards the former. This study of user commentary also finds that YouTube as a wild space ultimately affords reflexive and critical thinking around the subject of deepfake technologies and could prove effective as a form of inoculation against disinformation.
3

Jag tror det är jag ser det... eller? : AI-genererade deepfakes och dess användning inom desinformation / I believe it when I see it... or do I? : AI-generated deepfakes and their use in disinformation

Lundberg, Erik, Knutsson, Henrik January 2024 (has links)
The development of artificial intelligence (AI) has taken the world by storm, but has at the same time proven to be one of the greatest and most difficult challenges facing our society. One of these challenges is the use of AI-generated deepfakes as a disinformation tool. A review of previous research found that the majority of it addresses detection and preventive measures. The consequences of deepfakes are examined to varying extents, but none has studied the consequences for Swedish society specifically in any depth, a gap this thesis aims to fill. The purpose of this thesis is to examine the impact of deepfakes on the spread of disinformation, the consequences this technology may bring for Swedish society and how these can be managed, and to explore how deepfakes may look and be used going forward. To answer the research questions, a literature review was conducted in combination with semi-structured interviews. The results show that deepfakes, owing to their easy accessibility, have a major impact on how disinformation is created and spread. Disinformation using deepfakes can bring several serious consequences, including increased social divisions and reduced trust in digital information. To manage these consequences, a combination of filtering, legislation and source criticism should be applied. The technology is under constant development, and the use of deepfakes as a disinformation tool will very likely be exploited to an even greater extent in the future, which demonstrates the need for continued research in the area.
4

Manipulation i rörligt format - En studie kring deepfake video och dess påverkan

Weidenstolpe, Louise, Jönsson, Jade January 2020 (has links)
Med deepfake-teknologi kan det skapas manipulerade videor där det produceras falska bilder och ljud som framställs vara verkliga. Deepfake-teknologin förbättras ständigt och det kommer att bli svårare att upptäcka manipulerade videor online. Detta kan innebära att en stor del mediekonsumenter omedvetet exponeras för tekniken när de använder sociala medier. Studiens syfte är att undersöka unga vuxnas medvetenhet, synsätt och påverkan av deepfake videor. Detta eftersom deepfake-teknologin förbättras årligen och problemen med tekniken växer samt kan få negativa konsekvenser i framtiden om den utnyttjas på fel sätt. Insamlingen av det empiriska materialet skedde genom en kvantitativ metod i form av en webbenkät och en kvalitativ metod med tre fokusgrupper. Slutsatsen visade på att det finns ett större antal unga vuxna som inte är medvetna om vad en deepfake video är, dock existerar det en viss oro för deepfake-teknologin och dess utveckling. Det upplevs att det finns risker för framtiden med teknologin i form av hot mot demokratin och politik, spridning av Fake news, video-manipulation samt brist på källkritik. De positiva aspekterna är att tekniken kan användas i sammanhang av humor, inom film- och TV-industrin samt sjukvård. Ytterligare en slutsats är att unga vuxna kommer att vara mer källkritiska till innehåll de exponeras av framöver, dock kommer de med stor sannolikhet ändå att påverkas av deepfake-teknologin i framtiden. / Manipulated videos can be created with deepfake technology, in which fake images and sounds are produced that appear to be real. Deepfake technology is constantly improving, and it will become harder to detect manipulated videos online. This may result in a large number of media consumers being unknowingly exposed to deepfake technology while using social media. The purpose of this study is to examine young adults' awareness of, attitudes towards, and the impact on them of deepfake videos.
Deepfake technology improves annually and more problems occur, which can cause negative consequences in the future if it is misused. The study is based on a quantitative method in the form of a web survey and a qualitative method with three focus groups. The conclusion shows that a large number of young adults are not aware of what a deepfake video is; however, there is some concern about deepfake technology and its development. Perceived future risks of the technology include threats to democracy and politics, the spread of fake news, video manipulation and a lack of source criticism. On the positive side, the technology can be used for entertainment purposes, in the film and television industry, and in healthcare. Another conclusion is that young adults will be more critical of the content they are exposed to in the future, but will most likely be affected by deepfake technology either way.
5

NoiseLearner: An Unsupervised, Content-agnostic Approach to Detect Deepfake Images

Vives, Cristian 21 March 2022 (has links)
Recent advancements in generative models have resulted in the improvement of hyper-realistic synthetic images or "deepfakes" at high resolutions, making them almost indistinguishable from real images from cameras. While exciting, this technology introduces room for abuse. Deepfakes have already been misused to produce pornography, political propaganda, and misinformation. The ability to produce fully synthetic content that can cause such misinformation demands robust deepfake detection frameworks. Most deepfake detection methods are trained in a supervised manner, and fail to generalize to deepfakes produced by newer and superior generative models. More importantly, such detection methods are usually focused on detecting deepfakes having a specific type of content, e.g., face deepfakes. However, other types of deepfakes are starting to emerge, e.g., deepfakes of biomedical images, satellite imagery, people, and objects shown in different settings. Taking these challenges into account, we propose NoiseLearner, an unsupervised and content-agnostic deepfake image detection method. NoiseLearner aims to detect any deepfake image regardless of the generative model of origin or the content of the image. We perform a comprehensive evaluation by testing on multiple deepfake datasets composed of different generative models and different content groups, such as faces, satellite images, landscapes, and animals. Furthermore, we include more recent state-of-the-art generative models in our evaluation, such as StyleGAN3 and denoising diffusion probabilistic models (DDPM). We observe that NoiseLearner performs well on multiple datasets, achieving 96% accuracy on both StyleGAN and StyleGAN2 datasets. / Master of Science / Images synthesized by artificial intelligence, commonly known as deepfakes, are starting to become indistinguishable from real images.
While these technological advances are exciting with regard to what a computer can do, it is important to understand that such technology is currently being used with ill intent. Thus, identifying these images is becoming a growing necessity, especially as deepfake technology grows to perfectly mimic the nature of real images. Current deepfake detection approaches fail to detect deepfakes of other content, such as satellite imagery or X-rays, and cannot generalize to deepfakes synthesized by new artificial intelligence. Taking these concerns into account, we propose NoiseLearner, a deepfake detection method that can detect any deepfake regardless of the content and artificial intelligence model used to synthesize it. The key idea behind NoiseLearner is that it does not require any deepfakes to train. Instead, NoiseLearner learns the key features of real images and uses them to differentiate between deepfakes and real images – without ever looking at a single deepfake. Even with this strong constraint, NoiseLearner shows promise by detecting deepfakes of diverse contents and models used to generate them. We also explore different ways to improve NoiseLearner.
6

Použitelnost Deepfakes v oblasti kybernetické bezpečnosti / Applicability of Deepfakes in the Field of Cyber Security

Firc, Anton January 2021 (has links)
Deepfake technology has been on the rise recently. Many techniques and tools for creating deepfake media are emerging, and they are beginning to be used for both illicit and beneficial activities. Illicit use drives research into deepfake detection techniques and their continuous improvement, as well as the need to educate the general public about the pitfalls this technology brings. One of the little-explored areas of malicious use is the use of deepfakes to deceive voice authentication systems. Opinions on the feasibility of such attacks differ, but there is little scientific evidence. The aim of this thesis is to examine the current readiness of voice biometrics systems to face deepfake recordings. The experiments performed show that voice biometrics systems are vulnerable to deepfake recordings. Even though almost all publicly available tools and models are intended for synthesizing English, this thesis shows that synthesizing speech in any language is not very difficult. Finally, I propose a countermeasure to reduce the risk that deepfake recordings pose to voice biometrics systems: using text-dependent voice verification, which I have shown to be more resistant to deepfake recordings.
7

Manipulation i rörligt format - En studie kring deepfake video och dess påverkan

Jönsson, Jade, Weidenstolpe, Louise January 2020 (has links)
Med deepfake-teknologi kan det skapas manipulerade videor där det produceras falska bilder och ljud som framställs vara verkliga. Deepfake-teknologin förbättras ständigt och det kommer att bli svårare att upptäcka manipulerade videor online. Detta kan innebära att en stor del mediekonsumenter omedvetet exponeras för tekniken när de använder sociala medier. Studiens syfte är att undersöka unga vuxnas medvetenhet, synsätt och påverkan av deepfake videor. Detta eftersom deepfake-teknologin förbättras årligen och problemen med tekniken växer samt kan få negativa konsekvenser i framtiden om den utnyttjas på fel sätt. Insamlingen av det empiriska materialet skedde genom en kvantitativ metod i form av en webbenkät och en kvalitativ metod med tre fokusgrupper. Slutsatsen visade på att det finns ett större antal unga vuxna som inte är medvetna om vad en deepfake video är, dock existerar det en viss oro för deepfake-teknologin och dess utveckling. Det upplevs att det finns risker för framtiden med teknologin i form av hot mot demokratin och politik, spridning av Fake news, video-manipulation samt brist på källkritik. De positiva aspekterna är att tekniken kan användas i sammanhang av humor, inom film- och TV-industrin samt sjukvård. Ytterligare en slutsats är att unga vuxna kommer att vara mer källkritiska till innehåll de exponeras av framöver, dock kommer de med stor sannolikhet ändå att påverkas av deepfake-teknologin i framtiden. / Manipulated videos can be created with deepfake technology, in which fake images and sounds are produced that appear to be real. Deepfake technology is constantly improving, and it will become harder to detect manipulated videos online. This may result in a large number of media consumers being unknowingly exposed to deepfake technology while using social media. The purpose of this study is to examine young adults' awareness of, attitudes towards, and the impact on them of deepfake videos.
Deepfake technology improves annually and more problems occur, which can cause negative consequences in the future if it is misused. The study is based on a quantitative method in the form of a web survey and a qualitative method with three focus groups. The conclusion shows that a large number of young adults are not aware of what a deepfake video is; however, there is some concern about deepfake technology and its development. Perceived future risks of the technology include threats to democracy and politics, the spread of fake news, video manipulation and a lack of source criticism. On the positive side, the technology can be used for entertainment purposes, in the film and television industry, and in healthcare. Another conclusion is that young adults will be more critical of the content they are exposed to in the future, but will most likely be affected by deepfake technology either way.
8

Could you imagine that face on that body? : A study of deepfakes and performers’ rights in EU law

Tyni, Emil January 2023 (has links)
The natural desire to express the human experience through song, dance, speech and movement has characterised culture and society throughout history. From frantic dances around fires, to comedies and dramas at the ancient theatres, to sold-out arena concerts, all driven by the same fundamental spirit of creation and expression. The unification of intellectual creation and physical action that constitutes a performance was for a long time only a transient activity, but that all changed with the introduction of recording technology. To protect the economic and moral interests of performers, a new form of intellectual property right, known as performers' rights, was introduced. This new form of IP right was based on a rationale similar to that for copyright protection of artistic and literary works, and it allowed performing artists to exercise a certain degree of control over fixations of their work. The modern development of image manipulation has, however, come to challenge the integrity of performers' rights. Deepfakes are a form of AI-assisted technology that has made the synthetic manipulation of images, sound and video possible in ways previously unseen, in terms of quality, quantity and accessibility. The manipulation of videos by changing the faces or voices of individuals raises a number of questions in a variety of legal areas. In order to bring clarity to the relation between deepfakes and performers' rights, this thesis investigated the conditions under which a deepfake can constitute an infringement of, and a source of, performers' rights under EU law. By primarily relying on a legal dogmatic method to interpret and systematise the existing EU legislation, case law and international treaties in the field of intellectual property, the synthetic manipulation of recorded performances was studied in relation to the applicable law.
It was concluded that deepfakes may infringe on the rights of performers if the manipulated content constitutes a reproduction of a fixation of a performance. Furthermore, it was also established that a deepfake in itself cannot constitute a source of performers' rights, due to its synthetic nature.
9

Consumer perception of Deepfake Technology in Marketing : An abductive study on consumer attitude, trust and brand authenticity.

Huang, Qirong, Maracic, Julian January 2024 (has links)
Background: Within marketing, artificial intelligence has a tremendous influence on social media. Deepfake technology, a product of generative artificial intelligence (GAI), is a tool that simplifies the creation of hyper-realistic videos. To date, this technology has largely been used for identity theft, pornography, propaganda, and spreading misinformation. This study therefore examines the possibility of using the technology in marketing, focusing on consumer perception as connected to the concepts of consumer attitude, consumer trust, and brand authenticity.  Purpose: The purpose of this study is to explore how consumer attitude, consumer trust, and brand authenticity are influenced by deepfake videos, in order to understand consumer perceptions of such videos.  Methodology: This abductive master's thesis employed a qualitative approach to collect empirical data. Drawing inspiration from semi-structured interviews, questions were formulated for use in five focus groups comprising a total of 24 participants. Thematic analysis was employed to code and categorize the transcriptions. The data obtained from the focus groups were transcribed and coded using Delvetool.  Findings: Consumer perceptions of the use of deepfake technology in marketing reveal both opportunities and concerns. While deepfakes offer potential for brand expansion and streamlined content creation, their illegal use poses societal risks. Participants expressed mixed feelings about deepfake technology, finding it both impressive and daunting. For marketing, deepfakes simplify content creation but must be used legally and aligned with brand identity. Transparency and credibility are crucial to shaping consumer attitudes and trust, which in turn affect brand authenticity. Misuse of deepfake content can harm brand image and credibility, leading to negative consumer perceptions and behaviors.
Ultimately, whether perceived positively or negatively, deepfake use influences consumer behavior, either fostering brand loyalty or eroding trust and advocacy.
10

Intimt eller sexuellt deepfakematerial? : En analys av fenomenet ‘deepfake pornografi’ som digitalt sexuellt övergrepp inom det EU-rättsliga området / Intimate or sexual deepfake material? : An analysis of the phenomenon ’deepfake pornography’ as virtual sexual abuse in the legal framework of the European Union

Skoghag, Emelie January 2023 (has links)
No description available.
