About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Defending Against Misuse of Synthetic Media: Characterizing Real-world Challenges and Building Robust Defenses

Pu, Jiameng 07 October 2022
Recent advances in deep generative models have enabled the generation of realistic synthetic media or deepfakes, including synthetic images, videos, and text. However, synthetic media can be misused for malicious purposes and damage users' trust in online content. This dissertation aims to address several key challenges in defending against the misuse of synthetic media. Key contributions of this dissertation include the following: (1) Understanding challenges with the real-world applicability of existing synthetic media defenses. We curate synthetic videos and text from the wild, i.e., the Internet community, and assess the effectiveness of state-of-the-art defenses on synthetic content in the wild. In addition, we propose practical low-cost adversarial attacks and systematically measure the adversarial robustness of existing defenses. Our findings reveal that most defenses show significant degradation in performance under real-world detection scenarios, which leads to the second thread of my work: (2) Building detection schemes with improved generalization performance and robustness for synthetic content. Most existing synthetic image detection schemes are highly content-specific, e.g., designed for only human faces, thus limiting their applicability. I propose an unsupervised content-agnostic detection scheme called NoiseScope, which does not require a priori access to synthetic images and is applicable to a wide variety of GAN-based generative models. NoiseScope is also resilient against a range of countermeasures conducted by a knowledgeable attacker. For the text modality, our study reveals that state-of-the-art defenses that mine sequential patterns in the text using Transformer models are vulnerable to simple evasion schemes. We conduct further exploration towards enhancing the robustness of synthetic text detection by leveraging semantic features. / Doctor of Philosophy / Recent advances in deep generative models have enabled the generation of realistic synthetic media or deepfakes, including synthetic images, videos, and text. However, synthetic media can be misused for malicious purposes and damage users' trust in online content. This dissertation aims to address several key challenges in defending against the misuse of synthetic media. Key contributions of this dissertation include the following: (1) Understanding challenges with the real-world applicability of existing synthetic media defenses. We curate synthetic videos and text from the Internet community, and assess the effectiveness of state-of-the-art defenses on the collected datasets. In addition, we systematically measure the robustness of existing defenses by designing practical low-cost attacks, such as changing the configuration of generative models. Our findings reveal that most defenses show significant degradation in performance under real-world detection scenarios, which leads to the second thread of my work: (2) Building detection schemes with improved generalization performance and robustness for synthetic content. Many existing synthetic image detection schemes make decisions by looking for anomalous patterns in a specific type of high-level content, e.g., human faces, thus limiting their applicability. I propose a blind content-agnostic detection scheme called NoiseScope, which does not require synthetic images for training and is applicable to a wide variety of generative models.
For the text modality, our study reveals that state-of-the-art defenses that mine sequential patterns in the text using Transformer models are not robust against simple attacks. We conduct further exploration towards enhancing the robustness of synthetic text detection by leveraging semantic features.
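
As a loose illustration of the semantic-feature direction mentioned above, the sketch below classifies text from sentence-level embeddings rather than token-sequence patterns. The embedding model, classifier, and toy training texts are illustrative assumptions, not the dissertation's actual pipeline.

    # Sketch: synthetic-text detection from semantic features instead of the
    # token-level sequential patterns that simple evasions can disrupt.
    # Encoder, classifier, and training texts are illustrative stand-ins.
    from sentence_transformers import SentenceTransformer
    from sklearn.linear_model import LogisticRegression

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

    human_texts = ["The committee met twice before reaching a decision.",
                   "She revised the draft after reading the reviews."]
    synthetic_texts = ["The decision was reached by the committee meeting.",
                       "The draft was revised by her after reviews were read."]

    X = encoder.encode(human_texts + synthetic_texts)  # semantic feature vectors
    y = [0] * len(human_texts) + [1] * len(synthetic_texts)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Word-level perturbations that fool sequence-pattern detectors tend to
    # leave sentence-level semantics (and hence these features) intact.
    prob_synthetic = clf.predict_proba(encoder.encode(["Text to check."]))[0, 1]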
2

Generativní adversariální sítě zdivočely: Veřejné vnímání Deepfake technologií na YouTube / GANs gone wild: Public perceptions of Deepfake technologies on YouTube

Poon, Jessica January 2021
Deepfake technologies are a form of artificial intelligence (AI) based on generative adversarial networks (GANs), a development that has emerged out of deep learning (DL) and machine learning (ML) models. Using data spanning the years 2018–2021, this research explores public perceptions of deepfake technologies at scale by closely examining commentary on the social video-sharing platform YouTube. This open, ground-level data documents civilian responses to a selection of user-produced, labelled deepfake content. The research fills a gap regarding public perception of this emerging technology at scale. It gauges an underrepresented set of responses in discourse and finds that users demonstrate a spectrum of reactions veering between irony and concern, with the greater volume of commentary skewed towards the former. The study of user commentary also finds that YouTube as a wild space ultimately affords reflexive and critical thinking around the subject of deepfake technologies and could prove effective as a form of inoculation against disinformation.
3

Manipulation i rörligt format - En studie kring deepfake video och dess påverkan / Manipulation in Moving Images: A Study of Deepfake Video and Its Impact

Weidenstolpe, Louise, Jönsson, Jade January 2020
Manipulated videos can be created with deepfake technology, in which fabricated images and sounds are produced that appear to be real. Deepfake technology is constantly improving, and it will become harder to detect manipulated videos online. This may result in a large number of media consumers being unknowingly exposed to the technology while using social media. The purpose of this study is to investigate young adults' awareness of deepfake videos, their attitudes toward them, and how they are affected by them. This matters because deepfake technology improves every year and its problems grow, which can have negative consequences in the future if it is misused. The empirical material was collected through a quantitative method in the form of a web survey and a qualitative method with three focus groups. The conclusion shows that a large share of young adults are not aware of what a deepfake video is, although there is some concern about deepfake technology and its development. Participants perceive future risks with the technology in the form of threats to democracy and politics, the spread of fake news, video manipulation, and a lack of source criticism. On the positive side, the technology can be used for humour, in the film and television industry, and in healthcare. A further conclusion is that young adults will be more critical of the content they are exposed to going forward, but will in all likelihood still be affected by deepfake technology in the future.
4

NoiseLearner: An Unsupervised, Content-agnostic Approach to Detect Deepfake Images

Vives, Cristian 21 March 2022
Recent advancements in generative models have led to hyper-realistic synthetic images, or "deepfakes", at high resolutions, making them almost indistinguishable from real images from cameras. While exciting, this technology introduces room for abuse. Deepfakes have already been misused to produce pornography, political propaganda, and misinformation. The ability to produce fully synthetic content that can cause such misinformation demands robust deepfake detection frameworks. Most deepfake detection methods are trained in a supervised manner and fail to generalize to deepfakes produced by newer and superior generative models. More importantly, such detection methods are usually focused on detecting deepfakes with a specific type of content, e.g., face deepfakes. However, other types of deepfakes are starting to emerge, e.g., deepfakes of biomedical images, satellite imagery, people, and objects shown in different settings. Taking these challenges into account, we propose NoiseLearner, an unsupervised and content-agnostic deepfake image detection method. NoiseLearner aims to detect any deepfake image regardless of the generative model of origin or the content of the image. We perform a comprehensive evaluation by testing on multiple deepfake datasets composed of different generative models and different content groups, such as faces, satellite images, landscapes, and animals. Furthermore, we include more recent state-of-the-art generative models in our evaluation, such as StyleGAN3 and denoising diffusion probabilistic models (DDPMs). We observe that NoiseLearner performs well on multiple datasets, achieving 96% accuracy on both the StyleGAN and StyleGAN2 datasets. / Master of Science / Images synthesized by artificial intelligence, commonly known as deepfakes, are starting to become indistinguishable from real images. While these technological advances are exciting with regard to what a computer can do, it is important to understand that such technology is currently being used with ill intent. Thus, identifying these images is becoming a growing necessity, especially as deepfake technology grows to perfectly mimic the nature of real images. Current deepfake detection approaches fail to detect deepfakes of other content, such as satellite imagery or X-rays, and cannot generalize to deepfakes synthesized by new artificial intelligence. Taking these concerns into account, we propose NoiseLearner, a deepfake detection method that can detect any deepfake regardless of the content and the artificial intelligence model used to synthesize it. The key idea behind NoiseLearner is that it does not require any deepfakes to train. Instead, NoiseLearner learns the key features of real images and uses them to differentiate between deepfakes and real images – without ever looking at a single deepfake. Even with this strong constraint, NoiseLearner shows promise by detecting deepfakes of diverse content produced by diverse models. We also explore different ways to improve NoiseLearner.
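
The abstract does not detail NoiseLearner's internals; the sketch below only illustrates the blind, one-class flavor of detection it describes: learn the noise statistics of real images and flag outliers. The median-filter residual, the summary statistics, and the one-class SVM are assumptions for illustration, not the thesis's actual design.

    # Sketch: train only on real images, flag images whose high-frequency
    # noise statistics deviate. Residual filter, features, and one-class SVM
    # are illustrative assumptions, not NoiseLearner's actual pipeline.
    import numpy as np
    from scipy.ndimage import median_filter
    from sklearn.svm import OneClassSVM

    def noise_features(image: np.ndarray) -> np.ndarray:
        """Summary statistics of the content-independent noise residual."""
        residual = image - median_filter(image, size=3)
        return np.array([residual.std(), np.abs(residual).mean(),
                         (residual ** 2).mean(), np.abs(residual).max()])

    # Fit on real images only -- no deepfakes needed at training time.
    real_images = [np.random.rand(64, 64) for _ in range(200)]  # stand-in data
    X_real = np.stack([noise_features(im) for im in real_images])
    detector = OneClassSVM(nu=0.05).fit(X_real)

    # +1: noise looks like a real image's; -1: flagged as potential deepfake.
    verdict = detector.predict(noise_features(np.random.rand(64, 64)).reshape(1, -1))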
5

Použitelnost Deepfakes v oblasti kybernetické bezpečnosti / Applicability of Deepfakes in the Field of Cyber Security

Firc, Anton January 2021
Deepfake technology has been on the rise recently. Many techniques and tools for creating deepfake media are emerging, and they are beginning to be used for both illicit and beneficial activities. Illicit use drives research into techniques for detecting deepfake media and their continuous improvement, as well as the need to educate the general public about the pitfalls this technology brings. One of the little-explored areas of malicious use is the use of deepfakes to deceive voice authentication systems. Opinions on the feasibility of such attacks differ, but there is little scientific evidence. The aim of this thesis is to examine how prepared current voice biometric systems are to face deepfake recordings. The experiments performed show that voice biometric systems are vulnerable to deepfake recordings. Even though almost all publicly available tools and models are intended for synthesizing English, this thesis shows that synthesizing speech in any language is not particularly difficult. Finally, I propose a way to reduce the risk that deepfake recordings pose to voice biometric systems: using text-dependent voice verification, which I have shown to be more resilient against deepfake recordings.
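
As a rough illustration of the accept/reject decision such attacks target, the sketch below compares speaker embeddings by cosine similarity; the embedding size, threshold, and random vectors are stand-ins, not the systems evaluated in the thesis.

    # Sketch of speaker verification: a deepfake succeeds if its embedding
    # lands within the enrolled speaker's similarity threshold. Embeddings
    # and the threshold below are illustrative stand-ins.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    enrolled = np.random.rand(192)  # embedding from the genuine speaker's enrollment audio
    attempt = np.random.rand(192)   # embedding of a (possibly deepfake) login attempt
    THRESHOLD = 0.75                # tuned on genuine/impostor trials in practice

    accepted = cosine_similarity(enrolled, attempt) >= THRESHOLD
    # Text-dependent verification adds a second requirement: the attempt must
    # also contain the expected pass-phrase, raising the bar for deepfakes.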
6

Could you imagine that face on that body? A study of deepfakes and performers’ rights in EU law

Tyni, Emil January 2023
The natural desire to express the human experience through song, dance, speech and movement has characterised culture and society throughout history: from frantic dances around fires, to comedies and dramas at the ancient theatres, to sold-out arena concerts, all driven by the same fundamental spirit of creation and expression. The unification of intellectual creation and physical action that constitutes a performance was for a long time only a transient activity, but that changed with the introduction of recording technology. To protect the economic and moral interests of performers, a new form of intellectual property right, known as performers' rights, was introduced. This new form of IP right was based on a rationale similar to that of copyright protection for artistic and literary works, and it allowed performing artists to exercise a certain degree of control over fixations of their work. The modern development of image manipulation has, however, come to challenge the integrity of performers' rights. Deepfakes are a form of AI-assisted technology that has made the synthetic manipulation of images, sound and video possible in ways previously unseen in terms of quality, quantity and accessibility. The manipulation of videos by changing the faces or voices of individuals raises a number of questions across a variety of legal areas. In order to bring clarity to the relation between deepfakes and performers' rights, this thesis investigated the conditions under which a deepfake can constitute an infringement of, and a source of, performers' rights under EU law. By relying primarily on a legal dogmatic method to interpret and systematise the existing EU legislation, case law and international treaties in the field of intellectual property, the synthetic manipulation of recorded performances was studied in relation to the applicable law. It was concluded that deepfakes may infringe the rights of performers if the manipulated content constitutes a reproduction of a fixation of a performance. Furthermore, it was established that a deepfake cannot in itself constitute a source of performers' rights, due to its synthetic nature.
7

Intimt eller sexuellt deepfakematerial? En analys av fenomenet ‘deepfake pornografi’ som digitalt sexuellt övergrepp inom det EU-rättsliga området / Intimate or sexual deepfake material? An analysis of the phenomenon ’deepfake pornography’ as virtual sexual abuse in the legal framework of the European Union

Skoghag, Emelie January 2023
No description available.
8

Cooperative edge deepfake detection

Hasanaj, Enis, Aveler, Albert, Söder, William January 2021
Deepfakes are an emerging problem on social media, and for celebrities and political figures it can be devastating to their reputation if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not, but training these machine learning models can be a very time-consuming process. This research proposes a solution for training deepfake detection models cooperatively on the edge, in order to evaluate whether the training process, among other things, can be made more efficient with this approach. The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system. To test whether the YOLOv2 object detection system is able to distinguish between real and fake human faces in images, several models are trained on a computer. Each model is trained with either a different number of iterations or a different subset of the data, since these factors have been identified as important to model performance. The performance of the models is evaluated by measuring their accuracy in detecting deepfakes. Additionally, the deepfake detection models trained on a computer are combined using the bagging ensemble method, to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models. Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble grows linearly with each model added, which can cause the ensemble model to reach several hundred gigabytes.
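
A minimal sketch of the bagging ensemble method named above; the toy classifiers and synthetic data stand in for the thesis's YOLOv2 detectors and face data.

    # Sketch of bagging: each member trains on a bootstrap sample (here, as
    # if one model per edge device), and the ensemble averages predictions.
    # The decision trees and random data are stand-ins for YOLOv2 detectors.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.random((600, 16))                  # stand-in features for face crops
    y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in real/fake labels

    members = []
    for _ in range(5):
        idx = rng.integers(0, len(X), len(X))  # bootstrap sample with replacement
        members.append(DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx]))

    # Average member probabilities and threshold; note that deploying the
    # ensemble means shipping every member, so its size grows linearly.
    probs = np.mean([m.predict_proba(X[:10])[:, 1] for m in members], axis=0)
    is_fake = probs > 0.5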
9

Facial Identity Embeddings for Deepfake Detection in Videos

Emir, Alkazhami January 2020
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years. Methods for automated detection of this type of manipulation are also progressing rapidly. The purpose of this thesis work is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a basis for detecting such deepfakes. In addition, the thesis aims to answer whether the identity embeddings contain information that can be used for detection when analyzed over time, and whether it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created, each intended to answer one question. Their performance is compared, and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only work on a frame-by-frame basis. Including information about head poses in the videos is shown not to improve such a classifier.
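
As a rough sketch of the temporal analysis described above: extract one identity embedding per frame and flag videos whose identity drifts over time. The embedding dimensionality, stand-in data, and threshold are assumptions, not the thesis's actual classifiers.

    # Sketch: frame-by-frame face swaps often leave an unstable identity
    # signal; measure drift of per-frame embeddings around the video's mean.
    # Embedding source, data, and threshold are illustrative assumptions.
    import numpy as np

    def identity_drift(frame_embeddings: np.ndarray) -> float:
        """Mean cosine distance from each frame's embedding to the mean identity."""
        mean_id = frame_embeddings.mean(axis=0)
        mean_id /= np.linalg.norm(mean_id)
        normed = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
        return float((1.0 - normed @ mean_id).mean())

    video = np.random.rand(120, 512)  # stand-in per-frame face-recognition embeddings
    DRIFT_THRESHOLD = 0.1             # would be calibrated on labeled videos
    flagged = identity_drift(video) > DRIFT_THRESHOLD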
