1

Deepfake detection by humans : Face swap versus lip sync / Människors förmåga att upptäcka deepfakes : Face swap mot lipsync

Sundström, Isak. January 2023.
The term “deepfakes” refers to media content that has been manipulated using deep learning. This thesis project seeks to answer the question of how well humans are able to detect deepfakes. In particular, the project compares people’s ability to detect deepfakes across two categories: face swap and lip sync. To this end, a perceptual user test was performed in which 30 participants were shown a number of lip sync, face swap and unaltered videos and were asked to classify which of them were unaltered and which had been manipulated using deepfake technology. The results help fill the gap in knowledge regarding perceptual user tests on deepfakes, for which only a small amount of research has been done. They also shed light on which types of deepfakes pose the greatest threat with respect to malicious impersonation. The main conclusion of the study is that lip sync is likely harder for humans to detect than face swap: the percentage of correct classifications was 52.7% for lip sync videos and 91.3% for face swap videos. / Deepfakes are videos that have been manipulated using deep learning. This degree project mainly explores two categories of deepfakes: face swap and lip sync. The aim of the project is to answer the question: how good are humans at telling whether or not a video contains deepfakes? In addition, it asks which type of deepfake, face swap or lip sync, is harder for humans to notice. To answer these questions, a user study was conducted in which 30 participants watched a number of lip sync, face swap and unmanipulated videos and then tried to judge which of them were manipulated and which were not. The results of this study help fill the knowledge gap concerning humans’ ability to detect deepfakes, where only a very limited number of studies exist. The results can also be used to point out which types of deepfakes pose a greater threat of malicious deception. The conclusion of the study is that lip sync is likely harder for humans to notice than face swap, at least for the FakeAVCeleb dataset. The share of correct guesses for the lip sync videos in the study was 52.7%, while the share of correct guesses for face swap was 91.3%.
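As an illustration of how per-category classification rates such as the 52.7% and 91.3% above can be tallied from a perceptual study, the following Python sketch assumes a hypothetical list of (participant, video) responses; the data layout and field names are invented for illustration and are not taken from the thesis.

    # Minimal sketch (assumed data layout, not the thesis implementation):
    # tally per-category detection accuracy from perceptual-study responses.
    from collections import defaultdict

    # Hypothetical responses: one record per (participant, video) pair, with
    # the video's true category and the participant's judgement.
    responses = [
        {"category": "lip_sync",  "is_fake": True,  "judged_fake": False},
        {"category": "face_swap", "is_fake": True,  "judged_fake": True},
        {"category": "unaltered", "is_fake": False, "judged_fake": False},
        # ... one entry per participant and video in the study
    ]

    correct = defaultdict(int)
    total = defaultdict(int)
    for r in responses:
        total[r["category"]] += 1
        if r["judged_fake"] == r["is_fake"]:
            correct[r["category"]] += 1

    for category, n in total.items():
        print(f"{category}: {100.0 * correct[category] / n:.1f}% correct")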
2

Facial Identity Embeddings for Deepfake Detection in Videos

Emir, Alkazhami. January 2020.
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years. Methods for automated detection of this type of manipulation are also progressing rapidly. The purpose of this thesis work is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a basis for detecting such deepfakes. In addition, the thesis aims to answer whether the identity embeddings contain information that can be used for detection when analyzed over time, and whether it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created, each intended to answer one of the questions. Their performances are compared with each other, and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only operate on a frame-by-frame basis. Including information about head poses in the videos is shown not to improve such a classifier.
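To make the embedding-based approach more concrete, the sketch below shows one way per-frame identity embeddings could be summarized over time and fed to a simple classifier. It is an assumption-laden illustration rather than the method of the thesis: the random unit vectors stand in for embeddings from a face-recognition network, the summary statistics (frame-to-frame cosine distances and per-dimension spread) are invented features, and logistic regression is only a placeholder classifier.

    # Minimal sketch (assumptions only, not the thesis pipeline): summarize a
    # video's per-frame identity embeddings over time and train a classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def temporal_features(frame_embeddings):
        """Collapse an (n_frames, d) array of L2-normalized identity embeddings
        into a fixed-size feature vector describing identity drift over time."""
        # Cosine distance between consecutive frames; frame-by-frame face swaps
        # may leave the identity less stable across frames than real footage.
        diffs = 1.0 - np.sum(frame_embeddings[1:] * frame_embeddings[:-1], axis=1)
        return np.array([diffs.mean(), diffs.std(), diffs.max(),
                         frame_embeddings.std(axis=0).mean()])

    # Placeholder data: random unit vectors stand in for embeddings produced by
    # a face-recognition network run on detected, aligned faces in each frame.
    rng = np.random.default_rng(0)
    sequences = [rng.normal(size=(100, 512)) for _ in range(40)]
    sequences = [e / np.linalg.norm(e, axis=1, keepdims=True) for e in sequences]
    labels = rng.integers(0, 2, size=40)  # 1 = deepfake, 0 = unaltered (random placeholders)

    X = np.stack([temporal_features(seq) for seq in sequences])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))

In practice the embeddings would come from a pretrained face-recognition model applied to detected, aligned faces, and the thesis builds three separate classifiers to answer its three questions rather than the single model shown here.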
