11 |
Cooperative edge deepfake detection
Hasanaj, Enis, Aveler, Albert, Söder, William. January 2021
Deepfakes are an emerging problem on social media, and for celebrities and political figures the consequences for their reputation can be devastating if the technology ends up in the wrong hands. Creating deepfakes is becoming increasingly easy. Attempts have been made at detecting whether a face in an image is real or not, but training these machine learning models can be very time-consuming. This research proposes a solution for training deepfake detection models cooperatively on the edge, in order to evaluate whether the training process, among other things, can be made more efficient with this approach. The feasibility of edge training is evaluated by training machine learning models on several different types of iPhone devices. The models are trained using the YOLOv2 object detection system. To test whether YOLOv2 is able to distinguish between real and fake human faces in images, several models are trained on a computer. Each model is trained with either a different number of iterations or a different subset of the data, since these factors have been identified as important to model performance. The performance of the models is evaluated by measuring their accuracy in detecting deepfakes. Additionally, the deepfake detection models trained on a computer are combined using the bagging ensemble method, in order to evaluate the feasibility of cooperatively training a deepfake detection model by combining several models. Results show that the proposed solution is not feasible due to the time the training process takes on each mobile device. Additionally, each trained model is about 200 MB, and the size of the ensemble grows linearly with each model added, which can cause the ensemble model to grow to several hundred gigabytes.
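The bagging step and the linear size growth described in this abstract can be sketched as simple prediction averaging. This is a minimal illustration, not the thesis's implementation: the function names and the per-model confidence values are hypothetical, and only the ~200 MB per-model figure comes from the abstract.

```python
# A minimal sketch of a bagging ensemble of deepfake detectors: each
# independently trained model votes with a confidence that a face is
# fake, and the ensemble averages the votes. Names and numbers are
# hypothetical, not the thesis implementation.

def bagged_confidence(confidences):
    """Average the per-model 'fake' confidences (bootstrap aggregation)."""
    return sum(confidences) / len(confidences)

def ensemble_size_mb(n_models, model_size_mb=200):
    """Ensemble size grows linearly: roughly 200 MB per member model."""
    return n_models * model_size_mb

votes = [0.91, 0.40, 0.75]               # hypothetical per-model outputs
is_fake = bagged_confidence(votes) > 0.5  # simple decision threshold
print(bagged_confidence(votes), is_fake)
print(ensemble_size_mb(10))               # ten 200 MB models -> 2000 MB
```

Averaging many such models is what makes the combined model grow linearly with ensemble size, which is the scalability problem the abstract identifies.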
|
12 |
Facial Identity Embeddings for Deepfake Detection in Videos
Emir, Alkazhami. January 2020
Forged videos of swapped faces, so-called deepfakes, have gained a lot of attention in recent years, and methods for automated detection of this type of manipulation are seeing rapid progress. The purpose of this thesis is to evaluate the possibility and effectiveness of using deep embeddings from facial recognition networks as a basis for detecting such deepfakes. In addition, the thesis aims to answer whether the identity embeddings contain information that can be used for detection when analyzed over time, and whether it is suitable to include information about the person's head pose in this analysis. To answer these questions, three classifiers are created, each intended to answer one question. Their performances are compared, and it is shown that identity embeddings are suitable as a basis for deepfake detection. Temporal analysis of the embeddings also seems effective, at least for deepfake methods that only work on a frame-by-frame basis. Including information about head poses in the videos is shown not to improve such a classifier.
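The temporal analysis of identity embeddings can be sketched as a consistency score over consecutive frames. The sketch below assumes embeddings are compared with cosine similarity, a common choice for face-recognition embeddings but not necessarily the thesis's exact method; the function names and example vectors are hypothetical.

```python
import math

# Hedged sketch: measure how stable a facial identity embedding is
# across consecutive video frames. Frame-by-frame face swaps tend to
# produce jittery identities, lowering this consistency score.

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def temporal_consistency(frame_embeddings):
    """Mean cosine similarity between consecutive frame embeddings."""
    pairs = zip(frame_embeddings, frame_embeddings[1:])
    sims = [cosine_similarity(a, b) for a, b in pairs]
    return sum(sims) / len(sims)

stable = [[1.0, 0.0], [0.99, 0.05], [1.0, 0.01]]   # consistent identity
jittery = [[1.0, 0.0], [0.2, 0.9], [0.8, -0.4]]    # swap-like jitter
print(temporal_consistency(stable) > temporal_consistency(jittery))
```

A detector built on this idea would threshold or classify the consistency score, which is why it only helps against methods that manipulate each frame independently.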
|
13 |
The cybersecurity threat of deepfake
Brandqvist, Johan. January 2024
The rapid advancement of deepfake technology, which uses artificial intelligence (AI) to create convincing but manipulated audio and video content, presents significant challenges to cybersecurity, privacy, and information integrity. This study explores the complex cybersecurity threats posed by deepfakes and evaluates effective strategies to prepare organizations and individuals for these risks. Employing a qualitative research approach, semi-structured interviews with cybersecurity and AI experts were conducted to gain insights into the current threat landscape, the technological evolution of deepfakes, and strategies for their detection and prevention. The findings reveal that while deepfakes offer opportunities in various sectors, they predominantly pose threats such as misinformation, identity theft, and fraud. The study highlights the dual-use nature of deepfake technology, where improvements in creation and detection continually evolve in a technological arms race. Ethical and societal implications are examined, emphasizing the need for enhanced public awareness and comprehensive regulatory frameworks to manage these challenges. The conclusions drawn from this research underscore the urgency of developing robust, AI-driven detection tools and advocate a balanced approach that considers both technological advancements and the ethical dimensions of these innovations. Recommendations for policymakers and cybersecurity professionals include investing in detection technologies, promoting digital literacy, and fostering international collaboration to establish standards for ethical AI use. This thesis contributes to the broader discourse on AI ethics and cybersecurity, providing a foundation for future research and policy development in the era of digital manipulation.
|
14 |
Från ovetande till medveten : Hur information påverkar människors förmåga att identifiera falska bilder / From unaware to aware : How information affects people's ability to identify fake images
Hedman, Vilma, Olofsson, Malin. January 2024
One definition of a deepfake is "computer-generated forged information in the form of an image or video that is presented as genuine and credible". The word is made up of two parts: deep, which refers to deep machine learning, and fake, which indicates that the information is forged. The purpose of this work was to examine people's ability to identify deepfake images based on their own knowledge, versus after the authors had provided an informative text about what the respondent could look for in the images. In addition, the work also examined which methods exist to identify deepfakes and to counteract their spread. To answer this, a survey and a literature review in the field were conducted. The results of the survey were analyzed using, among other things, a hypothesis test, which examined whether there was a significant difference between the mean scores before and after the respondent had read the informative text. The study concluded with a demonstrated statistical significance for the survey results, as well as a compilation of the methods presented in the research. The statistical significance means that the result is based on enough data not to be classed as random. Based on the study's results, a guide was created to facilitate the identification of deepfakes, grounded in the points that the respondents and the research had in common.
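The before-and-after comparison of mean scores described here can be sketched as a paired t test. The scores below are hypothetical example data, not the survey's results, and in practice a library routine such as `scipy.stats.ttest_rel` would also return the p-value used to judge significance.

```python
import math
import statistics

# Hedged sketch of the hypothesis test described above: a paired t
# test on each respondent's score before vs. after reading the
# informative text. All scores are hypothetical example data.

def paired_t_statistic(before, after):
    """t statistic for paired samples (difference of means)."""
    diffs = [a - b for a, b in zip(after, before)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation
    return mean_d / (sd_d / math.sqrt(len(diffs)))

before = [4, 5, 6, 5]   # hypothetical scores before the text
after  = [7, 8, 8, 9]   # hypothetical scores after the text
t = paired_t_statistic(before, after)
print(round(t, 2))  # -> 7.35; a large |t| suggests a significant difference
```

A large t statistic relative to the t distribution's critical value for n-1 degrees of freedom is what lets the study reject the hypothesis that the informative text made no difference.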
|
15 |
Kreativitet i en tid av AI : En kvalitativ studie på hur AI påverkar den kreativa branschen samt användningsområden och hot som finns för kreativa yrken / Creativity in an age of AI : A qualitative study on how AI affects the creative industry as well as the usages and threats to creative professions
Tran, Sally, Lund, Clara Marie. January 2023
Artificial intelligence (AI) has become increasingly significant in today's society, with growing applications in various creative fields. This bachelor thesis aims to explore the potential benefits and risks AI offers to creative professions within media. Through the analysis of eight interviews, the study investigates how AI tools such as ChatGPT, DALL·E, and Midjourney can streamline work processes, enhance creative work, and foster inspiration, while also examining the associated challenges, such as rights, stereotypes, deepfakes, and the impact on work opportunities. The findings show that AI usage presents both advantages and disadvantages for creative work and suggest that this trend will continue. While the potential of AI technology is vast, human creativity is still expected to play a decisive role in creative work. The thesis concludes by emphasizing the responsibility of society and individuals to actively participate in the development process, ensuring that appropriate laws and regulations are in place to prevent the misuse of various AI systems in the creative sector.
|
16 |
Nyhetsmedieindustrin och den syntetiska revolutionen : En kvalitativ studie om hur nyhetsmedieindustrin hanterar utvecklingen av syntetisk media / The news media industry and the synthetic revolution : A qualitative study of how the news media industry handles the development of synthetic media
Kadhum, Zainab, Rosvall, Amanda. January 2024
Synthetic media is a form of manipulated or generated content created using advanced AI. This technology has the potential to revolutionize news media production, but it also poses several challenges that need to be addressed, one of them being the increased risk of disinformation. As the technology behind synthetic media evolves, it becomes harder to verify the authenticity of audiovisual media, and the journalistic principles on which the news media industry is built are put under pressure. This study therefore explores the implications of synthetic media for the news media industry, through empirical data collected from interviews with Swedish news media professionals. The findings identify a number of key strategies that the news media industry is recommended to implement to maintain its credibility while adapting to the development of AI and synthetic media: implementing AI policies, providing essential AI education, enhancing verification and detection processes, and further investing in specialized fact-checking desks. Furthermore, the findings highlight the need for a holistic approach that combines technical solutions with journalistic expertise and legislative measures to maintain public trust in news media. The study also calls for further research to understand the broader implications of synthetic media across the industry.
|
17 |
Social Media's Take on Deepfakes: Ethical Concerns in the Public Discourse
Abdul Hussein, Mohamed, Bogren, William. January 2023
The rapid advancement of artificial intelligence has led to the emergence of deepfakes: digital media that has been manipulated to replace a person's likeness with another's. This technology has seen significant improvements, becoming easier to use and producing results increasingly difficult to distinguish from reality, which has raised ethical discussions surrounding its deceptive nature. Furthermore, deepfakes have had a considerable impact and application on social media, which enables their spread. Despite this, the public discourse on social media, along with the societal and personal values associated with deepfakes, remains underexplored. This study addresses this gap by examining social media discourse and perceptions surrounding the prominent ethical concerns of deepfakes, and by situating these concerns within the broader landscape of AI ethics. Through a qualitative method resembling netnography, 320 posts from Reddit and YouTube, along with their respective comment sections, were thematically analyzed through passive observation. The findings reveal concerns ranging from misinformation and consent to deeper fears about deepfakes' role in fostering distrust, as well as more abstract apprehensions regarding the technology's abuse and harmful applications. These concerns further revealed how generalized, established AI ethical principles might be interpreted in the deepfake context, and how and why those principles might be violated by this technology. In particular, the study revealed how principles such as dignity, transparency, privacy and non-maleficence might be diverged from in deepfake applications.
|
18 |
Playdates & Algorithms : Exploring parental awareness and mediation strategies in the age of generative artificial intelligence
Abel, Chandler, Magnusson, Marie. January 2024
Access to the internet is more available than ever before for young children and adolescents, along with an increasing number of channels for using generative artificial intelligence (GAI). For parents of children and teens, this is a new frontier with innovative tools, terminology, and effects that test the integrity of existing parental mediation strategies for modern media. The lack of research on parental awareness of GAI, or on how this technology can influence children's well-being, motivated us to address this gap and gain valuable insights for future use. The present study explores the current state of parental awareness regarding GAI and its effects on the well-being of children, and what mediation strategies parents employ to mitigate these effects. Using Parental Mediation Theory (PMT) as a theoretical framework, patterns gathered through semi-structured interviews with parents (N=10) are identified with thematic analysis. Through these interviews, themes are uncovered that shed light on how parents perceive GAI in the context of the effects that such technology has on their children, as well as how it could impact their children's well-being in the future. The study concludes that while most parents know about GAI, many are not aware of the less familiar effects of this technology being used for media manipulation, chatbot companionship or educational assignments, which can have a potentially negative impact on the well-being of their children. Stemming from PMT, a new parental mediation strategy emerged from an analysis of the collected data. This strategy, called 'planned mediation', serves to proactively protect children from GAI and its less familiar effects, rather than responding reactively with the mediation strategies that currently exist.
|
19 |
Empowerment or exploitation: A qualitative analysis of online feminist communities' discussions of deepfake pornography
Brieger, Alexandra Rose. January 2024
This thesis provides insight into the textually constructed identities that online feminist groups create when discussing deepfake pornography, as well as the positions feminist users take with regard to their ability to change dominant uses of deepfake pornography. Deepfakes, powered by artificial intelligence and deep learning, involve taking individuals' faces and placing them on images and videos for various purposes, including but not limited to pornography. Much is known about the potential ramifications of deepfake technology in general; however, little is known about social groups and their perceptions of deepfake pornography, and no data exists on feminist perspectives on deepfakes in online communities. To interpret the empirical data, this thesis employs various theoretical concepts connected to technofeminism (Wajcman, 2004) in order to understand participants' perceptions of and attitudes towards deepfakes. It also utilizes moral foundations theory (Graham et al., 2013) to unpack moral concerns that communities may have regarding deepfake pornography. Based on a discourse analysis of three Reddit feminist communities, r/PornIsMisogyny, r/fourthwavewomen, and r/TwoXChromosomes, this thesis finds that through their discussions, feminist communities construct a multitude of identities in relation to deepfake pornography, all of which are directly tied to their sense of moral principles. These identities contrast victims and perpetrators, and hold governments and parents accountable for the spread of deepfake pornography. Additionally, many feminist users point to vigilante justice and criminalization as potential catalysts for change, which reflects social constructionism, while other users express feelings of hopelessness and exhibit positions more consistent with technological determinism. This emphasizes that collaborative efforts from governments, those in power, and private citizens are needed to address the challenges that deepfake pornography poses to society as a whole.
|
20 |
Le traitement de la preuve audiovisuelle devant la Cour pénale internationale / The treatment of audiovisual evidence before the International Criminal Court
Muhgoh, Thierry Chia. 08 1900
The increasing use of audiovisual information before international criminal courts indicates a trajectory that calls for a closer look at the issues raised by this type of evidence, from its collection and preservation to its use at trial. These issues are varied and relate to the veracity, authenticity and integrity of the content of such information.
In this thesis, we argue at least for a rigorous approach to the evaluation of audiovisual evidence throughout the judicial process of a case before the ICC, or at most for an objective framing of the rules applicable to audiovisual evidence, departing from the general principle of flexibility strongly rooted in the culture of the administration of evidence before this institution in favor of a strict and rigorous approach. This would favor, on the one hand, the application of a preliminary reliability criterion when audiovisual evidence is introduced and, on the other, the application of a method for evaluating audiovisual evidence based on the admission model.
The presence of a substantial preliminary analysis phase for sensitive elements, such as audiovisual evidence, does not necessarily imply a loss of the judges' discretion to defer the evaluation of the evidence introduced to the end of the process. What is fundamentally at stake is that the preliminary reliability criterion and the admission model would temper judicial discretion and encourage a more diligent and rigorous analysis of audiovisual evidence.
In our opinion, this approach should be initiated by the judges of the pre-trial and trial chambers, in their capacity as triers of fact and evidence, and implemented in practice by the various actors involved in the judicial process of this institution.
|