1. On the preservation of media trustworthiness in the social media era. Lago, Federica, 29 March 2022.
The amount of multimedia content shared online every day has recently undergone a dramatic increase. Combined with the stunning realism of fake images that can be generated with AI-based technologies, this undermines the trustworthiness of online information sources. In this work, we tackle the problem of preserving media trustworthiness online from two different points of view. The first consists in assessing the human ability to spot fake images, focusing in particular on synthetic faces, which are extremely realistic and can represent a severe threat if used to disseminate fake news. A perception study allowed us to show for the first time that people are more prone to question the reality of authentic pictures than that of last-generation AI-generated images. Secondly, we focus on social media forensics: our goal is to reconstruct the history of an image shared or re-shared online, as typically happens nowadays. We propose a new framework able to trace the history of an image over multiple sharings. This framework improves on the state of the art and has the advantage of being easily extensible with new methods, thus adapting to new datasets and scenarios. Indeed, in this environment of fast-paced technological evolution, being able to adapt is fundamental to preserving our trust in what we see.
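One common trace exploited when reconstructing an image's sharing history is the JPEG quantization table, since re-sharing platforms typically recompress uploads with their own settings. The following is a minimal sketch of that idea, not the dissertation's framework: the platform names and quantization values below are entirely hypothetical, invented for illustration rather than measured from any real service.

```python
# Hypothetical per-platform luminance quantization values (first 8 DQT entries).
# These numbers are illustrative only, not measured from real platforms.
PLATFORM_QT = {
    "original_camera": [2, 1, 1, 2, 2, 4, 5, 6],
    "platform_A":      [6, 4, 4, 6, 10, 16, 20, 24],
    "platform_B":      [8, 6, 6, 8, 12, 20, 26, 31],
}

def nearest_platform(qt):
    """Attribute the last sharing step to the platform whose known
    quantization values are closest (squared error) to the observed ones."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PLATFORM_QT, key=lambda name: dist(PLATFORM_QT[name], qt))

# An observed table close (but not identical) to platform_B's profile:
observed = [8, 6, 6, 8, 12, 19, 26, 30]
```

A real framework of the kind the abstract describes would combine many such traces (quantization tables, metadata, resizing artifacts) and chain classifiers to recover multiple sharing steps, not just the last one.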
2. Detecting Image Forgery with Color Phenomenology. Stanton, Jamie Alyssa, 30 May 2019.
No description available.
3. On the Relevance of Temporal Information in Multimedia Forensics Applications in the Age of A.I. Montibeller, Andrea, 24 January 2024.
The proliferation of multimedia data, including digital images and videos, has led to an increase in their misuse, such as the unauthorized sharing of sensitive content, the spread of fake news, and the dissemination of misleading propaganda. To address these issues, the research field of multimedia forensics has developed tools to distinguish genuine multimedia from fakes and to identify the sources of those who share sensitive content. However, the accuracy and reliability of multimedia forensics tools are threatened by recent technological advancements in multimedia processing software and camera devices. For example, source attribution consists in attributing an image or video to a specific camera device, which is crucial for addressing privacy violations, cases of revenge porn, and instances of child pornography. These tools exploit forensic traces unique to each camera's manufacturing process, such as Photo Response Non-Uniformity (PRNU). Nevertheless, image and video processing transformations can disrupt the consistency of PRNU, necessitating new methods for its recovery. Conversely, to distinguish genuine multimedia from fakes, AI-based image and video forgery localization methods have also emerged. However, they constantly face challenges from newer, more sophisticated AI forgery techniques and are hindered by factors such as AI-aided post-processing and, in the case of videos, lower resolutions and stronger compression. This doctoral study investigates the relevance of temporal information in two problems: the parameter estimation used to reverse complex spatial transformations for source attribution, and forgery localization in low-resolution, post-processed H.264 inpainted videos.
Two novel methods are presented that model the parameters involved in reversing complex in-camera and out-camera spatial transformations applied to images and videos as time series, improving source attribution accuracy and computational efficiency. Regarding video inpainting localization, a novel dataset of videos inpainted and post-processed with Temporal Consistency Networks is introduced, together with a method that improves video inpainting localization by taking into account spatial and temporal inconsistencies at the dense optical flow level. The research presented in this dissertation has resulted in several publications that contribute to the field of multimedia forensics, addressing challenges related to source attribution and video forgery localization.
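The PRNU pipeline the abstract builds on can be sketched in a few lines: estimate a noise residual per image, average residuals into a camera fingerprint, and match a query residual against it by normalized cross-correlation. This toy sketch uses a crude mean-filter denoiser and flat synthetic "shots" in place of the wavelet denoiser and real photographs used in practice; all sizes and noise levels are illustrative assumptions.

```python
import numpy as np

def mean_filter(img, k=3):
    # Crude denoiser: k x k mean filter (practical PRNU work uses a
    # wavelet-based denoiser instead).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

def noise_residual(img):
    # W = I - denoise(I): what survives is dominated by sensor noise.
    img = img.astype(np.float64)
    return img - mean_filter(img)

def estimate_fingerprint(images):
    # Average residuals over many images so scene content averages out
    # and the multiplicative PRNU pattern remains.
    return np.mean([noise_residual(i) for i in images], axis=0)

def ncc(a, b):
    # Normalized cross-correlation: a high value suggests the same sensor.
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy demo: two synthetic "cameras" with random multiplicative fingerprints,
# photographing a flat gray scene.
rng = np.random.default_rng(0)
K_a = rng.normal(0.0, 0.05, (64, 64))   # camera A fingerprint
K_b = rng.normal(0.0, 0.05, (64, 64))   # camera B fingerprint
shots_a = [128 * (1 + K_a) + rng.normal(0, 2, (64, 64)) for _ in range(20)]
F_a = estimate_fingerprint(shots_a)
q_same = noise_residual(128 * (1 + K_a) + rng.normal(0, 2, (64, 64)))
q_other = noise_residual(128 * (1 + K_b) + rng.normal(0, 2, (64, 64)))
# ncc(F_a, q_same) should clearly exceed ncc(F_a, q_other)
```

The dissertation's contribution sits upstream of this matching step: when spatial transformations (cropping, warping, stabilization) have desynchronized the residual from the fingerprint, the transformation parameters must be estimated and reversed first, which is where modeling them as time series pays off.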
4. Machine Learning Approaches for Speech Forensics. Amit Kumar Singh Yadav, 31 October 2024.
Several incidents report misuse of synthetic speech for impersonation attacks, spreading misinformation, and supporting financial fraud. To counter such misuse, this dissertation focuses on developing methods for speech forensics. First, we present a method to detect compressed synthetic speech. The method uses about 33 times less information from the compressed bit stream than existing methods and achieves high performance. Second, we present a transformer neural network method that uses a 2D spectral representation of speech signals to detect synthetic speech. The method performs well on both compressed and uncompressed synthetic speech. Third, we present a method using an interpretable machine learning approach known as disentangled representation learning for synthetic speech detection. Fourth, we present a method for synthetic speech attribution, which identifies the source of a speech signal. If the speech is spoken by a human, we classify it as authentic/bona fide. If the speech signal is synthetic, we identify the generation method used to create it. We examine both closed-set and open-set attribution scenarios. In the closed-set scenario, we evaluate our approach only on the speech generation methods present in the training set. In the open-set scenario, we also evaluate on methods not present in the training set. Fifth, we propose a multi-domain method for synthetic speech localization. It processes multi-domain features obtained from a transformer using a ResNet-style MLP. We show that, with relatively few parameters, the proposed method outperforms existing methods. Finally, we present a new direction of research in speech forensics, namely the bias and fairness of synthetic speech detectors. By bias, we refer to a detector unfairly targeting a specific demographic group and falsely labeling their bona fide speech as synthetic. We show that existing synthetic speech detectors are biased with respect to gender, age, and accent. They are also biased against bona fide speech from people with speech impairments such as stuttering. We propose a set of augmentations that simulate stuttering in speech, and show that synthetic speech detectors trained with the proposed augmentations are less biased than detectors trained without them.
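One form of stuttering the augmentations could simulate is a sound repetition, where a short segment of speech is produced several times in a row. Below is a minimal sketch of that single augmentation on a raw waveform; the segment length, repetition count, and placement are assumptions for illustration, not the dissertation's actual parameters or full augmentation set.

```python
import numpy as np

def simulate_sound_repetition(speech, sr, seg_ms=120, repeats=2, start=None):
    """Insert a stutter-like sound repetition: a short segment is tiled
    (repeats + 1) times in place of its single occurrence, lengthening
    the waveform without otherwise altering the speaker's speech."""
    seg_len = int(sr * seg_ms / 1000)
    if start is None:
        start = (len(speech) - seg_len) // 2  # repeat a mid-utterance segment
    seg = speech[start:start + seg_len]
    return np.concatenate(
        [speech[:start], np.tile(seg, repeats + 1), speech[start + seg_len:]]
    )

# Toy demo on a 1-second sine "utterance" at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
speech = np.sin(2 * np.pi * 220 * t)
augmented = simulate_sound_repetition(speech, sr, seg_ms=100, repeats=2)
```

Training a detector on bona fide speech transformed this way exposes it to stutter-like disfluencies, which is the mechanism by which such augmentations could reduce bias against speakers who stutter.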