Machine Learning Approaches for Speech Forensics

Amit Kumar Singh Yadav (19984650) 31 October 2024
<p dir="ltr">Several incidents report the misuse of synthetic speech for impersonation attacks, spreading misinformation, and supporting financial fraud. To counter such misuse, this dissertation develops methods for speech forensics. First, we present a method to detect compressed synthetic speech. The method uses about 33 times less information from the compressed bit stream than existing methods and achieves high performance. Second, we present a transformer neural network that uses a 2D spectral representation of speech signals to detect synthetic speech. The method performs well on both compressed and uncompressed synthetic speech. Third, we present a method for synthetic speech detection based on an interpretable machine learning approach known as disentangled representation learning. Fourth, we present a method for synthetic speech attribution, which identifies the source of a speech signal. If the speech is spoken by a human, we classify it as authentic/bona fide. If the speech signal is synthetic, we identify the generation method used to create it. We examine both closed-set and open-set attribution scenarios. In the closed-set scenario, we evaluate our approach only on the speech generation methods present in the training set. In the open-set scenario, we also evaluate on methods that are not present in the training set. Fifth, we propose a multi-domain method for synthetic speech localization. It processes multi-domain features obtained from a transformer using a ResNet-style MLP. We show that, with relatively few parameters, the proposed method outperforms existing methods. Finally, we present a new direction of research in speech forensics, <i>i.e.</i>, the bias and fairness of synthetic speech detectors. By bias, we refer to a detector unfairly targeting a specific demographic group and falsely labeling their bona fide speech as synthetic.
We show that existing synthetic speech detectors are biased with respect to gender, age, and accent. They are also biased against bona fide speech from people with speech impairments such as stuttering. We propose a set of augmentations that simulate stuttering in speech and show that synthetic speech detectors trained with these augmentations are less biased than detectors trained without them.</p>
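The abstract does not detail the stuttering augmentations themselves. As a minimal sketch, one plausible repetition-style augmentation (mimicking a sound or part-word repetition) can be written directly on the waveform in NumPy; the function name and parameters below are illustrative, not the dissertation's actual implementation:

```python
import numpy as np

def simulate_repetition(wave, sr, start_s, dur_s, repeats=2):
    """Illustrative stuttering-style augmentation (not the dissertation's code):
    repeat a short segment of the waveform to mimic a sound/part-word repetition.

    wave: 1-D float array of audio samples; sr: sample rate in Hz.
    start_s, dur_s: start time and duration (seconds) of the segment to repeat.
    repeats: total number of times the segment appears in the output.
    """
    a = int(start_s * sr)
    b = int((start_s + dur_s) * sr)
    segment = wave[a:b]
    # Keep audio up to the segment's end, insert extra copies, then the rest.
    return np.concatenate([wave[:b], np.tile(segment, repeats - 1), wave[b:]])

sr = 16000
wave = np.random.randn(sr)  # 1 s of noise standing in for a speech signal
aug = simulate_repetition(wave, sr, start_s=0.2, dur_s=0.1, repeats=3)
print(len(aug))  # original length plus two extra copies of a 0.1 s segment
```

Augmentations of this kind let a detector see bona fide-like disfluencies during training, which is one plausible way to reduce bias against speakers who stutter.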
<b>Speech Forensics Using Machine Learning</b>

Kratika Bhagtani (20699921) 10 February 2025
<p dir="ltr">High-quality synthetic speech can now be generated and used maliciously, so speech forensics tools are needed to detect it. Besides detection, it is important to identify the synthesizer that generated a given speech signal; this is known as synthetic speech attribution. Speech editing tools can be used to create partially synthetic speech in which only some segments are synthetic. Detecting these synthetic segments is known as synthetic speech localization.</p><p dir="ltr">We first propose a method for synthetic speech attribution known as the Patchout Spectrogram Attribution Transformer (PSAT). PSAT can distinguish unseen speech synthesis methods (<i>unknown </i>synthesizers) from the methods seen during its training (<i>known </i>synthesizers), achieving more than 95% attribution accuracy. Second, we propose the Fine-Grain Synthetic Speech Attribution Transformer (FGSSAT), which can assign different labels to different <i>unknown </i>synthesizers; existing methods, including PSAT, cannot distinguish between them. FGSSAT improves on existing work by performing fine-grained synthetic speech attribution. Third, we propose the Synthetic Speech Localization Convolutional Transformer (SSLCT), which achieves less than 10% Equal Error Rate (EER) for synthetic speech localization. Fourth, we demonstrate that existing methods do not perform well on recent diffusion-based synthesizers. We propose the Diffusion-Based Synthetic Speech Dataset (DiffSSD), consisting of about 200 hours of speech, including synthetic speech from 8 open-source and 2 commercial diffusion-based generators. We train speech forensics methods on this dataset and demonstrate its importance for recent open-source and commercial generators.</p>
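The Equal Error Rate reported for SSLCT is the operating point where the false acceptance rate (bona fide flagged as synthetic) equals the false rejection rate (synthetic missed). A minimal NumPy sketch of how an EER can be computed from detection scores follows; this is a generic illustration, not the dissertation's evaluation code:

```python
import numpy as np

def compute_eer(scores, labels):
    """Equal Error Rate: sweep thresholds and find where the false acceptance
    rate (FAR) and false rejection rate (FRR) are closest.

    scores: higher means "more likely synthetic"; labels: 1 = synthetic, 0 = bona fide.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best_gap, eer = np.inf, None
    for t in np.sort(np.unique(scores)):
        pred = scores >= t                     # predicted synthetic at threshold t
        far = np.mean(pred[labels == 0])       # bona fide wrongly flagged
        frr = np.mean(~pred[labels == 1])      # synthetic wrongly missed
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

scores = np.array([0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0])
print(compute_eer(scores, labels))  # ~0.333: one error of each kind at the EER threshold
```

In practice the EER is usually interpolated from an ROC curve rather than taken at a discrete threshold, but the idea is the same: a lower EER means fewer errors of both kinds simultaneously.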
