1

Head motion synthesis : evaluation and a template motion approach

Braude, David Adam. January 2016
The use of conversational agents has increased across the world. From providing automated support for companies to acting as virtual psychologists, they have moved from an academic curiosity to an application with real-world relevance. While many researchers have focused on the content of the dialogue and on synthetic speech to give the agents a voice, animating these characters has more recently become a topic of interest. Character animation technology is also of use in the film and video game industry, where automating animation would save tremendous labour costs. When animating characters there are many aspects to consider, for example the way they walk. However, to truly assist with communication, automated animation needs to reproduce the body language used when speaking. In particular, conversational agents are often animated only from the upper body, so head motion is one of the keys to a believable agent. While certain linguistic functions of head motion are obvious, such as nodding to indicate agreement, research has shown that head motion also aids understanding of speech. Additionally, head motion often carries emotional cues, prosodic information, and other paralinguistic information. In this thesis we present our research into synthesising head motion using only recorded speech as input. During this research we collected a large dataset of head motion synchronised with speech, examined evaluation methodology, and developed a synthesis system. Our dataset is one of the larger ones available. From it we present some statistics about head motion in general, including differences between read speech and storytelling speech, and differences between speakers. From these we draw conclusions about which type of source data will be the most interesting for head motion research, and whether speaker-dependent models are needed for synthesis.
In our examination of head motion evaluation methodology we introduce Forced Canonical Correlation Analysis (FCCA). FCCA distinguishes head-motion-shaped noise from motion capture better than the standard objective evaluation methods used in the literature. For subjective testing, we have shown that best practice is a variant of MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) testing adapted for head motion, and through experimentation we have developed guidelines for implementing the test and constraints on its length. Finally we present a new system for head motion synthesis. We make use of simple templates of motion, automatically extracted from source data, that are warped to suit the speech features. Our system uses clustering to pick the small motion units, and a combined HMM- and GMM-based approach to determine the warping parameter values at synthesis time. This results in highly natural-looking motion that outperforms other state-of-the-art systems. Our system requires minimal human intervention and produces believable motion. The key innovations were the new methods for segmenting head motion and a process, similar to language modelling, for synthesising head motion.
2

A Real-Time Classification approach of a Human Brain-Computer Interface based on Movement Related Electroencephalogram

Mileros, Martin D. January 2004
A Real-Time Brain-Computer Interface is a technical system that classifies, in real time, increased or decreased brain activity associated with different body movements or actions performed by a person. The focus of this thesis is on testing algorithms and settings, finding the initial time interval, and determining how increased brain activity can be distinguished and reliably classified. The objective is for the system to give an output within 250 ms of the thought of an action, which is faster than a person's reaction time.

The preprocessing algorithms were Blind Signal Separation and the Fast Fourier Transform. With different frequency and time-interval settings, the algorithms were tested on an offline electroencephalographic data file based on the "Ten-Twenty" Electrode Application System and classified using an Artificial Neural Network.

A satisfactory time interval was found between 125 and 250 ms, but more research is needed to investigate that specific interval. A reduction in sampling frequency resulted in too few samples in the sample window, preventing the algorithms from working properly. A high sampling frequency is therefore proposed, to help keep the sample window small in the time domain. Blind Signal Separation together with the Fast Fourier Transform had problems finding appropriate correlations using the Ten-Twenty Electrode Application System. Electrodes should be placed more selectively at the parietal lobe when motor responses are required.
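The trade-off the abstract describes — a 250 ms window leaves very few samples unless the sampling rate is high — can be sketched with a short FFT band-power computation. The sampling rate and frequency band below are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

fs = 512                      # assumed sampling rate in Hz
n = int(fs * 0.250)           # samples in a 250 ms window -> 128

rng = np.random.default_rng(1)
eeg = rng.standard_normal(n)  # one channel of a hypothetical EEG window

# Windowed magnitude spectrum of the short segment
spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)

# At fs = 512 Hz the frequency resolution is fs / n = 4 Hz; halving fs
# would halve the sample count and coarsen the resolution to 8 Hz.
mu_power = spectrum[(freqs >= 8) & (freqs <= 12)].mean()
```

With the window length fixed in time, only a higher sampling rate buys more FFT bins, which is exactly the motivation for the high-frequency recommendation above.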
3

Signal Processing Methods for Reliable Extraction of Neural Responses in Developmental EEG

Kumaravel, Velu Prabhakar. 27 February 2023
Studying newborns in the first days of life, before they have experienced the world, provides remarkable insights into the neurocognitive predispositions humans are endowed with. First, it improves our current knowledge of the development of a typical brain. Secondly, it potentially opens new pathways for earlier diagnosis of several developmental neurocognitive disorders such as Autism Spectrum Disorder (ASD). While most studies investigating early cognition in the literature are purely behavioural, there has recently been an increasing number of neuroimaging studies in newborns and infants. Electroencephalography (EEG) is one of the best-suited neuroimaging techniques for investigating neurocognitive functions in human newborns because it is non-invasive and quick and easy to mount on the head. Since EEG offers a versatile design with a custom number of channels/electrodes, an ergonomic wearable solution could help study newborns outside clinical settings, such as their homes. Compared to adult EEG, newborn EEG data differ in two main aspects: 1) in experimental designs investigating stimulus-related neural responses, the collected data are extremely short due to the reduced attentional span of newborns; 2) the data are heavily contaminated with noise from uncontrollable movement artifacts. Since EEG processing methods for adults are not adapted to very short data and usually deal with well-defined, stereotyped artifacts, they are unsuitable for newborn EEG. As a result, researchers clean the data manually, a subjective and time-consuming task. This thesis is dedicated to developing novel (semi-)automated signal processing methods for noise removal and for extracting reliable neural responses specific to this population. Solutions are proposed both for high-density EEG for traditional lab-based research and for wearable EEG for clinical applications.
To this end, this thesis first presents novel signal processing methods applied to newborn EEG: 1) Local Outlier Factor (LOF) for detecting and removing bad/noisy channels; 2) Artifacts Subspace Reconstruction (ASR) for detecting and removing or correcting bad/noisy segments. Then, based on these algorithms and other preprocessing functionalities, a robust preprocessing pipeline, Newborn EEG Artifact Removal (NEAR), is proposed. Notably, this is the first time LOF has been explored for EEG bad-channel detection, despite being a popular outlier detection technique for other kinds of data such as the electrocardiogram (ECG). Although ASR is an established artifact removal algorithm originally developed for mobile adult EEG, this thesis explores adapting ASR to short newborn EEG data, which is the first work of its kind. NEAR is validated on simulated, real newborn, and infant EEG datasets. We used the SEREEGA toolbox to simulate neurologically plausible synthetic data and contaminated a certain number of channels and segments with artifacts commonly manifested in developmental EEG. We used newborn EEG data (n = 10, age range: 1 to 4 days) recorded in our lab with a frequency-tagging paradigm. The chosen paradigm uses visual stimuli to investigate the cortical bases of face-like pattern processing, and the results were published in 2019. To test NEAR's performance on an older population with an event-related potential (ERP) design and with data recorded in another lab, we also evaluated NEAR on EEG data recorded from 9-month-old infants (n = 14) with an ERP paradigm. The experimental paradigm for these datasets uses auditory stimuli to investigate electrophysiological evidence for the understanding of maternal speech, and the results were published in 2012. Since the authors of these independent studies employed manual artifact removal, the neural responses they obtained serve as ground truth for validating NEAR's artifact removal performance.
For comparative evaluation, we considered two state-of-the-art pipelines designed for older infants. Results show that NEAR recovers the neural responses (specific to the EEG paradigm and the stimuli) better than the other pipelines. In sum, this thesis presents a set of methods for artifact removal and for the extraction of stimulus-related neural responses, specifically adapted to newborn and infant EEG data, that will hopefully strengthen the reliability and reproducibility of developmental cognitive neuroscience studies, both in research laboratories and in clinical applications.
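LOF-based bad-channel detection, as used in NEAR, can be sketched with scikit-learn's LocalOutlierFactor applied to simple per-channel summary features. The features, channel counts, and neighbour setting below are illustrative assumptions, not NEAR's actual configuration.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(2)
n_channels, n_samples = 32, 1000
eeg = rng.standard_normal((n_channels, n_samples))
eeg[5] *= 20.0                                   # high-amplitude artifact channel
eeg[17] = 0.01 * rng.standard_normal(n_samples)  # nearly flat (disconnected) channel

# Per-channel summary features (illustrative): variance and peak-to-peak amplitude
feats = np.column_stack([eeg.var(axis=1), np.ptp(eeg, axis=1)])

lof = LocalOutlierFactor(n_neighbors=10)
labels = lof.fit_predict(feats)        # -1 marks channels flagged as outliers
bad_channels = np.where(labels == -1)[0]
```

Because LOF scores each channel by its density relative to its neighbours, it flags both unusually noisy and unusually flat channels without a fixed amplitude threshold, which suits the heterogeneous artifacts of newborn recordings.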
