41

Dynamic Headpose Classification and Video Retargeting with Human Attention

Anoop, K R January 2015
Over the years, extensive research has been devoted to the study of people's head pose owing to its relevance in security, human-computer interaction and advertising, as well as in cognitive, neuro- and behavioural psychology. One of the main goals of this thesis is to estimate people's 3D head orientation as they move freely in naturalistic settings such as parties and supermarkets. Head pose classification from surveillance images acquired with distant, large field-of-view cameras is difficult because the captured faces are low-resolution and blurred. Labelling sufficient training data for head pose estimation in such settings is also difficult owing to the motion of the targets and the large possible range of head orientations. Domain adaptation approaches are useful for transferring knowledge from the training (source) data to test (target) data with different attributes, minimizing the target labelling effort in the process. This thesis examines the use of transfer learning for efficient multi-view head pose classification. The relationship between head pose and facial appearance is first learned from many labelled examples in the source data; domain adaptation techniques are then employed to transfer this knowledge to the target data. Three challenging situations are addressed: (I) the ranges of head poses in the source and target images differ, (II) the source images capture a stationary person while the target images capture a moving person whose facial appearance varies due to changing perspective and scale, and (III) a combination of (I) and (II). All proposed transfer learning methods are extensively tested and benchmarked on DPOSE, a newly compiled dataset for head pose classification. This thesis also introduces Covariance Profiles (CPs), a novel signature representation for describing object sets with covariance descriptors. CPs are well suited for representing a set of similarly related objects: they posit that the covariance matrices pertaining to a specific entity share the same eigen-structure. Such a representation is not only compact but also eliminates the need to store all the training data. Experiments on images and videos demonstrate CPs for applications such as object-track clustering and head pose estimation.

In the second part, human gaze is explored for interest-point detection in video retargeting. Regions in video streams that attract human interest contribute significantly to human understanding of the video, and predicting salient and informative Regions of Interest (ROIs) from a sequence of eye movements is a challenging problem. This thesis proposes an interactive human-in-the-loop framework that models eye movements and predicts visual saliency in yet-unseen frames. Eye-tracking data and video content are used to model visual attention in a manner that accounts for temporal discontinuities due to sudden eye movements, noise and behavioural artefacts. Gaze buffering, a scheme for eye-gaze analysis and its fusion with content-based features, is proposed: eye-gaze information is combined with bottom-up and top-down saliency to boost the importance of image pixels. The resulting robust visual saliency prediction is instantiated for content-aware video retargeting.
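To make the covariance-descriptor idea concrete, the standard region covariance descriptor and one way of writing the shared eigen-structure assumption behind CPs are sketched below; the notation is illustrative and not necessarily the exact formulation used in the thesis.

\[
\mathbf{C}_R = \frac{1}{n-1}\sum_{k=1}^{n}(\mathbf{z}_k-\boldsymbol{\mu})(\mathbf{z}_k-\boldsymbol{\mu})^{\top},
\qquad
\mathbf{C}_i \approx \mathbf{U}\,\boldsymbol{\Lambda}_i\,\mathbf{U}^{\top},\quad i = 1,\dots,N,
\]

where the \(\mathbf{z}_k\) are d-dimensional feature vectors (e.g., position, intensity, gradients) sampled in an image region R with mean \(\boldsymbol{\mu}\). The second relation expresses the CP assumption stated above: the N covariance matrices describing one entity share a common eigenbasis \(\mathbf{U}\), while only the eigenvalue matrices \(\boldsymbol{\Lambda}_i\) vary.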
42

Canonical Correlation and the Calculation of Information Measures for Infinite-Dimensional Distributions: Kanonische Korrelationen und die Berechnung von Informationsmaßen für unendlichdimensionale Verteilungen

Huffmann, Jonathan 26 March 2021
This thesis investigates the extension of the well-known canonical correlation analysis to random elements on abstract real measurable Hilbert spaces. One focus is the application of this extension to the calculation of information-theoretic quantities on finite time intervals. Analytical approaches for calculating the mutual information and the information density between Gaussian distributed random elements on arbitrary real measurable Hilbert spaces are derived. With respect to mutual information, the results obtained are comparable to [4] and [1] (Baker, 1970, 1978) and can also be seen as a generalization of earlier findings in [20] (Gelfand and Yaglom, 1958). In addition, some of the derived equations for calculating the information density, its characteristic function and its n-th central moments extend results from [45] and [44] (Pinsker, 1963, 1964). Furthermore, explicit examples are elaborated for the calculation of the mutual information, the characteristic function of the information density and the n-th central moments of the information density for the important special case of an additive Gaussian channel with a Gaussian distributed input signal with rational spectral density, on the one hand for white Gaussian noise and on the other hand for Gaussian noise with rational spectral density. These results extend the corresponding concrete examples for the calculation of the mutual information from [20] (Gelfand and Yaglom, 1958) as well as [28] and [29] (Huang and Johnson, 1963, 1962).

Table of contents:
Kurzfassung; Abstract; Notations; Abbreviations
1 Introduction; 1.1 Software Used
2 Mathematical Background; 2.1 Basic Notions of Measure and Probability Theory; 2.1.1 Characteristic Functions; 2.2 Stochastic Processes; 2.2.1 The Consistency Theorem of Daniell and Kolmogorov; 2.2.2 Second Order Random Processes; 2.3 Some Properties of Fourier Transforms; 2.4 Some Basic Inequalities; 2.5 Some Fundamentals in Functional Analysis; 2.5.1 Hilbert Spaces; 2.5.2 Linear Operators on Hilbert Spaces; 2.5.3 The Fréchet-Riesz Representation Theorem; 2.5.4 Adjoint and Compact Operators; 2.5.5 The Spectral Theorem for Compact Operators
3 Mutual Information and Information Density; 3.1 Mutual Information; 3.2 Information Density
4 Probability Measures on Hilbert Spaces; 4.1 Measurable Hilbert Spaces; 4.2 The Characteristic Functional; 4.3 Mean Value and Covariance Operator; 4.4 Gaussian Probability Measures on Hilbert Spaces; 4.5 The Product of Two Measurable Hilbert Spaces; 4.5.1 The Product Measure; 4.5.2 Cross-Covariance Operator
5 Canonical Correlation Analysis on Hilbert Spaces; 5.1 The Hellinger Distance and the Theorem of Kakutani; 5.2 Canonical Correlation Analysis on Hilbert Spaces; 5.3 The Theorem of Hájek and Feldman
6 Mutual Information and Information Density Between Gaussian Measures; 6.1 A General Formula for Mutual Information and Information Density for Gaussian Random Elements; 6.2 Hadamard's Factorization Theorem; 6.3 Closed Form Expressions for Mutual Information and Related Quantities; 6.4 The Discrete-Time Case; 6.5 The Continuous-Time Case; 6.6 Approximation Error
7 Additive Gaussian Channels; 7.1 Abstract Channel Model and General Definitions; 7.2 Explicit Expressions for Mutual Information and Related Quantities; 7.2.1 Gaussian Random Elements as Input to an Additive Gaussian Channel
8 Continuous-Time Gaussian Channels; 8.1 White Gaussian Channels; 8.1.1 Two Simple Examples; 8.1.2 Gaussian Input with Rational Spectral Density; 8.1.3 A Method of Youla, Kadota and Slepian; 8.2 Noise and Input Signal with Rational Spectral Density; 8.2.1 Again a Method by Slepian and Kadota
Bibliography
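As a pointer to the type of result being generalized here, the classical connection between canonical correlations and mutual information for jointly Gaussian elements (in the spirit of [20] and [1]) can be written as below; the Hilbert-space statements in the thesis itself are more general and are not reproduced here.

\[
I(X;Y) \;=\; -\frac{1}{2}\sum_{k}\log\!\left(1-\rho_k^{2}\right),
\]

where the \(\rho_k\) are the canonical correlations between X and Y. In the finite-dimensional Gaussian case there are finitely many \(\rho_k\); the results discussed in this thesis extend this picture to infinitely many canonical correlations on measurable Hilbert spaces.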
43

Signal Processing Methods for Reliable Extraction of Neural Responses in Developmental EEG

Kumaravel, Velu Prabhakar 27 February 2023
Studying newborns in their first days of life, before they have experienced the world, provides remarkable insights into the neurocognitive predispositions humans are endowed with. First, it helps improve our current knowledge of typical brain development. Secondly, it potentially opens new pathways for earlier diagnosis of several developmental neurocognitive disorders such as Autism Spectrum Disorder (ASD). While most studies investigating early cognition in the literature are purely behavioural, there has recently been an increasing number of neuroimaging studies in newborns and infants. Electroencephalography (EEG) is one of the best-suited neuroimaging techniques for investigating neurocognitive functions in human newborns because it is non-invasive and quick and easy to mount on the head. Since EEG offers a versatile design with a custom number of channels/electrodes, an ergonomic wearable solution could help study newborns outside clinical settings, for example at home. Compared to adult EEG, newborn EEG data differ in two main respects: 1) in experimental designs investigating stimulus-related neural responses, the recorded data are extremely short because of newborns' limited attention span; 2) the data are heavily contaminated by noise from uncontrollable movement artifacts. Because EEG processing methods for adults are not adapted to very short data lengths and usually deal with well-defined, stereotyped artifacts, they are unsuitable for newborn EEG. As a result, researchers clean the data manually, a subjective and time-consuming task. This thesis is specifically dedicated to developing (semi-)automated novel signal processing methods for noise removal and for extracting reliable neural responses in this population. Solutions are proposed both for high-density EEG for traditional lab-based research and for wearable EEG for clinical applications. To this end, this thesis first presents novel signal processing methods applied to newborn EEG: 1) Local Outlier Factor (LOF) for detecting and removing bad/noisy channels; 2) Artifact Subspace Reconstruction (ASR) for detecting and removing or correcting bad/noisy segments. Then, based on these algorithms and other preprocessing functionalities, a robust preprocessing pipeline, Newborn EEG Artifact Removal (NEAR), is proposed. Notably, this is the first time LOF has been explored for EEG bad-channel detection, despite being a popular outlier detection technique for other kinds of data such as the electrocardiogram (ECG). Although ASR is an established artifact removal algorithm originally developed for mobile adult EEG, this thesis is the first to explore adapting ASR to short newborn EEG data. NEAR is validated on simulated, real newborn, and infant EEG datasets. We used the SEREEGA toolbox to simulate neurologically plausible synthetic data and contaminated a certain number of channels and segments with artifacts commonly manifested in developmental EEG. We used newborn EEG data (n = 10, age range: 1 to 4 days) recorded in our lab with a frequency-tagging paradigm; the paradigm consists of visual stimuli used to investigate the cortical bases of face-like pattern processing, and the results were published in 2019. To test NEAR's performance on an older population, with an event-related potential (ERP) design and with data recorded in another lab, we also evaluated NEAR on EEG data from 9-month-old infants (n = 14) recorded with an ERP paradigm. The experimental paradigm for these datasets consists of auditory stimuli used to investigate the electrophysiological evidence for understanding maternal speech, and the results were published in 2012. Since the authors of these independent studies employed manual artifact removal, the neural responses they obtained serve as ground truth for validating NEAR's artifact removal performance. For comparative evaluation, we considered the performance of two state-of-the-art pipelines designed for older infants. Results show that NEAR successfully recovers the neural responses (specific to the EEG paradigm and the stimuli) in comparison with the other pipelines. In sum, this thesis presents a set of methods for artifact removal and extraction of stimulus-related neural responses specifically adapted to newborn and infant EEG data, which will hopefully contribute to strengthening the reliability and reproducibility of developmental cognitive neuroscience studies, both in research laboratories and in clinical applications.
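As a rough illustration of the LOF-based bad-channel step, the sketch below applies scikit-learn's LocalOutlierFactor to a channels-by-samples array, treating each channel as one observation; the neighbourhood size and score threshold are arbitrary placeholders, not the tuned NEAR parameters.

    import numpy as np
    from sklearn.neighbors import LocalOutlierFactor

    def detect_bad_channels(eeg, n_neighbors=20, threshold=2.5):
        # eeg: (n_channels, n_samples) array; each channel is one observation
        # whose time samples act as its feature vector.
        lof = LocalOutlierFactor(n_neighbors=n_neighbors)
        lof.fit(eeg)
        scores = -lof.negative_outlier_factor_   # larger score = more anomalous channel
        return np.where(scores > threshold)[0]

    # toy usage with synthetic data: channel 3 is made an obvious outlier
    rng = np.random.default_rng(0)
    data = rng.standard_normal((64, 5000))
    data[3] *= 50
    print(detect_bad_channels(data))             # expected to flag channel 3

The same per-channel scores could of course feed any interpolation or rejection policy downstream; only the outlier scoring itself is shown here.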
44

Modeling of the sEMG / Force relationship by data analysis of high resolution sensor network / Modélisation de la relation entre le signal EMG de surface et la force musculaire par analyse de données d’un réseau de capteurs à haute résolution

Al Harrach, Mariam 27 September 2016
The neuromuscular and musculoskeletal systems are considered a complex system of systems. Movement of the human body is controlled by the central nervous system through the activation of skeletal muscle cells. Muscle activation produces two distinct phenomena, one mechanical and one electrical; the two have different properties, but the mechanical activity cannot occur without the electrical activity and vice versa. The mechanical activity of skeletal muscle contraction is responsible for movement. Since movement is essential to human life, it is crucial to understand how it works and how it is generated, which can help detect deficiencies in the neuromuscular and musculoskeletal systems. Movement is described by the muscle forces and moments acting on a particular joint. Consequently, the neuromuscular and musculoskeletal systems can be assessed, for the diagnosis and management of neurological and orthopaedic diseases, through force estimation. Nevertheless, the force produced by a single muscle can only be measured with highly invasive techniques, which is why estimating this force remains one of the great challenges of biomechanics. Moreover, as noted above, muscle activation also has an electrical response that is correlated with the mechanical one. This electrical signal is called the electromyogram (EMG) and can be measured non-invasively with surface electrodes. The EMG is the sum of the motor unit action potential trains responsible for muscle contraction and the generation of movement. When measured by electrodes placed on the surface of the skin, it is called the surface EMG (sEMG). For a single muscle, assuming that the relationship between sEMG amplitude and force is monotonic, several studies have tried to estimate this force by developing models driven by this signal. However, these models have several limitations owing to unrealistic assumptions about neural activation. In this thesis, we propose a new model of the sEMG/force relationship that incorporates high-density sEMG (HD-sEMG), a recent sEMG recording technique that has been shown to improve force estimation by overcoming the problem of electrode position over the muscle. This sEMG/force model is developed in a fatigue-free context for isometric, isotonic and anisotonic contractions of the Biceps Brachii (BB) during isometric flexion of the elbow joint at 90°. / The neuromuscular and musculoskeletal systems are a complex System of Systems (SoS) that interact closely to produce motion. This interaction is reflected in the muscular force generated by muscle activation, which is driven by the Central Nervous System (CNS) and pilots joint motion. Knowledge of the force level is highly important in biomechanical and clinical applications. However, recording the force produced by a single muscle is impossible using non-invasive procedures, so it is necessary to develop a way to estimate it. Muscle activation also generates another, electrical, phenomenon measured at the skin using electrodes, namely the surface electromyogram (sEMG). In the biomechanics literature, several models of the sEMG/force relationship are available; they are principally used to drive musculoskeletal models. However, these models suffer from several important limitations, such as a lack of physiological realism, personalization, and representativeness when a single sEMG channel is used as input. In this work, we propose to construct a model of the sEMG/force relationship for the Biceps Brachii (BB) based on data analysis of a High Density sEMG (HD-sEMG) sensor network. For this purpose, we first prepare the data for the processing stage by denoising the sEMG signals and removing parasite signals. We therefore propose an HD-sEMG denoising procedure based on Canonical Correlation Analysis (CCA) that removes two types of noise degrading the sEMG signals, together with a source separation method combining CCA and image segmentation to separate the electrical activities of the BB and the Brachialis (BR). Second, we extract the information from an 8 x 8 HD-sEMG electrode grid to form the input of the sEMG/force model: we investigate different parameters that describe muscle activation and can affect the shape of the relationship, and apply data fusion through an image segmentation algorithm. Finally, we propose a new HD-sEMG/force relationship, using simulated data from a realistic HD-sEMG generation model of the BB and a twitch-based model, to estimate a specific force profile corresponding to a specific sEMG sensor network and muscle configuration. We then test this new relationship for force estimation using both machine learning and analytical approaches. This study is motivated by the impossibility of obtaining the intrinsic force of a single muscle experimentally.
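Purely as a schematic of the monotone amplitude-to-force mapping that single-channel models assume (and that the HD-sEMG network model above generalizes), the sketch below rectifies and low-pass filters a synthetic sEMG channel and fits a low-order polynomial against a hypothetical force profile; the sampling rate, cutoff and polynomial degree are arbitrary choices, not values from the thesis.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_envelope(semg, fs, cutoff=2.0):
        # full-wave rectification followed by a 4th-order low-pass Butterworth filter
        b, a = butter(4, cutoff, btype="low", fs=fs)
        return filtfilt(b, a, np.abs(semg))

    fs = 2048                                             # hypothetical sampling rate (Hz)
    t = np.arange(0, 5, 1 / fs)
    drive = 0.5 * (1 + np.sin(2 * np.pi * 0.2 * t))       # slowly varying activation level
    semg = drive * np.random.default_rng(0).standard_normal(t.size)   # synthetic sEMG channel
    force = 80.0 * drive**1.2                             # hypothetical force profile (N)

    env = emg_envelope(semg, fs)
    coeffs = np.polyfit(env, force, deg=3)                # low-order polynomial mapping
    force_hat = np.polyval(coeffs, env)                   # estimated force from the envelope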
45

Kernel Methods for Nonlinear Identification, Equalization and Separation of Signals

Vaerenbergh, Steven Van 03 February 2010
Over the last decade, kernel methods have proven to be very effective techniques for solving nonlinear problems. Part of their success can be attributed to their solid mathematical foundation in reproducing kernel Hilbert spaces (RKHS) and to the fact that they lead to convex optimization problems. In addition, they are universal approximators and require only moderate computational complexity. Thanks to these characteristics, kernel methods are an attractive alternative to traditional nonlinear techniques such as Volterra series, polynomials and neural networks. Kernel methods also have certain drawbacks that must be addressed properly in each application, for example the difficulties associated with handling large data sets and the overfitting problems that arise when working in infinite-dimensional spaces. In this work, a set of kernel-based algorithms is developed to solve a series of nonlinear problems in signal processing and communications, in particular problems of identification and equalization of nonlinear systems and of nonlinear blind source separation (BSS). The thesis is divided into three parts. The first part is a study of the literature on kernel methods. In the second part, a series of new techniques based on kernel regression is proposed to solve identification and equalization problems for Wiener and Hammerstein systems, in supervised and blind settings. As an additional contribution, the field of kernel adaptive filtering is studied and two kernel recursive least-squares (KRLS) algorithms are proposed. The third part deals with blind decoding problems in which the sources are sparse, as is the case in digital communications. The sparsity of the sources is reflected in the clustering of the observed samples, which has made it possible to design decoding techniques based on spectral clustering. The proposed techniques have been applied to the blind decoding of fast time-varying MIMO channels and to post-nonlinear blind source separation. / In the last decade, kernel methods have become established techniques for nonlinear signal processing. Thanks to their foundation in the solid mathematical framework of reproducing kernel Hilbert spaces (RKHS), kernel methods yield convex optimization problems. In addition, they are universal nonlinear approximators and require only moderate computational complexity. These properties make them an attractive alternative to traditional nonlinear techniques such as Volterra series, polynomial filters and neural networks. This work studies the application of kernel methods to nonlinear problems in signal processing and communications. Specifically, the problems treated in this thesis are the identification and equalization of nonlinear systems, in both supervised and blind scenarios, kernel adaptive filtering, and nonlinear blind source separation. In a first contribution, a framework for identification and equalization of nonlinear Wiener and Hammerstein systems is designed, based on kernel canonical correlation analysis (KCCA). As a result of this study, various other related techniques are proposed, including two kernel recursive least-squares (KRLS) algorithms with fixed memory size, and a KCCA-based blind equalization technique for Wiener systems that uses oversampling. The second part of this thesis treats two nonlinear blind decoding problems involving sparse data, posed under conditions that do not permit the application of traditional clustering techniques. For these problems, which include the blind decoding of fast time-varying MIMO channels, a set of algorithms based on spectral clustering is designed. The effectiveness of the proposed techniques is demonstrated through various simulations.
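To give a flavour of the kernel regression machinery that KRLS algorithms make recursive and memory-bounded, here is a minimal batch kernel ridge regression sketch with a Gaussian kernel on a toy nonlinear identification problem; the kernel width, regularization and data are illustrative and unrelated to the experiments in the thesis.

    import numpy as np

    def gaussian_kernel(A, B, sigma=1.0):
        # pairwise Gaussian (RBF) kernel between the rows of A and B
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return np.exp(-d2 / (2.0 * sigma**2))

    def fit_kernel_ridge(X, y, lam=1e-2, sigma=1.0):
        # batch solution alpha = (K + lam*I)^{-1} y; KRLS updates this solution recursively
        K = gaussian_kernel(X, X, sigma)
        return np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(X_train, alpha, X_test, sigma=1.0):
        return gaussian_kernel(X_test, X_train, sigma) @ alpha

    # toy static nonlinearity (a stand-in for the memoryless part of a Wiener system)
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, (200, 1))
    y = np.tanh(X[:, 0]) + 0.05 * rng.standard_normal(200)
    alpha = fit_kernel_ridge(X, y)
    y_hat = predict(X, alpha, X)

The batch solve above costs O(N^3); the appeal of the fixed-memory KRLS variants mentioned in the abstract is precisely to avoid recomputing and storing the full kernel matrix as samples arrive.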
