11

Compression multimodale du signal et de l’image en utilisant un seul codeur / Multimodal compression of digital signal and image data using a unique encoder

Zeybek, Emre 24 March 2011 (has links)
The objective of this thesis is to study and analyze a new compression strategy whose principle is to compress data from several modalities jointly, using a single encoder. This approach is called "Multimodal Compression": an image and an audio signal, for example, can be compressed together by a single image encoder (e.g. a standard codec), without the need to integrate an audio codec. The basic idea developed in this thesis is to insert the samples of a signal in place of selected pixels of a "carrier" image, while preserving the quality of the information after encoding and decoding. This technique should not be confused with watermarking or steganography, since the goal is not to hide one piece of information inside another. In Multimodal Compression, the main objectives are, on the one hand, to improve compression performance in terms of rate-distortion and, on the other hand, to optimize the use of the hardware resources of a given embedded system (e.g. by accelerating encoding/decoding time). Throughout this report we study and analyze variants of Multimodal Compression whose core consists in designing mixing functions applied before encoding and separation functions applied after decoding. The approach is validated on standard images and signals as well as on specific data such as biomedical images and signals. The work concludes with an extension of the Multimodal Compression strategy to video.
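A minimal sketch of the mixing and separation idea described above, assuming 8-bit audio samples overwrite a regular grid of carrier pixels; the function names, the grid step and the use of NumPy are illustrative choices, not the codec-integrated implementation from the thesis.

```python
import numpy as np

def mix(image: np.ndarray, audio: np.ndarray, step: int = 4) -> np.ndarray:
    """Insert 8-bit audio samples into a regular grid of carrier-image pixels."""
    mixed = image.copy()
    flat = mixed.ravel()
    positions = np.arange(0, step * len(audio), step)  # every `step`-th pixel
    flat[positions] = audio                            # overwrite carrier pixels with audio samples
    return mixed

def separate(decoded: np.ndarray, n_samples: int, step: int = 4):
    """Recover the audio samples and the (degraded) carrier image after decoding."""
    flat = decoded.ravel()
    positions = np.arange(0, step * n_samples, step)
    audio = flat[positions].copy()
    return audio, decoded

# Example: a 256x256 8-bit carrier and 1000 audio samples quantized to 8 bits
image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
audio = np.random.randint(0, 256, 1000, dtype=np.uint8)
mixed = mix(image, audio)            # `mixed` would then be compressed by any image codec (e.g. JPEG 2000)
rec_audio, rec_image = separate(mixed, len(audio))
```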
12

Preliminary study for detection and classification of swallowing sound / Étude préliminaire de détection et classification des sons de la déglutition

Khlaifi, Hajer 21 May 2019 (has links)
The diseases that alter the swallowing process are varied, affecting the patient's quality of life and ability to function in society. The exact nature and severity of the pre/post-treatment changes depend on the location of the anomaly. Clinically, effective swallowing rehabilitation generally depends on including a video-fluoroscopic evaluation of the patient's swallowing in the post-treatment assessment of patients at risk of aspiration. Other means are available, such as fiberoptic endoscopy. The drawback of these evaluation approaches is that they are very invasive, although they make it possible to observe the swallowing process and to identify areas of dysfunction with high accuracy. "Prevention is better than cure" is the fundamental principle of medicine in general.
In this context, this thesis focuses on the remote monitoring of patients, and more specifically on monitoring the functional evolution of the swallowing process in people at risk of dysphagia, whether at home or in an institution, using a minimum of non-invasive sensors. This motivates monitoring the swallowing process from its acoustic signature alone, modeling the process as a sequence of acoustic events occurring within a specific time frame. The main problem in processing such an acoustic signal is the automatic detection of the relevant sound segments, a crucial step for the automatic classification of sounds during food intake and hence for automatic monitoring. Detecting the relevant signal reduces the complexity of the subsequent analysis and characterization of a particular swallowing event. State-of-the-art algorithms for detecting swallowing sounds against environmental noise did not prove sufficiently accurate, hence the idea of applying an adaptive threshold to the signal resulting from a wavelet decomposition. The issues related to the classification of sounds in general, and of swallowing sounds in particular, are addressed with a hierarchical analysis that first identifies the swallowing sound segments and then decomposes each of them into three characteristic sounds, consistent with the physiology of the process. The coupling between detection and classification is also addressed. The real-time implementation of the detection algorithm has been carried out, while that of the classification algorithm remains a perspective; its clinical use is planned.
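The adaptive-threshold detection step could look roughly like the sketch below, which assumes PyWavelets is available; the wavelet choice, window length and threshold rule are illustrative assumptions rather than the thesis parameters.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def detect_segments(x, fs, wavelet="db4", level=4, win=0.05):
    """Flag windows whose wavelet detail energy exceeds an adaptive threshold."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    details = np.concatenate(coeffs[1:])                 # detail coefficients only
    sigma = np.median(np.abs(details)) / 0.6745          # robust noise-level estimate
    # Reconstruct a band-limited version of the signal from the detail bands
    rec = pywt.waverec([np.zeros_like(coeffs[0])] + coeffs[1:], wavelet)[: len(x)]
    n = int(win * fs)                                    # analysis window in samples
    energy = np.array([np.sum(rec[i:i + n] ** 2) for i in range(0, len(rec) - n, n)])
    thr = np.mean(energy) + 3 * sigma ** 2 * n           # adaptive, noise-scaled threshold
    return np.where(energy > thr)[0] * n                 # start samples of detected windows
```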
13

Structural Health Monitoring Using Multiple Piezoelectric Sensors and Actuators

Kabeya, Kazuhisa III 03 June 1998 (has links)
A piezoelectric impedance-based structural health monitoring technique was developed at the Center for Intelligent Material Systems and Structures. It has been successfully implemented on several complex structures to detect incipient-type damage such as small cracks or loose connections. However, there are still some problems to be solved before full scale development and commercialization can take place. These include: i) the damage assessment is influenced by ambient temperature change; ii) the sensing area is small; and iii) the ability to identify the damage location is poor. The objective of this research is to solve these problems in order to apply the impedance-based structural health monitoring technique to real structures. First, an empirical compensation technique to minimize the temperature effect on the damage assessment has been developed. The compensation technique utilizes the fact that the temperature change causes vertical and horizontal shifts of the signature pattern in the impedance versus frequency plot, while damage causes somewhat irregular changes. Second, a new impedance-based technique that uses multiple piezoelectric sensor-actuators has been developed which extends the sensing area. The new technique relies on the measurement of electrical transfer admittance, which gives us mutual information between multiple piezoelectric sensor-actuators. We found that this technique increases the sensing region by at least an order of magnitude. Third, a time domain technique to identify the damage location has been proposed. This technique also uses multiple piezoelectric sensors and actuators. The basic idea utilizes the pulse-echo method often used in ultrasonic testing, together with wavelet decomposition to extract traveling pulses from a noisy signal. The results for a one-dimensional structure show that we can determine the damage location to within a spatial resolution determined by the temporal resolution of the data acquisition. The validity of all these techniques has been verified by proof-of-concept experiments. These techniques help bring conventional impedance-based structural health monitoring closer to full scale development and commercialization. / Master of Science
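As a rough illustration of the temperature-compensation idea (the vertical and horizontal shifts of the impedance signature are removed before the damage metric is computed), the sketch below searches over horizontal shifts and subtracts the mean offset before an RMSD comparison; the function name and search strategy are assumptions, not the implementation developed in this work.

```python
import numpy as np

def compensated_damage_metric(baseline, measured, max_shift=20):
    """Remove temperature-induced horizontal/vertical shifts, then compute RMSD.

    `baseline` and `measured` are real impedance signatures sampled on the same
    frequency grid; `max_shift` is the largest horizontal shift (in samples)
    searched.  Illustrative only -- edge samples simply wrap around.
    """
    best = None
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(measured, s)
        offset = np.mean(baseline) - np.mean(shifted)     # vertical (magnitude) shift
        rmsd = np.sqrt(np.mean((baseline - (shifted + offset)) ** 2))
        if best is None or rmsd < best:
            best = rmsd
    return best  # residual change attributable to damage rather than temperature
```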
14

Scalable Estimation and Testing for Complex, High-Dimensional Data

Lu, Ruijin 22 August 2019 (has links)
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolvement, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanism, one needs a fast and reliable analytical approach to extract useful information from the wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach will be applied to two applications: estimating mutation rates of a generalized birth-death process based on fluctuation experimental data and estimating the parameters of targets based on foliage echoes. The second part focuses on functional testing. We consider using multiple testing in basis-space via p-value guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power as compared to point-wise testing in data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: the detection of regions of spectral curves associated with pre-cancer using 1-dimensional fluorescence spectroscopy data and the detection of disease-related regions using 3-dimensional Alzheimer's Disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during their early development, and subjects are measured on misaligned time markers. In this analysis, we examine the asymmetric patterns and increase/decrease trend in the monkeys' brains across time. / Doctor of Philosophy / With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. 
The second part focuses on developing a powerful testing method for functional data that can be used to detect important regions. We establish desirable properties of this approach, including strong control of the family-wise error rate and improved power over point-wise testing. Its effectiveness is demonstrated through two applications: the detection of regions of the spectrum that are related to pre-cancer using fluorescence spectroscopy data, and the detection of disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys' brains during early development, where subjects are measured on misaligned time markers. Using a functional data estimation and testing approach, we are able to: (1) identify regions that are asymmetric between the right and left brain across time, and (2) identify spatial regions on the cortical surface that show an increase or decrease in cortical measurements over time.
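A minimal sketch of the likelihood-free idea in the first part, using wavelet coefficients of the data as summary statistics inside a plain rejection-ABC loop; `simulate`, `prior_sample`, the tolerance and the use of PyWavelets are illustrative placeholders, not the dissertation's algorithm.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def abc_rejection(observed, simulate, prior_sample, n_iter=10000, eps=1.0,
                  wavelet="db4", level=3):
    """Rejection ABC with wavelet coefficients as summary statistics.

    `simulate(theta)` draws synthetic data (same length as `observed`) for a
    parameter value theta, and `prior_sample()` draws theta from the prior.
    """
    def summary(x):
        return np.concatenate(pywt.wavedec(x, wavelet, level=level))

    s_obs = summary(observed)
    accepted = []
    for _ in range(n_iter):
        theta = prior_sample()
        s_sim = summary(simulate(theta))
        if np.linalg.norm(s_sim - s_obs) < eps:   # keep draws whose summaries are close to the data
            accepted.append(theta)
    return np.array(accepted)                     # approximate posterior sample
```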
15

Méthodes de reconstruction et de quantification pour la microscopie de super-résolution par localisation de molécules individuelles / Reconstruction and quantification methods for single-molecule based super-resolution microscopy

Kechkar, Mohamed Adel 20 December 2013 (has links)
The field of fluorescence microscopy has undergone a real revolution in recent years, reaching nanometric resolutions well below the diffraction limit predicted by Abbe more than a century ago. Single-molecule localization techniques such as PALM (Photo-Activation Light Microscopy) or (d)STORM (direct Stochastic Optical Reconstruction Microscopy) allow the reconstruction of images of biological samples in 2 and 3 dimensions, with close to molecular resolution. However, while these techniques require relatively straightforward instrumentation, they need heavy computation, which limits their routine use: several tens of thousands of raw images containing more than one million molecules must be acquired and analyzed to reconstruct a single super-resolution image. Most of the available tools require post-acquisition processing, making the acquisition protocol considerably heavier. In addition, quantifying the organization, the dynamics and the stoichiometry of molecular complexes at nanometer scales can be a key to elucidating the origin of certain diseases. These new localization microscopy techniques offer such capabilities, but the dedicated analysis methods still have to be developed.
In order to make this new generation of localization microscopy usable in routine by non-expert experimenters, it is essential to develop localization and quantitative analysis methods that are efficient, simple to use and quantitative. In this PhD thesis, we first developed a new technique for real-time localization and reconstruction based on wavelet decomposition and the use of GPU cards for super-resolution microscopy in 2 and 3 dimensions. Second, we developed a quantitative method based on the visualization and photophysics of organic fluorophores to measure the stoichiometry of AMPA receptors in synapses at the nanometer scale.
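The wavelet-based detection step can be pictured roughly as band-pass filtering of each raw frame followed by adaptive thresholding and local-maximum extraction, as in the sketch below; the smoothing sizes and threshold factor are assumptions, and the thesis GPU implementation is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter

def detect_molecules(frame, k_sigma=3.0):
    """Wavelet-style band-pass detection of single molecules in one raw frame.

    Approximates the first two wavelet planes with two smoothing scales and
    keeps local maxima above an adaptive threshold.
    """
    smooth1 = uniform_filter(frame.astype(float), size=3)
    smooth2 = uniform_filter(frame.astype(float), size=7)
    band = smooth1 - smooth2                          # band-pass plane containing the spots
    thr = k_sigma * np.std(band)                      # adaptive threshold on the plane
    peaks = (band == maximum_filter(band, size=5)) & (band > thr)
    ys, xs = np.nonzero(peaks)
    return list(zip(xs, ys))                          # coarse molecule positions (pixel units)
```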
16

Interprétation des signaux cérébraux pour l’autonomie des handicapés : Système de reconnaissance de mots imaginés / Cerebral signal processing for the autonomy of the handicapped : Imagery recognition system

Abdallah, Nassib 20 December 2018 (has links)
Brain-Machine Interfaces (BCI) offer a way to restore several functions such as movement and speech. Building a BCI involves four main phases: data recording, signal preprocessing, feature extraction and selection, and classification. In this report we present a new imagined-word recognition system based on a non-invasive (EEG) and portable acquisition technique, intended to help people with specific disabilities communicate with the outside world. The thesis includes a system named FEASR for the construction of a relevant and optimized database. This database has been tested with several classification methods, reaching a maximum recognition rate of 83.4% for five words imagined in Arabic. In addition, we discuss the impact of optimization algorithms (selection of sensors over Wernicke's area, principal component analysis, and selection of the sub-bands resulting from the discrete wavelet decomposition) on the recognition rates as a function of the size of our database and of its reduction.
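One way to picture the sub-band feature extraction feeding the classifiers is the hedged sketch below, which assumes PyWavelets and scikit-learn; the wavelet, decomposition level and SVM choice are illustrative, not the FEASR pipeline itself.

```python
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def dwt_band_energies(epoch, wavelet="db4", level=5):
    """Relative energy of each DWT sub-band for one EEG channel epoch."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / energies.sum()

def features(epochs):
    """Stack sub-band energies over all channels; `epochs` is (trials, channels, samples)."""
    return np.array([[dwt_band_energies(ch) for ch in trial] for trial in epochs]
                    ).reshape(len(epochs), -1)

# Hypothetical usage with (n_trials, n_channels, n_samples) EEG epochs and word labels:
# X, y = features(eeg_epochs), labels
# print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```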
17

SPARSE DISCRETE WAVELET DECOMPOSITION AND FILTER BANK TECHNIQUES FOR SPEECH RECOGNITION

Jingzhao Dai (6642491) 11 June 2019 (has links)
Speech recognition is widely applied to translation from speech to text, voice-driven commands, human-machine interfaces and so on [1]-[8], and has become increasingly prevalent in modern life. To improve the accuracy of speech recognition, various algorithms such as artificial neural networks and hidden Markov models have been developed [1], [2].
In this thesis, speech recognition tasks with various classifiers are investigated. The classifiers employed include the support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF) and convolutional neural network (CNN). Two novel feature extraction methods, sparse discrete wavelet decomposition (SDWD) and bandpass filtering (BPF) based on the Mel filter banks [9], are developed and proposed. To meet the diversity of the classification algorithms, both one-dimensional (1D) and two-dimensional (2D) features are obtained. The 1D features are the array of power coefficients in frequency bands, dedicated to training the SVM, KNN and RF classifiers, while the 2D features span both the frequency domain and temporal variations: each 2D feature consists of the power values in the decomposed bands versus consecutive speech frames. Most importantly, the 2D features with geometric transformations are adopted to train the CNN.
Speech recordings from male and female speakers are taken from a self-recorded data set as well as a standard data set. First, recordings with little noise and clear pronunciation are processed with the proposed feature extraction methods; after many trials and experiments using this dataset, a high recognition accuracy is achieved. These feature extraction methods are then applied to the standard recordings, which have random characteristics with ambient noise and unclear pronunciation. Many experimental results validate the effectiveness of the proposed feature extraction techniques.
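A hedged sketch of what a sparse wavelet feature extractor of this kind could look like is given below; the frame length, wavelet, sparsity rule and reliance on PyWavelets are assumptions for illustration and do not reproduce the thesis code.

```python
import numpy as np
import pywt

def sdwd_features(signal, frame_len=400, hop=160, wavelet="db4", level=5, keep=0.2):
    """Sparse-DWT features: per-frame sub-band powers with small coefficients zeroed.

    Returns a 2D array (frames x sub-bands) usable as a CNN input or, averaged
    over time, as a 1D feature vector for SVM/KNN/RF.
    """
    feats = []
    for start in range(0, len(signal) - frame_len, hop):
        frame = signal[start:start + frame_len]
        coeffs = pywt.wavedec(frame, wavelet, level=level)
        powers = []
        for c in coeffs:
            thr = np.quantile(np.abs(c), 1.0 - keep)      # keep only the largest coefficients
            c_sparse = np.where(np.abs(c) >= thr, c, 0.0)
            powers.append(np.sum(c_sparse ** 2) / len(c))  # power of the sparsified sub-band
        feats.append(powers)
    return np.array(feats)                                 # shape: (n_frames, level + 1)
```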
18

Réduction de dimension en apprentissage supervisé : applications à l’étude de l’activité cérébrale / Dimension reduction in supervised learning: applications to the study of brain activity

Vezard, Laurent 13 December 2013 (has links)
The aim of this work is to develop a method able to automatically determine the alertness state of humans. Such a task is relevant to diverse domains where a person is expected or required to be in a particular state: for instance, pilots, security personnel or medical personnel are expected to be in a highly alert state, and this method could help to confirm this or to detect possible problems. In this work, electroencephalographic data (EEG) from 58 subjects in two distinct vigilance states (high and low alertness) were collected via a cap with 58 electrodes, so a binary classification problem is considered. In order to use this work in real-world applications, it is necessary to build a prediction method that requires only a small number of sensors (electrodes), so as to minimize the installation time and the cost of the electrode cap. During this thesis, several approaches were developed. A first approach uses a pre-processing of the EEG signals based on a discrete wavelet decomposition to extract the energy of each frequency band in the signal.
A linear regression is then performed on the energies of some of these frequency bands and the slope of this regression is retained as the feature. A genetic algorithm (GA) is used to optimize the choice of the frequency bands on which the regression is performed; moreover, the GA selects a single electrode. A second approach is based on the Common Spatial Pattern method (CSP), which defines linear combinations of the original variables to obtain synthetic signals useful for the classification task. In this work, a GA as well as sequential search methods are proposed to select the subset of electrodes kept in the CSP computation. Finally, a sparse CSP algorithm, based on existing work on sparse principal component analysis, was developed. The results of the different approaches are detailed and compared. This work resulted in a model allowing fast and reliable prediction of the alertness state of a new individual.
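The single-electrode feature of the first approach can be sketched as follows, assuming PyWavelets; the wavelet, decomposition level and fixed band indices stand in for the choices that the genetic algorithm optimizes in the thesis.

```python
import numpy as np
import pywt

def slope_feature(signal, wavelet="db4", level=5, bands=(1, 2, 3)):
    """Single-electrode alertness feature: slope of a linear fit to selected
    DWT sub-band energies.

    `bands` indexes the detail levels used for the fit; here they are fixed
    for illustration, whereas the thesis selects them (and the electrode)
    with a genetic algorithm.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(coeffs[b] ** 2) for b in bands])
    slope = np.polyfit(np.arange(len(bands)), energies, 1)[0]   # linear-regression slope
    return slope
```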
