  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

Vers une gestion décentralisée des données des réseaux de capteurs dans le contexte des smart grids / Towards decentralized data management of smart grids' sensor networks

Matta, Natalie 20 March 2014 (has links)
This thesis focuses on the decentralized management of data collected by wireless sensor networks deployed in a smart grid, i.e. the new-generation electricity network. It proposes a decentralized architecture based on multi-agent systems for both data and energy management in the smart grid. In particular, our work deals with the data management of sensor networks deployed in the distribution subsystem of a smart grid. It aims at answering two key challenges: (1) detection and identification of failures and disturbances requiring swift reporting and appropriate reactions; (2) efficient management of the growing volume of data caused by the proliferation of sensors and other sensing entities such as smart meters. Managing this data can call upon several methods, including the aggregation of data packets, on which we focus in this thesis. To this end, we propose to aggregate (PriBaCC) and/or correlate (CoDA) the contents of these data packets in a decentralized manner. Data processing is thus carried out faster, leading to rapid and efficient decision-making for energy management. Validating our contributions by simulation has shown that they meet the identified challenges and improve on existing approaches, particularly in reducing the volume of data to be managed and the transmission delay of high-priority data.
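The aggregation idea can be sketched in miniature: intermediate sensor nodes summarize their children's readings and forward a single packet, so the volume reaching the sink shrinks. This is a generic, hypothetical illustration, not the PriBaCC or CoDA algorithms themselves; all names and values are made up.

```python
# A hypothetical sketch of in-network aggregation (not the PriBaCC/CoDA
# algorithms): each node merges its children's reports with its own
# reading and forwards a single summary packet toward the sink.

def aggregate(readings):
    """Collapse raw readings into one summary (count/min/max/mean)."""
    values = [r["value"] for r in readings]
    return {"count": len(values), "min": min(values),
            "max": max(values), "mean": sum(values) / len(values)}

class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def collect(self, sense):
        """Return one aggregated packet covering this node's subtree."""
        readings = [{"node": self.name, "value": sense(self.name)}]
        for child in self.children:
            readings.append(child.collect(sense))
        summary = aggregate(readings)
        return {"node": self.name, "value": summary["mean"], "agg": summary}

# Tiny tree: n1 forwards one packet that already covers n2 and n3.
n2, n3 = Node("n2"), Node("n3")
n1 = Node("n1", [n2, n3])
readings = {"n1": 10.0, "n2": 12.0, "n3": 14.0}   # made-up sensor values
packet = n1.collect(lambda name: readings[name])
print(packet["agg"])  # {'count': 3, 'min': 10.0, 'max': 14.0, 'mean': 12.0}
```

Three raw readings reach the sink as one packet; in a deep tree the reduction in traffic compounds at every level.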
302

Design and Analysis of Techniques for Multiple-Instance Learning in the Presence of Balanced and Skewed Class Distributions

Wang, Xiaoguang January 2015 (has links)
With the continuous expansion of data availability in many large-scale, complex, and networked systems, such as surveillance, security, the Internet, and finance, it becomes critical to advance the fundamental understanding of knowledge discovery and analysis from raw data to support decision-making processes. Existing knowledge discovery and data analysis techniques have shown great success in many real-world applications, such as applying Automatic Target Recognition (ATR) methods to detect targets of interest in imagery, drug activity prediction, and computer vision recognition. Among these techniques, Multiple-Instance (MI) learning differs from standard classification in that its input is a set of bags, each containing many instances; the instances in a bag are not labeled individually, the bags themselves are. Much progress has been made in this area, but some problems remain open. In this thesis, we focus on two topics in MI learning: (1) investigating the relationship between MI learning and other multiple-pattern learning methods, including multi-view learning, data fusion methods, and multi-kernel SVM; (2) dealing with the class imbalance problem in MI learning. For the first topic, three different learning frameworks are presented for general MI learning: the first uses multiple-view approaches, the second is a data fusion framework, and the third, an extension of the first, uses multiple-kernel SVM. Experimental results show that the presented approaches work well on the MI problem. The second topic is concerned with the imbalanced MI problem, where we investigate the performance of learning algorithms in the presence of underrepresented data and severe class distribution skews.
For this problem, we propose three solution frameworks: a data re-sampling framework, a cost-sensitive boosting framework, and an adaptive instance-weighted boosting SVM (named IB_SVM) for MI learning. Experimental results, on both benchmark and application datasets, show that the proposed frameworks are effective solutions to the imbalanced MI learning problem.
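One simple instance of a data re-sampling scheme for MI learning can be sketched as follows: oversampling is done at the level of whole bags, preserving the bag structure that distinguishes MI learning from standard classification. This generic sketch (function names and data are illustrative) is not the thesis's exact framework.

```python
# Hedged sketch: oversample minority-class *bags* (not instances) until
# the class distribution is balanced. Data and names are made up.
import random

def oversample_bags(bags, labels, seed=0):
    """bags: list of instance lists; labels: 0/1 bag labels."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # Duplicate randomly chosen minority bags to close the gap.
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    idx = list(range(len(bags))) + extra
    return [bags[i] for i in idx], [labels[i] for i in idx]

# 1 positive bag vs 3 negative bags -> balanced after re-sampling.
bags = [[[0.1], [0.2]], [[1.0]], [[0.9], [1.1]], [[1.2]]]
labels = [1, 0, 0, 0]
new_bags, new_labels = oversample_bags(bags, labels)
print(sum(new_labels), len(new_labels) - sum(new_labels))  # 3 3
```

A cost-sensitive alternative would instead leave the data unchanged and weight minority-bag errors more heavily in the loss, as the boosting frameworks above do.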
303

Assessing and Improving Methods for the Effective Use of Landsat Imagery for Classification and Change Detection in Remote Canadian Regions

He, Juan Xia January 2016 (has links)
Canadian remote areas are characterized by a minimal human footprint, restricted accessibility, ubiquitous lichen/snow cover (e.g. the Arctic), or continuous forest with water bodies (e.g. the Sub-Arctic). Effective mapping of earth surface cover and land cover changes using free medium-resolution Landsat images in remote environments is a challenge due to the presence of spectrally mixed pixels, restricted field sampling and ground truthing, and the often relatively homogeneous cover in some areas. This thesis investigates how remote sensing methods can be applied to improve the capability of Landsat images for mapping earth surface features and land cover changes in Canadian remote areas. The investigation is conducted from four perspectives: 1) determining the continuity of Landsat-8 images for mapping surficial materials; 2) selecting classification algorithms that best address challenges involving mixed pixels; 3) applying advanced image fusion algorithms to improve Landsat spatial resolution while maintaining spectral fidelity and reducing the effects of mixed pixels on image classification and change detection; and 4) examining different change detection techniques, including post-classification comparisons and threshold-based methods employing PCA (Principal Components Analysis)-fused multi-temporal Landsat images to detect changes in Canadian remote areas. Three typical landscapes in Canadian remote areas are chosen in this research. The first is located in the Canadian Arctic and is characterized by ubiquitous lichen and snow cover. The second is located in the Canadian sub-Arctic and is characterized by well-defined land features such as highlands, ponds, and wetlands. The last is located in a forested highlands region with minimal built-environment features.
The thesis research demonstrates that the newly available Landsat-8 images can be a major data source for mapping Canadian geological information in Arctic areas once Landsat-7 is decommissioned. In addition, advanced classification techniques such as the Support Vector Machine (SVM) can generate satisfactory classification results in the context of mixed training data and minimal field sampling and ground truthing. The thesis provides a systematic investigation of how geostatistical image fusion can be used to improve the performance of Landsat images in identifying surface features. Finally, SVM-based post-classification comparison of multi-temporal images, and threshold-based analysis of PCA-fused bi-temporal Landsat images, are shown to be effective in detecting different aspects of vegetation change in a remote forested region in Ontario. This research provides a comprehensive methodology for employing free Landsat images for image classification and change detection in Canadian remote regions.
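The threshold-based change detection step can be sketched on synthetic bi-temporal data. This is a simplified stand-in: a plain band difference replaces the PCA-fused change component the thesis uses, and all pixel values are made up.

```python
# Hedged sketch of threshold-based change detection on a synthetic
# bi-temporal image pair: pixels whose (standardized) change magnitude
# exceeds 3 sigma are flagged as changed. NumPy only; toy values.
import numpy as np

rng = np.random.default_rng(42)
h, w = 32, 32
date1 = rng.normal(100.0, 5.0, (h, w))           # band at date 1
date2 = date1 + rng.normal(0.0, 2.0, (h, w))     # same scene + sensor noise
date2[8:16, 8:16] += 40.0                        # simulated change patch

change = date2 - date1        # stand-in for the PCA-derived change component
z = (change - change.mean()) / change.std()
changed = np.abs(z) > 3.0     # threshold rule: flag pixels beyond 3 sigma
print(int(changed.sum()))
```

The flagged pixels coincide with the simulated patch; with real Landsat data the change component would come from PCA on the stacked multi-temporal bands, and the threshold would be tuned per scene.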
304

Caractérisation de l'endommagement des composites à matrice polymère par une approche multi-technique non destructive / Characterization of damage in polymer-matrix composites by a non-destructive multi-technique approach

Harizi, Walid 11 December 2012 (has links)
This innovative study implements, within a single experimental protocol, three non-destructive characterization techniques simultaneously: acoustic emission, infrared thermography, and ultrasonic waves, for the characterization of damage in cross-ply [0/90]S Polymer Composite Materials (PCM). Each technique demonstrated its potential to reveal damage, depending on its intrinsic characteristics. Acoustic emission was used both in its classical form and coupled with data classification obtained by k-means and Kohonen maps. Infrared thermography was studied in both its passive and active forms, while the ultrasonic methods exploited the amplitude and velocity of longitudinal and Lamb waves respectively. The multi-technique approach adopted in this work proved very effective for obtaining a full diagnosis of the health state of the material at rest and under different levels of tensile loading. The "complementarity" of the three techniques also turned out to be more exploitable than their "redundancy". Data fusion was used to reach a reliable, comprehensive, and more credible decision about the different damage mechanisms that may appear in a PCM material; this was possible only for the two imaging techniques, ultrasonic C-scan and infrared thermography. Overall, the results show that the three techniques are all potentially able to qualify the damage state of the material, but they do not quantify it in the same way.
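The acoustic-emission classification step can be illustrated with a minimal k-means grouping of AE hits by two assumed features, amplitude and duration. The feature choice and synthetic values are illustrative only; the thesis's actual feature set and clustering setup are richer.

```python
# Hedged sketch: group acoustic-emission hits into k clusters by
# (amplitude, duration) features with plain k-means. Synthetic data.

def kmeans(points, k=2, iters=20):
    # Deterministic spread-out initialization (good enough for a sketch).
    centers = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two synthetic AE populations with made-up feature values, e.g.
# low-amplitude matrix cracking vs high-amplitude fibre breakage.
matrix_hits = [(40 + i % 5, 20 + i % 3) for i in range(30)]
fibre_hits = [(90 + i % 5, 60 + i % 3) for i in range(30)]
centers, groups = kmeans(matrix_hits + fibre_hits, k=2)
print(sorted(len(g) for g in groups))  # [30, 30]
```

In practice the cluster labels would then be mapped to damage mechanisms by comparing cluster centers with known AE signatures, which is where the Kohonen-map coupling mentioned above helps.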
305

Tolérance aux fautes pour la perception multi-capteurs : application à la localisation d'un véhicule intelligent / Fault tolerance for multi-sensor perception : application to the localization of an intelligent vehicle

Bader, Kaci 05 December 2014 (has links)
Perception is a fundamental input of robotic systems, particularly for localization, navigation, and interaction with the environment. However, the data perceived by these systems are often complex and subject to significant imprecision. To overcome these problems, the multi-sensor approach uses either multiple sensors of the same type, to exploit their redundancy, or sensors of different types, to exploit their complementarity, in order to reduce sensor inaccuracies and uncertainties. Validating this data-fusion approach raises two major problems. First, the behavior of fusion algorithms is difficult to predict, which makes them hard to verify with formal approaches. Second, the open environment of robotic systems generates a very large execution context, which makes testing difficult and costly. The purpose of this work is to propose an alternative to validation by developing fault-tolerance mechanisms: since it is difficult to eliminate every fault from the perception system, we seek to limit their impact on its operation. We studied the fault tolerance intrinsically provided by data fusion by formally analyzing data-fusion algorithms, and we proposed detection and recovery mechanisms suited to multi-sensor perception. We then implemented the proposed mechanisms in a vehicle-localization application using Kalman-filter data fusion. Finally, we evaluated the proposed mechanisms by replaying real data and injecting faults, and demonstrated their effectiveness against both hardware and software faults.
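One classical detection mechanism in Kalman-filter fusion is an innovation gate: a measurement whose innovation is statistically too large is flagged as faulty and excluded from the update. The one-dimensional sketch below uses made-up noise values and is not the thesis's actual vehicle-localization filter.

```python
# Hedged sketch of innovation-gated Kalman filtering: reject a measurement
# when its squared innovation, normalized by the predicted innovation
# variance, exceeds a gate (here ~3-sigma). 1-D toy model.
x, P = 0.0, 1.0          # state estimate (position) and its variance
Q, R = 0.01, 0.25        # process and measurement noise variances
GATE = 9.0               # 3-sigma squared gate on the normalized innovation

def step(x, P, u, z):
    """One predict/update cycle; returns (new_x, new_P, accepted_flag)."""
    x_pred, P_pred = x + u, P + Q          # predict with known motion u
    innov = z - x_pred                     # measurement innovation
    S = P_pred + R                         # innovation variance
    if innov * innov / S > GATE:           # fault detected: skip the update
        return x_pred, P_pred, False
    K = P_pred / S                         # Kalman gain
    return x_pred + K * innov, (1 - K) * P_pred, True

# Healthy measurements track the motion; one faulty spike is rejected.
measurements = [1.0, 2.1, 2.9, 42.0, 5.0]   # 42.0 simulates a sensor fault
accepted = []
for z in measurements:
    x, P, ok = step(x, P, u=1.0, z=z)
    accepted.append(ok)
print(accepted)  # [True, True, True, False, True]
```

The filter coasts on its prediction through the faulty sample and resumes updating as soon as consistent measurements return, which is the recovery behavior the detection mechanisms above aim to provide.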
306

Fusion techniques for iris recognition in degraded sequences / Techniques de fusion pour la reconnaissance de personne par l’iris dans des séquences dégradées

Othman, Nadia 11 March 2016 (has links)
Among the many biometric modalities, the iris is considered highly reliable, with a remarkably low error rate. This excellent performance is obtained by controlling the quality of the captured images and by imposing strong constraints on the user, such as standing still close to the camera. However, in many real-world security applications such as access control and airport boarding, these constraints are no longer suitable. Under such non-ideal conditions, the resulting iris images suffer from diverse degradations (lack of resolution, artifacts, etc.) that negatively impact the recognition rate. One way to circumvent this problem is to exploit the redundancy arising from the availability of several images of the same eye in the recorded sequence. This thesis therefore focuses on how to fuse the information available in the sequence in order to improve performance. Diverse fusion schemes have been proposed in the literature; they agree that the quality of the images used in the fusion is a crucial factor for its success. Several iris quality factors must be considered, and diverse methods have been proposed to quantify them. These quality measures are generally combined into a single global value, but there is no universal combination scheme, and some a priori knowledge has to be inserted, which is not a trivial task.

To deal with these drawbacks, we propose a novel way of measuring and integrating quality measures in a super-resolution-based fusion scheme. This strategy addresses two common issues in iris recognition: the lack of resolution and the presence of artifacts in the captured images. The first part of the doctoral work elaborates a relevant quality metric able to quantify the quality of iris images locally. The measure relies on a Gaussian Mixture Model estimation of the distribution of clean iris texture. Its interest lies in 1) its simplicity; 2) the fact that its computation does not require identifying in advance the types of degradation that can occur; 3) its uniqueness, avoiding the estimation of several quality factors and an associated combination rule; and 4) its ability to measure intrinsic image quality and, in particular, to detect segmentation errors. In the second part of the thesis, we propose two novel quality-based fusion schemes. First, our metric is used as a global quality measure in two ways: as a selection tool for detecting the best images of the sequence, and as a pixel-level weighting factor in the super-resolution scheme, so that the contribution of each image to the fused result depends on its overall quality. Second, taking advantage of the local character of our measure, we propose an original fusion scheme based on local weighting at the pixel level, which accounts for the fact that degradations can vary from one part of the image to another. Regions free from occlusions thus contribute more to the reconstruction of the fused image than regions with artifacts, improving the quality of the resulting image and hence recognition performance. The effectiveness of the proposed approaches is demonstrated on several commonly used databases: MBGC, Casia-Iris-Thousand, and QFIRE at three different distances (5, 7, and 11 feet). We separately investigate the improvement brought by super-resolution, global quality, and local quality in the fusion process. The results show an important improvement brought by the use of global quality, which is further increased by using local quality.
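The pixel-level weighting can be sketched as a quality-weighted average. This is a minimal illustration with synthetic 2x2 "frames" and hand-set quality maps; the thesis derives the local quality maps from a GMM model of iris texture and fuses within a super-resolution scheme.

```python
# Hedged sketch of quality-weighted fusion at the pixel level: each frame
# contributes in proportion to a local quality map with values in [0, 1].
import numpy as np

def fuse(frames, quality_maps, eps=1e-12):
    """Weighted average: fused(p) = sum_i q_i(p)*f_i(p) / sum_i q_i(p)."""
    f = np.asarray(frames, dtype=float)
    q = np.asarray(quality_maps, dtype=float)
    denom = q.sum(axis=0)
    fused = (q * f).sum(axis=0) / np.maximum(denom, eps)
    return np.where(denom > 0, fused, 0.0)   # 0 where no frame has quality

frames = [np.array([[10.0, 20.0], [30.0, 40.0]]),
          np.array([[99.0, 22.0], [28.0, 42.0]])]   # 99.0 is an artifact
quality = [np.ones((2, 2)),
           np.array([[0.0, 1.0], [1.0, 1.0]])]      # artifact gets weight 0
fused = fuse(frames, quality)
print(fused[0, 0])  # 10.0 -- the artifact pixel is excluded from the fusion
```

Clean pixels average across frames, while the artifact pixel is reconstructed from the good frame alone, which is exactly the behavior the local weighting aims for.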
307

Návrh algoritmu pro fúzi dat navigačních systémů GPS a INS / Navigation algorithm for INS/GPS Data Fusion

Pálenská, Markéta January 2013 (has links)
This master's thesis deals with the design of an extended Kalman filter algorithm that integrates data from an inertial navigation system (INS) and a global positioning system (GPS). Part of the algorithm is the INS mechanization itself, which computes the aircraft's velocity, geographic position, and attitude angles from accelerometer and gyroscope data. Because INS errors grow quickly, the output is corrected with velocity and position values obtained from GPS. The resulting algorithm is implemented in Simulink. The thesis also derives the individual state matrices of the extended Kalman filter.
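The motivation for the fusion can be shown with a toy dead-reckoning loop (made-up numbers, not the thesis's aircraft model): a small constant accelerometer bias makes the INS position error grow quadratically, while a periodic GPS fix keeps it bounded.

```python
# Hedged toy of why INS needs GPS aiding. The vehicle is actually at
# rest, but a biased accelerometer is integrated twice: position error
# grows quadratically for pure INS and stays bounded with GPS resets.
dt, bias = 0.1, 0.05               # step [s], accelerometer bias [m/s^2]
v_ins = p_ins = 0.0                # pure INS dead reckoning
v_aid = p_aid = 0.0                # INS corrected by GPS fixes
for k in range(1, 201):            # 20 s of integration
    v_ins += bias * dt
    p_ins += v_ins * dt
    v_aid += bias * dt
    p_aid += v_aid * dt
    if k % 10 == 0:                # 1 Hz GPS fix (idealized, noise-free)
        p_aid, v_aid = 0.0, 0.0    # snap back to the GPS position
print(round(p_ins, 2), round(p_aid, 2))  # 10.05 0.0
```

A real EKF replaces the crude reset with a weighted correction of the full error state (position, velocity, attitude), using the state matrices the thesis derives.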
308

Pokročilá navigace v heterogenních multirobotických systémech ve vnějším prostředí / Advanced Navigation in Heterogeneous Multi-robot Systems in Outdoor Environment

Jílek, Tomáš January 2015 (has links)
The doctoral thesis discusses current options for the navigation of unmanned ground vehicles, with a focus on achieving close absolute agreement between the required motion trajectory and the obtained one. The current possibilities of key self-localization methods, such as global navigation satellite systems, inertial navigation systems, and odometry, are analyzed. The core of the thesis is the description of a navigation method that achieves centimeter-level accuracy of required-trajectory tracking using the above-mentioned self-localization methods. The new navigation method was designed for very simple parameterization that respects the limitations of the robot's drive configuration; after appropriate parameterization, it can therefore be applied to any drive configuration. The concept of the navigation method allows several self-localization systems and external navigation methods to be integrated and used simultaneously, which increases the overall robustness of the mobile robot navigation process. The thesis also deals with cooperative convoying of heterogeneous mobile robots. The proposed algorithms were validated under real outdoor conditions in three different experiments.
309

Analýza senzorických dat pro pokročilé uživatelské rozhraní / Sensor Data Analysis for Advanced User Interfaces

Chmiel, Filip January 2013 (has links)
This thesis deals with the creation of an interface based on multiple input signals, i.e. a multimodal interface, and analyzes the benefits of communicating with a device in this way. The work also includes an overview of the levels at which data fusion can be performed, and of different approaches to structuring a system architecture for multimodal data processing. An important part is the actual design of the system: a distributed architecture using software agents was chosen for processing the inputs, and hybrid fusion based on dialogue-driven and unification strategies was picked as the data-integration method. The result is an interface for media-center control and for interaction with other devices around the user.
310

Design and Implementation of a Wireless Sensor Network for Smart Home Applications

Prodromos-Vasileios, Mekikis January 2012 (has links)
In smart homes, devices take partial control of the house and make decisions that increase its safety and functionality. Due to the high cost of cable installation, wireless sensor networks are considered a good choice for smart home systems. Reliably detecting events such as intrusions, gas leakages, or accidents is an essential functionality in smart homes: a correct control action, such as raising an alarm or shutting gas pipes, relies entirely on reliable event detection. Given that reliability is the major concern for these devices, detection solutions should be found that make decisions at the smallest possible cost. One way to achieve this is to use detection and data fusion techniques, so that measurements from multiple sensors drive the final decision on whether an event has happened. Furthermore, estimation-aided detection in every sensor is essential to reduce the noise underlying each node's local decision about an event. In this thesis, several distributed detection techniques are reviewed and their suitability for low-power sensor networks is investigated. A wireless sensor network is designed and fully implemented in order to test the functionality and reliability of the methods. When an event is detected, the sensor network sends a Twitter notification to the user and, meanwhile, actuates a control decision that could resolve the detected problem. The experiments show that the studied detection methods offer reliable performance even in the presence of high noise levels in the measurements. It is concluded that wireless sensor networks can be effectively used in smart home applications, provided that detection methods of low complexity and high reliability are implemented.
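The benefit of fusing local decisions can be sketched with simple majority voting over noisy threshold detectors. This is a generic illustration with synthetic readings and made-up noise parameters, not the exact detection methods studied in the thesis.

```python
# Hedged sketch of decision fusion for event detection: each sensor makes
# a noisy local threshold decision, and the fusion center declares an
# event by majority vote, which beats any single sensor. Synthetic data.
import random

rng = random.Random(7)
THRESHOLD, SIGMA, SIGNAL = 0.5, 0.6, 1.0   # illustrative values

def local_decision(event):
    """One sensor's noisy reading, thresholded into a binary vote."""
    reading = (SIGNAL if event else 0.0) + rng.gauss(0.0, SIGMA)
    return reading > THRESHOLD

def fused_decision(event, n_sensors=9):
    """Fusion center: majority vote over independent local decisions."""
    votes = sum(local_decision(event) for _ in range(n_sensors))
    return votes > n_sensors // 2

trials = 2000
events = [True, False] * (trials // 2)
single_err = sum(local_decision(e) != e for e in events)
fused_err = sum(fused_decision(e) != e for e in events)
print(single_err > fused_err)  # True: fusion lowers the error rate
```

With a per-sensor error rate around 20% under these assumed noise values, nine-sensor majority voting drives the fused error rate down by roughly an order of magnitude, which is why a fusion center can afford cheap, noisy nodes.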
