91

Sensor Fusion for Automotive Applications

Lundquist, Christian January 2011
Mapping stationary objects and tracking moving targets are essential for many autonomous functions in vehicles. In order to compute the map and track estimates, sensor measurements from radar, laser and camera are used together with the standard proprioceptive sensors present in a car. By fusing information from different types of sensors, the accuracy and robustness of the estimates can be increased. Different types of maps are discussed and compared in the thesis. In particular, road maps make use of the fact that roads are highly structured, which allows relatively simple and powerful models to be employed. It is shown how information about the lane markings, obtained by a front-looking camera, can be fused with inertial measurements of the vehicle motion and radar measurements of vehicles ahead to compute a more accurate and robust road geometry estimate. Further, it is shown how radar measurements of stationary targets can be used to estimate the road edges, modeled as polynomials and tracked as extended targets. Recent advances in the field of multiple target tracking have led to the use of finite set statistics (FISST) in a set-theoretic approach, where the targets and the measurements are treated as random finite sets (RFS). The first-order moment of an RFS is called the probability hypothesis density (PHD), and it is propagated in time with a PHD filter. In this thesis, the PHD filter is applied to radar data to construct a parsimonious representation of the map of the stationary objects around the vehicle. Two original contributions, which exploit the inherent structure in the map, are proposed. A data clustering algorithm is suggested that structures the description of the prior and considerably improves the update in the PHD filter. Improvements in the merging step further simplify the map representation.
When it comes to tracking moving targets, the focus of this thesis is on extended targets, i.e., targets that may give rise to more than one measurement per time step. An implementation of the PHD filter, proposed to handle data obtained from extended targets, is presented. An approximation is proposed in order to limit the number of hypotheses. Further, a framework to track the size and shape of a target is introduced. The method is based on measurement-generating points on the surface of the target, which are modeled by an RFS. Finally, an efficient and novel Bayesian method is proposed for estimating the tire radii of a vehicle based on particle filters and the marginalization concept. This is done under the assumption that a change in tire radius is caused by a change in tire pressure, thus obtaining an indirect tire pressure monitoring system. The approaches presented in this thesis have all been evaluated on real data from both freeways and rural roads in Sweden. / SEFS -- IVSS / VR - ETT
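The abstract's central claim, that fusing measurements from different sensors increases accuracy and robustness, can be illustrated with the simplest possible fusion step: inverse-variance weighting of two independent measurements, which is the static form of the Kalman update. This is a generic sketch, not the thesis's estimator; the camera/radar variances below are made-up values.

```python
def fuse_measurements(z_a, var_a, z_b, var_b):
    """Fuse two independent noisy measurements of the same quantity by
    inverse-variance weighting (the static form of the Kalman update)."""
    w_a = var_b / (var_a + var_b)          # weight grows as var_a shrinks
    fused = w_a * z_a + (1.0 - w_a) * z_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# Hypothetical numbers: camera and radar both observe the same
# lane-relative lateral offset (metres).
cam, radar = 1.2, 0.8
fused, var = fuse_measurements(cam, 0.04, radar, 0.09)
```

Note that the fused variance is always below both input variances, which is the formal sense in which fusion "increases accuracy".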
92

Practical Architectures for Fused Visual and Inertial Mobile Sensing

Jain, Puneet January 2015
Crowdsourced live video streaming is on the rise. Several factors, such as social networks, streaming applications, smartphones with high-quality cameras, and ubiquitous wireless connectivity, are contributing to this phenomenon. Unlike isolated professional videos, live streams emerge at an unprecedented scale, poorly captured, unorganized, and lacking user context. To utilize the full potential of this medium and enable new services on top of it, the open challenges must be addressed immediately. Smartphones are resource constrained: battery power is limited, bandwidth is scarce, and on-board computing power and storage are insufficient to meet real-time demand. Therefore, mobile cloud computing is cited as an obvious alternative in which the cloud does the heavy lifting for the smartphone. But cloud resources are not cheap, and real-time processing demands more than the cloud can deliver. This dissertation argues that throwing cloud resources at these problems and blindly offloading computation, while seemingly necessary, may not be sufficient. Opportunities need to be identified to streamline large-scale problems by leveraging on-device capabilities, thereby making them amenable to a given cloud infrastructure. One of the key opportunities, we find, is the cross-correlation between different streams of information available in the cloud. We observe that inference on a single information stream may often be difficult, but when viewed in conjunction with other information dimensions, the same problem often becomes tractable.
93

Continuous pose estimation of a handheld device by fusion of visual-inertial data for pedestrian navigation applications in urban environments

Antigny, Nicolas 18 October 2018
To support pedestrian navigation in urban and indoor spaces, an accurate estimate of the pose (i.e., the 3D position and 3D orientation) of a handheld device is essential for the development of mobility assistance tools (e.g., augmented reality applications). Assuming the pedestrian carries only consumer devices, pose estimation is restricted to the low-cost sensors embedded in them (i.e., a GNSS receiver, an inertial and magnetic measurement unit, and a monocular camera). In addition, urban and indoor spaces, with closely spaced buildings and ferromagnetic elements, are challenging areas for localization and pose estimation over large pedestrian displacements. However, the recent development and availability of data contained in 3D Geographical Information Systems constitutes a new source of data usable for localization and pose estimation. To address these challenges, this thesis proposes solutions to improve pedestrian localization and handheld device pose estimation in urban and indoor spaces. The proposed solutions integrate inertial and magnetic attitude estimation, monocular visual odometry scaled by an estimate of the pedestrian's displacement, absolute pose estimation based on the recognition of known 3D GIS objects, and pedestrian dead-reckoning position updates. All these solutions are combined in a fusion process that improves localization accuracy and continuously estimates a qualified pose of the handheld device. This qualification is required to validate an on-site augmented reality display. To assess the proposed solutions, experimental data were collected during pedestrian walks in an urban space with known reference objects and indoor passages.
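The scale step described above can be sketched as a simple ratio estimate: monocular visual odometry recovers displacement only up to an unknown scale, while step counting times an assumed mean step length gives a metric distance over the same interval, and their ratio calibrates the odometry. This is a minimal sketch under hypothetical numbers, not the thesis's actual fusion scheme.

```python
def recover_scale(vo_displacements, step_count, mean_step_length):
    """Estimate the metric scale of monocular visual odometry from
    pedestrian dead-reckoning over the same interval: metric distance
    walked (steps x average step length) divided by the up-to-scale
    distance reported by visual odometry."""
    vo_distance = sum(vo_displacements)              # arbitrary VO units
    walked_distance = step_count * mean_step_length  # metres
    if vo_distance == 0:
        raise ValueError("VO reports no motion; scale is unobservable")
    return walked_distance / vo_distance

# Hypothetical values: three frame-to-frame VO displacements, and a
# pedestrian who took 3 steps of 0.75 m each over the same interval.
scale = recover_scale([0.5, 0.4, 0.6], step_count=3, mean_step_length=0.75)
# Metric displacement = scale * VO displacement from here on.
```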
94

A low cost one-camera optical tracking system for indoor wide-area augmented and virtual reality environments

Buaes, Alexandre Greff January 2006
In recent years, the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has increased significantly. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with attributes adequate for professional use is proposed. The system works in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D position and orientation. Included in this work is a comprehensive survey of image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by linking several one-camera tracking modules through a sensor fusion algorithm in order to obtain a larger working range. A setup with two modules was built and tested, achieving performance similar to the stand-alone configuration.
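The marker-extraction front end described above (threshold the infrared image, find the retro-reflective blobs, take their centroids) can be sketched with a plain connected-component search. This is an illustrative reconstruction, not the PTrack algorithm itself; a real system would also subtract background, filter blobs by size, and compute sub-pixel centroids.

```python
from collections import deque

def marker_centroids(img, threshold=200):
    """Find centroids of bright blobs (retro-reflective markers) in a
    grayscale image, given as a list of rows of pixel intensities, via
    thresholding and a 4-connected component search (BFS)."""
    h, w = len(img), len(img[0])
    mask = [[v >= threshold for v in row] for row in img]
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:                      # flood-fill one blob
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                ys = sum(p[0] for p in pixels) / len(pixels)
                xs = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((ys, xs))
    return centroids

# Synthetic 8x8 frame: one 2x2 marker and one single-pixel marker.
img = [[0] * 8 for _ in range(8)]
img[1][1] = img[1][2] = img[2][1] = img[2][2] = 255
img[5][6] = 255
cents = marker_centroids(img)   # -> [(1.5, 1.5), (5.0, 6.0)]
```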
97

Application of sensor fusion techniques in environmental monitoring

Salustiano, Rogerio Esteves, 1978- 16 January 2006
Advisor: Carlos Alberto dos Reis Filho / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Eletrica e de Computação / Abstract: This work proposes a computer system in which sensor fusion techniques are applied to environmental monitoring. The proposed system allows the use and incorporation of different data types, including images, sounds and numbers in different bases. Among the existing algorithms that pertain to a system like this, the Consensus Sensors algorithms, which aim to combine data of the same nature, have been implemented. The proposed system is flexible enough to allow the inclusion of new data types and the corresponding algorithms that process them. The whole process of receiving the data produced by the sensors, configuring the system and visualizing the results is performed through the Internet. / Master's degree in Electrical Engineering (Eletrônica, Microeletrônica e Optoeletrônica)
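The Consensus Sensors idea mentioned above, combining readings of the same nature from several sensors, can be sketched as iterative neighbour averaging: each node repeatedly moves toward the mean of its neighbours until all nodes agree on a common value. This is a generic consensus sketch, not the system described in the record; the ring topology and the mixing weight are assumptions.

```python
def consensus_round(values, weight=0.5):
    """One synchronous consensus iteration on a ring of sensor nodes:
    each node moves toward the average of its two ring neighbours.
    The update is symmetric, so the global average is preserved."""
    n = len(values)
    return [
        (1 - weight) * values[i]
        + weight * 0.5 * (values[(i - 1) % n] + values[(i + 1) % n])
        for i in range(n)
    ]

# Hypothetical readings: four temperature sensors on a ring.
readings = [20.0, 22.0, 21.0, 25.0]
state = readings
for _ in range(50):
    state = consensus_round(state)
# After enough rounds every node holds the mean reading (22.0 here).
```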
98

Visualization Tool for Sensor Data Fusion

Ehsanibenafati, Aida January 2013
In recent years, researchers have focused on the development of techniques for multi-sensor data fusion systems. Data fusion systems process data from multiple sensors to develop improved estimates of the position, velocity, attributes and identity of entities such as targets of interest. Visualizing sensor data, from fused data down to the raw data from each sensor, helps analysts interpret the data and assess the sensor data fusion platform, an evolving situation or threats. Immersive visualization has emerged as an ideal solution for the exploration of sensor data and provides opportunities for improvement in multi-sensor data fusion. This thesis investigates the possibilities of applying information visualization to a sensor data fusion platform at Volvo. A visualization prototype was also developed that enables multiple users to interactively visualize the sensor data fusion platform in real time, mainly in order to demonstrate, evaluate and analyze the platform's functionality. In this industrial study, two research methodologies were used: a case study and an experiment for evaluating the results. First, a case study was conducted in order to find the best visualization technique for visualizing the sensor data fusion platform. Second, an experiment was conducted to evaluate the usability of the developed prototype and to make sure the user requirements were met. The visualization tool enabled us to study the effectiveness and efficiency of the visualization techniques used. The results confirm that the visualization method used is effective and efficient for visualizing a sensor data fusion platform.
99

Extended sensor fusion for embedded video applications

Alibay, Manu 18 December 2015
This thesis deals with the fusion of camera and inertial sensor measurements to provide a robust motion estimation algorithm for embedded video applications. The targeted platforms are mainly smartphones and tablets. We present a real-time 2D camera motion estimation algorithm combining inertial and visual measurements. The proposed algorithm extends the preemptive RANSAC motion estimation procedure with inertial sensor data, introducing a dynamic Lagrangian hybrid scoring of the motion models that makes the approach adaptive to various image and motion contents. All these improvements come at little computational cost, keeping the complexity of the algorithm low enough for embedded platforms. The approach is compared with purely inertial and purely visual procedures. A novel approach to real-time hybrid monocular visual-inertial odometry for embedded platforms is also introduced. The interaction between vision and inertial sensors is maximized by performing fusion at multiple levels of the algorithm. Through tests conducted on sequences acquired with ground truth, we show that our method outperforms classical hybrid techniques in ego-motion estimation.
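The preemptive RANSAC scheme this abstract builds on can be sketched for a toy 2D translation model: draw a fixed budget of hypotheses up front, then alternate between scoring them on a block of data and discarding the lower-scoring half, which bounds the runtime for real-time use. This is a generic sketch of preemptive RANSAC, not the thesis's inertially aided variant; the Lagrangian hybrid score is replaced here by a plain inlier count, and all data are synthetic.

```python
import random

def preemptive_ransac(corr, n_hyp=32, block=20, tol=0.1):
    """Preemptive RANSAC for a 2D translation model. Hypotheses are
    drawn up front from single correspondences; scoring is interleaved
    with halving of the hypothesis set over fixed-size data blocks."""
    rng = random.Random(0)
    hyps = []
    for _ in range(n_hyp):
        (x, y), (u, v) = rng.choice(corr)
        hyps.append((u - x, v - y))            # one-point translation hypothesis
    scores = [0.0] * len(hyps)
    i = 0
    while len(hyps) > 1 and i < len(corr):
        chunk = corr[i:i + block]
        for k, (dx, dy) in enumerate(hyps):    # inlier count on this block
            scores[k] += sum(
                1 for (x, y), (u, v) in chunk
                if abs(u - x - dx) < tol and abs(v - y - dy) < tol)
        order = sorted(range(len(hyps)), key=scores.__getitem__, reverse=True)
        keep = order[:max(1, len(hyps) // 2)]  # preemption: halve the set
        hyps = [hyps[k] for k in keep]
        scores = [scores[k] for k in keep]
        i += block
    return hyps[0]

# Synthetic correspondences: true translation (2, -1), ~20% outliers.
rng = random.Random(1)
corr = []
for _ in range(100):
    x, y = rng.uniform(0, 10), rng.uniform(0, 10)
    if rng.random() < 0.8:
        corr.append(((x, y), (x + 2, y - 1)))
    else:
        corr.append(((x, y), (rng.uniform(0, 10), rng.uniform(0, 10))))
dx, dy = preemptive_ransac(corr)
```

The fixed hypothesis budget and block-wise halving are what make the runtime predictable, which is the property that matters on embedded platforms.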
100

Multi-sensor Information Fusion for Classification of Driver's Physiological Sensor Data

Barua, Shaibal January 2013
Physiological sensor signal analysis is common practice in the medical domain for the diagnosis and classification of various physiological conditions. Clinicians frequently use physiological sensor signals to diagnose an individual's psychophysiological parameters, i.e., stress, tiredness, fatigue etc. However, parameters obtained from physiological sensors can vary with an individual's age, gender, physical condition etc., and analyzing data from a single sensor can mislead the diagnosis. Today, one proposition is that sensor signal fusion can provide a more reliable and efficient outcome than data from a single sensor, and it is becoming significant in numerous diagnosis fields, including medical diagnosis and classification. Case-Based Reasoning (CBR) is another well-established and recognized method in the health sciences. Here, an entropy-based algorithm, Multivariate Multiscale Entropy analysis, has been selected to fuse multiple sensor signals. Other physiological sensor measurements are also taken into consideration for system evaluation. A CBR system is proposed to classify 'healthy' and 'stressed' persons using both fused features and other physiological features, i.e., Heart Rate Variability (HRV), Respiratory Sinus Arrhythmia (RSA) and Finger Temperature (FT). The evaluation and performance analysis of the system have been carried out, and the results of the classification based on data fusion and physiological measurements are presented in this thesis.
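The multiscale entropy family mentioned above builds on two ingredients that are easy to sketch: coarse-graining a signal at increasing scales, and computing sample entropy at each scale. The univariate sketch below shows both steps; it is not the thesis's multivariate implementation, which treats multiple sensor channels jointly, and the tolerance here is absolute rather than scaled by the signal's standard deviation.

```python
import math

def coarse_grain(signal, scale):
    """Coarse-grain a signal for multiscale entropy analysis: average
    consecutive non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def _template_matches(x, m, r):
    # Count pairs of length-m templates within Chebyshev distance r.
    n = len(x)
    count = 0
    for i in range(n - m + 1):
        for j in range(i + 1, n - m + 1):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                count += 1
    return count

def sample_entropy(x, m=2, r=0.1):
    """Sample entropy: -ln(A/B), where B counts matching template pairs
    of length m and A those of length m+1. Regular signals score low,
    irregular signals high."""
    b = _template_matches(x, m, r)
    a = _template_matches(x, m + 1, r)
    if a == 0 or b == 0:
        return float("inf")
    return -math.log(a / b)

cg = coarse_grain([1, 2, 3, 4, 5, 6], scale=2)   # -> [1.5, 3.5, 5.5]
se = sample_entropy([0, 1] * 20)                  # low: signal is regular
```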
