  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
161

Study of vehicle localization optimization with visual odometry trajectory tracking

Awang Salleh, Dayang Nur Salmi Dharmiza 19 December 2018 (has links)
With the growing research on Advanced Driver Assistance Systems (ADAS) for Intelligent Transport Systems (ITS), accurate vehicle localization plays an important role in intelligent vehicles.
The Global Positioning System (GPS) has been widely used, but its accuracy deteriorates and is susceptible to positioning error due to factors such as restrictive environments that cause signal weakening. This problem can be addressed by integrating GPS data with additional information from other sensors. Meanwhile, vehicles are now commonly equipped with sensors for ADAS applications. In this research, fusion of GPS with visual odometry (VO) and a digital map is proposed as a low-cost solution for localization improvement. From the published works on VO, it is interesting to know how the generated trajectory can further improve vehicle localization. By integrating the VO output with GPS and OpenStreetMap (OSM) data, estimates of the vehicle position on the map can be obtained. The lateral positioning error is reduced by utilizing lane distribution information provided by OSM, while the longitudinal positioning is optimized with curve matching between the VO trajectory trail and segmented roads. To assess system robustness, the method was validated with KITTI datasets under different common GPS noise models. Several published VO methods were also used to compare the level of improvement after data fusion. Validation results show that the positioning accuracy achieved significant improvement, especially for the longitudinal error with the curve matching technique. The localization performance is on par with Simultaneous Localization and Mapping (SLAM) techniques despite the drift in the VO trajectory input. The research on the employability of the VO trajectory is extended to a deterministic task, lane-change detection, to assist the routing service with lane-level directions in navigation. Lane-change detection was conducted with a cumulative sum (CUSUM) and curve fitting technique that resulted in 100% successful detection for stereo VO. 
Further study of the detection strategy is, however, required to obtain the current true lane of the vehicle for lane-level accurate localization. With the results obtained from the proposed low-cost data fusion for localization, we see a bright prospect for utilizing the VO trajectory with information from OSM to improve performance. In addition to providing the VO trajectory, the camera mounted on the vehicle can also be used for other image processing applications that complement the system. This research will continue to develop, with future work outlined in the last chapter of this thesis.
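The CUSUM step described above can be illustrated with a minimal sketch: a two-sided cumulative-sum test on the lateral offset of the VO trajectory, flagging a lane change when accumulated drift exceeds a threshold. The drift and threshold parameters, and the offset signal, are illustrative assumptions, not the thesis's tuned values.

```python
# Hypothetical lane-change detection via CUSUM on a lateral-offset signal
# (in metres). Parameters are illustrative, not the thesis's tuned values.

def cusum_lane_change(lateral_offsets, drift=0.05, threshold=1.0):
    """Return indices where cumulative lateral drift exceeds the threshold."""
    s_pos, s_neg = 0.0, 0.0
    detections = []
    prev = lateral_offsets[0]
    for i, x in enumerate(lateral_offsets[1:], start=1):
        step = x - prev
        prev = x
        s_pos = max(0.0, s_pos + step - drift)   # accumulate rightward drift
        s_neg = max(0.0, s_neg - step - drift)   # accumulate leftward drift
        if s_pos > threshold or s_neg > threshold:
            detections.append(i)
            s_pos, s_neg = 0.0, 0.0             # reset after a detection
    return detections
```

A straight segment followed by a sustained lateral shift of roughly one lane width triggers a detection shortly after the shift begins, while a flat signal triggers none.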
162

Managing trust and reliability for indoor tracking systems

Rybarczyk, Ryan Thomas January 2016 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Indoor tracking is a challenging problem. The level of accepted error is on a much smaller scale than that of its outdoor counterpart. While the Global Positioning System has become omnipresent and a widely accepted outdoor tracking system, it has limitations in indoor environments due to loss or degradation of signal. Many attempts have been made to address this challenge, but none has yet proven to be the de-facto standard. In this thesis, we introduce the concept of opportunistic tracking, in which tracking takes place with whatever sensing infrastructure is present, static or mobile, within a given indoor environment. This approach eliminates many of the challenges (e.g., high cost, infeasible infrastructure deployment) that prohibit the use of existing systems in typical application domains (e.g., asset tracking, emergency rescue). Challenges still exist in providing an accurate positional estimate of an entity's location in an indoor environment, namely sensor classification, sensor selection, and multi-sensor data fusion. We propose an enhanced tracking framework that, through the infusion of QoS-based selection criteria of trust and reliability, improves the overall accuracy of the tracking estimate. This improvement is predicated on the introduction of learning techniques to classify sensors that are dynamically discovered as part of the opportunistic tracking approach. This classification allows sensors to be properly identified and evaluated based upon their specific behavioral characteristics through performance evaluation. This in-depth evaluation of sensors provides the basis for improving the sensor selection process. A side effect of obtaining this improved accuracy is its cost, found in the form of system runtime. 
This thesis provides a solution for the tradeoff between accuracy and cost through an optimization function that analyzes the tradeoff to find the optimal subset of sensors for tracking an object as it moves indoors. We demonstrate that through this improved sensor classification, selection, data fusion, and tradeoff optimization we can improve accuracy over other existing indoor tracking systems.
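The accuracy/cost tradeoff above can be framed as a subset-selection problem. The sketch below brute-forces an additive benefit model (accuracy gain minus weighted runtime cost); the sensor names, the additive model, and the weighting factor are invented for illustration and are not the thesis's actual optimization function.

```python
# Hypothetical sensor-subset selection balancing accuracy gain against
# runtime cost. The additive scoring model is an illustrative assumption.
from itertools import combinations

def select_sensors(sensors, alpha=0.5):
    """sensors: dict name -> (error_reduction, runtime_cost).
    Maximizes sum(error_reduction) - alpha * sum(runtime_cost)."""
    best_subset, best_score = set(), float("-inf")
    names = list(sensors)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            gain = sum(sensors[s][0] for s in subset)
            cost = sum(sensors[s][1] for s in subset)
            score = gain - alpha * cost
            if score > best_score:
                best_subset, best_score = set(subset), score
    return best_subset, best_score
```

For small sensor counts exhaustive search is feasible; a greedy or branch-and-bound strategy would be needed as the opportunistically discovered sensor pool grows.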
163

Design and development of a work-in-progress, low-cost Earth Observation multispectral satellite for use on the International Space Station

Ahn, Byung Joon 23 September 2020 (has links)
No description available.
164

e-DTS 2.0: A Next-Generation of a Distributed Tracking System

Rybarczyk, Ryan Thomas 20 March 2012 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A key component in tracking is identifying relevant data and combining the data in an effort to provide an accurate estimate of both the location and the orientation of an object marker as it moves through an environment. This thesis proposes an enhancement to an existing tracking system, the enhanced distributed tracking system (e-DTS), in the form of e-DTS 2.0, and provides an empirical analysis of these enhancements. The thesis also provides suggestions on future enhancements and improvements. When a camera identifies an object within its field of view, it communicates with a JINI-based service in an effort to expose this information to any client who wishes to consume it. This communication utilizes the JINI Multicast Lookup Protocol to enable dynamic discovery of sensors as they are added to or removed from the environment during the tracking process. The client can then retrieve this information from the service and perform a fusion technique in an effort to provide an estimate of the marker's current location with respect to a given coordinate system. The coordinate system handoff and transformation is a key component of the e-DTS 2.0 tracking process as it improves the agility of the system.
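The coordinate-system handoff can be sketched with a 2-D homogeneous transform: each camera reports a marker position in its own frame, and a calibrated world-from-camera transform maps it into the shared frame. The camera pose and marker position below are invented for illustration; e-DTS 2.0's actual calibration data would supply them.

```python
# Hypothetical 2-D coordinate handoff via homogeneous transforms.
# Camera poses here are illustrative, not real calibration values.
import math

def make_transform(tx, ty, theta):
    """World-from-camera transform as a 3x3 homogeneous matrix (row lists)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def apply_transform(T, point):
    """Map a (x, y) point from the camera frame into the world frame."""
    x, y = point
    return (T[0][0] * x + T[0][1] * y + T[0][2],
            T[1][0] * x + T[1][1] * y + T[1][2])
```

A camera at world position (2, 3) rotated 90 degrees sees a marker 1 m ahead along its own x-axis; the transform places the marker at world (2, 4).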
165

A Human-Centric Approach to Data Fusion in Post-Disaster Management: The Development of a Fuzzy Set Theory Based Model

Banisakher, Mubarak 01 January 2014 (has links)
It is critical to provide an efficient and accurate information system in the post-disaster phase so that individuals can access and obtain the necessary resources in a timely manner; however, current map-based post-disaster management systems present all emergency resource lists without filtering, which usually leads to high computational cost. An effective post-disaster management system (PDMS) distributes emergency resources such as hospitals, storage, and transportation more reasonably, to the greater benefit of individuals in the post-disaster period. In this dissertation, semi-supervised learning (SSL) based graph systems were first constructed for a PDMS. A graph-based PDMS resource map was converted to a directed graph represented by an adjacency matrix, and decision information was then derived from the PDMS in two ways: through a clustering operation and through a graph-based semi-supervised optimization process. In this study, the PDMS was applied to emergency resource distribution in the post-disaster (response) phase, and a path optimization algorithm based on ant colony optimization (ACO) was used to minimize cost in the post-disaster period; simulation results show the effectiveness of the proposed methodology. The analysis compared it with clustering-based algorithms under improved ACO variants, the tour improvement algorithm (TIA) and the Min-Max Ant System (MMAS), and the results also show that the SSL-based graph is more effective for computing the optimal path in a PDMS. This research further improved the map by combining the disaster map with the initial GIS-based map, locating the target area while considering the influence of the disaster. First, the initial map and the disaster map undergo a Gaussian transformation while the histograms of all map images are acquired. 
All images are then processed with the discrete wavelet transform (DWT), and a Gaussian fusion algorithm is applied to the DWT coefficients. Second, the inverse DWT (iDWT) is applied to generate a new map for the post-disaster management system. Finally, simulations were carried out, and the results showed the effectiveness of the proposed method compared with other fusion algorithms, such as mean-mean fusion and max-UD fusion, through evaluation indices including entropy, spatial frequency (SF), and the image quality index (IQI). A fuzzy set model was also proposed to improve the representation capacity of nodes in this GIS-based PDMS.
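Two of the evaluation indices named above, entropy and spatial frequency, have standard definitions that can be sketched in a few lines on a grayscale image given as a list of rows of intensities. This is illustrative code following the common formulas, not the dissertation's own evaluation pipeline.

```python
# Standard fusion-quality indices on a grayscale image (list of rows).
# Illustrative sketch; formulas follow their common textbook definitions.
import math

def entropy(img):
    """Shannon entropy of the intensity histogram, in bits."""
    counts, n = {}, 0
    for row in img:
        for v in row:
            counts[v] = counts.get(v, 0) + 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def spatial_frequency(img):
    """sqrt(RF^2 + CF^2): RMS of horizontal and vertical differences."""
    h, w = len(img), len(img[0])
    rf = sum((img[i][j] - img[i][j - 1]) ** 2
             for i in range(h) for j in range(1, w))
    cf = sum((img[i][j] - img[i - 1][j]) ** 2
             for i in range(1, h) for j in range(w))
    return math.sqrt(rf / (h * w) + cf / (h * w))
```

A uniform image scores zero on both indices, while a high-contrast checkerboard scores high on both, which is why a sharper fused map ranks better.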
166

Speaker Identification Based On Discriminative Vector Quantization And Data Fusion

Zhou, Guangyu 01 January 2005 (has links)
Speaker Identification (SI) approaches based on discriminative Vector Quantization (VQ) and data fusion techniques are presented in this dissertation. The SI approaches based on Discriminative VQ (DVQ) proposed in this dissertation are the DVQ for SI (DVQSI), the DVQSI with unique speech feature vector space segmentation for each speaker pair (DVQSI-U), and the Adaptive DVQSI (ADVQSI) methods. The difference between the probability distributions of the speech feature vector sets from various speakers (or speaker groups) is called the interspeaker variation between speakers (or speaker groups). The interspeaker variation is the measure of template differences between speakers (or speaker groups). All DVQ based techniques presented in this contribution take advantage of the interspeaker variation, which is not exploited in previously proposed techniques that employ traditional VQ for SI (VQSI). All DVQ based techniques have two modes, the training mode and the testing mode. In the training mode, the speech feature vector space is first divided into a number of subspaces based on the interspeaker variations. Then, a discriminative weight is calculated for each subspace of each speaker or speaker pair in the SI group based on the interspeaker variation. The subspaces with higher interspeaker variations are assigned larger discriminative weights and therefore play more important roles in SI than the ones with lower interspeaker variations. In the testing mode, discriminatively weighted average VQ distortions, instead of equally weighted average VQ distortions, are used to make the SI decision. The DVQ based techniques lead to higher SI accuracies than VQSI. The DVQSI and DVQSI-U techniques consider the interspeaker variation for each speaker pair in the SI group. In DVQSI, the speech feature vector space segmentations for all the speaker pairs are exactly the same. However, each speaker pair of DVQSI-U is treated individually in the speech feature vector space segmentation. 
In both DVQSI and DVQSI-U, the discriminative weights for each speaker pair are calculated by trial and error. The SI accuracies of DVQSI-U are higher than those of DVQSI at the price of a much higher computational burden. ADVQSI explores the interspeaker variation between each speaker and all speakers in the SI group. In contrast with DVQSI and DVQSI-U, in ADVQSI the feature vector space segmentation is performed for each speaker instead of each speaker pair, based on the interspeaker variation between each speaker and all the speakers in the SI group. Also, adaptive techniques are used in the discriminative weight computation for each speaker in ADVQSI. The SI accuracies of ADVQSI and DVQSI-U are comparable. However, the computational complexity of ADVQSI is much less than that of DVQSI-U. Also, a novel algorithm to convert the raw distortion outputs of template-based SI classifiers into compatible probability measures is proposed in this dissertation. After this conversion, data fusion techniques at the measurement level can be applied to SI. In the proposed technique, stochastic models of the distortion outputs are estimated. Then, the posterior probabilities of the unknown utterance belonging to each speaker are calculated. Compatible probability measures are assigned based on the posterior probabilities. The proposed technique leads to better SI performance at the measurement level than existing approaches.
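The core decision rule, weighting per-subspace VQ distortions so that high-interspeaker-variation subspaces dominate, can be sketched on a toy example. The 1-D features, codebooks, and weights below are invented for illustration and do not reproduce the dissertation's trained models.

```python
# Toy sketch of discriminatively weighted VQ distortion for SI.
# Codebooks, weights, and 1-D features are illustrative assumptions.

def weighted_vq_distortion(features, codebook, weights):
    """features: list of (subspace_id, value); codebook: subspace -> centroids;
    weights: subspace -> discriminative weight. Returns weighted mean distortion."""
    total, wsum = 0.0, 0.0
    for sub, x in features:
        d = min((x - c) ** 2 for c in codebook[sub])  # nearest-centroid distortion
        total += weights[sub] * d
        wsum += weights[sub]
    return total / wsum

def identify(features, speakers, weights):
    """Pick the speaker whose codebook yields the smallest weighted distortion."""
    return min(speakers,
               key=lambda s: weighted_vq_distortion(features, speakers[s], weights))
```

With equal weights this reduces to plain VQSI; larger weights on discriminative subspaces are what separate otherwise similar speakers.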
167

Drinking Water Infrastructure Assessment with Teleconnection Signals, Satellite Data Fusion and Mining

Imen, Sanaz 01 January 2015 (has links)
Adjustment of the drinking water treatment process as a simultaneous response to climate variations and water quality impact has been a grand challenge in water resource management in recent years. This desired and preferred capability depends on timely and quantitative knowledge to monitor the quality and availability of water. This issue is of great importance for the largest reservoir in the United States, Lake Mead, which is located in the proximity of a large metropolitan region, Las Vegas, Nevada. The water quality in Lake Mead is impaired by forest fires, soil erosion, and land use changes in nearby watersheds and wastewater effluents from the Las Vegas Wash. In addition, more than a decade of drought has caused a sharp drop of about 100 feet in the elevation of Lake Mead. These hydrological processes in the drought event led to increased concentrations of total organic carbon (TOC) and total suspended solids (TSS) in the lake. TOC in surface water is known as a precursor of disinfection byproducts in drinking water, and high TSS concentration in source water is a threat leading to possible clogging in the water treatment process. Since Lake Mead is a principal source of drinking water for over 25 million people, high concentrations of TOC and TSS may have a potential health impact. Therefore, it is crucial to develop an early warning system that is able to support rapid forecasting of water quality and availability. In this study, the creation of a nowcasting water quality model with satellite remote sensing technologies lays the foundation for monitoring TSS and TOC on a near real-time basis. Yet the novelty of this study lies in the development of a forecasting model to predict TOC and TSS values with the aid of remote sensing technologies on a daily basis. 
The forecasting process is aided by an iterative scheme that updates the daily satellite imagery in concert with retrieving long-term memory from past states, with the aid of a nonlinear autoregressive neural network with external input on a rolling basis. To account for the potential impact of long-term hydrological droughts, teleconnection signals were included on a seasonal basis in the Upper Colorado River basin, which provides 97% of the inflow into Lake Mead. Identification of teleconnection patterns at a local scale is challenging, largely due to the coexistence of non-stationary and non-linear signals embedded within the ocean-atmosphere system. Empirical mode decomposition as well as wavelet analysis are utilized to extract the intrinsic trend and the dominant oscillation of the sea surface temperature (SST) and precipitation time series. After finding possible associations between the dominant oscillation of seasonal precipitation and global SST through lagged correlation analysis, the statistically significant index regions in the oceans are extracted. With these characterized associations, the individual contributions of these SST forcing regions linked to the related precipitation responses are further quantified through the use of the extreme learning machine. Results indicate that the non-leading SST regions also contribute saliently to terrestrial precipitation variability compared to some of the known leading SST regions, and confirm the capability of predicting hydrological drought events one season ahead of time. With such an integrated advancement, an early warning system can be constructed to bridge the current gap in source water monitoring for water supply.
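The lagged correlation step can be sketched as a plain Pearson correlation between an SST index series and a precipitation series, with SST leading by a given number of seasons. The synthetic series in the usage below are invented; the study itself correlates decomposed oscillation components.

```python
# Sketch of lagged correlation analysis between two time series.
# Series used for demonstration are synthetic, not study data.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def lagged_correlation(sst, precip, lag):
    """Correlate SST at time t with precipitation at time t + lag."""
    return pearson(sst[: len(sst) - lag], precip[lag:])
```

Scanning over lags and keeping the lag with the strongest correlation is the usual way the "statistically significant index regions" would be screened before any further quantification.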
168

Making Sense Out of Uncertainty in Geospatial Data

Foy, Andrew Scott 31 August 2011 (has links)
Uncertainty in geospatial data fusion is a major concern for scientists because society is increasing its use of geospatial technology and generalization is inherent to geographic representations. Limited research exists on the quality of results that come from the fusion of geographic data, yet there is extensive literature on uncertainty in cartography, GIS, and geospatial data. The uncertainties exist and are difficult to understand because overlaid data differ in scope, time, class, accuracy, and precision. There is a need for a set of tools that can manage uncertainty and incorporate it into the overlay process. This research explores uncertainty in spatial data, GIS, and GIScience via three papers. The first paper introduces a framework for classifying and modeling error-bands in a GIS. The second paper tests GIS users' ability to estimate spatial confidence intervals, and the third paper looks at the practical application of a set of tools for incorporating uncertainty into overlays. The results from this research indicate that it is hard for people to agree on an error-band classification based on their interpretation of metadata. However, people are good estimators of data quality and uncertainty if they follow a systematic approach and use their average estimate to define spatial confidence intervals. The framework and the toolset presented in this dissertation have the potential to alter how people interpret and use geospatial data. The hope is that these results prompt inquiry into, and questioning of, the reliability of all simple overlays. Many situations exist in which this research has relevance, making the framework, the tools, and the methods important to a wide variety of disciplines that use spatial analysis and GIS. / Ph. D.
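The averaged-estimate idea can be sketched numerically: several users' estimates of positional error are averaged, and the mean plus a Student-t margin defines a spatial confidence interval (a buffer radius, in metres). The t critical values are hard-coded for a 95% two-tailed interval, and the sample estimates are invented for illustration.

```python
# Sketch of a spatial confidence interval from averaged user estimates.
# Only a few t critical values are tabulated; sample data are illustrative.
import math

T_95 = {4: 2.776, 9: 2.262, 19: 2.093}  # two-tailed 95% t, by degrees of freedom

def spatial_confidence_interval(estimates):
    """Return (low, high) bounds of a 95% interval on the mean estimate."""
    n = len(estimates)
    mean = sum(estimates) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in estimates) / (n - 1))
    margin = T_95[n - 1] * sd / math.sqrt(n)
    return mean - margin, mean + margin
```

The interval width shrinks with more estimators and with better agreement among them, which matches the finding that systematic averaging makes people good estimators of uncertainty.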
169

Fundus-DeepNet: Multi-Label Deep Learning Classification System for Enhanced Detection of Multiple Ocular Diseases through Data Fusion of Fundus Images

Al-Fahdawi, S., Al-Waisy, A.S., Zeebaree, D.Q., Qahwaji, Rami, Natiq, H., Mohammed, M.A., Nedoma, J., Martinek, R., Deveci, M. 29 September 2023 (has links)
Yes / Detecting multiple ocular diseases in fundus images is crucial in ophthalmic diagnosis. This study introduces the Fundus-DeepNet system, an automated multi-label deep learning classification system designed to identify multiple ocular diseases by integrating feature representations from pairs of fundus images (e.g., left and right eyes). The study initiates with a comprehensive image pre-processing procedure, including circular border cropping, image resizing, contrast enhancement, noise removal, and data augmentation. Subsequently, discriminative deep feature representations are extracted using multiple deep learning blocks, namely the High-Resolution Network (HRNet) and Attention Block, which serve as feature descriptors. The SENet Block is then applied to further enhance the quality and robustness of feature representations from a pair of fundus images, ultimately consolidating them into a single feature representation. Finally, a sophisticated classification model, known as a Discriminative Restricted Boltzmann Machine (DRBM), is employed. By incorporating a Softmax layer, this DRBM is adept at generating a probability distribution that specifically identifies eight different ocular diseases. Extensive experiments were conducted on the challenging Ophthalmic Image Analysis-Ocular Disease Intelligent Recognition (OIA-ODIR) dataset, comprising diverse fundus images depicting eight different ocular diseases. The Fundus-DeepNet system demonstrated F1-scores, Kappa scores, AUC, and final scores of 88.56%, 88.92%, 99.76%, and 92.41% in the off-site test set, and 89.13%, 88.98%, 99.86%, and 92.66% in the on-site test set.In summary, the Fundus-DeepNet system exhibits outstanding proficiency in accurately detecting multiple ocular diseases, offering a promising solution for early diagnosis and treatment in ophthalmology. 
/ European Union under the REFRESH – Research Excellence for Region Sustainability and High-tech Industries project number CZ.10.03.01/00/22_003/0000048 via the Operational Program Just Transition. The Ministry of Education, Youth, and Sports of the Czech Republic - Technical University of Ostrava, Czechia under Grants SP2023/039 and SP2023/042.
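Two of the reported metrics, F1 and Cohen's kappa, can be sketched on binary multi-label matrices (one row per image pair, one column per disease). This is a generic micro-averaged implementation for illustration; the ODIR challenge's official scoring may differ in averaging details, and the toy labels below are invented.

```python
# Generic micro-averaged F1 and Cohen's kappa on binary label matrices.
# Illustrative sketch; not the OIA-ODIR challenge's official scorer.

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over all (sample, label) binary decisions."""
    tp = fp = fn = 0
    for row_t, row_p in zip(y_true, y_pred):
        for t, p in zip(row_t, row_p):
            if t and p:
                tp += 1
            elif p:
                fp += 1
            elif t:
                fn += 1
    return 2 * tp / (2 * tp + fp + fn)

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa over the flattened binary decisions."""
    flat_t = [t for row in y_true for t in row]
    flat_p = [p for row in y_pred for p in row]
    n = len(flat_t)
    po = sum(t == p for t, p in zip(flat_t, flat_p)) / n  # observed agreement
    pt, pp = sum(flat_t) / n, sum(flat_p) / n
    pe = pt * pp + (1 - pt) * (1 - pp)                    # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa discounts the agreement expected by chance, which matters here because most disease labels are negative for most images.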
170

Light-weighted Deep Learning for LiDAR and Visual Odometry Fusion in Autonomous Driving

Zhang, Dingnan 20 December 2022 (has links)
No description available.
