181

e-DTS 2.0: A Next-Generation of a Distributed Tracking System

Rybarczyk, Ryan Thomas 20 March 2012 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A key component in tracking is identifying relevant data and combining those data to provide an accurate estimate of both the location and the orientation of an object marker as it moves through an environment. This thesis proposes an enhancement to an existing tracking system, the enhanced distributed tracking system (e-DTS), in the form of e-DTS 2.0, and provides an empirical analysis of these enhancements; it also offers suggestions on future enhancements and improvements. When a camera identifies an object within its field of view, it communicates with a JINI-based service to expose this information to any client who wishes to consume it. This communication uses the JINI Multicast Lookup Protocol to support dynamic discovery of sensors as they are added to or removed from the environment during tracking. The client can then retrieve this information from the service and apply a fusion technique to estimate the marker's current location with respect to a given coordinate system. The coordinate-system handoff and transformation is a key component of the e-DTS 2.0 tracking process, as it improves the agility of the system.
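As a rough illustration of the coordinate-system handoff described above, the sketch below transfers a marker position reported in one camera's frame into a common world frame and then into a second camera's frame using homogeneous transforms. The camera poses and marker coordinates are invented for the example and are not taken from the thesis.

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration: pose of each camera in a common world frame.
T_world_camA = make_pose(np.eye(3), np.array([0.0, 0.0, 2.5]))
T_world_camB = make_pose(np.eye(3), np.array([4.0, 0.0, 2.5]))

# Marker position reported by camera A in its own frame (homogeneous coordinates).
p_camA = np.array([1.2, 0.3, 3.0, 1.0])

# Hand off: express the same marker in the world frame, then in camera B's frame.
p_world = T_world_camA @ p_camA
p_camB = np.linalg.inv(T_world_camB) @ p_world
print(p_world[:3], p_camB[:3])
```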
182

A Human-Centric Approach to Data Fusion in Post-Disaster Management: The Development of a Fuzzy Set Theory Based Model

Banisakher, Mubarak 01 January 2014 (has links)
It is critical to provide an efficient and accurate information system in the post-disaster phase so that individuals can access and obtain the necessary resources in a timely manner; however, current map-based post-disaster management systems present all emergency resource lists without filtering, which usually leads to a high computational cost. An effective post-disaster management system (PDMS) also distributes emergency resources such as hospitals, storage, and transportation more reasonably, to the greater benefit of individuals in the post-disaster period. In this dissertation, a semi-supervised learning (SSL) based graph system was first constructed for the PDMS. The graph-based PDMS resource map was converted to a directed graph represented by an adjacency matrix, and decision information was then derived from the PDMS in two ways: a clustering operation and a graph-based semi-supervised optimization process. The PDMS was applied to emergency resource distribution in the post-disaster (response) phase, and a path optimization algorithm based on ant colony optimization (ACO) was used to minimize cost after a disaster; simulation results show the effectiveness of the proposed methodology. The analysis compared the approach with clustering-based algorithms under improved ACO variants, the tour improvement algorithm (TIA) and the Min-Max Ant System (MMAS), and the results also show that the SSL-based graph is more effective for calculating the optimal path in the PDMS. The research further improved the map by combining the disaster map with the initial GIS-based map, which locates the target area while considering the influence of the disaster. First, the initial map and the disaster map underwent a Gaussian transformation while the histograms of all map images were acquired. All images were then processed with the discrete wavelet transform (DWT), and a Gaussian fusion algorithm was applied to the DWT images. Second, the inverse DWT (iDWT) was applied to generate a new map for the post-disaster management system. Finally, simulations were carried out, and the results showed the effectiveness of the proposed method compared with other fusion algorithms, such as mean-mean fusion and max-UD fusion, using evaluation indices including entropy, spatial frequency (SF), and image quality index (IQI). A fuzzy set model was proposed to improve the representation capacity of nodes in this GIS-based PDMS.
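The map-fusion step can be pictured with a minimal sketch: both maps are decomposed with the discrete wavelet transform, the subbands are combined, and the inverse DWT reconstructs a fused map. A plain averaging rule is used here as a stand-in for the dissertation's Gaussian-weighted fusion, and the input maps are random placeholders.

```python
import numpy as np
import pywt  # PyWavelets

def dwt_fuse(map_a, map_b, wavelet="haar"):
    """Fuse two equally sized grayscale maps by averaging their DWT subbands,
    then reconstructing with the inverse DWT (a stand-in for a Gaussian-weighted rule)."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(map_a, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(map_b, wavelet)
    fused = (
        (cA1 + cA2) / 2.0,
        ((cH1 + cH2) / 2.0, (cV1 + cV2) / 2.0, (cD1 + cD2) / 2.0),
    )
    return pywt.idwt2(fused, wavelet)

# Hypothetical 64x64 base GIS map and disaster-impact map.
gis_map = np.random.rand(64, 64)
disaster_map = np.random.rand(64, 64)
fused_map = dwt_fuse(gis_map, disaster_map)
print(fused_map.shape)
```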
183

Speaker Identification Based On Discriminative Vector Quantization And Data Fusion

Zhou, Guangyu 01 January 2005 (has links)
Speaker Identification (SI) approaches based on discriminative Vector Quantization (VQ) and data fusion techniques are presented in this dissertation. The SI approaches based on Discriminative VQ (DVQ) proposed here are the DVQ for SI (DVQSI), the DVQSI with unique speech feature vector space segmentation for each speaker pair (DVQSI-U), and the Adaptive DVQSI (ADVQSI) methods. The difference between the probability distributions of the speech feature vector sets from various speakers (or speaker groups) is called the interspeaker variation between speakers (or speaker groups); it is a measure of template differences between speakers (or speaker groups). All DVQ-based techniques presented in this contribution take advantage of the interspeaker variation, which is not exploited in previously proposed techniques that employ traditional VQ for SI (VQSI). All DVQ-based techniques have two modes, the training mode and the testing mode. In the training mode, the speech feature vector space is first divided into a number of subspaces based on the interspeaker variations. Then, a discriminative weight is calculated for each subspace of each speaker or speaker pair in the SI group based on the interspeaker variation. By assigning larger discriminative weights, the subspaces with higher interspeaker variation play more important roles in SI than those with lower interspeaker variation. In the testing mode, discriminatively weighted average VQ distortions, rather than equally weighted average VQ distortions, are used to make the SI decision. The DVQ-based techniques lead to higher SI accuracies than VQSI. The DVQSI and DVQSI-U techniques consider the interspeaker variation for each speaker pair in the SI group. In DVQSI, the speech feature vector space segmentation is exactly the same for all speaker pairs, whereas in DVQSI-U each speaker pair is treated individually in the segmentation. In both DVQSI and DVQSI-U, the discriminative weights for each speaker pair are calculated by trial and error. The SI accuracies of DVQSI-U are higher than those of DVQSI, at the price of a much higher computational burden. ADVQSI explores the interspeaker variation between each speaker and all speakers in the SI group. In contrast with DVQSI and DVQSI-U, in ADVQSI the feature vector space segmentation is performed for each speaker, rather than each speaker pair, based on the interspeaker variation between that speaker and all speakers in the SI group. Adaptive techniques are also used in computing the discriminative weights for each speaker in ADVQSI. The SI accuracies of ADVQSI and DVQSI-U are comparable, but the computational complexity of ADVQSI is much lower than that of DVQSI-U. In addition, a novel algorithm to convert the raw distortion outputs of template-based SI classifiers into compatible probability measures is proposed in this dissertation. After this conversion, data fusion techniques at the measurement level can be applied to SI. In the proposed technique, stochastic models of the distortion outputs are estimated, the posterior probabilities of the unknown utterance belonging to each speaker are calculated, and compatible probability measures are assigned based on these posterior probabilities. The proposed technique leads to better SI performance at the measurement level than existing approaches.
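A minimal sketch of the testing-mode scoring is given below: each feature frame's VQ distortion is weighted by the discriminative weight of the subspace it falls in, and the speaker whose codebook yields the smallest weighted average distortion would be selected. The codebook, subspace assignments, and weights are hypothetical placeholders, not values from the dissertation.

```python
import numpy as np

def weighted_vq_distortion(features, codebook, subspace_ids, weights):
    """Average VQ distortion in which each feature-space subspace contributes
    according to its discriminative weight."""
    # Distortion of each feature vector to its nearest codeword.
    d = np.min(
        np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2), axis=1
    )
    # Weight each frame by the discriminative weight of its subspace.
    w = weights[subspace_ids]
    return np.sum(w * d) / np.sum(w)

# Hypothetical data: 100 frames of 12-dim features, a 16-codeword codebook,
# and 4 subspaces whose weights favour the more discriminative regions.
feats = np.random.randn(100, 12)
cb = np.random.randn(16, 12)
ids = np.random.randint(0, 4, size=100)
wts = np.array([0.1, 0.2, 0.3, 0.4])
print(weighted_vq_distortion(feats, cb, ids, wts))
```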
184

Drinking Water Infrastructure Assessment with Teleconnection Signals, Satellite Data Fusion and Mining

Imen, Sanaz 01 January 2015 (has links)
Adjusting the drinking water treatment process as a simultaneous response to climate variations and water quality impacts has been a grand challenge in water resource management in recent years. This desired capability depends on timely and quantitative knowledge to monitor the quality and availability of water. The issue is of great importance for the largest reservoir in the United States, Lake Mead, which is located in the proximity of a large metropolitan region: Las Vegas, Nevada. Water quality in Lake Mead is impaired by forest fires, soil erosion, and land use changes in nearby watersheds, as well as wastewater effluents from the Las Vegas Wash. In addition, more than a decade of drought has caused a sharp drop of about 100 feet in the elevation of Lake Mead. These hydrological processes during the drought have led to increased concentrations of total organic carbon (TOC) and total suspended solids (TSS) in the lake. TOC in surface water is a known precursor of disinfection byproducts in drinking water, and high TSS concentration in source water threatens to clog the water treatment process. Since Lake Mead is a principal source of drinking water for over 25 million people, high concentrations of TOC and TSS may have a potential health impact. It is therefore crucial to develop an early warning system able to support rapid forecasting of water quality and availability. In this study, the creation of a nowcasting water quality model with satellite remote sensing technologies lays the foundation for monitoring TSS and TOC on a near real-time basis. The novelty of this study lies in the development of a forecasting model that predicts TOC and TSS values on a daily basis with the aid of remote sensing technologies. The forecasting process uses an iterative scheme that updates the daily satellite imagery while retrieving long-term memory from past states with a nonlinear autoregressive neural network with external input, on a rolling basis. To account for the potential impact of long-term hydrological droughts, teleconnection signals were included on a seasonal basis for the Upper Colorado River basin, which provides 97% of the inflow into Lake Mead. Identifying teleconnection patterns at a local scale is challenging, largely due to the coexistence of non-stationary and non-linear signals embedded within the ocean-atmosphere system. Empirical mode decomposition and wavelet analysis are used to extract the intrinsic trend and the dominant oscillation of the sea surface temperature (SST) and precipitation time series. After finding possible associations between the dominant oscillation of seasonal precipitation and global SST through lagged correlation analysis, the statistically significant index regions in the oceans are extracted. With these characterized associations, the individual contributions of the SST forcing regions linked to the related precipitation responses are further quantified using an extreme learning machine. Results indicate that non-leading SST regions also contribute saliently to terrestrial precipitation variability compared to some of the known leading SST regions, and they confirm the capability of predicting hydrological drought events one season ahead of time. With such an integrated advancement, an early warning system can be constructed to bridge the current gap in source water monitoring for water supply.
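The lagged-correlation screening of teleconnection signals can be sketched as follows: a seasonal SST index is correlated with a precipitation series over a range of lead times, and the lag with the strongest association marks a candidate forcing region. The series below are synthetic placeholders, and the EMD/wavelet preprocessing used in the study is omitted.

```python
import numpy as np

def lagged_correlation(sst_index, precip, max_lag):
    """Pearson correlation for SST leading precipitation by 0..max_lag seasons."""
    n = len(sst_index)
    corrs = []
    for lag in range(max_lag + 1):
        # Pair SST at season t with precipitation at season t + lag.
        corrs.append(np.corrcoef(sst_index[: n - lag], precip[lag:])[0, 1])
    return np.array(corrs)

# Hypothetical 40-season series where precipitation lags the SST index by two seasons.
sst = np.random.randn(40)
pre = 0.5 * np.roll(sst, 2) + 0.5 * np.random.randn(40)
print(lagged_correlation(sst, pre, max_lag=4))
```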
185

Making Sense Out of Uncertainty in Geospatial Data

Foy, Andrew Scott 31 August 2011 (has links)
Uncertainty in geospatial data fusion is a major concern for scientists because society's use of geospatial technology is increasing and generalization is inherent to geographic representations. Limited research exists on the quality of results that come from the fusion of geographic data, yet there is extensive literature on uncertainty in cartography, GIS, and geospatial data. The uncertainties are difficult to understand because the datasets being overlaid differ in scope, time, class, accuracy, and precision. There is a need for a set of tools that can manage uncertainty and incorporate it into the overlay process. This research explores uncertainty in spatial data, GIS, and GIScience via three papers. The first paper introduces a framework for classifying and modeling error bands in a GIS. The second paper tests GIS users' ability to estimate spatial confidence intervals, and the third paper looks at the practical application of a set of tools for incorporating uncertainty into overlays. The results from this research indicate that it is hard for people to agree on an error-band classification based on their interpretation of metadata. However, people are good estimators of data quality and uncertainty if they follow a systematic approach and use their average estimate to define spatial confidence intervals. The framework and toolset presented in this dissertation have the potential to alter how people interpret and use geospatial data. The hope is that these results prompt inquiry into, and call into question, the reliability of all simple overlays. Many situations exist in which this research has relevance, making the framework, the tools, and the methods important to a wide variety of disciplines that use spatial analysis and GIS. / Ph. D.
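One way to picture the error-band overlay idea is the sketch below, which buffers two point features by their assumed positional uncertainties and reports how much their error bands overlap. The coordinates and buffer radii are hypothetical, and this shapely-based workflow is only an illustration, not the toolset developed in the dissertation.

```python
from shapely.geometry import Point

# Hypothetical features from datasets with different positional accuracies;
# each point is expanded to an error band before the overlay.
well = Point(100.0, 200.0).buffer(5.0)     # +/- 5 m positional uncertainty
parcel = Point(103.0, 198.0).buffer(12.0)  # +/- 12 m positional uncertainty

overlap = well.intersection(parcel)
# Report how much of the smaller error band the overlap covers, as a rough
# confidence that the two features actually coincide.
print(overlap.area / min(well.area, parcel.area))
```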
186

Fundus-DeepNet: Multi-Label Deep Learning Classification System for Enhanced Detection of Multiple Ocular Diseases through Data Fusion of Fundus Images

Al-Fahdawi, S., Al-Waisy, A.S., Zeebaree, D.Q., Qahwaji, Rami, Natiq, H., Mohammed, M.A., Nedoma, J., Martinek, R., Deveci, M. 29 September 2023 (has links)
Detecting multiple ocular diseases in fundus images is crucial in ophthalmic diagnosis. This study introduces the Fundus-DeepNet system, an automated multi-label deep learning classification system designed to identify multiple ocular diseases by integrating feature representations from pairs of fundus images (e.g., left and right eyes). The study begins with a comprehensive image pre-processing procedure, including circular border cropping, image resizing, contrast enhancement, noise removal, and data augmentation. Subsequently, discriminative deep feature representations are extracted using multiple deep learning blocks, namely the High-Resolution Network (HRNet) and an Attention Block, which serve as feature descriptors. The SENet Block is then applied to further enhance the quality and robustness of the feature representations from a pair of fundus images, ultimately consolidating them into a single feature representation. Finally, a sophisticated classification model, a Discriminative Restricted Boltzmann Machine (DRBM), is employed. By incorporating a Softmax layer, the DRBM generates a probability distribution over eight different ocular diseases. Extensive experiments were conducted on the challenging Ophthalmic Image Analysis-Ocular Disease Intelligent Recognition (OIA-ODIR) dataset, comprising diverse fundus images depicting eight different ocular diseases. The Fundus-DeepNet system achieved F1-scores, Kappa scores, AUC, and final scores of 88.56%, 88.92%, 99.76%, and 92.41% on the off-site test set, and 89.13%, 88.98%, 99.86%, and 92.66% on the on-site test set. In summary, the Fundus-DeepNet system exhibits outstanding proficiency in accurately detecting multiple ocular diseases, offering a promising solution for early diagnosis and treatment in ophthalmology. / European Union under the REFRESH – Research Excellence for Region Sustainability and High-tech Industries project number CZ.10.03.01/00/22_003/0000048 via the Operational Program Just Transition. The Ministry of Education, Youth, and Sports of the Czech Republic - Technical University of Ostrava, Czechia under Grants SP2023/039 and SP2023/042.
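The SENet block mentioned above follows the standard squeeze-and-excitation pattern; a minimal PyTorch sketch is shown below with hypothetical channel counts, not the exact configuration used in Fundus-DeepNet.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: channel-wise feature recalibration."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel weights
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight the feature maps

# Hypothetical feature map from an upstream feature extractor: batch of 2, 256 channels.
feats = torch.randn(2, 256, 14, 14)
print(SEBlock(256)(feats).shape)
```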
187

Light-weighted Deep Learning for LiDAR and Visual Odometry Fusion in Autonomous Driving

Zhang, Dingnan 20 December 2022 (has links)
No description available.
188

Estimating a Boat’s Vertical Velocity with Unpositioned 6DOF IMU:s : How sensor fusion and knowledge of the system dynamics can be used to estimate the IMU positions and produce fused estimates

Sjöblom, Jesper January 2023 (has links)
Longline fishing is a method of fishing that uses baited hooks to catch fish in an environmentally friendly way. To reduce the amount of catch lost during longline fishing, it is of great interest to keep an even tension on the fishing line. This can be done by estimating the speed at the point of interest (POI) at which the fishing line is attached to the boat. Due to the harsh conditions at sea, it is not recommended to put any sensors directly at that point. The aim of this thesis was to explore whether it is possible to estimate the vertical speed at the POI by having sensors that measure linear acceleration and angular velocity at various unknown places in the boat. The sensors were placed at various locations in a simulated boat, after which their orientations and positions were calculated using a nonlinear least-squares method. Once the sensors were positioned, an Extended Kalman Filter (EKF) was implemented for each sensor, and the speed of the POI was calculated as the fused estimate of all EKFs. By varying the number of sensors and their sampling times, the best compromise between accuracy, computational load, and number of sensors was found. The results show that it is fully possible to estimate the vertical speed of the POI using only four 6DOF IMUs with a sampling time of 50 or 100 ms, depending on how accurate the user wants the estimated sensor positions to be. However, there are still many ways in which the method could be improved to get a better estimate.
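Two pieces of this pipeline can be sketched compactly: transferring a sensor's velocity to the POI through the rigid-body lever-arm relation v_poi = v_sensor + ω × r, and fusing the per-sensor estimates. Inverse-variance weighting is used here as a generic fusion rule, and all numbers are invented placeholders rather than results from the thesis.

```python
import numpy as np

def poi_vertical_speed(v_sensor, omega, r_sensor_to_poi):
    """Transfer a sensor's velocity to the point of interest on a rigid body:
    v_poi = v_sensor + omega x r (lever-arm effect); return the vertical component."""
    return (np.asarray(v_sensor) + np.cross(omega, r_sensor_to_poi))[2]

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of per-sensor estimates."""
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

# Hypothetical per-sensor EKF outputs: velocity, angular rate, estimated lever arm to the POI.
sensors = [
    (np.array([0.1, 0.0, 0.40]), np.array([0.0, 0.05, 0.0]), np.array([2.0, 0.0, 1.0])),
    (np.array([0.1, 0.0, 0.43]), np.array([0.0, 0.05, 0.0]), np.array([-1.0, 0.5, 1.0])),
]
speeds = [poi_vertical_speed(v, w, r) for v, w, r in sensors]
print(fuse(speeds, variances=[0.03, 0.05]))
```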
189

Improving the guidance, navigation and control design of the KNATTE platform

Lundström, Lars January 2023 (has links)
For complex satellite missions that rely on agile and high-precision manoeuvres, the low-friction aspect of the space environment is a critical component in understanding the attitude control dynamics of the spacecraft. The Kinesthetic Node and Autonomous Table-Top Emulator (KNATTE) is a three-degree-of-freedom frictionless vehicle that serves as the foundation of a multipurpose platform for real-time spacecraft hardware-in-the-loop experiments, and allows emulation of these conditions in two dimensions with the purpose of validating various guidance, navigation, and control algorithms. The data acquisition of the vehicle depends on a computer vision system (CVS) that yields position and attitude data, but also suffers from unpredictable blackout events. To complement such measurements, KNATTE incorporates an inertial measurement unit (IMU) that yields accelerometer, gyroscope, and magnetometer data. This study describes a multisensor data fusion approach to obtain accurate attitude information by combining the measurements from the CVS and the IMU using nonlinear Kalman filter algorithms. To do this, the data fusion algorithms are developed and tested in a Matlab/Simulink environment. They are then adapted to the KNATTE platform and their performance is confirmed under various conditions. Through this work, the accuracy and efficiency of the approach are verified by numerical simulation and real-time experiments. In addition, the quality of the CVS measurements is further improved by introducing a neural network into the image processing pipeline of the original system.
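The CVS/IMU fusion can be illustrated with a one-state Kalman filter on the yaw angle: the gyroscope drives the prediction at every step, and the CVS yaw corrects it whenever a measurement is available, bridging blackout intervals by dead reckoning. This scalar sketch with made-up noise parameters stands in for the full nonlinear filters developed in the thesis.

```python
import numpy as np

def fuse_attitude(gyro_rate, cvs_yaw, dt=0.02, q=1e-4, r=1e-2):
    """One-state Kalman filter for yaw: predict with gyro integration,
    update with the computer-vision yaw whenever it is available (not NaN)."""
    yaw, P = 0.0, 1.0
    out = []
    for w, z in zip(gyro_rate, cvs_yaw):
        yaw += w * dt          # predict with the gyroscope
        P += q
        if not np.isnan(z):    # CVS measurement available (no blackout)
            K = P / (P + r)
            yaw += K * (z - yaw)
            P *= (1 - K)
        out.append(yaw)
    return np.array(out)

# Hypothetical data: constant 0.1 rad/s turn, CVS blackout between samples 50 and 80.
n = 200
gyro = 0.1 + 0.01 * np.random.randn(n)
truth = 0.1 * 0.02 * np.arange(1, n + 1)
cvs = truth + 0.05 * np.random.randn(n)
cvs[50:80] = np.nan
print(fuse_attitude(gyro, cvs)[-1], truth[-1])
```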
190

EXPLORING BRAIN CONNECTIVITY USING A FUNCTIONAL-STRUCTURAL IMAGING FUSION PIPELINE

Ayyash, Sondos January 2021 (has links)
In this thesis we were interested in combining functional connectivity (from functional Magnetic Resonance Imaging) and structural connectivity (from Diffusion Tensor Imaging) with a data fusion approach. While data fusion approaches provide an abundance of information, they are underutilized due to their complexity. To address this problem, we integrated the ease of use of a neuroimaging toolbox, the Functional And Tractographic Analysis Toolbox (FATCAT), with a data fusion approach known as anatomically weighted functional connectivity (awFC) to produce a practical and more efficient pipeline. Using this novel pipeline, we studied connectivity within resting-state networks (RSNs) in different populations. For comparison with the awFC findings, we performed separate analyses with traditional structural and functional connectivity across all three projects. In the first study we evaluated the awFC of participants with major depressive disorder (MDD) compared to controls. We observed significant connectivity differences in the default mode network (DMN) and the ventral attention network (VAN). In the second study we examined the awFC of MDD remitters compared to non-remitters at baseline and week 8 (post-antidepressant), and evaluated awFC in remitters longitudinally from baseline to week 8. We found significant group differences in the DMN, VAN, and frontoparietal network (FPN) between remitters and non-remitters at week 8, as well as significant longitudinal awFC changes from baseline to week 8 in the dorsal attention network (DAN) and FPN. We also tested the associations between connectivity strength and cognition. In the third study we examined awFC in children exposed to pre- and postnatal adversity compared to controls. We observed significant differences in the DMN, FPN, VAN, DAN, and limbic network (LIM). We also assessed the association between connectivity strength in middle childhood and motor and behavioural scores at age 3. The FATCAT-awFC pipeline we designed was therefore capable of identifying group differences in RSNs in a practical and more efficient manner. / Thesis / Doctor of Philosophy (PhD)
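A schematic of the idea behind anatomically weighted functional connectivity is sketched below: each functional edge is rescaled by a factor that grows with the corresponding structural edge strength. The weighting rule and the small matrices are illustrative assumptions only, not the actual awFC formulation used in the toolbox.

```python
import numpy as np

def awfc_sketch(fc, sc, alpha=0.5):
    """Schematic anatomically weighted functional connectivity: scale each
    functional edge by a term that grows with the normalised structural edge
    strength. This is an illustrative weighting, not the exact awFC rule."""
    sc_norm = sc / (sc.max() if sc.max() > 0 else 1.0)
    return fc * (alpha + (1.0 - alpha) * sc_norm)

# Hypothetical 4-region network: fMRI correlation matrix and DTI streamline counts.
fc = np.array([[1.0, 0.6, 0.2, 0.1],
               [0.6, 1.0, 0.3, 0.2],
               [0.2, 0.3, 1.0, 0.5],
               [0.1, 0.2, 0.5, 1.0]])
sc = np.array([[0, 120, 10, 5],
               [120, 0, 40, 15],
               [10, 40, 0, 90],
               [5, 15, 90, 0]], dtype=float)
print(awfc_sketch(fc, sc))
```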
