81

Assessing change in the Earth's land surface albedo with moderate resolution satellite imagery

Sun, Qingsong 12 March 2016 (has links)
Land surface albedo describes the proportion of incident solar radiant flux that is reflected from the Earth's surface and is therefore a crucial parameter in modeling and monitoring efforts to capture the current climate, hydrological, and biogeochemical cycles and to predict future scenarios. Due to the temporal variability and spatial heterogeneity of land surface albedo, remote sensing offers the only realistic method of monitoring albedo on a global scale. While the distribution of bright, highly reflective surfaces (clouds, snow, deserts) governs the vast majority of the fluctuation, variations in the intrinsic surface albedo due to natural and human disturbances such as urban development, fire, pests, harvesting, grazing, flooding, and erosion, as well as the natural seasonal rhythm of vegetation phenology, play a significant role as well. The development of time series of global snow-free and cloud-free albedo from remotely sensed observations over the past decade and a half offers a unique opportunity to monitor and assess the impact of these alterations to the Earth's land surface. By utilizing multiple satellite records from the Moderate Resolution Imaging Spectroradiometer (MODIS), the Multi-angle Imaging Spectroradiometer (MISR) and the Visible Infrared Imaging Radiometer Suite (VIIRS) instruments, and developing innovative spectral conversion coefficients and temporal gap-filling strategies, it has been possible to utilize the strengths of the various sensors to improve the spatial and temporal coverage of global land surface albedo retrievals. The availability of these products is particularly important in tropical regions where cloud cover obscures the forest for significant periods. In the Amazon, field ecologists have noted that some areas of the forest ecosystem respond rapidly with foliage growth at the beginning of the dry season, when sunlight can finally penetrate fully to the surface, and have suggested that this phenomenon can continue until reductions in water availability (particularly in times of drought) impact the growth cycle. While it has been difficult to capture this variability from individual optical satellite sensors, the temporally gap-filled albedo products developed during this research are used in a case study to monitor the Amazon during the dry season and identify the extent of these regions of foliage growth.
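As a rough illustration of what temporal gap-filling means in practice (this is not the multi-sensor strategy developed in the thesis; the function name and the linear-interpolation choice are assumptions made for the example), the Python sketch below fills cloud-contaminated gaps in a single-pixel albedo time series from its nearest clear-sky neighbours:

```python
import numpy as np

def gap_fill_albedo(albedo, quality):
    """Fill missing albedo retrievals in a 1-D time series by linear
    interpolation between the nearest valid (cloud/snow-free) observations.

    albedo  : array of retrieved albedo values (NaN where no retrieval)
    quality : boolean array, True where the retrieval is usable
    """
    t = np.arange(len(albedo))
    valid = quality & ~np.isnan(albedo)
    if valid.sum() < 2:
        return albedo.copy()          # not enough anchors to interpolate
    filled = albedo.copy()
    filled[~valid] = np.interp(t[~valid], t[valid], albedo[valid])
    return filled

# Example: a 16-day composite albedo series with gaps from persistent cloud cover
series = np.array([0.14, np.nan, np.nan, 0.15, 0.16, np.nan, 0.18])
usable = ~np.isnan(series)
print(gap_fill_albedo(series, usable))
```

Operational products combine multiple sensors and quality flags rather than relying on simple interpolation, but the sketch shows where gap-filling sits in the processing chain.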
82

Generic support for decision-making in management and command and control

Wallenius, Klas January 2004 (has links)
Flexibility is the keyword when preparing for the uncertain future tasks of the civilian and military defence. Support tools relying on general principles will greatly facilitate flexible co-ordination and co-operation between different civilian and military organizations, and also between different command levels. Further motivations for general solutions include reduced costs for technical development and training, as well as faster and more informed decision-making. Most technical systems that support military activities are, however, designed with specific work tasks in mind, and are consequently rather inflexible. There are large differences between, for instance, fire fighting, disaster relief, calculating missile trajectories, and navigating large battle-ships. Still, there ought to be much in common in the work of managing these various tasks. We use the term Command and Control (C2) to capture these common features in the management of civilian and military, rescue and defence operations. Consequently, this thesis describes a top-down approach to support systems for decision-making in the context of C2, as a complement to the prevailing bottom-up approaches. DISCCO (Decision Support for Command and Control) is a set of network-based services including Command Support, helping commanders in the human, cooperative and continuous process of evolving, evaluating, and executing solutions to their tasks. The command tools provide the means to formulate and visualize tasks, plans, and assessments, but also the means to visualize decisions on the dynamic design of organization. Also included in DISCCO is Decision Support, which, based on AI and simulation techniques, improves the human process by integrating automatic and semiautomatic generation and evaluation of plans. The tools provided by DISCCO interact with a Common Situation Model capturing the recursive structure of the situation, including the status, the dynamic organization, and the intentions of own, allied, neutral, and hostile resources. Hence, DISCCO provides a more comprehensive situation description than has previously been possible to achieve. DISCCO shows generic features since it is designed to support a decision-making process abstracted from the actual kinds and details of the tasks that are solved. Thus it will be useful through all phases of the operation, through all command levels, and through all the different organizations and activities that are involved. Keywords: Command and Control, Management, Decision Support, Data Fusion, Information Fusion, Situation Awareness, Network-Based Defence, Ontology. / QCR 20161026
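To make the idea of a recursive Common Situation Model more concrete, the sketch below illustrates how the status, assessed intent, affiliation, and dynamic organization of resources could be nested. It is an illustration only, not the DISCCO implementation; all class and field names are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Affiliation(Enum):
    OWN = "own"
    ALLIED = "allied"
    NEUTRAL = "neutral"
    HOSTILE = "hostile"

@dataclass
class Resource:
    """One node in a recursive situation model: a unit or asset with its
    status, its assessed intent, and the sub-resources it currently commands."""
    name: str
    affiliation: Affiliation
    status: str                              # e.g. "operational", "degraded"
    intent: Optional[str] = None             # declared or assessed intention
    subordinates: List["Resource"] = field(default_factory=list)

# A toy situation: an own formation with two subordinate rescue units
formation = Resource(
    name="Own Formation", affiliation=Affiliation.OWN, status="operational",
    intent="secure area",
    subordinates=[
        Resource("Rescue Unit A", Affiliation.OWN, "operational"),
        Resource("Fire Fighting Unit B", Affiliation.OWN, "degraded"),
    ],
)
```

Because each node can itself contain subordinates, the same structure covers any command level, which is the kind of task-independence the thesis argues for.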
83

Multimodal Data Fusion Using Voice and Electromyography Data for Robotic Control

Khan Mohd, Tauheed 06 September 2019 (has links)
No description available.
84

Data Fusion Process Refinement in Intrusion Detection Alert Correlation Systems

Sheets, David January 2008 (has links)
No description available.
85

Machine Learning and Data Fusion of Simulated Remote Sensing Data

Higgins, Erik Tracy 27 July 2023 (has links)
Modeling and simulation tools are described and implemented in a single workflow to develop a means of simulating a ship wake followed by simulated synthetic aperture radar (SAR) and infra-red (IR) images of these ship wakes. A parametric study across several different ocean environments and simulated remote sensing platforms is conducted to generate a preliminary data set that is used for training and testing neural network-based ship wake detection models. Several different model architectures are trained and tested, which are able to provide a high degree of accuracy in classifying whether input SAR images contain a persistent ship wake. Several data fusion models are explored to understand how fusing data from different SAR bands may improve ship wake detection, with some combinations of neural networks and data fusion models achieving perfect or near-perfect performance. Finally, an outline for a future study into multi-physics data fusion across multiple sensor modalities is created and discussed. / Doctor of Philosophy / This dissertation focuses on using computer simulations to first simulate the wakes of ships on the ocean surface, and then simulate airborne or satellite-based synthetic aperture radar (SAR) and infra-red (IR) images of these ship wakes. These images are used to train machine learning models that can be given a SAR or IR image of the ocean and determine whether or not the image contains a ship wake. The testing shows good preliminary results, and some models are able to detect ship wakes in simulated SAR images with a high degree of accuracy. Data fusion models are then created which seek to fuse data sources together in order to improve ship wake detection. These data fusion models are tested using the simulated SAR images, and some of these data fusion models show a positive impact on ship wake detection. Next steps for future research are documented, such as data fusion of SAR and IR data in order to study how fusion of these sensors impacts ship wake detection compared to just a single SAR sensor or multiple SAR sensors fused together.
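One of the simplest ways to combine information from two SAR bands is decision-level fusion of the per-band detector outputs. The sketch below illustrates that general idea only, not the architectures studied in the dissertation; the band names and weights are assumptions for the example:

```python
import numpy as np

def fuse_band_scores(scores, weights=None):
    """Decision-level fusion: combine per-band wake-detection scores
    (each in [0, 1]) into a single probability by weighted averaging."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.ones_like(scores) / len(scores)
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, scores) / weights.sum())

# Example: the X-band detector is fairly confident, the L-band detector is not
p_x, p_l = 0.82, 0.35
fused = fuse_band_scores([p_x, p_l], weights=[0.6, 0.4])
print(f"fused wake probability: {fused:.2f}")   # 0.63
```

Feature- or intermediate-level fusion instead combines the branches before the final classifier; the dissertation explores several such combinations of networks and fusion models.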
86

Annotation, Enrichment and Fusion of Multiscale Data: Identifying High Risk Prostate Cancer

Singanamalli, Asha 21 February 2014 (has links)
No description available.
87

Self-Supervised Remote Sensing Image Change Detection and Data Fusion

Chen, Yuxing 27 November 2023 (has links)
Self-supervised learning models, often called foundation models, have achieved great success in computer vision. Meanwhile, the limited access to labeled data has driven the development of self-supervised methods for remote sensing tasks. In remote sensing image change detection, generative models are extensively utilized in unsupervised binary change detection tasks, but they overly focus on pixels rather than on abstract feature representations. In addition, state-of-the-art satellite image time series change detection approaches fail to effectively leverage the spatial-temporal information of image time series or to generalize well to unseen scenarios. Similarly, in the context of multimodal remote sensing data fusion, the recent successes of deep learning techniques mainly focus on specific tasks and complete data fusion paradigms. These task-specific models lack generalizability to other remote sensing tasks and become overfitted to the dominant modalities. Moreover, they fail to handle incomplete modality inputs and experience severe degradation in downstream tasks. To address these challenges associated with individual supervised learning models, this thesis presents two novel contributions to self-supervised learning for remote sensing image change detection and multimodal remote sensing data fusion. The first contribution proposes a bi-temporal/multi-temporal contrastive change detection framework, which employs a contrastive loss on image patches or superpixels to obtain fine-grained change maps and incorporates an uncertainty method to enhance temporal robustness. In the context of satellite image time series change detection, the proposed approach improves the consistency of pseudo labels through feature tracking and tackles the challenges posed by seasonal changes in long-term remote sensing image time series using a supervised contrastive loss and a random walk loss in ConvLSTM. The second contribution develops a self-supervised multimodal RS data fusion framework, with a specific focus on addressing the incomplete multimodal RS data fusion challenges in downstream tasks. Within this framework, multimodal RS data are fused by applying a multi-view contrastive loss at the pixel level and reconstructing each modality from the others in a generative way based on MultiMAE. In downstream tasks, the proposed approach leverages a random modality combination training strategy and an attention block to enable fusion across modal-incomplete inputs. The thesis assesses the effectiveness of the proposed self-supervised change detection approach on single-sensor and cross-sensor datasets of SAR and multispectral images, and evaluates the proposed self-supervised multimodal RS data fusion approach on a multimodal RS dataset with SAR, multispectral images, DEM, and LULC maps. The self-supervised change detection approach demonstrates improvements over state-of-the-art unsupervised change detection methods in challenging scenarios involving multi-temporal and multi-sensor RS image change detection. Similarly, the self-supervised multimodal remote sensing data fusion approach achieves the best performance by employing an intermediate fusion strategy on SAR and optical image pairs, outperforming existing unsupervised data fusion approaches.
Notably, in incomplete multimodal fusion tasks, the proposed method exhibits impressive performance on all modal-incomplete and single modality inputs, surpassing the performance of vanilla MultiViT, which tends to overfit on dominant modality inputs and fails in tasks with single modality inputs.
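For readers unfamiliar with patch-level contrastive learning, the PyTorch sketch below shows the basic building block in isolation: an InfoNCE-style loss in which co-located patches from the two acquisition dates form positive pairs and all other patches in the batch act as negatives. It is a simplification for illustration only (the thesis additionally uses superpixels, uncertainty estimation, feature tracking, and ConvLSTM); the function name and temperature value are assumptions:

```python
import torch
import torch.nn.functional as F

def patch_info_nce(feat_t1, feat_t2, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss for bi-temporal patch features.

    feat_t1, feat_t2 : (N, D) embeddings of the same N patch locations at
    two acquisition dates.  Co-located patches are positives; all other
    patches in the batch serve as negatives.
    """
    z1 = F.normalize(feat_t1, dim=1)
    z2 = F.normalize(feat_t2, dim=1)
    logits = z1 @ z2.t() / temperature        # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Toy example with random 128-D features for 16 patches
loss = patch_info_nce(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```

After training, a low similarity between the two dates' embeddings at a location is what signals change, which is how fine-grained change maps are derived without labels.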
88

Analysis of Dryland Forest Phenology using Fused Landsat and MODIS Satellite Imagery

Walker, Jessica 24 October 2012 (has links)
This dissertation investigated the practicality and expediency of applying remote sensing data fusion products to the analysis of dryland vegetation phenology. The objective of the first study was to verify the quality of the output products of the spatial and temporal adaptive reflectance fusion method (STARFM) over the dryland Arizona study site. Synthetic 30 m resolution images were generated from Landsat-5 Thematic Mapper (TM) data and a range of 500 m Moderate Resolution Imaging Spectroradiometer (MODIS) surface reflectance datasets and assessed via correlation analysis with temporally coincident Landsat-5 imagery. The accuracy of the results (0.61 < R² < 0.94) justified subsequent use of STARFM data in this environment, particularly when the imagery was generated from Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) MODIS datasets. The primary objective of the second study was to assess whether synthetic Landsat data could contribute meaningful information to the phenological analyses of a range of dryland vegetation classes. Start-of-season (SOS) and date of peak greenness phenology metrics were calculated for each STARFM and MODIS pixel on the basis of enhanced vegetation index (EVI) and normalized difference vegetation index (NDVI) time series over a single growing season. The variability of each metric was calculated for all STARFM pixels within 500 m MODIS extents. Colorado Plateau Pinyon Juniper displayed high amounts of temporal and spatial variability that justified the use of STARFM data, while the benefit to the remaining classes depended on the specific vegetation index and phenology metric. The third study expanded the STARFM time series to five years (2005-2009) to examine the influence of site characteristics and climatic conditions on dryland ponderosa pine (Pinus ponderosa) forest phenological patterns. The results showed that elevation and slope controlled the variability of peak timing across years, with lower elevations and shallower slopes linked to higher levels of variability. During drought conditions, the number of site variables that controlled the timing and variability of vegetation peak increased. / Ph. D.
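To give a concrete sense of what a start-of-season (SOS) metric is, the sketch below derives SOS for one pixel as the first date its vegetation index rises above a fixed fraction of the seasonal amplitude. This threshold formulation is a common simplification and not necessarily the algorithm used in the dissertation; the function name, the 0.5 amplitude fraction, and the synthetic green-up curve are assumptions:

```python
import numpy as np

def start_of_season(doy, evi, amplitude_fraction=0.5):
    """Estimate start of season (SOS) as the day of year at which the EVI
    time series first rises above a fixed fraction of its seasonal amplitude.

    doy : day-of-year for each observation (assumed sorted)
    evi : vegetation index values for one pixel over one growing season
    """
    evi = np.asarray(evi, dtype=float)
    threshold = evi.min() + amplitude_fraction * (evi.max() - evi.min())
    above = np.where(evi >= threshold)[0]
    return None if above.size == 0 else int(doy[above[0]])

doy = np.arange(1, 366, 16)                                   # 16-day composites
evi = 0.2 + 0.3 * np.exp(-((doy - 220) / 40.0) ** 2)          # synthetic green-up
print(start_of_season(doy, evi))
```

Running the same metric on every 30 m STARFM pixel inside a 500 m MODIS cell is what allows the within-cell variability comparison described above.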
89

Data Fusion For Improved TOA/TDOA Position Determination in Wireless Systems

Reza, Rahman Iftekhar 14 November 2000 (has links)
The Federal Communications Commission (FCC), which regulates all wireless communication service providers, has issued modified regulations requiring all service providers to select, by October 2000, a method for providing position location (PL) information for a user requesting E-911 service. The wireless 911 rules adopted by the FCC are aimed both at improving the reliability of wireless 911 services and at providing the enhanced features generally available for wireline calls. From the service providers' perspective, effective position location technologies must be utilized to meet the FCC rules. The Time-of-Arrival (TOA) and Time-Difference-of-Arrival (TDOA) methods are technologies that can provide accurate PL information without necessitating excessive hardware or software changes to the existing cellular/PCS infrastructure. The TOA method works well when the mobile station (MS) is located close to the controlling base station. With certain corrections applied, the TOA method can perform reliably even in the presence of Non-Line-of-Sight (NLOS) conditions. The TDOA method performs better when the MS is located at a significant distance from the controlling base station. However, under NLOS conditions, the performance of the TDOA method degrades significantly. The fusion of the TOA and TDOA methods exhibits certain advantages that are not evident when only one of the methods is applied. This thesis investigates the performance of data fusion techniques for a PL system that are able to merge independent estimates obtained from TOA and TDOA measurements. A channel model is formulated for evaluating PL techniques within an NLOS cellular environment. It is shown that NLOS propagation can introduce a bias into TDOA measurements. A correction method is proposed for removing this bias, and the new corrected data fusion techniques are compared with previous techniques via simulation, yielding favorable results. / Master of Science
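One standard way to merge two independent position estimates, such as one obtained from TOA and one from TDOA measurements, is inverse-covariance weighting. The sketch below illustrates that general principle only; it is not claimed to be the specific fusion technique evaluated in the thesis, and the example positions and covariances are assumptions:

```python
import numpy as np

def fuse_position_estimates(p_toa, cov_toa, p_tdoa, cov_tdoa):
    """Fuse two independent position estimates (e.g. TOA- and TDOA-based)
    by inverse-covariance (information-filter style) weighting.

    p_*   : 2-vector position estimates (x, y) in metres
    cov_* : 2x2 error covariance matrices of the estimates
    """
    w_toa = np.linalg.inv(cov_toa)
    w_tdoa = np.linalg.inv(cov_tdoa)
    cov_fused = np.linalg.inv(w_toa + w_tdoa)
    p_fused = cov_fused @ (w_toa @ p_toa + w_tdoa @ p_tdoa)
    return p_fused, cov_fused

# Toy example: the TOA estimate is more reliable in x, the TDOA estimate in y
p1, c1 = np.array([105.0, 198.0]), np.diag([25.0, 400.0])
p2, c2 = np.array([98.0, 202.0]), np.diag([400.0, 25.0])
p, c = fuse_position_estimates(p1, c1, p2, c2)
print(p)        # fused estimate lies close to [104.6, 201.8]
```

The estimate with the smaller error covariance in a given direction dominates the fused result in that direction, which is why TOA/TDOA fusion can help when each method is reliable under different geometries.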
90

Sensordatafusion av IR- och radarbilder / Sensor data fusion of IR- and radar images

Schultz, Johan January 2004 (has links)
This report describes and evaluates a number of algorithms for multi-sensor data fusion of radar and IR/TV data at the raw data level. Raw data fusion means that the fusion takes place before attribute or object extraction. Attribute extraction can cause information to be lost that could otherwise have improved the fusion. If the fusion is performed at the raw data level, more information is available, which could lead to improved attribute extraction in a later step. Two approaches are presented. The first method projects the radar image to the IR view and vice versa; the fusion is then performed on the pairs of images sharing the same dimensions. The second method fuses the two original images into a volume spanned by the three dimensions represented in the source images. The method is also extended by exploiting stereo vision. The results show that exploiting stereo vision can be worthwhile, since the extra information facilitates the fusion and gives a more general solution to the problem. / This thesis describes and evaluates a number of algorithms for multi-sensor fusion of radar and IR/TV data. The fusion is performed at the raw data level, that is, prior to attribute extraction. The idea is that less information will be lost compared to attribute-level fusion. Two methods are presented. The first method transforms the radar image to the IR view and vice versa. The images sharing the same dimension are then fused together. The second method fuses the original images into a three-dimensional volume. Another version is also presented, where stereo vision is used. The results show that stereo vision can be used with good performance and gives a more general solution to the problem.
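The first approach, resampling one sensor's image into the other's pixel grid before fusing, can be illustrated with a minimal sketch. The affine mapping, nearest-neighbour lookup, and fusion-by-averaging below are simplifying assumptions for the example (a real radar-to-IR projection depends on the sensor geometries) and are not the algorithms evaluated in the thesis:

```python
import numpy as np

def project_to_view(src, affine, out_shape):
    """Resample a source image (e.g. a radar image) into another sensor's
    pixel grid (e.g. the IR view) using a known affine mapping and
    nearest-neighbour lookup.  Output pixels that map outside the source
    image are set to 0.

    affine : 2x3 matrix mapping output (row, col, 1) -> source (row, col)
    """
    rows, cols = np.indices(out_shape)
    coords = np.stack([rows, cols, np.ones_like(rows)]).reshape(3, -1)
    src_rc = np.rint(affine @ coords).astype(int)          # (2, H*W)
    r, c = src_rc
    inside = (r >= 0) & (r < src.shape[0]) & (c >= 0) & (c < src.shape[1])
    out = np.zeros(r.shape, dtype=float)
    out[inside] = src[r[inside], c[inside]]
    return out.reshape(out_shape)

# Toy example: shift a 4x4 "radar" image down by one row into the "IR" grid,
# then fuse it with a (here random) IR image by simple pixel averaging.
radar = np.arange(16.0).reshape(4, 4)
shift = np.array([[1.0, 0.0, -1.0],
                  [0.0, 1.0, 0.0]])
radar_in_ir_view = project_to_view(radar, shift, (4, 4))
ir = np.random.rand(4, 4)
fused = 0.5 * radar_in_ir_view + 0.5 * ir
```

Once both images share a pixel grid, any pixel-level fusion rule can be applied; the thesis's second approach avoids the reprojection step by stacking the original images into a common volume instead.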
