181

Geospatial integrated urban flood mapping and vulnerability assessment

Islam, MD Tazmul, 08 December 2023 (has links) (PDF)
Natural disasters such as flooding have long been a major problem for countries around the world, but as the global climate changes and urban populations keep growing, the threat of flooding has become far worse. Although many studies have addressed flood mapping and vulnerability assessment in urban areas, this research addresses a significant knowledge gap in the domain. First, a flood depth estimation approach was used to address the overestimation of urban flood extents mapped from Sentinel-1 images. Ten different combinations of the two initial VH and VV polarizations were used to rapidly and accurately map urban floods with four different methods on the open-source Google Earth Engine platform. Including flood depth improved the accuracy of these methods by 7% on average. Next, we focused on identifying who is most at risk in floodplain areas. Minority communities, such as African Americans, face greater difficulties as a result of socioeconomic constraints, so we analyzed spatial and temporal changes in demographic (racial) patterns in five southern US cities. We found that in the majority of these cities the minority population within the floodplain has increased over the past two decades, with the exception of Charleston, South Carolina, where the white population has increased while the minority population has decreased. Building on these insights, we included additional socioeconomic and demographic variables to develop a more holistic view of vulnerable populations in two of these cities (Jackson and Birmingham). Because of high autocorrelation among the explanatory variables, we used Principal Component Analysis (PCA) together with global and local regression techniques to determine how much these variables explain the vulnerability. Our findings indicate that the spatial components play a significant role in explaining vulnerability in greater detail. The results of this research can serve as an important resource for policymakers, urban planners, and emergency response agencies to make informed decisions in future events and to enhance overall resilience.
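As a rough illustration of the kind of dual-polarization Sentinel-1 workflow the abstract describes, the sketch below builds a simple change-based flood mask with the Google Earth Engine Python API. The area of interest, date ranges, thresholds, and the specific VH/VV combination are assumptions for illustration, not values taken from the thesis.

```python
# A minimal sketch (not the thesis workflow) of a dual-polarization Sentinel-1
# flood mask in the Google Earth Engine Python API. The AOI, date ranges, and
# thresholds below are illustrative assumptions.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-90.25, 32.25, -90.10, 32.40])  # hypothetical AOI

def s1_composite(start, end):
    """Median VV/VH composite of Sentinel-1 GRD scenes for a date range."""
    col = (ee.ImageCollection('COPERNICUS/S1_GRD')
           .filterBounds(region)
           .filterDate(start, end)
           .filter(ee.Filter.eq('instrumentMode', 'IW'))
           .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
           .select(['VV', 'VH']))
    return col.median()

before = s1_composite('2020-01-01', '2020-01-31')
after = s1_composite('2020-02-10', '2020-02-20')

# A backscatter drop combined with low post-event VH as a crude open-water
# indicator; the -3 dB change and -15 dB VH thresholds are placeholders.
change = after.select('VH').subtract(before.select('VH'))
flood_mask = change.lt(-3).And(after.select('VH').lt(-15))

print(flood_mask.getInfo()['bands'])  # inspect the resulting single-band mask
```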
182

Urban Landscape Assessment of the Mississippi and Alabama Gulf Coast using Landsat Imagery 1973-2015

Sherif, Abdalla R 10 August 2018 (has links)
This study conducts an assessment of land cover change in the Mississippi and Alabama coastal region, an integral part of the Gulf Coast's ecological makeup. Supervised classification was performed on Landsat imagery captured from 1973 to 2015 by the Landsat 1-2 Multispectral Scanner (MSS), Landsat 4-5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper (ETM+), and Landsat 8 Operational Land Imager (OLI) sensors. The objective is to build a long-term assessment of urban development and land cover change over the past four decades for the Alabama and Mississippi Gulf Coast and to characterize these changes using Landscape Metrics (LM). The findings indicate that urban land cover doubled in size between 1973 and 2015. This expansion was accompanied by a high degree of urban fragmentation during the first half of the study period, followed by a gradual leveling off. Local, state, and federal authorities can use the results to build mitigation plans and guide coastal development planning, and the results can serve as a primary evaluation of current urban development for city planners, environmental advocates, and community leaders seeking to reduce degradation in this environmentally sensitive coastal region.
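For illustration only, the following sketch shows how per-class area change and a basic landscape metric (urban patch count) could be computed from two classified rasters with NumPy and SciPy. The class codes, toy arrays, and pixel size are assumptions, not the study's actual data or metrics.

```python
# A minimal sketch, not the study's actual workflow: comparing two classified
# land-cover rasters and computing a simple landscape metric (urban patch count).
import numpy as np
from scipy import ndimage

URBAN = 1             # hypothetical class code for "urban"
PIXEL_AREA_HA = 0.09  # a 30 m x 30 m Landsat pixel in hectares

def class_area_ha(classified, class_code):
    """Area of one land-cover class in hectares."""
    return np.count_nonzero(classified == class_code) * PIXEL_AREA_HA

def urban_patch_count(classified):
    """Number of connected urban patches (4-connectivity), a basic landscape metric."""
    _, n_patches = ndimage.label(classified == URBAN)
    return n_patches

# Toy 1973 and 2015 maps; in practice these would be read from classified GeoTIFFs.
lc_1973 = np.array([[1, 0, 0], [0, 2, 2], [0, 0, 1]])
lc_2015 = np.array([[1, 1, 0], [1, 2, 2], [0, 1, 1]])

print('urban area change (ha):', class_area_ha(lc_2015, URBAN) - class_area_ha(lc_1973, URBAN))
print('urban patches 1973 -> 2015:', urban_patch_count(lc_1973), '->', urban_patch_count(lc_2015))
```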
183

Volumetric Change Detection Using Uncalibrated 3D Reconstruction Models

Diskin, Yakov 03 June 2015 (has links)
No description available.
184

Contributions to Distributed Detection and Estimation over Sensor Networks

Whipps, Gene Thomas January 2017 (has links)
No description available.
185

Statistical Methods for Image Change Detection with Uncertainty

Lingg, Andrew James January 2012 (has links)
No description available.
186

Monitoring Land Use and Land Cover Changes in Belize, 1993-2003: A Digital Change Detection Approach

Ek, Edgar 18 December 2004 (has links)
No description available.
187

CHANGE DETECTION OF A SCENE FOLLOWING A VIEWPOINT CHANGE: MECHANISMS FOR THE REDUCED PERFORMANCE COST WHEN THE VIEWPOINT CHANGE IS CAUSED BY VIEWER LOCOMOTION

Comishen, Michael A. 10 1900 (has links)
When an observer detects changes in a scene from a viewpoint different from the learned viewpoint, a viewpoint change caused by the observer's locomotion leads to better recognition performance than a viewpoint change caused by an equivalent movement of the scene. This benefit of observer locomotion could arise from spatial updating through body-based information (Simons and Wang, 1998) or from knowledge of the change of reference direction gained through locomotion (Mou et al., 2009). The effect of such reference direction information has been demonstrated using a visual cue (e.g., a chopstick) presented during the testing phase to indicate the original learning viewpoint (Mou et al., 2009). In the current study, we re-examined the mechanisms behind this benefit of observer locomotion. Six experiments were performed using a similar change detection paradigm. Experiments 1 and 2 adopted the design of Mou et al. (2009). The results were inconsistent with those of Mou et al. (2009): even with the visual indicator, performance (accuracy and response time) in the table rotation condition was still significantly worse than in the observer locomotion condition. In Experiments 3-5, we compared performance in the normal walking condition with conditions in which the body-based information may not be reliable (disorientation or walking over a long path). The results again showed a lack of benefit from the visual indicator. Experiment 6 introduced a more salient and intrinsic reference direction: coherent object orientations. Unlike the previous experiments, performance in the scene rotation condition was similar to that in the observer locomotion condition. Overall, we showed that the body-based information available during observer locomotion may be the most prominent cue. Knowledge of the reference direction could be useful but might be effective only in limited scenarios, such as a scene with a dominant orientation. / Master of Science (MSc)
188

Analysis and Evaluation of Social Network Anomaly Detection

Zhao, Meng John 27 October 2017 (has links)
As social networks become more prevalent, there is significant interest in studying these network data, with the focus often on detecting anomalous events. This area of research is referred to as social network surveillance or social network change detection. While there are a variety of proposed methods suitable for different monitoring situations, two important issues have yet to be completely addressed in the network surveillance literature: first, performance assessment using simulated data to evaluate the statistical performance of a particular method; and second, the study of aggregated data in social network surveillance. The research presented tackles these issues in two parts: evaluation of a popular anomaly detection method, and investigation of the effects of different aggregation levels on network anomaly detection. / Ph. D.
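The sketch below illustrates, under assumptions of its own, the kind of simulation-based evaluation the abstract refers to: a sequence of random graphs with an injected change in edge probability, monitored with a simple summary statistic and Shewhart-style limits. The statistic and limits are placeholders, not the dissertation's method.

```python
# A minimal sketch of simulation-based evaluation of network change detection:
# generate random graphs, inject a change in edge probability at a known time,
# and monitor mean degree against simple control limits (illustrative choices).
import networkx as nx
import numpy as np

rng = np.random.default_rng(42)
N_NODES, N_STEPS, CHANGE_AT = 50, 60, 40
P_BASE, P_ANOMALY = 0.05, 0.10   # edge probabilities before/after the change

def mean_degree(t):
    p = P_ANOMALY if t >= CHANGE_AT else P_BASE
    g = nx.erdos_renyi_graph(N_NODES, p, seed=int(rng.integers(1_000_000)))
    return np.mean([d for _, d in g.degree()])

stats = np.array([mean_degree(t) for t in range(N_STEPS)])

# Estimate in-control behaviour from the pre-change period and flag exceedances.
mu, sigma = stats[:CHANGE_AT].mean(), stats[:CHANGE_AT].std(ddof=1)
alarms = np.where(stats > mu + 3 * sigma)[0]
print('first alarm at step:', alarms[0] if alarms.size else 'none')
```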
189

Digital State Models for Infrastructure Condition Assessment and Structural Testing

Lama Salomon, Abraham 10 February 2017 (has links)
This research introduces and applies the concept of digital state models for civil infrastructure condition assessment and structural testing. Digital state models are defined herein as any transient or permanent 3D model of an object (e.g., textured meshes and point clouds) combined with any electromagnetic radiation (e.g., visible light, infrared, X-ray) or other two-dimensional image-like representation. In this study, digital state models are built using visible light and used to document the transient state of a wide variety of structures (ranging from concrete elements to cold-formed steel columns and hot-rolled steel shear walls) and civil infrastructure (bridges). The accuracy of digital state models was validated against traditional sensors (e.g., digital caliper, crack microscope, wire potentiometer). Overall, features measured from the 3D point cloud data presented a maximum error of ±0.10 in. (±2.5 mm), and surface features (i.e., crack widths) measured from the texture information in textured polygon meshes had a maximum error of ±0.010 in. (±0.25 mm). Results showed that digital state models perform similarly across all specimen surface types and between laboratory and field experiments. It is also shown that digital state models have great potential for structural assessment by significantly improving data collection, automation, change detection, visualization, and augmented reality, with significant opportunities for commercial development. Algorithms to analyze and extract information from digital state models, such as cracks, displacement, and buckling deformation, are developed and tested. Finally, the extensive data sets collected in this effort are shared to support research in computer vision-based infrastructure condition assessment, eliminating a major obstacle to advancing this field: the absence of publicly available data sets. / Ph. D.
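As a hedged illustration of one idea above, the sketch below measures apparent change between two 3D point clouds via nearest-neighbour distances with SciPy. The toy clouds, the simulated bulge, and the tolerance are assumptions, not the dissertation's data or algorithms.

```python
# A minimal sketch of point-cloud change detection: distance from each "after"
# point to its nearest "before" neighbour. Real digital state models would come
# from photogrammetry, not random points.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Toy "before" cloud: points on a flat 1 m x 1 m panel (z = 0).
before = np.column_stack([rng.uniform(0, 1, 5000),
                          rng.uniform(0, 1, 5000),
                          np.zeros(5000)])

# Toy "after" cloud: the same panel with a 5 mm bulge in one corner.
after = before.copy()
bulge = (after[:, 0] > 0.8) & (after[:, 1] > 0.8)
after[bulge, 2] += 0.005

# Nearest-neighbour distance from every "after" point to the "before" cloud.
dist, _ = cKDTree(before).query(after)
print('max apparent change (mm):', 1000 * dist.max())
print('points above a 2.5 mm tolerance:', int((dist > 0.0025).sum()))
```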
190

Advanced deep learning based multi-temporal remote sensing image analysis

Saha, Sudipan 29 May 2020 (has links)
Multi-temporal image analysis has been widely used in many applications such as urban monitoring, disaster management, and agriculture. With the development of remote sensing technology, new generation satellite images with High/Very High spatial resolution (HR/VHR) are now available. Compared to traditional low/medium spatial resolution images, HR/VHR images allow detailed information about ground objects to be analyzed clearly. Classical methods of multi-temporal image analysis work at the pixel level and have performed well on low/medium resolution images; however, they provide sub-optimal results on new generation images because of their limited capability to model the complex spatial and spectral information in these products. Although a significant number of object-based methods have been proposed in the last decade, they depend on a suitable segmentation scale for the diverse kinds of objects present in each temporal image, so their capability to express contextual information is limited. The typical spatial properties of last-generation images emphasize the need for more flexible models of object representation. Another drawback of the traditional methods is the difficulty of transferring knowledge learned from one specific problem to another. In the last few years, an interesting development has been observed in the machine learning/computer vision field. Deep learning, especially Convolutional Neural Networks (CNNs), has shown an excellent capability to capture object-level information and to support transfer learning. By 2015, deep learning had achieved state-of-the-art performance in most computer vision tasks. Despite this success, the application of deep learning in multi-temporal image analysis progressed slowly because of the large labeled datasets required to train deep learning models. However, by the start of this PhD activity, a few works in the computer vision literature had shown that deep learning is capable of transfer learning and of training without labeled data. Thus, inspired by the success of deep learning, this thesis focuses on developing deep learning based methods for unsupervised/semi-supervised multi-temporal image analysis that combine the benefits of deep learning with those of traditional multi-temporal analysis methods. Towards this direction, the thesis first explores the research challenges of incorporating deep learning into the popular unsupervised change detection (CD) method, Change Vector Analysis (CVA), and further investigates the possibility of using deep learning for multi-temporal information extraction. The thesis specifically: i) extends the paradigm of unsupervised CVA to a novel Deep CVA (DCVA) by using a pre-trained network as a deep feature extractor; ii) extends DCVA by exploiting a Generative Adversarial Network (GAN) to remove the need for a pre-trained deep network; iii) revisits the problem of semi-supervised CD by exploiting a Graph Convolutional Network (GCN) to propagate labels from labeled pixels to unlabeled ones; and iv) extends the problem of semantic segmentation to the multi-temporal domain via unsupervised deep clustering. The effectiveness of the proposed approaches and related techniques is demonstrated in several experiments involving passive VHR (including Pleiades), passive HR (Sentinel-2), and active VHR (COSMO-SkyMed) datasets. A substantial improvement is observed over state-of-the-art shallow methods.
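A minimal sketch of the DCVA idea summarized above: extract deep features for two co-registered images with a pre-trained CNN, form the per-pixel feature difference (the deep change vector), and threshold its magnitude. The ResNet-18 backbone, the layer cut-off, and the mean-plus-two-sigma threshold are illustrative assumptions rather than the thesis configuration.

```python
# A rough DCVA-style sketch with a pre-trained CNN as deep feature extractor;
# backbone, layer depth, and threshold are placeholders, not the thesis setup.
import torch
import torchvision

# Truncated pre-trained ResNet-18 used as a deep feature extractor (up to layer1).
backbone = torchvision.models.resnet18(weights=torchvision.models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:5]).eval()

def deep_features(img):
    """img: (3, H, W) tensor in [0, 1]; returns a (C, H', W') feature map."""
    with torch.no_grad():
        return feature_extractor(img.unsqueeze(0)).squeeze(0)

# Toy bi-temporal pair; real inputs would be co-registered satellite tiles.
img_t1 = torch.rand(3, 256, 256)
img_t2 = img_t1.clone()
img_t2[:, 100:150, 100:150] += 0.5   # simulated change

# Deep change vector magnitude and a simple global threshold.
diff = deep_features(img_t2) - deep_features(img_t1)
magnitude = diff.norm(dim=0)
change_map = magnitude > magnitude.mean() + 2 * magnitude.std()
print('changed feature-map pixels:', int(change_map.sum()))
```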
