211

Estimating Arctic sea ice melt pond fraction and assessing ice type separability during advanced melt

Nasonova, Sasha January 2017 (has links)
Arctic sea ice is rapidly declining in extent, thickness, volume and age, with the majority of the decline in extent observed at the end of the melt season. Advanced melt is a thermodynamic regime characterized by the formation of melt ponds on the sea ice surface, which have a lower surface albedo (0.2-0.4) than the surrounding ice (0.5-0.7), allowing more shortwave radiation to enter the system. The loss of multiyear ice (MYI) may have a profound impact on the energy balance of the system because melt ponds on first-year ice (FYI) comprise up to 70% of the ice surface during advanced melt, compared to 40% on MYI. Despite the importance of advanced melt to the ocean-sea ice-atmosphere system, advanced melt and the extent to which winter conditions influence it remain poorly understood due to the highly dynamic nature of melt pond formation and evolution, and a lack of reliable observations during this time. In order to establish quantitative links between winter and subsequent advanced melt conditions, and to assess the effects of scale and choice of aggregation features on the relationships, three data aggregation approaches at varied spatial scales were used to compare high-resolution GeoEye-1 optical satellite images of melt-pond-covered sea ice to winter airborne laser scanner surface roughness and electromagnetic induction sea ice thickness measurements. The findings indicate that winter sea ice thickness has a strong association with melt pond fraction (fp) for both FYI and MYI. FYI winter surface roughness is correlated with fp, whereas for MYI no association with fp was found. Satellite-borne synthetic aperture radar (SAR) data are heavily relied upon for sea ice observation; however, during advanced melt the reliability of observations is reduced. In preparation for the upcoming launch of the RADARSAT Constellation Mission (RCM), the Kolmogorov-Smirnov (KS) statistical test was used to assess the ability of simulated RCM parameters and grey level co-occurrence matrix (GLCM) derived texture features to discriminate between major ice types during winter and advanced melt, with a focus on advanced melt. The RCM parameters with the highest discrimination ability, in conjunction with optimal GLCM texture features, were used as input parameters for Support Vector Machine (SVM) supervised classifications. The results indicate that steep incidence angle RCM parameters show promise for distinguishing between FYI and MYI during advanced melt, with an overall classification accuracy of 77.06%. The addition of GLCM texture parameters improved accuracy to 85.91%. This thesis provides valuable contributions to the growing body of literature on fp parameterization and SAR ice type discrimination during advanced melt. / Graduate / 2019-03-21
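As a small illustration of the kind of separability check named above, the sketch below applies a two-sample Kolmogorov-Smirnov test to two simulated backscatter distributions; the parameter values and sample sizes are hypothetical and not taken from the thesis.

```python
# Illustrative sketch: a two-sample KS test as a gauge of how separable two
# ice types are in a single (hypothetical) backscatter parameter.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Hypothetical backscatter samples (dB) for first-year and multiyear ice
fyi_sigma0 = rng.normal(loc=-18.0, scale=2.0, size=500)
myi_sigma0 = rng.normal(loc=-12.0, scale=2.5, size=500)

# KS statistic near 1 -> well-separated distributions; near 0 -> overlapping
stat, p_value = ks_2samp(fyi_sigma0, myi_sigma0)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.3g}")
```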
212

3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

Zhang, Renyuan, Cao, Siyang 17 June 2017 (has links)
In this paper, a new millimeter wave 3D imaging radar is proposed. The user only needs to move the radar along a circular track, and high-resolution 3D images can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions, and applies the inverse Radon transform to reconstruct 3D images. To improve the sensing result, a compressed sensing approach is further investigated. Simulation and experimental results illustrate the design. Because only a single transceiver circuit is needed, the paper demonstrates a light, affordable, high-resolution 3D mmWave imaging radar.
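A rough sketch of the projection/inverse-Radon reconstruction principle the paper builds on is shown below; the phantom scene and angle set are stand-ins, not the authors' data or processing chain.

```python
# Illustrative sketch: recover a 2-D reflectivity slice from angle-diverse
# projections with the inverse Radon transform.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

scene = shepp_logan_phantom()                  # stand-in reflectivity map
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
projections = radon(scene, theta=angles)       # simulated angle-diverse returns
reconstruction = iradon(projections, theta=angles)  # filtered back-projection
print(reconstruction.shape)
```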
213

Ground Deformation Related to Caldera Collapse and Ring-Fault Activity

Liu, Yuan-Kai 05 1900 (has links)
Volcanic subsidence, caused by partial emptying of magma from a subsurface reservoir, has long been observed by spaceborne radar interferometry. Monitoring long-term crustal deformation at calderas, the most notable type of volcanic subsidence, gives us insight into the spatial characteristics and hazard potential of the subsurface reservoir. Several subsiding calderas, such as the volcanoes of the Galapagos Islands, show a complex ground deformation pattern, often composed of a broad deflation signal affecting the entire edifice and a localized subsidence signal focused within the caldera floor. Although numerical or analytical models with multiple reservoirs have been proposed as an interpretation, geologically and geophysically evidenced ring structures in the subsurface are often ignored. It therefore remains debatable how mechanisms at depth relate to the deformation patterns observed near the surface. We aim to understand what kinds of activity can lead to this complex deformation. Using two complementary approaches, we study the three-dimensional geometry and kinematics of deflation processes evolving from initial subsidence to later collapse of calderas. First, analog experiments analyzed with structure-from-motion photogrammetry (SfM) and particle image velocimetry (PIV) help us relate the surface deformation to structures at depth. Second, numerical modeling using the boundary element method (BEM) simulates the characteristic deformation patterns caused by a sill-like source and a ring-fault. Our results show that the volcano-wide broad deflation is primarily caused by the emptying of the deep magma reservoir, whereas the localized deformation on the caldera floor is related to ring-faulting at a shallower depth. The architecture of the ring-fault to a large extent determines the localization of deformation at the surface. Since evidence for ring-faulting has been reported at several volcanoes, we highlight that it is vital to include ring-fault activity in numerical or analytical deformation source formulations. Ignoring the process of ring-faulting in models, and instead using multiple point sources for various magma reservoirs, will result in erroneous, and thus meaningless, estimates of the depth and volume change of the magmatic reservoir(s).
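For context, the sketch below shows the classic Mogi point-source approximation for surface deformation above a deflating reservoir, the kind of single-source model the thesis argues is insufficient on its own when ring-fault slip contributes to the signal; the depth and volume-change values are hypothetical.

```python
# Illustrative sketch: Mogi point-source vertical displacement above a
# deflating magma reservoir in an elastic half-space.
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement (m) at radial distance r (m) from a
    point source at the given depth (m) with volume change dV (m^3)."""
    return (1.0 - nu) / np.pi * dV * depth / (r**2 + depth**2) ** 1.5

r = np.linspace(0.0, 10e3, 200)        # radial distance from the caldera centre
uz = mogi_uz(r, depth=4e3, dV=-1e7)    # deflation: negative volume change
print(f"maximum subsidence: {uz.min():.3f} m")
```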
214

Synthetic Aperture Radar Imaging Simulated in MATLAB

Schlutz, Matthew 01 June 2009 (has links)
This thesis further develops a method from ongoing thesis projects with the goal of generating images using synthetic aperture radar (SAR) simulations coded in MATLAB. The project is supervised by Dr. John Saghri and sponsored by Raytheon Space and Airborne Systems. SAR is a type of imaging radar in which the relative movement of the antenna with respect to the target is utilized. Through the simultaneous processing of the radar reflections over the movement of the antenna via the range Doppler algorithm (RDA), the superior resolution of a theoretically wider antenna, termed the synthetic aperture, is obtained. The long-term goal of this ongoing project is to develop a simulation in which realistic SAR images can be generated and used for SAR Automatic Target Recognition (ATR). Current and past Master's theses on ATR were restricted to a small data set of Moving and Stationary Target Acquisition and Recognition (MSTAR) images, as most SAR images for military ATR are not released for public use. Also, with an in-house SAR image generation scheme, parameters such as noise, target orientation, and the elevation (look) angle from the target to the antenna can be directly controlled and modified to best serve ATR purposes or other applications such as three-dimensional SAR holography. At the start of the project in September 2007, the SAR simulation from previous Master's theses was capable of simulating and imaging point targets in a two-dimensional plane with limited mobility. The focus of this project was to improve the SAR simulation for application to more complex two-dimensional targets and simple three-dimensional targets, such as a cube. The simulation takes a selected two-dimensional grayscale target image as input and generates a two-dimensional profile of reflectivity over azimuth and range based on the intensity of the pixels in the target image. For three-dimensional simulations, multiple two-dimensional azimuth/range profiles are imported at different altitudes. The output from both the two-dimensional and three-dimensional simulations is the SAR-simulated, RDA-processed image of the input target profile. Future work on this ongoing project will include an algorithm to calculate line-of-sight limitations of point targets, as well as optimization of the radar information generation implemented in the code, so that more complex and realistic targets can be simulated and imaged using SAR for applications in ATR and 3D SAR holography.
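A toy version of the input step described above, written in Python rather than the thesis's MATLAB, is sketched below; the function name and the miniature target are hypothetical.

```python
# Illustrative sketch: map grayscale pixel intensity to a 2-D azimuth/range
# reflectivity profile, the kind of input the described simulation uses.
import numpy as np

def image_to_reflectivity(gray_image, max_reflectivity=1.0):
    """Scale 8-bit pixel intensities (0-255) to reflectivity values (0..max)."""
    img = np.asarray(gray_image, dtype=float)
    return max_reflectivity * img / 255.0

# Hypothetical 4x4 "target": a bright square on a dark background
target = np.zeros((4, 4), dtype=np.uint8)
target[1:3, 1:3] = 255
profile = image_to_reflectivity(target)
print(profile)
```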
215

Geo-localization Refinement of Optical Satellite Images by Embedding Synthetic Aperture Radar Data in Novel Deep Learning Frameworks

Merkle, Nina Marie 06 December 2018 (has links)
Every year, the number of applications relying on information extracted from high-resolution satellite imagery increases. In particular, the combined use of different data sources is rising steadily, for example to create high-resolution maps, to detect changes over time or to conduct image classification. In order to correctly fuse information from multiple data sources, the utilized images have to be precisely geometrically registered and have to exhibit a high absolute geo-localization accuracy. Due to the image acquisition process, optical satellite images commonly have an absolute geo-localization accuracy in the order of meters or tens of meters only. On the other hand, images captured by the high-resolution synthetic aperture radar satellite TerraSAR-X can achieve an absolute geo-localization accuracy within a few decimeters and therefore represent a reliable source for absolute geo-localization accuracy improvement of optical data. The main objective of this thesis is to address the challenge of image matching between high-resolution optical and synthetic aperture radar (SAR) satellite imagery in order to improve the absolute geo-localization accuracy of the optical images. The different imaging properties of optical and SAR data pose a substantial challenge for precise and accurate image matching, in particular for the handcrafted feature extraction stage common to traditional optical and SAR image matching methods. Therefore, a concept is required which is carefully tailored to the characteristics of optical and SAR imagery and is able to learn the identification and extraction of relevant features. Inspired by recent breakthroughs in the training of neural networks through deep learning techniques and the subsequent developments of automatic feature extraction and matching methods for single-sensor images, two novel optical and SAR image matching methods are developed. Both methods pursue the goal of generating accurate and precise tie points by matching optical and SAR image patches. The foundation of these frameworks is a semi-automatic matching area selection method creating an optimal initialization for the matching approaches by limiting the geometric differences between optical and SAR image pairs. The idea of the first approach is to eliminate the radiometric differences between the images through an image-to-image translation with the help of generative adversarial networks and to realize the subsequent image matching through traditional algorithms. The second approach is an end-to-end method in which a Siamese neural network learns to automatically create tie points between image pairs through targeted training. The geo-localization accuracy improvement of optical images is ultimately achieved by adjusting the corresponding optical sensor model parameters through the generated set of tie points. The quality of the proposed methods is verified using an independent set of optical and SAR image pairs spread over Europe. Thereby, the focus is set on a quantitative and qualitative evaluation of the two tie point generation methods and their ability to generate reliable and accurate tie points. The results prove the potential of the developed concepts, but also reveal weaknesses such as the limited number of training and test data acquired by only one combination of optical and SAR sensor systems. Overall, the tie points generated by both deep learning-based concepts enable an absolute geo-localization improvement of optical images, outperforming state-of-the-art methods.
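A minimal sketch of a Siamese-style network that scores the similarity of an optical patch and a SAR patch, the general idea behind the end-to-end tie-point approach, is given below; the architecture, patch size, and the choice of separate (rather than shared-weight) encoders are assumptions, not the author's design.

```python
# Illustrative sketch: a two-branch (Siamese-style) CNN producing a
# similarity score for an optical/SAR patch pair.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
        )

    def forward(self, x):
        return self.net(x)

class SiameseMatcher(nn.Module):
    def __init__(self):
        super().__init__()
        # Separate encoders because optical and SAR statistics differ;
        # a shared-weight variant is the other common design choice.
        self.opt_encoder = PatchEncoder()
        self.sar_encoder = PatchEncoder()

    def forward(self, optical_patch, sar_patch):
        f_opt = self.opt_encoder(optical_patch)
        f_sar = self.sar_encoder(sar_patch)
        # Cosine similarity as a simple matching score
        return nn.functional.cosine_similarity(f_opt, f_sar, dim=1)

model = SiameseMatcher()
opt = torch.randn(2, 1, 64, 64)   # hypothetical 64x64 optical patches
sar = torch.randn(2, 1, 64, 64)   # hypothetical co-located SAR patches
print(model(opt, sar))
```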
216

Reconstruction de trajectoires de cibles mobiles en imagerie RSO aéroportée / Moving target trajectory reconstruction using circular SAR imagery

Poisson, Jean-Baptiste 12 December 2013 (has links)
Circular SAR imagery provides a wealth of information about the illuminated scene and about moving targets. Objects may be seen from any angle, and the continuity of the illumination allows many successive images of the same scene to be generated. In this thesis, we develop a methodology for reconstructing moving target trajectories from circular SAR imagery and study its performance. We first measure the apparent coordinates of the moving targets in the SAR images, together with their defocusing parameter. This yields information about target motion, in particular velocity and acceleration. These measurements are then used to build a non-linear system of equations linking the apparent trajectories of the moving targets to their real trajectories. A mathematical and numerical robustness analysis shows that only a constant-velocity target model yields accurate trajectory reconstructions, provided the angular span is sufficient. We then study the influence of azimuth resolution on reconstruction accuracy by theoretically estimating the measurement accuracy and the resulting reconstruction accuracy. We highlight the existence of an optimal azimuth resolution, depending on the target radiometry and on the validity of the two target models. Finally, we validate the method on two real X-band data sets acquired by SETHI and RAMSES NG, the ONERA radar systems, and confirm the theoretical performance analysis.
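The sketch below illustrates the well-known relation by which a target's slant-range velocity shifts its apparent azimuth position in a SAR image, the kind of apparent-trajectory effect such reconstruction methods exploit; the geometry numbers are hypothetical and this is not the thesis's system of equations.

```python
# Illustrative sketch: first-order apparent azimuth displacement of a moving
# target in a SAR image ("train off the tracks" effect).

def azimuth_shift(slant_range_m, target_range_velocity, platform_velocity):
    """Approximate apparent azimuth displacement (m) of a moving target.
    Sign convention depends on the imaging geometry."""
    return -slant_range_m * target_range_velocity / platform_velocity

# Hypothetical airborne geometry: 10 km slant range, 100 m/s platform speed,
# target moving at 5 m/s towards the radar in slant range.
print(f"apparent azimuth shift: {azimuth_shift(10e3, 5.0, 100.0):.1f} m")
```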
217

Deep learning and quantum annealing methods in synthetic aperture radar

Kelany, Khaled 08 October 2021 (has links)
Mapping of earth resources, environmental monitoring, and many other applications require high-resolution wide-area imaging. Since images often have to be captured at night or in inclement weather conditions, this capability is provided by Synthetic Aperture Radar (SAR). SAR systems exploit the radar signal's long-range propagation and utilize digital electronics to process complex information, all of which enables high-resolution imagery. This gives SAR systems advantages over optical imaging systems, since, unlike optical imaging, SAR is effective at any time of day and in any weather conditions. Moreover, an advanced technique called Interferometric Synthetic Aperture Radar (InSAR) has the potential to exploit phase information from SAR images to measure ground surface deformation. However, given the current state of technology, the quality of InSAR data can be distorted by several factors, such as image co-registration, interferogram generation, phase unwrapping, and geocoding. Image co-registration aligns two or more images so that the same pixel in each image corresponds to the same point of the target scene. Super-Resolution (SR), on the other hand, is the process of generating high-resolution (HR) images from a low-resolution (LR) one. SR influences the co-registration quality and therefore could potentially be used to enhance later stages of SAR image processing. Our research resulted in two major contributions towards the enhancement of SAR processing. The first is a new learning-based SR model that can be applied to SAR and similar applications. The second major contribution is utilizing the devised model to improve SAR co-registration and InSAR interferogram generation, together with methods for evaluating the quality of the resulting images. In the case of phase unwrapping, the process of recovering unambiguous phase values from a two-dimensional array of phase values known only modulo $2\pi$ rad, our research produced a third major contribution: the finding that quantum annealers can resolve problems associated with phase unwrapping. Even though other potential solutions to this problem exist, based for example on network programming, such techniques do not scale well to larger images. We were able to formulate the phase unwrapping problem as a quadratic unconstrained binary optimization (QUBO) problem, which can be solved using a quantum annealer. Since quantum annealers are limited in the number of qubits they can process, currently available quantum annealers do not have the capacity to process large SAR images. To resolve this limitation, we developed a novel method of recursively partitioning the image and then recursively unwrapping each partition until the whole image is unwrapped. We tested our new approach with various software-based QUBO solvers and various images, both synthetic and real. We also experimented with a quantum annealer from D-Wave Systems, the first and only commercial supplier of quantum annealers, and developed an embedding method to map the problem to the D-Wave 2000Q_6, which improved the resulting images significantly. With our method, we were able to achieve high-quality solutions, comparable to state-of-the-art phase-unwrapping solvers. / Graduate
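To make the QUBO problem class concrete, the sketch below brute-forces a tiny instance of the form minimize x^T Q x over binary x; the Q matrix is invented and this is not the thesis's phase-unwrapping formulation, only the class of problem it maps to (a matrix like this could instead be submitted to a quantum annealer).

```python
# Illustrative sketch: exhaustive solver for a tiny QUBO instance.
import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    """Return the binary vector minimizing x^T Q x and its energy."""
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits)
        energy = x @ Q @ x
        if energy < best_e:
            best_x, best_e = x, energy
    return best_x, best_e

Q = np.array([[-1.0,  2.0,  0.0],   # hypothetical coefficients
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])
x, e = solve_qubo_bruteforce(Q)
print(f"best assignment {x}, energy {e}")
```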
218

Signal Processing and Machine Learning for Explosive Hazard Detection using Synthetic Aperture Acoustic and High Resolution Voxel Radar

Dowdy, Joshua L 04 May 2018 (has links)
Different signal processing techniques for synthetic aperture acoustic (SAA) and high-resolution voxel radar (HRVR) sensing modalities for side-attack explosive ballistic (SAEB) detection are proposed in this thesis. The sensing modalities were vehicle-mounted, and the data used were collected at an army test site. More specifically, the use of a frequency azimuthal (fraz) feature for SAA and the fusion of a matched filter (MF) and size contrast filter (SCF) for HRVR were explored. For SAA, the focus was to find a signature in the target's response that would vary as the vehicle's view of the target changed. For HRVR, the focus was on finding objects that were both anomalous (SCF) and target-like (MF). Results in both cases are evaluated using receiver operating characteristic (ROC) curves and are very encouraging.
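A small sketch of score-level fusion of two detector outputs evaluated with an ROC curve follows; the synthetic scores, the simple averaging rule, and the sample size are assumptions for illustration, not the thesis's data or fusion method.

```python
# Illustrative sketch: average two detector confidence streams and score the
# fused output with an ROC curve.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)               # 1 = target present
mf_score = labels + rng.normal(0, 0.8, size=1000)    # "matched filter" output
scf_score = labels + rng.normal(0, 1.0, size=1000)   # "size contrast" output

fused = 0.5 * (mf_score + scf_score)                 # naive score fusion
fpr, tpr, _ = roc_curve(labels, fused)
print(f"area under ROC curve: {auc(fpr, tpr):.3f}")
```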
219

ATREngine: An Orientation-Based Algorithm for Automatic Target Recognition

Kuo, Justin Ting-Jeuan 01 June 2014 (has links) (PDF)
Automatic Target Recognition (ATR) is a subject involving the use of sensor data to develop an algorithm for identifying targets of significance. It is of particular interest in military applications such as unmanned aerial vehicles and missile tracking systems. This thesis develops an orientation-based classification approach from previous ATR algorithms for 2-D Synthetic Aperture Radar (SAR) images. Prior work in ATR includes Chessa Guilas’ Hausdorff Probabilistic Feature Analysis Approach in 2005 and Daniel Cary’s Optimal Rectangular Fit in 2007. A system incorporating multiple modules performing different tasks is developed to streamline the data processing of previous algorithms. Using images from the publicly available Moving and Stationary Target Acquisition and Recognition (MSTAR) database, target orientation was determined to be the best feature for ATR. A rotationally variant algorithm taking advantage of the combination of target orientation and pixel location for classification is proposed in this thesis. Extensive classification results yielding an overall accuracy of 76.78% are presented to demonstrate algorithm functionality.
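For illustration, the sketch below shows one simple way a target-orientation feature could be extracted from a SAR chip, using the principal axis of its bright pixels; the threshold and the toy chip are hypothetical, and this is not the thesis's algorithm.

```python
# Illustrative sketch: estimate target orientation from the principal axis of
# above-threshold pixels in an image chip.
import numpy as np

def target_orientation_deg(chip, threshold=0.5):
    """Angle (degrees) of the principal axis of pixels above the threshold."""
    rows, cols = np.nonzero(chip > threshold)
    coords = np.column_stack((cols, rows)).astype(float)
    coords -= coords.mean(axis=0)
    # Eigenvector of the covariance matrix with the largest eigenvalue
    cov = np.cov(coords, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    return np.degrees(np.arctan2(major[1], major[0]))

# Hypothetical chip: a bright diagonal streak, expected orientation ~45 deg
chip = np.eye(32)
print(f"estimated orientation: {target_orientation_deg(chip):.1f} deg")
```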
220

Light Field Imaging Applied to Reacting and Microscopic Flows

Pendlebury, Jonathon Remy 01 December 2014 (has links) (PDF)
Light field imaging, specifically synthetic aperture (SA) refocusing, is a method that combines images from an array of cameras to generate a single image with a narrow depth of field that can be positioned arbitrarily throughout the volume under investigation. Creating a stack of narrow-depth-of-field images at varying locations generates a focal stack that can be used to find the location of objects in three dimensions. SA refocusing is particularly useful when reconstructing particle fields that are then used to determine the movement of the fluid they are entrained in, and it can also be used for shape reconstruction. This study applies SA refocusing to reacting flows and microscopic flows by performing shape reconstruction and 3D PIV on a flame, and 3D PIV on flow through a microchannel. The reacting flows in particular posed problems for the method. Reconstruction of the flame envelope was successful except for significant elongation along the optical axis, caused by the cameras viewing the flame from primarily one direction. 3D PIV on reacting flows suffered heavily from the index of refraction variations generated by the flame. The refocusing algorithm used assumes the particles are viewed through a constant refractive index (RI) and does not compensate for variations in the RI. This variation caused apparent motion in the particles that obscured their true locations, making the 3D PIV prone to error. Microscopic PIV (µPIV) was performed on a channel containing a backward-facing step. A microlens array was placed in the imaging section of the setup to capture a light field from the scene, which was then refocused using SA refocusing. PIV on these volumes was compared to a CFD simulation of the same channel. Comparisons showed that error was most significant near the boundaries and the step of the channel. The axial velocity in particular had significant error near the step, where the axial velocity was highest. Flow-wise velocity, though, appeared accurate, with an average flow-wise error of approximately 20% throughout the channel volume.
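A bare-bones sketch of the shift-and-average idea behind SA refocusing follows: camera images are shifted according to a chosen focal depth and averaged so only objects at that depth remain sharp. The camera offsets and the depth-to-shift scaling are hypothetical, not the study's calibration.

```python
# Illustrative sketch: synthetic aperture refocusing by shift-and-average over
# a camera array, producing a focal stack at several depths.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, camera_offsets, depth):
    """Average camera images after depth-dependent parallax shifts (pixels)."""
    refocused = np.zeros_like(images[0], dtype=float)
    for img, (dx, dy) in zip(images, camera_offsets):
        # Parallax shift grows with camera offset and shrinks with depth
        refocused += nd_shift(img.astype(float), (dy / depth, dx / depth))
    return refocused / len(images)

rng = np.random.default_rng(2)
cams = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]   # hypothetical
imgs = [rng.random((64, 64)) for _ in cams]
stack = [refocus(imgs, cams, depth=d) for d in (0.5, 1.0, 2.0)]  # focal stack
print(len(stack), stack[0].shape)
```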
