1. Dual-wavelength radar studies of clouds

Hogan, Robin James, January 1998
No description available.
2. Design of fuel optimal maneuvers for multi-spacecraft interferometric imaging systems

Ramirez Riberos, Jaime Luis, 30 October 2006
Multi-spacecraft interferometric imaging is an innovative concept that applies formations of satellites to obtain high-resolution images, synthesizing a large aperture by combining the signals from several sub-apertures. Such systems require trajectories that cover a specified region of the observation plane in order to gather the information needed to reconstruct an image of the source. One proposed configuration consists of symmetrical formations that use control thrust to actively follow spiral trajectories covering the specified regions. An optimization problem must be solved to design trajectories with minimum fuel consumption. The present work introduces an algorithm to obtain near-optimal maneuvers for multi-spacecraft interferometric imaging systems. Solutions to the optimization problem are obtained assuming the optimality of spiral coverage of the spatial frequency plane. The relationship between the error in the frequency content and the reliability of the image is studied to connect the dynamics of the maneuver to the parameters of the optimization problem. The problem under deep-space dynamics is shown to be convex and is solved by discretization into a nonlinear programming problem. The problem is then extended to include dynamical constraints and the effect of the time-varying relative position between the imaging system and the target. For the calculation of the optimal trajectories, a two-stage hierarchical controller is proposed that obtains the acceleration requirements of near-minimum-fuel maneuvers for different target-system configurations. Several simulated cases illustrate the algorithm, and conclusions are drawn about the feasibility and dynamical requirements of such systems.
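The trajectory-to-fuel relationship described in this abstract can be sketched numerically. The following is a minimal illustration, not the thesis's algorithm: it samples a hypothetical Archimedean spiral, recovers the required control accelerations by double differencing under deep-space dynamics (no external forces), and integrates the thrust magnitude as a fuel proxy. All parameter values are invented for illustration.

```python
import numpy as np

dt = 1.0                                  # time step [s] (assumed)
t = np.arange(0.0, 2000.0, dt)
theta = 0.01 * t                          # spiral phase [rad]
radius = 10.0 + 0.05 * theta              # Archimedean spiral radius [m]
pos = np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)

# Under deep-space dynamics the control acceleration is simply the second
# time derivative of position, recovered here by double differencing
acc = np.diff(pos, n=2, axis=0) / dt**2

# Fuel proxy: total delta-v, the time integral of the thrust magnitude
delta_v = np.sum(np.linalg.norm(acc, axis=1)) * dt
```

Minimizing this delta-v over the spiral parameters, subject to coverage constraints on the frequency plane, gives the flavor of the nonlinear program the thesis solves.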
3. Multitemporal Spaceborne Polarimetric SAR Data for Urban Land Cover Mapping

Niu, Xin, January 2011
Urban areas are among the most dynamic in the context of global change. To support rational policies for sustainable urban development, remote sensing technologies such as Synthetic Aperture Radar (SAR) enjoy increasing popularity for collecting up-to-date and reliable information such as urban land cover/land use. With the launch of advanced spaceborne SAR sensors such as RADARSAT-2, high-resolution multitemporal fully polarimetric SAR data have become increasingly available, so new methodologies to analyze such data for detailed and accurate urban mapping are in demand. This research investigated multitemporal fine-resolution spaceborne polarimetric SAR (PolSAR) data for detailed urban land cover mapping. The north and northwest parts of the Greater Toronto Area (GTA), Ontario, Canada were selected as the study area, and six-date C-band RADARSAT-2 fine-beam fully polarimetric SAR data were acquired from June to September 2008. The study focused on detailed urban land cover classes and various natural classes. Both object-based and pixel-based classification schemes were investigated. For the object-based approaches, a Support Vector Machine (SVM) and a rule-based classification method were combined to evaluate the classification capacities of various polarimetric features, and the classification efficiencies of various multitemporal data combinations were assessed. For the pixel-based approach, a temporal-spatial Stochastic Expectation-Maximization (SEM) algorithm was proposed; with an adaptive Markov Random Field (MRF) analysis and multitemporal mixture models, contextual information was exploited in the classification process. Moreover, the fitness of alternative data distribution assumptions for multi-look PolSAR data was compared for detailed urban mapping with this algorithm. Both the object-based and pixel-based classifications resolved fine urban structures with high accuracy. The superiority of the SVM was demonstrated by comparison with the Nearest Neighbor (NN) classifier in the object-based cases. Efficient polarimetric parameters such as the Pauli parameters, and processing steps such as logarithmic scaling of the data, were found useful for improving the classification results. Combining ascending and descending data with an appropriate temporal span proved suitable for urban land cover mapping. The SEM algorithm preserved detailed urban features with high classification accuracy while suppressing speckle. Additionally, the G0p and Kp distribution assumptions were shown to fit better than the Wishart one.
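The SEM idea used in the pixel-based approach can be illustrated on toy data. This sketch is not the thesis's temporal-spatial algorithm: it uses a 1-D two-class Gaussian mixture (real multi-look PolSAR data would follow Wishart, Kp or G0p laws, and the full method adds MRF context), but it shows the stochastic classification step that distinguishes SEM from plain EM.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D "backscatter" data from two classes (stand-in for PolSAR intensities)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 300)])

mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: class posteriors under Gaussian likelihoods
    lik = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    post = pi * lik
    post /= post.sum(axis=1, keepdims=True)
    # S-step (the stochastic part of SEM): draw one hard label per sample
    labels = (rng.random(len(x)) < post[:, 1]).astype(int)
    # M-step: refit each class from its sampled partition
    for k in (0, 1):
        sel = x[labels == k]
        if len(sel) < 2:
            continue  # keep previous parameters if a class empties out
        mu[k] = sel.mean()
        sigma[k] = max(sel.std(), 1e-3)
        pi[k] = len(sel) / len(x)
```

The random labeling in the S-step helps the iteration escape the degenerate fixed points that plain EM can fall into, which is part of why SEM variants suit many-class problems.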
4. Coseismic Deformation Detection and Quantification for Great Earthquakes Using Spaceborne Gravimetry

Wang, Lei, 19 June 2012
No description available.
5. Digital Surface Models From Spaceborne Images Without Ground Control

Ataseven, Yoldas, 01 September 2012
Generation of Digital Surface Models (DSMs) from stereo satellite (spaceborne) images classically relies on Ground Control Points (GCPs), which require site visits and precise measurement equipment. However, collection of GCPs is not always possible, and this requirement limits the usage of spaceborne imagery. This study aims at developing a fast, fully automatic, GCP-free workflow for DSM generation. The problems caused by a GCP-free workflow are overcome using freely available, low-resolution static DSMs (LR-DSMs). The LR-DSM is registered to the reference satellite image and then used for (i) correspondence generation and (ii) initial estimate generation for 3-D reconstruction. Novel methods are developed for bias removal in LR-DSM registration and bias equalization for the projection functions of satellite imaging. LR-DSM registration is also shown to be useful for computing the parameters of simple, piecewise empirical projective models. Recent computer vision approaches for stereo correspondence generation and dense depth estimation are tested and adapted for spaceborne DSM generation. The study presents a complete, fully automatic scheme for GCP-free DSM generation and demonstrates that it is possible and considerably faster than GCP-based workflows. The resulting DSMs can be used in various remote sensing applications, including building extraction, disaster monitoring and change detection.
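Registering an LR-DSM to a reference image, as described above, can be approximated in its simplest form by translational phase correlation. This is an illustrative sketch, not the thesis's registration method: it recovers a synthetic integer-pixel offset between two arrays from the peak of the normalized cross-power spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
ref = rng.random((N, N))                       # stand-in reference image
shifted = np.roll(ref, (5, -3), axis=(0, 1))   # "LR-DSM" with an unknown offset

# Phase correlation: the normalized cross-power spectrum peaks at the offset
F1, F2 = np.fft.fft2(ref), np.fft.fft2(shifted)
cross = F1 * np.conj(F2)
corr = np.fft.ifft2(cross / np.abs(cross)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# Unwrap to signed offsets: rolling `shifted` by (dy, dx) realigns it with `ref`
dy = dy - N if dy > N // 2 else dy
dx = dx - N if dx > N // 2 else dx
```

Real DSM-to-image registration must additionally handle differing resolutions, rotations and the bias terms the thesis introduces, but the correlation peak idea is the same.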
6. Jammer Cancelation By Using Space-time Adaptive Processing

Uysal, Halil, 01 October 2011
Space-Time Adaptive Processing (STAP) has been widely used on spaceborne and airborne radar platforms to track ground moving targets. A jammer is a hostile electronic countermeasure used to degrade radar detection and tracking performance. STAP adapts the radar's antenna radiation pattern to reduce the jammer's effectiveness: the jamming power entering the system is decreased according to the adapted pattern. In this thesis, a generic STAP radar model is developed and implemented in a simulation environment. The implemented radar model demonstrates that STAP can suppress wideband jamming together with ground clutter effects.
7. Applications of CryoSat-2 swath radar altimetry over Icelandic ice caps and Patagonian ice fields

Foresta, Luca Umberto, January 2018
Satellite altimetry has been used over the past few decades to measure the elevation of land ice, quantify changes in ice topography and infer the mass balance of large and remote areas such as the Greenland and Antarctic ice sheets. Radar altimetry is particularly well suited to this task due to its all-weather, year-round capability of observing the ice surface. However, monitoring ice caps and ice fields - bodies of ice with areas typically smaller than ~ 10,000 km2 - has proven more challenging: the large footprint of a conventional radar altimeter and coarse ground track coverage are less suited to observing comparatively small regions with complex topography. Since 2010, the European Space Agency's CryoSat-2 satellite has been collecting ice elevation measurements over ice caps and ice fields with its novel radar altimeter. CryoSat-2's smaller inter-track spacing provides a higher density of observations than previous satellite altimeters. It also generates more accurate measurements because (i) the footprint size is reduced in the along-track direction by means of synthetic aperture radar processing and (ii) interferometry allows the across-track angle of arrival of a surface reflection to be located precisely. Furthermore, the interferometric capabilities of CryoSat-2 allow the delayed surface reflections after the first echo to be processed. When applied over a sloping surface, this procedure generates a swath of elevations a few km wide, compared to the single elevation returned by the conventional approach. In this thesis, swath processing of CryoSat-2 interferometric data is exploited to generate topographic data over ice caps and ice fields. The dense elevation field is then used to compute maps of elevation change rates at sub-kilometer resolution, with the aim of quantifying ice volume change and mass balance.
A number of algorithms have been developed in this work, partly or entirely, to form a complete processing chain from generating the elevation field to calculating volume and mass change. These algorithms are discussed in detail before presenting the results obtained in two selected regions: Iceland and Patagonia. Over Icelandic ice caps, the high-resolution mapping reveals complex surface elevation changes related to climate, ice dynamics and sub-glacial, geothermal and magmatic processes. The mass balance of each of the six largest ice caps (90% of Iceland's permanent ice cover) is calculated independently for the first time using spaceborne radar altimetry data. Between October 2010 and September 2015, Icelandic ice caps lost mass at a rate of 5.8 ± 0.7 Gt a⁻¹, contributing 0.016 ± 0.002 mm a⁻¹ to eustatic sea level rise. This estimate indicates that over this period the mass balance was 40% less negative than in the preceding 15 years, which partly reflects the anomalous positive balance year across the Vatnajökull ice cap (~ 70% of the glaciated area) in 2014/15. Furthermore, it is demonstrated how swath processing of CryoSat-2 interferometric data allows the monitoring of glaciological processes at the catchment scale. Comparison of the geodetic estimates of mass balance against those based on in situ data shows good agreement. The thesis then investigates surface elevation change on the Northern and Southern Patagonian Ice Fields to quantify their mass balance. This area is characterized by some of the fastest flowing glaciers in the world, displaying complex interactions with the proglacial environments (including marine fjords and freshwater lakes) they often drain into. Field observations are sparse due to the inaccessibility of these ice fields, and even remotely sensed data are limited, often tied to comparisons against the topography in 2000 as measured by the Shuttle Radar Topography Mission.
Despite gaps in the spatial coverage, in particular due to the complex topography, CryoSat-2 swath radar altimetry provides insight into the patterns of change on the ice fields in the most recent period (2011 to 2017) and allows the mass balance of glaciers or catchments as small as 300 km2 to be calculated independently. The northern part of the Southern Patagonian ice field displays the strongest losses, due to a combination of ice dynamics and warming temperatures. In contrast, Pio XI, the largest glacier on this ice field and in South America, is advancing and gaining mass. Between April 2011 and March 2017, the two ice fields combined lost an average of 21.29 ± 1.98 Gt a⁻¹ (equivalent to 0.059 ± 0.005 mm a⁻¹ eustatic sea level rise), a rate 24% and 42% more negative than in the periods 2000-2012/14 and 1975-2000, respectively. In particular, the Northern Patagonian ice field, responsible for one third of the mass loss, is losing mass 70% faster than in the first decade of the 21st century. These results confirm the overall strong mass loss of the Patagonian ice fields, second only to glaciers and ice caps in Alaska and the Canadian Arctic, and higher than High Mountain Asia, all of which extend over areas ~ 5-8 times larger (excluding glaciers at the periphery of the Greenland and Antarctic ice sheets).
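The step from swath elevations to mass balance can be sketched for a single grid cell. This illustration (all numbers synthetic, not from the thesis) fits an elevation change rate dh/dt by least squares and scales it to volume and mass change with an assumed cell area and ice density.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic swath elevations in one grid cell over 5 years:
# surface lowering at 2 m/yr plus 1.5 m measurement noise (values invented)
t = rng.uniform(0.0, 5.0, 500)             # decimal years since the first epoch
h = 1000.0 - 2.0 * t + rng.normal(0.0, 1.5, 500)

# Per-cell elevation change rate dh/dt by ordinary least squares
A = np.stack([t, np.ones_like(t)], axis=1)
(rate, intercept), *_ = np.linalg.lstsq(A, h, rcond=None)

# Scale to volume and mass change (assumed cell area and ice density)
cell_area_km2 = 0.25                        # sub-kilometre grid cell
rho_ice_gt_per_km3 = 0.917
dV = rate * 1e-3 * cell_area_km2            # km^3 per year for this cell
dM = dV * rho_ice_gt_per_km3                # Gt per year (negative = mass loss)
```

Summing such per-cell estimates over a glacier or catchment, with gap filling where coverage is missing, yields the regional volume and mass balance figures quoted above.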
8. Multitemporal Spaceborne Polarimetric SAR Data for Urban Land Cover Mapping

Niu, Xin, January 2012
Urban land cover mapping represents one of the most important remote sensing applications in the context of rapid global urbanization. In recent years, high-resolution spaceborne Polarimetric Synthetic Aperture Radar (PolSAR) has been increasingly used for urban land cover/land-use mapping, since more information can be obtained from multiple polarizations and the collection of such data is less influenced by solar illumination and weather conditions. The overall objective of this research is to develop effective methods to extract accurate and detailed urban land cover information from spaceborne PolSAR data. Six RADARSAT-2 fine-beam polarimetric SAR and three RADARSAT-2 ultra-fine beam SAR images were used. These data were acquired from June to September 2008 over the north urban-rural fringe of the Greater Toronto Area, Canada. The major land-use/land-cover classes in this area include high-density residential areas, low-density residential areas, industrial and commercial areas, construction sites, roads, streets, parks, golf courses, forests, pasture, water and two types of agricultural crops. In this research, various polarimetric SAR parameters were evaluated for urban land cover mapping, including the parameters from the Pauli, Freeman and Cloude-Pottier decompositions, the coherency matrix, the intensities of each polarization and their logarithms. Both object-based and pixel-based classification approaches were investigated. Through an object-based Support Vector Machine (SVM) and a rule-based approach, the efficiencies of various PolSAR features and multitemporal data combinations were evaluated. For the pixel-based approach, a contextual Stochastic Expectation-Maximization (SEM) algorithm was proposed. With an adaptive Markov Random Field (MRF) and a modified Multiscale Pappas Adaptive Clustering (MPAC), contextual information was exploited to improve the mapping results.
To take full advantage of alternative PolSAR distribution models, a rule-based model selection approach was put forward and compared with a dictionary-based approach. Moreover, the capability of multitemporal fine-beam PolSAR data was compared with that of multitemporal ultra-fine beam C-HH SAR data. Texture analysis and a rule-based approach exploring object features and spatial relationships were applied for further improvement. Using the proposed approaches, detailed urban land-cover classes and finer urban structures could be mapped with high accuracy, in contrast to most previous studies, which focused only on extracting urban extent or mapping very few urban classes. This is also one of the first comparisons of various PolSAR parameters for detailed urban mapping using an object-based approach. Unlike other multitemporal studies, the multitemporal analysis here focused on the complementary information from ascending and descending SAR data and on the temporal relationships in the data. Further, the proposed contextual analyses effectively improved the pixel-based classification accuracy and produced homogeneous results with preserved shape details, avoiding over-averaging. The proposed contextual SEM algorithm, one of the first to combine an adaptive MRF with the modified MPAC, was able to mitigate the degeneracy problem of traditional EM algorithms, with fast convergence when dealing with many classes. This contextual SEM outperformed a contextual SVM in certain situations with regard to both accuracy and computation time. Using this contextual algorithm, the common PolSAR data distribution models, namely Wishart, G0p, Kp and KummerU, were compared for detailed urban mapping in terms of both mapping accuracy and time efficiency. In these comparisons, G0p, Kp and KummerU showed better performance, with higher overall accuracies than Wishart.
Nevertheless, the advantages of Wishart and the other models could also be effectively integrated by the proposed rule-based adaptive model selection, while only limited improvement was observed with the dictionary-based selection applied in previous studies. The use of polarimetric SAR data for identifying various urban classes was then compared with the ultra-fine-beam C-HH SAR data. The grey-level co-occurrence matrix textures generated from the ultra-fine-beam C-HH SAR data were found to be more efficient than the corresponding PolSAR textures for separating urban from rural areas. An object-based and pixel-based fusion approach combining ultra-fine-beam C-HH SAR texture data with PolSAR data was developed. In contrast to many other fusion approaches, which use pixel-based classification results to improve object-based classifications, the proposed rule-based fusion approach using object features and contextual information was able to extract several low-backscatter classes, such as roads, streets and parks, with reasonable accuracy.
9. The characterization of deep convection in the tropical tropopause layer using active and passive satellite observations

Young, Alisa H., 08 July 2011
Several studies suggest that deep convection penetrating the tropical tropopause layer may influence long-term trends in lower stratospheric water vapor. This thesis investigates the relationship between penetrating deep convection and lower stratospheric water vapor variability using historical infrared (IR) observations. However, since IR observations do not directly resolve cloud vertical structure and cloud top height, and there has been some debate about their usefulness for characterizing penetrating deep convective clouds, CloudSat/CALIPSO and Aqua MODIS observations are first combined to understand how best to interpret IR observations of penetrating tops. The combined CloudSat/CALIPSO and Aqua MODIS analysis shows that penetrating deep convection predominantly occurs over the western tropical Pacific Ocean. This finding is consistent with IR studies but contrasts with previous radar studies, in which penetrating deep convective clouds predominantly occur over land regions such as equatorial Africa. Estimates of the areal extent of penetrating deep convection show that, when using IR observations with a horizontal resolution of 10 km, about two thirds of the events are large enough to be detected. Evaluation of two different IR detection schemes, based on cold cloud features/pixels and on positive brightness temperature differences (+BTD), shows that neither scheme completely separates penetrating deep convection from other types of high clouds. However, the predominant fraction of +BTD distributions and cold cloud features/pixels ≤ 210 K is due to the coldest and highest penetrating tops, as inferred from collocated IR and radar/lidar observations. This result contrasts with previous studies suggesting that the majority of cold cloud features/pixels ≤ 210 K are cirrus/anvil cloud fractions that coexist with deep convective clouds.
Observations also show that a sufficient fraction of penetrating deep convective cloud tops occurs in the extratropics, providing evidence that penetrating deep convection should be documented as a pathway of stratosphere-troposphere exchange in the extratropical region. Since the cold cloud feature/pixel ≤ 210 K approach was found to be a sufficient method for detecting penetrating deep convection, it was used to develop a climatology of the coldest penetrating deep convective clouds from GridSat observations covering 1998-2008. The highest frequencies of the coldest penetrating deep convective clouds consistently occur in the western-central Pacific and Indian Ocean. Monthly frequency anomalies in penetrating deep convection were evaluated against monthly anomalies in lower stratospheric water vapor at 82 mb and show higher correlations for the western-central Pacific regions than for the tropics as a whole. At a lag of 3 months, the combined western-central Pacific had a small but significant anticorrelation, with the largest amount of variance explained by the combined western-central Pacific region being 8.25%. In conjunction with anomalies in the 82 mb water vapor mixing ratios, decreasing trends over 1998-2008 were also observed for the tropics, the western Pacific and the Indian Ocean. Although none of these trends were significant at the 95% confidence level, the decrease in the frequency of penetrating deep convection over 1998-2008 provides evidence that could partly explain the 82 mb lower stratospheric water vapor variability.
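The detection and correlation steps described above can be sketched on synthetic data: a ≤ 210 K cold-pixel mask applied to a brightness temperature field, and a 3-month-lag anomaly correlation. The numbers are invented and do not reproduce the thesis's results.

```python
import numpy as np

rng = np.random.default_rng(4)
# Cold-pixel detection: brightness temperatures [K] on a synthetic IR scene
bt = rng.uniform(180.0, 300.0, (64, 64))
penetrating = bt <= 210.0                  # cold cloud pixel criterion
frequency = penetrating.mean()             # fractional cold-pixel coverage

# Lagged anomaly correlation: synthetic convection anomalies leading
# 82 mb water vapor anomalies by 3 months with a built-in anticorrelation
months = 132                               # 11 years of monthly anomalies
conv = rng.standard_normal(months)
wv = np.empty(months)
wv[:3] = rng.standard_normal(3)
wv[3:] = -0.3 * conv[:-3] + 0.1 * rng.standard_normal(months - 3)
lag = 3
r = np.corrcoef(conv[:-lag], wv[lag:])[0, 1]   # correlation at 3-month lag
```

In the real analysis the series are deseasonalized anomalies and the correlation is tested for significance; the sketch only shows the mechanics of masking and lagged correlation.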
10. Optimization of the data analysis of the MICROSCOPE space mission for the test of the Equivalence Principle and other applications

Baghi, Quentin, 12 October 2016
The Equivalence Principle (EP) is a cornerstone of General Relativity, and it is called into question by attempts to build more comprehensive theories in fundamental physics, such as string theories.
The MICROSCOPE space mission aims at testing this principle through the universality of free fall, with a target precision of 10⁻¹⁵, two orders of magnitude better than current ground-based experiments. The satellite carries two electrostatic accelerometers on board, each including two test masses. The masses of the test accelerometer are made of different materials, whereas the masses of the reference accelerometer have the same composition. The objective is to monitor the free fall of the test masses in the gravitational field of the Earth by measuring their differential acceleration with an expected precision of 10⁻¹² m s⁻² Hz⁻¹ᐟ² in the bandwidth of interest. An EP violation would result in a characteristic periodic difference between the two accelerations. However, various perturbations are also measured because of the high sensitivity of the instrument. Some of them are well defined, e.g. gravitational and inertial gradient disturbances, but others are unmodeled, such as random noise and acceleration peaks due to the satellite environment, which can lead to saturations of the measurement or data gaps. This experimental context requires us to develop suitable tools for the data analysis, applicable in the general framework of linear regression analysis of time series. We first study the statistical detection and estimation of unknown harmonic disturbances in a least squares framework, in the presence of colored noise of unknown PSD. We show that with this technique the projection of the harmonic disturbances onto the EP violation signal can be rejected. Secondly, we analyze the impact of data unavailability on the performance of the EP test. We show that under the worst-case before-flight hypothesis (almost 300 gaps of 0.5 second per orbit), the uncertainty of ordinary least squares is increased by a factor of 35 to 60.
To counterbalance this effect, a linear regression method based on an autoregressive estimation of the noise is developed, which allows a proper decorrelation of the available observations without direct computation and inversion of the covariance matrix. The variance of the constructed estimator is close to the optimal value, allowing us to perform the EP test at the expected level even in the case of very frequent data interruptions. In addition, we implement a method to characterize the noise PSD more accurately when data are missing, with no prior model on the noise. The approach is based on a modified expectation-maximization (EM) algorithm with a smoothness assumption on the PSD, and uses statistical imputation of the missing data. We obtain a PSD estimate with an error of less than 10⁻¹² m s⁻² Hz⁻¹ᐟ². Finally, we widen the applications of the data analysis by studying the feasibility of measuring the Earth's gravitational gradient with MICROSCOPE data. We assess the ability of this set-up to decipher the large-scale geometry of the geopotential. By simulating the signals obtained from different models of the Earth's deep mantle and comparing them to the expected noise level, we show that their features can be distinguished.
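The autoregressive decorrelation idea can be sketched in one dimension. This toy version (not the mission pipeline; all values are assumed) buries a known waveform in AR(1) noise, estimates the AR coefficient from ordinary least squares residuals, prewhitens both sides of the regression, and refits, approximating the generalized least squares estimate without forming or inverting a covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4000
t = np.arange(n)
design = np.sin(2 * np.pi * 0.01 * t)      # known signal shape (assumed frequency)

# AR(1) colored noise standing in for the instrument noise
phi = 0.9
eps = rng.standard_normal(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi * noise[i - 1] + eps[i]
y = 1.0 * design + noise                   # signal amplitude to recover: 1.0

# Step 1: ordinary least squares, then estimate the AR coefficient from residuals
beta_ols = (design @ y) / (design @ design)
resid = y - beta_ols * design
phi_hat = (resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1])

# Step 2: prewhiten both sides with the estimated AR filter and refit;
# this approximates GLS without building the full covariance matrix
yw = y[1:] - phi_hat * y[:-1]
dw = design[1:] - phi_hat * design[:-1]
beta = (dw @ yw) / (dw @ dw)
```

Higher-order AR models and gap-aware variants of this prewhitening are what allow the EP test to stay near the optimal variance despite frequent data interruptions.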
