1
Application based image compression for micro-satellite optical imaging - Hou, Peixin, January 1999
No description available.
2
Exploiting weather forecast data for cloud detection - Mackie, Shona, January 2009
Accurate, fast detection of clouds in satellite imagery has many applications, for example Numerical Weather Prediction (NWP) and climate studies of both the atmosphere and of the Earth's surface temperature. Most operational techniques for cloud detection rely on the differences between observations of cloud and of clear-sky being more or less constant in space and in time. In reality, this is not the case - different clouds have different spectral properties, and different cloud types are more or less likely in different places and at different times, depending on atmospheric conditions and on the Earth's surface properties. Observations of clear sky also vary in space and time, depending on atmospheric and surface conditions, and on the presence or absence of aerosol particles. The Bayesian approach adopted in this project allows pixel-specific physical information (for example from NWP) to be used to predict pixel-specific observations of clear sky. A physically-based, spatially- and temporally-specific probability that each pixel contains a cloud observation is then calculated. An advantage of this approach is that identification of ambiguously classed pixels from a probabilistic result is straightforward, in contrast to the binary result generally produced by operational techniques. This project has developed and validated the Bayesian approach to cloud detection, and has extended the range of applications for which it is suitable, achieving skill scores that match or exceed those achieved by operational methods in every case. High temperature gradients can make observations of clear sky around ocean fronts, particularly at thermal wavelengths, appear similar to cloud observations. To address this potential source of ambiguous cloud detection results, a region of imagery acquired by the AATSR sensor, which was noted to contain some ocean fronts, was selected. Pixels in the region were clustered according to their spectral properties with the aim of separating pixels that correspond to different thermal regimes of the ocean. The mean spectral properties of pixels in each cluster were then processed using the Bayesian cloud detection technique and the resulting posterior probability of clear was then assigned to individual pixels. Several clustering methods were investigated, and the most appropriate, which allowed pixels to be associated with multiple clusters with a normalized vector of 'membership strengths', was used to conduct a case study. The distribution of final calculated probabilities of clear became markedly more bimodal when clustering was included, indicating fewer ambiguous classifications, but at the cost of some single-pixel clouds being missed. While further investigations could provide a solution to this, the computational expense of the clustering method made this impractical to include in the work of this project. This new Bayesian approach to cloud detection has been successfully developed by this project to a point where it has been released under public license. Initially designed as a tool to aid retrieval of sea surface temperature from night-time imagery, the Bayesian technique has been extended by this project to be suitable for imagery acquired over land as well as sea, and for day-time as well as for night-time imagery. This was achieved using the land surface emissivity and surface reflectance parameter products available from the MODIS sensor.
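To make the core calculation concrete, the following minimal sketch applies Bayes' theorem to a single infrared pixel: the clear-sky likelihood is a Gaussian centred on a brightness temperature predicted from NWP fields, the cloudy likelihood is an empirical distribution, and the two are weighted by a prior probability of clear. All numerical values, and the Gaussian form of the clear-sky likelihood, are illustrative assumptions rather than the thesis implementation.

    import numpy as np

    def posterior_clear(obs_bt, clear_pred_bt, clear_sigma, cloud_pdf, prior_clear):
        """Bayesian probability that a pixel is clear, given one observed
        brightness temperature (BT).

        obs_bt        : observed BT for the pixel (K)
        clear_pred_bt : clear-sky BT predicted from pixel-specific NWP data (K)
        clear_sigma   : combined NWP / forward-model / sensor uncertainty (K)
        cloud_pdf     : callable returning p(obs | cloud), e.g. an empirical PDF
        prior_clear   : prior probability of clear sky (e.g. from climatology)
        """
        # Likelihood of the observation under the clear-sky hypothesis:
        # a Gaussian centred on the NWP/RTM prediction.
        p_obs_clear = (np.exp(-0.5 * ((obs_bt - clear_pred_bt) / clear_sigma) ** 2)
                       / (clear_sigma * np.sqrt(2.0 * np.pi)))
        # Likelihood under the cloudy hypothesis (empirical distribution).
        p_obs_cloud = cloud_pdf(obs_bt)
        # Bayes' theorem.
        evidence = p_obs_clear * prior_clear + p_obs_cloud * (1.0 - prior_clear)
        return p_obs_clear * prior_clear / evidence

    # Illustrative use: a broad, box-shaped empirical cloud PDF over 200-300 K.
    cloud_pdf = lambda bt: 1.0 / 100.0 if 200.0 < bt < 300.0 else 1e-9
    print(posterior_clear(obs_bt=285.0, clear_pred_bt=288.0, clear_sigma=1.5,
                          cloud_pdf=cloud_pdf, prior_clear=0.7))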
This project added a visible Radiative Transfer Model (RTM), developed at the University of Edinburgh, and a kernel-based surface reflectance model, adapted here from that used by the MODIS sensor, to the cloud detection algorithm. In addition, the cloud detection algorithm was adapted to be more flexible, making its implementation for data from the SEVIRI sensor straightforward. A database of 'difficult' cloud and clear targets, in which a wide range of both spatial and temporal locations was represented, was provided by Météo-France and used in this work to validate the extensions made to the cloud detection scheme and to compare the skill of the Bayesian approach with that of operational approaches. For night-time land and sea imagery, the Bayesian technique, with the improvements and extensions developed by this project, achieved skill scores 10% and 13% higher than those of Météo-France, respectively. For daytime sea imagery, the skill scores of the two approaches were within 1% of each other, while for land imagery the Bayesian method achieved a 2% higher skill score. The main strength of the Bayesian technique is the physical basis of the differentiation between clear and cloud observations. Using NWP information to predict pixel-specific observations for clear sky is relatively straightforward, but making such predictions for cloud observations is more complicated. The technique therefore relies on an empirical distribution rather than a pixel-specific prediction for cloud observations. To address this, the project developed a means of predicting cloudy observations through the fast forward-modelling of pixel-specific NWP information. All cloud fields in the pixel-specific NWP data were set to 0, and clouds were added to the profile at discrete intervals through the atmosphere, with cloud water path and cloud ice path (cwp, cip) also set to values spaced exponentially at discrete intervals up to saturation, and with cloud pixel fraction set to 25%, 50%, 75% and 100%. Only single-level, single-phase clouds were modelled, with the justification that the resulting distribution of predicted observations, once smoothed through considerations of uncertainties, is likely to include observations that would correspond to multi-phase and multi-level clouds. A fast RTM was run on the profile information for each of these individual clouds, and cloud altitude-, cloud pixel fraction- and channel-specific relationships between cwp (and similarly cip) and predicted observations were calculated from the results of the RTM. These relationships were used to infer predicted observations for clouds with cwp/cip values other than those explicitly forward-modelled. The parameters used to define the relationships were interpolated to define relationships for predicted observations of cloud at 10 m vertical intervals through the atmosphere, with pixel coverage ranging from 25% to 100% in increments of 1%. A distribution of predicted cloud observations is then achieved without explicit forward-modelling of an impractical number of atmospheric states. Weights are applied to the representation of individual clouds within the final Probability Density Function (PDF) in order to make the distribution of predicted observations realistic, according to the pixel-specific NWP data, and to distributions seen in a global reference dataset of NWP profiles from the European Centre for Medium-Range Weather Forecasts (ECMWF).
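The interpolation step described above can be sketched as follows: a stand-in for the fast RTM is evaluated only at a few exponentially spaced cwp values for one cloud altitude and pixel fraction, and predicted observations for other cwp values are read from the interpolated relationship rather than re-running the model. The stand-in RTM and all of its numbers are invented for illustration only.

    import numpy as np

    # Discrete cloud water path (cwp) values (kg m^-2) at which the fast RTM
    # would actually be run, spaced exponentially up to an assumed saturation.
    cwp_grid = np.geomspace(1e-3, 1.0, 8)

    def stand_in_rtm_bt(cwp, cloud_top_km=5.0, pixel_fraction=0.5):
        """Stand-in for a fast RTM: predicted 11-micron BT for a single-level,
        single-phase cloud. Purely illustrative, not a physical model."""
        clear_bt = 288.0
        cloud_bt = clear_bt - 6.5 * cloud_top_km          # crude lapse-rate cloud top
        emissivity = 1.0 - np.exp(-50.0 * cwp)            # toy emissivity curve
        cloudy_bt = emissivity * cloud_bt + (1.0 - emissivity) * clear_bt
        return pixel_fraction * cloudy_bt + (1.0 - pixel_fraction) * clear_bt

    # "Forward model" only the grid points ...
    bt_grid = np.array([stand_in_rtm_bt(c) for c in cwp_grid])

    # ... then infer predicted observations for any other cwp by interpolating
    # (here in log(cwp)) instead of running the RTM again.
    def predicted_bt(cwp):
        return np.interp(np.log(cwp), np.log(cwp_grid), bt_grid)

    print(predicted_bt(0.07))    # a cwp value that was never explicitly modelled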
The distribution is then convolved with uncertainties in forward-modelling, in the NWP data, and with sensor noise to create the final PDF in observation space, from which the conditional probability that the pixel observation corresponds to a cloud observation can be read. Although a relatively fast computational implementation of the technique was achieved, the results are disappointingly poor for the SEVIRI-acquired dataset, provided by Météo-France, against which validation was carried out. This is thought to be because both the uncertainties in the NWP data and the forward-modelling's dependence on those uncertainties are poorly understood and are treated too optimistically in the algorithm. Including more errors in the convolution introduces the problem of quantifying those errors (a non-trivial task), and would increase the processing time, making implementation impractical. In addition, if the uncertainties considered are too high then a PDF flatter than the empirical distribution currently used would be produced, making the technique less useful.
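A minimal sketch of this final step, assuming the combined forward-model, NWP and sensor-noise uncertainty can be lumped into a single Gaussian kernel (an assumption; the histogram and uncertainty magnitudes below are invented):

    import numpy as np

    rng = np.random.default_rng(0)

    # Observation space (brightness temperature, K) and a histogram standing in
    # for the weighted, forward-modelled cloudy observations.
    edges = np.arange(200.0, 300.5, 0.5)
    centres = 0.5 * (edges[:-1] + edges[1:])
    raw_hist, _ = np.histogram(rng.normal(255.0, 15.0, 5000), bins=edges)
    raw_pdf = raw_hist / (raw_hist.sum() * 0.5)            # normalise to unit area

    # Combined uncertainty: forward model, NWP profile and sensor noise,
    # assumed independent and Gaussian (invented magnitudes, in K).
    sigma_total = np.sqrt(1.0**2 + 1.5**2 + 0.3**2)
    kernel_x = np.arange(-5.0 * sigma_total, 5.0 * sigma_total + 0.5, 0.5)
    kernel = np.exp(-0.5 * (kernel_x / sigma_total) ** 2)
    kernel /= kernel.sum()

    smoothed_pdf = np.convolve(raw_pdf, kernel, mode="same")

    # p(observation | cloud) for one observed brightness temperature,
    # read from the final smoothed PDF.
    obs_bt = 262.0
    print(np.interp(obs_bt, centres, smoothed_pdf))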
3
Research into illumination variance in video processing - Javadi, Seyed Mahdi Sadreddinhajseyed, January 2018
In this thesis we focus on the impact of illumination changes in video and we discuss how we can minimize the impact of illumination variance in video processing systems. Identifying and removing shadows automatically is a very well established and important topic in image and video processing. Having shadowless image data would benefit many other systems such as video surveillance, tracking and object recognition algorithms. A novel approach to automatically detect and remove shadows is presented in this thesis. This new method is based on the observation that, owing to the relative movement of the sun, the length and position of a shadow change linearly over a relatively long period of time in outdoor environments; this lets us conveniently distinguish a shadow from other dark regions in an input video. We can then identify the Reference Shadow as the one with the highest confidence of the mentioned linear changes. Once one shadow is detected, the remaining shadows can also be identified and removed. We have provided many experiments and our method is fully capable of detecting and removing the shadows of stationary and moving objects. Additionally we have explained how reference shadows can be used to detect textures that reflect the light and shiny materials such as metal, glass and water. ...
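The reference-shadow idea can be illustrated with a short sketch: fit a straight line to each candidate dark region's measured shadow length over time and keep the region whose behaviour is most linear (highest R^2). The regions, measurements and scoring choice below are illustrative assumptions, not the author's implementation.

    import numpy as np

    def linearity_score(lengths):
        """R^2 of a straight-line fit to one region's shadow length over time."""
        t = np.arange(len(lengths), dtype=float)
        slope, intercept = np.polyfit(t, lengths, 1)
        residuals = lengths - (slope * t + intercept)
        ss_res = np.sum(residuals ** 2)
        ss_tot = np.sum((lengths - np.mean(lengths)) ** 2)
        return 1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0

    # Shadow-length measurements (pixels) over a long outdoor sequence for three
    # candidate dark regions (values invented for illustration).
    rng = np.random.default_rng(1)
    candidates = {
        "region_a": 100 + 0.8 * np.arange(60) + rng.normal(0, 2, 60),   # real shadow
        "region_b": 80 + 15 * np.sin(np.arange(60) / 5.0),              # moving dark object
        "region_c": np.full(60, 40.0),                                  # static dark patch
    }

    scores = {name: linearity_score(lengths) for name, lengths in candidates.items()}
    reference_shadow = max(scores, key=scores.get)
    print(scores, "->", reference_shadow)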
4
The Combined Effects of Light and Temperature on Coral Bleaching: A Case Study of the Florida Reef Tract Using Satellite Data - Barnes, Brian Burnel, 01 January 2013
Coral reefs are greatly impacted by the physical characteristics of the water surrounding them. Incidence and severity of mass coral bleaching and mortality events are increasing worldwide due primarily to increased water temperature, but also in response to other stressors. This decline in reef health demands clearer understanding of the compounding effects of multiple stressors, as well as widespread assessment of coral reef health in near-real time.
Satellites offer a means by which some of the physical stressors on coral reefs can be measured. The synoptic spatial coverage and high repeat sampling frequency of such instruments allow for a quantity of data unattainable by in situ measurements. Unfortunately, errors in cloud-masking algorithms contaminate satellite-derived sea surface temperature (SST) measurements, especially during anomalously cold events. Similarly, benthic interference of satellite-derived reflectance signals has resulted in large errors in derivations of water quality or clarity in coral reef environments.
This work provides solutions to these issues for the coral reef environments of the Florida Keys. Specifically, improved SST cloud-masking algorithms were developed for both Advanced Very High Resolution Radiometer (AVHRR; Appendix A) and Moderate Resolution Imaging Spectroradiometer (MODIS) data (Appendix B). Both of these improved algorithms were used to reveal the extent and severity of a January 2010 cold event that resulted in widespread mortality of Florida Keys corals. Applied to SST data from 2010, the improved MODIS cloud-masking algorithm also increased the quantity of SST retrievals with minimal sacrifice in data quality.
Two separate algorithms to derive water clarity from MODIS measurements of optically shallow waters were developed and validated, one focusing on the diffuse downwelling attenuation coefficient (Kd, m⁻¹) in visible bands (Appendix C), the other on Kd in the ultraviolet (Appendix D). The former utilized a semi-analytical approach to remove bottom influence, modified from an existing algorithm. The latter relied on empirical relationships between an extensive in situ training dataset and variations in MODIS-derived spectral shape, determined using a stepwise principal components regression. Both of these algorithms showed satisfactory validation statistics, and were used to elucidate spatiotemporal patterns of water clarity in the Florida Keys. Finally, an approach was developed to use Landsat data to detect concurrent MODIS-derived reflectance anomalies with over 90% accuracy (Appendix E). Application of this approach to historical Landsat data allowed for long-term, synoptic assessment of the water environment of the Florida Keys ecosystem. Using this approach, shifts in seagrass density, turbidity increases, black water events, and phytoplankton blooms were detected using Landsat data and corroborated with known environmental events.
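As a generic illustration of the stepwise principal components regression recipe (not the published algorithm, its bands or its coefficients), the following sketch fits synthetic spectral-shape predictors to synthetic Kd values, adding principal components one at a time while the cross-validated fit keeps improving.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: spectral-shape predictors (6 values per match-up)
    # and in situ Kd in the ultraviolet (arbitrary units).
    X = rng.normal(size=(200, 6))
    kd = 0.8 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 0.1, 200)

    components = PCA().fit_transform(X)

    # Stepwise selection: keep adding principal components while the
    # cross-validated R^2 of the regression keeps improving.
    best_score, n_kept = -np.inf, 0
    for k in range(1, components.shape[1] + 1):
        score = cross_val_score(LinearRegression(), components[:, :k], kd, cv=5).mean()
        if score <= best_score:
            break
        best_score, n_kept = score, k

    model = LinearRegression().fit(components[:, :n_kept], kd)
    print(f"kept {n_kept} components, cross-validated R^2 = {best_score:.3f}")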
Many of these satellite data products were combined with in situ reports of coral bleaching to determine the specific environmental parameters individually and synergistically contributing to coral bleaching. SST and visible light penetration were found to parsimoniously explain variance in bleaching intensity, as were the interactions between SST, wind and UV penetration. These relationships were subsequently used to create a predictive model for coral bleaching via canonical analysis of principal coordinates. Leave-one-out cross-validation indicated that this model predicted 'severe bleaching' and 'no bleaching' conditions with 64% and 60% classification success, respectively, nearly 3 times greater than that predicted by chance. This model also showed improvement over similar models created using only temperature data, further indicating that satellite assessment of coral bleaching based only on SST data can be improved with other environmental data. Future work should further supplement the environmental parameters considered in this research with databases of other coral stressors, as well as improved quantification of the temperature at the depth of corals, in order to gain a more complete understanding of coral bleaching in response to environmental stress.
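The leave-one-out validation strategy can be sketched as below; the predictor variables and data are placeholders, and a linear discriminant stands in for the canonical analysis of principal coordinates used in the dissertation, for which no drop-in equivalent is assumed here.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(42)

    # Placeholder survey data: SST anomaly, visible-light penetration, wind and
    # UV penetration for 60 reef surveys, each labelled 0 = no bleaching or
    # 1 = severe bleaching (all values invented).
    X = rng.normal(size=(60, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2]
         + rng.normal(0, 0.5, 60) > 0).astype(int)

    # Leave-one-out cross-validation: each survey is predicted by a model
    # trained on all the others.
    pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())

    for label, name in [(1, "severe bleaching"), (0, "no bleaching")]:
        mask = y == label
        print(f"{name}: {np.mean(pred[mask] == label):.0%} classification success")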
Overall, this dissertation presents five new algorithms to the field of satellite oceanography research. Although validated primarily in the Florida Keys region, most of these algorithms should be directly applicable for use in other coastal environments. Identification of the specific environmental factors contributing to coral bleaching enhances understanding of the interplay between multiple causes of reef decline, while the predictive model for coral bleaching may provide researchers and managers with widespread, near real-time assessments of coral reef health.
5
On-board image quality assessment for a satellite - Marais, Izak van Zyl, 03 1900
Thesis (PhD (Electronic Engineering))--University of Stellenbosch, 2009.
The downloading of images is a bottleneck in the image acquisition chain for low earth orbit remote sensing satellites. An on-board image quality assessment system could optimise use of available downlink time by prioritising images for download, based on their quality.
An image quality assessment system based on measuring image degradations is proposed. Algorithms for estimating degradations are investigated. The degradation types considered are cloud cover, additive sensor noise and the defocus extent of the telescope.
For cloud detection, the novel application of heteroscedastic discriminant analysis resulted in better performance than comparable dimension-reducing transforms from the remote sensing literature. A region growing method, which was previously used on board a micro-satellite for cloud cover estimation, is critically evaluated and compared to commonly used thresholding. The thresholding method is recommended. A remote sensing noise estimation algorithm is compared to a noise estimation algorithm based on image pyramids. The image pyramid algorithm is recommended. It is adapted, which results in smaller errors. A novel angular spectral smoothing method for increasing the robustness of spectral-based, direct defocus estimation is introduced. Three existing spectral-based defocus estimation methods are compared with the angular smoothing method.
An image quality assessment model is developed that models the mapping of the three estimated degradation levels to one quality score. A subjective image quality evaluation experiment is conducted, during which more than 18000 independent human judgements are collected. Two quality assessment models, based on neural networks and splines, are fitted to this data. The spline model is recommended.
The integrated system is evaluated and image quality predictions are shown to correlate well with human quality perception.
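As a schematic of the final mapping stage only (three estimated degradation levels to a single quality score), the following sketch fits a small neural-network regressor to invented subjective scores; the thesis fits neural-network and spline models to real human judgements, which this toy example does not reproduce.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Invented training data: estimated degradation levels for 500 images
    # (cloud cover fraction, noise standard deviation in DN, defocus radius in
    # pixels) and a mean subjective quality score in [0, 1] for each image.
    degradations = rng.uniform([0.0, 0.0, 0.0], [1.0, 30.0, 5.0], size=(500, 3))
    quality = ((1.0 - degradations[:, 0])
               * np.exp(-degradations[:, 1] / 40.0)
               * np.exp(-degradations[:, 2] / 6.0)
               + rng.normal(0, 0.03, 500))

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(degradations, quality)

    # Predicted quality for a partly cloudy but sharp, low-noise image.
    print(model.predict([[0.4, 5.0, 0.5]]))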
6
Design, implementation, and characterisation of a novel lidar ceilometer - Vande Hey, Joshua D., January 2013
A novel lidar ceilometer prototype based on divided lens optics has been designed, built, characterised, and tested. The primary applications for this manufacturable ground-based sensor are the determination of cloud base height and the measurement of vertical visibility. First, the design, which was developed in order to achieve superior performance at a low cost, is described in detail, along with the process used to develop it. The primary design considerations of optical signal to noise ratio, range-dependent overlap of the transmitter and receiver channels, and manufacturability, were balanced to develop an instrument with good signal to noise ratio, fast turn-on of overlap for detection of close range returns, and a minimised number of optical components and simplicity of assembly for cost control purposes. Second, a novel imaging method for characterisation of transmitter-receiver overlap as a function of range is described and applied to the instrument. The method is validated by an alternative experimental method and a geometric calculation that is specific to the unique geometry of the instrument. These techniques allow the calibration of close range detection sensitivity in order to acquire information prior to full overlap. Finally, signal processing methods used to automate the detection process are described. A novel two-part cloud base detection algorithm has been developed which combines extinction-derived visibility thresholds in the inverted cloud return signal with feature detection on the raw signal. In addition, standard approaches for determination of visibility based on an iterative far boundary inversion method, and calibration of attenuated backscatter profile using returns from a fully-attenuating water cloud, have been applied to the prototype. The prototype design, characterisation, and signal processing have been shown to be appropriate for implementation into a commercial instrument. The work that has been carried out provides a platform upon which a wide range of further work can be built.
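A heavily simplified sketch of the two-part detection idea (an extinction-derived backscatter threshold combined with a sharp-feature test on the signal) is given below; the profile, threshold value and gradient criterion are illustrative assumptions, not the instrument's algorithm.

    import numpy as np

    def detect_cloud_base(ranges_m, backscatter, beta_threshold, min_gradient):
        """Return the first range gate that exceeds an extinction-derived
        backscatter threshold while the signal is also rising sharply."""
        gradient = np.gradient(backscatter, ranges_m)
        for r, beta, g in zip(ranges_m, backscatter, gradient):
            if beta > beta_threshold and g > min_gradient:
                return r
        return None   # no cloud base detected

    # Toy attenuated-backscatter profile with a cloud layer near 1250 m.
    ranges_m = np.arange(15.0, 4000.0, 15.0)
    profile = 1e-7 * np.exp(-ranges_m / 2000.0)                                  # clear air
    profile = profile + 5e-5 * np.exp(-0.5 * ((ranges_m - 1250.0) / 60.0) ** 2)  # cloud

    print(detect_cloud_base(ranges_m, profile, beta_threshold=1e-5, min_gradient=1e-8))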
7
Évaluation d'algorithmes stéréoscopiques de haute précision en faible B/H / Evaluation of high precision low baseline stereo vision algorithms - Dagobert, Tristan, 04 December 2017
This thesis studies accuracy in stereo vision and a contrario detection methods, and presents an application to satellite imagery. The first part was carried out within the framework of the DGA-ANR-ASTRID project "STEREO". Its aim is to define the effective limits of stereo reconstruction methods when the entire acquisition chain is controlled at maximum precision, acquiring stereo pairs with a very low baseline-to-height (B/H) ratio and no noise. To validate this concept, we create very precise ground truths using a renderer. By keeping the rays computed during rendering, we obtain very dense information on the 3D scene. We thus create occlusion and disparity maps whose precision error is less than 10e-6. We have made synthetic images with an SNR greater than 500 available to the research community: a set of 66 stereo pairs whose B/H varies from 1/2500 to 1/50. To evaluate stereo methods on this new type of data, we propose metrics that measure the quality of the estimated disparity maps, combining the precision and the density of the points whose relative error is below a certain threshold. We evaluate several algorithms representative of the state of the art, on the pairs thus created and on the Middlebury pairs, up to their operating limits. These analyses confirm that the theoretical assumptions about the merit of a low B/H at high SNR are valid, up to a certain limit that we characterise. We thus find that simple optical flow methods for stereo matching become more effective than more sophisticated discrete variational methods. This conclusion, however, is only valid for high signal-to-noise ratios. The use of the dense data allows us to complete the ground truths with a subpixel detection of the occlusion edges. We propose a method to compute subpixel vector contours from a very dense point cloud, based on a contrario pixel classification methods. The second part of the thesis is devoted to an application of subpixel optical flow and a contrario methods to detect clouds in satellite imagery. We propose a method that exploits only visible optical information. It relies on the temporal redundancy obtained through the repeated passes of the satellites over the same geographical areas. We define four cues to separate clouds from the landscape: apparent inter-channel motion, local texture, temporal emergence and luminance. These cues are modelled in the statistical framework of a contrario methods, each producing a number of false alarms (NFA). We propose a method for combining these cues and computing a much more discriminating NFA. We compare the estimated cloud maps to annotated ground truths and to the cloud maps produced by the algorithms associated with the Landsat-8 and Sentinel-2 satellites. We show that the detection and false-alarm scores are better than those obtained with these algorithms, which nevertheless use around ten multi-spectral bands.
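The disparity-map evaluation metric described above (density of sufficiently accurate pixels, together with their precision) can be written compactly as below; the error threshold and the use of RMSE for the precision term are assumptions made for illustration.

    import numpy as np

    def disparity_quality(estimated, ground_truth, valid_mask, threshold=0.05):
        """Density and precision of an estimated disparity map.

        density   : fraction of valid pixels whose absolute disparity error is
                    below `threshold` (in pixels; small thresholds suit low B/H).
        precision : RMSE of the error over those accurate pixels only.
        """
        error = np.abs(estimated - ground_truth)
        accurate = (error < threshold) & valid_mask
        density = accurate.sum() / valid_mask.sum()
        precision = np.sqrt(np.mean(error[accurate] ** 2)) if accurate.any() else np.nan
        return density, precision

    # Toy example on a 100 x 100 disparity map.
    rng = np.random.default_rng(0)
    gt = rng.uniform(-0.5, 0.5, (100, 100))          # small disparities (low B/H)
    est = gt + rng.normal(0.0, 0.03, (100, 100))     # sub-pixel estimation noise
    valid = np.ones_like(gt, dtype=bool)             # e.g. non-occluded pixels

    print(disparity_quality(est, gt, valid))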
8
Exploring Diversity of Spectral Data in Cloud Detection with Machine Learning Methods : Contribution of Near Infrared band in improving cloud detection in winter images / Utforska diversitet av spektraldata i molndetektering med maskininlärningsmetoder : Bidrag från Near Infrared band för att förbättra molndetektering i vinterbilder - Sunil Oza, Nakita, January 2022
Cloud detection in satellite imagery is an essential pre-processing step for several remote sensing applications. In general, machine learning based methods for cloud detection perform well, especially those based on deep learning, as they consider both spatial and spectral features of the input image. However, false alarms become a major issue in winter images, where bright objects such as snow and ice are also detected as cloud. This affects further image analysis such as urban change detection, weather forecasting and disaster risk management. In this thesis, we consider optical remote sensing images from the PlanetScope constellation of small satellites. These have a limited multispectral capacity of four bands: Red, Green, Blue (RGB) and Near-Infrared (NIR). Detection algorithms tend to be more efficient when they use information from more than one spectral band to perform the detection. This study explores the data diversity that the NIR band adds to RGB band images in terms of improvement in cloud detection accuracy. Two deep learning algorithms based on convolutional neural networks with different architectures are trained on RGB, NIR and RGB+NIR image data, resulting in six trained models. Each of these networks is tested with winter images containing varying amounts of cloud and land covered with snow and ice. The evaluation is based on performance metrics for accuracy and Intersection-over-Union (IoU) scores, as well as visual inspection. A total of eighteen experiments are performed, and it is observed that the NIR band provides significant data diversity when combined with the RGB bands, reducing false alarms and improving accuracy. In terms of processing time, there is no significant increase for the algorithms evaluated, so better cloud detection can be achieved without significantly increasing the computational cost. Based on this analysis, the Unibap iX10-100 embedded system is a possible choice for implementing these algorithms, as it is suitable for AI applications.
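The comparison rests on per-image accuracy and Intersection-over-Union (IoU) between a predicted cloud mask and a reference mask; a minimal computation of those two scores on binary masks is sketched below.

    import numpy as np

    def mask_scores(predicted, reference):
        """Pixel accuracy and cloud-class IoU for two binary cloud masks."""
        predicted = predicted.astype(bool)
        reference = reference.astype(bool)
        accuracy = np.mean(predicted == reference)
        intersection = np.logical_and(predicted, reference).sum()
        union = np.logical_or(predicted, reference).sum()
        iou = intersection / union if union > 0 else 1.0
        return accuracy, iou

    # Toy winter tile: part cloud, with a predicted mask that disagrees on
    # 5% of pixels (for example snow misclassified as cloud).
    rng = np.random.default_rng(0)
    reference = np.zeros((256, 256), dtype=bool)
    reference[:, :100] = True                         # true cloud region
    predicted = reference.copy()
    predicted[rng.random((256, 256)) < 0.05] ^= True  # flip 5% of pixels

    print(mask_scores(predicted, reference))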