1 |
Developing A Model To Increase Quality Of DEM. Pasaogullari, Onur. 01 February 2013.
Low resolution (LR) grid Digital Elevation Models (DEMs) are the inputs of a multi-frame super resolution (MFSR) algorithm used to obtain a high resolution (HR) grid DEM. In digital image MFSR, LR image pairs that carry non-redundant information are a necessity. Using the analogy between digital images and grid DEMs, it is proven that, although the LR grid DEMs have a single source, they carry non-redundant information and can serve as inputs to MFSR.
The quality of a grid DEM can be increased by using MFSR techniques. The level of spatial enhancement is directly related to the amount of non-redundant information that the LR grid DEM pairs carry. Super resolution techniques are thus seen to have the potential to increase the accuracy of grid DEMs obtained from a limited sampling.
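As a rough illustration of the multi-frame idea (a generic shift-and-add sketch that assumes known, integer sub-cell offsets; it is not the model developed in the thesis), several LR elevation grids can be fused onto a finer lattice as follows:

```python
import numpy as np

def mfsr_shift_and_add(lr_grids, offsets, scale):
    """Fuse several LR grid DEMs into one HR grid (shift-and-add sketch).

    lr_grids : list of 2D elevation arrays, all the same shape
    offsets  : list of integer (dy, dx) sub-cell shifts in HR cells (assumed known)
    scale    : integer upsampling factor
    """
    h, w = lr_grids[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for grid, (dy, dx) in zip(lr_grids, offsets):
        ys = (np.arange(h) * scale + dy) % (h * scale)
        xs = (np.arange(w) * scale + dx) % (w * scale)
        acc[np.ix_(ys, xs)] += grid          # scatter each LR sample onto the HR lattice
        cnt[np.ix_(ys, xs)] += 1
    hr = acc / np.maximum(cnt, 1)
    # HR cells observed by no LR grid fall back to simple replication of the first grid
    empty = cnt == 0
    hr[empty] = np.kron(lr_grids[0], np.ones((scale, scale)))[empty]
    return hr
```

The averaging of coincident samples is what exploits the non-redundant information carried by the LR grids; without distinct sub-cell offsets, no new detail can be recovered.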
|
2 |
Statistical Fusion of Scientific Images. Mohebi, Azadeh. 30 July 2009.
A practical and important class of scientific images is the 2D/3D images obtained from porous materials such as concrete, bone, active carbon, and glass. These materials constitute an important class of heterogeneous media possessing a complicated microstructure that is difficult to describe qualitatively. However, they are not totally random: there is a mixture of organization and randomness that makes them difficult to characterize and study. In order to study different properties of porous materials, 2D/3D high resolution samples are required, but obtaining high resolution samples usually requires cutting, polishing and exposure to air, all of which affect the properties of the sample. Moreover, 3D samples obtained by Magnetic Resonance Imaging (MRI) are very low resolution and noisy. Therefore, artificial samples of porous media need to be generated through a porous media reconstruction process. Recent contributions to the reconstruction task are based either on a prior model alone, learned from statistical features of real high resolution training data and used to generate samples, or on a prior model combined with measurements.
The main objective of this thesis is to come up with a statistical data fusion framework by which different images of porous materials, at different resolutions and modalities, are combined in order to generate artificial samples of porous media with enhanced resolution. Current super-resolution, multi-resolution and registration methods in image processing fail to provide a general framework for the porous media reconstruction purpose, since they are usually based on finding an estimate rather than a typical sample, and on having images of the same scene, which is not the case for porous media images.
The statistical fusion approach that we propose here is based on a Bayesian framework in which a prior model, learned from high resolution samples, is combined with a measurement model defined from the low resolution, coarse-scale information, to form a posterior model. We define a measurement model, in both the non-hierarchical and hierarchical image modeling frameworks, which describes how the low resolution information is asserted in the posterior model. We then propose a posterior sampling approach by which 2D posterior samples of porous media are generated from the posterior model. In a more general framework, we also assert constraints other than the measurement in the model and propose a constrained sampling strategy, based on simulated annealing, to generate artificial samples.
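As a deliberately simplified illustration of constrained posterior sampling (an Ising-style binary pore/solid prior with a block-average measurement term, annealed by simulated annealing; the `beta`, `lam` and cooling parameters and the toy `lr` input are assumptions, not the hierarchical models of the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_posterior(lr, b=4, beta=0.8, lam=20.0, sweeps=50):
    """Toy annealed Metropolis sampler: a binary pore/solid field whose
    b-by-b block averages are pulled toward the LR measurement `lr`,
    regularized by an Ising smoothness prior."""
    H, W = lr.shape[0] * b, lr.shape[1] * b
    x = (rng.random((H, W)) < np.kron(lr, np.ones((b, b)))).astype(int)
    M = x.reshape(lr.shape[0], b, lr.shape[1], b).mean(axis=(1, 3))  # running block means
    n_steps = sweeps * H * W
    for t in range(n_steps):
        T = max(0.05, 1.0 - t / n_steps)                 # simple linear cooling schedule
        i, j = rng.integers(H), rng.integers(W)
        v = x[i, j]
        nb = x[(i - 1) % H, j] + x[(i + 1) % H, j] + x[i, (j - 1) % W] + x[i, (j + 1) % W]
        disagree = v * (4 - nb) + (1 - v) * nb           # neighbours currently differing from v
        d_prior = beta * (4 - 2 * disagree)              # prior energy change if we flip v
        bi, bj = i // b, j // b
        dm = (1 - 2 * v) / (b * b)                       # change in that block's mean
        d_meas = lam * ((M[bi, bj] + dm - lr[bi, bj]) ** 2 - (M[bi, bj] - lr[bi, bj]) ** 2)
        dE = d_prior + d_meas
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            x[i, j] = 1 - v
            M[bi, bj] += dm
    return x

# hypothetical LR porosity map (pore fraction per coarse cell), purely illustrative
lr = rng.random((8, 8)) * 0.6 + 0.2
hr_sample = sample_posterior(lr)
```

Each accepted flip trades prior smoothness against agreement of the coarse block means with the low resolution measurement, which is the essence of the posterior sampling described above.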
|
3 |
Observation and Tracking of Tropical Cyclones Using Resolution Enhanced Scatterometry. Halterman, Richard Ryan. 11 December 2006.
The QuikSCAT scatterometer provides global daily coverage of oceanic near-surface vector winds. Recently, algorithms have been developed to enhance the spatial resolution of QuikSCAT winds from 25 km to 2.5 km posting. These ultra-high resolution winds are used, in comparison with the standard L2B data product winds, to observe and track tropical cyclones. Resolution-enhanced winds are found to reveal additional storm structure, such as inner-core size and the presence of multiple eyewalls, compared with standard resolution winds. The 2.5 km winds are also able to observe storms nearer to the shore than the 25 km winds. An analysis of circulation center locatability with each resolution of wind field is performed. Center fixes made with enhanced resolution winds are nearer the National Hurricane Center best-track positions than are standard resolution center fixes. A data and image set of every tropical cyclone worldwide observed by SeaWinds on QuikSCAT or SeaWinds on ADEOS II from 1999 through 2005 is generated and made available to the scientific community at http://scp.byu.edu.
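For orientation only, a crude sketch of one way to fix a circulation centre in a gridded (u, v) wind field is given below: it picks the point of maximum smoothed relative vorticity (Northern Hemisphere convention). It is an illustrative toy under assumed regular gridding, not the locatability analysis performed in the thesis.

```python
import numpy as np

def center_fix(u, v, lat, lon, win=5):
    """Rough circulation-centre estimate from a gridded wind field.
    u, v : eastward/northward wind components on a regular grid
    lat  : 1D latitudes per row; lon : 1D longitudes per column
    Returns the (lat, lon) of the maximum boxcar-smoothed relative vorticity."""
    zeta = np.gradient(v, axis=1) - np.gradient(u, axis=0)   # grid-relative vorticity
    pad = win // 2
    zp = np.pad(zeta, pad, mode="edge")
    smooth = np.zeros_like(zeta)
    for i in range(zeta.shape[0]):
        for j in range(zeta.shape[1]):
            smooth[i, j] = zp[i:i + win, j:j + win].mean()   # boxcar smoothing
    i, j = np.unravel_index(np.argmax(smooth), smooth.shape)
    return lat[i], lon[j]
```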
|
4 |
Diminution of the lithographic process variability for advanced technology nodes / Diminution de la variabilité du procédé lithographique pour les noeuds technologiques avancés. Szucs, Anna. 10 December 2015.
The 193 nm optical lithography currently in use is reaching its resolution limits, despite the various techniques developed to push those limits as far as possible. Next-generation lithography (NGL) options such as EUV exist, but they are not yet mature enough for mass production. To maintain a robust lithographic process for the 28 nm node and beyond, optical lithography therefore has to be pushed further through alternative techniques: Resolution Enhancement Techniques (RET) such as Optical Proximity Correction (OPC), and double patterning. In addition to the resolution limits, advanced technology nodes face increasing design complexity and steadily increasing process variability, which require more and more compromises.
In the light of this increasing complexity, this dissertation addresses the mitigation of lithographic process variability through a correction (mitigation) flow explored mainly with computational lithography. Within this frame, the main objective is to ensure good imaging quality for the process-window-limiting patterns, with an acceptable gain in usable depth of focus (uDoF); these complex patterns matter because they are the ones that can reduce the common usable depth of focus. To accomplish this task, a flow that could later be implemented in production was proposed and validated. The flow consists of a simulation-based methodology for detecting the most critical patterns, namely those impacted by mask topography and resist profile effects, followed by the mitigation and compensation of these effects once the critical patterns are detected. The results obtained on the complete flow are encouraging: a method that detects the critical patterns and mitigates the lithographic process variability was validated for the 28 nm technology node, and it could be adapted to nodes beyond 28 nm.
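To make the notion of process-window-limiting patterns concrete, here is a hedged sketch that computes a usable depth of focus per pattern from simulated CD-through-focus data and flags patterns falling below a uDoF budget; the data layout, numbers and threshold are illustrative assumptions, not the validated flow of the thesis.

```python
import numpy as np

def usable_dof(cd, focus, target, tol=0.10):
    """Widest contiguous focus range over which the simulated CD stays within
    +/- tol of target. `cd` and `focus` are equal-length 1D arrays."""
    ok = np.abs(np.asarray(cd) - target) <= tol * target
    best, start = 0.0, None
    for k, good in enumerate(ok):
        if good:
            start = k if start is None else start
            best = max(best, focus[k] - focus[start])
        else:
            start = None
    return best

def flag_critical(patterns, focus, min_udof=0.08):
    """Return the names of patterns whose usable DoF falls below the budget.
    `patterns` maps a pattern name to (cd_through_focus, target_cd)."""
    return [name for name, (cd, target) in patterns.items()
            if usable_dof(cd, focus, target) < min_udof]

# purely hypothetical data: focus in micrometres, CDs in nanometres
focus = np.linspace(-0.15, 0.15, 13)
patterns = {
    "dense_lines": (45 + 2000 * focus**2, 45.0),   # loses CD control quickly through focus
    "iso_space":   (45 + 50 * focus**2, 45.0),     # well behaved
}
print(flag_critical(patterns, focus))              # -> ['dense_lines']
```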
|
5 |
Edge directed resolution enhancement and demosaicing. Pekkucuksen, Ibrahim Ethem. 19 August 2011.
The objective of the proposed research is to develop high-performance, low-complexity resolution enhancement and demosaicing algorithms. Our approach to both problems is to find creative ways to incorporate edge information into the algorithm design; however, in contrast with the usual edge-directed approaches, we do not try to detect edge presence and orientation explicitly. For the image interpolation problem, we study the relationship between low resolution and high resolution pixels and derive a general interpolation formula to be used on all pixels. This simple interpolation algorithm is able to generate sharp edges in any orientation. We also propose a simple 3-by-3 filter that quantifies local luminance transitions and apply it to the demosaicing problem. Additionally, we propose a gradient-based directional demosaicing method that does not require setting any thresholds, and we show that its performance can be improved by using multiscale gradients. Finally, we address the low spectral correlation demosaicing problem by proposing a new family of hybrid color filter array (CFA) patterns and a local algorithm that is two orders of magnitude faster than a comparable non-local solution while offering the same level of performance.
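A minimal sketch of a threshold-free, gradient-weighted green-channel interpolation in the spirit described above is shown below (illustrative only, not the author's exact algorithm): horizontal and vertical estimates are blended with weights derived from directional gradients, so no explicit edge-orientation decision is made.

```python
import numpy as np

def interpolate_green(cfa):
    """Estimate the green channel from a Bayer mosaic `cfa` (RGGB layout assumed).
    At each non-green pixel, horizontal and vertical interpolants are blended
    with weights inversely related to the local directional gradients."""
    cfa = np.asarray(cfa, dtype=float)
    h, w = cfa.shape
    green = cfa.copy()
    gmask = np.zeros((h, w), dtype=bool)     # green sample locations for RGGB
    gmask[0::2, 1::2] = True
    gmask[1::2, 0::2] = True
    pad = np.pad(cfa, 1, mode="reflect")
    for i in range(h):
        for j in range(w):
            if gmask[i, j]:
                continue
            gl, gr = pad[i + 1, j], pad[i + 1, j + 2]    # left/right green neighbours
            gu, gd = pad[i, j + 1], pad[i + 2, j + 1]    # up/down green neighbours
            dh, dv = abs(gl - gr), abs(gu - gd)
            wh, wv = 1.0 / (1.0 + dh), 1.0 / (1.0 + dv)  # threshold-free weighting
            green[i, j] = (wh * (gl + gr) / 2 + wv * (gu + gd) / 2) / (wh + wv)
    return green
```

Because the weights vary continuously with the gradients, edges in any orientation are respected without tuning a hard threshold.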
|
6 |
Ground State Depletion Fluorescence Microscopy / Hochauflösende Fluoreszenzmikroskopie durch Entvölkerung des Grundzustandes. Bretschneider, Stefan. 21 December 2007.
No description available.
|
7 |
Development of digital imaging technologies for the segmentation of solar features and the extraction of filling factors from SODISM images. Alasta, Amro F.A. January 2018.
Solar images are one of the most important sources of available information on the current state and behaviour of the Sun, and the PICARD satellite is one of several ground- and space-based observatories dedicated to the collection of such data. The PICARD satellite hosts the Solar Diameter Imager and Surface Mapper (SODISM), a telescope aimed at continuously monitoring the Sun. It has generated a huge cache of images and other data that can be analysed and interpreted to improve the monitoring of features such as sunspots, and the prediction and diagnosis of solar activity.
The limited published analysis of SODISM data, in proportion to the available raw material, has provided the impetus for this study: a novel contribution to the development of a system to enhance, detect and segment sunspots using new hybrid methods. This research aims to yield an improved understanding of SODISM data by providing novel methods to tabulate a sunspot and filling factor (FF) catalogue, which will be useful for future forecasting activities.
The technologies developed and the findings achieved in this research will serve as a cornerstone to enhance the accuracy of sunspot segmentation, create efficient filling factor catalogue systems, and improve our understanding of SODISM image enhancement. The results achieved can be summarised as follows:
i) A novel enhancement method for SODISM images.
ii) New efficient methods to segment dark regions and detect sunspots.
iii) A novel filling factor catalogue including the number, size and location of sunspots.
iv) A novel statistical method to summarise the FF catalogue.
Image processing and partitioning techniques are used in this work; these methods have been applied to remove noise and detect sunspots, and they provide further information such as sunspot number, size and filling factor. The performance of the model is compared with the filling factors extracted from other satellites, such as SOHO. The results were also compared with the NOAA catalogue and achieved a precision of 98%. Performance measurement is also introduced and applied to verify the results and evaluate the proposed methods.
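As an illustrative toy version of the segmentation and filling-factor step (the thesis uses more elaborate hybrid methods), dark on-disc pixels can be thresholded against the quiet-Sun level and counted:

```python
import numpy as np

def disc_mask(shape, cx, cy, r):
    """Boolean mask of the solar disc given centre (cx, cy) and radius r in pixels."""
    yy, xx = np.ogrid[:shape[0], :shape[1]]
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2

def sunspot_filling_factor(img, disc, k=0.85):
    """Count on-disc pixels darker than k times the quiet-Sun median as sunspot
    area and return the filling factor (sunspot pixels / disc pixels)."""
    quiet = np.median(img[disc])
    spots = disc & (img < k * quiet)
    return spots.sum() / disc.sum(), spots

# illustrative use, with an assumed image and disc geometry:
# ff, spot_mask = sunspot_filling_factor(img, disc_mask(img.shape, 1024, 1024, 900))
```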
Algorithms, implementation, results and future work have been explained in this thesis.
|
8 |
FPGA Implementation Of Real Time Digital Video Superresolution For Infrared Cameras. Aktukmak, Mehmet. 01 January 2013.
At present, the quality of images taken by infrared cameras is low compared to other cameras because of the manufacturing technology, so resolution enhancement is becoming increasingly important for these cameras. Super resolution is a good approach to this resolution problem. In general, the systems in which infrared cameras are used require video processing to be performed in real time, so a suitable approach should be selected and implemented to work in real time. The computational load and processing time are significant issues in this case, and FPGAs have proven to be suitable hardware devices for this type of work.
Super resolution involves two parts: global motion estimation and high resolution image reconstruction. In this study, one suitable global motion estimation algorithm from the literature, referred to as PM, is selected to be implemented in real time. For the high resolution image reconstruction part, FPGA structures for some well-known algorithms in the literature, namely POCS, MLE, MAP and LMS, are proposed, and their performance, resource requirements and timing are discussed. The most efficient one is selected and implemented on the FPGA.
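For background, a plain-software sketch of one classical reconstruction scheme in this family, POCS with a box point-spread function and known integer shifts, is given below; it illustrates the algorithm only and says nothing about the FPGA structures proposed in the thesis.

```python
import numpy as np

def downsample(img, s):
    h, w = img.shape
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample(img, s):
    return np.kron(img, np.ones((s, s)))

def pocs_super_resolve(lr_frames, shifts, s, iters=20):
    """POCS-style multi-frame SR sketch (box PSF, known integer HR-pixel shifts):
    the HR estimate is cyclically projected onto the data-consistency set of each
    LR frame, i.e. every s-by-s block is corrected so its mean matches the
    corresponding observed LR pixel."""
    hr = upsample(np.mean(np.stack(lr_frames), axis=0), s)   # blurry initial guess
    for _ in range(iters):
        for lr, (dy, dx) in zip(lr_frames, shifts):
            aligned = np.roll(hr, (-dy, -dx), axis=(0, 1))   # move estimate into this frame's geometry
            residual = lr - downsample(aligned, s)           # consistency error in LR space
            aligned += upsample(residual, s)                 # exact projection for a mean/box PSF
            hr = np.roll(aligned, (dy, dx), axis=(0, 1))     # back to the reference geometry
    return hr
```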
|
9 |
Generalized Gaussian Decompositions for Image Analysis and Synthesis. Britton, Douglas Frank. 16 November 2006.
This thesis presents a new technique for performing image analysis, synthesis, and modification using a generalized Gaussian model. The joint time-frequency characteristics of a generalized Gaussian are combined with the flexibility of the analysis-by-synthesis (ABS) decomposition technique to form the basis of the model. The good localization properties of the Gaussian make it an appealing basis function for image analysis, while the ABS process provides a more flexible representation with enhanced functionality. ABS was first explored in conjunction with sinusoidal modeling of speech and audio signals [George87]. A 2D extension of the ABS technique is developed here to perform the image decomposition. This model forms the basis for new approaches in image analysis and enhancement.
The major contribution is made in the resolution enhancement of images generated using coherent imaging modalities such as Synthetic Aperture Radar (SAR) and ultrasound. The ABS generalized Gaussian model is used to decouple natural image features from the speckle and to facilitate independent control over feature characteristics and speckle granularity. This has the beneficial effect of increasing the perceived resolution and reducing the obtrusiveness of the speckle while preserving the edges and the definition of the image features. As a consequence of its inherent flexibility, the model does not preclude image processing applications for non-coherent image data. This is illustrated by its application as a feature extraction tool for a FLIR imagery complexity measure.
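To give a flavour of an analysis-by-synthesis decomposition with Gaussian atoms (a much-simplified, isotropic stand-in for the generalized Gaussian model; the atom bank and stopping rule are assumptions), a greedy sketch is:

```python
import numpy as np

def gaussian_atom(shape, cy, cx, sigma):
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2.0 * sigma ** 2))

def abs_gaussian_decompose(img, n_atoms=50, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Greedy analysis-by-synthesis sketch: at each step, place the isotropic
    Gaussian atom (from a small bank of widths) that removes the most residual
    energy at the residual's peak, recording (cy, cx, sigma, amplitude)."""
    residual = np.asarray(img, dtype=float).copy()
    model = np.zeros_like(residual)
    atoms = []
    for _ in range(n_atoms):
        cy, cx = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        best_gain, best_fit = -1.0, None
        for s in sigmas:
            g = gaussian_atom(residual.shape, cy, cx, s)
            amp = float((residual * g).sum() / (g * g).sum())   # least-squares amplitude
            gain = amp * amp * float((g * g).sum())             # residual energy removed
            if gain > best_gain:
                best_gain, best_fit = gain, (s, amp, g)
        s, amp, g = best_fit
        residual -= amp * g
        model += amp * g
        atoms.append((int(cy), int(cx), s, amp))
    return model, residual, atoms

# illustrative: decompose a random "image" into 50 atoms plus a residual
img = np.random.default_rng(1).random((64, 64))
features, speckle_like, atoms = abs_gaussian_decompose(img)
```

The summed atoms act as the smooth feature layer, while the residual retains the speckle-like fine structure, which is the decoupling exploited above.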
|