1. Content-Adaptive Automatic Image Sharpening. Tajima, Johji; Kobayashi, Tatsuya. January 2010.
No description available.
2. Piston Phase Measurements to Accelerate Image Reconstruction in Multi-Aperture Systems. Kraczek, Jeffrey Read. January 2011.
No description available.
3. Sobolev Gradient Flows and Image Processing. Calder, Jeffrey. 25 August 2010.
In this thesis we study Sobolev gradient flows for Perona-Malik style energy functionals and generalizations thereof. We begin with first-order isotropic flows, which are shown to be regularizations of the heat equation. We show that these flows are well-posed in the forward and reverse directions, which yields an effective linear sharpening algorithm. We furthermore establish a number of maximum principles for the forward flow and show that edges are preserved for a finite period of time. We then go on to study isotropic Sobolev gradient flows with respect to higher-order Sobolev metrics. As the Sobolev order is increased, we observe an increasing reluctance to destroy fine details and texture. We then consider Sobolev gradient flows for non-linear anisotropic diffusion functionals of arbitrary order. We establish existence, uniqueness and continuous dependence on initial data for a broad class of such equations. The well-posedness of these new anisotropic gradient flows opens the door to a wide variety of sharpening and diffusion techniques which were previously impossible under L2 gradient descent. We show how one can easily use this framework to design an anisotropic sharpening algorithm which can sharpen image features while suppressing noise. We compare our sharpening algorithm to the well-known shock filter and show that Sobolev sharpening produces natural-looking images without the "staircasing" artifacts that plague the shock filter. / Thesis (Master, Mathematics & Statistics), Queen's University, 2010.
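A minimal sketch of the Sobolev (H^1) preconditioning idea behind such flows, assuming the simplest quadratic (Dirichlet) energy and periodic boundary conditions; the function name, parameters, and FFT-based solver below are illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np

def sobolev_gradient_step(u, dt=0.1, lam=1.0, sharpen=False):
    """One H^1-preconditioned gradient step for E(u) = 0.5 * int |grad u|^2 with
    periodic boundaries. Descent gives a regularized heat flow; ascent (sharpen=True)
    runs the flow in reverse, which stays stable because the preconditioner
    (I - lam * Laplacian)^(-1) damps high frequencies."""
    ny, nx = u.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny)
    kx = 2 * np.pi * np.fft.fftfreq(nx)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2          # symbol of -Laplacian

    u_hat = np.fft.fft2(u)
    grad_l2_hat = k2 * u_hat                          # L2 gradient: -Laplacian(u)
    grad_h1_hat = grad_l2_hat / (1.0 + lam * k2)      # Sobolev (H^1) preconditioning
    sign = 1.0 if sharpen else -1.0                   # ascent sharpens, descent smooths
    return np.real(np.fft.ifft2(u_hat + sign * dt * grad_h1_hat))
```

Iterating this step with sharpen=True gives a linear sharpener in the spirit of the first-order isotropic flow; a Perona-Malik or anisotropic functional would require recomputing the L2 gradient nonlinearly at every step.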
4. Color Image Processing based on Graph Theory. Pérez Benito, Cristina. 22 July 2019.
Computer vision is one of the fastest-growing fields at present; along with other technologies such as Biometrics or Big Data, it has become the focus of numerous research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired.
Nowadays all images are affected by factors that hinder optimal image quality, making digital image (pre-)processing a fundamental step prior to any other practical application. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition under poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these problems: smoothing aims to reduce noise, while sharpening aims to improve or recover imprecise or damaged information in image details and edges whose insufficient sharpness or blurred content prevents optimal image (post-)processing.
There are many methods for smoothing the noise in an image; however, in many cases the filtering process blurs the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, but they do not account for the presence of noise in the image they process: when dealing with a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution would be to filter first and sharpen afterwards, this approach has proved not to be optimal: the filtering may remove information that, in turn, cannot be recovered in the later sharpening step.
In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify that pixel. As we will show, the proposed model is robust and versatile, able to adapt to a wide variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems of image processing: smoothing and sharpening.
To achieve high-performance image smoothing, we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need for high-precision classification even in the presence of noise. We then build an adaptive soft-switching filter that uses this pixel classification to combine the output of a filter with high smoothing capability with that of a gentler filter for edge/detail regions. Furthermore, another application of the model uses the pixel characterization to perform simultaneous smoothing and sharpening of color images, addressing one of the classical challenges in the image processing field. We compare all the proposed techniques with other state-of-the-art methods and show that they are competitive from both an objective (numerical) and a visual evaluation point of view. / Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
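A rough illustration of the soft-switching idea, not the dissertation's actual graph model: the sketch below counts the neighbours whose colour distance to a pixel falls under a threshold (a stand-in for the edges of the per-pixel graph) and blends a strong mean filter with the unfiltered pixel accordingly. The names, distance measure, and blending rule are assumptions.

```python
import numpy as np

def soft_switching_denoise(img, threshold=25.0, window=1):
    """Blend a strong mean filter (flat regions) with the unfiltered pixel
    (edge/detail regions) based on how many neighbours are colour-similar."""
    h, w, _ = img.shape
    img = img.astype(np.float64)
    out = img.copy()
    pad = np.pad(img, ((window, window), (window, window), (0, 0)), mode="edge")
    n_neigh = (2 * window + 1) ** 2 - 1
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * window + 1, x:x + 2 * window + 1].reshape(-1, 3)
            center = img[y, x]
            dist = np.linalg.norm(patch - center, axis=1)
            edges = np.count_nonzero(dist < threshold) - 1   # exclude the pixel itself
            weight = edges / n_neigh                         # ~1 on flat regions, ~0 on edges
            out[y, x] = weight * patch.mean(axis=0) + (1.0 - weight) * center
    return out
```

The threshold plays the same role as the key parameter studied in the dissertation: it decides when two pixels are considered similar enough to be connected, and thus how aggressively flat regions are smoothed.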
5. Advanced Algorithms for X-ray CT Image Reconstruction and Processing. Madhuri Mahendra Nagare (17897678). 5 February 2024.
X-ray computed tomography (CT) is one of the most widely used imaging modalities for medical diagnosis. Improving the quality of clinical CT images while keeping the X-ray dosage of patients low has been an active area of research. Recently, there have been two major technological advances in commercial CT systems. The first is the use of Deep Neural Networks (DNNs) to denoise and sharpen CT images, and the second is the use of photon-counting detectors (PCDs), which provide higher spectral and spatial resolution than conventional energy-integrating detectors. While both techniques have the potential to improve the quality of CT images significantly, there are still challenges to improving that quality further.

A denoising or sharpening algorithm for CT images must retain a favorable texture, which is critically important for radiologists. However, commonly used methodologies in DNN training produce over-smooth images lacking texture. This lack of texture is a systematic error leading to a biased estimator.

In the first portion of this thesis, we propose three algorithms to reduce this bias and thereby retain the favorable texture. The first method proposes a novel approach to designing a loss function that more heavily penalizes bias in the image while training a DNN, producing more texture and detail in the results. Our experiments verify that the proposed loss function outperforms the commonly used mean squared error loss function. The second algorithm proposes a novel approach to designing training pairs for a DNN-based sharpener. While conventional sharpeners employ noise-free ground truth, producing over-smooth images, the proposed Noise Preserving Sharpening Filter (NPSF) adds appropriately scaled noise to both the input and the ground truth to keep the noise texture in the sharpened result similar to that of the input. Our evaluations show that the NPSF can sharpen noisy images while producing the desired noise level and texture. The above two algorithms merely control the amount of texture retained and are not designed to produce texture that matches a target texture. A Generative Adversarial Network (GAN) can produce the target texture; however, naive application of GANs can introduce inaccurate or even unreal image detail. Therefore, we propose a Texture Matching GAN (TMGAN) that uses parallel generators to separate anatomical features from the generated texture, which allows the GAN to be trained to match the target texture without directly affecting the underlying CT image. We demonstrate that TMGAN generates enhanced image quality while also producing texture that is desirable for clinical application.

In the second portion of this research, we propose a novel algorithm for the optimal statistical processing of photon-counting detector data for CT reconstruction. Current reconstruction and material decomposition algorithms for photon-counting CT are not able to simultaneously utilize both the measured spectral information and advanced prior models. We propose a modular framework based on Multi-Agent Consensus Equilibrium (MACE) to obtain material decomposition and reconstructions using PCD data. Our method employs a detector agent that uses PCD measurements to update an estimate, along with a prior agent that enforces both physical and empirical knowledge about the material-decomposed sinograms. Importantly, the modular framework allows the two agents to be designed and optimized independently. Our evaluations on simulated data show promising results.
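To make the bias-versus-MSE trade-off concrete, here is a hedged NumPy sketch of a loss that augments mean squared error with a penalty on the patch-wise mean error, one simple way to penalize bias; the patch-averaging scheme, the weight alpha, and the function name are assumptions, not the thesis's exact loss, and a real training loss would be written in the DNN framework used.

```python
import numpy as np

def bias_penalizing_loss(pred, target, alpha=0.5, patch=8):
    """MSE plus a penalty on the patch-wise mean error. Plain MSE is minimized by
    over-smooth estimates; the extra term penalizes systematic (biased) errors that
    survive local averaging, encouraging the network to keep texture."""
    err = pred - target
    mse = np.mean(err ** 2)
    h, w = err.shape
    hp, wp = (h // patch) * patch, (w // patch) * patch
    blocks = err[:hp, :wp].reshape(hp // patch, patch, wp // patch, patch)
    local_bias = blocks.mean(axis=(1, 3))          # mean error of each patch
    return mse + alpha * np.mean(local_bias ** 2)
```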
6. Open geospatial data fusion and its application in sustainable urban development. Xu, Shaojuan. 17 July 2020.
This thesis presents the implementation of data fusion techniques for sustainable urban development. Recently, more and more geospatial data have become easily available at no cost. These vast quantities of geospatial data come mainly from four kinds of sources: remote sensing satellites, geographic information system (GIS) data, citizen science, and the sensor web. Among them, satellite images have been used most, owing to their frequent and repetitive coverage as well as data acquisition over long time periods. However, the rather coarse spatial resolution of, for example, 30 m for Landsat 8 multispectral images impairs the application of satellite images in urban areas. Even though image fusion techniques have been used to improve the spatial resolution, the existing image fusion methods are suitable neither for sharpening single-band thermal images nor for hyperspectral images with hundreds of bands. Therefore, a simplified Ehlers fusion was developed. It adds the spatial information of a high-resolution image into a low-resolution image in the frequency domain through the fast Fourier transform (FFT) and filter techniques. The developed algorithm successfully improved the spatial resolution of both single-band thermal images and hyperspectral images. It can enhance various images, regardless of the number of bands and the spectral coverage, providing more precise measurements and richer information.

To investigate the performance of the simplified Ehlers fusion in practical use, it was applied to urban heat island (UHI) analysis. This was done by sharpening daytime and nighttime thermal images from Landsat 8, Landsat 7, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). The developed algorithm effectively improved the spatial detail of the original images so that the temperature differences between agricultural, forest, industrial, transportation, and residential areas could be distinguished. Based on that, it was found that in the study city the causes of UHI are mainly anthropogenic heat from industrial areas as well as high temperatures from road surfaces and dense urban fabric. Based on this analysis, corresponding mitigation strategies were tailored.

Remote sensing images are useful yet not sufficient to retrieve land-use-related information, despite their high spatial resolution. For sustainable urban development research, remote sensing images need to be combined with data from other sources; accordingly, image fusion needs to be extended to broader data fusion. The extraction of urban vacant land was therefore taken as a second application case. Much effort was spent on the definition of vacant land, as unclear definitions lead to ineffective data fusion and incorrect site extraction results. Through an intensive study of current research and the available open data sources, a vacant land typology is proposed. It includes four categories: transportation-associated land, natural sites, unattended areas or remnant parcels, and brownfields. Based on this typology, a two-level data fusion framework was developed. On the feature level, sites are identified: for each type of vacant land, an individual site extraction rule and data fusion procedure is implemented. The overall data fusion involves satellite images, GIS data, citizen science, and social media data. In the end, four types of vacant land features were extracted from the study area. On the decision level, these extracted sites can be conserved or further developed to support sustainable urban development.
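To illustrate the kind of frequency-domain detail injection that FFT-based fusion builds on, the sketch below keeps the low frequencies of an upsampled low-resolution band and takes the high frequencies from a co-registered high-resolution image through an ideal circular low-pass mask. The cutoff value, the mask shape, and the function name are assumptions; the thesis's simplified Ehlers fusion uses its own filter design.

```python
import numpy as np

def fft_detail_injection(low_res_up, high_res, cutoff=0.1):
    """Fuse two co-registered images of equal shape in the frequency domain:
    low frequencies come from the (upsampled) low-resolution band, high
    frequencies from the high-resolution image."""
    ny, nx = low_res_up.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    low_pass = (np.sqrt(fx ** 2 + fy ** 2) <= cutoff).astype(float)  # ideal circular mask
    fused_hat = (np.fft.fft2(low_res_up) * low_pass +
                 np.fft.fft2(high_res) * (1.0 - low_pass))
    return np.real(np.fft.ifft2(fused_hat))
```

In practice a smooth (e.g. Gaussian) transition between the two bands of the mask would reduce the ringing artifacts that an ideal cutoff can introduce.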