  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

A study of some parameters affecting the sharpness retention of cutlery

Lukat, Robert Norton 12 1900 (has links)
No description available.
2

Preferred Amounts of Virtual Image Sharpening in Augmented Reality Applications using the Sharpview Algorithm

Cook, Henry Ford 11 August 2017 (has links)
This thesis attempts to quantify generally preferred amounts of virtual image sharpening in augmented reality (AR) applications. A preferred amount of sharpening is sought in an effort to alleviate eye fatigue, and other negative symptoms, caused by accommodation switching between virtual images and real objects in AR systems. This is an important area of research because many AR applications supplement the real world with virtual information, often in the form of virtual text for users to read. An experiment in which human subjects chose between higher and lower sharpening amounts was run to expose preferred amounts of sharpening, or patterns in the chosen amounts, in relation to three experimental variables: virtual text accommodative distance, real text accommodative distance, and the object of focus (real or virtual). The results of this experimentation may benefit future AR research and implementations, specifically in how they handle users switching focus.
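The SharpView algorithm itself is not described in the abstract. As a generic illustration of what a tunable "sharpening amount" means, here is a minimal unsharp-masking sketch in which `amount` plays the role of the strength parameter a user might prefer more or less of; the function and its parameters are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np

def unsharp_mask(img, amount=1.0, sigma=1.0):
    """Sharpen a 2-D grayscale image by adding back high-frequency detail.

    `amount` is the tunable sharpening strength, analogous to the
    preferred-amount parameter studied in the thesis (illustrative only;
    this is plain unsharp masking, not the SharpView algorithm).
    """
    # Separable Gaussian blur built from a small explicit kernel.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    detail = img - blurred                      # high-frequency residual
    return np.clip(img + amount * detail, 0.0, 1.0)
```

With `amount=0` the image passes through unchanged; larger values progressively boost edge contrast, which is the kind of dial the experiment asked subjects to compare.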
3

Content-Adaptive Automatic Image Sharpening

Tajima, Johji, Kobayashi, Tatsuya January 2010 (has links)
No description available.
4

Fusion of hyperspectral and panchromatic images with very high spatial resolution / Fusion d'images panchromatiques et hyperspectrales à très haute résolution spatiale

Loncan, Laëtitia 26 October 2016 (has links)
Standard pansharpening aims at fusing a panchromatic image with a multispectral image in order to synthesize an image with the high spatial resolution of the former and the spectral resolution of the latter.
In the last decade many pansharpening algorithms have been presented in the literature using multispectral data. With the increasing availability of hyperspectral systems, these methods are now extending to hyperspectral pansharpening, i.e. the fusion of a panchromatic image with a high spatial resolution and a hyperspectral image with a coarser spatial resolution. However, state-of-the-art hyperspectral pansharpening methods usually do not consider the problem of mixed pixels. Their goal is solely to preserve the spectral information while adding spatial information. In this thesis, in a first part, we present and analyse the state-of-the-art methods to identify their performances and limitations. In a second part, we present an approach that actually deals with mixed pixels, through an unmixing pre-processing step performed before the fusion. This improves the result by adding missing spectral information that is not directly available in the hyperspectral image because of the mixed pixels. The performances of our proposed approach are assessed on different real data sets, with different spectral and spatial resolutions and corresponding to different contexts. They are compared qualitatively and quantitatively with state-of-the-art methods, both at a global and a local scale.
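As context for the fusion step the abstract describes, a toy component-substitution pansharpening sketch shows the basic mechanism of injecting panchromatic spatial detail into upsampled spectral bands. `cs_pansharpen`, its nearest-neighbour upsampling, and the band-mean intensity are illustrative assumptions, not the thesis's unmixing-based method.

```python
import numpy as np

def cs_pansharpen(hs, pan, ratio):
    """Toy component-substitution pansharpening sketch (not the thesis method).

    hs  : (bands, h, w) low-resolution hyperspectral cube
    pan : (h*ratio, w*ratio) high-resolution panchromatic image
    """
    # 1. Upsample each band to the panchromatic grid (nearest neighbour here).
    up = np.repeat(np.repeat(hs, ratio, axis=1), ratio, axis=2)
    # 2. Synthesize an intensity image from the upsampled bands.
    intensity = up.mean(axis=0)
    # 3. Inject the spatial detail of pan into every band, scaled per band
    #    so the relative spectral shape is roughly preserved.
    gains = up / (intensity + 1e-12)
    return up + gains * (pan - intensity)
```

Mixed pixels are exactly where this naive injection fails: a pixel covering several materials gets one blended gain, which is what an unmixing pre-step is meant to correct.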
5

Combined Spatial-Spectral Processing of Multisource Data Using Thematic Content

Filiberti, Daniel Paul January 2005 (has links)
In this dissertation, I design a processing approach and implement and test several solutions for combining spatial and spectral processing of multisource data. The measured spectral information is assumed to come from a multispectral or hyperspectral imaging system with low spatial resolution. Thematic content from a higher spatial resolution sensor is used to spatially localize different materials by their spectral signature. This approach results in both spectral unmixing and sharpening, a spatial-spectral fusion. The main real-imagery example, fusion of polarimetric synthetic aperture radar (SAR) with hyperspectral imagery, poses a unique challenge due to the phenomenological differences between the sensors. Theoretical models for electro-optical image formation and scene reflectivity are shown to lead naturally to a set of pixel mixing equations. Several solutions for the spatial unmixing form of these equations are examined, based on the method of least squares. In particular, a method for introducing thematic content into the solution for spatial unmixing is defined using weighted least squares. Finally, and most significantly, a spatial-spectral fusion algorithm based on the theory of projection onto convex sets (POCS) is presented. Theoretical aspects of POCS are briefly discussed, showing how the use of constraints in the form of closed convex sets drives the solution. Then, constraints are derived that are intimately tied to the underlying theoretical models. Simulated imagery is used to characterize the different constraint combinations that can be used in a POCS-based fusion algorithm. The fusion algorithms are applied to real imagery from two data sets, a Landsat ETM+ scene over Tucson, AZ and an AVIRIS/AirSAR scene over Tombstone, AZ. The results of the fusion are analyzed using scattergrams and correlation statistics. The POCS-based fusion algorithm is shown to produce a reasonable fusion of the AVIRIS/AirSAR data, with some sharpening of spatial-spectral features.
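The projection-onto-convex-sets idea can be sketched with two toy constraint sets. The dissertation derives its constraints from sensor and scene models, so the sets used below (block-average consistency with the low-resolution measurement, and a dynamic-range bound) are simplified assumptions for illustration only.

```python
import numpy as np

def pocs_fuse(low_res, detail_ref, ratio, iters=50):
    """Minimal POCS sketch: alternate projections onto two convex sets.

    Set C1: images whose block-average downsampling matches `low_res`
            (the measured low-resolution band).
    Set C2: images whose values stay within the dynamic range [0, 1].
    `detail_ref` (e.g. a high-spatial-resolution reference) initializes
    the estimate. Illustrative only, not the dissertation's constraints.
    """
    x = detail_ref.astype(float).copy()
    h, w = low_res.shape
    for _ in range(iters):
        # Projection onto C1: shift each block so its mean matches low_res.
        blocks = x.reshape(h, ratio, w, ratio)
        means = blocks.mean(axis=(1, 3))
        blocks += (low_res - means)[:, None, :, None]
        x = blocks.reshape(h * ratio, w * ratio)
        # Projection onto C2: clip to the valid range.
        x = np.clip(x, 0.0, 1.0)
    return x
```

Because both sets are closed and convex, alternating projections converge to a point consistent with every constraint, which is what lets additional, physically derived sets be stacked into the same iteration.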
6

The effect of sharpening stones upon curette surface roughness a thesis submitted in partial fulfillment ... periodontics ... /

Setter, Mark Koenen. January 1981 (has links)
Thesis (M.S.)--University of Michigan, 1981.
8

Multiresolution based, multisensor, multispectral image fusion

Pradhan, Pushkar S 06 August 2005 (has links)
Spaceborne sensors, which collect imagery of the Earth in various spectral bands, are limited by the data transmission rates. As a result the multispectral bands are transmitted at a lower resolution and only the panchromatic band is transmitted at its full resolution. The information contained in the multispectral bands is an invaluable tool for land use mapping, urban feature extraction, etc. However, the limited spatial resolution reduces the appeal and value of this information. Pan sharpening techniques enhance the spatial resolution of the multispectral imagery by extracting the high spatial resolution of the panchromatic band and adding it to the multispectral images. There are many different pan sharpening methods available, such as those based on the Intensity-Hue-Saturation and Principal Components Analysis transformations, but these cause heavy spectral distortion of the multispectral images. This is a drawback if the pan sharpened images are to be used for classification based applications. In recent years, multiresolution based techniques have received a lot of attention since they preserve the spectral fidelity in the pan sharpened images. Many variations of the multiresolution based techniques exist. They differ in the transform used to extract the high spatial resolution information from the images and in the rules used to synthesize the pan sharpened image. Because the superiority of many of these techniques has been demonstrated only by comparison with fairly simple methods like Intensity-Hue-Saturation or Principal Components Analysis, there is much uncertainty in the pan sharpening community as to which technique is best at preserving spectral fidelity. This research investigates these variations in order to find an answer to this question. An important parameter of the multiresolution based methods is the number of decomposition levels to be applied.
It is found that the number of decomposition levels affects both the spatial and spectral quality of the pan sharpened images. The minimum number of decomposition levels required to fuse the multispectral and panchromatic images was determined in this study for image pairs with different resolution ratios and recommendations are made accordingly.
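A minimal sketch of the multiresolution idea, assuming an à-trous-style detail extraction with a box filter standing in for a true wavelet: `levels` plays the role of the decomposition-level parameter whose choice the thesis investigates. The function names and the box-filter choice are illustrative, not a specific method from the thesis.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur with window 2*k+1 and reflect padding."""
    win = 2 * k + 1
    ker = np.ones(win) / win
    p = np.pad(img, k, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="valid"), 1, p)
    out = np.apply_along_axis(lambda c: np.convolve(c, ker, mode="valid"), 0, out)
    return out

def atrous_pansharpen(ms_band, pan, levels=2):
    """Additive multiresolution injection sketch.

    Extracts `levels` detail planes from the panchromatic image with an
    expanding blur (support doubles per level, mimicking à-trous scales)
    and adds them to the already-upsampled multispectral band.
    """
    approx = pan.astype(float)
    details = []
    for j in range(levels):
        smoothed = box_blur(approx, 2 ** j)
        details.append(approx - smoothed)   # detail plane at scale j
        approx = smoothed
    return ms_band + sum(details)
```

Raising `levels` pulls progressively coarser spatial structure out of the panchromatic band, which is why the number of decomposition levels trades off spatial against spectral quality.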
9

Color Image Processing based on Graph Theory

Pérez Benito, Cristina 22 July 2019 (has links)
Computer vision is one of the fastest growing fields at present and, along with other technologies such as Biometrics or Big Data, has become the focus of many research projects and is considered one of the technologies of the future. This broad field includes a plethora of digital image processing and analysis tasks. To guarantee the success of image analysis and other high-level processing tasks such as 3D imaging or pattern recognition, it is critical to improve the quality of the raw images acquired. Nowadays all images are affected by factors that hinder optimal image quality, making digital image (pre-)processing a fundamental step prior to any other processing task. The most common of these factors are noise and poor acquisition conditions: noise artefacts hamper proper interpretation of the image, and acquisition in poor lighting or exposure conditions, such as dynamic scenes, causes loss of image information that can be key for certain processing tasks. The image (pre-)processing steps known as smoothing and sharpening are commonly applied to overcome these inconveniences: smoothing is aimed at reducing noise, and sharpening at improving or recovering imprecise or damaged information of image details and edges with insufficient sharpness or blurred content that prevents optimal image (post-)processing. There are many methods for smoothing the noise in an image; however, in many cases the filtering process causes blurring at the edges and details of the image. Likewise, there are many sharpening techniques that try to combat this loss of information, but they do not take into account the existence of noise in the image they process: when dealing with a noisy image, any sharpening technique will also amplify the noise. Although the intuitive solution would be filtering first and sharpening afterwards, this two-stage approach has proved not to be optimal: the filtering can remove information that, in turn, may not be recoverable in the later sharpening step. In this PhD dissertation we propose a model based on graph theory for color image processing from a vector approach. In this model, a graph is built for each pixel in such a way that its features allow us to characterize and classify the pixel. As we will show, the proposed model is robust and versatile: potentially able to adapt to a variety of applications. In particular, we apply the model to create new solutions for the two fundamental problems in image processing: smoothing and sharpening. To approach high-performance image smoothing we use the proposed model to determine whether a pixel belongs to a flat region or not, taking into account the need to achieve a high-precision classification even in the presence of noise. Thus, we build an adaptive soft-switching filter by employing the pixel classification to combine the outputs of a filter with high smoothing capability and a softer one for edge/detail regions. Furthermore, another application of the model uses the pixel characterization to perform a simultaneous smoothing and sharpening of color images, addressing one of the classical challenges in the image processing field. We compare all the proposed techniques with other state-of-the-art methods to show that they are competitive both from an objective (numerical) and a visual evaluation point of view. / Pérez Benito, C. (2019). Color Image Processing based on Graph Theory [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/123955
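A grayscale stand-in for the per-pixel graph idea can be sketched as follows: edges connect a pixel to the 8-neighbours whose intensity difference falls below a threshold, and the edge count soft-switches between a strong smoother and the identity. The function, the threshold, and the grayscale simplification are illustrative assumptions, not the vector colour model of the thesis.

```python
import numpy as np

def soft_switch_denoise(img, threshold=0.1):
    """Per-pixel neighbourhood graph -> classification -> soft switching.

    For each pixel, count the 8-neighbours within `threshold` of its value
    (the graph edges). Many edges -> flat region -> move toward the
    neighbourhood mean; few edges -> edge/detail -> keep the pixel.
    """
    h, w = img.shape
    p = np.pad(img, 1, mode="reflect")
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    # Stack the 8 neighbour planes of every pixel.
    neigh = np.stack([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] for dy, dx in offs])
    edges = np.abs(neigh - img) < threshold      # graph adjacency per pixel
    weight = edges.sum(axis=0) / 8.0             # fraction of edges present
    mean = neigh.mean(axis=0)
    # Soft switch: flat pixels take the mean, edge pixels stay put.
    return weight * mean + (1.0 - weight) * img
```

The same classification signal could drive a sharpening branch instead of (or alongside) the smoother, which is the simultaneous smoothing-and-sharpening idea the thesis develops.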
10

Contribuição ao estudo do processo de fabricação de microponteiras de silicio ultra-finas / Contribution to the study of ultra-sharp silicon microtips fabrication process

Faria, Pedro Henrique Librelon de 29 May 2007 (has links)
Advisor: Marco Antonio Robert Alves / Master's dissertation - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / In this work we present a contribution to the study of the ultra-sharp silicon microtip fabrication process using the oxidation-sharpening technique. The silicon microtips were fabricated and submitted to repeated oxidation and oxide-strip steps to achieve ultra-sharp tips. SEM (Scanning Electron Microscopy) investigations were performed to observe the gradual reduction of the tip diameter after each oxidation step. Before the oxidation-sharpening process the silicon microtips had an average tip diameter of 0.7 µm; after the process the average tip diameter was 0.06 µm, a reduction of 92% without any significant reduction of the microtip height, which was kept at 4.5 µm. From the data obtained in the sharpening process, the oxidation rate at the tip was characterized as a function of the tip diameter. The results are in agreement with the model proposed by Kao et al.: the oxidation rate at the tip is lower than on a flat silicon surface and is further reduced as the tip diameter becomes smaller. Finally, we electrically characterized a microtip array by measuring its I-V (current-voltage) and I-t (current-time) characteristic curves, calculated the Fowler-Nordheim emission parameters, verified the hysteresis in the field emission, and analyzed the short-term emission-current stability. / Master's degree / Electronics, Microelectronics and Optoelectronics / Master in Electrical Engineering
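The Fowler-Nordheim analysis mentioned at the end of the abstract can be illustrated with the generic FN current form, I = a·V²·exp(−b/V), whose signature is a straight line on a plot of ln(I/V²) against 1/V. The constants below are hypothetical, not values from the dissertation.

```python
import numpy as np

# Generic Fowler-Nordheim form: I = a * V**2 * exp(-b / V).
# a and b are hypothetical device parameters for this synthetic example.
a, b = 1e-7, 400.0
V = np.linspace(100.0, 300.0, 50)          # applied voltages (arbitrary units)
I = a * V**2 * np.exp(-b / V)              # field-emission current

# FN plot: ln(I/V^2) vs 1/V is linear, with slope -b and intercept ln(a),
# which is how emission parameters are extracted from measured I-V curves.
x = 1.0 / V
y = np.log(I / V**2)
slope, intercept = np.polyfit(x, y, 1)
```

For this synthetic data the fit recovers slope ≈ −b and intercept ≈ ln(a); with measured data, the same line fit yields the emission parameters.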
