1

Role of color in face recognition

Yip, Andrew, Sinha, Pawan 13 December 2001 (has links)
One of the key challenges in face perception lies in determining the contribution of different cues to face identification. In this study, we focus on the role of color cues. Although color appears to be a salient attribute of faces, past research has suggested that it confers little recognition advantage for identifying people. Here we report experimental results suggesting that color cues do play a role in face recognition and their contribution becomes evident when shape cues are degraded. Under such conditions, recognition performance with color images is significantly better than that with grayscale images. Our experimental results also indicate that the contribution of color may lie not so much in providing diagnostic cues to identity as in aiding low-level image-analysis processes such as segmentation.
2

Grayscale patterning of PEDOT:PSS films by multi-photon lithography

Yao, Xiao January 1900 (has links)
Master of Science / Department of Chemistry / Daniel A. Higgins / Lithography techniques have been widely used to fabricate optical, electronic and optoelectronic devices with sub-micron scale spatial resolution. In the most common lithographic procedures, a light-sensitive polymer, called a photoresist, is exposed and developed to form a binary relief pattern on a substrate. The finest features are produced by X-ray or electron-beam methods, both of which are very expensive to employ. Less expensive methods use ultraviolet (UV) light to expose the photoresist through a photomask. The resolution in these methods is somewhat lower and is governed by diffraction of light by the photomask, the quality of the photomask, and any chemical/physical development steps subsequently employed. Due to these limitations, we have been investigating direct-write, ablative multiphoton lithography as an alternative method for preparing high-resolution patterns. With this method, near-IR light from an ultrafast pulsed laser source is focused into a polymer film, leading to depolymerization and vaporization of the polymer. Arbitrary binary patterns can be produced by raster scanning the sample while controlling exposure of the film to the laser. Importantly, high-resolution etching of the polymer film is achieved without the use of a photomask and without chemical development steps. While arbitrary patterns are easily prepared, it is also possible to prepare three-dimensional (i.e., grayscale) surface relief structures. In this study, ablative multiphoton photolithography is used to prepare binary and grayscale structures in thin films of PEDOT:PSS, an electrically conductive organic polymer blend. A simple kinetic model is proposed to explain the etching process. Data on the power dependence of polymer etching can be fitted to this model and used to determine the order of the nonlinear optical process involved. The etch depth as a function of laser focus is also investigated and shown to follow the same kinetic model. The results show that three-dimensional (grayscale) patterns can be prepared by modulating either the laser power or the laser focus. Images of several binary and grayscale structures prepared by this method are presented.
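As a rough illustration of the kind of analysis described, the sketch below fits etch depth versus laser power to a power law, depth = a·P^n, whose log-log slope gives the apparent order of the nonlinear process; the power-law form, variable names, and data values are illustrative assumptions, not the thesis's actual kinetic model or measurements.

```python
import numpy as np

# Hedged sketch: estimate the apparent order n of a nonlinear optical
# process by fitting etch depth vs. laser power to depth = a * P**n.
# The data points below are made-up placeholders, not measured values.
power_mw = np.array([5.0, 7.5, 10.0, 15.0, 20.0])     # average laser power (mW)
etch_nm = np.array([12.0, 40.0, 95.0, 320.0, 760.0])  # etch depth (nm)

# On log-log axes the power law becomes linear; the slope is the order n.
slope, intercept = np.polyfit(np.log(power_mw), np.log(etch_nm), 1)
print(f"apparent process order n ≈ {slope:.2f}")
print(f"prefactor a ≈ {np.exp(intercept):.3g}")
```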
3

Edge Detection based on Grayscale Morphology on Hexagonal Images

Tsai, Wei-cheng 29 August 2012 (has links)
This study focuses on hexagonally sampled images and grayscale morphology. We combine hexagonal image processing with grayscale morphology to develop hexagonal grayscale morphology, and propose an algorithm to detect and enhance edges. Hexagonal image processing consists of three important steps: conversion to hexagonally sampled images, processing, and display of the processed images on a simulated hexagonal grid. We construct four different sizes of hexagonal structuring elements to apply morphological operations to hexagonal images. In this study, we applied the morphological gradient for edge detection and the proposed algorithm for edge enhancement. Moreover, we developed six different shapes of structuring elements to find an optimal one. Finally, we assessed two methods to compare our results, and identified the best result and the optimal structuring element. We expect that the proposed algorithm will offer a useful tool for image processing on hexagonally sampled images.
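For reference, the morphological gradient used here for edge detection is the difference between a grayscale dilation and a grayscale erosion. The sketch below shows it on an ordinary square grid with a footprint crudely approximating a hexagon; the thesis's actual hexagonal lattice, structuring-element sizes, and enhancement algorithm are not reproduced.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

# A small footprint loosely approximating a hexagonal structuring element
# on a square grid; the thesis works on a true hexagonal lattice instead.
hex_footprint = np.array([[0, 1, 1, 0],
                          [1, 1, 1, 1],
                          [0, 1, 1, 0]], dtype=bool)

def morphological_gradient(img):
    # Dilation minus erosion responds strongly where intensity changes.
    d = grey_dilation(img, footprint=hex_footprint)
    e = grey_erosion(img, footprint=hex_footprint)
    return d - e

img = np.zeros((64, 64))
img[16:48, 16:48] = 200.0             # bright square on a dark background
edges = morphological_gradient(img)   # strong response along the square's border
```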
4

CHARACTERIZATION OF SEED DEFECTS IN HIGHLY SPECULAR SMOOTH COATED SURFACES

GNANAPRAKASAM, PRADEEP 01 January 2004 (has links)
Many smooth, highly specular coatings, such as automotive paints, are subject to considerable performance demands as customer expectations for the appearance of coatings continually increase. It is therefore vital to develop robust methods to monitor surface quality online. An automated visual assessment of specular coated surfaces would not only provide a cost-effective and reliable solution for industry but also facilitate the implementation of a real-time feedback loop. The scope of this thesis is a subset of the inspection technology that enables real-time closed-loop control of surface quality, and it concentrates on one common surface defect: the seed defect. The machine vision system design uses surface reflectance models as its rational basis. Using a single high-contrast image, the height of the seed defect is computed; the result is obtained rapidly and is a reasonably accurate approximation of the actual height.
5

Effect of display type and room illuminance in viewing digital dental radiography: display performance in panoramic and intraoral radiography

Kallio-Pulkkinen, S. (Soili) 17 November 2015 (has links)
Today, digital imaging is widely used in dentistry. In medical radiography, the importance of displays and room illuminance has been shown in many studies, whereas the effect of these factors on the diagnosis of dental radiographs is less clear and remains controversial. There is limited knowledge among dentists as to how observer performance is affected by the type of display, the level of ambient light, or grayscale calibration. The aim of this thesis was to compare observer performance in the detection of both anatomical structures and pathology in panoramic and bitewing radiographs using a consumer-grade display with γ 2.2 and DICOM calibration, a tablet (3rd-generation Apple iPad®), and a 6-megapixel (MP) medical display under different lighting conditions. Furthermore, the thesis aimed to provide recommendations on display type and acceptable room illuminance levels for the interpretation of dental radiographs. Thirty panoramic and bitewing radiographs were evaluated in random order on four displays under bright (510 lx) and dim (16 lx) ambient lighting by two observers. Both anatomical structures and pathology were evaluated because they provide both high- and low-contrast structures. A consensus reading served as the reference. Intra- and inter-observer agreement was determined, and the proportion of equivalent ratings and weighted kappa were used to assess reliability. The level of significance was set at P < 0.05. DICOM calibration may improve observer performance in the detection of pathology in panoramic radiographs regardless of the room illuminance level, and it improves the detection of enamel and dentinal caries in bitewing radiographs, particularly under bright lighting. Because the room illuminance level in dental practice is often higher than that tested here, it is recommended that the overall lighting level be decreased. Furthermore, a DICOM-calibrated consumer-grade display can be used instead of a medical display in dental practice without compromising diagnostic quality, which saves costs. Tablet displays should be used with caution in dental radiography.
6

Využití neuronové sítě při identifikaci znaku v obraze / Character identification in an image with the aid of a neural network

Pavlík, Daniel January 2008 (has links)
This thesis concerns the use of neural networks for recognizing the letters A to Z and the digits 0 to 9. The first part theoretically describes the nature of neural networks and explains in detail the principle of training a multilayer network by backward error propagation (i.e., backpropagation). Basic issues of image processing, and the network's resilience to image degradation by noise and JPEG compression, are also described. The second part covers the practical realization of a feed-forward multilayer network for recognizing binary patterns of alphabetic letters and the digits 0 to 9, created in the Matlab and Simulink environment. The final part covers the practical realization of a feed-forward network for recognizing grayscale patterns of alphabetic letters and the digits 0 to 9, also created in the Matlab and Simulink environment.
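To make the backpropagation procedure concrete, here is a minimal sketch of a two-layer feed-forward network trained by backpropagation on one-hot letter/digit targets; the layer sizes, learning rate, and glyph data are hypothetical placeholders, and the thesis's actual Matlab/Simulink models are not reproduced.

```python
import numpy as np

# Minimal two-layer feed-forward network trained with backpropagation,
# sketching the letter/digit recognition task described in the thesis.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 35, 20, 36   # e.g. 5x7 glyphs, 26 letters + 10 digits
W1 = rng.normal(0, 0.1, (n_hidden, n_in))
W2 = rng.normal(0, 0.1, (n_out, n_hidden))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, t, lr=0.5):
    """One backpropagation update for pattern x with one-hot target t."""
    global W1, W2
    h = sigmoid(W1 @ x)                 # hidden activations
    y = sigmoid(W2 @ h)                 # output activations
    # Backward pass: delta rules for sigmoid units with squared error.
    d_out = (y - t) * y * (1 - y)
    d_hid = (W2.T @ d_out) * h * (1 - h)
    W2 -= lr * np.outer(d_out, h)
    W1 -= lr * np.outer(d_hid, x)
    return np.sum((y - t) ** 2)

# Hypothetical training loop over stand-in glyph bitmaps.
X = rng.integers(0, 2, (n_out, n_in)).astype(float)  # fake binary glyphs
T = np.eye(n_out)                                    # one-hot class targets
for epoch in range(1000):
    err = sum(train_step(x, t) for x, t in zip(X, T))
print("final squared error:", err)
```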
7

An evaluation of image preprocessing for classification of Malaria parasitization using convolutional neural networks / En utvärdering av bildförbehandlingsmetoder för klassificering av malariaparasiter med hjälp av Convolutional Neural Networks

Engelhardt, Erik, Jäger, Simon January 2019 (has links)
In this study, the impact of multiple image preprocessing methods on Convolutional Neural Networks (CNNs) was studied. Metrics such as accuracy, precision, recall and F1-score (Hossin et al. 2011) were evaluated. Specifically, the study is geared towards malaria classification using the data set made available by the U.S. National Library of Medicine (Malaria Datasets n.d.). This data set contains images of thin blood smears, in which uninfected and parasitized blood cells have been segmented. Three CNN models were proposed for the parasitization classification task. Each model was trained on the original data set and on 4 preprocessed data sets. The preprocessing methods used to create the 4 data sets were grayscale conversion, normalization, histogram equalization and contrast limited adaptive histogram equalization (CLAHE). CLAHE preprocessing yielded a 1.46% (model 1) and 0.61% (model 2) improvement over the original data set in terms of F1-score; one model (model 3) produced inconclusive results. The results show that CNNs can be used for parasitization classification, but the impact of preprocessing is limited.
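For concreteness, a minimal sketch of the four preprocessing variants is given below using OpenCV; the file name, CLAHE parameters, and normalization choice are illustrative assumptions rather than the study's exact settings.

```python
import cv2
import numpy as np

def preprocess(img_bgr):
    """Apply the four preprocessing variants evaluated in the study.
    Parameter values here are common defaults, not the study's settings."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)

    # Normalization: rescale pixel intensities to [0, 1].
    normalized = img_bgr.astype(np.float32) / 255.0

    # Global histogram equalization on the grayscale image.
    hist_eq = cv2.equalizeHist(gray)

    # CLAHE: contrast limited adaptive histogram equalization.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    clahe_img = clahe.apply(gray)

    return gray, normalized, hist_eq, clahe_img

img = cv2.imread("blood_smear_cell.png")  # hypothetical segmented-cell image
variants = preprocess(img)
```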
8

Parallélisation de la ligne de partage des eaux dans le cadre des graphes à arêtes valuées sur architecture multi-cœurs / Parallelization of the watershed transform in weighted graphs on multicore architecture

Braham, Yosra 24 November 2018 (has links)
Our work is a contribution to the parallelization of the watershed transform, in particular watershed cuts, a notion of watershed introduced in the framework of edge-weighted graphs. We developed a state of the art on sequential watershed algorithms in order to motivate the choice of the algorithm that is the subject of our study, the M-border kernel algorithm. The main objective of this thesis is to parallelize this algorithm in order to reduce its running time. First, we presented a review of the works that have treated the parallelization of the different types of watershed, in order to identify the issues raised by this task and the solutions appropriate to our context. Second, we showed that despite the locality of the basic operation of this algorithm, which is the lowering of certain edges named M-border edges, its parallel execution raises a data-dependency problem, especially at M-border edges that share a common non-minimum vertex. In this context, we proposed three parallelization strategies for this algorithm that solve this problem. The first strategy consists of dividing the initial graph into bands called partitions, processed in parallel by P processors. The second strategy divides the edges of the initial graph alternately into subsets of independent edges. The third strategy examines the vertices instead of the edges of the initial graph while preserving the thinning paradigm on which the sequential algorithm is based; the set of non-minima vertices adjacent to the minima is then processed in parallel. Finally, we studied the parallelization of a segmentation technique based on the M-border kernel algorithm. This technique consists of three main steps: regional minima detection, vertex valuation, and M-border kernel computation. For this purpose, we studied the data dependency of each of its stages and proposed parallel algorithms for each of them. To evaluate our contributions, we implemented the parallel algorithms proposed in this thesis on a shared-memory multi-core architecture. The results showed a notable gain in execution time, reflected in speedup factors that increase with the number of processors regardless of the resolution of the input images.
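As a toy illustration of the second strategy, the sketch below greedily partitions an edge list into groups of pairwise vertex-disjoint edges and processes each group in parallel; the graph, the weights, and the "lowering" rule are invented stand-ins, not the thesis's M-border kernel computation.

```python
from concurrent.futures import ThreadPoolExecutor

def independent_edge_groups(edges):
    """Greedily partition edges into groups sharing no vertices."""
    groups, remaining = [], list(edges)
    while remaining:
        used, group, rest = set(), [], []
        for (u, v) in remaining:
            if u not in used and v not in used:
                group.append((u, v))
                used.update((u, v))
            else:
                rest.append((u, v))
        groups.append(group)
        remaining = rest
    return groups

def lower_edge(edge, weight, node_min):
    """Toy 'lowering': set the edge weight to the smaller endpoint minimum."""
    u, v = edge
    weight[edge] = min(node_min[u], node_min[v])

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]      # stand-in 4-cycle graph
weight = {e: 10 for e in edges}
node_min = {0: 1, 1: 4, 2: 2, 3: 3}

with ThreadPoolExecutor() as pool:
    for group in independent_edge_groups(edges):
        # Edges within a group touch disjoint vertices: no data races.
        list(pool.map(lambda e: lower_edge(e, weight, node_min), group))
```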
9

幾何圖像的平衡度與偏好度知覺歷程研究 / The Study of Perceptual Process of Balance and Aesthetic Preference in Geometric Images

林幸蓉 Unknown Date (has links)
Balance is an important compositional principle in visual arts. Balance gives unity to an image with separate elements, allowing them to produce visual forces and tensions that compensate for each other and thereby form an ordered whole. Previous research has provided plenty of discussion of the relationship between balance and aesthetic preference. The purpose of this study was to investigate the perceptual process of balance and aesthetic preference in geometric images. Following Wilson and Chatterjee (2005), geometric images were used to reexamine their proposal more thoroughly and to study balance and aesthetic preference further, taking grayscale into consideration. Four experiments were conducted. Binary images were used in Experiments 1 and 2. Experiment 1 tested the effects of element distribution on the perception of balance and further improved the algorithm proposed by Wilson and Chatterjee (2005). Experiment 2 investigated how element distribution affects aesthetic preference and how each measure of balance relates to aesthetic preference. In Experiments 3 and 4, grayscale images were used instead. Experiment 3 tested whether grayscale affects the perception of balance. Experiment 4 manipulated grayscale levels based on the results of Experiment 3, observing the joint effects of element distribution and grayscale level on balance perception and examining whether introducing a grayscale weight into the algorithm improves the prediction of subjective balance. Results showed that, for binary images, the deviation of the center of weight and the average of the symmetry measures along the four principal axes were good predictors of subjective balance, rather than the average of all eight symmetry measures. In contrast, aesthetic preference was better predicted by the average of the eight symmetry measures. The main effect of grayscale was significant, supporting the hypothesis that grayscale contributes to the subjective perception of balance. Finally, after the grayscale weight was included in the algorithm, most objective measures of balance showed improved power for predicting subjective balance, but the difference was significant only for the deviation of the center of weight. Based on these findings, it is suggested that the weight given to the four measures of inner and outer symmetry should be reduced when predicting perceived balance, because including them lowered the predictive power. As for aesthetic preference, the average of the eight symmetry measures introduced by Wilson and Chatterjee (2005) remained the best predictive index. Finally, this study suggests that future researchers consider other factors that also affect balance perception and evaluate their effects respectively.
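As an example of one of the objective measures discussed, the sketch below computes a grayscale-weighted deviation of the center of weight from the image center; the weighting scheme and normalization are illustrative assumptions, not Wilson and Chatterjee's exact algorithm.

```python
import numpy as np

def center_of_weight_deviation(img):
    """img: 2-D array, 0 = empty background, higher = darker/heavier element.
    Returns the normalized distance of the weighted centroid from the center."""
    h, w = img.shape
    total = img.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    cy = (ys * img).sum() / total   # grayscale-weighted row coordinate
    cx = (xs * img).sum() / total   # grayscale-weighted column coordinate
    # Distance from the geometric center, normalized by the half-diagonal.
    d = np.hypot(cy - (h - 1) / 2, cx - (w - 1) / 2)
    return d / (np.hypot(h - 1, w - 1) / 2)

canvas = np.zeros((100, 100))
canvas[10:30, 60:80] = 0.8                  # one dark element, off-center
print(center_of_weight_deviation(canvas))   # larger value = less balanced
```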
10

Segmentação de imagens naturais baseada em modelos de cor de diferença cromática, máscaras de detecção de contornos e supressão morfológica de texturas / Natural image segmentation based on chromatic-difference color models, contour detection masks, and morphological texture suppression

COSTA, Diogo Cavalcanti 02 March 2015 (has links)
Since the 1960s, numerous image segmentation techniques have been developed, yet only a few approach human-level segmentation, and those are computationally costly and inadequate for real-time applications. This thesis therefore presents a low-computational-cost, multi-resolution, edge-based image segmentation technique for detecting object contours in natural images, i.e., photographs of real-world scenes. The proposed technique's framework is divided into five steps. First, color and focus features are mapped from the input image. The color mapping enhances the color differences between RGB channels, allowing inter-channel color edge detection by gradient operators. Two chromatic-difference color models are proposed, RhGhBh and LgC. A color decomposition transform is also proposed, which segments the RGB color scale into independent channels, isolating the additive and subtractive colors and the shades of gray. The transform allows the local variation within each color to be measured, producing the image's focus map. In the second step, a morphological texture-suppression filtering smoothes abrupt color changes inside textures, allowing the outer edges of textures to be detected and decreasing the false identification of texture inner edges as object contours. In the third step, eight oriented masks, called contour detection masks, are used to calculate the local gradient, enhancing object contours over their inner edges. In the fourth step, a grayscale thinning is performed through a topological stacking of eroded and smoothed edges, in which the maximally centered edge pixels are isolated and morphologically thinned. Finally, in the fifth step, the edge intensities are corrected as a function of the local gradient and the local edge density, allowing better identification of object contours. Comparisons with recent and classic segmentation techniques are conducted using the Berkeley Segmentation Dataset and Benchmark. The results rank the proposed technique fifth in the Benchmark, with a processing time below 0.5% of that of the better-ranked techniques, making it suitable for real-time applications.
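To illustrate the oriented-mask gradient step, the sketch below convolves an image with eight compass-rotated kernels and keeps the maximum response; Kirsch kernels are used as a familiar stand-in, since the thesis's purpose-built contour detection masks are not specified here.

```python
import numpy as np
from scipy.ndimage import convolve

# North-oriented Kirsch compass kernel; a stand-in for the thesis's masks.
k0 = np.array([[ 5,  5,  5],
               [-3,  0, -3],
               [-3, -3, -3]], dtype=float)

def rotate_kernel(k):
    """Rotate a 3x3 compass kernel by 45 degrees (shift the outer ring)."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[r, c] for r, c in ring]
    vals = vals[-1:] + vals[:-1]       # shift the ring by one position
    out = k.copy()
    for (r, c), v in zip(ring, vals):
        out[r, c] = v
    return out

kernels = [k0]
for _ in range(7):                     # the remaining seven orientations
    kernels.append(rotate_kernel(kernels[-1]))

def compass_gradient(img):
    """Maximum response over the eight oriented masks."""
    responses = [convolve(img.astype(float), k, mode="nearest") for k in kernels]
    return np.max(responses, axis=0)
```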
