  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Tilt aftereffect for texture edges is larger than in matched subjective edges, but both are strong adaptors of luminance edges

Keeble, David R.T., Hawley, S.J. January 2006 (has links)
No / The tilt aftereffect (TAE) has been used previously to probe whether contours defined by different attributes are subserved by the same or by different underlying mechanisms. Here, we compare two types of contours between texture surfaces, one with texture orientation contrast across the edge (orientation contrast contour; OC) and one without, commonly referred to as a subjective contour (SC). Both contour types produced curves of TAE versus adapting angle displaying typical positive and negative peaks at ~15 and 70 deg, respectively. The curves are well fit by difference of Gaussians (DoG) functions, with one Gaussian accounting for the contour adaptation effect and the other accounting for the texture orientation adaptation effect. Adaptation to OC elicited larger TAEs than did adaptation to SC, suggesting that OCs more effectively activate orientation-selective neurons in V1/V2 during prolonged viewing. Surprisingly, both contour types adapted a luminance contour (LC) as strongly as did an LC itself, suggesting that the second-order orientation cue contained in the texture edge activates the same set of orientation-selective neurons as does an LC. These findings have implications for the mechanisms by which the orientations of texture edges and SCs are encoded.
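The shape of the DoG curves reported above can be sketched numerically. Everything in this snippet (the function name, amplitudes, peak positions, widths) is hypothetical, chosen only to place the positive and negative lobes near the reported ~15 and ~70 deg; it is not the authors' fitting code.

```python
import math

def dog_tae(angle, a1, mu1, s1, a2, mu2, s2):
    """Difference of two Gaussians in adapting angle: one for the contour
    adaptation effect (amplitude a1, centre mu1, width s1) minus one for
    the texture orientation adaptation effect (a2, mu2, s2). All parameter
    values used below are illustrative assumptions."""
    g1 = a1 * math.exp(-((angle - mu1) ** 2) / (2 * s1 ** 2))
    g2 = a2 * math.exp(-((angle - mu2) ** 2) / (2 * s2 ** 2))
    return g1 - g2

# Sample the curve over 0..90 deg of adapting angle:
curve = [dog_tae(a, 2.0, 15.0, 10.0, 1.0, 70.0, 15.0) for a in range(91)]
```

With these toy parameters the curve peaks positively near 15 deg and dips negatively near 70 deg, mirroring the qualitative shape the abstract describes.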
182

Reinventing the Museum: Textured Materiality in Modern and Contemporary Women’s Elegies

Oh, Alicia Ye Sul January 2022 (has links)
Thesis advisor: Marjorie Howes / Focusing on elegiac dimensions of the museum, Reinventing the Museum: Textured Materiality in Modern and Contemporary Women’s Elegies contends that Eavan Boland, Sylvia Plath, Elizabeth Bishop, and Gwendolyn Brooks help us conceive a new spatial imaginary, which addresses traditionally underrepresented subject matter and historiographies with ethical alertness. Reading their works and their affective-experiential modes through an interdisciplinary feminist lens, this dissertation explores the ways in which the four women poets revise and update Julia Kristeva’s foundational concept of women’s time. Their decision to draw from museums—both physical and metaphorical—foregrounds woman’s embodied self and its relational ontology, ultimately to challenge the dominant dynamics of historiographical, literary canon formation. Functioning as bookends are my chapters on Boland and Brooks, whose writings about Irish and Black experiences respectively explore tactics of space-making in the face of postcolonial, post-slavery displacements and diasporas. Taking Boland’s museum elegies as a point of departure, I move on to Plath’s plastic self-elegies, which stage an inquiry into the plasticity of the self and the theatrics of self-exhibition. Next, I examine Bishop’s shift from Enlightenment taxonomization to love as a possible antidote to Enlightenment culture. Straddling love poetry and elegy, Bishop’s prismatic love elegies often cast a discursive journey to proto-museums with the beloved as a figure for love. Her occasional superimposition of the lover on the racial other gestures towards Brooks’s necropolitical elegies and elegies of necropolitics, the latter of which resonate with the mission of Black neighborhood museums.
Each assigned a textured materiality—textile, plastic, light, and firmness, respectively—the chapters are divided into three sections that proceed from the poets’ problem-posing to their attempt to think through the identified problem. On a broader scale, the chapters progress from the most concrete, fungible, and tangible materiality to the least, which points to a way of being, an attitude towards life. Assigning textured materialities to each chapter additionally draws attention to the interstices between the formal materiality of poetic language and nonlinguistic gestures and speech sounds. In this manner, my project actively builds on the momentum of and expands current debates in and around genre theory, phenomenology, affect studies, a version of historical materialism, postcolonial/Black studies, and museum studies. / Thesis (PhD) — Boston College, 2022. / Submitted to: Boston College. Graduate School of Arts and Sciences. / Discipline: English.
183

Développement d’un modèle d’analyse de texture multibande / New model for multiband texture analysis

Safia, Abdelmounaime January 2014 (has links)
Résumé : En télédétection, la texture facilite l’identification des classes de surfaces sur des critères de similitude d’organisation spatiale des pixels. Les méthodes d’analyse texturale utilisées en télédétection et en traitement d’image en général sont principalement proposées pour extraire la texture dans une seule bande à la fois. Pour les images multispectrales, ceci revient à extraire la texture dans chaque bande spectrale séparément. Cette stratégie ignore la dépendance qui existe entre la texture des différentes bandes (texture inter-bande), qui peut être une source d’information additionnelle aux côtés de l’information texturale classique intra-bande. La prise en charge de la texture multibande (intra- et inter-bande) engendre une complexité calculatoire importante. Dans sa recherche de solution pour l’analyse de la texture multibande, ce projet de thèse revient vers les aspects fondamentaux de l’analyse de la texture, afin de proposer un modèle de texture qui possède intrinsèquement une complexité calculatoire réduite, et cela indépendamment de l’aspect multibande de la texture. Une solution pour la texture multibande est ensuite greffée sur ce nouveau modèle, de manière à lui permettre d’hériter de sa complexité calculatoire réduite. La première partie de ce projet de recherche introduit donc un nouveau modèle d’analyse de texture appelé modèle d’unité texturale compacte (en anglais : Compact Texture Unit, C-TU). Le C-TU prend comme point de départ le modèle de spectre de texture et propose une réduction significative de sa complexité. Cette réduction est atteinte en proposant une solution générale pour une codification de la texture avec la seule information d’occurrence, sans l’information structurelle. En prenant avantage de la grande efficacité calculatoire du modèle de C-TU développé, un nouvel indice qui analyse la texture multibande comme un ensemble indissociable d’interactions spatiales intra- et inter-bandes est proposé.
Cet indice, dit C-TU multibande, utilise la notion de voisinage multibande afin de comparer le pixel central avec ses voisins dans la même bande et avec ceux des autres bandes spectrales. Ceci permet à l’indice de C-TU multibande d’extraire la texture de plusieurs bandes simultanément. Finalement, une nouvelle base de données de textures couleurs multibandes est proposée, pour une validation des méthodes texturales multibandes. Une série de tests visant principalement à évaluer la qualité discriminante des solutions proposées a été conduite. L’ensemble des résultats obtenus dont nous faisons rapport ici confirme que le modèle de C-TU proposé ainsi que sa version multibande sont des outils performants pour l’analyse de la texture en télédétection et en traitement d’images en général. Les tests ont également démontré que la nouvelle base de données de textures multibandes possède toutes les caractéristiques nécessaires pour être utilisée en validation des méthodes de texture multibande. // Abstract : In multispectral images, texture is typically extracted independently in each band using existing grayscale texture methods. However, reducing the texture of multispectral images to a set of independent grayscale textures ignores inter-band spatial interactions, which can be a valuable source of information. The main obstacle to characterizing texture as intra- and inter-band spatial interactions is that the required calculations are cumbersome. In the first part of this PhD thesis, a new texture model named the Compact Texture Unit (C-TU) model was proposed. The C-TU model is a general solution for reducing the computational complexity of the texture spectrum model. This simplification comes from the fact that the C-TU model characterizes texture using only statistical information, while the texture spectrum model uses both statistical and structural information.
The proposed model was evaluated using a new monoband C-TU descriptor in the context of texture classification and image retrieval. Results showed that the monoband C-TU descriptor that uses the proposed C-TU model provides performance equivalent to that delivered by the texture spectrum model, but with much lower complexity. The computational efficiency of the proposed C-TU model is exploited in the second part of this thesis in order to propose a new descriptor for multiband texture characterization. This descriptor, named multiband C-TU, extracts texture as a set of intra- and inter-band spatial interactions simultaneously. The multiband C-TU descriptor is very simple to extract and computationally efficient. The proposed descriptor was compared with three strategies commonly adopted in remote sensing. The first is extracting texture using panchromatic data; the second is extracting texture separately from a few new bands obtained by principal component transform; and the third is extracting texture separately in each spectral band. These strategies were applied using co-occurrence matrix and monoband compact texture descriptors. For all experiments, the proposed descriptor provided the best results. In the last part of this thesis, a new color texture image database, named the Multiband Brodatz Texture database, is developed. Images from this database have two important characteristics. First, their chromatic content, even if it is rich, does not have discriminative value, yet it contributes to form texture. Second, their textural content is characterized by high intra- and inter-band variation. These two characteristics make this database ideal for multiband texture analysis without the influence of color information.
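The occurrence-only coding idea behind the C-TU model can be illustrated with a toy 3×3 operator. This is not the thesis's exact definition, just a sketch of how dropping the spatial arrangement of the neighbour comparisons shrinks the code space:

```python
def compact_unit(window):
    """Occurrence-only texture code for a 3x3 window: count how many of the
    8 neighbours are below / equal to / above the centre pixel, ignoring
    where they sit. Only 45 codes are possible (non-negative triples summing
    to 8), versus 3**8 = 6561 ordered units in the texture spectrum model."""
    c = window[1][1]
    lo = eq = hi = 0
    for i in range(3):
        for j in range(3):
            if (i, j) == (1, 1):
                continue  # skip the centre pixel itself
            v = window[i][j]
            if v < c:
                lo += 1
            elif v == c:
                eq += 1
            else:
                hi += 1
    return (lo, eq, hi)

code = compact_unit([[1, 2, 3], [4, 5, 6], [7, 8, 9]])  # centre 5: 4 below, 4 above
```

A histogram of such codes over an image would then play the role the texture spectrum plays in the full model, at a fraction of the cost.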
184

Détection d'hétérogénéités linéaires dans les textures directionnelles : application à la détection de failles en sismique de réflexion

David, Ciprian Petru 15 December 2008 (has links)
Détection d’hétérogénéités linéaires dans les textures directionnelles – application à la détection de failles en sismique de réflexion : les méthodes développées concernent la classe particulière des textures directionnelles. Dans un premier chapitre, nous rappelons la notion de texture, le concept de contour et le contexte applicatif concernant l’imagerie sismique. Le deuxième chapitre a pour objet l’analyse des différentes contributions que l’on peut trouver dans la littérature concernant la détection de contours dans le contexte des images texturées : les approches qui relèvent du domaine de la géophysique et les méthodes proposées par la communauté des traiteurs d’image pour la détection de contours. Le troisième chapitre regroupe nos propositions : une approche basée sur un critère géométrique, une variante récursive robuste et une extension alliant mesure 2D et diffusion 3D. Ces propositions sont validées par une analyse quantitative par rapport aux méthodes existantes. / Linear disparity detection in directional textures – application to fault detection in seismic images: the developed approaches deal with the particular family of directional textures. The first chapter introduces the notion of texture and the concept of contour, together with a detailed presentation of the application concerning seismic imagery. The object of the second chapter is the analysis of the different contributions concerning edge and contour detection in textured images found in the literature: the approaches used in the field of geophysics and the approaches proposed by the image processing community. The third chapter regroups our contributions: a geometric criterion based approach, a recursive robust extension of the geometric approach, and a 3D recursive robust extension combining a 2D measure and a 3D diffusion technique.
Apart from the qualitative comparisons, these contributions are validated by a quantitative analysis in comparison with the existing methods.
185

Caractérisation du cerveau humain : application à la biométrie / Characterization of the human brain : application to biometrics

Aloui, Kamel 17 December 2012 (has links)
D'une manière générale, la biométrie a pour objectif d'établir ou de vérifier l'identité d'un individu, notamment à partir de ses caractéristiques physiques ou comportementales. Cette pratique tend à remplacer les méthodes traditionnelles basées sur la connaissance, à savoir un mot de passe ou un code PIN, ou basées sur les possessions telles qu'une pièce d'identité ou un badge. Au quotidien, plusieurs modalités biométriques ont été développées dans une certaine mesure, dont les produits sont disponibles et déjà utilisés dans de nombreuses applications. La reconnaissance biométrique est un domaine de recherche qui ne cesse d'évoluer et la recherche de nouvelles modalités de hautes performances est d'actualité. L'objectif de notre thèse consiste à développer et à évaluer de nouvelles modalités biométriques basées sur des caractéristiques cachées, infalsifiables et ne pouvant pas être modifiées volontairement. C'est dans ce contexte que nous introduisons une nouvelle modalité biométrique utilisant les caractéristiques du cerveau humain ; la faisabilité d'une telle modalité a fait l'objet de notre étude. À cet effet, des images volumiques cérébrales, obtenues par IRM (Imagerie par Résonance Magnétique), sont utilisées pour en extraire les informations pertinentes et générer par la suite des codes biométriques du cerveau, appelés « BrainCode », qui serviront à l'identification ou à l'authentification d'un individu. Ainsi, nous avons élaboré trois techniques de reconnaissance biométrique. La première technique utilise l'information de texture d'une image numérique du cerveau comme signature individuelle, alors que la deuxième est basée sur l'utilisation des caractéristiques géométriques et morphologiques du cerveau. Enfin, la dernière technique explorée se base sur la fusion des caractéristiques géométriques et des caractéristiques de texture du cerveau.
Ces nouvelles techniques biométriques nécessitent évidemment l'acquisition des images IRM du cerveau en considérant uniquement des personnes saines et adultes. Les résultats obtenus ont conduit à des performances de reconnaissance intéressantes. Plus précisément, la première technique, basée sur l'analyse de texture et la génération d'un « BrainCode » du cerveau, permet d'obtenir une précision de vérification de l'ordre de 97,53% avec un FAR = 1,5%, un FRR = 3,41% et un EER = 2,72%. Avec la deuxième technique, utilisant un modèle géométrique du cerveau, appelé « MGC » (Modèle Géométrique du Cerveau), nous arrivons à une précision maximale de l'ordre de 98,80% avec un FAR = 0,09%, un FRR = 2,31% et un EER = 1,92%. Enfin, la fusion des caractéristiques géométriques et de texture permet d'atteindre une précision de l'ordre de 99,43% avec un FAR = 0,32% et un FRR = 0,72%. Dans cette étude, nous nous sommes aussi intéressés à la robustesse des approches proposées par rapport au bruit. / In general, biometrics aims to establish or verify the identity of an individual, especially from physical or behavioural characteristics. This practice tends to replace traditional knowledge-based methods, such as a password or PIN code, and token-based methods, such as an identity document or a badge. Many biometric modalities have been developed to the point where products are available and already used in many applications. Biometric recognition is a research area that keeps evolving, and the search for new high-performance modalities is ongoing. The aim of this thesis is to develop and evaluate new methods based on hidden biometric features that are tamper-proof and cannot be voluntarily changed. In this context, we introduce a new biometric modality that uses characteristics of the human brain, and the feasibility of such a modality was the object of our study.
For this, brain volumetric images obtained by MRI (Magnetic Resonance Imaging) are used to extract the most discriminative brain patterns. A biometric code of the brain, called « BrainCode », is then generated to serve in individual identification or authentication. Thus, we developed three brain-based biometric techniques. The first technique uses textural patterns of a digital brain image, while the second is based on morphological and geometrical characteristics of the brain. The last explored technique is based on the fusion of geometric features and textural patterns from a brain MRI slice. These new biometric techniques obviously require the acquisition of brain MRI images, considering only healthy adult subjects. According to the experimental results, the developed techniques lead to interesting recognition performance. More precisely, the first technique, based on texture pattern analysis and « BrainCode » generation, provides about 97.53% accuracy with FAR = 1.5%, FRR = 3.41% and EER = 2.72%. With the second technique, using a geometric model of the brain, called « GMB » (Geometric Model of the Brain), we obtained a maximum accuracy of around 98.80% with FAR = 0.09%, FRR = 2.31% and EER = 1.92%. Finally, by fusing geometric and texture features, we reached about 99.47% accuracy with FAR = 0.32% and FRR = 0.72%. In this study, we are also interested in the robustness of the proposed approaches against noise.
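The verification metrics quoted above have standard definitions that can be computed from genuine and impostor match scores. The function below is a generic sketch of those definitions, not code from the thesis:

```python
def far_frr(genuine, impostor, threshold):
    """Verification error rates at a given decision threshold, assuming a
    higher score means 'more likely the same person'. FRR is the fraction of
    genuine comparisons wrongly rejected; FAR is the fraction of impostor
    comparisons wrongly accepted. The EER is the rate at the threshold where
    the two curves cross."""
    frr = sum(s < threshold for s in genuine) / len(genuine)
    far = sum(s >= threshold for s in impostor) / len(impostor)
    return far, frr

# Toy score lists, purely illustrative:
far, frr = far_frr([0.9, 0.8, 0.4], [0.3, 0.6, 0.2], threshold=0.5)
```

Sweeping the threshold and recording (FAR, FRR) pairs is how operating points such as the ones reported above (e.g. FAR = 0.32%, FRR = 0.72%) are obtained.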
186

Particle generation for geometallurgical process modeling

Koch, Pierre-Henri January 2017 (has links)
A geometallurgical model is the combination of a spatial model representing an ore deposit and a process model representing the comminution and concentration steps in beneficiation. The process model itself usually consists of several unit models. Each of these unit models operates at a given level of detail in material characterization — from bulk chemical elements, elements by size, bulk minerals and minerals by size to the liberation level that introduces particles as the basic entity for simulation (Paper 1). In current state-of-the-art process simulation, few unit models are defined at the particle level, because such models are complex to design at a more fundamental level of detail, liberation data are hard to measure accurately, and large computational power is required to process the many particles in a flow sheet. Computational cost is a consequence of the intrinsic complexity of the unit models. Mineral liberation data depend on the quality of the sampling and the polishing, the settings and stability of the instrument, and the processing of the data. This study introduces new tools to simulate a population of mineral particles based on intrinsic characteristics of the feed ore. Features are extracted at the meso-textural level (drill cores) (Paper 2) and related to their micro-textures before and after breakage (Paper 3). The result is a population of mineral particles stored in a file format that can be imported into process simulation software. The results show that the approach is relevant and can be generalized towards new characterization methods. The theory of image representation, analysis and ore texture simulation is briefly introduced and linked to 1-point, 2-point, and multiple-point methods from spatial statistics. A breakage mechanism is presented as a cellular automaton.
Experimental data and examples are taken from a copper-gold deposit with a chalcopyrite flotation circuit and an iron ore deposit with a magnetic separation process. This study covers part of a larger research program, PREP (Primary resource efficiency by enhanced prediction). / PREP
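The breakage idea can be illustrated loosely in code. The thesis presents the mechanism as a cellular automaton; the rule below is a much simpler stand-in that assumes cracks open exactly at phase boundaries, so every fragment comes out fully liberated — an idealized upper bound, since real breakage is partly transgranular:

```python
def boundary_break(cells):
    """Toy intergranular breakage on a 1D strip of grain cells: a crack
    opens wherever two adjacent cells hold different minerals, so each
    fragment is a single-phase run of cells."""
    fragments, run = [], [cells[0]]
    for m in cells[1:]:
        if m == run[-1]:
            run.append(m)          # same phase: the fragment keeps growing
        else:
            fragments.append(run)  # phase change: crack, close the fragment
            run = [m]
    fragments.append(run)
    return fragments

# 'cpy' = chalcopyrite, 'mgt' = magnetite -- labels are illustrative only
parts = boundary_break(['cpy', 'cpy', 'mgt', 'mgt', 'mgt', 'cpy'])
```

A liberation-aware simulator would then assign each fragment a size and composition and feed the resulting particle population to the downstream unit models.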
187

Filtrage, segmentation et suivi d'images échographiques : applications cliniques / Filtering, segmentation and tracking of ultrasound images: clinical applications

Dahdouh, Sonia 23 September 2011 (has links)
La réalisation des néphrolithotomies percutanées est essentiellement conditionnée par la qualité de la ponction calicielle préalable. En effet, en cas d'échec de celle-ci, l'intervention ne peut avoir lieu. Réalisée le plus souvent sous échographie, sa qualité est fortement conditionnée par celle du retour échographique, considéré comme essentiel par la deuxième consultation internationale sur la lithiase pour limiter les saignements consécutifs à l'intervention. L'imagerie échographique est largement plébiscitée en raison de son faible coût, de l'innocuité de l'examen, liée à son caractère non invasif, de sa portabilité ainsi que de son excellente résolution temporelle ; elle possède toutefois une très faible résolution spatiale et souffre de nombreux artefacts tels que la mauvaise résolution des images, un fort bruit apparent et une forte dépendance à l'opérateur. L'objectif de cette thèse est de concevoir une méthode de filtrage des données échographiques ainsi qu'une méthode de segmentation et de suivi du rein sur des séquences ultrasonores, dans le but d'améliorer les conditions d'exécution d'interventions chirurgicales telles que les néphrolithotomies percutanées. Le filtrage des données, soumis et publié dans SPIE 2010, est réalisé en exploitant le mode de formation des images : le signal radiofréquence est filtré directement, avant même la formation de l'image 2D finale. Pour ce faire, nous utilisons une méthode basée sur les ondelettes, en seuillant directement les coefficients d'ondelettes aux différentes échelles à partir d'un algorithme de type split and merge appliqué avant reconstruction de l'image 2D. La méthode de suivi développée (une étude préliminaire a été publiée dans SPIE 2009) exploite un premier contour fourni par le praticien pour déterminer, en utilisant des informations purement locales, la position du contour sur l'image suivante de la séquence.
L'image est transformée pour ne plus être qu'un ensemble de vignettes caractérisées par leurs critères de texture, et une première segmentation basée région est effectuée sur cette image des vignettes. Cette première étape effectuée, le contour de l'image précédente de la séquence est utilisé comme initialisation afin de recalculer le contour de l'image courante sur l'image des vignettes segmentée. L'utilisation d'informations locales nous a permis de développer une méthode facilement parallélisable, ce qui permettra de travailler dans une optique temps réel. La validation de la méthode de filtrage a été réalisée sur des signaux radiofréquence simulés. La méthode a été comparée à différents algorithmes de l'état de l'art en termes de rapport signal sur bruit et de calcul de l'USDSAI. Les résultats ont montré la qualité de la méthode proposée comparativement aux autres. La méthode de segmentation, quant à elle, a été validée sans filtrage préalable, sur des séquences 2D réelles, pour un temps d'exécution sans optimisation inférieur à la minute pour des images 512×512. / The achievement of percutaneous nephrolithotomies is mainly conditioned by the quality of the initial puncture. Indeed, if it is not well performed, the intervention cannot be fulfilled. In order to make it more accurate, this puncture is often realized under ultrasound control. Thus the quality of the ultrasound feedback is very critical, and when clear enough it greatly helps limiting bleeding. Thanks to its low cost, its non-invasive nature and its excellent temporal resolution, ultrasound imaging is considered very appropriate for this purpose.
However, this solution is not perfect: it is characterized by a low spatial resolution, and the results present artifacts due to poor image resolution (compared to images provided by some other medical devices) and speckle noise. Finally, this technique is greatly operator dependent. The aims of the work presented here are, first, to design a filtering method for ultrasound data, and then to develop a segmentation and tracking algorithm for kidney ultrasound sequences, in order to improve the executing conditions of surgical interventions such as percutaneous nephrolithotomies. The data filtering method was submitted and published in SPIE 2010. It uses the way ultrasound images are formed to filter them: the radiofrequency signal is filtered directly, before the two-dimensional reconstruction. To do so, a wavelet-based method has been developed that thresholds wavelet coefficients directly at different scales, based on a “split and merge”-like algorithm. The proposed algorithm was validated on simulated signals and its results compared with those obtained with different state-of-the-art algorithms. Experiments show that this new approach performs better. The segmentation and tracking method (of which a prospective study was published in SPIE 2009) uses a first contour given by a human expert and then determines, using only local information, the position of the next contour on the following image of the sequence. The tracking technique was validated on real data with no previous filtering and successfully compared with state-of-the-art methods.
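The RF-domain wavelet thresholding described above can be sketched with a single-level Haar transform. The thesis's actual method works at several scales with a split-and-merge threshold selection on real RF data, so this is only the core shrinkage idea, with an arbitrary threshold value:

```python
def haar_denoise(signal, thresh):
    """One level of Haar wavelet shrinkage: split the signal into pairwise
    averages and details, soft-threshold the details (where most of the
    noise lives), then rebuild the signal. Purely a sketch of the idea."""
    assert len(signal) % 2 == 0
    half = len(signal) // 2
    avg = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    det = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    # Soft thresholding: shrink each detail towards zero by `thresh`.
    soft = [max(abs(d) - thresh, 0.0) * (1 if d > 0 else -1) for d in det]
    out = []
    for a, d in zip(avg, soft):
        out += [a + d, a - d]
    return out

# Small pairwise fluctuations (treated as noise) are flattened out:
smoothed = haar_denoise([1.0, 1.2, 5.0, 5.4], thresh=0.2)
```

Filtering in the RF domain, before scan conversion, is what distinguishes the thesis's approach from denoising the final B-mode image.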
188

Face analysis using polynomials / Analyse faciale basée polynômes

Bordei, Cristina 03 March 2016 (has links)
Considéré comme l'un des sujets de recherche les plus actifs et visibles de la vision par ordinateur, de la reconnaissance des formes et de la biométrie, l'analyse faciale a fait l'objet d'études approfondies au cours des deux dernières décennies. Le travail de cette thèse a pour objectif de proposer de nouvelles techniques d'utilisation de représentations de texture basées polynômes pour l'analyse faciale. La première partie de cette thèse est dédiée à l'intégration de bases de polynômes dans les modèles actifs d'apparence. Nous proposons premièrement une manière d'utiliser les coefficients polynomiaux dans la modélisation de l'apparence. Ensuite, afin de réduire la complexité du modèle, nous proposons de choisir et d'utiliser les meilleurs coefficients en tant que représentation de texture. Enfin, nous montrons comment ces derniers peuvent être utilisés dans un algorithme de descente de gradient. La deuxième partie de la thèse porte sur l'utilisation des bases polynomiales pour la détection des points/zones d'intérêt et comme descripteur pour la reconnaissance des expressions faciales. Inspirés par des techniques de détection des singularités dans des champs de vecteurs, nous commençons par présenter un algorithme utilisé pour l'extraction des points d'intérêt dans une image. Puis nous montrons comment les bases polynomiales peuvent être utilisées pour extraire des informations sur les expressions faciales. Puisque les coefficients polynomiaux fournissent une analyse précise multi-échelles et multi-orientations et traitent efficacement le problème de redondance, ils sont utilisés en tant que descripteurs dans un algorithme de classification d'expressions faciales. / As one of the most active and visible research topics in computer vision, pattern recognition and biometrics, facial analysis has been extensively studied in the past two decades.
The work in this thesis presents novel techniques that use polynomial-basis texture representations for facial analysis. The first part of this thesis is dedicated to the integration of polynomial bases in Active Appearance Models, a set of statistical tools that has proved very efficient in modeling faces. First, we propose a way to use the coefficients obtained after polynomial projections in the appearance modeling. Then, in order to reduce model complexity, we propose to select and use the strongest polynomial coefficients as a texture representation. Finally, we show how, in addition to providing the texture representation, polynomial coefficients can be used in a gradient descent algorithm, since polynomial decomposition is equivalent to a filter bank. The second part of the thesis concerns the use of polynomial bases for the detection of interest points and areas, and as a descriptor for facial expression recognition. We start by presenting an algorithm for accurate image keypoint localization inspired by techniques for detecting singularities in a vector field. Our approach consists of two major steps: the calculation of an image vector field of normals and the keypoint selection within that field, both presented in a multi-scale, multi-resolution scheme. Finally, we show how polynomial bases can be used to extract information about facial expressions: polynomial coefficients are used as descriptors in a facial expression classification algorithm.
189

Descritores locais de textura para classificação de imagens coloridas sob variação de iluminação / Local texture descriptors for color texture classification under varying illumination

Tamiris Trevisan Negri 15 December 2017 (has links)
A classificação de texturas coloridas sob diferentes condições de iluminação é um desafio na área de visão computacional, e depende da eficiência dos descritores de textura em capturar características que sejam discriminantes independentemente das propriedades da fonte de luz incidente sobre o objeto. Visando melhorar o processo de classificação de texturas coloridas iluminadas com diferentes fontes de luz, este trabalho propõe três novos descritores, nomeados Opponent Color Local Mapped Pattern (OCLMP), que combina o descritor de texturas por padrões locais mapeados (Local Mapped Pattern - LMP) com a teoria de cores oponentes; Color Intensity Local Mapped Pattern (CILMP), que extrai as informações de cor e textura de maneira integrada, levando em consideração a textura da cor, combinando estas informações com características da luminância da textura em uma análise multiresolução; e Extended Color Local Mapped Pattern (ECLMP), que utiliza dois operadores para extrair informações de cor e textura de forma integrada (textura da cor) combinadas com informações apenas de textura (sem cor) de uma imagem. Todos esses novos descritores propostos são paramétricos e, sendo o ajuste ótimo de seus parâmetros não trivial, o processo exige um tempo excessivo de computação. Portanto, foi proposto nesta tese a utilização de algoritmos genéticos para o ajuste automático dos parâmetros. A avaliação dos descritores propostos foi realizada em duas bases de dados de texturas coloridas com variação de iluminação: RawFooT (Raw Food Texture Database) e KTH-TIPS-2b (Textures under varying Illumination, Pose and Scale Database), utilizando-se um classificador. Os resultados experimentais mostraram que os descritores propostos são mais robustos à variação de iluminação do que outros descritores de textura comumente utilizados na literatura.
Os descritores propostos apresentaram um desempenho superior aos descritores comparados em 15% na base de dados RawFooT e 4% na base de dados KTH-TIPS-2b. / Color texture classification under varying illumination remains a challenge in the computer vision field, and it greatly relies on the efficiency at which the texture descriptors capture discriminant features, independent of the illumination condition. The aim of this thesis is to improve the classification of color textures acquired with varying illumination sources. We propose three new color texture descriptors, namely: the Opponent Color Local Mapped Pattern (OCLMP), which combines a local methodology (LMP) with the opponent colors theory; the Color Intensity Local Mapped Pattern (CILMP), which extracts color and texture information jointly, in a multi-resolution fashion; and the Extended Color Local Mapped Pattern (ECLMP), which applies two operators to extract color and texture information jointly as well. As the proposed methods are based on the LMP algorithm, they are parametric functions, and finding the optimal set of parameters for a descriptor can be a cumbersome task. Therefore, this work proposes the use of genetic algorithms to automatically adjust the parameters. The methods were assessed using two data sets of textures acquired under varying illumination sources: the RawFooT (Raw Food Texture Database) and the KTH-TIPS-2b (Textures under varying Illumination, Pose and Scale Database). The experimental results show that the proposed descriptors are more robust to variations of the illumination source than other methods found in the literature. The improvement in accuracy was higher than 15% on the RawFooT data set, and higher than 4% on the KTH-TIPS-2b data set.
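The genetic-algorithm tuning mentioned above can be sketched generically. This minimal real-coded GA (all operator choices here — tournament selection, averaging crossover, Gaussian mutation — are assumptions, not the thesis's configuration) maximizes an arbitrary fitness function, which stands in for classification accuracy as a function of the descriptor parameters:

```python
import random

def tune(fitness, bounds, pop_size=20, gens=30, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection of size 2,
    averaging crossover, Gaussian mutation, children clamped to the bounds.
    Returns the best parameter vector of the final population."""
    rnd = random.Random(seed)
    pop = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a = max(rnd.sample(pop, 2), key=fitness)  # tournament winner 1
            b = max(rnd.sample(pop, 2), key=fitness)  # tournament winner 2
            child = [(x + y) / 2 + rnd.gauss(0, 0.1) for x, y in zip(a, b)]
            nxt.append([min(max(c, lo), hi) for c, (lo, hi) in zip(child, bounds)])
        pop = nxt
    return max(pop, key=fitness)

# Toy "accuracy" surface peaking at hypothetical parameters (1, -2):
best = tune(lambda p: -(p[0] - 1) ** 2 - (p[1] + 2) ** 2, [(-5, 5), (-5, 5)])
```

In the thesis's setting, evaluating `fitness` would mean classifying a validation set with the descriptor configured by the candidate parameters, which is exactly why an automated search pays off over manual tuning.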
190

A procedural model for snake skin texture generation

Pinheiro, Jefferson Magalhães January 2017 (has links)
Existem milhares de espécies de serpentes no mundo, muitas com padrões distintos e intricados. Esta diversidade se torna um problema para usuários que precisam criar texturas de pele de serpente para aplicar em modelos 3D, pois a dificuldade em criar estes padrões complexos é considerável. Nós primeiramente propomos uma categorização de padrões de pele de serpentes levando em conta suas características visuais. Então apresentamos um modelo procedural capaz de sintetizar uma vasta gama de texturas de padrões de pele de serpentes. O modelo usa processamento de imagem simples (tal como sintetizar bolinhas e listras) bem como autômatos celulares e geradores de ruído para criar texturas realistas para usar em renderizadores modernos. Nossos resultados mostram boa similaridade visual com pele de serpentes reais. As texturas resultantes podem ser usadas não apenas em computação gráfica, mas também em educação sobre serpentes e suas características visuais. Nós também realizamos testes com usuários para avaliar a usabilidade de nossa ferramenta. O escore da Escala de Usabilidade do Sistema foi de 85,8, sugerindo uma ferramenta de texturização altamente efetiva. / There are thousands of snake species in the world, many with intricate and distinct skin patterns. This diversity becomes a problem for users who need to create snake skin textures to apply on 3D models, as the difficulty of creating such complex patterns is considerable. We first propose a categorization of snake skin patterns considering their visual characteristics. We then present a procedural model capable of synthesizing a wide range of snake skin texture patterns. The model uses simple image processing (such as synthesizing spots and stripes) as well as cellular automata and noise generators to create realistic textures for use in a modern renderer. Our results show good visual similarity with real skin found in snakes.
The resulting textures can be used not only for computer graphics texturing, but also in education about snakes and their visual characteristics. We have also performed a user study to assess the usability of our tool. The score from the System Usability Scale was 85.8, suggesting a highly effective texturing tool.
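A stripe primitive of the kind the model layers with spots, noise and cellular automata might be sketched as below. The function name and the thresholding rule are illustrative assumptions, not the authors' implementation:

```python
def banded_pattern(width, height, period=4):
    """Transverse bands as a thresholded periodic function of the row index,
    rendered as '#' (pigmented) and '.' (ground colour) cells. A toy version
    of a stripe primitive; a full procedural snake-skin model would compose
    several such layers (spots, noise, cellular-automaton growth)."""
    rows = []
    for y in range(height):
        band = (y % period) < period // 2  # first half of each period is pigmented
        rows.append(('#' if band else '.') * width)
    return rows

pattern = banded_pattern(6, 8, period=4)  # 8 rows alternating in pairs
```

Swapping the row-index threshold for a noise function, or growing the pigmented cells with a cellular-automaton rule, is how such a primitive generalizes towards the irregular banding seen on real snakes.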
