1 |
Handling imperfections for multimodal image annotation / Gestion des imperfections pour l’annotation multimodale d’images. Znaidia, Amel, 11 February 2014.
This thesis deals with multimodal image annotation in the context of social media. We seek to take advantage of textual (tags) and visual information in order to improve image annotation performance. However, these tags generally come from personal indexing; they are often noisy, overly personalized, and only a few of them relate to the semantic visual content of the image. In addition, when combining the prediction scores of different classifiers learned on the different modalities, multimodal image annotation faces their imperfections: uncertainty, imprecision and incompleteness. We therefore consider that multimodal image annotation is subject to imperfections at two levels: the representation level and the decision level. Inspired by information fusion theory, this thesis focuses on defining, identifying and handling these imperfections in order to improve image annotation.
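As an illustration of decision-level combination, the sketch below shows a naive weighted-sum late fusion of per-concept scores from a visual classifier and a tag-based classifier, with a simple fallback when one modality is missing. The function name, weights and fusion rule are assumptions for illustration only, not the fusion operators studied in the thesis.

    import numpy as np

    def late_fusion(visual_scores, tag_scores, w_visual=0.5, w_tag=0.5):
        # Weighted-sum late fusion of per-concept prediction scores in [0, 1].
        # Either modality may be None, e.g. an image without usable tags,
        # which is one elementary form of incompleteness at the decision level.
        if visual_scores is None and tag_scores is None:
            raise ValueError("at least one modality is required")
        if tag_scores is None:
            return np.asarray(visual_scores, dtype=float)
        if visual_scores is None:
            return np.asarray(tag_scores, dtype=float)
        v = np.asarray(visual_scores, dtype=float)
        t = np.asarray(tag_scores, dtype=float)
        return (w_visual * v + w_tag * t) / (w_visual + w_tag)

    # Fused scores for three hypothetical concepts, then a decision by thresholding.
    fused = late_fusion([0.9, 0.2, 0.6], [0.7, 0.4, 0.1])
    predicted_labels = fused >= 0.5   # array([ True, False, False])

A missing modality handled by a hard fallback, as above, only hints at where imperfections enter; the thesis addresses them with more principled tools from information fusion theory.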
|
2 |
Pokročilé metody segmentace cévního řečiště na fotografiích sítnice / Advanced retinal vessel segmentation methods in colour fundus images. Svoboda, Ondřej, January 2013.
Segmentation of the vasculature tree is an important step in retinal image processing. Many methods for automatic blood vessel segmentation exist, based on matched filters, pattern recognition or image classification. Automatic retinal image processing greatly simplifies and accelerates the diagnosis of retinal images. A key step of automatic segmentation algorithms is thresholding, and this work primarily deals with thresholding of retinal images. We discuss several works that use local and global image thresholding, as well as supervised image classification, to segment the blood vessel tree from retinal images. Subsequently, supervised classification is applied to the sets of results produced by two different methods, and the effectiveness of the resulting vessel segmentation is discussed. Using image classification instead of global thresholding changed the statistics of the first method on the healthy part of the HRF dataset: sensitivity and accuracy decreased to 62.32% and 94.99%, respectively, while specificity increased to 95.75%. The second method achieved a sensitivity of 69.24%, a specificity of 98.86% and an accuracy of 95.29%. Combining the results of both methods raised the sensitivity to 72.48%, the specificity to 98.59% and the accuracy to 95.75%. This confirmed the assumption that the classifier would achieve better results, and it was shown that extending the feature vector by combining the results of both methods increased sensitivity, specificity and accuracy.
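For reference, a minimal sketch of how the reported sensitivity, specificity and accuracy can be computed by comparing a binary vessel mask against a manually annotated ground-truth mask. This is illustrative only and is not the evaluation code used in the thesis; the function name and the toy masks are assumptions.

    import numpy as np

    def segmentation_metrics(predicted, ground_truth):
        # Pixel-wise confusion counts between a binary vessel mask and ground truth.
        p = np.asarray(predicted, dtype=bool)
        g = np.asarray(ground_truth, dtype=bool)
        tp = np.count_nonzero(p & g)      # vessel pixels correctly detected
        tn = np.count_nonzero(~p & ~g)    # background pixels correctly rejected
        fp = np.count_nonzero(p & ~g)
        fn = np.count_nonzero(~p & g)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        accuracy = (tp + tn) / p.size
        return sensitivity, specificity, accuracy

    # Toy 2x3 masks; a real evaluation would compare full fundus images with their annotations.
    pred = np.array([[1, 0, 1], [0, 1, 0]], dtype=bool)
    gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
    print(segmentation_metrics(pred, gt))  # (0.666..., 0.666..., 0.666...)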
|