  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Indexação e recuperação de imagens por cor e estrutura / Image indexing and retrieval by color and shape

Costa, Yandre Maldonado e Gomes da, January 2002
This work describes a set of image retrieval techniques based on color and shape similarity. The approach presented here preserves the spatial relationships of the content extracted from the image, and its precision can be adjusted according to the needs of the query. Another important feature is the possibility of choosing among the RGB, L*u*v*, and L*a*b* color spaces for computing color distances during retrieval; with these three options, the influence of each color space on retrieval based on chromatic content is assessed. The set of techniques described here led to the development of the RICE system, a computational environment for querying an image repository by color and shape similarity. Finally, recall × precision curves were used to verify the performance of the adjustable retrieval parameters implemented in the RICE system under its various configurations.
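The recall × precision curves used to evaluate RICE can be illustrated with a minimal, generic sketch (not the RICE implementation; the image identifiers and toy result below are hypothetical): at each rank of a ranked retrieval result, recall is the fraction of relevant images found so far and precision is the fraction of retrieved images that are relevant.

```python
def precision_recall(retrieved, relevant):
    """Compute one (recall, precision) point per rank of a ranked result.

    retrieved: list of image ids in ranked order
    relevant:  set of ids judged relevant to the query
    """
    points = []
    hits = 0
    for rank, img in enumerate(retrieved, start=1):
        if img in relevant:
            hits += 1
        points.append((hits / len(relevant), hits / rank))
    return points

# Toy ranked result: images 1 and 3 are the relevant ones.
curve = precision_recall([1, 2, 3, 4], {1, 3})
print(curve[0])  # (0.5, 1.0): after rank 1, half the relevant set found
print(curve[2])  # (1.0, 0.666...): full recall at rank 3
```

Plotting such points for each query, averaged over a query set, yields the curves used to compare retrieval configurations.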
4

Edge-Suppressed Color Image Indexing and Retrieval Using Angle-Distance Measurement in the Scaled-Space of Principal Components

Bobik, Sergei, January 2000
No description available.
5

Search-based automatic image annotation using geotagged community photos / Recherche basée sur l’annotation automatique des images à l'aide de photos collaboratives géolocalisées

Mousselly Sergieh, Hatem, 26 September 2014
In the Web 2.0 era, platforms for sharing and collaboratively annotating images with keywords, called tags, have become very popular. Tags are a powerful means of organizing and retrieving photos, but manual tagging is laborious and time-consuming. Recently, the sheer number of user-tagged photos available on the Web has encouraged researchers to explore new techniques for automatic image annotation. The idea is to annotate an unlabeled image by propagating the labels of community photos that are visually similar to it. An ever-increasing number of community photos is also associated with location information, i.e., geotagged. In this thesis, we exploit this location context and propose an approach for automatically annotating geotagged photos. Our objective is to address the main limitations of state-of-the-art approaches in terms of the quality of the produced tags and the speed of the complete annotation process. To achieve these goals, we first deal with the problem of collecting images and their associated metadata from online repositories, introducing a crawling strategy that takes advantage of location information and of the social relationships among the contributors of the photos. To improve the quality of the collected user tags, we present a method for resolving their ambiguity based on tag-relatedness information. In this respect, we propose an approach for representing tags as probability distributions based on the Laplacian Score feature-selection algorithm, and a new metric for calculating the distance between tag probability distributions that extends the Jensen-Shannon divergence to account for statistical fluctuations. To efficiently identify visual neighbors, the thesis introduces two extensions to the state-of-the-art image-matching algorithm Speeded-Up Robust Features (SURF): a classification-based solution for reducing the number of compared SURF descriptors, which speeds up matching, and an efficient method for iterative image matching, which improves SURF's accuracy. Furthermore, we propose a statistical model for ranking the mined annotations according to their relevance to the target image, combining multi-modal information in a statistical framework based on Bayes' rule. Finally, the effectiveness of each of these contributions, as well as of the complete automatic annotation process, is evaluated experimentally.
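The distance metric above extends the Jensen-Shannon divergence; the thesis's extension for statistical fluctuations is not reproduced here, but the standard divergence it builds on can be sketched as follows (a generic implementation, not the author's code):

```python
import math

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two discrete probability
    distributions over the same support, in nats.

    JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m = (p + q) / 2.
    It is symmetric and bounded by log(2), unlike plain KL divergence.
    """
    def kl(a, b):
        return sum(x * math.log(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical tag distributions diverge by 0; disjoint ones by log(2).
print(jensen_shannon([0.5, 0.5], [0.5, 0.5]))  # 0.0
print(jensen_shannon([1.0, 0.0], [0.0, 1.0]))  # log(2) ≈ 0.693
```

Because it is symmetric and bounded, JSD is a natural base for comparing tag probability distributions such as those produced by the Laplacian Score representation.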
6

Vers un système interactif de structuration des index pour une recherche par le contenu dans des grandes bases d'images / Towards an interactive index structuring system for content-based image retrieval in large image databases

Lai, Hien Phuong, 2 October 2013
This thesis deals with content-based image retrieval (CBIR) on large image databases. Traditional CBIR systems generally rely on three phases: feature extraction, feature-space structuring, and retrieval. We are particularly interested in the structuring phase, which aims at organizing the visual feature descriptors extracted from all images into an efficient data structure in order to facilitate, accelerate, and improve further retrieval. Instead of traditional structuring methods, we study clustering methods, which organize image descriptors into groups of similar objects (clusters) based on their similarity, without any constraint on cluster size. In order to reduce the "semantic gap" between the high-level semantic concepts expressed by the user and the low-level features automatically extracted from the images, we propose to involve the user in the clustering phase so that he or she can interact with the system to improve the clustering results, and thus the results of further retrieval. To this end, we propose a new interactive semi-supervised clustering model based on pairwise constraints (must-link and cannot-link) between groups of images. First, images are organized into clusters by the unsupervised clustering method BIRCH (Zhang et al., 1996). The user is then brought into the interaction loop to guide the clustering process. In each interactive iteration, the user visualizes the clustering results and provides feedback to the system via our interactive interface: with a few clicks, the user can mark positive and/or negative images for each cluster, and can drag and drop images between clusters to change their cluster assignment. Pairwise constraints are then deduced from this feedback together with neighborhood information. Taking these constraints into account, the system reorganizes the data set using the semi-supervised clustering method proposed in this thesis. The interaction loop can be repeated until the clustering result satisfies the user. Different strategies for deducing the pairwise constraints are proposed and analyzed both theoretically and experimentally. To avoid the experimental results depending subjectively on a human user, a software agent simulating the user's feedback behavior is used in our experiments. Compared with the most popular semi-supervised clustering method, HMRF-KMeans (Basu et al., 2004), our method gives better results.
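The pipeline described above — an initial unsupervised BIRCH pass, then must-link/cannot-link constraints derived from user feedback — can be sketched minimally as follows. This is a hypothetical illustration assuming scikit-learn's `Birch` implementation; the toy descriptors, the constraints, and the violation count are illustrative, not the thesis's actual semi-supervised algorithm:

```python
import numpy as np
from sklearn.cluster import Birch

# Toy 2-D "image descriptors": two well-separated groups.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])

# Step 1: initial unsupervised grouping with BIRCH.
labels = Birch(n_clusters=2).fit_predict(X)

# Step 2: pairwise constraints deduced from (hypothetical) user feedback.
must_link = [(0, 1)]     # these two images should share a cluster
cannot_link = [(0, 3)]   # these two should be separated

def violations(labels, must_link, cannot_link):
    """Count how many constraints the current clustering violates —
    the quantity a semi-supervised refinement step would try to reduce."""
    v = sum(int(labels[i] != labels[j]) for i, j in must_link)
    v += sum(int(labels[i] == labels[j]) for i, j in cannot_link)
    return v

print(violations(labels, must_link, cannot_link))
```

In the actual model, the reorganization step would re-cluster under these constraints rather than merely count violations.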
7

Image manipulation and user-supplied index terms.

Schultz, Leah, 05 1900
This study investigates the relationships between participants' use of a zoom tool, the terms they supply to describe an image, and the type of image being viewed. Participants were assigned to two groups, one with access to the tool and one without, and were asked to supply terms describing forty images divided into four categories: landscape, portrait, news, and cityscape. The supplied terms were categorized according to models proposed in earlier image studies. The findings suggest that access to the tool made no significant difference in the number of terms supplied, although participants varied widely in how they used the tool. The study shows differences in the level of meaning of the supplied terms in some of the models. The type of image being viewed was related to the number of zooms, and relationships also exist between the image type and both the number of terms supplied and their level of meaning in the various models from previous studies. These results provide further insight into how people think about images and how manipulating those images may affect the terms they assign. Including such tools in search and retrieval scenarios may affect the outcome of the process, and the more collection managers know about how people interact with images, the better they will be able to provide access to the growing amount of pictorial information.
8

Key Views for Visualizing Large Spaces

Cai, Hongyuan, 08 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Images are a dominant medium, alongside video, 3D models, and other media, for visualizing environments and creating virtual access on the Internet. Until now, however, the choice of image-capture locations has been subjective, relying on the aesthetic sense of photographers. In this paper, we not only visualize areas with images but also propose a general framework to determine where the most distinct viewpoints should be located. Starting from elevation data, we represent spatial and content information in ground-based images such that (1) a given number of images achieves maximum coverage of informative scenes, and (2) a set of key views with a degree of continuity can be selected to represent the most distinct views. Based on scene visibility, continuity, and data redundancy, we evaluate viewpoints numerically with an object-emitting illumination model. Our key-view exploration may ultimately reduce the visual data to transmit; facilitate image acquisition, indexing, and interaction; and enhance the perception of spaces. Real sample images are captured at the planned positions to form a visual network indexing the area.
9

Indexation et interrogation de pages web décomposées en blocs visuels / Indexing and querying web pages decomposed into visual blocks

Faessel, Nicolas, 14 June 2011
This thesis is about indexing and querying Web pages. We propose a new model, BlockWeb, based on the decomposition of Web pages into a hierarchy of visual blocks. The model takes into account the visual importance of each block as well as the permeability of each block to the content of its neighboring blocks on the page. Splitting a page into blocks has several advantages for indexing and querying; in particular, it allows querying at a finer granularity than the whole page: the blocks most similar to the query can be returned instead of the complete page. A page is modeled as a directed acyclic graph, the IP graph, in which each node is associated with a block and labeled with that block's coefficient of importance, and each arc is labeled with the coefficient of permeability of the target node's content to the source node's content. To build this graph from the block-tree representation of a page, we propose a new language, XIML (XML Indexing Management Language), a rule-based language in the style of XSLT. The model has been assessed on two distinct tasks: finding the best entry point in a corpus of electronic newspaper articles, and indexing and retrieving images in a corpus drawn from the ImagEval 2006 campaign. We present the results of these experiments.
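The importance- and permeability-labeled graph described above can be sketched with a toy data structure. This is a hypothetical illustration of the general idea — a block's index weight for a term combines its own occurrences, scaled by importance, with occurrences leaking in from permeable neighbors — and not the actual IP-graph or XIML semantics:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """One visual block of a page (illustrative sketch of the model)."""
    text: str
    importance: float  # node label: visual importance coefficient
    # Incoming arcs: (source block, permeability of this block to it).
    sources: list = field(default_factory=list)

def term_weight(block, term):
    """Score a term for a block: own occurrences scaled by importance,
    plus occurrences in blocks whose content permeates into this one."""
    score = block.importance * block.text.lower().split().count(term)
    for src, permeability in block.sources:
        score += permeability * src.text.lower().split().count(term)
    return score

# A caption block's content partially permeates into the article block,
# so the article can match queries about the image it contains.
caption = Block("portrait photo of a cat", importance=0.4)
article = Block("the article discusses cats and dogs", importance=1.0,
                sources=[(caption, 0.5)])
print(term_weight(article, "cat"))  # 1.0 * 0 + 0.5 * 1 = 0.5
```

This is what enables finer-than-page retrieval: each block carries its own term weights, so the best-matching block can be returned as an entry point.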
10

Étude sur l'influence du vocabulaire utilisé pour l'indexation des images en contexte de repérage multilingue / Study of the influence of the vocabulary used for image indexing in a multilingual retrieval context

Ménard, Elaine, 27 November 2008
In recent years, the Internet has become an indispensable medium for the dissemination of multilingual resources. However, language differences are often a major obstacle to the exchange of scientific, cultural, educational, and commercial documents. Besides this linguistic diversity, many databases and collections now contain documents in various textual and multimedia formats, which further complicates the retrieval process. Images are generally considered language-independent resources. Nevertheless, indexing with either a controlled or an uncontrolled (free) vocabulary gives the image a linguistic status similar to that of any textual document, which can affect retrieval. The goal of our research is to identify the differences between the characteristics of two indexing approaches for ordinary images of everyday-life objects, one using a controlled and one using an uncontrolled vocabulary, and between the results obtained at retrieval time. This study supposes that the two approaches share common characteristics but also exhibit differences that can influence image retrieval, and it makes it possible to determine whether one approach surpasses the other in terms of effectiveness, efficiency, and the satisfaction of the image searcher in a multilingual retrieval context. 
Two specific objectives are defined: to identify the characteristics of each indexing approach that can affect retrieval in a multilingual context, and to expose the differences between the approaches in terms of the effectiveness, efficiency, and satisfaction of the image searcher when retrieving ordinary images of everyday-life objects indexed with each approach. Three methods of data collection are used: an analysis of the terms used for image indexing; a retrieval simulation, conducted with sixty respondents, over a set of images indexed according to each approach under study; and a questionnaire administered to the participants during and after the simulation. Four measures are defined: the effectiveness of image retrieval, measured by the success rate in terms of the number of images retrieved; time efficiency, measured by the time, in seconds, per retrieved image; human efficiency, measured by the human effort, in number of queries formulated per retrieved image; and the satisfaction of the image searcher, measured by self-evaluation after each retrieval task. 
This research shows that, for the indexing of ordinary images of everyday-life objects, the two approaches differ fundamentally from a terminological, perceptual, and structural perspective. Moreover, the analysis of their characteristics reveals that if the indexing language is changed, the characteristics vary little within a given indexing approach. Finally, the research underlines that the two approaches yield different retrieval performance in terms of effectiveness, efficiency, and searcher satisfaction, depending on the approach and the language used for indexing.
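The four measures defined in the study are simple ratios and can be sketched generically (a hypothetical illustration with made-up session values, not the study's data):

```python
def retrieval_measures(images_found, images_sought, seconds, queries,
                       satisfaction):
    """Compute the four measures defined above for one retrieval session:
    effectiveness   = success rate (images found / images sought)
    time efficiency = seconds spent per retrieved image
    human efficiency = queries formulated per retrieved image
    satisfaction    = the searcher's self-reported rating, passed through
    """
    return {
        "effectiveness": images_found / images_sought,
        "time_efficiency": seconds / images_found,
        "human_efficiency": queries / images_found,
        "satisfaction": satisfaction,
    }

# Hypothetical session: 8 of 10 target images found in 240 s with 12 queries.
m = retrieval_measures(images_found=8, images_sought=10,
                       seconds=240, queries=12, satisfaction=4.0)
print(m["effectiveness"])     # 0.8
print(m["time_efficiency"])   # 30.0 seconds per retrieved image
print(m["human_efficiency"])  # 1.5 queries per retrieved image
```

Comparing these values across the controlled- and free-vocabulary conditions, and across indexing languages, is what supports the study's conclusions.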