  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A tree grammar-based visual password scheme

Okundaye, Benjamin January 2016 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. Johannesburg, August 31, 2015. / Visual password schemes can be considered an alternative to alphanumeric passwords. Studies have shown that alphanumeric passwords can, among other weaknesses, be eavesdropped, shoulder surfed, or guessed, and are susceptible to automated brute-force attacks. Visual password schemes use images, in place of alphanumeric characters, for authentication. For example, users of visual password schemes either select images (Cognometric), select points on an image (Locimetric), or attempt to redraw their password image (Drawmetric) in order to gain authentication. Visual passwords are limited by the so-called password space, i.e., the size of the alphabet from which users can draw to create a password, and by the susceptibility of pass-images to being stolen by someone looking over the user's shoulder, referred to in the literature as shoulder surfing. The use of automatically generated, highly similar abstract images defeats shoulder surfing and provides an almost unlimited pool of images for use in a visual password scheme, thus also overcoming the limited password space. This research investigated visual password schemes. In particular, it examined the possibility of using tree picture grammars to generate abstract graphics for use in a visual password scheme. We also examined how humans judge the similarity of abstract computer-generated images, referred to in the literature as perceptual similarity. We drew on the psychological notion of similarity and matched it as closely as possible with mathematical measures of image similarity, using Content-Based Image Retrieval (CBIR) and tree edit distance measures.
To this end, an online similarity survey was conducted in which 661 respondents ordered answer images by similarity to question images, covering 50 images in total. The survey results were also compared with eight state-of-the-art computer-based similarity measures to determine how closely each models perceptual similarity. Since all the images were generated with tree grammars, the most popular measure of tree similarity, the tree edit distance, was also used to compare the images. Eight different tree edit distance measures were used, covering the broad range of tree edit distance and tree edit distance approximation methods. All the computer-based similarity methods were then correlated with the online survey results to determine which most closely model perceptual similarity, and the results were analysed in the light of modern psychological theories of perceptual similarity. This work represents a novel approach to Passfaces-style visual password schemes, using dynamically generated pass-images and their highly similar distractors instead of static pictures stored in an online database. The survey results were then accurately modelled using the most suitable tree edit distance measure, in order to automate the determination of similarity of our generated distractor images. The information gathered from these experiments was used in the design of a prototype visual password scheme whose generated images are similar, but not identical, in order to defeat shoulder surfing. This approach overcomes the following problems with this category of visual password schemes: shoulder surfing, bias in image selection, selection of easy-to-guess pictures, and infrastructural limitations such as large picture databases, network speed, and database security.
The resulting prototype is highly secure, resilient to shoulder surfing, easy for humans to use, and free of the aforementioned limitations of this category of visual password schemes.
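The thesis's own eight tree edit distance measures are not reproduced in the abstract, but the core idea can be sketched. The following toy recursion computes an edit distance over ordered labeled trees, represented here as hypothetical `(label, children)` tuples, aligning child sequences with a small dynamic program; a real system would use an algorithm such as Zhang–Shasha or one of the approximations the thesis compares.

```python
def delete_cost(t):
    """Cost of deleting an entire subtree: one operation per node."""
    return 1 + sum(delete_cost(c) for c in t[1])

def tree_dist(a, b):
    """Edit distance between two ordered labeled trees.

    Each tree is a (label, children-list) tuple. The root labels are
    compared directly; the two child sequences are aligned with an
    edit-distance DP whose cell costs recurse into the subtrees.
    """
    rel = 0 if a[0] == b[0] else 1  # relabel cost at the roots
    ca, cb = a[1], b[1]
    m, n = len(ca), len(cb)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + delete_cost(ca[i - 1])
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + delete_cost(cb[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + delete_cost(ca[i - 1]),          # delete subtree
                d[i][j - 1] + delete_cost(cb[j - 1]),          # insert subtree
                d[i - 1][j - 1] + tree_dist(ca[i - 1], cb[j - 1]),  # match/relabel
            )
    return rel + d[m][n]
```

Under this scheme, two images generated from grammar derivation trees at small edit distance are structural near-neighbours, which is exactly the property the thesis exploits to pick highly similar distractors.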
12

Análise e avaliação de técnicas de interação humano-computador para sistemas de recuperação de imagens por conteúdo baseadas em estudo de caso / Evaluating human-computer interaction techniques for content-based image retrieval systems through a case study

Filardi, Ana Lúcia 30 August 2007 (has links)
Content-Based Image Retrieval (CBIR) is a branch of computer science that has grown rapidly in recent years and continues to raise new challenges. CBIR systems store and manipulate large volumes of images, automatically extract visual features from them by computational methods, compose the feature vectors, and store these together with the images in a database management system that supports indexing and querying. Because similarity queries inherently demand user interaction, CBIR systems must pay close attention to the user interface, aiming to provide friendly, intuitive and attractive interaction that lets the user carry out tasks efficiently, safely, effectively and with satisfaction. The design of the interface is therefore a fundamental element in the success of a CBIR system; however, the user interface remains an element with little research and development behind it. One requirement is a high-quality interface that allows the user to search for images similar to a given query image and displays the results properly, allowing further interaction. To that end, this work analyses user interaction in content-based image retrieval systems and evaluates their functionality and usability, applying human-computer interaction techniques that have shown good results for highly complex systems, based on a case study applied to medicine.
13

Vad säger bilden? : En utvärdering av återvinningseffektiviteten i ImBrowse / What can an Image tell? : An Evaluation of the Retrieval Performance in ImBrowse

Henrysson, Jennie, Johansson, Kristina, Juhlin, Charlotte January 2006 (has links)
The aim of this master's thesis is to evaluate the performance of the content-based image retrieval system ImBrowse from a semantic point of view. Evaluating retrieval performance is a known problem in content-based image retrieval (CBIR): there are many methods for measuring the performance of CBIR systems, but no common way of performing the evaluation. The main focus is on image retrieval with respect to the extraction of visual features from the image, considered at three semantic levels. The thesis tries to elucidate the semantic gap, the mismatch that arises when the system's extraction of visual features from an image and the user's interpretation of the same information do not correspond. The method of this thesis is based on similar methods in evaluation studies of CBIR systems. ImBrowse's feature descriptors were evaluated on 30 topics at three semantic levels, and the descriptors' performance was compared on the basis of our relevance assessments. For the computation of the results, precision at DCV = 20 is used. The results are presented in tables and a chart. The conclusion of this evaluation is that retrieval effectiveness, from a general point of view, did not meet the semantic level of our relevance-assessed topics. However, since the thesis does not have another system with the same search functions to compare against, it is difficult to draw a comprehensive conclusion from the results. / Uppsatsnivå: D (Thesis level: D)
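The cutoff measure used above is simple to state in code. A minimal sketch, with illustrative function and argument names not taken from the thesis: precision at a document cutoff value (DCV) is the fraction of the first DCV retrieved items judged relevant.

```python
def precision_at_dcv(ranked_ids, relevant_ids, dcv=20):
    """Precision at a document cutoff value (DCV).

    ranked_ids: retrieved item ids in rank order.
    relevant_ids: ids judged relevant for the topic.
    Returns the fraction of the top-dcv retrieved items that are relevant.
    """
    relevant = set(relevant_ids)
    top = ranked_ids[:dcv]
    hits = sum(1 for doc in top if doc in relevant)
    return hits / dcv
```

Note that the denominator is always the cutoff, so a topic with fewer than 20 relevant items can never score 1.0 at DCV = 20; this is a property of the measure, not a bug.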
14

Improved Scoring Models for Semantic Image Retrieval Using Scene Graphs

Conser, Erik Timothy 28 September 2017 (has links)
Image retrieval via a structured query is explored by Johnson et al. [7]. The query is structured as a scene graph, and a graphical model is generated from the scene graph's object, attribute, and relationship structure. Inference is performed on the graphical model with candidate images, and the resulting energies are used to rank the best matches. In [7], scene graph objects that are not in the set of recognized objects are not represented in the graphical model. This work proposes and tests two approaches for modeling the unrecognized objects in order to leverage the attribute and relationship models and improve image retrieval performance.
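The energy-based ranking described above can be sketched in miniature. The score tables below are illustrative stand-ins for the learned potentials in [7], not the paper's actual model: each grounding of the scene graph in a candidate image gets an energy from its per-object and per-relationship scores, and lower energy ranks higher.

```python
import math

def grounding_energy(objects, relations, unary, pairwise):
    """Energy of one scene-graph grounding in a candidate image.

    unary[obj] and pairwise[(subj, predicate, obj)] are scores in (0, 1];
    the energy is the summed negative log of those scores, so lower
    energy means a better match.
    """
    e = -sum(math.log(unary[o]) for o in objects)
    e -= sum(math.log(pairwise[r]) for r in relations)
    return e

def rank_candidates(energies):
    """energies: {image_id: energy}; rank image ids by ascending energy."""
    return sorted(energies, key=energies.get)
```

An unrecognized object contributes no unary term in the baseline of [7]; the two approaches this thesis tests amount to different ways of giving such objects a usable stand-in score so their attribute and relationship terms still participate.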
15

Topics in Content Based Image Retrieval : Fonts and Color Emotions

Solli, Martin January 2009 (has links)
Two novel contributions to Content-Based Image Retrieval are presented and discussed. The first is a search engine for font recognition, intended for searching very large font databases. The input to the search engine is an image of a text line, and the output is the name of the font used when printing the text. After pre-processing and segmentation of the input image, a local approach is used in which features are calculated for individual characters. The method is based on eigenimages calculated from edge-filtered character images, which enables compact feature vectors that can be computed rapidly. A system for visualizing the entire font database is also proposed: applying geometry-preserving linear and non-linear manifold learning methods, the structure of the high-dimensional feature space is mapped to a two-dimensional representation, which can be reorganized into a grid-based display. The performance of the search engine and the visualization tool is illustrated with a large database containing more than 2700 fonts.

The second contribution is the inclusion of color-based, emotion-related properties in image retrieval. The color emotion metric used is derived from psychophysical experiments and uses three scales: activity, weight and heat. It was originally designed for single colors and later extended to pairs of colors. A modified approach to the statistical analysis of color emotions in images, involving transformations of ordinary RGB histograms, is used for image classification and retrieval. The methods are very fast in feature extraction and the descriptor vectors are very short, which is essential in our application, where the intended use is searching huge image databases containing millions or billions of images. The proposed method is evaluated in psychophysical experiments using both category scaling and interval scaling. The results show that people generally perceive color emotions for multi-colored images in similar ways, and that observer judgments correlate with the derived values.

Both the font search engine and the emotion-based retrieval system are implemented in publicly available search engines. User statistics gathered over periods of 20 and 14 months, respectively, are presented and discussed.
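The abstract does not spell out the RGB-histogram transformations behind the emotion descriptors, but the histogram such methods start from can be sketched. A coarse joint RGB histogram like the one below (bin count and function names are assumptions, not from the thesis) already yields the short, fast-to-compute descriptors the abstract emphasises.

```python
def rgb_histogram(pixels, bins=4):
    """Coarse joint RGB histogram as a bins**3-dimensional descriptor.

    pixels: iterable of (r, g, b) tuples with channels in 0..255.
    Each channel is quantized into `bins` ranges and the joint bin
    counts are L1-normalised, so the descriptor sums to 1.
    """
    step = 256 // bins
    hist = [0.0] * (bins ** 3)
    n = 0
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1.0
        n += 1
    return [h / n for h in hist] if n else hist
```

With bins = 4 the descriptor has only 64 components per image, which is in the spirit of the "very short descriptor vectors" the abstract requires for billion-image search; the emotion scales would then be computed as transformations of such a histogram.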
16

Retrieval by spatial similarity based on interval neighbor group

Huang, Yen-Ren 23 July 2008 (has links)
The objective of the present work is to employ a multiple-instance learning image retrieval system incorporating a spatial similarity measure. Multiple-instance learning is a way of modeling ambiguity in supervised learning given multiple examples: from a small collection of positive and negative example images, semantically relevant concepts can be derived automatically and employed to retrieve images from an image database. The degree of similarity between two spatial relations is linked to the distance between the associated nodes in an Interval Neighbor Group (ING): the shorter the distance, the higher the degree of similarity; the longer the distance, the lower the degree. Once all the pairwise similarity values are derived, an ensemble similarity measure integrates these pairwise assessments and gives an overall similarity value between two images, so the images in a database can be quantitatively ranked by their degree of ensemble similarity with the query image. The similarity retrieval method evaluates the ensemble similarity based on the spatial relations and common objects present in the maximum common subimage of the query and a database image. Reliable spatial relation features extracted from the image, combined with a multiple-instance learning paradigm to derive relevant concepts, can therefore produce retrieval results that better match the user's expectation. In order to demonstrate the feasibility of the proposed approach, two sets of tests for querying an image database are performed: the proposed RSS-ING scheme vs. the 2D Be-string similarity method, and single-instance vs. multiple-instance learning. The performance in terms of similarity curves, execution time, and memory space requirements favors the proposed multiple-instance spatial-similarity-based approach.
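The ensemble step above, combining pairwise spatial-relation similarities into one image-level score and ranking the database by it, can be sketched as follows. The plain mean used as the combiner is an assumption for illustration; the thesis's actual ensemble measure is derived from the ING structure.

```python
def ensemble_similarity(pair_sims):
    """Combine pairwise spatial-relation similarities into one score.

    pair_sims: similarity values in [0, 1], one per relation pair shared
    by the query and a database image. A mean is used here as a simple
    stand-in for the thesis's ensemble measure.
    """
    return sum(pair_sims) / len(pair_sims) if pair_sims else 0.0

def rank_database(query_sims):
    """Rank database image ids by descending ensemble similarity.

    query_sims: {image_id: [pairwise similarity values vs. the query]}.
    """
    scored = {img: ensemble_similarity(s) for img, s in query_sims.items()}
    return sorted(scored, key=scored.get, reverse=True)
```

In the full system the pairwise values would come from ING node distances over the maximum common subimage, so an image sharing no objects with the query scores zero and ranks last, matching the behaviour described above.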
17

Object and concept recognition for content-based image retrieval

Li, Yi, January 2005 (has links)
Thesis (Ph. D.)--University of Washington, 2005. / Vita. Includes bibliographical references (p. 82-87).
18

Image classification, storage and retrieval system for a 3U CubeSat

Gashayija, Jean Marie January 2014 (has links)
Thesis submitted in fulfillment of the requirements for the degree Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology / Small satellites such as CubeSats are mainly utilized for space and earth imaging missions. Imaging CubeSats are equipped with high-resolution cameras for capturing digital images, as well as mass storage devices for storing them. The captured images are transmitted to the ground station and subsequently stored in a database. The main problem with stored images in a large image database, identified by researchers and developers in recent years, is the retrieval of precise, clear images and overcoming the semantic gap. The semantic gap is the lack of correlation between the semantic categories the user requires and the low-level features that a content-based image retrieval system offers. Clear images are needed for applications such as mapping, disaster monitoring and town planning. The main objective of this thesis is the design and development of an image classification, storage and retrieval system for a CubeSat, enabling efficient classification, storage and retrieval of images received daily from an in-orbit CubeSat. To propose such a system, a specific research methodology was chosen and adopted. This entailed extensive literature reviews on image classification and image feature extraction techniques, to extract content embedded within an image, together with studies of image database systems, data mining techniques and image retrieval techniques. The literature study led to a requirements analysis, followed by an analysis of software development models, in order to design the system. The proposed design entails classifying images using content embedded in the image and also extracting image metadata such as date and time.
Specific feature extraction techniques are needed to extract the required content and metadata. To extract the information embedded in the image, colour (colour histogram), shape (mathematical morphology) and texture (grey-level co-occurrence matrix, GLCM) feature techniques were used. Other major contributions of this project include a graphical user interface that enables users to search for images similar to those stored in the database. An automatic image extractor algorithm was also designed to classify images according to date and time, and colour, texture and shape feature extraction techniques were proposed. These ensure that when a user queries the database, the shape objects, colour quantities and contrast contained in an image are extracted and compared with those stored in the database. Implementation and test results showed that the designed system is able to categorize images automatically while providing efficient and accurate results. The features extracted for each image depend on the colour, shape and texture methods; optimal values were also incorporated to reduce retrieval times. The mathematical morphology technique computes shape objects using erosion and dilation operators, and the co-occurrence matrix computes the texture feature of the image.
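The GLCM texture step can be illustrated in a few lines. Below is a minimal grey-level co-occurrence matrix for a single horizontal offset, plus the contrast feature computed from it; the quantization level and the choice of offset are assumptions for the sketch, and the thesis's implementation may differ.

```python
def glcm(gray, levels=4, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one (dx, dy) offset.

    gray: 2-D list of ints already quantized to 0..levels-1.
    Counts how often grey level i co-occurs with grey level j at the
    given offset, then normalises the counts to probabilities.
    """
    mat = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(gray), len(gray[0])
    total = 0
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                mat[gray[y][x]][gray[y2][x2]] += 1.0
                total += 1
    return [[v / total for v in row] for row in mat]

def glcm_contrast(mat):
    """Contrast feature: sum over i, j of (i - j)**2 * p(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(mat)
               for j, p in enumerate(row))
```

A flat region scores zero contrast, while alternating stripes score high, which is why contrast is one of the texture quantities the system compares against the database.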
20

Découverte et exploitation d'objets visuels fréquents dans des collections multimédia / Mining and exploitation of frequent visual objects in multimedia collections

Letessier, Pierre 28 March 2013 (has links)
The main goal of this thesis is the discovery of frequently occurring visual objects in large multimedia collections (images or videos). As in many fields (finance, genetics, . . .), the aim is to extract knowledge automatically or semi-automatically, using the frequency with which an object appears in a corpus as the relevance criterion. The first contribution of the thesis is a formalism for the problems of discovering and mining instances of frequent visual objects. The second contribution is a generic method for solving both types of problem, based on an iterative process for sampling candidate objects together with an efficient, large-scale method for matching rigid objects. The third contribution focuses on building a likelihood function that approaches the ideal distribution as closely as possible while remaining scalable and efficient. Experiments show that, unlike state-of-the-art methods, our approach can efficiently discover very small objects in several million images. Finally, several scenarios for exploiting the visual graphs produced by our method are proposed and evaluated, including trademark logo discovery, the detection of transmedia media events, and visual query suggestion.
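The frequency criterion above can be sketched once the matching stage is abstracted away: after candidate objects have been matched across the corpus, keep those whose occurrence count reaches a minimum support. This toy counter (the names and threshold are illustrative) stands in for the full sampling-and-matching pipeline.

```python
from collections import Counter

def frequent_objects(occurrences, min_support=2):
    """Select frequent visual objects by corpus frequency.

    occurrences: one object id per matched instance found in the corpus
    (the output of the matching stage). Returns the ids whose frequency
    of appearance meets min_support, i.e. the 'frequent' objects under
    the corpus-frequency relevance criterion.
    """
    counts = Counter(occurrences)
    return {obj for obj, c in counts.items() if c >= min_support}
```

The hard part of the thesis lies upstream of this filter: sampling candidates and matching rigid objects at scale so that the occurrence list is both complete enough and cheap enough to build over millions of images.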
