  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Vyhledání obrázků podle obsahu / Content-based Image Search

Talaš, Josef January 2014
This work addresses content-based image search. Different approaches to this type of search are investigated, with the main focus on a special category of content-based image search called sketch-based image search. The most important descriptor types used for feature extraction in image search are analyzed. The main contribution of the thesis is a new feature extraction method for sketch-based image search, implemented together with a search interface. The method was evaluated by three test persons. The testing results show promising properties of the new method and suggest possible further improvements.
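To give a concrete flavour of the edge-based feature extraction common in sketch-based image search, here is a minimal orientation-histogram sketch. This is an illustrative baseline only, not the descriptor proposed in the thesis; function names and the choice of 8 bins are assumptions.

```python
import numpy as np

def edge_orientation_histogram(img, bins=8):
    """Quantize gradient orientations into a normalised histogram.

    Sketches and edge maps of photos can be compared via their dominant
    edge directions; this is a generic starting point for such methods.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.arctan2(gy, gx) % np.pi           # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist         # L1-normalised descriptor

def descriptor_distance(h1, h2):
    """L1 distance between two descriptors (smaller = more similar)."""
    return float(np.abs(h1 - h2).sum())
```

In practice such a descriptor would be computed per image region and concatenated, and matching would rank database images by distance to the query sketch's descriptor.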
32

A Content-Based Image Retrieval System for Fish Taxonomy

Teng, Fei 22 May 2006
It is estimated that less than ten percent of the world's species have been discovered and described. The main reason for the slow pace of new species description is that the science of taxonomy, as traditionally practiced, can be very laborious: taxonomists have to manually gather and analyze data from large numbers of specimens and identify the smallest subset of external body characters that uniquely diagnoses the new species as distinct from all its known relatives. The pace of data gathering and analysis can be greatly increased by information technology. In this paper, we propose a content-based image retrieval system for taxonomic research. The system can identify representative body shape characters of known species based on digitized landmarks and provide statistical clues to assist taxonomists in identifying new species or subspecies. Experiments on a taxonomic problem involving species of suckers in the genus Carpiodes demonstrate promising results.
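The landmark-based shape comparison described above can be illustrated with a standard Procrustes distance from geometric morphometrics: position, scale and rotation are removed before configurations of digitized landmarks are compared. This is a generic sketch, not the paper's actual pipeline.

```python
import numpy as np

def procrustes_distance(a, b):
    """Distance between two landmark configurations after removing
    translation, scale and rotation (orthogonal Procrustes analysis).

    a, b: (n_landmarks, 2) arrays of digitized landmark coordinates.
    """
    # Centre each configuration and scale to unit centroid size.
    a = a - a.mean(axis=0); a = a / np.linalg.norm(a)
    b = b - b.mean(axis=0); b = b / np.linalg.norm(b)
    # Optimal rotation via SVD of the cross-covariance matrix.
    u, _, vt = np.linalg.svd(a.T @ b)
    r = u @ vt
    return float(np.linalg.norm(a @ r - b))
```

Two specimens of the same shape then score near zero regardless of how they were positioned or scaled when photographed, which is the property a landmark-based retrieval system needs.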
33

A tree grammar-based visual password scheme

Okundaye, Benjamin January 2016
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Doctor of Philosophy. Johannesburg, August 31, 2015. / Visual password schemes can be considered an alternative to alphanumeric passwords. Studies have shown that alphanumeric passwords can, among other things, be eavesdropped, shoulder surfed, or guessed, and are susceptible to brute-force automated attacks. Visual password schemes use images, in place of alphanumeric characters, for authentication. For example, users of visual password schemes either select images (Cognometric), select points on an image (Locimetric), or attempt to redraw their password image (Drawmetric) in order to gain authentication. Visual passwords are limited by the so-called password space, i.e., by the size of the alphabet from which users can draw to create a password, and by susceptibility to theft of pass-images by someone looking over the user's shoulder, referred to as shoulder surfing in the literature. The use of automatically generated, highly similar abstract images defeats shoulder surfing and means that an almost unlimited pool of images is available for use in a visual password scheme, thus also overcoming the issue of limited password space. This research investigated visual password schemes. In particular, this study looked at the possibility of using tree picture grammars to generate abstract graphics for use in a visual password scheme. We also examined how humans determine the similarity of abstract computer-generated images, referred to as perceptual similarity in the literature. We drew on the psychological idea of similarity and matched it as closely as possible with a mathematical measure of image similarity, using Content Based Image Retrieval (CBIR) and tree edit distance measures.
To this end, an online similarity survey was conducted, involving 661 respondents and 50 images, with respondents ordering answer images by similarity to question images. The survey images were also compared with eight state-of-the-art computer-based similarity measures to determine how closely those measures model perceptual similarity. Since all the images were generated with tree grammars, the most popular measure of tree similarity, the tree edit distance, was also used to compare the images. Eight different tree edit distance measures were used in order to cover the broad range of tree edit distance and tree edit distance approximation methods. All the computer-based similarity methods were then correlated with the online survey results to determine which ones most closely model perceptual similarity. The results were then analysed in the light of modern psychological theories of perceptual similarity. This work represents a novel approach to Passfaces-type visual password schemes, using dynamically generated pass-images and their highly similar distractors instead of static pictures stored in an online database. The results of the online survey were then accurately modelled using the most suitable tree edit distance measure, in order to automate the determination of similarity of the generated distractor images. The information gathered from these experiments was then used in the design of a prototype visual password scheme. The generated images were similar, but not identical, in order to defeat shoulder surfing. This approach overcomes the following problems with this category of visual password schemes: shoulder surfing, bias in image selection, selection of easy-to-guess pictures, and infrastructural limitations such as large picture databases, network speed, and database security issues.
The resulting prototype developed is highly secure, resilient to shoulder surfing and easy for humans to use, and overcomes the aforementioned limitations in this category of visual password schemes.
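Since the work relies on tree edit distance measures between grammar-generated images, a small illustration may help. Selkow's top-down edit distance is one of the simpler members of that family; it is shown here with unit costs as a generic baseline, not as one of the specific measures evaluated in the thesis.

```python
def size(t):
    """Number of nodes in a tree given as (label, [children])."""
    return 1 + sum(size(c) for c in t[1])

def selkow(t1, t2):
    """Selkow's top-down tree edit distance (unit costs) between
    ordered, labelled trees represented as (label, [children]) tuples.
    Deleting or inserting a child removes or adds its whole subtree.
    """
    d = 0 if t1[0] == t2[0] else 1             # relabel root if needed
    c1, c2 = t1[1], t2[1]
    m, n = len(c1), len(c2)
    # String-edit style DP over the two child sequences.
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + size(c1[i - 1])
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + size(c2[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(dp[i - 1][j] + size(c1[i - 1]),       # delete
                           dp[i][j - 1] + size(c2[j - 1]),       # insert
                           dp[i - 1][j - 1] + selkow(c1[i - 1],  # match
                                                     c2[j - 1]))
    return d + dp[m][n]
```

A small distance between the derivation trees of two generated images is the structural analogue of the "highly similar distractor" idea: the closer the trees, the more confusable the images should be.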
34

Operações de consulta por similaridade em grandes bases de dados complexos / Similarity search operations in large complex databases

Barioni, Maria Camila Nardini 04 September 2006
Os Sistemas de Gerenciamento de Bases de Dados (SGBD) foram desenvolvidos para armazenar e recuperar de maneira eficiente dados formados apenas por números ou cadeias de caracteres. Entretanto, nas últimas décadas houve um aumento expressivo, não só da quantidade, mas da complexidade dos dados manipulados em bases de dados, dentre eles os de natureza multimídia (como imagens, áudio e vídeo), informações geo-referenciadas, séries temporais, entre outros. Assim, surgiu a necessidade do desenvolvimento de novas técnicas que permitam a manipulação eficiente de tipos de dados complexos. Para atender às buscas necessárias às aplicações de base de dados modernas é preciso que os SGBD ofereçam suporte para buscas por similaridade – consultas que realizam busca por objetos da base similares a um objeto de consulta, de acordo com uma certa medida de similaridade. Outro fator importante que veio contribuir para a necessidade de suportar a realização de consultas por similaridade em SGBD está relacionado à integração de técnicas de mineração de dados. É fundamental para essa integração o fornecimento de recursos pelos SGBD que permitam a realização de operações básicas para as diversas técnicas de mineração de dados existentes. Uma operação básica para várias dessas técnicas, tais como a técnica de detecção de agrupamentos de dados, é justamente o cálculo de medidas de similaridade entre pares de objetos de um conjunto de dados. Embora haja necessidade de fornecer suporte para a realização desse tipo de consultas em SGBD, o atual padrão da linguagem SQL não prevê a realização de consultas por similaridade.
Esta tese pretende contribuir para o fornecimento desse suporte, incorporando ao SQL recursos capazes de permitir a realização de operações de consulta por similaridade sobre grandes bases de dados complexos de maneira totalmente integrada com os demais recursos da linguagem / Database Management Systems (DBMS) were developed to store and efficiently retrieve data composed only of numbers and small strings. However, over the last decades there has been an expressive increase in the volume and complexity of the data being managed, such as multimedia data (images, audio tracks and video), geo-referenced information and time series. Thus, the need to develop new techniques that allow the efficient handling of complex data types has also increased. In order to support these data and the corresponding applications, the DBMS needs to support similarity queries, i.e., queries that search for objects similar to a query object according to a similarity measure. The need to support similarity queries in DBMS is also related to the integration of data mining techniques, which requires the DBMS to act as the provider of resources that allow the execution of basic operations for several existing data mining techniques. A basic operation for several of these techniques, such as clustering detection, is again the computation of similarity measures among pairs of objects of a data set. Although there is a need to execute this kind of query in a DBMS, the SQL standard does not allow the specification of similarity queries. Hence, this thesis aims at contributing such support, integrating into SQL the resources needed to execute similarity query operations over large sets of complex data.
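To make the gap concrete: since standard SQL has no similarity operator, a k-nearest-neighbour query over feature vectors must be emulated outside the language. A hypothetical sketch using SQLite with a user-defined distance function (illustrative only; the thesis proposes proper SQL extensions, and the table layout and JSON encoding here are assumptions):

```python
import sqlite3, json, math

def l2(a_json, b_json):
    """Euclidean distance between two JSON-encoded feature vectors."""
    a, b = json.loads(a_json), json.loads(b_json)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

con = sqlite3.connect(":memory:")
con.create_function("l2", 2, l2)          # expose the distance to SQL
con.execute("CREATE TABLE images (id INTEGER, feat TEXT)")
feats = {1: [0.0, 0.0], 2: [1.0, 1.0], 3: [0.1, 0.0]}
con.executemany("INSERT INTO images VALUES (?, ?)",
                [(i, json.dumps(f)) for i, f in feats.items()])

# Emulated similarity query: "the 2 images nearest the query object".
query = json.dumps([0.0, 0.0])
knn = con.execute(
    "SELECT id FROM images ORDER BY l2(feat, ?) LIMIT 2",
    (query,)).fetchall()
```

The `ORDER BY distance LIMIT k` pattern works, but the distance is opaque to the optimizer, so every row is scanned; integrated similarity operators can instead use metric access methods.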
35

Análise e avaliação de técnicas de interação humano-computador para sistemas de recuperação de imagens por conteúdo baseadas em estudo de caso / Evaluating human-computer interaction techniques for content-based image retrieval systems through a case study

Filardi, Ana Lúcia 30 August 2007
A recuperação de imagens baseada em conteúdo, amplamente conhecida como CBIR (do inglês Content-Based Image Retrieval), é um ramo da área da computação que vem crescendo muito nos últimos anos e vem contribuindo com novos desafios. Sistemas que utilizam tais técnicas propiciam o armazenamento e manipulação de grandes volumes de dados e imagens e processam operações de consultas de imagens a partir de características visuais extraídas automaticamente por meio de métodos computacionais. Esses sistemas devem prover uma interface de usuário visando uma interação fácil, natural e atraente entre o usuário e o sistema, permitindo que o usuário possa realizar suas tarefas com segurança, de modo eficiente, eficaz e com satisfação. Desse modo, o design da interface firma-se como um elemento fundamental para o sucesso de sistemas CBIR. Contudo, dentro desse contexto, a interface do usuário ainda é um elemento constituído de pouca pesquisa e desenvolvimento. Um dos obstáculos para eficácia de design desses sistemas consiste da necessidade em prover aos usuários uma interface de alta qualidade para permitir que o usuário possa consultar imagens similares a uma dada imagem de referência e visualizar os resultados. Para atingir esse objetivo, este trabalho visa analisar a interação do usuário em sistemas de recuperação de imagens por conteúdo e avaliar sua funcionalidade e usabilidade, aplicando técnicas de interação humano-computador que apresentam bons resultados em relação à performance de sistemas com grande complexidade, baseado em um estudo de caso aplicado à medicina / Content-based image retrieval (CBIR) is a challenging area of computer science that has been growing at a fast pace in recent years. CBIR systems employ techniques for extracting features from the images, composing the feature vectors, and storing them together with the images in a database management system, allowing indexing and querying. CBIR systems deal with large volumes of images.
Therefore, the feature vectors are extracted by automatic methods. These systems allow querying the images by content, processing similarity queries, which inherently demands user interaction. Consequently, CBIR systems must pay attention to the user interface, aiming at providing friendly, intuitive and attractive interaction, leading the user to perform tasks efficiently, getting the desired results, and feeling safe and satisfied. From the points highlighted above, we can state that human-computer interaction (HCI) is a key element of a CBIR system. However, there is still little research on HCI for CBIR systems. One of the requirements of HCI for CBIR is to provide a high-quality interface that allows the user to search for images similar to a given query image and to display the results properly, allowing further interaction. The present dissertation aims at analyzing user interaction in CBIR systems especially suited to medical applications, evaluating their usability by applying HCI techniques. To do so, a case study was employed, and the results are presented.
36

Vad säger bilden? : En utvärdering av återvinningseffektiviteten i ImBrowse / What can an Image tell? : An Evaluation of the Retrieval Performance in ImBrowse

Henrysson, Jennie, Johansson, Kristina, Juhlin, Charlotte January 2006
The aim of this master's thesis is to evaluate the performance of the content-based image retrieval system ImBrowse from a semantic point of view. Evaluation of retrieval performance is a known problem in content-based image retrieval (CBIR): there are many methods for measuring the performance of CBIR systems, but no common way of performing the evaluation. The main focus is on image retrieval regarding the extraction of visual features from the image, at three semantic levels. The thesis tries to elucidate the semantic gap, i.e., the problem that arises when the system's extraction of visual features from the image and the user's interpretation of the same information do not correspond. The method of this thesis is based on similar methods used in evaluation studies of CBIR systems. The thesis evaluates ImBrowse's feature descriptors for 30 topics at three semantic levels and compares the descriptors' performance based on our relevance assessment. For the computation of the results, precision at DCV = 20 is used. The results are presented in tables and a chart. The conclusion from this evaluation is that the retrieval effectiveness, from a general point of view, did not meet the semantic level of our relevance-assessed topics. However, since the thesis does not have another system with the same search functions to compare with, it is difficult to draw a comprehensive conclusion from the results. / Uppsatsnivå: D
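Precision at a document cutoff value (DCV), the measure used in the evaluation above, is straightforward to compute: the fraction of the top-k retrieved items judged relevant. A small illustrative helper (not ImBrowse code; names are assumptions):

```python
def precision_at_cutoff(ranked_ids, relevant_ids, cutoff=20):
    """Precision at a document cutoff value (DCV).

    ranked_ids: retrieval result, best match first.
    relevant_ids: set of ids judged relevant for the topic.
    Returns the fraction of the top-`cutoff` results that are relevant.
    """
    top = ranked_ids[:cutoff]
    return sum(1 for d in top if d in relevant_ids) / cutoff
```

Averaging this value over the 30 topics, per descriptor and per semantic level, yields exactly the kind of comparison table the thesis reports.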
37

Purposeful Integration of Literacy and Science Instruction in a 4th Grade Immersion Program

Overvliet, Emily Nicole 01 April 2018
Though learning content in a second language (L2) requires additional time, students in immersion classes are expected to keep up with the curricular pace of traditional classes. One possible way to secure sufficient time for both language and science content learning is to integrate language arts instruction with core curricular content. This action research study investigated the effectiveness of purposefully integrating literacy instruction with the Utah Core Standards for science with 53 fourth-grade French partial immersion students in Utah. The purpose of this study was to discover how such a model might affect students' French reading skills, science knowledge, and attitudes about their immersion experience. Findings revealed statistically significant differences between pre- and post-tests on some measures of student performance, and yielded pedagogical implications regarding the development of reading fluency, science proficiency, and student engagement.
38

Improved Scoring Models for Semantic Image Retrieval Using Scene Graphs

Conser, Erik Timothy 28 September 2017
Image retrieval via a structured query is explored in Johnson, et al. [7]. The query is structured as a scene graph and a graphical model is generated from the scene graph's object, attribute, and relationship structure. Inference is performed on the graphical model with candidate images and the energy results are used to rank the best matches. In [7], scene graph objects that are not in the set of recognized objects are not represented in the graphical model. This work proposes and tests two approaches for modeling the unrecognized objects in order to leverage the attribute and relationship models to improve image retrieval performance.
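The ranking idea above can be sketched as follows: each candidate image is assigned an energy summing unary (object) and binary (relationship) terms from the scene-graph query, and lower energy ranks higher. The potentials here are toy lookups standing in for the learned models in Johnson et al., and the penalty for unrecognized query objects is a hypothetical constant.

```python
# Toy energy-based ranking over scene-graph queries (illustrative only).

PENALTY = 2.0   # hypothetical cost for a query object absent in the image

def energy(scene_graph, candidate):
    """scene_graph: {'objects': [...], 'relations': [(subj, rel, obj), ...]}
    candidate: {'objects': set(...), 'relations': set((subj, rel, obj))}
    Lower energy = better match."""
    e = 0.0
    for obj in scene_graph['objects']:
        e += 0.0 if obj in candidate['objects'] else PENALTY
    for triple in scene_graph['relations']:
        e += 0.0 if triple in candidate['relations'] else 1.0
    return e

def rank(scene_graph, candidates):
    """candidates: list of (name, candidate) pairs; best match first."""
    return sorted(candidates, key=lambda nc: energy(scene_graph, nc[1]))
```

The thesis's question then becomes: when a query object is outside the recognized set, can its attribute and relationship terms still contribute to the energy instead of the object being dropped from the model?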
39

Combined map personalisation algorithm for delivering preferred spatial features in a map to everyday mobile device users

Bookwala, Avinash Turab January 2009
In this thesis, we present an innovative and novel approach to personalising maps/geo-spatial services for mobile users. With the proposed map personalisation approach, only relevant data is extracted from detailed maps/geo-spatial services on the fly, based on a user's current location, preferences and requirements. This results in dramatic improvements in the legibility of maps on mobile device screens, as well as significant reductions in the amount of data being transmitted, which in turn reduces the download time and cost of transferring the required geo-spatial data across mobile networks. Furthermore, the proposed map personalisation approach has been implemented in a working system based on a four-tier client-server architecture, wherein fully detailed maps/services are stored on the server, and upon a user's request, personalised maps/services, extracted from the fully detailed maps/services based on the user's current location and preferences, are sent to the user's mobile device through mobile networks. By using open and standard system development tools, our system is open to everyday mobile devices rather than only smart phones and Personal Digital Assistants (PDAs), as is prevalent in most current map personalisation systems. The proposed map personalisation approach combines content-based information filtering and collaborative information filtering techniques into an algorithmic solution, wherein content-based information filtering is used for regular users who have a user profile stored on the system, and collaborative information filtering is used for new/occasional users who have no user profile stored on the system.
Maps/geo-spatial services are personalised for regular users by analysing the user's spatial feature preferences, automatically collected and stored in their user profile from previous usage, whereas map personalisation for new/occasional users is achieved by analysing the spatial feature preferences of like-minded users in the system in order to make an inference for the target user. Furthermore, association rule mining, an advanced inference technique, is used to obtain the spatial features retrieved for new/occasional users through collaborative filtering. The selection of spatial features through association rule mining is achieved by finding interesting and similar patterns in the spatial features most commonly retrieved by different user groups, based on their past transactions or usage sessions with the system.
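The collaborative-filtering step for new/occasional users can be sketched with a simple neighbour lookup: find the profiles most similar to the current session (Jaccard similarity over sets of spatial feature types) and suggest their features. All names and data below are hypothetical, and the thesis's actual algorithm additionally uses association rule mining.

```python
def jaccard(a, b):
    """Set overlap in [0, 1]; 1.0 means identical preference sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_features(session_feats, profiles, k=2):
    """Infer spatial features for a profile-less user.

    session_feats: feature types the new user touched this session.
    profiles: {user: set(feature_types)} for regular users.
    Returns features favoured by the k most similar profiles that the
    new user has not already requested."""
    ranked = sorted(profiles.items(),
                    key=lambda kv: jaccard(session_feats, kv[1]),
                    reverse=True)
    out = set()
    for _, feats in ranked[:k]:
        out |= feats
    return out - session_feats
```

For regular users the same personalisation is driven directly by their own stored profile, which is the content-based half of the combined algorithm.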
40

Topics in Content Based Image Retrieval : Fonts and Color Emotions

Solli, Martin January 2009
Two novel contributions to Content Based Image Retrieval are presented and discussed. The first is a search engine for font recognition, intended for search in very large font databases. The input to the search engine is an image of a text line, and the output is the name of the font used when printing the text. After pre-processing and segmentation of the input image, a local approach is used, where features are calculated for individual characters. The method is based on eigenimages calculated from edge-filtered character images, which enables compact feature vectors that can be computed rapidly. A system for visualizing the entire font database is also proposed. Applying geometry-preserving linear and non-linear manifold learning methods, the structure of the high-dimensional feature space is mapped to a two-dimensional representation, which can be reorganized into a grid-based display. The performance of the search engine and the visualization tool is illustrated with a large database containing more than 2700 fonts.

The second contribution is the inclusion of color-based emotion-related properties in image retrieval. The color emotion metric used is derived from psychophysical experiments and uses three scales: activity, weight and heat. It was originally designed for single colors and later extended to pairs of colors. A modified approach for the statistical analysis of color emotions in images, involving transformations of ordinary RGB histograms, is used for image classification and retrieval. The methods are very fast in feature extraction, and descriptor vectors are very short. This is essential in our application, where the intended use is search in huge image databases containing millions or billions of images. The proposed method is evaluated in psychophysical experiments, using both category scaling and interval scaling. The results show that people in general perceive color emotions for multi-colored images in similar ways, and that observer judgments correlate with derived values.

Both the font search engine and the emotion-based retrieval system are implemented in publicly available search engines. User statistics gathered over periods of 20 and 14 months, respectively, are presented and discussed.
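The histogram-based shortcut described above, scoring histogram bins instead of individual pixels, can be sketched as follows. The emotion function below is a deliberate placeholder, not the psychophysically derived activity/weight/heat metric the thesis uses; the bin count is also an assumption.

```python
import numpy as np

def toy_emotion(r, g, b):
    """Placeholder per-color score (warm red positive, cool blue
    negative). NOT the thesis's psychophysical metric."""
    return (r - b) / 255.0

def image_emotion(img, bins=8):
    """Histogram-weighted emotion score for an (H, W, 3) uint8 image.

    The RGB histogram is computed once, then a per-bin emotion value is
    weighted by the bin's share of pixels, so cost is independent of
    image size once the histogram exists.
    """
    h, edges = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
    centers = [(e[:-1] + e[1:]) / 2 for e in edges]
    rc, gc, bc = np.meshgrid(*centers, indexing="ij")
    weights = h / h.sum()
    return float((weights * toy_emotion(rc, gc, bc)).sum())
```

With 8 bins per channel there are only 512 bin scores to evaluate per image, which is what makes this style of descriptor feasible for databases of millions of images.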
