61

Image classification, storage and retrieval system for a 3U CubeSat

Gashayija, Jean Marie January 2014 (has links)
Thesis submitted in fulfillment of the requirements for the degree Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology / Small satellites, such as CubeSats, are mainly utilized for space and earth imaging missions. Imaging CubeSats are equipped with high-resolution cameras for capturing digital images, as well as mass storage devices for storing the images. The captured images are transmitted to the ground station and subsequently stored in a database. The main problem with stored images in a large image database, identified by researchers and developers in recent years, is the retrieval of precise, clear images and overcoming the semantic gap. The semantic gap refers to the lack of correlation between the semantic categories the user requires and the low-level features that a content-based image retrieval system offers. Clear images are needed for applications such as mapping, disaster monitoring and town planning. The main objective of this thesis is the design and development of an image classification, storage and retrieval system for a CubeSat. This system enables efficient classification, storage and retrieval of images that are received on a daily basis from an in-orbit CubeSat. In order to propose such a system, a specific research methodology was chosen and adopted. This entailed extensive literature reviews on image classification and image feature extraction techniques for extracting content embedded within an image, together with studies on image database systems, data mining techniques and image retrieval techniques. The literature study led to a requirements analysis, followed by an analysis of software development models in order to design the system. The proposed design entails classifying images using content embedded in the image and also extracting image metadata such as date and time. Specific feature extraction techniques are needed to extract the required content and metadata. In order to extract the information embedded in the image, colour (colour histogram), shape (mathematical morphology) and texture (grey-level co-occurrence matrix, GLCM) feature techniques were used. Other major contributions of this project include a graphical user interface which enables users to search for similar images among those stored in the database. An automatic image extractor algorithm was also designed to classify images according to date and time, and colour, texture and shape feature extractor techniques were proposed. These ensure that when a user queries the database, the shape objects, colour quantities and contrast contained in an image are extracted and compared to those stored in the database. Implementation and test results showed that the designed system is able to categorize images automatically while providing efficient and accurate results. The features extracted for each image depend on the colour, shape and texture methods. Optimal parameter values were also incorporated in order to reduce retrieval times. The mathematical morphology technique was used to compute shape objects using erosion and dilation operators, and the co-occurrence matrix was used to compute the texture feature of the image.
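The abstract names three concrete extractors. As a rough illustration of how such a combined feature vector might be assembled, the sketch below uses NumPy and scikit-image; the library choice, parameter values and the fixed threshold are assumptions for illustration, not the thesis's implementation.

```python
# Minimal sketch of the three feature families the abstract names:
# colour histogram, GLCM texture, and morphological shape measures.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.morphology import binary_erosion, binary_dilation, disk

def colour_histogram(rgb, bins=8):
    """Concatenated per-channel histogram, normalised to unit sum."""
    hist = [np.histogram(rgb[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def glcm_texture(gray):
    """Contrast and homogeneity from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, 'contrast')[0, 0],
                     graycoprops(glcm, 'homogeneity')[0, 0]])

def morphological_shape(gray, thresh=128):
    """Rough shape descriptor: object area before/after erosion and dilation."""
    mask = gray > thresh          # thresh=128 is an arbitrary assumption
    se = disk(3)
    return np.array([mask.sum(),
                     binary_erosion(mask, se).sum(),
                     binary_dilation(mask, se).sum()], dtype=float)

rgb = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # stand-in image
gray = rgb.mean(axis=2).astype(np.uint8)
feature_vector = np.concatenate([colour_histogram(rgb),
                                 glcm_texture(gray),
                                 morphological_shape(gray)])
print(feature_vector.shape)
```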
62

Similarity search operations in large complex databases

Maria Camila Nardini Barioni 04 September 2006 (has links)
Database Management Systems (DBMS) were developed to store and efficiently retrieve data composed only of numbers and short strings. However, over the last decades there has been a significant increase in both the volume and the complexity of the data being managed, such as multimedia data (images, audio tracks and video), geo-referenced information and time series. Thus, the need to develop new techniques that allow the efficient handling of complex data types has also increased. In order to support these data and the corresponding applications, the DBMS needs to support similarity queries, i.e., queries that search for objects similar to a query object according to a similarity measure. The need to support similarity queries in DBMS is also related to the integration of data mining techniques, which requires the DBMS to provide resources that allow the execution of basic operations for several existing data mining techniques. A basic operation for several of these techniques, such as cluster detection, is precisely the computation of similarity measures among pairs of objects of a data set. Although there is a need to execute this kind of query in DBMS, the SQL standard does not allow the specification of similarity queries. Hence, this thesis aims at contributing to the support of such queries, integrating into SQL the resources needed to execute similarity query operations over large sets of complex data, fully integrated with the other resources of the language.
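To make the operation concrete: a similarity query of the kind described returns the k stored objects nearest to a query object under some metric. The sketch below shows those semantics in plain Python over feature vectors; the data is synthetic and the SQL-like syntax in the trailing comment is hypothetical, not the thesis's actual grammar.

```python
# Semantics of a k-nearest-neighbour similarity query:
# "find the k objects most similar to q" under the L2 metric.
import numpy as np

def knn_query(features: np.ndarray, q: np.ndarray, k: int = 5):
    """Return indices of the k feature vectors closest to q."""
    dists = np.linalg.norm(features - q, axis=1)
    return np.argsort(dists)[:k]

# 1000 stored complex objects, each described by a 32-d feature vector.
rng = np.random.default_rng(0)
db = rng.random((1000, 32))
query_obj = rng.random(32)
print(knn_query(db, query_obj, k=5))
# A similarity-extended SQL might express the same operation as, e.g.:
#   SELECT * FROM images ORDER BY features NEAR :query STOP AFTER 5
# (illustrative syntax only)
```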
63

Content-based search and browsing in semantic multimedia retrieval

Rautiainen, M. (Mika) 04 December 2006 (has links)
Growth in storage capacity has led to large digital video repositories and complicated the discovery of specific information without the laborious manual annotation of data. The research focuses on creating a retrieval system that is ultimately independent of manual work. To retrieve relevant content, the semantic gap between the searcher's information need and the content data has to be overcome using content-based technology. The semantic gap consists of two distinct elements: the ambiguity of the true information need and the equivocalness of digital video data. The research problem of this thesis is: what computational content-based models for retrieval increase the effectiveness of the semantic retrieval of digital video? The hypothesis is that semantic search performance can be improved using pattern recognition, data abstraction and clustering techniques jointly with human interaction through manually created queries and visual browsing. The results of this thesis comprise: an evaluation of two perceptually oriented colour spaces, with details on the applicability of the HSV and CIE Lab spaces for low-level feature extraction; the development and evaluation of low-level visual features in example-based retrieval for image and video databases; the development and evaluation of a generic model for simple and efficient concept detection from video sequences, with good detection performance on large video corpora; the development of combination techniques for multi-modal visual, concept and lexical retrieval; and the development of a cluster-temporal browsing model as a data navigation tool, evaluated on several large and heterogeneous collections containing an assortment of video from educational and historical recordings to contemporary broadcast news, commercials and a multilingual television broadcast. The methods introduced here have been found to facilitate semantic queries for novice users without laborious manual annotation. Cluster-temporal browsing was found to outperform the conventional approach, which consists of sequential queries and relevance feedback, in semantic video retrieval by a statistically significant margin.
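As one illustration of the multi-modal combination step mentioned above, a common baseline is late fusion of per-modality scores. The sketch below shows weighted min-max fusion; the weights and normalisation scheme are assumptions, not necessarily the combination techniques developed in the thesis.

```python
# Late fusion: weighted sum of min-max normalised scores from the
# visual, concept and lexical modalities, producing one ranking.
import numpy as np

def minmax(s):
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(visual, concept, lexical, w=(0.4, 0.3, 0.3)):
    """Combine per-modality scores; weights w are illustrative."""
    return (w[0] * minmax(visual) + w[1] * minmax(concept)
            + w[2] * minmax(lexical))

# Scores for five candidate video shots from three modalities.
visual  = [0.9, 0.2, 0.5, 0.7, 0.1]
concept = [0.1, 0.8, 0.4, 0.6, 0.3]
lexical = [0.0, 0.9, 0.2, 0.5, 0.4]
ranking = np.argsort(-fuse(visual, concept, lexical))
print(ranking)  # best-first shot ordering
```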
64

Auditory-based processing of communication sounds

Walters, Thomas C. January 2011 (has links)
This thesis examines the possible benefits of adapting a biologically-inspired model of human auditory processing as part of a machine-hearing system. Features were generated by an auditory model and used as input to machine learning systems to determine the content of the sound. Features were generated using the auditory image model (AIM) and were used for speech recognition and audio search. AIM comprises processing to simulate the human cochlea, and a 'strobed temporal integration' process which generates a stabilised auditory image (SAI) from the input sound. The communication sounds which are produced by humans, other animals, and many musical instruments take the form of a pulse-resonance signal: pulses excite resonances in the body, and the resonance following each pulse contains information both about the type of object producing the sound and its size. In the case of humans, vocal tract length (VTL) determines the size properties of the resonance. In the speech recognition experiments, an auditory filterbank was combined with a Gaussian fitting procedure to produce features which are invariant to changes in speaker VTL. These features were compared against standard mel-frequency cepstral coefficients (MFCCs) in a size-invariant syllable recognition task. The VTL-invariant representation was found to produce better results than MFCCs when the system was trained on syllables from simulated talkers of one range of VTLs and tested on those from simulated talkers with a different range of VTLs. The image stabilisation process of strobed temporal integration was analysed. Based on the properties of the auditory filterbank being used, theoretical constraints were placed on the properties of the dynamic thresholding function used to perform strobe detection. These constraints were used to specify a simple, yet robust, strobe detection algorithm. The syllable recognition system described above was then extended to produce features from profiles of the SAI and tested with the same syllable database as before. For clean speech, performance of the features was comparable to that of those generated from the filterbank output. However, when pink noise was added to the stimuli, performance dropped more slowly as a function of signal-to-noise ratio when using the SAI-based AIM features than when using either the filterbank-based features or the MFCCs, demonstrating the noise-robustness properties of the SAI representation. The properties of the auditory filterbank in AIM were also analysed. Three models of the cochlea were considered: the static gammatone filterbank, the dynamic compressive gammachirp (dcGC) and the pole-zero filter cascade (PZFC). The dcGC and gammatone are standard filterbank models, whereas the PZFC is a filter cascade which more accurately models signal propagation in the cochlea. However, while the architecture of the filterbanks is different, they have all been successfully fitted to psychophysical masking data from humans. The abilities of the filterbanks to measure pitch strength were assessed, using stimuli which evoke a weak pitch percept in humans, in order to ascertain whether there is any benefit in the use of the more computationally efficient PZFC. Finally, a complete sound effects search system using auditory features was constructed in collaboration with Google Research. Features were computed from the SAI by sampling the SAI space with boxes of different scales. Vector quantization (VQ) was used to convert this multi-scale representation to a sparse code.
The 'passive-aggressive model for image retrieval' (PAMIR) was used to learn the relationships between dictionary words and these auditory codewords. These auditory sparse codes were compared against sparse codes generated from MFCCs, and the best performance was found when using the auditory features.
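The strobed temporal integration step hinges on the dynamic thresholding function mentioned above. A deliberately naive sketch of strobe detection with an exponentially decaying threshold is shown below; the decay constant and reset rule are illustrative assumptions rather than the constrained algorithm the thesis derives.

```python
# Naive strobe detection: a strobe fires when the signal exceeds a
# threshold; the threshold is then reset to the signal value and
# decays exponentially between strobes.
import numpy as np

def detect_strobes(x, decay=0.97):
    """Return sample indices at which strobes fire."""
    threshold, strobes = 0.0, []
    for i, sample in enumerate(x):
        if sample > threshold:
            strobes.append(i)        # signal crossed the threshold
            threshold = sample       # reset threshold to the new peak
        threshold *= decay           # exponential decay per sample
    return strobes

# Pulse-resonance-like test signal: rectified, sharpened 200 Hz tone.
t = np.linspace(0, 0.05, 800)
pulse_train = np.maximum(0, np.sin(2 * np.pi * 200 * t)) ** 4
print(detect_strobes(pulse_train)[:10])
```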
65

Evaluating human-computer interaction techniques for content-based image retrieval systems through a case study

Ana Lúcia Filardi 30 August 2007 (has links)
Content-based image retrieval (CBIR) is a challenging area of computer science that has been growing at a very fast pace in recent years. CBIR systems employ techniques for extracting features from images, composing the feature vectors, and storing them together with the images in a database management system, allowing indexing and querying. CBIR systems deal with large volumes of images; therefore, the feature vectors are extracted by automatic methods. These systems allow querying images by content, processing similarity queries, which inherently demands user interaction. Consequently, CBIR systems must pay attention to the user interface, aiming at providing friendly, intuitive and attractive interaction, leading the user to carry out tasks efficiently, get the desired results, and feel safe and satisfied. From these points, we can state that human-computer interaction (HCI) is a key element of a CBIR system. However, there is still little research on HCI for CBIR systems. One of the requirements of HCI for CBIR is to provide a high-quality interface that allows the user to search for images similar to a given query image and to display the results properly, allowing further interaction. The present dissertation analyzes user interaction in CBIR systems specially suited to medical applications, evaluating their usability by applying HCI techniques. To do so, a case study was employed, and the results are presented.
66

A Comparison between Different Recommender System Approaches for a Book and an Author Recommender System

Hedlund, Jesper, Nilsson Tengstrand, Emma January 2020 (has links)
A recommender system is a popular tool used by companies to increase customer satisfaction and revenue. Collaborative filtering and content-based filtering are the two most common approaches when implementing a recommender system: the former provides recommendations based on user behaviour, while the latter uses the characteristics of the items being recommended. The aim of the study was to develop and compare different recommender system approaches for both book and author recommendations, and to assess their ability to predict user ratings in an e-book application. The models were evaluated by measuring root mean square error (RMSE) and mean absolute error (MAE). Two pure models were developed, one based on collaborative filtering and one based on content-based filtering. In addition, three different hybrid models combining the two pure approaches were developed and compared to the pure models. The study also explored how aggregating book data to author level could be used to implement an author recommender system. The results showed that the aggregated author data was more difficult to predict, although it was difficult to draw any conclusions about performance on author data because of the aggregation. It was clear, however, that author recommendations could be derived from book data. The study also showed that the collaborative filtering model performed better than the content-based filtering model according to RMSE, but not according to MAE. The lowest RMSE and MAE, however, were achieved by combining the two approaches in a hybrid model.
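For reference, the two evaluation metrics and the idea of a weighted hybrid can be stated compactly. In the sketch below, the blend weight alpha and the toy ratings are arbitrary assumptions, not the study's tuned values or data.

```python
# RMSE and MAE over predicted ratings, plus a weighted hybrid that
# blends collaborative-filtering and content-based predictions.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def hybrid(cf_pred, cb_pred, alpha=0.5):
    """Weighted blend of collaborative (cf) and content-based (cb) ratings."""
    return alpha * np.asarray(cf_pred) + (1 - alpha) * np.asarray(cb_pred)

actual = [4.0, 3.0, 5.0, 2.0]
cf     = [3.8, 3.4, 4.6, 2.5]   # collaborative-filtering predictions
cb     = [4.2, 2.6, 4.2, 2.9]   # content-based predictions
h = hybrid(cf, cb)
print(rmse(actual, h), mae(actual, h))
```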
67

Mining and exploitation of frequent visual objects in multimedia collections

Letessier, Pierre 28 March 2013 (has links)
The main goal of this thesis is the discovery of frequent visual objects in large multimedia collections (images or videos). As in many areas (finance, genetics, ...), it consists in extracting knowledge automatically or semi-automatically, using the occurrence frequency of an object within a corpus as the relevance criterion. A first contribution of this thesis is to provide a formalism for the problems of discovering and mining instances of frequent visual objects. The second contribution is a generic method for solving these two kinds of problem, based on an iterative process for sampling candidate objects and on an efficient, large-scale method for matching rigid objects. The third contribution focuses on building a likelihood function that approaches the perfect distribution as closely as possible while remaining scalable and efficient. Experiments show that, contrary to state-of-the-art methods, our approach can efficiently discover very small objects in several million images. Finally, several scenarios for exploiting the visual graphs produced by our method are proposed and evaluated, including trademark logo discovery, transmedia event detection and visual query suggestion.
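A heavily simplified view of the iterative sampling loop described above is sketched below; exact set membership stands in for the thesis's large-scale rigid-object matching, and the iteration count and support threshold are arbitrary assumptions.

```python
# Toy mining loop: repeatedly sample a candidate object from a random
# image, count its occurrences across the collection, and keep it if
# its support passes a threshold. Real matching is far more involved.
import random

def mine_frequent(collection, iterations=1000, min_support=3, seed=0):
    rng = random.Random(seed)
    found = {}
    for _ in range(iterations):
        image = rng.choice(collection)
        candidate = rng.choice(image)            # sample a candidate object
        support = sum(candidate in img for img in collection)
        if support >= min_support:
            found[candidate] = support
    return found

# Each "image" is the list of object ids it contains (a stand-in for
# actual visual matching against image content).
images = [["logo_a", "car"], ["logo_a", "tree"], ["logo_a"], ["car", "tree"]]
print(mine_frequent(images))  # {'logo_a': 3}
```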
68

A Study on Web Search based on Coordinate Relationships

Meng, Zhao 23 September 2016 (has links)
Kyoto University / 0048 / New-system doctorate by coursework / Doctor of Informatics / Degree No. Kō 20030 / Jōhaku No. 625 / 新制||情||109 (University Library) / 33126 / Department of Social Informatics, Graduate School of Informatics, Kyoto University / (Chief Examiner) Professor Katsumi Tanaka, Professor Masatoshi Yoshikawa, Professor Sadao Kurohashi / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
69

Practical Bilingual Education - A case study of teachers and students

Munklinde, Markus January 2008 (has links)
Content-based learning using English as a target language is a method that has been used for decades in Sweden. This thesis focuses on three practical subjects and how they are taught through the medium of English. The intention was to highlight both benefits and problems of bilingual teaching, and to look at language patterns between teacher and student inside and outside the classroom. This was done using interviews and observations as research methods. Both teachers' and students' perceptions have been investigated and analyzed. The research showed that teachers find the teaching rewarding and worthwhile, but there are some student issues regarding vocabulary and terminology. Furthermore, instructional teaching patterns and code-switching were investigated. This thesis also contains students' views on their bilingual education.
70

Content Based Addressing: The case for multiple Internet service providers

Mört, Robert January 2012 (has links)
Today's Internet usage is changing from host-to-host communication to user-to-content interaction, which poses a challenge for Internet Service Providers (ISPs). Repeated requests lead to transfers of large amounts of traffic containing the same content, often over costly inter-ISP connections. Content Distribution Networks (CDNs) contribute to solving this issue, but do not directly address the problem. This thesis project explores how content-based addressing could minimize inter-ISP traffic due to repeated requests for content, by caching content within the ISP's network. We implemented CCNx 0.6.0 in a network testbed in order to simulate scenarios with multiple interconnected ISPs. This testbed is used to illustrate how caching of popular content minimizes inter-ISP traffic, as well as how content's independence of location mitigates other network problems such as link failures and congestion. These tests show that the large overhead of the CCNx implementation, due to its additional headers, brings a 16% performance reduction compared to Hypertext Transfer Protocol (HTTP) transfers. However, they also show that the inter-ISP traffic cost of CCNx transfers is constant regardless of the number of repeated requests, due to content caching in the ISP's network. As soon as there is more than one request for the same content, there is a gain in using CCNx rather than HTTP for content transfer.
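The constant-cost result can be illustrated with a toy cache model: with ISP-local caching, each content object crosses the inter-ISP link once, no matter how many local users request it. The workload below is invented for illustration.

```python
# Count fetches crossing the ISP boundary for a request stream,
# with and without an ISP-local content cache (CCN-style).
def inter_isp_fetches(requests, cached=True):
    cache, fetches = set(), 0
    for name in requests:
        if cached and name in cache:
            continue                 # served from the ISP-local cache
        fetches += 1                 # content pulled over the inter-ISP link
        cache.add(name)
    return fetches

workload = ["video/1"] * 100 + ["video/2"] * 50   # repeated popular content
print(inter_isp_fetches(workload, cached=False))  # 150: every request crosses
print(inter_isp_fetches(workload, cached=True))   # 2: one fetch per object
```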
