21

Learning Language-vision Correspondences

Jamieson, Michael 15 February 2011 (has links)
Given an unstructured collection of captioned images of cluttered scenes featuring a variety of objects, our goal is to simultaneously learn the names and appearances of the objects. Only a small fraction of local features within any given image are associated with a particular caption word, and captions may contain irrelevant words not associated with any image object. We propose a novel algorithm that uses the repetition of feature neighborhoods across training images and a measure of correspondence with caption words to learn meaningful feature configurations (representing named objects). We also introduce a graph-based appearance model that captures some of the structure of an object by encoding the spatial relationships among the local visual features. In an iterative procedure we use language (the words) to drive a perceptual grouping process that assembles an appearance model for a named object. We also exploit co-occurrences among appearance models to learn hierarchical appearance models. Results of applying our method to three data sets in a variety of conditions demonstrate that from complex, cluttered, real-world scenes with noisy captions, we can learn both the names and appearances of objects, resulting in a set of models invariant to translation, scale, orientation, occlusion, and minor changes in viewpoint or articulation. These named models, in turn, are used to automatically annotate new, uncaptioned images, thereby facilitating keyword-based image retrieval.
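To make the core intuition concrete, here is a minimal sketch (our own illustration, not the thesis's algorithm) of scoring word-feature correspondences by their co-occurrence across captioned images, using pointwise mutual information; the toy data and the PMI choice are assumptions.

```python
from collections import Counter
from itertools import product
from math import log

# Toy data: each image pairs caption words with quantized local-feature ids.
images = [
    ({"airplane", "sky"}, {17, 42, 93}),
    ({"airplane", "tarmac"}, {17, 42, 8}),
    ({"dog", "grass"}, {5, 61}),
]

word_count, feat_count, joint = Counter(), Counter(), Counter()
for words, feats in images:
    word_count.update(words)
    feat_count.update(feats)
    joint.update(product(words, feats))   # count (word, feature) pairs

n = len(images)

def pmi(word, feat):
    """Pointwise mutual information between a caption word and a feature id."""
    p_w, p_f = word_count[word] / n, feat_count[feat] / n
    p_wf = joint[(word, feat)] / n
    return log(p_wf / (p_w * p_f)) if p_wf else float("-inf")

print(pmi("airplane", 42))   # high: feature 42 repeats across airplane images
print(pmi("airplane", 61))   # -inf: never co-occur
```

In the thesis, such correspondence scores are combined with spatial neighborhood structure to grow full appearance models; the sketch shows only the word-driven selection signal.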
22

Pixel Based Note Taking through Perceptual Structure Inference

Harris, Mitchell Kent 08 October 2010 (has links) (PDF)
Knowledge workers need effective annotation tools to assimilate information. Unfortunately, many digital annotators are limited in the range of documents they accept. Those that do accept many different documents do so by converting them to images, thus losing any awareness of the original content of the document. We introduce a digital note taker that is both universal and content aware. By constructing a hierarchical context tree of document images, the structure of a document is inferred from the image. This hierarchical context tree is shown to be useful by demonstrating how it facilitates selection of document elements, reflowing documents to accommodate inserted notes, and expanding the context of links. PixelJot, an implementation of these ideas, demonstrates their feasibility.
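A hedged sketch of the hierarchical-context-tree idea: nodes hold pixel bounding boxes inferred from the document image, and element selection walks to the deepest node under a click. The node names and fields are illustrative, not PixelJot's actual structures.

```python
from dataclasses import dataclass, field

@dataclass
class ContextNode:
    label: str                       # e.g. "page", "paragraph", "line"
    bbox: tuple                      # (x0, y0, x1, y1) in pixels
    children: list = field(default_factory=list)

    def contains(self, x, y):
        x0, y0, x1, y1 = self.bbox
        return x0 <= x <= x1 and y0 <= y <= y1

    def deepest_at(self, x, y):
        """Return the most specific node under a click, for element selection."""
        if not self.contains(x, y):
            return None
        for child in self.children:
            hit = child.deepest_at(x, y)
            if hit:
                return hit
        return self

page = ContextNode("page", (0, 0, 800, 1000), [
    ContextNode("paragraph", (50, 100, 750, 300), [
        ContextNode("line", (50, 100, 750, 140)),
        ContextNode("line", (50, 150, 750, 190)),
    ]),
])
print(page.deepest_at(60, 120).label)   # -> "line"
print(page.deepest_at(60, 250).label)   # -> "paragraph"
```

The same tree supports reflow: inserting a note below a "line" node means shifting the bounding boxes of every later sibling down by the note's height.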
23

Querying with Ontological Terminologies and Their Annotations

Sun, Yi 01 May 2007 (has links)
No description available.
24

A Semantics-based User Interface Model for Content Annotation, Authoring and Exploration

Khalili, Ali 02 February 2015 (has links) (PDF)
The Semantic Web and Linked Data movements, which aim to create, publish, and interconnect machine-readable information, have gained traction in recent years. However, the majority of information is still contained in, and exchanged via, unstructured documents such as Web pages, text documents, images, and videos. Nor can this be expected to change, since text, images, and videos are the natural way in which humans interact with information. Semantic structuring of content, on the other hand, provides a wide range of advantages over unstructured information: semantically enriched documents facilitate information search and retrieval, presentation, integration, reusability, interoperability, and personalization. Looking at the life-cycle of semantic content on the Web of Data, we see considerable progress on the backend in storing structured content and in linking data and schemata. Nevertheless, the least developed aspect of the semantic content life-cycle is, from our point of view, the user-friendly manual and semi-automatic creation of rich semantic content. In this thesis, we propose a semantics-based user interface model that aims to reduce the complexity of the underlying technologies for semantic enrichment of content by Web users. By surveying existing tools and approaches for semantic content authoring, we extracted a set of guidelines for designing efficient and effective semantic authoring user interfaces. We applied these guidelines to devise a semantics-based user interface model called WYSIWYM (What You See Is What You Mean), which enables integrated authoring, visualization, and exploration of unstructured and (semi-)structured content. To assess the applicability of the WYSIWYM model, we incorporated it into four real-world use cases comprising two general and two domain-specific applications. These use cases address four aspects of the WYSIWYM implementation: 1) its integration into existing user interfaces, 2) its use for lightweight text analytics to incentivize users, 3) crowdsourcing of semi-structured e-learning content, and 4) authoring of semantic medical prescriptions.
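For illustration only, this is the kind of machine-readable statement a WYSIWYM-style annotator might derive when a user highlights a phrase in a prescription; the URIs, class names, and properties below are invented, and rdflib is simply one convenient library for emitting the triples.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")   # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# The user highlights "Aspirin" and tags it as a Drug with a dosage;
# the tool emits structured triples rather than leaving plain text.
g.add((EX.annotation1, RDF.type, EX.Drug))
g.add((EX.annotation1, EX.label, Literal("Aspirin")))
g.add((EX.annotation1, EX.dosage, Literal("100 mg daily")))

print(g.serialize(format="turtle"))
```

The point of the WYSIWYM model is that this enrichment happens inside the familiar editing view, so the author never sees the raw triples.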
25

Extension automatique de l'annotation d'images pour la recherche et la classification / Automatic image annotation extension for search and classification

Bouzayani, Abdessalem 09 May 2018 (has links)
This thesis deals with the problem of image annotation extension. The rapid growth of available visual content has created a need for multimedia indexing and retrieval techniques. Image annotation enables easy and fast indexing and retrieval in large image collections. Starting from partially, manually annotated image databases, we aim to complete their annotations automatically, in order to make image retrieval and/or classification methods more effective. For automatic annotation extension we use probabilistic graphical models. The proposed model is a mixture of multinomial distributions and Gaussian mixtures, in which visual and textual features are combined. To reduce the cost of manual annotation and improve the quality of the resulting annotations, we incorporate user feedback into the model, using learning-within-learning, incremental learning, and active learning. To narrow the semantic gap and enrich the annotations, we use a semantic hierarchy that models numerous semantic relationships between annotation keywords. We present a semi-automatic method for building such a hierarchy from a set of keywords, and then integrate it into our annotation model. The model obtained with the hierarchy is a mixture of Bernoulli distributions and Gaussian mixtures.
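A hedged sketch of the scoring step in this family of models: each keyword carries a Gaussian model over visual features, and a new image's keywords are ranked by posterior probability. This is a deliberate simplification of the thesis's multinomial/Gaussian mixture; the means, variances, and priors are toy values.

```python
import numpy as np

def log_gauss(x, mean, var):
    """Log-density of a diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Toy per-keyword visual models: (prior, mean, diagonal variance).
models = {
    "beach": (0.4, np.array([0.8, 0.2]), np.array([0.05, 0.05])),
    "forest": (0.6, np.array([0.2, 0.9]), np.array([0.05, 0.05])),
}

def annotate(feature, top_k=1):
    """Rank keywords by posterior p(w|v), proportional to p(v|w) p(w)."""
    scores = {w: np.log(prior) + log_gauss(feature, m, v)
              for w, (prior, m, v) in models.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

print(annotate(np.array([0.75, 0.25])))   # -> ['beach']
```

User feedback of the kind the thesis describes would adjust the priors or component parameters as the annotator confirms or rejects suggested keywords.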
26

Le développement de corpus annotés pour la langue arabe / Building annotated corpora for the Arabic language

Zaghouani, Wajdi 06 January 2015 (has links)
The goal of this thesis is to present the various facets of corpus annotation for the Arabic language. We present our published work on corpus annotation and the creation of lexical resources for Arabic. First, we discuss the methods, the linguistic difficulties, the annotation guidelines, the optimization of the annotation effort, and the adaptation of existing annotation procedures to Arabic. We then show the complementarity between the different layers of annotation. Finally, we illustrate the importance of this work for natural language processing through examples of resources and applications.
27

Vyhledávání a aktualizace fragmentů anotací / Annotation Fragments Searching and Updates

Kubík, Lukáš January 2014 (has links)
This master's thesis analyzes the annotation server's algorithms for searching and updating annotation fragments. The annotation server is part of the Decipher project. The analyzed algorithms are improved and replaced by newly designed ones. The thesis also designs a new algorithm for measuring how much an annotation is affected after its document is updated.
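As a rough sketch of that last idea (our own illustration, not the thesis's algorithm), one can locate an annotated fragment's best match in the updated text and report the similarity as an impact score:

```python
from difflib import SequenceMatcher

def fragment_impact(fragment, updated_text):
    """Return (similarity, best_match) for the fragment in the updated text."""
    best, best_ratio = "", 0.0
    n = len(fragment)
    # Slide a fragment-sized window over the updated text.
    for i in range(max(1, len(updated_text) - n + 1)):
        window = updated_text[i:i + n]
        ratio = SequenceMatcher(None, fragment, window).ratio()
        if ratio > best_ratio:
            best_ratio, best = ratio, window
    return best_ratio, best

original = "annotation servers store fragments"
updated = "modern annotation servers now store text fragments efficiently"
score, match = fragment_impact(original, updated)
print(f"{score:.2f}", repr(match))   # high score: the annotation survives
```

A low score would flag the annotation as orphaned, so the server can re-anchor or retire it.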
28

Automatic caption generation for news images

Feng, Yansong January 2011 (has links)
This thesis is concerned with the task of automatically generating captions for images, which is important for many image-related applications. Automatic description generation for video frames would help security authorities manage and utilize large volumes of monitoring data more efficiently. Image search engines could potentially benefit from image descriptions in supporting more accurate and targeted queries for end users. Importantly, generating image descriptions would aid blind or partially sighted people who cannot access visual information in the same way as sighted people can. However, previous work has relied on fine-grained resources, manually created for specific domains and applications. In this thesis, we explore the feasibility of automatic caption generation for news images in a knowledge-lean way. We depart from previous work in that we learn a model of caption generation from publicly available data that has not been explicitly labelled for our task. The model consists of two components, namely extracting image content and rendering it in natural language. Specifically, we exploit data resources where images and their textual descriptions co-occur naturally. We present a new dataset consisting of news articles, images, and their captions that we retrieved from the BBC News website. Rather than laboriously annotating images with keywords, we simply treat the captions as labels. We show that it is possible to learn the visual-textual correspondence under such noisy conditions by extending an existing generative annotation model (Lavrenko et al., 2003). We also find that the accompanying news documents substantially complement the extraction of image content. To provide a better modelling and representation of image content, we propose a probabilistic image annotation model that exploits the synergy between the visual and textual modalities under the assumption that images and their textual descriptions are generated by a shared set of latent variables (topics). Using Latent Dirichlet Allocation (Blei and Jordan, 2003), we represent the visual and textual modalities jointly as a probability distribution over a set of topics. Our model takes these topic distributions into account when finding the most likely keywords for an image and its associated document. The availability of news documents in our dataset allows us to perform the caption generation task in a fashion akin to text summarization, save for one important difference: our model is not solely based on text but uses the image to select content from the document that should be present in the caption. We propose both extractive and abstractive caption generation models to render the extracted image content in natural language without relying on rich knowledge resources, sentence templates, or grammars. The backbone of both approaches is our topic-based image annotation model. Our extractive models examine how to best select sentences that overlap in content with our image annotation model. We adapt an existing abstractive headline generation model to our scenario by incorporating visual information. Our own model operates over image description keywords and document phrases, taking dependency and word order constraints into account. Experimental results show that both approaches can generate human-readable captions for news images. Our phrase-based abstractive model manages to yield captions as informative as those written by BBC journalists.
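A simplified sketch of the extractive step: rank the document's sentences by overlap with the image's annotation keywords. In the thesis those keywords come from the joint LDA model; here they are given directly as toy input, and the weighting scheme is an assumption.

```python
import re

def best_caption(sentences, keywords, weights=None):
    """Pick the sentence sharing the most (weighted) keywords with the image."""
    weights = weights or {w: 1.0 for w in keywords}
    def score(sentence):
        tokens = set(re.findall(r"[a-z]+", sentence.lower()))
        return sum(weights.get(w, 0.0) for w in keywords if w in tokens)
    return max(sentences, key=score)

doc = [
    "The prime minister spoke to reporters outside Downing Street.",
    "Protesters gathered in the square waving banners.",
    "Officials said the talks would resume next week.",
]
image_keywords = {"protesters", "banners", "square"}
print(best_caption(doc, image_keywords))  # -> the protest sentence
```

The abstractive models go further, recombining keywords and document phrases under dependency and word-order constraints rather than returning a whole sentence.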
29

Interactive video retrieval using implicit user feedback

Vrochidis, Stefanos January 2013 (has links)
In recent years, the rapid development of digital technologies and the low cost of recording media have greatly increased the availability of multimedia content worldwide. This availability creates demand for advanced search engines. Traditionally, manual annotation of video was one of the usual practices to support retrieval. However, the vast amount of multimedia content makes such practices very expensive in terms of human effort. At the same time, the availability of low-cost wearable sensors delivers a plethora of user-machine interaction data. There is therefore an important opportunity to exploit implicit user feedback (such as user navigation patterns and eye movements) during interactive multimedia retrieval sessions, with a view to improving video search engines. In this thesis, we focus on automatically annotating video content by exploiting the aggregated implicit feedback of past users, expressed as click-through data and gaze movements. Towards this goal, we conducted interactive video retrieval experiments to collect click-through and eye movement data in not strictly controlled environments. First, we generate semantic relations between multimedia items by proposing a graph representation of aggregated past interaction data, and exploit them to generate recommendations as well as to improve content-based search. Then, we investigate the role of user gaze movements in interactive video retrieval and propose a methodology for inferring user interest by employing support vector machines and gaze-movement-based features. Finally, we propose an automatic video annotation framework that combines query clustering into topics, by constructing gaze-movement-driven random forests and temporally enhanced dominant sets, with video shot classification for predicting the relevance of viewed items with respect to a topic. The results show that exploiting heterogeneous implicit feedback from past users adds value for future users of interactive video retrieval systems.
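A hedged sketch of the gaze-based interest classifier: a support vector machine over per-shot gaze features with a relevant/not-relevant label. The specific features and the data below are invented for illustration, not taken from the thesis.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Columns: fixation count, mean fixation duration (ms), saccade length (px).
X = np.array([
    [12, 310, 40], [15, 280, 35], [14, 350, 30],   # viewer dwelled: relevant
    [3, 120, 160], [2, 100, 180], [4, 140, 150],   # viewer skimmed: not
])
y = np.array([1, 1, 1, 0, 0, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[13, 300, 45]]))   # -> [1], inferred as relevant
```

Aggregated over many sessions, such per-shot relevance predictions become the implicit annotations that feed the retrieval framework.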
30

Management of Big Annotations in Relational Database Management Systems

Ibrahim, Karim 24 April 2014 (has links)
Annotations play a key role in understanding and describing data, and annotation management has become an integral component of most emerging applications, such as scientific databases. Scientists need to exchange not only data but also their thoughts, comments, and annotations on the data. Annotations capture comments, data lineage, descriptions, and much more, and several annotation management techniques have been proposed to handle them efficiently and abstractly. However, with the increasing scale of collaboration and the extensive use of annotations among users and scientists, the number and size of annotations may far exceed the size of the original data itself, and current techniques do not address annotation management at this scale. In this work, we tackle big annotations from three perspectives: (1) user-centric annotation propagation, (2) proactive annotation management, and (3) InsightNotes summary-based querying. We capture users' preferences in profiles and personalize annotation propagation at query time by reporting the most relevant annotations (per tuple) for each user according to a time plan; we provide three time-based plans and support both static and dynamic profiles for each user. Our proactive annotation management suggests data tuples to be annotated when a new annotation references a data value that the user has not annotated precisely. Moreover, we extend InsightNotes (summary-based annotation management in relational databases) with a query language that lets users query the annotation summaries and add predicates over the summaries themselves. Our system is implemented inside PostgreSQL.
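An illustrative sketch (not InsightNotes' implementation) of the user-centric propagation idea: rank a tuple's annotations against a per-user profile and return only the top-k at query time. The tags, weights, and profile shape are assumptions.

```python
def top_annotations(annotations, profile, k=2):
    """annotations: list of (text, tags); profile: tag -> weight."""
    def relevance(ann):
        _, tags = ann
        return sum(profile.get(t, 0.0) for t in tags)
    return sorted(annotations, key=relevance, reverse=True)[:k]

tuple_annotations = [
    ("sensor 7 recalibrated on 2014-01-02", {"provenance"}),
    ("value looks like an outlier",         {"quality"}),
    ("see protocol doc, section 3",         {"reference"}),
]
curator_profile = {"provenance": 1.0, "quality": 0.8}   # toy static profile
for text, _ in top_annotations(tuple_annotations, curator_profile):
    print(text)
```

A dynamic profile of the kind the thesis supports would update these weights from the user's query history instead of fixing them in advance.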
