1

Contribution to concept detection on images using visual and textual descriptors / Contribution à la détection de concepts sur des images utilisant des descripteurs visuels et textuels

Zhang, Yu 15 May 2014 (has links)
No abstract in French / This thesis is dedicated to the training and integration strategies of several modalities (visual, textual) in order to perform efficient Visual Concept Detection and Annotation (VCDA), a task that has become a very popular and important research topic in recent years because of its wide range of applications, such as image/video indexing and retrieval, security access control, and video monitoring. Despite the efforts and progress made during the past years, it remains an open problem and is still considered one of the most challenging problems in the computer vision community, mainly due to inter-class similarities and intra-class variations such as occlusion, background clutter, and changes in viewpoint, pose, scale and illumination. This means that the image content can hardly be described by low-level visual features alone. To address these problems, the text associated with images is used to capture valuable semantic meaning about the image content. Moreover, in order to benefit from both visual and textual models, we propose a multimodal approach. In typical visual models, designing good visual descriptors and modeling these descriptors play an important role; likewise, how the text associated with images is organized is also very important. In this context, the objective of this thesis is to propose innovative contributions to the VCDA task. For the visual models, a novel visual descriptor is proposed that represents the visual content of images/videos effectively and efficiently. In addition, a novel method for encoding local binary descriptors is presented. For the textual models, we propose two novel kinds of textual descriptors. The first is a semantic Bag-of-Words (sBoW) descriptor using a dictionary. The second is an Image Distance Feature (IDF) based on the tags associated with images. Finally, in order to benefit from both visual and textual models, fusion is carried out by Multiple Kernel Learning (MKL), efficiently embedding [...]
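
The abstract above describes fusing visual and textual models through multiple kernel learning (MKL). As a rough illustration of the general idea rather than the thesis's actual formulation, the sketch below combines per-modality RBF kernels with fixed weights (in true MKL the weights would be learned jointly with the classifier) and trains a concept classifier on the combined kernel; the feature arrays, labels and 0.6/0.4 weights are hypothetical.

```python
# Sketch: fixed-weight combination of visual and textual kernels,
# a simplified stand-in for the MKL fusion mentioned in the abstract.
# All data (X_visual, X_text, y) and the kernel weights are hypothetical.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test = 80, 20
X_visual = rng.normal(size=(n_train + n_test, 128))  # e.g. a visual descriptor per image
X_text = rng.normal(size=(n_train + n_test, 300))    # e.g. a semantic Bag-of-Words (sBoW) vector
y = rng.integers(0, 2, size=n_train + n_test)        # presence/absence of one visual concept

def combined_kernel(rows, cols, w_visual=0.6, w_text=0.4):
    """Weighted sum of per-modality RBF kernels (weights would be learned in real MKL)."""
    Kv = rbf_kernel(X_visual[rows], X_visual[cols])
    Kt = rbf_kernel(X_text[rows], X_text[cols])
    return w_visual * Kv + w_text * Kt

train_idx = np.arange(n_train)
test_idx = np.arange(n_train, n_train + n_test)

clf = SVC(kernel="precomputed")
clf.fit(combined_kernel(train_idx, train_idx), y[train_idx])
pred = clf.predict(combined_kernel(test_idx, train_idx))
print("predicted concept labels:", pred)
```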
2

Improving web multimedia information retrieval using social data

Bracamonte Nole, Teresa Jacqueline January 2018 (has links)
Thesis submitted for the degree of Doctor of Science, specialization in Computer Science / Searching for multimedia content is one of the most common tasks users perform on the Web. Web search engines have improved the precision of their multimedia searches and now provide a better user experience. However, these engines still fail to return accurate results for uncommon queries and for queries that refer to abstract concepts. In both scenarios, the main reason is the lack of prior information. This thesis focuses on improving multimedia information retrieval on the Web using data generated by the interaction between users and multimedia resources. To that end, we propose to improve multimedia retrieval from two perspectives: (1) extracting concepts relevant to multimedia resources, and (2) enriching multimedia descriptions with user-generated data. In both cases, we propose systems that work independently of the type of multimedia and of the language of the input data. Regarding the identification of concepts related to multimedia objects, we developed a system that goes from the query-specific search results to the concepts detected for that query. Our approach shows that a partial view of a large collection of multimedia documents can be exploited to detect concepts relevant to a given query. In addition, we designed a user-based evaluation showing that our concept detection algorithm is more robust than other similar approaches based on community detection. To improve multimedia descriptions, we developed a system that combines the audio-visual content of multimedia documents with information from their context in order to improve and generate new annotations for those documents. Specifically, we extract click data from query logs and use the queries as surrogates for manual annotations. A first inspection shows that queries provide a concise description of multimedia documents. The main objective of this thesis is to demonstrate the relevance of the context associated with multimedia documents for improving the retrieval of multimedia documents on the Web. In addition, we show that graphs provide a natural way to model multimedia problems. / Fondef D09I-1185, CONICYT-PCHA/Doctorado Nacional/2013-63130260, short-stay support from the Graduate School of the U. de Chile, and the Núcleo Milenio CIWS
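
One of the contributions above uses queries extracted from click logs as surrogates for manual annotations. The sketch below, with purely hypothetical log entries, illustrates that general idea: aggregate the terms of queries that led to clicks on a multimedia document and keep the most frequent ones as concise annotations. It illustrates the principle only, not the system built in the thesis, which also combines audio-visual content and handles click noise.

```python
# Sketch: query-click log entries as surrogate annotations for multimedia documents.
# The log records and document ids are hypothetical; a real log would need cleaning,
# language handling and filtering of noisy clicks.
from collections import Counter, defaultdict

click_log = [  # (query, clicked_document_id) pairs, purely illustrative
    ("eiffel tower at night", "img_001"),
    ("paris eiffel tower", "img_001"),
    ("tower paris lights", "img_001"),
    ("cute kitten video", "vid_042"),
    ("funny cat playing", "vid_042"),
]

annotations = defaultdict(Counter)
for query, doc_id in click_log:
    for term in query.lower().split():
        annotations[doc_id][term] += 1

# Keep the most frequent query terms per document as concise, user-generated annotations.
for doc_id, terms in annotations.items():
    top_terms = [t for t, _ in terms.most_common(3)]
    print(doc_id, "->", top_terms)
```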
3

Integrating Deep Learning with Correlation-based Multimedia Semantic Concept Detection

Ha, Hsin-Yu 01 September 2015 (has links)
The rapid advances in technology make the explosive growth of multimedia data possible and available to the public. Multimedia data can be defined as a data collection composed of various data types and different representations. Because multimedia data carries rich information, it has been widely adopted in different areas, such as surveillance event detection, medical abnormality detection, and many others. To fulfill the requirements of different applications, it is important to effectively classify multimedia data into semantic concepts across multiple domains. In this dissertation, a correlation-based multimedia semantic concept detection framework is seamlessly integrated with deep learning techniques. The framework aims to explore implicit and explicit correlations among features and concepts while adopting different Convolutional Neural Network (CNN) architectures accordingly. First, the Feature Correlation Maximum Spanning Tree (FC-MST) is proposed to remove redundant and irrelevant features based on the correlations between the features and the positive concepts. FC-MST identifies the effective features and decides the dimension of the initial layer of the CNNs. Second, a Negative-based Sampling method is proposed to alleviate the data imbalance issue by keeping only the representative negative instances in the training process. To adjust to different sizes of training data, the number of iterations for the CNN is determined adaptively and automatically. Finally, an Indirect Association Rule Mining (IARM) approach and a correlation-based re-ranking method are proposed to reveal implicit relationships from the correlations among concepts, which are further utilized together with the classification scores to enhance the re-ranking process. The framework is evaluated using two benchmark multimedia data sets, TRECVID and NUS-WIDE, which contain large amounts of multimedia data and various semantic concepts.
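
The FC-MST step described above filters features using feature-feature and feature-concept correlations. As one possible reading of that idea, and not the dissertation's exact procedure, the sketch below builds a maximum spanning tree over absolute feature correlations, drops the weaker member of each strongly correlated (redundant) pair, and discards features with near-zero correlation to the positive concept; the synthetic data and the 0.8 / 0.05 thresholds are hypothetical.

```python
# Sketch of a correlation-based feature filter in the spirit of FC-MST.
# Illustrative reading of the abstract only; data and thresholds are made up.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
n = 200
base = rng.normal(size=(n, 4))                                   # 4 informative signals
X = np.hstack([base,
               base[:, :2] + 0.1 * rng.normal(size=(n, 2)),      # 2 near-duplicates (redundant)
               rng.normal(size=(n, 6))])                         # 6 pure-noise features (irrelevant)
y = (base[:, 0] + 0.5 * base[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

feat_corr = np.corrcoef(X, rowvar=False)                         # feature-feature correlations
label_corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])

G = nx.Graph()
for i in range(X.shape[1]):
    for j in range(i + 1, X.shape[1]):
        G.add_edge(i, j, weight=abs(feat_corr[i, j]))
mst = nx.maximum_spanning_tree(G)                                # backbone of the strongest correlations

removed = set()
for i, j, d in mst.edges(data=True):
    if d["weight"] > 0.8:                                        # redundant pair: keep the one closer to the label
        removed.add(i if label_corr[i] < label_corr[j] else j)
for f in range(X.shape[1]):
    if label_corr[f] < 0.05:                                     # irrelevant feature
        removed.add(f)

selected = [f for f in range(X.shape[1]) if f not in removed]
print("selected feature indices:", selected)                     # would fix the CNN input dimension
```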
4

Contributions à la détection de concepts et d'événements dans les documents vidéos / Contributions for the concepts and events detection in videos documents

Derbas, Nadia 30 September 2014 (has links)
The explosion in the quantity of multimedia documents brought about by the rise of digital technology has made their indexing very costly and manually impossible. Consequently, the need for indexing systems capable of automatically analyzing, storing and retrieving multimedia documents based on their content (audio, visual) has been felt in many application domains. However, current indexing techniques still face feasibility and quality problems. Their performance remains very limited and depends on several factors, such as the variability and the quantity of the data to be processed. Indexing systems try to recognize static concepts, such as objects (bicycle, chair, ...), or events (wedding, protest, ...); they therefore run into the problem of variability in the shapes, positions, poses, illumination and orientations of objects. Scaling up to handle very large volumes of data while respecting computing-time and storage constraints is a further challenge.
The aim of this thesis is to improve the overall performance of content-based multimedia indexing systems. The problem is approached from different angles, and four contributions are brought at various stages of the indexing process. The first is an "early-early" fusion method that merges different modalities or information sources in order to best exploit the correlations between modalities; this method is then applied to violent scene detection in movies. The second is a weakly supervised method for localizing basic concepts (such as objects) in images, which can later be used as a descriptor and as additional information for detecting more complex concepts (such as events). The third tackles the problem of reducing the noise generated by ambiguous annotations of the training data, with two proposed methods: the generation of new shot-level annotations and a shot-weighting method. The last contribution is a method for optimizing multimedia content representations that combines a PCA-based dimensionality reduction with non-linear transformations. The four contributions are tested and evaluated on the reference data collections in the field, including TRECVid and MediaEval, and they contributed to the good rankings of our submissions in those evaluation campaigns.
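
The last contribution above combines a PCA-based dimensionality reduction with non-linear transformations of the descriptors. A minimal sketch of that kind of pipeline follows, assuming a signed power (square-root) normalization followed by L2 normalization and a whitened PCA; the particular transform, its ordering and the 64-dimension target are assumptions for illustration, not the thesis's exact recipe.

```python
# Sketch: descriptor optimization via non-linear normalization + PCA reduction.
# The raw descriptors, the alpha=0.5 power transform and the target dimension are hypothetical.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
descriptors = rng.gamma(shape=2.0, scale=1.0, size=(500, 1000))  # e.g. raw high-dimensional BoW-style descriptors

def power_l2_normalize(D, alpha=0.5):
    """Signed power normalization followed by L2 normalization (dampens bursty components)."""
    D = np.sign(D) * np.abs(D) ** alpha
    norms = np.linalg.norm(D, axis=1, keepdims=True)
    return D / np.maximum(norms, 1e-12)

pca = PCA(n_components=64, whiten=True)
compact = pca.fit_transform(power_l2_normalize(descriptors))
print("optimized descriptor shape:", compact.shape)              # (500, 64)
```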
