1

Fusing Semantic Information Extracted From Visual, Auditory And Textual Data Of Videos

Gulen, Elvan, 01 July 2012
In recent years, with the increasing use of videos, manual information extraction has become insufficient for users, and extracting semantic information automatically has turned into a serious requirement. Systems exist today that extract semantic information automatically from visual, auditory, or textual data separately, but studies that use more than one data source are very limited. As prior work on this topic has shown, using multimodal video data for automatic information extraction yields better results by increasing the accuracy of the semantic information retrieved from visual, auditory, and textual sources. In this thesis, a complete system that fuses the semantic information obtained from visual, auditory, and textual video data is introduced. The fusion system analyzes and unites the semantic information extracted from multimodal data by utilizing concept interactions, and consequently generates a semantic dataset that is ready to be stored in a database. In addition, experiments are conducted to compare the results of the proposed multimodal fusion operation with the results of semantic information extraction from a single modality and with other fusion methods. The results indicate that fusing all available information along with concept relations yields better overall results than any unimodal approach and other traditional fusion methods.
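The following is a minimal sketch of the general idea described in this abstract (late fusion of per-modality concept scores, refined with concept co-occurrence statistics); it is not the thesis implementation, and all names, weights, and the refinement formula are illustrative assumptions.

```python
# A minimal late-fusion sketch, not the thesis implementation: per-modality
# concept scores are combined by a weighted average and then adjusted using
# concept co-occurrence statistics. All names and weights are illustrative.
import numpy as np

def fuse_concept_scores(modality_scores, modality_weights, cooccurrence):
    """Fuse per-modality concept scores and refine them with concept relations.

    modality_scores: dict mapping modality name -> array of shape (n_concepts,)
    modality_weights: dict mapping modality name -> float (weights sum to 1)
    cooccurrence: (n_concepts, n_concepts) matrix of pairwise concept affinities,
                  e.g. normalized co-occurrence counts from training data.
    """
    # Weighted average of the unimodal confidence scores (late fusion).
    fused = sum(modality_weights[m] * np.asarray(s) for m, s in modality_scores.items())

    # Boost concepts that frequently co-occur with other confidently detected
    # concepts (a simple stand-in for the concept-interaction step).
    support = cooccurrence @ fused / max(cooccurrence.sum(axis=1).max(), 1e-9)
    refined = 0.8 * fused + 0.2 * support
    return np.clip(refined, 0.0, 1.0)

# Example with three hypothetical concepts and three modalities.
scores = {
    "visual":  np.array([0.7, 0.1, 0.6]),
    "audio":   np.array([0.5, 0.8, 0.4]),
    "textual": np.array([0.2, 0.9, 0.7]),
}
weights = {"visual": 0.4, "audio": 0.3, "textual": 0.3}
cooc = np.array([[1.0, 0.2, 0.6],
                 [0.2, 1.0, 0.3],
                 [0.6, 0.3, 1.0]])
print(fuse_concept_scores(scores, weights, cooc))
```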
2

Exploring Hidden Coherent Feature Groups and Temporal Semantics for Multimedia Big Data Analysis

Yang, Yimin, 31 August 2015
Advanced technologies and social networks that allow data to be shared widely across the Internet have produced an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage that data. To meet this demand, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework explores hidden semantic feature groups in multimedia data and incorporates temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) used to produce the initial detection results, and the TMCA algorithm then incorporates temporal semantics for re-ranking, improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to accommodate imbalanced datasets. Beyond the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis; the framework therefore also includes an affinity propagation based summarization method that transforms unorganized data into a cleaner, well-organized structure. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
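As a rough illustration of the summarization idea mentioned at the end of this abstract, the sketch below clusters item feature vectors with affinity propagation and treats the exemplars as a compact summary. It assumes items are already represented as feature vectors and is not the dissertation's implementation.

```python
# A minimal sketch of affinity-propagation-based summarization: cluster
# feature vectors of multimedia items and keep the exemplars as a summary.
# The feature vectors here are randomly generated placeholders.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Hypothetical feature vectors for 30 items (e.g. video shots or posts).
features = rng.normal(size=(30, 16))

ap = AffinityPropagation(random_state=0).fit(features)
exemplar_indices = ap.cluster_centers_indices_   # indices of representative items
labels = ap.labels_                              # cluster assignment per item

print("summary exemplars:", exemplar_indices)
print("items per cluster:", np.bincount(labels))
```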
3

CoreKTV - A knowledge-based infrastructure for Interactive Digital TV: a case study for the Ginga middleware

Araujo, Jônatas Pereira Cabral de, 23 September 2011
The advent of Digital TV has created a scenario with an increasing number of channels, services, and content available to the user. However, these changes are not reflected in the way multimedia content is described. The traditional model presents a syntactic structure that carries no semantic information, and its representation format for transmission does not allow interoperability with other systems. With the convergence between the TV and Web platforms, this scenario becomes even more problematic, since there is already a movement on the Web, led by the World Wide Web Consortium, to make systems interoperable and based on a homogeneous representation format. In this context, this work proposes a knowledge-based infrastructure for modeling multimedia content in Interactive Digital TV (TVDI), aligned with the concepts and standards of the Semantic Web, aiming to integrate the TVDI and Web platforms, and applied to the Brazilian standard middleware, Ginga.
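To make the "Semantic Web standards for TV content" idea concrete, here is a minimal sketch that describes a TV program as RDF with rdflib; the vocabulary and resource URIs are illustrative assumptions, not the ontology actually defined by the CoreKTV infrastructure.

```python
# A minimal sketch of describing TV content with Semantic Web standards using
# rdflib; the EX namespace and resources below are hypothetical examples.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import DC

EX = Namespace("http://example.org/tv#")

g = Graph()
g.bind("ex", EX)
g.bind("dc", DC)

program = EX["program/newscast"]
g.add((program, RDF.type, EX.TVProgram))
g.add((program, DC.title, Literal("Evening Newscast")))
g.add((program, EX.hasSegment, EX["segment/weather"]))
g.add((EX["segment/weather"], DC.subject, Literal("weather forecast")))

# Serialize as Turtle, a homogeneous representation other systems can consume.
print(g.serialize(format="turtle"))
```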
