
Content-based medical video retrieval using visual and sound feature extractors

Vagner Mendonça Gonçalves 12 December 2016 (has links)
Advances in storage devices and computer networks have allowed digital videos to assume an important role in the development of multimedia information systems. To take full advantage of digital videos in these systems, efficient automated techniques for analysis, interpretation, and retrieval are necessary. Content-based video retrieval (CBVR) enables the processing and analysis of the content of digital videos in order to extract relevant information for indexing and retrieval.
Scientific studies have proposed applying CBVR to medical video databases in order to provide contributions such as computer-aided diagnosis, decision-making support, and video databases for use in medical training and education. In general, visual features are the main information used in CBVR applied to medical videos. However, many diagnoses are performed by analyzing the sounds produced by different structures and organs of the human body. One example is cardiac diagnosis, which, in addition to imaging exams such as echocardiography and magnetic resonance imaging, may also employ the analysis of heart sounds by means of auscultation. The objective of this work was to apply and evaluate sound feature extractors in combination with visual feature extractors for CBVR, and then to determine whether this approach improves retrieval performance compared to using visual features alone. Medical videos were our main interest, but the work also considered videos unrelated to the medical field to validate the approach. This objective is justified because sound analysis aimed at obtaining relevant descriptors to improve retrieval results is still little explored in the scientific literature, a claim supported by a systematic review conducted on the topic. Two sets of experiments were conducted to validate the proposed CBVR approach. The first set was applied to a database of synthesized videos built for validating the approach; the second was applied to a database of videos constructed from magnetic resonance imaging exams combined with heart sounds from auscultation. Results were analyzed using the recall and precision metrics, as well as the precision-recall curve, and showed that the approach is promising: retrieval results improved significantly across the different tested combinations of visual and sound features.
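The thesis does not publish its implementation, but the general pipeline it describes is easy to illustrate. The following is a minimal sketch, assuming hypothetical visual and audio feature vectors have already been extracted per video; the fusion weight, distance function, and all names are illustrative, not the thesis's actual design:

```python
# Minimal sketch of combined visual + audio CBVR with precision/recall
# evaluation; assumed pipeline, not the thesis's implementation.
import numpy as np

def combined_descriptor(visual_feats, audio_feats, w_visual=0.5):
    """Fuse L2-normalized visual and audio feature vectors into one descriptor."""
    v = visual_feats / (np.linalg.norm(visual_feats) + 1e-12)
    a = audio_feats / (np.linalg.norm(audio_feats) + 1e-12)
    return np.concatenate([w_visual * v, (1.0 - w_visual) * a])

def retrieve(query, database, k=10):
    """Rank database descriptors by Euclidean distance to the query."""
    dists = [np.linalg.norm(query - d) for d in database]
    return np.argsort(dists)[:k]

def precision_recall(retrieved, relevant):
    """Standard precision and recall for a single query."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if len(retrieved) else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

Sweeping `k` and plotting the resulting (recall, precision) pairs yields the precision-recall curve the abstract refers to; comparing the curve for `w_visual=1.0` against intermediate weights corresponds to the visual-only versus combined-feature comparison.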

Efficient Index Structures For Video Databases

Acar, Esra 01 February 2008 (has links) (PDF)
Content-based retrieval of multimedia data is still an active research area, and efficient retrieval of video data has proven to be a difficult task for content-based video retrieval systems. This thesis presents a Content-Based Video Retrieval (CBVR) system that adapts two different index structures, Slim-Tree and BitMatrix, to efficiently retrieve videos based on low-level features such as color, texture, shape, and motion. The system represents the low-level features of video data with MPEG-7 descriptors, which are extracted from video shots using the MPEG-7 reference software and stored in a native XML database. The low-level descriptors used in the study are Color Layout (CL), Dominant Color (DC), Edge Histogram (EH), Region Shape (RS), and Motion Activity (MA). An Ordered Weighted Averaging (OWA) operator aggregates these features, in both Slim-Tree and BitMatrix, to compute the final similarity between any two objects. The system supports three types of queries: exact-match queries, k-NN queries, and range queries. The experiments evaluate index construction, index update, query response time, and retrieval efficiency using the ANMRR performance metric and precision/recall scores. The results show that BitMatrix combined with Ordered Weighted Averaging is superior for content-based video retrieval systems.
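The OWA operator itself is a standard aggregation: it sorts the per-descriptor scores before applying the weights, so the weights attach to rank positions rather than to particular descriptors. A minimal sketch follows; the similarity values and weight vector are illustrative, not the ones used in the thesis:

```python
import numpy as np

def owa(scores, weights):
    """Ordered Weighted Averaging: sort scores in descending order, then
    take the weighted sum, so weights apply to rank positions."""
    scores = np.sort(np.asarray(scores, dtype=float))[::-1]  # descending
    weights = np.asarray(weights, dtype=float)
    assert scores.shape == weights.shape and np.isclose(weights.sum(), 1.0)
    return float(np.dot(scores, weights))

# Illustrative per-descriptor similarities between two video shots,
# e.g. for CL, DC, EH, RS, MA:
similarities = [0.91, 0.40, 0.73, 0.55, 0.62]
# A weight vector that emphasizes the best-matching descriptors:
weights = [0.4, 0.3, 0.15, 0.1, 0.05]
print(owa(similarities, weights))  # aggregated similarity, here 0.751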

Holistic Representations For Activities And Crowd Behaviors

Solmaz, Berkan 01 January 2013 (has links)
In this dissertation, we address the problem of analyzing the activities of people in a variety of scenarios commonly encountered in vision applications. The overarching goal is to devise new representations for activities in settings where an individual or a number of people may take part in specific activities: different types of activities can be performed either by an individual at the fine level or by several people constituting a crowd at the coarse level. We take domain-specific information into account when modeling these activities. The proposed solutions are summarized as follows.

Holistic descriptions of videos are appealing for visual detection and classification tasks for several reasons, including capturing the spatial relations between scene components, simplicity, and performance [1, 2, 3]. First, we present a holistic (global) frequency-spectrum-based descriptor for representing atomic actions performed by individuals, such as bench pressing, diving, hand waving, boxing, playing guitar, mixing, jumping, horse riding, and hula hooping. We model and learn these individual actions in order to classify complex user-uploaded videos. Our method bypasses the detection of interest points, the extraction of local video descriptors, and the quantization of local descriptors into a codebook; it represents each video sequence as a single feature vector. This holistic feature vector is computed by applying a bank of 3-D spatio-temporal filters to the frequency spectrum of a video sequence, so it integrates information about both motion and scene structure. We tested our approach on two of the most challenging datasets, UCF50 [4] and HMDB51 [5], and obtained promising results that demonstrate the robustness and discriminative power of our holistic video descriptor for classifying videos of various realistic actions.

In the above approach, the holistic feature vector of a video clip is acquired by dividing the video into spatio-temporal blocks and concatenating the features of the individual blocks. However, such a holistic representation blindly incorporates all video regions regardless of their contribution to classification. Next, we present an approach that improves the performance of holistic descriptors for activity recognition by discovering the discriminative video blocks. We measure the discriminativity of a block by examining its response to a pre-learned support vector machine model: a block is considered discriminative if it responds positively for positive training samples and negatively for negative training samples. We pose the problem of finding the optimal blocks as the selection of a sparse set of blocks that maximizes the total classifier discriminativity. Through a detailed set of experiments on benchmark datasets [6, 7, 8, 9, 5, 10], we show that our method discovers the useful regions in videos and eliminates those that are confusing for classification, resulting in significant performance improvement over the state-of-the-art.

In contrast to scenes where an individual performs a primitive action, there may be scenes with several people in which crowd behaviors take place. For these types of scenes, traditional recognition approaches do not work, due to severe occlusion and computational requirements; the number of available videos is limited and the scenes are complicated, so learning these behaviors is not feasible. For this problem, we present a novel approach, based on the optical flow in a video sequence, for identifying five specific and common crowd behaviors in visual scenes. In the algorithm, the scene is overlaid by a grid of particles, initializing a dynamical system derived from the optical flow. Numerical integration of the optical flow provides particle trajectories that represent the motion in the scene, and linearization of the dynamical system allows a simple and practical analysis and classification of the behavior through the Jacobian matrix. Essentially, the eigenvalues of this matrix are used to determine the dynamic stability of points in the flow, and each type of stability corresponds to one of the five crowd behaviors. The identified crowd behaviors are (1) bottlenecks, where many pedestrians/vehicles from various points in the scene enter through one narrow passage; (2) fountainheads, where many pedestrians/vehicles emerge from a narrow passage only to separate in many directions; (3) lanes, where many pedestrians/vehicles move at the same speed in the same direction; (4) arches or rings, where the collective motion is curved or circular; and (5) blocking, where there is an opposing motion and the desired movement of groups of pedestrians is somehow prohibited. The implementation requires identifying a region of interest in the scene and checking the eigenvalues of the Jacobian matrix in that region to determine the type of flow, which corresponds to one of these well-defined crowd behaviors. The eigenvalues are considered only in these regions of interest, consistent with the linear approximation and the implied behaviors. Since changes in eigenvalues can mean changes in stability, corresponding to changes in behavior, we can repeat the algorithm over clips of long video sequences to locate changes in behavior. This method was tested on real videos representing crowd and traffic scenes.
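The stability analysis in the final part lends itself to a compact illustration. Below is a minimal sketch, assuming a dense optical-flow field sampled on a pixel grid; the finite-difference Jacobian and the eigenvalue-to-behavior mapping are a plausible reading of the description above, not the dissertation's exact classifier:

```python
import numpy as np

def jacobian_at(flow_u, flow_v, y, x, h=1):
    """Finite-difference Jacobian of the flow field w = (u, v) at grid
    point (y, x), assuming (y, x) is at least h pixels from the border."""
    du_dx = (flow_u[y, x + h] - flow_u[y, x - h]) / (2 * h)
    du_dy = (flow_u[y + h, x] - flow_u[y - h, x]) / (2 * h)
    dv_dx = (flow_v[y, x + h] - flow_v[y, x - h]) / (2 * h)
    dv_dy = (flow_v[y + h, x] - flow_v[y - h, x]) / (2 * h)
    return np.array([[du_dx, du_dy], [dv_dx, dv_dy]])

def classify_stability(J, eps=1e-3):
    """Map the eigenvalues of the 2x2 Jacobian to a qualitative flow type.
    The behavior labels are an assumed mapping based on the abstract."""
    lam = np.linalg.eigvals(J)
    re, im = lam.real, lam.imag
    if np.any(np.abs(im) > eps):
        return "arch/ring (rotational flow, complex eigenvalues)"
    if np.all(re < -eps):
        return "bottleneck (attracting flow, stable node)"
    if np.all(re > eps):
        return "fountainhead (repelling flow, unstable node)"
    if np.any(np.abs(re) <= eps):
        return "lane (parallel flow, near-zero eigenvalue)"
    return "blocking (opposing flow, saddle-like)"
```

Averaging the Jacobian over all grid points inside a region of interest before classifying it would mirror the region-level analysis described above, and re-running the classification per clip tracks behavior changes over long sequences.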
