21

Segmentação de sentenças e detecção de disfluências em narrativas transcritas de testes neuropsicológicos / Sentence Segmentation and Disfluency Detection in Narrative Transcripts from Neuropsychological Tests

Marcos Vinícius Treviso 20 December 2017 (has links)
Background: In recent years, mild cognitive impairment (MCI) has received great attention because it may represent a pre-clinical stage of Alzheimer's disease (AD). In terms of distinguishing between healthy elderly (CTL) and MCI patients, several studies have shown that speech production is a sensitive task for detecting aging effects and for differentiating individuals with MCI from healthy ones. Natural Language Processing (NLP) tools have been applied to transcripts of narratives in English and also in Brazilian Portuguese, for example, the Coh-Metrix-Dementia environment.
Gaps: However, the absence of sentence boundary information and the presence of disfluencies in transcripts prevent the direct application of tools that depend on well-formed text, such as taggers and parsers. Objectives: The main objective of this work is to develop methods to segment transcripts into sentences and to detect the disfluencies present in them (independently and jointly), so that they serve as a preprocessing step for subsequent NLP tools. Methods and Evaluation: We proposed a method based on recurrent convolutional neural networks (RCNNs) with prosodic, morphosyntactic, and word-embedding features for the sentence segmentation (SS) task. For the disfluency detection (DD) task, we divided the method and the evaluation according to the categories of disfluencies: (i) for fillers (filled pauses and discourse markers), we proposed the same RCNN with the same SS features along with a predetermined word list; (ii) for edit disfluencies (repetitions, revisions, and restarts), we added features traditionally employed in related work and introduced a CRF model at the RCNN output layer. We evaluated all tasks intrinsically, analyzing the most important features, comparing the proposed methods to simpler ones, and identifying the main hits and misses. In addition, a final method, called DeepBonDD, was created by combining all tasks and was evaluated extrinsically using 9 syntactic metrics from Coh-Metrix-Dementia. Conclusion: For SS, we obtained F1 = 0.77 on CTL transcripts and F1 = 0.74 on MCI, establishing the state of the art for this task on impaired speech. For filler detection, we obtained, on average, F1 = 0.90 for CTL and F1 = 0.92 for MCI, results within the range reported in related work on English. When restarts were ignored in the detection of edit disfluencies, we obtained, on average, F1 = 0.70 for CTL and F1 = 0.75 for MCI. In the extrinsic evaluation, only 3 metrics showed a significant difference between the manual MCI transcripts and those generated by DeepBonDD, suggesting that, despite variations in sentence boundaries and disfluencies, DeepBonDD generates transcripts that can be properly processed by NLP tools.
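
As a rough illustration of the kind of architecture described above, the sketch below implements token-level sentence-boundary classification with a recurrent convolutional network in PyTorch. All layer types, dimensions, and the feature setup are illustrative assumptions; the thesis's actual RCNN configuration, feature engineering, and CRF output layer are not reproduced here.

```python
# A minimal sketch (assumed PyTorch; hyperparameters are illustrative).
# Each token is a word embedding concatenated with extra features (stand-ins
# for the prosodic/morphosyntactic features), and the model predicts for
# every token whether a sentence boundary follows it.
import torch
import torch.nn as nn

class BoundaryRCNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=50, extra_dim=10, hidden=100):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Convolution over the token sequence captures local context.
        self.conv = nn.Conv1d(emb_dim + extra_dim, hidden, kernel_size=5, padding=2)
        # Recurrent layer captures longer-range dependencies.
        self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)  # boundary / no boundary

    def forward(self, tokens, extra_feats):
        x = torch.cat([self.emb(tokens), extra_feats], dim=-1)   # (B, T, E+F)
        c = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.rnn(c)
        return self.out(h)                                       # per-token logits

model = BoundaryRCNN(vocab_size=5000)
tokens = torch.randint(0, 5000, (1, 30))   # one 30-token transcript
extra = torch.randn(1, 30, 10)             # stand-in prosodic features
logits = model(tokens, extra)              # (1, 30, 2)
```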
22

[pt] GERAÇÃO SEMIAUTOMÁTICA DE FUNÇÃO DE TRANSFERÊNCIA PARA REALCE DE FRONTEIRAS BASEADA EM DERIVADAS MÉDIAS / [en] SEMI-AUTOMATIC GENERATION OF TRANSFER FUNCTION FOR BOUNDARY HIGHLIGHT BASED ON AVERAGE DERIVATIVES

RUSTAM CAMARA MESQUITA 14 June 2018 (has links)
[en] Manually finding a good transfer function for volume rendering is a difficult task that requires prior knowledge of the data being visualized. Much research has therefore been conducted in recent years to ease this process; however, only a few studies have pursued fully automatic transfer function detection. Most aim instead to improve user control over the transfer function, indicating potentially interesting regions in histograms and easing manipulation through dedicated interfaces. In addition, results are usually presented in the medical field, on MRI, CT, or ultrasound images. To show that the concepts used in these works can also be exploited in the oil and gas field, this work proposes a novel method for the automatic detection of transfer functions, aiming to visualize the interfaces between regions of a petroleum reservoir. The proposed approach is also evaluated on the detection of boundaries between different materials in medical volumes and other widely used scientific datasets.
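
A rough sketch of the central idea — average derivatives per intensity value as a boundary indicator — is given below, assuming NumPy; the binning, normalization, and function name are illustrative choices, not the thesis's exact semiautomatic procedure.

```python
# A minimal sketch: build a 1D opacity transfer function that highlights
# boundaries by mapping each scalar value to the average gradient magnitude
# of the voxels holding that value (illustrative, assumed-NumPy version).
import numpy as np

def boundary_transfer_function(volume, n_bins=256):
    # Gradient magnitude per voxel; large values occur at material interfaces.
    gx, gy, gz = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

    # Average gradient magnitude per intensity bin.
    bins = np.linspace(volume.min(), volume.max(), n_bins + 1)
    idx = np.clip(np.digitize(volume, bins) - 1, 0, n_bins - 1)
    avg_grad = np.zeros(n_bins)
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            avg_grad[b] = grad_mag[mask].mean()

    # Normalize to [0, 1]: intensities with high average derivatives
    # (likely boundaries) become opaque; homogeneous regions stay transparent.
    return avg_grad / avg_grad.max()

opacity = boundary_transfer_function(np.random.rand(32, 32, 32))
```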
23

Content-based digital video processing : digital videos segmentation, retrieval and interpretation

Chen, Juan January 2009 (has links)
Recent research approaches in semantics-based video content analysis require shot boundary detection as the first step to divide video sequences into sections. Furthermore, with the advances in networking and computing capability, efficient retrieval of multimedia data has become an important issue. Content-based retrieval technologies have been widely implemented to protect intellectual property rights (IPR). In addition, automatic recognition of highlights from videos is a fundamental and challenging problem for content-based indexing and retrieval applications. In this thesis, a paradigm is proposed to segment, retrieve and interpret digital videos. Five algorithms are presented to solve the video segmentation task. Firstly, a simple shot cut detection algorithm is designed for real-time implementation. Secondly, a systematic method is proposed for shot detection using content-based rules and a finite state machine (FSM). Thirdly, shot detection is implemented using local and global indicators. Fourthly, a context-awareness approach is proposed to detect shot boundaries. Fifthly, a fuzzy logic method is implemented for shot detection. Furthermore, a novel analysis approach is presented for the detection of video copies. It is robust to complicated distortions and capable of locating copied segments inside original videos. Then, objects and events are extracted from MPEG sequences for video highlights indexing and retrieval. Finally, a human fighting detection algorithm is proposed for movie annotation.
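
As a hedged illustration of the first and simplest of these algorithm families, the sketch below detects hard cuts by thresholding a colour-histogram distance between consecutive frames (OpenCV/NumPy assumed; the histogram layout, metric, and threshold are illustrative choices, not the thesis's actual algorithm).

```python
# A minimal shot cut detection sketch: a hard cut is declared wherever the
# histogram distance between adjacent frames exceeds a threshold.
import cv2

def detect_cuts(video_path, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, frame_no = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Large histogram distance between adjacent frames -> hard cut.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                cuts.append(frame_no)
        prev_hist, frame_no = hist, frame_no + 1
    cap.release()
    return cuts
```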
24

Rapid numerical simulation and inversion of nuclear borehole measurements acquired in vertical and deviated wells

Mendoza Chávez, Alberto 10 August 2012 (has links)
The conventional approach for estimation of in-situ porosity is the combined use of neutron and density logs. These nuclear borehole measurements are influenced by fundamental petrophysical, fluid, and geometrical properties of the probed formation, including saturating fluids, matrix composition, mud-filtrate invasion, and shoulder beds. Advanced interpretation methods that include numerical modeling and inversion are necessary to reduce environmental effects and non-uniqueness in the estimation of porosity. The objective of this dissertation is two-fold: (1) to develop a numerical procedure to rapidly and accurately simulate nuclear borehole measurements, and (2) to simulate nuclear borehole measurements in conjunction with inversion techniques. Of special interest is the case of composite rock formations of sand-shale laminations penetrated by high-angle and horizontal (HA/HZ) wells. In order to quantify shoulder-bed effects on neutron and density borehole measurements, we perform Monte Carlo simulations across formations of various thicknesses and borehole deviation angles with the multiple-particle transport code MCNP. In so doing, we assume dual-detector tool configurations that are analogous to those of commercial neutron and density wireline measuring devices. Simulations indicate significant variations of vertical (axial) resolution of neutron and density measurements acquired in HA/HZ wells. In addition, combined azimuthal- and dip-angle effects can introduce biases in porosity estimation and bed-boundary detection, which are critical for the assessment of hydrocarbon reserves. To enable inversion and more quantitative integration with other borehole measurements, we develop and successfully test a linear iterative refinement approximation to rapidly simulate neutron, density, and passive gamma-ray borehole measurements. Linear iterative refinement accounts for spatial variations of Monte Carlo-derived flux sensitivity functions (FSFs) used to simulate nuclear measurements acquired in non-homogeneous formations. We use first-order Born approximations to simulate variations of a detector response due to spatial variations of formation energy-dependent cross-section. The method incorporates two- (2D) and three-dimensional (3D) capabilities of FSFs to simulate neutron and density measurements acquired in vertical and HA/HZ wells, respectively. We calculate FSFs for a wide range of formation cross-section variations and for borehole environmental effects to quantify the spatial sensitivity and resolution of neutron and density measurements. Results confirm that the spatial resolution limits of neutron measurements can be significantly influenced by the proximity of layers with large contrasts in porosity. Finally, we implement 2D sector-based inversion of azimuthal logging-while-drilling (LWD) density field measurements with the fast simulation technique. Results indicate that inversion improves the petrophysical interpretation of density measurements acquired in HA/HZ wells. Density images constructed with inversion yield improved porosity-feet estimations compared to standard and enhanced compensation techniques used commercially to post-process mono-sensor densities.
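
The sketch below gives a deliberately simplified, one-dimensional reading of the linearization underlying the fast simulation: a detector response approximated as a baseline plus FSF-weighted perturbations of the formation cross-section (NumPy; all values are illustrative, and the thesis's actual iterative refinement of the FSFs is not reproduced).

```python
# A minimal first-order (Born-style) sketch: response = baseline response
# plus the flux sensitivity function (FSF) weighted by local cross-section
# perturbations. Everything here is an illustrative stand-in.
import numpy as np

def simulate_response(fsf, delta_sigma, baseline):
    # Linear estimate: response = baseline + sum_i FSF_i * delta_sigma_i
    return baseline + np.sum(fsf * delta_sigma)

# Illustrative 1D layered formation: an FSF precomputed (e.g., by Monte
# Carlo) for a homogeneous background, reused for perturbed formations.
fsf = np.exp(-np.linspace(0, 4, 50))   # sensitivity decays away from the detector
fsf /= fsf.sum()
delta_sigma = np.zeros(50)
delta_sigma[20:30] = 0.15              # cross-section contrast of one layer
print(simulate_response(fsf, delta_sigma, baseline=1.0))
```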
25

Diff pro multimediální dokumenty / Multimedia Document Type Diff

Lang, Jozef January 2012 (has links)
The development of the Internet and its massive spread have resulted in an increased volume of multimedia data. This increase raises the need for efficient similarity detection between multimedia files, whether for preventing and detecting violations of copyright licenses or for finding similar or duplicate files. This thesis discusses current options in the field of content-based image and video comparison, and focuses on feature extraction techniques, distance metrics, and the design and implementation of the mediaDiff application module for content-based comparison of video files.
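
As a hedged sketch of content-based video comparison of the kind the thesis discusses, the code below fingerprints sampled frames with a tiny average-hash feature and scores similarity as the fraction of matching bits (OpenCV/NumPy assumed; both the feature and the metric are illustrative stand-ins, not mediaDiff's implementation).

```python
# A minimal content-based comparison sketch: per-frame perceptual
# fingerprints plus a simple distance over aligned frame samples.
import cv2
import numpy as np

def average_hash(frame, size=8):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()     # 64-bit boolean fingerprint

def video_fingerprint(path, step=10):
    cap, hashes, i = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % step == 0:                       # sample every `step`-th frame
            hashes.append(average_hash(frame))
        i += 1
    cap.release()
    return np.array(hashes)

def similarity(h1, h2):
    n = min(len(h1), len(h2))
    # Fraction of matching hash bits over aligned samples (1.0 = identical).
    return float((h1[:n] == h2[:n]).mean())
```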
26

Semantic content analysis for effective video segmentation, summarisation and retrieval.

Ren, Jinchang January 2009 (has links)
This thesis focuses on four main research themes, namely shot boundary detection, fast frame alignment, activity-driven video summarisation, and highlights-based video annotation and retrieval. A number of novel algorithms have been proposed to address these issues, which can be highlighted as follows. Firstly, accurate and robust shot boundary detection is achieved through modelling of cuts into sub-categories and appearance-based modelling of several gradual transitions, along with some novel features extracted from compressed video. Secondly, fast and robust frame alignment is achieved via the proposed subspace phase correlation (SPC) and an improved sub-pixel strategy. The SPC is shown to be insensitive to zero-mean noise, and its gradient-based extension is robust even to non-zero-mean noise and can deal with non-overlapped regions for robust image registration. Thirdly, hierarchical modelling of rush videos using formal language techniques is proposed, which can guide the modelling and removal of several kinds of junk frames as well as adaptive clustering of retakes. With an extracted activity-level measurement, shots and sub-shots are detected for content-adaptive video summarisation. Fourthly, highlights-based video annotation and retrieval is achieved, in which statistical modelling of skin pixel colours, knowledge-based shot detection, and improved determination of camera motion patterns are employed. Within these proposed techniques, one important principle is to integrate various kinds of feature evidence and to incorporate prior knowledge in modelling the given problems. A high-level hierarchical representation is extracted from the original linear structure for effective management and content-based retrieval of video data. As most of the work is implemented in the compressed domain, one additional benefit is high efficiency, which will be useful for many online applications. / EU IST FP6 Project
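
For reference, standard phase correlation — of which the proposed subspace phase correlation is an extension — can be sketched in a few lines of NumPy. This is the textbook baseline only, not the thesis's SPC or its sub-pixel strategy.

```python
# A minimal phase correlation sketch: estimate the translation between two
# frames from the peak of the normalized cross-power spectrum.
import numpy as np

def phase_correlation(a, b):
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12           # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlation(shifted, img))       # -> (5, -3)
```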
29

Computer vision and machine learning methods for the analysis of brain and cardiac imagery

Mohan, Vandana 06 December 2010 (has links)
Medical imagery is increasingly evolving towards higher resolution and throughput. The increasing volume of data and the use of multiple, often novel, imaging modalities necessitate mathematical and computational techniques for quicker, more accurate, and more robust analysis of medical imagery. The fields of computer vision and machine learning provide a rich set of techniques useful in medical image analysis, in tasks ranging from segmentation to classification and population analysis, notably by integrating the qualitative knowledge of experts in anatomy and in the pathologies of various disorders and making it applicable to the analysis of medical imagery. The object of the proposed research is to explore various computer vision and machine learning methods with a view to improved analysis of multiple modalities of brain and cardiac imagery, towards the clinical goals of studying schizophrenia, brain tumors (meningiomas and gliomas in particular), and cardiovascular disorders. In the first project, a framework is proposed for the segmentation of tubular, branched anatomical structures. The framework uses the tubular surface model, which yields computational advantages, and further incorporates a novel automatic branch detection algorithm. It is successfully applied to the segmentation of neural fiber bundles and blood vessels. In the second project, a novel population analysis framework is built using the shape model proposed in the first project. This framework is applied to the analysis of neural fiber bundles towards the detection and understanding of schizophrenia. In the third and final project, the use of mass spectrometry imaging for the analysis of brain tumors is motivated on two fronts: the offline classification analysis of the data, and the end application of intraoperative detection of tumor boundaries. SVMs are applied to classify gliomas into one of four subtypes towards building appropriate treatment plans, and multiple statistical measures are studied with a view to feature extraction (or biomarker detection). The problem of intraoperative tumor boundary detection is formulated as the detection of local minima of the spatial map of tumor cell concentration, which in turn is modeled as a function of the mass spectra via regression techniques.
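
A minimal sketch of the subtype-classification step is given below, assuming scikit-learn; the synthetic spectra, the four placeholder labels, and the model settings are illustrative, not the thesis's data or tuned classifier.

```python
# A minimal SVM classification sketch: assign one of four hypothetical
# glioma subtypes from (here, randomly generated) mass spectra.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_spectra, n_mz_bins = 200, 500
X = rng.random((n_spectra, n_mz_bins))   # stand-in intensity spectra
y = rng.integers(0, 4, n_spectra)        # four hypothetical subtype labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random data
```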
30

Detekce výrobků na pásovém dopravníku / Detection of Objects on Belt Conveyer

Láník, Aleš January 2008 (has links)
In this master's thesis, object detection in images and the tracking of detected objects over time are presented. First, the theoretical background of image preprocessing, image filtering, foreground extraction, and various other image features is described. Next, the design and implementation of the detector are covered; this part of the thesis mainly concerns the detection of objects on a belt conveyor. Finally, the results, conclusions, and supplementary details, such as the camera placement, are presented.
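
A hedged sketch of the foreground-extraction pipeline such a detector typically builds on is shown below (OpenCV assumed; the MOG2 subtractor, morphology, and area threshold are illustrative choices, not necessarily those used in the thesis).

```python
# A minimal conveyor-object detection sketch: subtract the learned
# background, then report bounding boxes of sufficiently large foreground
# blobs in each frame.
import cv2

def detect_objects(video_path, min_area=500):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200)
    detections = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                    # foreground mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                                cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]       # drop noise blobs
        detections.append(boxes)
    cap.release()
    return detections
```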
