  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A Frequency-Modulated Continuous Wave-Based Boundary Detection System in a Small PCB Profile

Asgarian, Hamid R 01 December 2012 (has links) (PDF)
Falls are a cause of concern for the elderly because a fall can leave a person unable to call for help. A monitoring system can automatically detect immobility and summon help if an elderly person falls. Ultra-wideband signals are an excellent choice for such a monitoring system: their power is low enough not to interfere with other medical and household electronics, yet sufficient to transmit data to a central monitoring unit. One part of this monitoring system is a boundary detection system used to verify that the monitoring system is not capturing events outside the monitoring region, such as an event outside the house or in a neighboring room. The work presented in the paper “A Frequency-Modulated Continuous Wave-Based Boundary Detection System for Determination of Monitoring Region for an Indoor Ultra-Wideband Short Range Radar-Based Eldercare Monitoring System” determined that a frequency-modulated continuous wave (FMCW) based system is an acceptable solution for boundary detection. An FMCW system can measure distance with better than 10 cm accuracy if the chosen spectrum bandwidth is 1 GHz or more. This thesis presents a low-cost, small-PCB-footprint design of the distance detection circuitry for the boundary detection system.
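The bandwidth-accuracy relationship mentioned in the abstract follows from standard FMCW radar theory: a linear chirp of bandwidth B swept over time T produces, after mixing with the target echo, a beat frequency proportional to range. A minimal sketch of that conversion (illustrative only, not the thesis circuit; function names are my own):

```python
# Illustrative FMCW range math: range resolution is c/(2B), and a measured
# beat frequency f_b maps to range R = c * f_b * T / (2 * B) for a linear chirp.
C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Smallest resolvable range separation for a given sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)

def fmcw_range(beat_hz, sweep_time_s, bandwidth_hz):
    """Convert a measured beat frequency to target range in metres."""
    return C * beat_hz * sweep_time_s / (2.0 * bandwidth_hz)

# With a 1 GHz sweep, the theoretical range resolution is 15 cm; the sub-10 cm
# accuracy claimed in the abstract refers to accuracy, which can be finer than
# resolution with frequency-estimation techniques.
```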
12

Dynamic Deployment Strategies in Ad-Hoc Sensor Networks to Optimize Coverage and Connectivity in Unknown Event Boundary Detection

Venkataraman, Aparna 23 September 2011 (has links)
No description available.
13

Learning object boundary detection from motion data

Ross, Michael G., Kaelbling, Leslie P. 01 1900 (has links)
A significant barrier to applying the techniques of machine learning to the domain of object boundary detection is the need to obtain a large database of correctly labeled examples. Inspired by developmental psychology, this paper proposes that boundary detection can be learned from the output of a motion tracking algorithm that separates moving objects from their static surroundings. Motion segmentation solves the database problem by providing cheap, unlimited, labeled training data. A probabilistic model of the textural and shape properties of object boundaries can be trained from this data and then used to efficiently detect boundaries in novel images via loopy belief propagation. / Singapore-MIT Alliance (SMA)
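The core idea — motion segmentation as a free source of labeled boundary data — can be illustrated with simple frame differencing, a crude stand-in for the motion tracking algorithm the paper actually uses (this sketch is my own, not from the paper):

```python
def motion_labels(prev_frame, frame, threshold=10):
    """Label each pixel as moving (1) or static (0) by frame differencing.
    Frames are 2D lists of grayscale intensities; the 1/0 boundary between
    regions would serve as cheap training labels for a boundary detector."""
    return [[1 if abs(a - b) > threshold else 0 for a, b in zip(row1, row2)]
            for row1, row2 in zip(prev_frame, frame)]
```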
14

Video Shot Boundary Detection By Graph Theoretic Approaches

Asan, Emrah 01 September 2008 (has links) (PDF)
This thesis presents a comparative analysis of state-of-the-art shot boundary detection algorithms. The major methods that have been used for shot boundary detection, such as pixel-intensity-based, histogram-based, edge-based, and motion-vector-based approaches, are implemented and analyzed. A recent method that utilizes a “graph partition model” together with a support vector machine classifier as a shot boundary detection algorithm is also implemented and analyzed. Moreover, a novel graph-theoretic concept, “dominant sets”, is successfully applied to the shot boundary detection problem as a contribution to the solution domain.
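Among the baselines listed, the histogram-based method is the simplest to sketch: a hard cut shows up as a large distance between consecutive frame histograms. A minimal illustration (threshold and data are made up, not from the thesis):

```python
def hist_diff(h1, h2):
    """L1 distance between two normalised frame histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def detect_cuts(histograms, threshold=0.5):
    """Return frame indices where the consecutive-frame histogram
    distance exceeds the threshold, i.e. candidate shot boundaries."""
    return [i for i in range(1, len(histograms))
            if hist_diff(histograms[i - 1], histograms[i]) > threshold]

# Toy sequence: two dark frames, then an abrupt change to bright frames.
frames = [[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]
```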
15

Using Alignment Methods to Reduce Translation of Changes in Structured Information

Resman, Daniel January 2012 (has links)
In this thesis I present an unsupervised approach, which can also be made supervised, to reducing the translation of changes in structured information stored in XML documents. By combining a sentence boundary detection algorithm and a sentence alignment algorithm, a translation memory is created from the old versions of the information in different languages. This translation memory can then be used to translate sentences that are unchanged. The structure of the XML is used to improve performance. Two implementations were made and evaluated in three steps: sentence boundary detection, sentence alignment, and correspondence. The last step evaluates the use of the translation memory on a new version in the source language. The second implementation was an improvement that used the results of the evaluation of the first implementation. The evaluation was done using 100 XML documents in English, German, and Swedish. There was a significant difference between the results of the two implementations in the first two steps. The errors were reduced at each step; in the last step there were only three errors for the first implementation and no errors for the second. The evaluation showed that it was possible to reduce the text requiring re-translation by about 80%. Similar information can be, and is, used by translators to achieve higher productivity, but this thesis shows that it is possible to reduce translation work even before the texts reach the translators.
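The translation-memory reuse step described above can be sketched as an exact-match lookup over aligned old sentences: anything found in the memory is reused, and only the remainder goes to a translator. A simplified illustration (function names are mine; the thesis additionally exploits the XML structure):

```python
def build_tm(old_source_sents, old_target_sents):
    """Pair already-aligned sentences from the old source and target
    versions into a translation memory (exact-match lookup table)."""
    return dict(zip(old_source_sents, old_target_sents))

def apply_tm(new_source_sents, tm):
    """Split a new source version into reusable translations and
    sentences that still need human translation."""
    reused, needs_translation = [], []
    for sent in new_source_sents:
        if sent in tm:
            reused.append(tm[sent])
        else:
            needs_translation.append(sent)
    return reused, needs_translation
```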
16

Multi-Task Learning SegNet Architecture for Semantic Segmentation

Sorg, Bradley R. January 2018 (has links)
No description available.
17

ACCELERATED CELLULAR TRACTION CALCULATION BY PREDICTIONS USING DEEP LEARNING

Ibn Shafi, Md. Kamal 01 December 2023 (has links) (PDF)
This study presents a novel approach for predicting future cellular traction in a time series. The proposed method leverages two distinct look-ahead Long Short-Term Memory (LSTM) models, one for the cell boundary and the other for traction data, to achieve rapid and accurate predictions. These LSTM models are trained using real Fourier Transform Traction Cytometry (FTTC) output data, ensuring consistency and reliability in the underlying calculations. To account for variability among cells, each cell is trained separately, mitigating generalization errors. The predictive performance is demonstrated by accurately forecasting tractions for the next 30 time instances, with an error rate below 7%. Moreover, a strategy for real-time traction calculation is proposed, involving the capture of a bead reference image before cell placement in a controlled environment. By doing so, we eliminate the need for cell removal and enable real-time calculation of tractions. Combining these two ideas, our tool speeds up the traction calculations 1.6 times by limiting TFM use. Because a walk-forward prediction method is implemented, combining predicted values with real data for future predictions, further speedup is expected. The predictive capabilities of this approach offer valuable insights, with potential applications in identifying cancerous cells based on their traction behavior over time. Additionally, we present an advanced cell boundary detection algorithm that autonomously identifies cell boundaries from obscure cell images, reducing human intervention and bias. This algorithm significantly streamlines data collection, enhancing the efficiency and accuracy of our methodology.
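The walk-forward scheme mentioned in the abstract feeds each prediction back into the input history for the next step. A minimal generic sketch with a stand-in predictor (the thesis uses the trained LSTM models here; this toy version only shows the feedback loop):

```python
def walk_forward(history, predictor, steps):
    """Walk-forward forecasting: each new prediction is appended to the
    history and used as input for the next step, so the model can roll
    forward past the end of the observed data."""
    data = list(history)
    predictions = []
    for _ in range(steps):
        nxt = predictor(data)   # stand-in for an LSTM look-ahead model
        predictions.append(nxt)
        data.append(nxt)        # feed the prediction back as input
    return predictions
```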
18

Segmentação de sentenças e detecção de disfluências em narrativas transcritas de testes neuropsicológicos / Sentence Segmentation and Disfluency Detection in Narrative Transcripts from Neuropsychological Tests

Treviso, Marcos Vinícius 20 December 2017 (has links)
Background: In recent years, mild cognitive impairment (MCI) has received great attention because it may represent a pre-clinical stage of Alzheimer's disease (AD). In terms of distinguishing between healthy elderly (CTL) and MCI patients, several studies have shown that speech production is a sensitive task for detecting aging effects and for differentiating individuals with MCI from healthy ones. Natural Language Processing (NLP) tools have been applied to transcripts of narratives in English and also in Brazilian Portuguese, for example, the Coh-Metrix-Dementia environment. Gaps: However, the absence of sentence boundary information and the presence of disfluencies in transcripts prevent the direct application of tools that depend on well-formed text, such as taggers and parsers. Objectives: The main objective of this work is to develop methods to segment the transcripts into sentences and to detect/remove the disfluencies present in them, so that they serve as a preprocessing step for subsequent NLP tools. 
Methods and Evaluation: We proposed a method based on recurrent convolutional neural networks (RCNNs) with prosodic, morphosyntactic, and word-embedding features for the sentence segmentation (SS) task. For the disfluency detection (DD) task, we divided the method and the evaluation according to the categories of disfluencies: (i) for fillers (filled pauses and discourse markers), we proposed the same RCNN with the same SS features along with a predetermined list of words; (ii) for edit disfluencies (repetitions, revisions, and restarts), we added features traditionally employed in related work and introduced a CRF model at the RCNN output layer. We evaluated all the tasks intrinsically, analyzing the most important features, comparing the proposed methods to simpler ones, and identifying the main hits and misses. In addition, a final method, called DeepBonDD, was created by combining all tasks and was evaluated extrinsically using 9 syntactic metrics of Coh-Metrix-Dementia. Conclusion: For SS, we obtained F1 = 0.77 on CTL transcripts and F1 = 0.74 on MCI, establishing the state of the art for this task on impaired speech. For filler detection, we obtained, on average, F1 = 0.90 for CTL and F1 = 0.92 for MCI, results that are in line with related work on English. When restarts were ignored in the detection of edit disfluencies, on average F1 = 0.70 was obtained for CTL and F1 = 0.75 for MCI. In the extrinsic evaluation, only 3 metrics showed a significant difference between the manual MCI transcripts and those generated by DeepBonDD, suggesting that, despite variations in sentence boundaries and disfluencies, DeepBonDD is able to generate transcripts that can be properly processed by NLP tools.
19

Event Boundary Detection Using Web-casting Texts And Audio-visual Features

Bayar, Mujdat 01 September 2011 (has links) (PDF)
We propose a method to detect events and event boundaries in soccer videos by using web-casting texts and audio-visual features. The events and their inaccurate time information given in web-casting texts need to be aligned with the visual content of the video. Most match reports presented by popular organizations such as uefa.com (the official site of Union of European Football Associations) provide the time information in minutes rather than seconds. We propose a robust method which is able to handle uncertainties in the time points of the events. As a result of our experiments, we claim that our method detects event boundaries satisfactorily for uncertain web-casting texts, and that the use of audio-visual features improves the performance of event boundary detection.
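The alignment problem described above — match reports give event times only to the minute — can be illustrated by searching the corresponding 60-second window of the video for an audio-visual cue, such as an audio-energy peak. This is a simplified sketch of the idea, not the thesis's actual method:

```python
def refine_event_time(minute, audio_energy):
    """Given an event reported at 'minute' (web-cast precision), search the
    matching 60-second window of per-second audio energy for the peak and
    return it as the refined event time, in seconds from kick-off."""
    start = minute * 60
    window = audio_energy[start:start + 60]
    peak_offset = max(range(len(window)), key=window.__getitem__)
    return start + peak_offset
```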
20

Video Segmentation Using Partially Decoded Mpeg Bitstream

Kayaalp, Isil Burcun 01 December 2003 (has links) (PDF)
In this thesis, a mixed-type video segmentation algorithm is implemented to find the scene cuts in MPEG-compressed video data. The main aim is a computationally efficient algorithm for real-time applications. For this reason, partial decoding of the bitstream is used in segmentation. As a result of partial decoding, features such as bitrate, motion vector type, and DC images are used to find both continuous and discontinuous scene cuts in MPEG-2-coded general TV broadcast data. The results are also compared with techniques found in the literature.
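Of the partial-decoding features listed, bitrate is the cheapest: encoded frame sizes are available without any decoding, and a scene cut typically forces the encoder to spend many more bits on the first frame of the new scene. A toy illustration of that idea (threshold and logic are mine, not the thesis's):

```python
def cuts_from_frame_sizes(frame_sizes, factor=2.0):
    """Flag frames whose encoded size exceeds 'factor' times the running
    mean of all earlier frames — a cheap cut indicator that needs no
    decoding of the bitstream at all."""
    cuts = []
    for i in range(1, len(frame_sizes)):
        mean_so_far = sum(frame_sizes[:i]) / i
        if frame_sizes[i] > factor * mean_so_far:
            cuts.append(i)
    return cuts
```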
