121

MYOP: um arcabouço para predição de genes ab initio / MYOP: A framework for building ab initio gene predictors

Andre Yoshiaki Kashiwabara 23 March 2007
The demand for efficient approaches to the problem of recognizing the structure of each gene in a genomic sequence has motivated the implementation of a large number of gene prediction programs. We analyzed successful programs that take a probabilistic approach and observed similarities between their implementations: most of them use the same mathematical framework, the generalized hidden Markov model (GHMM), as the gene model. One problem with these implementations is that the GHMM architecture is fixed in the source code, which makes it difficult to investigate new approaches. Because of this difficulty and the similarities between current programs, we implemented the MYOP (Make Your Own Predictor) framework, whose objective is to provide a flexible environment that allows each gene model to be evaluated rapidly. We demonstrate the utility of the tool through the implementation and evaluation of 96 gene models, in which each model is formed by a set of states and each state has a duration distribution and a probabilistic model. We found that a more sophisticated probabilistic model does not always yield a better predictor, showing the relevance of experimentation and the importance of a system such as MYOP.
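To make the gene-model vocabulary above concrete, here is a minimal sketch of a generalized HMM in the spirit the abstract describes, where each state couples an explicit duration distribution with its own emission model. The state names, distributions, and class layout are illustrative assumptions, not MYOP's actual API.

```python
# Illustrative sketch (not MYOP's API): a generalized HMM state couples an
# emission model with an explicit duration distribution, unlike a plain HMM
# whose state durations are implicitly geometric.
import random

class GHMMState:
    def __init__(self, name, duration_sampler, emission_sampler):
        self.name = name
        self.sample_duration = duration_sampler   # returns an integer length
        self.sample_emission = emission_sampler   # returns one nucleotide

def sample_annotation(states, transitions, start, n):
    """Sample a sequence and its state labels from a toy gene GHMM."""
    seq, labels, state = [], [], start
    while len(seq) < n:
        d = states[state].sample_duration()
        for _ in range(min(d, n - len(seq))):
            seq.append(states[state].sample_emission())
            labels.append(state)
        state = random.choices(list(transitions[state]),
                               weights=list(transitions[state].values()))[0]
    return "".join(seq), labels

states = {
    "exon":   GHMMState("exon",   lambda: random.randint(50, 200),
                        lambda: random.choice("ACGT")),
    "intron": GHMMState("intron", lambda: random.randint(100, 500),
                        lambda: random.choice("AACGTT")),  # toy composition bias
}
transitions = {"exon": {"intron": 1.0}, "intron": {"exon": 1.0}}
seq, labels = sample_annotation(states, transitions, "exon", 300)
print(seq[:60], labels[:5])
```

Swapping in a different duration or emission model per state is exactly the kind of experimentation such a framework is meant to make cheap.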
122

End-to-End Available Bandwidth Estimation and Monitoring

Guerrero Santander, Cesar Dario 20 February 2009
Available Bandwidth Estimation Techniques and Tools (ABETTs) have recently been envisioned as a supporting mechanism in areas such as compliance of service level agreements, network management, traffic engineering and real-time resource provisioning, flow and congestion control, construction of overlay networks, fast detection of failures and network attacks, and admission control. However, it is unknown whether current ABETTs can run efficiently in any type of network, under different network conditions, and whether they can provide accurate available bandwidth estimates at the timescales needed by these applications. This dissertation investigates techniques and tools able to provide accurate, low-overhead, reliable, and fast available bandwidth estimations. First, it shows how the network can be sampled to obtain information about the available bandwidth. All current estimation tools use either the probe gap model or the probe rate model sampling technique. Since the latter introduces considerable additional traffic into the network, the probe gap model is the sampling method used in this work. Then, both analytical and experimental approaches are used to perform an extensive performance evaluation of current available bandwidth estimation tools over a flexible and controlled testbed. The results of the evaluation highlight accuracy, overhead, convergence time, and reliability issues of current tools that limit their use by some of the envisioned applications. Single estimations are affected by the bursty nature of the cross traffic and by errors generated by the network infrastructure. A hidden Markov model approach to end-to-end available bandwidth estimation and monitoring is investigated to address these issues. This approach builds a model that incorporates the dynamics of the available bandwidth, and every sample that generates an estimation is adjusted by the model. This adjustment makes it possible to obtain acceptable estimation accuracy with a small number of samples and in a short period of time. Finally, the new approach is implemented in a tool called Traceband. The tool, written in ANSI C, is evaluated and compared with Pathload and Spruce, the best estimation tools belonging to the probe rate model and the probe gap model, respectively. The evaluation is performed using Poisson, bursty, and self-similar synthetic cross traffic and real traffic from a network path at the University of South Florida. Results show that Traceband provides more estimations per unit time with accuracy comparable to Pathload and Spruce while introducing minimal probing traffic. Traceband also includes an optional moving average technique that smooths out the estimations and improves accuracy even further.
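A hedged sketch of the probe gap model sampling idea mentioned above: with a known bottleneck capacity, the amount by which a packet pair's gap expands between sender and receiver indicates how much cross traffic was inserted between the probes. The formula follows the usual Spruce-style reasoning; the variable names and numbers are illustrative, not taken from Traceband.

```python
def pgm_available_bandwidth(gap_in, gap_out, capacity_bps):
    """Probe gap model estimate (Spruce-style): cross traffic inserted between
    the two probes expands the gap, so lambda = C * (gap_out - gap_in) / gap_in
    and the available bandwidth is A = C - lambda."""
    cross_traffic = capacity_bps * (gap_out - gap_in) / gap_in
    return max(0.0, capacity_bps - cross_traffic)

# Example: 100 Mb/s bottleneck, 120 us input gap, 150 us measured output gap.
print(pgm_available_bandwidth(120e-6, 150e-6, 100e6) / 1e6, "Mb/s available")
```

A single such sample is noisy, which is why the abstract's HMM layer smooths a stream of these raw estimates over time.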
123

A First Study on Hidden Markov Models and one Application in Speech Recognition

Servitja Robert, Maria January 2016
Speech is intuitive, fast and easy to generate, but it is hard to index and easy to forget. What is more, listening to speech is slow. Text is easier to store, process and consume, both for computers and for humans, but writing text is slow and requires some intention. In this thesis, we study speech recognition, which allows converting speech into text, making it easier both to create and to use information. Our tool of study is Hidden Markov Models, one of the most important machine learning models in speech and language processing. The aim of this thesis is to do a first study of Hidden Markov Models and understand their importance, particularly in speech recognition. We will go through three fundamental problems that come up naturally with Hidden Markov Models: to compute the likelihood of an observation sequence, to find an optimal state sequence given an observation sequence and the model, and to adjust the model parameters. A solution to each problem will be given together with an example and the corresponding simulations using MATLAB. The main importance lies in the last example, in which a first approach to speech recognition will be done.
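For reference, the first of the three problems listed above (computing the likelihood of an observation sequence) has a standard dynamic-programming solution, the forward algorithm. The sketch below uses a made-up two-state, two-symbol model rather than anything from the thesis.

```python
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | model) for a discrete-emission HMM.
    pi: (N,) initial probs, A: (N,N) transitions, B: (N,M) emissions,
    obs: list of observation symbol indices."""
    alpha = pi * B[:, obs[0]]                 # initialization
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # induction step
    return alpha.sum()                        # termination

# Illustrative two-state, two-symbol model.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(forward_likelihood(pi, A, B, [0, 1, 1, 0]))
```

The second problem (optimal state sequence) is solved by the closely related Viterbi recursion, and the third (parameter adjustment) by Baum-Welch re-estimation.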
124

Wavelet-Based Non-Homogeneous Hidden Markov Chain Model For Hyperspectral Signature Classification

Feng, Siwei 18 March 2015
Hyperspectral signature classification is a quantitative analysis approach for hyperspectral imagery that detects and classifies the constituent materials at the pixel level in a scene. The classification procedure can operate directly on hyperspectral data or on features extracted from the corresponding hyperspectral signatures, which contain information such as signature energy or shape. In this work, we describe a technique that applies non-homogeneous hidden Markov chain (NHMC) models to hyperspectral signature classification. The basic idea is to use statistical models (NHMC models) to characterize wavelet coefficients, which capture the structural information of the spectrum at multiple levels. Experimental results show that the approach based on NHMC models outperforms relevant existing approaches in classification tasks.
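A small sketch of the wavelet side of this idea: a multilevel Haar decomposition of a spectrum yields detail coefficients whose magnitudes across scales encode structural features such as absorption dips. The NHMC models in the thesis would then model such coefficients statistically; that statistical layer is not shown here, and the toy spectrum is an assumption.

```python
import numpy as np

def haar_multilevel(signal, levels):
    """Multilevel Haar wavelet decomposition (illustrative, not the thesis code).
    Returns a list of detail-coefficient arrays, coarsest level last."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        if len(approx) % 2:                      # pad to even length
            approx = np.append(approx, approx[-1])
        pairs = approx.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    return details

# Toy "spectrum": a smooth curve with an absorption-like dip.
x = np.linspace(0, 1, 64)
spectrum = 1.0 - 0.5 * np.exp(-((x - 0.4) ** 2) / 0.002)
for lvl, d in enumerate(haar_multilevel(spectrum, 3), start=1):
    print(f"level {lvl}: {len(d)} detail coefficients, max |d| = {np.abs(d).max():.3f}")
```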
125

Object Tracking based on Eye Tracking Data : A comparison with a state-of-the-art video tracker

Ejnestrand, Ida, Jakobsson, Linnéa January 2020
The process of locating moving objects through video sequences is a fundamental computer vision problem. This process is referred to as video tracking and has a broad range of applications. Even though video tracking is an open research topic that has received much attention during recent years, developing accurate and robust algorithms that can handle complicated tracking tasks and scenes is still challenging. One challenge in computer vision is to develop systems that, like humans, can understand, interpret and recognize visual information in different situations. In this master thesis work, a tracking algorithm based on eye tracking data is proposed. The aim was to compare the tracking performance of the proposed algorithm with a state-of-the-art video tracker. The algorithm was tested on gaze signals from five participants recorded with an eye tracker while the participants were exposed to dynamic stimuli, namely moving objects displayed on a stationary computer screen. The proposed algorithm works offline, meaning that all data is collected before analysis. The results show that the overall performance of the proposed eye tracking algorithm is comparable to the performance of a state-of-the-art video tracker. The main weaknesses are low accuracy for the proposed eye tracking algorithm and handling of occlusion for the video tracker. We also suggest a method for using eye tracking as a complement to object tracking methods. The results show that the eye tracker can be used in some situations to improve the tracking result of the video tracker: the proposed algorithm can help the video tracker redetect objects that have been occluded or for some other reason are not detected correctly. However, the video tracker (ATOM) achieves higher accuracy.
126

Detecting Changes During the Manipulation of an Object Jointly Held by Humans and Robots / Detektera skillnader under manipulationen av ett objekt som gemensamt hålls av människor och robotar

Reynaga Barba, Valeria January 2015
In the last decades, research and development in the field of robotics has grown rapidly. This growth has resulted in the emergence of service robots that need to be able to physically interact with humans for different applications. One of these applications involves robots and humans cooperating in handling an object together. In such cases, there is usually an initial arrangement of how the robot and the humans hold the object, and the arrangement stays the same throughout the manipulation task. Real-world scenarios often require that the initial arrangement change during the task; therefore, it is important that the robot is able to recognize these changes and act accordingly. We consider a setting where a robot holds a large flat object with one or two humans. The aim of this research project is to detect the change in the number of agents grasping the object using only force and torque information measured at the robot's wrist. The proposed solution involves defining a transition sequence of four steps that the humans should perform to go from the initial scenario to the final one. The force and torque information is used to estimate the grasping point of the agents with a Kalman filter. While the humans move from one scenario to the other, the estimated point changes according to the step of the transition the humans are in. These changes are used to track the steps in the sequence with a hidden Markov model (HMM); tracking the steps in the sequence means knowing how many agents are grasping the object. To evaluate the method, humans who were not involved in the training of the HMM were asked to perform two tasks: a) perform the previously defined sequence as is, and b) perform a deviation from the sequence. The results show that it is possible to detect the change between one human and two humans holding the object using only force and torque information.
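A hedged sketch of the grasp-point estimation step: for a flat object, the lever arm of an external grasp can be approximated from the wrist torque-to-force ratio, and a scalar Kalman filter smooths that noisy measurement over time. The random-walk state model, noise values, and the simulated handover below are illustrative simplifications, not the thesis's filter.

```python
import random

def kalman_1d(measurements, q=1e-4, r=1e-2, x0=0.0, p0=1.0):
    """Scalar Kalman filter: random-walk state, direct noisy measurement.
    q: process noise variance, r: measurement noise variance."""
    x, p, out = x0, p0, []
    for z in measurements:
        p += q                      # predict (state follows a random walk)
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct with the measurement residual
        p *= (1.0 - k)
        out.append(x)
    return out

# Lever arm of the human grasp approximated as torque / force at the wrist;
# the step halfway through mimics a second person taking hold of the object.
random.seed(0)
true = [0.40] * 30 + [0.25] * 30
meas = [t + random.gauss(0.0, 0.05) for t in true]
est = kalman_1d(meas)
print(f"before: {est[25]:.2f} m   after: {est[-1]:.2f} m")
```

In the thesis, changes in this estimated point are then discretized and fed to the HMM that tracks which step of the four-step transition sequence is underway.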
127

Approche générique appliquée à l'indexation audio par modélisation non supervisée / Unified data-driven approach for audio indexing, retrieval and recognition

Khemiri, Houssemeddine 27 September 2013
The amount of available audio data, such as broadcast news archives, radio recordings, music and song collections, podcasts, and various internet media, is constantly increasing, and many audio indexing techniques have been proposed to help users browse audio documents. Nevertheless, these methods are developed for a specific type of audio content, which makes them unsuitable for simultaneously processing audio streams in which different types of audio documents coexist. In this thesis we report our efforts in extending the ALISP approach, originally developed for speech, into a generic method for audio indexing, retrieval and recognition. The particularity of the ALISP tools is that no textual transcriptions or manual annotations are needed during the learning step: any input audio data is transformed into a sequence of arbitrary symbols, which can then be used for indexing purposes. The main contribution of this thesis is the exploitation of the ALISP approach as a generic method for audio indexing. The proposed system consists of three modules: unsupervised acquisition and modeling of the ALISP HMM units, ALISP transcription of the audio data using these models, and comparison of ALISP symbol sequences using the BLAST algorithm and the Levenshtein distance. The proposed system is evaluated on the YACAST database and on other publicly available corpora for several audio indexing tasks.
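A minimal sketch of the sequence-comparison step: once audio has been transcribed into ALISP symbols, two transcriptions can be compared with the Levenshtein (edit) distance. The symbol labels below are made up for illustration; actual ALISP unit labels are learned from data.

```python
def levenshtein(a, b):
    """Edit distance between two symbol sequences (insertions, deletions
    and substitutions all cost 1), as used to compare ALISP transcriptions."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

# Two illustrative ALISP-style transcriptions of the same audio item.
ref  = ["Ha", "Hb", "Hc", "Hc", "Hd"]
test = ["Ha", "Hb", "Hx", "Hc", "Hd", "Hd"]
print(levenshtein(ref, test))   # a small distance suggests the same item
```

In the full system, BLAST-style search first narrows down candidate segments so that this distance only has to be computed for promising matches.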
128

Development and Testing of a Haptic Interface to Assist and Improve the Manipulation Functions in Virtual Environments for Persons with Disabilities

Tammana, Rohit 12 November 2003
Robotics in rehabilitation provides considerable opportunities to improve the quality of life for persons with disabilities. Computerized and Virtual Environment (VE) training systems for persons with disabilities, many of which utilize haptic feedback, have gained increasing acceptance in recent years. Our methodology here is based on creating virtual environments connected to a haptic interface as an input device. This robotic setup introduces the advantages of haptic rendering features in the environment and also provides tactile feedback to the patients. This thesis aims to demonstrate the efficacy of assistance function algorithms in rehabilitation robotics in virtual environments. Assist functions are used to map limited human input to the motions required to perform complex tasks; the purpose is to train individuals in task-oriented applications to ensure that they can be incorporated into the workplace. Further, Hidden Markov Model (HMM) based motion recognition and skill learning are used to improve the skill levels of the users. For the HMM-based motion recognition, the user's motion intention is combined with environment information to apply an appropriate assistance function. We used this algorithm to perform a commonly used vocational therapy test referred to as the box and blocks test. The HMM-based skill approach can be used for learning human skill and transferring that skill to persons with disabilities; a relatively complex task of moving along a labyrinth was chosen as the task to be modeled by the HMM. This kind of training allows a person with a disability to learn the skill and improve it through practice. Its application to a motion therapy system using a haptic interface helps in improving motion control capabilities, tremor reduction and upper limb coordination. The results obtained from all the tests demonstrated that the various forms of assistance provided reduced execution times and increased motion performance in the chosen tasks. Two persons with disabilities volunteered to perform the above tasks, and both expressed interest in, and satisfaction with, the philosophy behind these concepts.
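A hedged sketch of what an assistance function can look like in this setting: the user's commanded velocity is blended with a velocity directed toward a task target, so limited or imprecise input still produces useful motion. The blending rule, gains, and coordinates are illustrative assumptions, not the algorithms evaluated in the thesis.

```python
import numpy as np

def assist_velocity(v_user, pos, target, alpha=0.6, gain=1.5, v_max=0.2):
    """Blend the user's commanded velocity with a velocity toward the target.
    alpha: assistance level in [0, 1]; gain: attraction gain; v_max: speed cap."""
    to_target = np.asarray(target, float) - np.asarray(pos, float)
    dist = np.linalg.norm(to_target)
    v_assist = gain * to_target / dist if dist > 1e-9 else np.zeros_like(to_target)
    v = (1.0 - alpha) * np.asarray(v_user, float) + alpha * v_assist
    speed = np.linalg.norm(v)
    return v if speed <= v_max else v * (v_max / speed)

# Weak, slightly off-axis user input is redirected toward the task target.
print(assist_velocity(v_user=[0.02, 0.01, 0.0],
                      pos=[0.0, 0.0, 0.0],
                      target=[0.3, 0.0, 0.1]))
```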
129

Detection and Classification of Heart Sounds Using a Heart-Mobile Interface

Thiyagaraja, Shanti 12 1900
An early detection of heart disease can save lives, caution individuals, and also help to determine the type of treatment to be given to the patients. The first test for diagnosing a heart disease is auscultation - listening to the heart sounds. The interpretation of heart sounds is subjective and requires professional skill to identify the abnormalities in these sounds. A medical practitioner uses a stethoscope to perform an initial screening by listening for irregular sounds from the patient's chest. Later, echocardiography and electrocardiography tests are taken for further diagnosis. However, these tests are expensive and require specialized technicians to operate. A simple and economical method is vital for monitoring in home care, rural hospitals, and urban clinics. This dissertation is focused on developing a patient-centered device for initial screening of the heart sounds that is low cost and can be used by the users on themselves, who can later share the readings with their healthcare providers. An innovative mobile health service platform is created for analyzing and classifying heart sounds. Certain properties of heart sounds have to be evaluated to identify irregularities, such as the number of heart beats and gallops, intensity, frequency, and duration. Since heart sounds are generated at low frequencies, human ears tend to miss certain sounds, as the high-frequency sounds mask the lower ones. Therefore, this dissertation provides a solution that processes the heart sounds using several signal processing techniques, identifies the features in the heart sounds, and finally classifies them. This dissertation enables remote patient monitoring through the integration of advanced wireless communications and a customized low-cost stethoscope. It also permits remote management of patients' cardiac status while maximizing patient mobility. The smartphone application facilitates recording, processing, visualizing, listening to, and classifying heart sounds. The application also generates an electronic medical record, which is encrypted using efficient elliptic curve cryptography and sent to the cloud, giving physicians access for further analysis. Thus, this dissertation results in a patient-centered device that is essential for initial screening of the heart sounds, and the readings can be shared with medical care practitioners for further diagnosis.
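A small sketch of a typical first stage of such heart-sound processing: band-pass filtering the phonocardiogram to the range where most S1/S2 energy lies, then computing a smoothed Shannon-energy envelope whose peaks mark heart beats. The band edges, the toy signal, and the pipeline itself are illustrative assumptions, not the dissertation's exact method.

```python
import numpy as np
from scipy import signal

def heart_sound_envelope(x, fs, band=(25.0, 150.0)):
    """Band-pass the phonocardiogram, then compute a normalized
    Shannon-energy envelope (illustrative pre-processing pipeline)."""
    nyq = fs / 2.0
    b, a = signal.butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    y = signal.filtfilt(b, a, x)
    y = y / (np.max(np.abs(y)) + 1e-12)
    energy = -(y ** 2) * np.log(y ** 2 + 1e-12)      # Shannon energy
    win = int(0.02 * fs)                             # 20 ms moving average
    return np.convolve(energy, np.ones(win) / win, mode="same")

# Toy signal: one S1/S2 ("lub-dub") pair per second buried in noise.
fs = 2000
t = np.arange(0, 2.0, 1.0 / fs)
x = 0.05 * np.random.randn(len(t))
for start in (0.10, 0.40, 1.10, 1.40):               # burst onsets in seconds
    idx = (t >= start) & (t < start + 0.06)
    x[idx] += np.sin(2 * np.pi * 60 * t[idx])
env = heart_sound_envelope(x, fs)
peaks = signal.find_peaks(env, height=0.5 * env.max(), distance=int(0.2 * fs))[0]
print("envelope peaks near:", t[peaks])
```

Features derived from such an envelope (beat count, intervals, relative intensity) are the kind of inputs a downstream classifier can use.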
130

Markov Approximations: The Characterization of Undermodeling Errors

Lei, Lei 04 July 2006
This thesis is concerned with characterizing the quality of hidden Markov modeling when learning from limited data. It introduces a new perspective on different sources of error to describe the impact of undermodeling. Our view is that modeling errors can be decomposed into two primary sources: the approximation error and the estimation error. This thesis takes a first step towards exploring the approximation error of low-order HMMs that best approximate a true system that is itself an HMM. We introduce the notion of minimality and show that best approximations of the true system with complexity greater than or equal to the order of a minimal system are actually equivalent realizations. Understanding this further allows us to explore integer lumping and to present a new method, named weighted lumping, for finding realizations. We also show that best approximations of order strictly less than that of a minimal realization are truly approximations; they are incapable of mimicking the true system exactly. Our work then proves that the resulting approximation error is non-decreasing as the model order decreases, verifying the intuitive idea that increasingly simplified models are less and less descriptive of the true system.
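A minimal sketch of the lumping idea discussed above: states of a Markov chain are merged according to a partition, and the lumped transition matrix is formed by weighting each state's outgoing probabilities by its stationary mass within its block. This follows a generic weighted-aggregation recipe and is not necessarily the thesis's exact definition of weighted lumping.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an ergodic transition matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def lump(P, partition):
    """Aggregate states into blocks, weighting each state's outgoing
    probabilities by its stationary mass within its block."""
    pi = stationary(P)
    k = len(partition)
    Q = np.zeros((k, k))
    for a, block_a in enumerate(partition):
        w = pi[block_a] / pi[block_a].sum()
        for b, block_b in enumerate(partition):
            Q[a, b] = w @ P[np.ix_(block_a, block_b)].sum(axis=1)
    return Q

P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
print(lump(P, [np.array([0, 1]), np.array([2])]))   # 2-state approximation
```

Whether such a reduced chain can reproduce the original exactly is precisely the lumpability question; when it cannot, the result is a genuine low-order approximation of the kind the thesis analyzes.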
