401

Automatic detection and classification of events on power wheelchairs using embedded sensors

Kardehi Moghaddam, Athena January 2013 (has links)
Using a power wheelchair (PW) is a difficult task that requires specialized motor-control training for its users. The objective of this thesis is to develop computational tools that automatically identify user driving behaviors in order to design user-specific training methods. There are many research projects on human activity recognition using wearable sensors such as accelerometers; however, work on PW event recognition is very rare. Moreover, for many PW applications the decision must be made with very low time complexity, since the consequences of an accident can be serious. In this thesis, we propose a machine learning framework for PW activity recognition. The framework contains three main steps: data logging, feature extraction, and event classification. In the first step, PWs are outfitted with a data-logging platform that records movement data such as acceleration. In the next step, four different types of features are extracted from the preprocessed movement data, and in the last step, a classifier is trained to classify 35 different types of wheelchair activities. The classification accuracy obtained from the four feature types is compared: time-delay embeddings, time-domain characterization, frequency-domain features, and wavelet transforms. In a first analysis, we compare classification accuracy when distinguishing between safe and unsafe events; in a second analysis, we analyze classification accuracy when distinguishing between the 35 event types. We show that a large proportion of activities can be detected using time-delay embedding features; in particular, this method performs especially well at detecting unsafe events.
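The listing gives no code for the framework above; as a rough illustration, the sketch below computes time-delay embedding features from an accelerometer window and feeds them to a generic off-the-shelf classifier. The window length, embedding parameters, random placeholder data, and the choice of a random forest are assumptions made for the sake of a runnable example, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def time_delay_embedding(signal, dim=5, delay=2):
    """Embed a 1-D signal into dim-dimensional delay vectors spaced 'delay' samples apart."""
    n = len(signal) - (dim - 1) * delay
    return np.stack([signal[i * delay : i * delay + n] for i in range(dim)], axis=1)

def window_features(window, dim=5, delay=2):
    """Summarize the delay vectors of one accelerometer window as a fixed-length feature vector."""
    emb = time_delay_embedding(window, dim, delay)
    return np.concatenate([emb.mean(axis=0), emb.std(axis=0)])

# Hypothetical data: one accelerometer window per labelled wheelchair event.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 128))      # 200 windows of 128 samples (placeholder)
labels = rng.integers(0, 2, size=200)      # 0 = safe, 1 = unsafe (placeholder labels)

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

Summarizing the delay vectors per window yields one fixed-length feature vector per event, which matches the classification setup described in the abstract.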
402

Mining hospital admission-discharge data to discover the chance of readmission

Hosseinzadeh, Arian January 2013 (has links)
The rising cost of unplanned hospital readmissions has sparked calls for identifying medical system failures, best practices, and interventions in order to reduce the incidence of avoidable readmission. Readmissions currently account for 18% of total hospital admissions among Medicare patients in the United States. Distinguishing avoidable from unavoidable readmissions is a complex problem, but tackling it can shed light on readmission determinants and contributing factors. The objective of this thesis is to gain knowledge about the role that dispensed drugs, medical procedures, and diagnostic information play in predicting the chance of readmission within thirty days of a hospital discharge, using machine learning techniques. The prediction of hospital readmission is formulated as a supervised learning problem. Two supervised learning models, Naïve Bayes and decision trees, are used in the thesis to predict the chance of readmission based on patients' demographic information, prescription drugs, and the diagnosis and procedure codes extracted from hospital discharge summaries. The empirical analysis improves the understanding of hospital readmission prediction and identifies patient subpopulations for which readmission prediction is naturally more difficult. Comparing the performance of different methods, using AUC as the measure of performance, we found that the combination of a Naïve Bayes classifier and Gini-index feature selection performs slightly better than the other methods on this dataset. We also found that some diagnostic features play an important role in distinguishing outliers; removing these outliers from the data results in significant performance gains in the prediction of readmission.
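As a concrete illustration of the setup described above, the following sketch compares a Naïve Bayes classifier and a decision tree by AUC on synthetic binary code indicators. The data are random placeholders, and Gini-based feature importances from a decision tree stand in for the thesis's Gini-index feature selection; none of this reproduces the actual study.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in: binary indicators for dispensed drugs, diagnosis and procedure codes.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(2000, 300))   # 2000 discharges, 300 code indicators (placeholder)
y = rng.integers(0, 2, size=2000)          # 1 = readmitted within 30 days (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# Gini-based feature ranking via a decision tree's impurity importances,
# used here as a rough analogue of Gini-index feature selection.
tree = DecisionTreeClassifier(criterion="gini", random_state=1).fit(X_tr, y_tr)
top = np.argsort(tree.feature_importances_)[::-1][:50]

models = [("naive bayes", BernoulliNB()),
          ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=1))]
for name, model in models:
    model.fit(X_tr[:, top], y_tr)
    scores = model.predict_proba(X_te[:, top])[:, 1]
    print(name, "AUC:", round(roc_auc_score(y_te, scores), 3))
```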
403

Concealed intelligence : a description of highly emotionally intelligent students with learning disabilities

King, Clea Larissa 11 1900 (has links)
This multiple case study describes students who are highly emotionally competent yet have learning disabilities. The study sheds light on how such students perceive their educational experience and begins to answer inter-related questions, such as how emotional strengths assist with learning disabilities. The participant group ranged from 11 to 16 years of age and came from two separate schools that actively work with students diagnosed with learning disabilities. The study was divided into two phases. In the first phase, the Mayer-Salovey-Caruso Emotional Intelligence Test - Youth Version (MSCEIT-YV) was given to students in the two participating classes. The two students from each class who achieved the highest scores on the MSCEIT-YV were then asked to participate in the second phase of the study, in which the researcher conducted observations of the participants within the school environment. Additionally, the participants attended a semi-structured interview, with interview questions based on the MSCEIT-YV and school-related scenarios. Themes that emerged were then analyzed and compared within and between cases as well as with the emotional intelligence research literature. Case study descriptions emerged from this analysis, and a brief follow-up interview was conducted with one family member and the participating student as a means of sharing and verifying findings. Participants revealed varying ability with emotional intelligence; however, all students demonstrated strong abilities with the 'Strategic Emotional Reasoning' skills associated with Mayer, Salovey and Caruso's (2004) theory of emotional intelligence. Moreover, all students showed a strong ability to use their emotional intelligence to improve academic functioning, with one student in particular displaying outstanding abilities and insights into emotional intelligence. The study contributes to our understanding of the complexity of ability and disability that can exist within students diagnosed with learning disabilities; this understanding, in turn, may be reflected in how these students are perceived and understood by researchers and teachers alike.
404

Grammatical methods in computer vision

Purdy, Eric 03 May 2013 (has links)
In computer vision, grammatical models are models that represent objects hierarchically as compositions of sub-objects. This allows us to specify rich object models in a standard Bayesian probabilistic framework. In this thesis, we formulate shape grammars, a probabilistic model of curve formation that allows for both continuous variation and structural variation. We derive an EM-based training algorithm for shape grammars. We demonstrate the effectiveness of shape grammars for modeling human silhouettes, and also demonstrate their effectiveness in classifying curves by shape. We also give a general method for heuristically speeding up a large class of dynamic programming algorithms. We provide a general framework for discussing coarse-to-fine search strategies, and provide proofs of correctness. Our method can also be used with inadmissible heuristics.

Finally, we give an algorithm for doing approximate context-free parsing of long strings in linear time. We define a notion of approximate parsing in terms of restricted families of decompositions, and construct small families which can approximate arbitrary parses.
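The approximate linear-time parser is defined relative to exact chart parsing. For reference, here is a minimal sketch of standard CYK recognition for a grammar in Chomsky normal form, the cubic-time baseline that approximate parsing of long strings is meant to sidestep; the toy grammar is purely illustrative and not taken from the thesis.

```python
from collections import defaultdict

def cyk_recognize(words, lexicon, rules, start="S"):
    """Exact CYK recognition for a CNF grammar: O(n^3 * |rules|) in the string length n."""
    n = len(words)
    chart = defaultdict(set)                      # chart[(i, j)] = nonterminals spanning words[i:j]
    for i, w in enumerate(words):
        chart[(i, i + 1)] |= lexicon.get(w, set())
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for parent, (left, right) in rules:
                    if left in chart[(i, k)] and right in chart[(k, j)]:
                        chart[(i, j)].add(parent)
    return start in chart[(0, n)]

# Toy CNF grammar (purely illustrative).
lexicon = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
rules = [("S", ("NP", "VP")), ("VP", ("V", "NP"))]
print(cyk_recognize(["she", "eats", "fish"], lexicon, rules))   # True
```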
405

An N-gram enhanced learning classifier for Chinese character recognition

Ayer, Eliot William 21 November 2013 (has links)
Fast and accurate recognition of offline Chinese characters is a problem significantly more difficult than the recognition of the English alphabet. The vastly larger set of characters and noise in handwriting require more sophisticated normalization, feature extraction, and classification methods. This thesis explores the feasibility of a fast and accurate classification and translation retrieval system. An ensemble classifier composed of k-nearest neighbors and support vector machines is used as the basis of a fast classifier to recognize Chinese and Japanese characters. In contrast to other models, this classifier incorporates contextual N-gram information directly into the classification task to increase the accuracy of the classifier.
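As an illustration of the kind of ensemble-plus-context classification described above, the sketch below averages class probabilities from a k-nearest-neighbors model and an SVM and blends them with a bigram model during greedy decoding. The feature dimensions, placeholder data, small class inventory, and the greedy (rather than, say, Viterbi) decoder are assumptions made only to keep the example short and runnable.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training data: feature vectors for isolated characters with integer class labels.
rng = np.random.default_rng(2)
X_train = rng.normal(size=(300, 64))
y_train = np.arange(300) % 10                    # 10 character classes (placeholder)

svm = SVC(probability=True, random_state=2).fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def ensemble_scores(x):
    """Average the two classifiers' class-probability estimates for one character image."""
    return 0.5 * svm.predict_proba([x])[0] + 0.5 * knn.predict_proba([x])[0]

def decode(sequence, bigram, alpha=0.5):
    """Greedy left-to-right decoding: visual score blended with a bigram model over classes."""
    prev, out = None, []
    for x in sequence:
        scores = np.log(ensemble_scores(x) + 1e-9)
        if prev is not None:
            scores = scores + alpha * np.log(bigram[prev] + 1e-9)
        prev = int(np.argmax(scores))
        out.append(prev)
    return out

# Uniform placeholder bigram table: bigram[c] gives P(next class | previous class c).
bigram = np.full((10, 10), 0.1)
print(decode(rng.normal(size=(4, 64)), bigram))
```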
406

A study of neural networks in thermal systems

Penaranda, Guillermo January 1994 (has links)
Neural networks have been found to be useful as a technique for modeling non-linear functions or processes that involve several variables. The primary goal of this thesis is to explore the feasibility of applying feedforward backpropagation neural networks to the optimization of multistage thermal systems. The idea is to use neural networks as a function-approximation technique for each stage of a multistage process; once the approximation is successful, existing optimization methods are used to obtain the parameters that optimize the system. In addition, it is shown how feedforward backpropagation neural networks can be used to solve calculus-of-variations problems by separating the process into discrete stages, thus forming a multistage process problem. Finally, parallel work was done on developing a faster deterministic training algorithm as an alternative to the time-consuming backpropagation training algorithm.
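A minimal sketch of the basic idea, under assumed placeholders: a feedforward network trained by backpropagation approximates one stage's cost as a function of a design variable, and a standard optimizer (here, a simple grid search) is then applied to the surrogate. The stage function, network size, and data are invented for illustration and are not taken from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder "stage" function: cost of one stage of a thermal process as a
# function of a single design variable (standing in for measured/simulated data).
def stage_cost(x):
    return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, size=(400, 1))
y = stage_cost(x).ravel() + rng.normal(scale=0.01, size=400)

# Feedforward network trained by backpropagation, used as a surrogate of the stage.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=3).fit(x, y)

# Optimize the surrogate with a simple grid search (any standard optimizer could be used).
grid = np.linspace(0, 1, 1001).reshape(-1, 1)
best_idx = int(np.argmin(net.predict(grid)))
print("estimated optimal setting:", grid[best_idx, 0])
```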
407

Application of back-propagation neural networks to the modeling and control of multiple-input, multiple-output processes

Takasu, Shinji January 1991 (has links)
Certain properties of back-propagation neural networks have been found to be useful in structuring models for multiple-input, multiple-output (MIMO) processes. The network's simplicity and its ability to identify non-linearity can have wide impact on the construction of model-based control systems. Care must be taken to train the network with consistent data that contains sufficient dynamic information. A predictive control system based on the network model is proposed. Although the controller is relatively simple in terms of concept and computation, it shows excellent performance in both servo and regulator problems. Model prediction error sometimes causes cyclic behavior in the process responses; however, this can be stabilized by imposing certain constraints on controller action. The constraints are also effective in the presence of noisy measurements. The use of neural networks for modeling and control of MIMO systems appears to be very promising, given their ability to treat non-linearity and process interactions.
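The sketch below illustrates the general approach in a heavily simplified, one-step form: a feedforward network is identified from input-output data of a placeholder two-input, two-output plant, and the control move is chosen as the candidate whose predicted output is closest to the setpoint. The plant model, one-step horizon, and grid of candidate inputs are assumptions for illustration, not the controller developed in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder 2-input, 2-output plant used only to generate identification data.
def plant(yk, uk):
    return np.array([0.8 * yk[0] + 0.2 * np.tanh(uk[0]),
                     0.7 * yk[1] + 0.3 * np.tanh(uk[0] + uk[1])])

rng = np.random.default_rng(4)
Y, U, Ynext = [], [], []
yk = np.zeros(2)
for _ in range(2000):
    uk = rng.uniform(-1, 1, size=2)
    Y.append(yk); U.append(uk)
    yk = plant(yk, uk)
    Ynext.append(yk)

# Identify a MIMO one-step-ahead model: (y_k, u_k) -> y_{k+1}.
X = np.hstack([np.array(Y), np.array(U)])
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=4).fit(X, np.array(Ynext))

# One-step "predictive control": pick the candidate input whose predicted output
# is closest to the setpoint (a crude stand-in for a full predictive controller).
setpoint = np.array([0.5, 0.2])
y_now = np.zeros(2)
candidates = np.array([[a, b] for a in np.linspace(-1, 1, 21) for b in np.linspace(-1, 1, 21)])
preds = model.predict(np.hstack([np.tile(y_now, (len(candidates), 1)), candidates]))
u_best = candidates[np.argmin(np.linalg.norm(preds - setpoint, axis=1))]
print("chosen control move:", u_best)
```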
408

Control of serial and parallel robots: Analysis and implementation

Gunawardana, Ruvinda Vipul January 1999 (has links)
The research presented in this thesis falls into two areas. In the first part we address the uniform boundedness of the elements of the equations of motion of serial robots, an important issue for the control of robots in this class. The second part is dedicated to the dynamic modeling and model-based control of parallel robots. The field of serial robot control has experienced tremendous growth over the past few decades, resulting in a rigorous body of control results. An assumption frequently made in establishing the stability properties of these control laws is that the terms in the equations of motion of serial robots, such as the inertia matrix, the Coriolis/centrifugal terms, and the Hessian of the potential energy, are uniformly bounded. This assumption, however, is not valid for all serial robots. Since the stability conclusions of many control laws become only local for robots that violate this assumption, it is important to be able to determine whether the terms in question are indeed uniformly bounded for a given robot. In the first part of this research we examine this issue and characterize the class of serial robots for which each of these terms is uniformly bounded. We also derive explicit uniform bounds for these terms, which are important in control synthesis since they appear in the expressions of many control laws. The second part of this research is dedicated to parallel robots. Unlike the case of serial robots, in parallel robots the independent generalized coordinates corresponding to the actuated joints do not uniquely determine the configuration of the robot. Therefore, an important issue that must be resolved in order to derive the dynamics of parallel robots is the existence of a transformation from the independent coordinates to a set of dependent coordinates that completely determine the robot configuration. The existence of such a transformation enables the extension of most results for serial robots to parallel robots. In this research we characterize a region with specified boundaries where such a transformation exists and derive a numerical scheme for implementing the transformation in real time. Another contribution of this research is the design and construction of the Rice Planar Delta Robot, which serves as a test bed for results on parallel robots. This robot was used to experimentally verify the above results in a trajectory-tracking experiment and a fast pick-and-place experiment.
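For reference, the terms discussed above are those of the standard rigid-body equations of motion. A generic statement of the model and of the kind of uniform bounds in question is given below; the notation and constants are generic, not necessarily those used in the thesis.

```latex
% Standard equations of motion for an n-joint serial robot with generalized
% coordinates q, inertia matrix M(q), Coriolis/centrifugal matrix C(q,\dot q),
% potential energy P(q), and joint torques \tau:
\[
  M(q)\,\ddot q + C(q,\dot q)\,\dot q + \frac{\partial P}{\partial q}(q) = \tau .
\]
% Uniform boundedness asks for constants, valid for all q and \dot q, such as
\[
  \lVert M(q)\rVert \le k_M, \qquad
  \lVert C(q,\dot q)\rVert \le k_C\,\lVert \dot q\rVert, \qquad
  \Bigl\lVert \frac{\partial^2 P}{\partial q^2}(q) \Bigr\rVert \le k_P ,
\]
% bounds that hold for some robot geometries but not for all, which is precisely
% the distinction the thesis characterizes.
```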
409

Stochastic instruction scheduling

Schielke, Philip John January 2000 (has links)
Instruction scheduling is a code-reordering transformation used to hide the latencies present in modern microprocessors. Scheduling is often critical to achieving peak performance from these processors. The designer of a compiler's instruction scheduler has many choices to make, including the scope of scheduling, the underlying scheduling algorithm, and the handling of interactions between scheduling and other transformations. List scheduling algorithms, and variants thereof, have been the dominant algorithms used by instruction schedulers for years. In this work we explore the strengths and weaknesses of this algorithm with the aid of stochastic scheduling techniques. We call these new techniques RBF (randomized backward and forward scheduling) and iterative repair (IR). We examine how the algorithms perform in a variety of contexts, including different scheduling scopes, different scheduling problem instances, different architectural features, and scheduling in the presence of register allocation. IR is a search framework that has received considerable attention in the artificial intelligence community, and IR scheduling techniques have shown promise on other scheduling problems such as shuttle mission scheduling. In this work we describe how to target the framework to compiler instruction scheduling. We describe the evolution of our algorithm, how to integrate register-pressure concerns, and the technique's performance. We evaluate the alternative algorithms on a set of real applications and on random instruction scheduling problems. Not surprisingly, list scheduling performs very well when scheduling basic blocks of machine instructions; however, there is some opportunity for alternative techniques when scheduling over larger scopes and targeting more complicated architectures. We describe an interesting link between list scheduling efficacy and the amount of parallelism available in a random problem instance. Increasingly, complex microprocessors are being used in embedded systems, where the price of a system is often affected by the amount of on-board memory needed to store executable code. Much work on instruction scheduling has improved the running time of scheduled code while sacrificing code size, so finding scheduling techniques that do not increase code size is another focus of this work. We also develop techniques to decrease code size without increasing running time using genetic algorithms, another stochastic search method.
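As a point of reference for the comparisons above, here is a minimal sketch of greedy list scheduling over a dependence DAG for a single-issue machine, with critical-path height as the priority function. The priority choice, machine model, and toy example are assumptions; this is not one of the schedulers evaluated in the thesis.

```python
def list_schedule(succ, latency):
    """Greedy list scheduling of a dependence DAG on a single-issue machine.

    succ[i]    -> list of (j, lat): instruction j depends on i and may issue lat cycles after i
    latency[i] -> every instruction appears as a key here
    Returns a map instruction -> issue cycle.
    """
    nodes = list(latency)
    preds = {i: 0 for i in nodes}
    earliest = {i: 0 for i in nodes}
    for i in nodes:
        for j, _ in succ.get(i, []):
            preds[j] += 1

    # Priority: length of the longest latency path from the node to a leaf.
    height = {}
    def h(i):
        if i not in height:
            height[i] = latency[i] + max((lat + h(j) for j, lat in succ.get(i, [])), default=0)
        return height[i]

    ready = {i for i in nodes if preds[i] == 0}
    cycle, schedule = 0, {}
    while len(schedule) < len(nodes):
        avail = [i for i in ready if earliest[i] <= cycle]
        if avail:
            i = max(avail, key=h)                 # highest critical-path height first
            schedule[i] = cycle
            ready.discard(i)
            for j, lat in succ.get(i, []):
                earliest[j] = max(earliest[j], cycle + lat)
                preds[j] -= 1
                if preds[j] == 0:
                    ready.add(j)
        cycle += 1
    return schedule

# Tiny example: c depends on a (latency 2) and on b (latency 1).
succ = {"a": [("c", 2)], "b": [("c", 1)]}
latency = {"a": 1, "b": 1, "c": 1}
print(list_schedule(succ, latency))               # {'a': 0, 'b': 1, 'c': 2}
```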
410

Tracking evolution of learning on a visualmotor task

Siruguri, Sameer Anand January 2001 (has links)
We construct models of the evolution of human learning on a visualmotor task by analysing a large sequential corpus of low-level performance data generated from it. The performance data is drawn sparsely from a large, high-dimensional space, is non-stationary (slowly evolving control policies are punctuated by radical conceptual shifts), and has non-Gaussian noise, which is difficult to model. We develop novel, data-driven algorithms for identifying the conceptual shifts, and for constructing compact representations of the subjects' stationary control policies. The policy models are "local" and use a novel extension to locally weighted regression. The closeness of fit of model performance to human learning curves experimentally demonstrates the effectiveness of our methods. In contrast to previous modeling work, we make no a priori assumptions about the underlying cognitive architecture required to duplicate subject behavior. By comparing the performance of our methods to decision trees, we demonstrate the superiority of local models for learning compact representations of high-dimensional, noisy, non-stationary sequential data.
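As background for the "local" policy models mentioned above, the following is a minimal sketch of ordinary locally weighted regression with a Gaussian kernel, the standard technique the thesis extends; the bandwidth, the linear local model, and the synthetic data are placeholders.

```python
import numpy as np

def locally_weighted_fit(x_query, X, y, bandwidth=0.3):
    """Predict y at x_query by fitting a weighted least-squares line around it."""
    w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))   # Gaussian kernel weights
    A = np.stack([np.ones_like(X), X], axis=1)                 # local linear model [1, x]
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)   # weighted least squares
    return beta[0] + beta[1] * x_query

# Synthetic, noisy data standing in for sequential performance measurements.
rng = np.random.default_rng(5)
X = np.sort(rng.uniform(0, 3, size=200))
y = np.sin(2 * X) + 0.1 * rng.standard_normal(200)

queries = np.linspace(0, 3, 7)
print([round(float(locally_weighted_fit(q, X, y)), 3) for q in queries])
```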
