  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

Learning Accurate Regressors for Predicting Survival Times of Individual Cancer Patients

Lin, Hsiu-Chin Unknown Date
No description available.
272

Assisting Failure Diagnosis through Filesystem Instrumentation

Huang, Liang Unknown Date
No description available.
273

A general framework for reducing variance in agent evaluation

White, Martha Unknown Date
No description available.
274

Modelling motor cortex using neural network control laws

Lillicrap, Timothy Paul 31 January 2014
The ease with which our brains learn to control our bodies belies intricate neural processing that remains poorly understood. We know that a network of brain regions works together in a carefully coordinated fashion to allow us to move from one place to another. In mammals, the motor cortex plays a central role in this process, but precisely how its activity contributes to control is a matter of long and continued debate. In this thesis we demonstrate the need for mechanistic neural network models to address this question. Using such models, we show that contentious response properties of non-human primate primary motor cortex (M1) neurons can be understood as reflecting control processes that take into account the physics of the body, and we develop new computational techniques for teaching neural network models how to execute control. In the first study (Chapter 2), we critically examine a recently developed correlation-based descriptive model for characterizing the activity of M1 neurons. In the second study (Chapter 3), we develop neural network control laws that perform reaching and postural tasks using a physics model of the upper limb. We show that the populations of artificial neurons in these networks exhibit preferences for certain directions of movement and for certain forces applied during posture. These patterns parallel empirical observations in M1, and the model shows that they reflect particular features of the biomechanics of the arm. The final study (Chapter 4) develops new techniques for building network models: to understand how the brain solves difficult control tasks, we need to construct mechanistic models that can do the same, and controllers that compute via simple neuron-like units. In this study, we combine tools for automatic computation of derivatives with recently developed second-order approaches to optimization to build better neural network control laws. Taken together, this thesis develops both the arguments for, and the tools to build, mechanistic neural network models of how motor cortex contributes to control of the body. / Thesis (Ph.D, Neuroscience) -- Queen's University, 2014-01-31 10:34:43.816
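As an aside on the optimization-based approach to control laws described in the abstract above, the sketch below is a toy illustration only, not the thesis model: it fits a linear feedback control law for a hypothetical 1-D point mass by descending the cost of a simulated reaching movement, with a crude finite-difference gradient standing in for the automatic differentiation and second-order methods the thesis actually uses.

    import numpy as np

    def reaching_cost(gains, target=1.0, dt=0.01, steps=200):
        """Simulate a unit point mass driven by u = -k1*(x - target) - k2*v
        and accumulate a quadratic cost on position error and control effort."""
        k1, k2 = gains
        x, v, cost = 0.0, 0.0, 0.0
        for _ in range(steps):
            u = -k1 * (x - target) - k2 * v
            v += u * dt
            x += v * dt
            cost += ((x - target) ** 2 + 0.001 * u ** 2) * dt
        return cost

    def finite_diff_grad(f, params, eps=1e-5):
        """Numerical gradient, standing in for automatic differentiation."""
        grad = np.zeros_like(params)
        for i in range(len(params)):
            bump = np.zeros_like(params)
            bump[i] = eps
            grad[i] = (f(params + bump) - f(params - bump)) / (2 * eps)
        return grad

    gains = np.array([0.1, 0.1])            # initial feedback gains (k1, k2)
    for _ in range(300):
        gains -= 0.2 * finite_diff_grad(reaching_cost, gains)
    print("learned gains:", gains, "cost:", reaching_cost(gains))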
275

Single Microphone Tap Localization

Chowdhury, Tusi 21 November 2013
This thesis explores a single-microphone tap localization interface for smartphones, Extended Touch (ET), which detects user-tapped locations on any neighboring surface. The algorithm combines accelerometer and microphone detection, making it robust to noise, and does not require knowledge of surface parameters or sensor positioning. It uses the acoustic signal as the feature vector and solves tap inference in two phases: training and detection. The training phase builds a prior model of the system by storing one or more templates of known tap locations. These templates are used in the detection phase to carry out a k-nearest-neighbor classification of new tap locations. The algorithm achieves a 92% detection rate on knock taps. A method to detect contiguous tap locations is also proposed.
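A minimal sketch of the two-phase template scheme described above (hypothetical code, not the ET implementation; extracting feature vectors from the raw microphone signal is assumed to happen elsewhere):

    import numpy as np
    from collections import Counter

    class TapLocalizer:
        def __init__(self, k=3):
            self.k = k
            self.templates = []   # list of (feature_vector, location_label)

        def train(self, features, label):
            """Training phase: store one or more templates per known tap location."""
            self.templates.append((np.asarray(features, dtype=float), label))

        def detect(self, features):
            """Detection phase: label a new tap by majority vote of its k nearest templates."""
            query = np.asarray(features, dtype=float)
            nearest = sorted(self.templates, key=lambda t: np.linalg.norm(t[0] - query))
            votes = Counter(label for _, label in nearest[: self.k])
            return votes.most_common(1)[0][0]

    # Hypothetical usage with made-up 3-D feature vectors:
    loc = TapLocalizer(k=1)
    loc.train([0.9, 0.1, 0.2], "top-left")
    loc.train([0.1, 0.8, 0.3], "bottom-right")
    print(loc.detect([0.85, 0.15, 0.25]))   # -> "top-left"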
277

An evaluation system for intelligent smart badges

Liu, Yi January 2006
In this thesis we develop and test a software algorithm for an electronic smart badge system. The smart badge system we have developed can infer the interests of the people who wear the badges by using time and position information collected by each badge. The badge can also present feedback to the wearer, so that users may be guided toward people with similar interests and so have more effective conversations. The smart badge system is based on an inference system that uses a Bayesian network. Evaluating the system was challenging because no completed badges were available to use. To overcome this, we developed a simulation of crowd behaviour in a conference setting. We tuned the parameters of the model using several test situations, and the final simulated behaviour appeared realistic. Compared to other smart badge systems, our work is unique in that it enhances conversation through real-time inference of the common ideas or interests of the conversation participants.
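To make the inference idea concrete, here is a deliberately tiny sketch of a single Bayes-rule update of the belief that two badge wearers share an interest, given how long they were observed together. The probabilities are made-up illustrative numbers, and the thesis' actual Bayesian network is considerably richer than this:

    def update_shared_interest(prior, minutes_together,
                               p_long_given_shared=0.7, p_long_given_not=0.2,
                               long_threshold=5.0):
        """Return the posterior P(shared interest | observation) after one observation."""
        long_talk = minutes_together >= long_threshold
        likelihood_shared = p_long_given_shared if long_talk else 1 - p_long_given_shared
        likelihood_not = p_long_given_not if long_talk else 1 - p_long_given_not
        evidence = likelihood_shared * prior + likelihood_not * (1 - prior)
        return likelihood_shared * prior / evidence

    belief = 0.3                                  # prior before any observation
    belief = update_shared_interest(belief, 8.0)  # one long conversation observed
    print(round(belief, 3))                       # -> 0.6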
278

Semi-supervised lexical acquisition for wide-coverage parsing

Thomforde, Emily Jane January 2013
State-of-the-art parsers suffer from incomplete lexicons, as evidenced by the fact that they all contain built-in methods for dealing with out-of-lexicon items at parse time. Since new labelled data is expensive to produce and no amount of it will conquer the long tail, we attempt to address this problem by leveraging the enormous amount of raw text available for free and expanding the lexicon offline with a semi-supervised word learner. We accomplish this with a method similar to self-training, where a fully trained parser is used to generate new parses with which the next generation of parser is trained. This thesis introduces Chart Inference (CI), a two-phase word-learning method for Combinatory Categorial Grammar (CCG) that operates on the level of the partial parse produced by a trained parser. CI uses the parsing model and lexicon to identify the CCG category type of one unknown word in a context of known words: it infers the type of the sentence using a model of end punctuation, then traverses the chart from the top down, filling in each empty cell as a function of its mother and its sister. We first specify the CI algorithm and then compare it to two baseline word-learning systems over a battery of learning tasks. CI is shown to outperform the baselines in every task and to function in a number of applications, including grammar acquisition and domain adaptation. The method performs consistently better than self-training, and improves upon the standard POS-backoff strategy employed by the baseline StatCCG parser by adding new entries to the lexicon. The first learning task establishes lexical convergence over a toy corpus, showing that CI's ability to accurately model a target lexicon is more robust to initial conditions than either of the baseline methods. We then introduce a novel natural language corpus based on children's educational materials, fully annotated with CCG derivations, and use it as a testbed to establish that CI is capable in principle of recovering the whole range of category types necessary for a wide-coverage lexicon. The complexity of the learning task is then increased using the CCGbank corpus, a version of the Penn Treebank, and we show that CI improves as its initial seed corpus is increased. The next experiment uses CCGbank as the seed and attempts to recover missing question-type categories in the TREC question answering corpus. The final task extends the coverage of the CCGbank-trained parser by running CI over the raw text of the Gigaword corpus. Where appropriate, a fine-grained error analysis is also undertaken to supplement the quantitative evaluation of parser performance with deeper reasoning about the linguistic properties of the lexicon and parsing model.
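As a rough illustration of the "fill an empty cell from its mother and its sister" step, the hypothetical sketch below enumerates the CCG categories an unknown daughter could take under plain function application only; the real Chart Inference system also handles other combinators and uses the parsing model to score the candidates:

    # Categories are plain strings such as "S", "NP", "S\\NP", "(S\\NP)/NP".

    def wrap(cat):
        """Parenthesise complex categories so slashes stay unambiguous."""
        return f"({cat})" if ("/" in cat or "\\" in cat) else cat

    def candidate_categories(mother, sister, unknown_on_left):
        """Categories the unknown daughter could have, assuming one application step."""
        candidates = []
        if unknown_on_left:
            # unknown sister => mother by forward application: unknown = mother/sister
            candidates.append(f"{wrap(mother)}/{wrap(sister)}")
            # or the sister is the functor mother\unknown: unknown is its argument
            if sister.startswith(f"{wrap(mother)}\\"):
                candidates.append(sister[len(wrap(mother)) + 1:])
        else:
            # sister unknown => mother by backward application: unknown = mother\sister
            candidates.append(f"{wrap(mother)}\\{wrap(sister)}")
            # or the sister is the functor mother/unknown: unknown is its argument
            if sister.startswith(f"{wrap(mother)}/"):
                candidates.append(sister[len(wrap(mother)) + 1:])
        return candidates

    # e.g. an unknown word left of a known NP under an S\NP mother:
    print(candidate_categories("S\\NP", "NP", unknown_on_left=True))   # ['(S\\NP)/NP']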
279

Learning matrix and functional models in high-dimensions

Balasubramanian, Krishnakumar 27 August 2014
Statistical machine learning methods provide us with a principled framework for extracting meaningful information from noisy high-dimensional data sets. A key feature of such procedures is that the inferences made are statistically significant, computationally efficient, and scientifically meaningful. In this thesis we make several contributions to such statistical procedures; our contributions are two-fold. We first address prediction and estimation problems in non-standard situations. We show that even when given no access to labeled samples, one can still consistently estimate the error rate of predictors and train predictors with respect to a given (convex) loss function. We next propose an efficient procedure for prediction with large output spaces that scales logarithmically in the dimensionality of the output space. We further propose an asymptotically optimal procedure for sparse multi-task learning when the tasks share a joint support, show consistency of the proposed method, and derive rates of convergence. We next address the problem of learning meaningful representations of data. We propose a method for learning sparse representations that takes into account the structure of the data space, demonstrate how it enables one to obtain meaningful features, and establish sample complexity results for the proposed approach. We then propose a model-free feature selection procedure and establish its sure-screening property in the high-dimensional regime. Furthermore, we show that with a slight modification, the approach previously proposed for sparse multi-task learning enables one to obtain sparse representations for multiple related tasks simultaneously.
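For readers unfamiliar with screening, the sketch below shows generic marginal (correlation-based) screening in a p >> n setting; it is a standard stand-in for illustration, not the specific model-free procedure proposed in the thesis:

    import numpy as np

    def marginal_screen(X, y, d):
        """Return indices of the d columns of X most correlated (in absolute value) with y."""
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        # Column-wise Pearson correlations with the response.
        corr = (Xc * yc[:, None]).sum(axis=0) / (
            np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
        return np.argsort(-np.abs(corr))[:d]

    rng = np.random.default_rng(0)
    n, p = 100, 2000                      # many more features than samples
    X = rng.standard_normal((n, p))
    y = 3 * X[:, 5] - 2 * X[:, 17] + 0.5 * rng.standard_normal(n)
    print(sorted(marginal_screen(X, y, 10)))   # should contain 5 and 17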
280

Modeling Mobile User Behavior for Anomaly Detection

Buthpitiya, Senaka 01 April 2014
As ubiquitous computing (ubicomp) technologies reach maturity, smart phones and context-based services are gaining mainstream popularity. A smart phone accompanies its user throughout nearly all aspects of his life, becoming an indispensable assistant the busy user relies on to help navigate his life: map applications to navigate the physical world, email and instant messaging applications to keep in touch, media player applications to be entertained, and so on. As a smart phone is capable of sensing the physical and virtual context of the user with an array of “hard” sensors (e.g., GPS, accelerometer) and “soft” sensors (e.g., email, social network, calendar), it is well equipped to tailor the assistance it provides to the user. Over its life, a smart phone is entrusted with an enormous amount of personal information, everything from context information sensed by the phone to contact lists, call logs, and passwords. Based on this rich set of information it is possible to model the behavior of the user and use the models to detect anomalies (i.e., significant variations) in the user’s behavior. Anomaly detection capabilities enable a variety of application domains such as device theft detection, improved authentication mechanisms, impersonation prevention, physical emergency detection, remote elder-care monitoring, and other proactive services. There has been extensive prior research on anomaly detection in various application domains (e.g., fraud detection, intrusion detection). Yet these approaches cannot be used in ubicomp environments because 1) they are very application-specific and not versatile enough to learn the complex day-to-day behavior of users, 2) they work with a very small number of information sources producing a relatively uniform stream of information (unlike sensor data from mobile devices), and 3) most approaches require labeled or semi-labeled data about anomalies (in ubicomp environments, it is very costly to create labeled datasets). Existing work on anomaly detection in ubicomp environments is quite sparse, and most of it focuses on using a single sensor information stream (GPS in most cases) to detect anomalies in the user’s behavior. A somewhat richer vein of prior work models user behavior with the goal of behavior prediction, but this too is limited mostly to a single sensor stream or a single type of prediction (mostly location). This dissertation presents the notion of modeling mobile user behavior as a collection of models, each capturing an aspect of the user’s behavior such as indoor mobility, typing patterns, or calling patterns. A novel mechanism, CobLE, is developed for combining these models, which operate on asynchronous information sources from the mobile device, taking into account how well each model is estimated to perform in the current context. These ideas are concretely implemented in an extensible framework, McFAD. Evaluations of this framework on real-world datasets show that, in contrast to prior work, the framework for detecting anomalous behavior 1) vastly reduces the training data requirement, 2) increases coverage, and 3) dramatically increases performance.
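As a rough illustration of combining per-aspect behavior models with context-dependent confidence (hypothetical code: the models, weights, and threshold below are invented, and this is not the CobLE algorithm itself):

    from typing import Callable, Dict

    def combined_anomaly_score(
        observation: Dict,
        models: Dict[str, Callable[[Dict], float]],    # aspect -> anomaly score in [0, 1]
        reliability: Dict[str, float],                  # aspect -> weight in current context
    ) -> float:
        """Weighted average of per-aspect anomaly scores."""
        total_weight = sum(reliability[name] for name in models) or 1.0
        return sum(reliability[name] * models[name](observation) for name in models) / total_weight

    # Hypothetical per-aspect models:
    models = {
        "mobility": lambda obs: 0.9 if obs["place"] not in obs["usual_places"] else 0.1,
        "typing":   lambda obs: min(1.0, abs(obs["keystroke_ms"] - 180) / 180),
    }
    reliability = {"mobility": 0.7, "typing": 0.3}      # e.g. GPS currently trustworthy
    obs = {"place": "unknown-cafe", "usual_places": {"home", "office"}, "keystroke_ms": 320}
    score = combined_anomaly_score(obs, models, reliability)
    print("anomalous" if score > 0.6 else "normal", round(score, 2))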
