  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Importance Sampling for Bayesian Networks: Principles, Algorithms, and Performance

Yuan, Changhe 02 October 2006 (has links)
Bayesian networks (BNs) offer a compact, intuitive, and efficient graphical representation of uncertain relationships among the variables in a domain and have proven their value in many disciplines over the last two decades. However, two challenges have become increasingly critical in practical applications of Bayesian networks. First, real models are reaching the size of hundreds or even thousands of nodes. Second, some decision problems are more naturally represented by hybrid models, which contain mixtures of discrete and continuous variables and may involve linear or nonlinear equations and arbitrary probability distributions. Both challenges make building Bayesian network models and reasoning with them more and more difficult. In this dissertation, I address these challenges by developing representational and computational solutions based on importance sampling. I first develop a more solid understanding of the properties of importance sampling in the context of Bayesian networks. Then, I address a fundamental question of importance sampling in Bayesian networks: the representation of the importance function. I derive an exact representation for the optimal importance function and propose an approximation strategy for when that representation is too complex. Based on this theoretical analysis, I propose a suite of importance sampling-based algorithms for (hybrid) Bayesian networks. I believe the new algorithms significantly extend the efficiency, applicability, and scalability of approximate inference methods for Bayesian networks. The ultimate goal of this research is to help users solve difficult reasoning problems emerging from complex decision problems in the most general settings.
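The dissertation's central tool, importance sampling for Bayesian network inference, can be illustrated on a two-node network: sample from an importance function q instead of the prior, and reweight each sample by p/q. Everything below (the network, its probabilities, and the choice of q) is an invented toy for illustration, not one of the algorithms proposed in the work:

```python
import random

# Toy network A -> B with P(A=1) = 0.2 and P(B=1 | A) as given below.
P_A1 = 0.2
P_B1_GIVEN_A = {0: 0.1, 1: 0.9}

def importance_sample_posterior(n=100_000, q_a1=0.5, seed=0):
    """Estimate P(A=1 | B=1) by importance sampling.

    Draws A from the importance function q (here Bernoulli(q_a1)) and
    weights each sample by p(A) * p(B=1 | A) / q(A); the posterior is the
    ratio of weighted counts.
    """
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        a = 1 if rng.random() < q_a1 else 0
        p_a = P_A1 if a == 1 else 1 - P_A1
        q_a = q_a1 if a == 1 else 1 - q_a1
        w = p_a * P_B1_GIVEN_A[a] / q_a   # importance weight
        num += w * a
        den += w
    return num / den

# Exact answer: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.18/0.26 ≈ 0.692
estimate = importance_sample_posterior()
```

The optimal importance function discussed in the abstract is exactly the posterior itself; the closer q is to it, the lower the variance of the estimate.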
52

Scaffolding Problem Solving with Embedded Examples to Promote Deep Learning

Ringenberg, Michael Aleksandr 23 January 2007 (has links)
This study compared the relative utility of an intelligent tutoring system that uses procedure-based hints to a version that uses worked-out examples. The system, Andes, taught college-level physics. To test which strategy produced better gains in competence, two versions of Andes were used: one offered participants graded hints and the other offered annotated, worked-out examples in response to their help requests. We found that providing examples was at least as effective as the hint sequences and was more efficient in terms of the number of problems it took to obtain the same level of mastery.
53

Planning in Hybrid Structured Stochastic Domains

Kveton, Branislav 30 January 2007 (has links)
Efficient representations and solutions for large structured decision problems with continuous and discrete variables are among the important challenges faced by the designers of automated decision support systems. In this work, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a hybrid approximate linear programming (HALP) framework that permits their efficient solution. The central idea of HALP is to approximate the optimal value function of an MDP by a linear combination of basis functions and optimize its weights by linear programming. We study both theoretical and practical aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems.
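The approximate-linear-programming idea behind HALP — write the value function as a weighted sum of basis functions and fit the weights with a linear program — can be sketched on a tiny discrete MDP. Everything here (the three-state chain, the basis choice, and the brute-force vertex-enumeration LP solver) is an illustrative assumption, not the authors' hybrid formulation:

```python
# Three-state chain MDP: reward only in state 2, deterministic dynamics.
GAMMA = 0.9
STATES = [0, 1, 2]

def phi(s):                      # basis functions: a constant and the state index
    return (1.0, float(s))

def next_state(s, action):
    return min(s + 1, 2) if action == "advance" else s

def reward(s):
    return 1.0 if s == 2 else 0.0

# ALP constraints: phi(s)·w >= r(s) + GAMMA * phi(s')·w for every (s, action),
# rewritten as (phi(s) - GAMMA * phi(s'))·w >= r(s).
constraints = []
for s in STATES:
    for action in ("stay", "advance"):
        sp = next_state(s, action)
        a_vec = tuple(p - GAMMA * q for p, q in zip(phi(s), phi(sp)))
        constraints.append((a_vec, reward(s)))

def solve_lp(constraints, c):
    """Minimize c·w subject to a·w >= b for w in R^2, by enumerating the
    vertices where pairs of constraints intersect (fine at this toy scale)."""
    best = None
    for i in range(len(constraints)):
        for j in range(i + 1, len(constraints)):
            (a1, b1), (a2, b2) = constraints[i], constraints[j]
            det = a1[0] * a2[1] - a1[1] * a2[0]
            if abs(det) < 1e-12:
                continue                     # parallel constraints: no vertex
            w = ((b1 * a2[1] - a1[1] * b2) / det,
                 (a1[0] * b2 - b1 * a2[0]) / det)
            if all(a[0] * w[0] + a[1] * w[1] >= b - 1e-9 for a, b in constraints):
                obj = c[0] * w[0] + c[1] * w[1]
                if best is None or obj < best[0]:
                    best = (obj, w)
    return best[1]

# Uniform state-relevance weights: the objective sums the approximate values.
w = solve_lp(constraints, (3.0, 3.0))
v_hat = [w[0] + w[1] * s for s in STATES]   # ≈ [8.18, 9.09, 10.0]; true V = [8.1, 9, 10]
```

As the ALP theory predicts, the fitted approximation upper-bounds the true value function at every state; HALP generalizes this construction to mixed discrete-continuous state spaces.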
54

Building Bayesian Networks: Elicitation, Evaluation, and Learning

Wang, Haiqin 15 October 2007 (has links)
As a compact graphical framework for representing multivariate probability distributions, Bayesian networks are widely used for efficient reasoning under uncertainty in a variety of applications, from medical diagnosis to computer troubleshooting and airplane fault isolation. However, the construction of Bayesian networks is often considered the main difficulty in applying this framework to real-world problems. In real-world domains, Bayesian networks are often built using a knowledge-engineering approach. Unfortunately, eliciting knowledge from domain experts is a very time-consuming process and can result in poor-quality graphical models when not performed carefully. Over the last decade, the research focus has been shifting towards learning Bayesian networks from data, especially with the increasing volumes of data available in applications such as biomedicine, the internet, and e-business. Aiming to solve this bottleneck in building Bayesian network models, this research focuses on the elicitation, evaluation, and learning of Bayesian networks. Specifically, this dissertation contributes to the following five areas: a) graphical user interface tools for efficient elicitation and navigation of probability distributions; b) systematic and objective evaluation of elicitation schemes for probabilistic models; c) valid evaluation of the performance robustness, i.e., sensitivity, of Bayesian networks; d) the sensitivity inequivalence of Markov equivalent networks, and the appropriateness of using sensitivity for model selection in learning Bayesian networks; e) selective refinement for learning the probability parameters of Bayesian networks from limited data when expert knowledge is available. In addition, an efficient algorithm for fast sensitivity analysis is developed based on a relevance reasoning technique; the implemented algorithm runs very fast and makes d) and e) more affordable in real-world practice.
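Sensitivity analysis of the kind this abstract describes asks how a query posterior changes as one conditional probability table (CPT) entry varies. A minimal numerical sketch on an invented two-node network (not the relevance-reasoning algorithm itself); for a single parameter the posterior is a ratio of two linear functions of that parameter:

```python
# Toy network A -> B; we vary the single CPT entry p = P(B=1 | A=1).
P_A1 = 0.3
P_B1_GIVEN_A0 = 0.2

def posterior_a1_given_b1(p_b1_given_a1):
    """P(A=1 | B=1) as a function of the CPT entry P(B=1 | A=1)."""
    joint_a1 = P_A1 * p_b1_given_a1
    joint_a0 = (1 - P_A1) * P_B1_GIVEN_A0
    return joint_a1 / (joint_a1 + joint_a0)

def sensitivity(p, eps=1e-6):
    """Central-difference derivative of the posterior w.r.t. the CPT entry."""
    return (posterior_a1_given_b1(p + eps)
            - posterior_a1_given_b1(p - eps)) / (2 * eps)

s = sensitivity(0.6)   # analytically 0.3*0.14 / (0.3*0.6 + 0.14)**2
```

In a large network the relevance-reasoning step would first prune nodes that cannot affect the query, which is what makes repeated evaluations like this affordable.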
55

Learning Patient-Specific Models From Clinical Data

Visweswaran, Shyam 29 January 2008 (has links)
A key purpose of building a model from clinical data is to predict the outcomes of future individual patients. This work introduces a Bayesian patient-specific predictive framework for constructing predictive models from data that are optimized to predict well for a particular patient case. The construction of such <i>patient-specific models</i> is influenced by the particular history, symptoms, laboratory results, and other features of the patient case at hand. This approach is in contrast to the commonly used <i>population-wide models</i> that are constructed to perform well on average on all future cases. The new patient-specific method described in this research uses Bayesian network models, carries out Bayesian model averaging over a set of models to predict the outcome of interest for the patient case at hand, and employs a patient-specific heuristic to locate a set of suitable models to average over. Two versions of the method are developed that differ in the representation used for the conditional probability distributions in the Bayesian networks. One version uses a representation that captures only the so-called <i>global structure</i> among the variables of a Bayesian network, and the second captures additional <i>local structure</i> among the variables. The patient-specific methods were experimentally evaluated on one synthetic dataset, 21 UCI datasets, and three medical datasets. Their performance was measured using five different performance measures and compared to that of several commonly used methods for constructing predictive models, including naïve Bayes, C4.5 decision tree, logistic regression, neural networks, k-Nearest Neighbor, and Lazy Bayesian Rules. Over all the datasets, both patient-specific methods performed better on average on all performance measures and against all the comparison algorithms. The <i>global structure</i> method, which performs Bayesian model averaging in conjunction with the patient-specific search heuristic, had better performance than either model selection with the patient-specific heuristic or non-patient-specific Bayesian model averaging. However, the additional learning of local structure by the <i>local structure</i> method did not lead to significant improvements over the use of global structure alone. Implementation constraints of the local structure method may have limited its performance.
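Bayesian model averaging as described — weight each model's prediction by its posterior probability given the data — can be sketched as follows. The `BernoulliModel` class and all its numbers are hypothetical stand-ins for the Bayesian network models used in the dissertation:

```python
import math

class BernoulliModel:
    """Toy 'model': a fixed Bernoulli rate theta for a binary outcome."""
    def __init__(self, theta, log_prior=0.0):
        self.theta = theta
        self.log_prior = log_prior          # log P(M), uniform by default

    def log_marginal_likelihood(self, data):
        return sum(math.log(self.theta if x else 1 - self.theta) for x in data)

    def predict(self, case):
        # A real model would condition on the case's features; the toy ignores it.
        return self.theta

def bma_predict(models, data, case):
    """Average each model's prediction, weighted by its posterior
    P(M | D) ∝ P(D | M) P(M), normalized over the model set."""
    log_w = [m.log_prior + m.log_marginal_likelihood(data) for m in models]
    mx = max(log_w)
    weights = [math.exp(lw - mx) for lw in log_w]   # subtract max for stability
    z = sum(weights)
    return sum(w / z * m.predict(case) for w, m in zip(weights, models))

models = [BernoulliModel(0.1), BernoulliModel(0.9)]
data = [1] * 8 + [0] * 2
pred = bma_predict(models, data, case=None)   # dominated by the theta=0.9 model
```

The patient-specific twist in the dissertation is in *which* models enter the average: a heuristic search selects models suited to the case at hand rather than averaging over a fixed population-wide set.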
56

Fine-grained Subjectivity and Sentiment Analysis: Recognizing the intensity, polarity, and attitudes of private states

Wilson, Theresa Ann 16 June 2008 (has links)
Private states (mental and emotional states) are part of the information that is conveyed in many forms of discourse. News articles often report emotional responses to news stories; editorials, reviews, and weblogs convey opinions and beliefs. This dissertation investigates the manual and automatic identification of linguistic expressions of private states in a corpus of news documents from the world press. A term for the linguistic expression of private states is subjectivity. The conceptual representation of private states used in this dissertation is that of Wiebe et al. (2005). As part of this research, annotators are trained to identify expressions of private states and their properties, such as the source and the intensity of the private state. This dissertation then extends the conceptual representation of private states to better model the attitudes and targets of private states. The inter-annotator agreement studies conducted for this dissertation show that the various concepts in the original and extended representation of private states can be reliably annotated. Exploring the automatic recognition of various types of private states is also a large part of this dissertation. Experiments are conducted that focus on three types of fine-grained subjectivity analysis: recognizing the intensity of clauses and sentences, recognizing the contextual polarity of words and phrases, and recognizing the attribution levels where sentiment and arguing attitudes are expressed. Various supervised machine learning algorithms are used to train automatic systems to perform each of these tasks. These experiments result in automatic systems for performing fine-grained subjectivity analysis that significantly outperform baseline systems.
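The distinction the abstract draws between a word's prior polarity and its contextual polarity can be illustrated with a single negation rule. The toy lexicon and window rule below are invented for illustration; the dissertation trains supervised classifiers over much richer contextual features rather than applying one hand-written rule:

```python
# Toy prior-polarity lexicon: +1 positive, -1 negative.
PRIOR_POLARITY = {"good": 1, "great": 1, "bad": -1, "terrible": -1}
NEGATORS = {"not", "never", "no"}

def contextual_polarity(tokens, i, window=3):
    """Guess the contextual polarity of tokens[i]: start from its prior
    polarity and flip it if a negator occurs in the preceding window."""
    polarity = PRIOR_POLARITY.get(tokens[i], 0)
    if any(t in NEGATORS for t in tokens[max(0, i - window):i]):
        polarity = -polarity
    return polarity

flipped = contextual_polarity("this is not good".split(), 3)   # prior +1, context -1
```

Even this crude rule shows why phrase-level context matters: the same lexicon entry yields opposite labels in "good" and "not good".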
57

A STUDY OF SOCIAL NAVIGATION SUPPORT UNDER DIFFERENT SITUATIONAL AND PERSONAL FACTORS

Farzan, Rosta 15 June 2009 (has links)
"Social Navigation" for the Web has been created as a response to the problem of disorientation in information space. It helps by visualizing traces of behavior of other users and adding social affordance to the information space. Despite the popularity of social navigation ideas, very few studies of social navigation systems can be found in the research literature. In this dissertation, I designed and carried out an experiment to explore the effect of several factors on social navigation support (SNS). The purpose of the experiment was to identify situations under which social navigation is most useful and to investigate the effect of personal factors, e.g., interpersonal trust, and gender on the likelihood of following social navigation cues. To gain a deeper insight into the effect of SNS on users' information seeking behavior, traditional evaluation methodologies were supplemented with eye tracking. The results of the study show that social navigation cues affect subjects' search behavior; specifically, while under time pressure subjects were more likely to use SNS. SNS was successful in guiding them to relevant documents and allowed them to achieve higher search performance. Reading abilities and interpersonal trust had a reliable effect on the SNS-following behavior and on subjects' subjective opinion about SNS. The effect of the gender was less pronounced than expected, contrary to the evidence in the literature.
58

BAYESIAN MODELING OF ANOMALIES DUE TO KNOWN AND UNKNOWN CAUSES

Shen, Yanna 01 October 2009 (has links)
Bayesian modeling of unknown causes of events is an important and pervasive problem. However, it has received relatively little research attention. In general, an intelligent agent (or system) has only limited causal knowledge of the world. Therefore, the agent may well be experiencing the influences of causes outside its model. For example, a clinician may be seeing a patient with a virus that is new to humans; HIV was at one time such an example. It is important that clinicians be able to recognize that a patient is presenting with an unknown disease. In general, intelligent agents (or systems) need to recognize under uncertainty when they are likely to be experiencing influences outside their realm of knowledge. This dissertation investigates Bayesian modeling of unknown causes of events in the context of disease-outbreak detection. The dissertation introduces a Bayesian approach that models and detects (1) known diseases (e.g., influenza and anthrax) by using informative prior probabilities, (2) unknown diseases (e.g., a new, highly contagious respiratory virus that has never been seen before) by using relatively non-informative prior probabilities, and (3) partially known diseases (e.g., a disease that has characteristics of an influenza-like illness) by using semi-informative prior probabilities. I report the results of simulation experiments which indicate that this modeling method can improve the detection of new disease outbreaks in a population. A key contribution of this dissertation is that it introduces a Bayesian approach for jointly modeling both known and unknown causes of events. Such modeling has broad applicability in artificial intelligence in general and biomedical informatics applications in particular, where the space of known causes of outcomes of interest is seldom complete.
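The modeling strategy described — informative likelihoods for known diseases, a deliberately flat distribution for the unknown cause — can be sketched as a simple Bayes computation over candidate causes. All findings, priors, and probabilities below are invented for illustration:

```python
import math

FINDINGS = ["cough", "fever", "rash"]
# Likelihood of each finding under each cause; "unknown" gets a flat,
# non-informative distribution so that it absorbs surprising evidence.
CAUSES = {
    "influenza": {"cough": 0.6, "fever": 0.35, "rash": 0.05},
    "anthrax":   {"cough": 0.2, "fever": 0.6,  "rash": 0.2},
    "unknown":   {f: 1 / 3 for f in FINDINGS},
}
PRIOR = {"influenza": 0.65, "anthrax": 0.05, "unknown": 0.30}

def posterior_over_causes(observed):
    """Posterior P(cause | observed findings), assuming i.i.d. findings."""
    log_post = {c: math.log(PRIOR[c]) + sum(math.log(lik[f]) for f in observed)
                for c, lik in CAUSES.items()}
    mx = max(log_post.values())
    unnorm = {c: math.exp(lp - mx) for c, lp in log_post.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

typical = posterior_over_causes(["cough", "cough", "fever"])   # influenza wins
strange = posterior_over_causes(["rash"] * 5)                  # "unknown" wins
```

The flat likelihood makes "unknown" a poor explanation of typical presentations but the best explanation when the evidence fits no known disease, which is exactly the detection behavior the abstract describes.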
59

ROLES OF VISUAL WORKING MEMORY, GLOBAL PERCEPTION AND EYE-MOVEMENT IN VISUAL COMPLEX PROBLEM SOLVING

Kong, Xiaohui 30 September 2009 (has links)
In this dissertation, I explore the roles of visual working memory, global perception, and eye movement in complex visual problem solving. Four experiments were conducted and two models were built and tested. Experiment one and model one showed that global information plays an important role and that there is an interaction between external representation and internal VWM in the representation of global information. Experiment two and model two showed that this interaction is achieved by encoding global information with eye movements throughout the duration of solving a problem. A very regular eye-movement pattern was observed in experiment two. Experiment three further tested the hypothesis that this eye-movement pattern is a result of the individual's VWM limitation by measuring the correlation between individual differences in the quantitative features of the eye-movement pattern and VWM size. The second model assumes that global and local information share a unified VWM capacity limitation. In the fourth experiment, I tested this hypothesis along with several alternative hypotheses. The results of the fourth experiment support the unified-capacity hypothesis best and thus complete the account of the interaction between VWM, global information processing, and eye movements in complex visual problem solving. Even with such a limited amount of VWM capacity, human visual cognition is able to solve complex visual problems by keeping a balanced amount of global and local information in VWM. This balance is achieved by eye movements that encode both types of information into a unified VWM. Thus, although VWM has such a limited capacity, through frequent eye movements, visual cognition is able to encode complex visual information in a temporal manner. At each instance, the amount of information encoded is limited by the capacity of VWM, but the global information encoded can further guide eye movements to acquire the information needed to make the next decision.
60

User Simulation for Spoken Dialog System Development

Ai, Hua 26 January 2010 (has links)
A user simulation is a computer program which simulates human user behaviors. Recently, user simulations have been widely used in two spoken dialog system development tasks. One is to generate large simulated corpora for applying machine learning to learn new dialog strategies, and the other is to replace human users to test dialog system performance. Although previous studies have shown successful examples of applying user simulations in both tasks, it is not clear what type of user simulation is most appropriate for a specific task because few studies compare different user simulations in the same experimental setting. In this research, we investigate how to construct user simulations in a specific task for spoken dialog system development. Since most current user simulations generate user actions based on probabilistic models, we identify two main factors in constructing such user simulations: the choice of user simulation model and the approach to set up user action probabilities. We build different user simulation models which differ in their efforts in simulating realistic user behaviors and exploring more user actions. We also investigate different manual and trained approaches to set up user action probabilities. We introduce both task-dependent and task-independent measures to compare these simulations. We show that a simulated user which mimics realistic user behaviors is not always necessary for the dialog strategy learning task. For the dialog system testing task, a user simulation which simulates user behaviors in a statistical way can generate both objective and subjective measures of dialog system performance similar to human users. Our research examines the strengths and weaknesses of user simulations in spoken dialog system development. Although our results are constrained to our task domain and the resources available, we provide a general framework for comparing user simulations in a task-dependent context. 
In addition, we summarize and validate a set of evaluation measures that can be used in comparing different simulated users as well as simulated versus human users.
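A probabilistic user simulation of the kind the abstract describes samples user actions from a conditional distribution given the system's last dialog act, and can then generate arbitrarily large simulated corpora. A minimal sketch with invented acts and probabilities (real systems would train these from dialog corpora):

```python
import random

# Toy user model: P(user action | last system act). The probabilities are
# illustrative placeholders, not trained values.
ACTION_MODEL = {
    "ask_slot": {"provide_value": 0.7, "ask_repeat": 0.2, "hang_up": 0.1},
    "confirm":  {"yes": 0.8, "no": 0.15, "hang_up": 0.05},
}

def simulate_user_action(system_act, rng):
    """Sample one user action from the conditional distribution."""
    dist = ACTION_MODEL[system_act]
    r, acc = rng.random(), 0.0
    for action, p in dist.items():
        acc += p
        if r < acc:
            return action
    return action   # guard against floating-point rounding at the boundary

def simulated_corpus(n=10_000, seed=0):
    """Generate a simulated corpus of user responses to one system act."""
    rng = random.Random(seed)
    return [simulate_user_action("ask_slot", rng) for _ in range(n)]

corpus = simulated_corpus()
```

Comparing such simulated action frequencies against those of real users is one of the task-independent evaluation measures the abstract alludes to.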
