61

Language and causal understanding : there's something about Mary

Majid, Asifa January 2001 (has links)
No description available.
62

Bayes linear covariance matrix adjustment

Wilkinson, Darren James January 1995 (has links)
In this thesis, a Bayes linear methodology for the adjustment of covariance matrices is presented and discussed. A geometric framework for quantifying uncertainties about covariance matrices is set up, and an inner product for spaces of random matrices is motivated and constructed. This inner product captures aspects of belief about the relationships between the covariance matrices of interest, providing a structure rich enough to adjust beliefs about unknown matrices in the light of data such as sample covariance matrices; second-order exchangeability and related specifications are exploited to obtain representations that allow the analysis. Adjustment is associated with orthogonal projection, and is illustrated by examples for some common problems. The difficulties of adjusting the covariance matrices underlying exchangeable random vectors are tackled and discussed. Learning about the covariance matrices associated with multivariate time-series dynamic linear models is shown to be amenable to a similar approach. Diagnostics for matrix adjustments are also discussed.
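As background for the abstract above (the standard vector-space form of the update, not quoted from the thesis itself): Bayes linear adjustment of a quantity B by data D is the orthogonal projection

    % Adjusted expectation and adjusted variance in the Bayes linear framework;
    % the thesis lifts this update to inner-product spaces of random matrices,
    % so that D may be, for example, a sample covariance matrix.
    \[ \mathrm{E}_D(B) = \mathrm{E}(B) + \mathrm{Cov}(B,D)\,\mathrm{Var}(D)^{-1}\bigl(D - \mathrm{E}(D)\bigr) \]
    \[ \mathrm{Var}_D(B) = \mathrm{Var}(B) - \mathrm{Cov}(B,D)\,\mathrm{Var}(D)^{-1}\,\mathrm{Cov}(D,B) \]

which is what "adjustment is associated with orthogonal projection" refers to.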
63

Type inference with bounded quantification

Sequeira, Dilip January 1998 (has links)
In this thesis we study some of the problems which occur when type inference is used in a type system with subtyping. An underlying poset of atomic types is used as a basis for our subtyping systems. We argue that the class of Helly posets is of significant interest, as it includes lattices and trees, and is closed under type formation not only with structural constructors such as function space and list, but also records, tagged variants, Abadi-Cardelli object constructors, top and bottom. We develop a general theory relating consistency, solvability, and solution of sets of constraints between regular types built over Helly posets with these constructors, and introduce semantic notions of simplification and entailment for sets of constraints over Helly posets of base types. We extend Helly posets with inequalities of the form a ≤ τ, where τ is not necessarily atomic, and show how this enables us to deal with bounded quantification. Using bounded quantification we define a subtyping system which combines structural subtype polymorphism and predicative parametric polymorphism, and use this to extend with subtyping the type system of Läufer and Odersky for ML with type annotations. We define a complete algorithm which infers minimal types for our extension, using factorisations, solutions of subtyping problems analogous to principal unifiers for unification problems. We give some examples of typings computed by a prototype implementation.
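As an illustrative aside (not from the thesis, whose setting is ML with subtyping rather than Python): bounded quantification in the sense above, a type variable constrained by an upper bound a ≤ τ, is the same idea as a bounded TypeVar, where a checker infers a minimal instantiation at each call site, loosely analogous to the minimal types the thesis's algorithm computes.

    from typing import TypeVar

    class Animal:
        def sound(self) -> str:
            return "..."

    class Dog(Animal):
        def sound(self) -> str:
            return "woof"

    # T is bounded above by Animal: it may only be instantiated with
    # subtypes of Animal, mirroring a constraint of the form T <= Animal.
    T = TypeVar("T", bound=Animal)

    def louder(x: T, y: T) -> T:
        # A type checker picks the least T consistent with the call site.
        return x if len(x.sound()) >= len(y.sound()) else y

    d: Dog = louder(Dog(), Dog())  # T is inferred as Dog, not widened to Animal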
64

That Seems Right: Reasoning, Inference, and the Feeling of Correctness

Wolos, Jeremy David January 2016 (has links)
In my dissertation, I advance and defend a broad account of reasoning, including both the nature of inference and the structure of our reasoning systems. With respect to inference, I argue that we have good reason to consider a unified account of the cognitive transitions through which we attempt to figure things out. This view turns out to be highly inflationary relative to previous philosophical accounts of inference, which, I argue, fail to accommodate many instances of everyday reasoning. I argue that a cognitive transition’s status as an inference, in this broad sense, depends on the subject’s taking the conclusion of the inference (a new, revised, or supposed belief) to be the output of a rational thought process. Furthermore, taking such a belief to be the output of a rational thought process consists in its accompaniment by the feeling of correctness to the subject, which I call the assent affect. With respect to the structure of our reasoning systems, I defend a dual process model of reasoning by addressing certain alleged deficiencies of such accounts. I argue that the assent affect (or, more precisely, its absence) is a strong candidate to serve as the triggering condition of our more deliberate type 2 reasoning processes. That is, a subject’s more effortful reasoning processes engage with a problem when the output of a type 1 intuition is not accompanied by the assent affect. A subject will think harder about a problem, in other words, when they do not feel confident that they have gotten to the bottom of it. This account, I argue, fits well with both empirical and theoretical claims about the interaction of dual reasoning processes. In this dissertation, I use the assent affect to solve puzzles about both the nature of inferences and the structure of our reasoning systems. Puzzles in rationality become easier to solve when our intellectual feelings are not excluded from the picture.
65

Essays on Matching and Weighting for Causal Inference in Observational Studies

Resa Juárez, María de los Angeles January 2017 (has links)
This thesis consists of three papers on matching and weighting methods for causal inference. The first paper conducts a Monte Carlo simulation study to evaluate the performance of multivariate matching methods that select a subset of treatment and control observations. The matching methods studied are the widely used nearest neighbor matching with propensity score calipers, and the more recently proposed methods, optimal matching of an optimally chosen subset and optimal cardinality matching. The main findings are: (i) covariate balance, as measured by differences in means, variance ratios, Kolmogorov-Smirnov distances, and cross-match test statistics, is better with cardinality matching, since by construction it satisfies the balance requirements; (ii) for given levels of covariate balance, the matched samples are larger with cardinality matching than with the other methods; (iii) in terms of covariate distances, optimal subset matching performs best; (iv) treatment effect estimates from cardinality matching have lower RMSEs, provided strong balance requirements are imposed, specifically fine balance or strength-k balance plus close mean balance. In standard practice, a matched sample is considered to be balanced if the absolute differences in means of the covariates across treatment groups are smaller than 0.1 standard deviations. However, the simulation results suggest that stronger forms of balance should be pursued in order to remove systematic biases due to observed covariates when a difference-in-means treatment effect estimator is used. In particular, if the true outcome model is additive then marginal distributions should be balanced, and if the true outcome model is additive with interactions then low-dimensional joint distributions should be balanced.

The second paper focuses on longitudinal studies, where marginal structural models (MSMs) are widely used to estimate the effect of time-dependent treatments in the presence of time-dependent confounders. Under a sequential ignorability assumption, MSMs yield unbiased treatment effect estimates by weighting each observation by the inverse of the probability of its observed treatment sequence given its history of observed covariates. However, these probabilities are typically estimated by fitting a propensity score model, and the resulting weights can fail to adjust for observed covariates due to model misspecification. Also, these weights tend to yield very unstable estimates if the predicted probabilities of treatment are very close to zero, which is often the case in practice. To address both of these problems, instead of modeling the probabilities of treatment, a design-based approach is taken and weights of minimum variance that adjust for the covariates across all possible treatment histories are found directly. For this, the role of weighting in longitudinal studies of treatment effects is analyzed, and a convex optimization problem that can be solved efficiently is defined. Unlike standard methods, this approach makes evident to the investigator the limitations imposed by the data when estimating causal effects without extrapolating. A simulation study shows that this approach outperforms standard methods, providing less biased and more precise estimates of time-varying treatment effects in a variety of settings. The proposed method is used on Chilean educational data to estimate the cumulative effect of attending a private subsidized school, as opposed to a public school, on students’ university admission test scores.
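To make the weighting step above concrete, here is a minimal sketch of the standard inverse-probability weights for a marginal structural model, the baseline the design-based weights are proposed to replace. The two-period toy data and variable names are assumptions for illustration, not taken from the thesis:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Toy two-period longitudinal data: time-varying confounder L_t and
    # binary treatment A_t that depends on the confounder history.
    rng = np.random.default_rng(1)
    n = 500
    L1 = rng.normal(size=n)
    A1 = rng.binomial(1, 1 / (1 + np.exp(-L1)))
    L2 = L1 + A1 + rng.normal(size=n)
    A2 = rng.binomial(1, 1 / (1 + np.exp(-L2)))

    def prob_of_received_treatment(features, treatment):
        # Fit a propensity model and return the fitted probability of the
        # treatment each unit actually received; misspecification of this
        # model is one of the failure modes the thesis addresses.
        p = LogisticRegression().fit(features, treatment).predict_proba(features)[:, 1]
        return np.where(treatment == 1, p, 1 - p)

    # IPTW: inverse of the probability of the observed treatment sequence,
    # i.e. the product of the per-period probabilities of received treatment.
    w = 1.0 / (prob_of_received_treatment(L1[:, None], A1)
               * prob_of_received_treatment(np.c_[L1, A1, L2], A2))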
The third paper is centered on observational studies with multi-valued treatments. Generalizing matching and stratification methods to accommodate multi-valued treatments has proven to be a complex task. A natural way to address confounding in this case is by weighting the observations, typically by the inverse probability of treatment weights (IPTW). As in the MSM case, these weights can be highly variable and produce unstable estimates due to extreme weights. In addition, model misspecification, small sample sizes, and truncation of extreme weights can cause the weights to fail to adjust appropriately for observed confounders. The conditions the weights need to satisfy in order to provide close-to-unbiased treatment effect estimates with reduced variability are determined, and a convex optimization problem that can be solved in polynomial time to obtain them is defined. A simulation study with different settings is conducted to compare the proposed weighting scheme to IPTW, including generalized propensity score estimation methods that also explicitly consider the covariate balance problem in the probability estimation process. The applicability of the methods to continuous treatments is also tested. The results show that directly targeting balance with the weights, instead of focusing on estimating treatment assignment probabilities, provides the best results in terms of bias and root mean square error of the treatment effect estimator. The effects of the intensity level of the 2010 Chilean earthquake on posttraumatic stress disorder are estimated using the proposed methodology.
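A hedged sketch of the balance-targeting idea described above: find minimum-variance weights for one treatment group subject to covariate balance constraints, posed as a small quadratic program. The exact objective, constraints, and solver used in the thesis are not specified here; this is a generic illustration:

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n, p = 200, 3
    X = rng.normal(size=(n, p))        # covariates
    t = rng.integers(0, 2, size=n)     # one treatment group vs. the rest

    Xt = X[t == 1]                     # covariates of the group being weighted
    target = X.mean(axis=0)            # balance toward the full-sample means
    m = len(Xt)

    def objective(w):
        # With weights constrained to sum to one, minimizing the sum of
        # squared weights minimizes their variance.
        return np.sum(w ** 2)

    constraints = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},   # weights sum to one
        {"type": "eq", "fun": lambda w: Xt.T @ w - target}, # exact mean balance
    ]
    res = minimize(objective, np.full(m, 1.0 / m),
                   bounds=[(0.0, None)] * m,                # non-negative weights
                   constraints=constraints, method="SLSQP")
    weights = res.x

Directly constraining Xt.T @ w to the target means is what "targeting balance with the weights" refers to, as opposed to hoping that balance follows from a fitted propensity model.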
66

Individualised modelling using transductive inference and genetic algorithms

Mohan, Nisha Unknown Date (has links)
While inductive modelling is used to develop a model (function) from data covering the whole problem space and then to recall it on new data, transductive modelling is concerned with the creation of a single model for every new input vector, based on some of the closest vectors from the existing problem space. This individual model approximates the output value only for this input vector. However, deciding on the appropriate distance measure, the number of nearest neighbours, and a minimum set of important features/variables is a challenge, and is usually based on prior knowledge or exhaustive trial-and-test experiments.

Proposed algorithm: This thesis proposes a Genetic Algorithm (GA) method for optimising these three factors using a transductive approach. This novel approach, called Individualised Modelling using Transductive Inference and Genetic Algorithms (IMTIGA), is tested on several datasets from the UCI repository for classification tasks and on a real-world scenario for pest establishment prognosis; results show that it outperforms conventional inductive approaches of global and local modelling.
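Since the abstract gives no pseudocode, the following is only a rough sketch of the transductive idea, assuming a simple GA over the number of neighbours and a binary feature mask, evolved separately for each query point; IMTIGA's actual encoding, operators, and distance-measure optimisation are not reproduced here:

    import numpy as np

    rng = np.random.default_rng(0)

    def knn_predict(X, y, query, k, mask):
        # kNN vote for a single query, measuring distance only on selected features
        d = np.linalg.norm((X - query)[:, mask], axis=1)
        return np.bincount(y[np.argsort(d)[:k]]).argmax()   # y: non-negative int labels

    def fitness(X, y, k, mask):
        # leave-one-out accuracy of the candidate (k, feature mask)
        if not mask.any():
            return 0.0
        idx = np.arange(len(X))
        hits = sum(knn_predict(X[idx != i], y[idx != i], X[i], k, mask) == y[i]
                   for i in idx)
        return hits / len(X)

    def transductive_predict(X, y, query, pop_size=16, gens=20):
        # Evolve (k, mask) for this one query point; in a fuller version the
        # training pool itself would be restricted to the query's nearest vectors.
        n_feat = X.shape[1]
        pop = [(int(rng.integers(1, 10)), rng.random(n_feat) < 0.5)
               for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=lambda g: fitness(X, y, *g), reverse=True)
            parents = pop[: pop_size // 2]
            children = []
            for k, mask in parents:
                child = mask.copy()
                child[rng.integers(n_feat)] ^= True          # flip one feature bit
                children.append((max(1, k + int(rng.integers(-2, 3))), child))
            pop = parents + children
        k, mask = max(pop, key=lambda g: fitness(X, y, *g))
        return knn_predict(X, y, query, k, mask)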
67

Learning and decision processes in classification and feature inference.

Sweller, Naomi, Psychology, Faculty of Science, UNSW January 2007 (has links)
This thesis examined how task demands shape the category representations formed through classification, inference and incidental learning. Experiments 1 to 3 examined the claim that the representations formed through inference learning are based only on the encoding of prototypical features (e.g., Yamauchi & Markman, 1998, 2000). Adults learned artificial categories through exemplar classification or feature inference. Inference learning either did or did not require attention to prototypical features. At test, all participants classified exemplars and inferred the values of missing features. Classification learning resulted in the encoding of both prototypical and atypical features. Inference learning also led to the representation of both prototypical and atypical features when attention to both was required during learning. Experiment 4 extended these results to inferences about novel items varying in similarity to training items. Inference learners required to attend to prototypical and atypical features during training were more sensitive to exemplar similarity when making novel inferences than those who attended only to prototypical features. Experiment 5 examined developmental change in the impact of noun and feature labels on feature inferences. Adults, 7-year-olds, and 5-year-olds were shown pairs of base and target exemplars. The base was given a noun or feature label. Participants were asked to predict the value of a missing feature of the target, when it was given the same or a different label as the base. Both adults and children were more likely to make inferences based on noun than feature labels. Hence, by five years of age, children grasp the inductive potential of noun labels. Experiments 6 to 9 compared incidental category learning with intentional classification. Adults classified categories of geometric shapes or learned the categories through an incidental task. Incidental recognition learning resulted in a broader allocation of attention than classification learning. Performing recognition before classification resulted in a broader attentional allocation than performing recognition after classification. Together with the results from mathematical modelling, these findings support a view of category learning in which the specific attentional demands of different learning tasks determine the nature of the category representations that are acquired.
68

PrOntoLearn: Unsupervised Lexico-Semantic Ontology Generation using Probabilistic Methods

Abeyruwan, Saminda Wishwajith 01 January 2010 (has links)
An ontology is a formal, explicit specification of a shared conceptualization. Formalizing an ontology for a domain is a tedious and cumbersome process, constrained by the knowledge acquisition bottleneck (KAB). There exist a large number of text corpora that can be used for classification in order to create ontologies, with the intention of providing better support for the intended parties. In our research we provide a novel unsupervised bottom-up ontology generation method, based on lexico-semantic structures and Bayesian reasoning, to expedite the ontology generation process. This process also provides evidence that domain experts can use to build ontologies based on top-down approaches.
69

Evidentials and relevance

Ifantidou, Elly January 1994 (has links)
Evidentials are expressions used to indicate the source of evidence for, and the strength of speaker commitment to, the information conveyed. They include sentence adverbials such as 'obviously', parenthetical constructions such as 'I think', and hearsay expressions such as 'allegedly'. This thesis argues against the speech-act and Gricean accounts of evidentials and defends a Relevance-theoretic account. Chapter 1 surveys general linguistic work on evidentials, with particular reference to their semantic and pragmatic status, and raises the following issues: for linguistically encoded evidentials, are they truth-conditional or non-truth-conditional, and do they contribute to explicit or implicit communication? For pragmatically inferred evidentials, is there a pragmatic framework in which they can be adequately accounted for? Chapters 2-4 survey the three main semantic/pragmatic frameworks for the study of evidentials. Chapter 2 argues that speech-act theory fails to give an adequate account of pragmatic inference processes. Chapter 3 argues that while Grice's theory of meaning and communication addresses all the central issues raised in the first chapter, evidentials fall outside Grice's basic categories of meaning and communication. Chapter 4 outlines the assumptions of Relevance Theory that bear on the study of evidentials. I sketch an account of pragmatically inferred evidentials, and introduce three central distinctions: between explicit and implicit communication, truth-conditional and non-truth-conditional meaning, and conceptual and procedural meaning. These distinctions are applied to a variety of linguistically encoded evidentials in chapters 5-7. Chapter 5 deals with sentence adverbials, chapter 6 focuses on parenthetical constructions, and chapter 7 looks at hearsay particles. My main concern is with how these expressions pattern with respect to the three distinctions developed in chapter 4. I show that although all three types of expression contribute to explicit rather than implicit communication, they exhibit important differences with respect to both the truth-conditional/non-truth-conditional and the conceptual/procedural distinctions. Chapter 8 is a brief conclusion.
