251
Infinite-word topic models for digital media. Waters, Austin Severn. 02 July 2014
Digital media collections hold an unprecedented source of knowledge and data about the world. Yet, even at current scales, the data exceeds by many orders of magnitude the amount a single user could browse through in an entire lifetime. Making use of such data requires computational tools that can index, search over, and organize media documents in ways that are meaningful to human users, based on the meaning of their content. This dissertation develops an automated approach to analyzing digital media content based on topic models. Its primary contribution, the Infinite-Word Topic Model (IWTM), helps extend topic modeling to digital media domains by removing model assumptions that do not make sense for them -- in particular, the assumption that documents are composed of discrete, mutually-exclusive words from a fixed-size vocabulary. While conventional topic models like Latent Dirichlet Allocation (LDA) require that media documents be converted into bags of words, IWTM incorporates clustering into its probabilistic model and treats the vocabulary size as a random quantity to be inferred based on the data. Among its other benefits, IWTM achieves better performance than LDA while automating the selection of the vocabulary size. This dissertation contributes fast, scalable variational inference methods for IWTM that allow the model to be applied to large datasets. Furthermore, it introduces a new method, Incremental Variational Inference (IVI), for training IWTM and other Bayesian non-parametric models efficiently on growing datasets. IVI allows such models to grow in complexity as the dataset grows, as their priors state that they should. Finally, building on IVI, an active learning method for topic models is developed that intelligently samples new data, resulting in models that train faster, achieve higher performance, and use smaller amounts of labeled data.
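For illustration, a minimal sketch (assumed, not code from the dissertation) of the conventional bag-of-words LDA pipeline that IWTM is designed to avoid, using scikit-learn with a hypothetical toy corpus and hand-picked parameters:

```python
# Conventional LDA pipeline: documents are first reduced to a bag of
# discrete words over a fixed-size vocabulary -- the assumption that
# IWTM removes by clustering continuous features and treating the
# vocabulary size as a random quantity to be inferred.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "topic models organize large media collections",
    "variational inference scales bayesian models to large datasets",
    "digital media collections require automated indexing and search",
]

# Fixed vocabulary: every document becomes counts over the same word set.
vectorizer = CountVectorizer()
bags_of_words = vectorizer.fit_transform(docs)

# LDA with a hand-picked number of topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(bags_of_words)
print(doc_topic)  # per-document topic proportions
```

IWTM, by contrast, folds the clustering step into the probabilistic model itself and infers the effective vocabulary size from the data rather than fixing it up front.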
252
The lexical inferencing of Chinese learners of English as a foreign language. Yin, Zhaochun (尹照春). January 2011
The primary purpose of this study is to explore the lexical inferencing of Chinese learners of English as a foreign language in terms of the intent, clue use, procedure, processing type, adaptability, and success of lexical inferencing, as well as the subsequent acquisition of lexical knowledge. Altogether, 781 Chinese EFL learners at four stages of English learning (senior secondary year 2, tertiary beginning, tertiary middle, and tertiary final) participated in this study. Of these, 726 respondents answered a questionnaire on lexical strategies for unknown words in reading and on clue use in lexical inferencing, and 55 participants thought aloud while inferring the meanings of 12 target words in an article, then reported their knowledge of the target words in a surprise test one week after the think-aloud activity.
Data collected from the questionnaire were analyzed quantitatively to rank the various lexical strategies and types of clue use. The think-aloud protocols of lexical inferencing were analyzed qualitatively to identify the type and amount of clue use, the event sequence of lexical inferencing, the processing type and adaptability, and the outcome of lexical inferencing. The participants' subsequent knowledge of the target words was coded and analyzed. All these aspects of lexical inferencing were also processed quantitatively to build an overall picture of Chinese EFL learners' lexical inferencing and of the similarities and differences between learners at different stages.
The findings reveal that Chinese EFL learners frequently used a number of lexical strategies, of which lexical inferencing was the most frequently used. They drew on various types of clues, especially sentence meaning, morphology, and discourse meaning. Some features of clue use, such as abundant imagined morphological clues and L1 grammar clues, revealed the influence of the Chinese language, and there were also variations in clue use across learners at different stages. The results show that the major lexical inferencing procedure was 'Guess > Accept' at the senior secondary stage and 'Guess > Evaluate > Accept' at the three tertiary stages. There was a clear upward shift in processing type from the 'pure top processing' of senior secondary learners to the more advanced processing of tertiary learners. The overall adaptability of Chinese EFL learners' lexical inferencing was not high, though high adaptability became more frequent from the senior secondary stage to the tertiary final stage. One fourth of lexical inferencing outcomes were 'Correct', while one third were 'Partially Correct', and the proportion of 'Correct' or 'Partially Correct' inferences, as well as the amount of vocabulary knowledge acquired, increased from the senior secondary stage to the tertiary final stage. Measurable vocabulary knowledge was acquired through lexical inferencing.
Further exploration reveals that Chinese EFL learners' procedural and declarative knowledge may help explain their lexical inferencing performance.
The study concludes with pedagogical implications for vocabulary learning and reading, and with suggestions for further research on lexical inferencing. (Doctor of Philosophy, Education)
253
Inferential operations in the reading comprehension of educable mentally retarded and average students. Bos, Candace Sue. January 1979
No description available.
254
Advanced mathematics and deductive reasoning skills: testing the Theory of Formal Discipline. Attridge, Nina. January 2013
This thesis investigates the Theory of Formal Discipline (TFD): the idea that studying mathematics develops general reasoning skills. This belief has been held since the time of Plato (2003/375 B.C.), and has been cited in recent policy reports (Smith, 2004; Walport, 2010) as an argument for why mathematics should hold a privileged place in the UK's National Curriculum. However, there is no rigorous research evidence that justifies the claim. The research presented in this thesis aims to address this shortcoming. Two questions are addressed in the investigation of the TFD: is studying advanced mathematics associated with development in reasoning skills, and if so, what might be the mechanism of this development? The primary type of reasoning measured is conditional inference validation (i.e. 'if p then q; not p; therefore not q'). In two longitudinal studies it is shown that the conditional reasoning behaviour of mathematics students at AS level and undergraduate level does change over time, but that it does not become straightforwardly more normative. Instead, mathematics students reason more in line with the 'defective' interpretation of the conditional, under which they assume p and reason about q. This leads to the assumption that not-p cases are irrelevant, which results in the rejection of two commonly-endorsed invalid inferences, but also in the rejection of the valid modus tollens inference. Mathematics students did not change in their reasoning behaviour on a thematic syllogisms task or a thematic version of the conditional inference task. Next, it is shown that mathematics students reason significantly less in line with a defective interpretation of the conditional when it is phrased 'p only if q' compared to when it is phrased 'if p then q', despite the two forms being logically equivalent. This suggests that their performance is determined by linguistic features rather than the underlying logic. The final two studies investigated the heuristic and algorithmic levels of Stanovich's (2009a) tri-process model of cognition as potential mechanisms of the change in conditional reasoning skills. It is shown that mathematicians' defective interpretation of the conditional stems in part from heuristic-level processing and in part from effortful processing, and that the executive function skills of inhibition and shifting at the algorithmic level are correlated with its adoption. It is suggested that studying mathematics regularly exposes students to implicit 'if then' statements where they are expected to assume p and reason about q, and that this encourages them to adopt a defective interpretation of conditionals. It is concluded that the TFD is not supported by the evidence; while mathematics does seem to develop abstract conditional reasoning skills, the result is not more normative reasoning.
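To make the conditional inference forms discussed above concrete, the following sketch (illustrative only, not material from the thesis) enumerates the four standard inference forms and checks their validity under the material interpretation of 'if p then q'; under the 'defective' interpretation described above, not-p cases are treated as irrelevant, so denial of the antecedent and affirmation of the consequent are rejected, but so is the valid modus tollens.

```python
from itertools import product

def material_conditional(p, q):
    # "if p then q" is false only when p is true and q is false
    return (not p) or q

# The four inference forms used in conditional inference tasks:
# each maps a world (p, q) to (minor premise, conclusion).
inferences = {
    "modus ponens (p, so q)": lambda p, q: (p, q),
    "denial of antecedent (not p, so not q)": lambda p, q: (not p, not q),
    "affirmation of consequent (q, so p)": lambda p, q: (q, p),
    "modus tollens (not q, so not p)": lambda p, q: (not q, not p),
}

for name, form in inferences.items():
    # Valid = in every world where the conditional and the minor premise
    # both hold, the conclusion also holds.
    valid = all(
        conclusion
        for p, q in product([True, False], repeat=2)
        if material_conditional(p, q)
        for premise, conclusion in [form(p, q)]
        if premise
    )
    print(f"{name}: {'valid' if valid else 'invalid'}")
```

Running this prints that modus ponens and modus tollens are valid while denial of the antecedent and affirmation of the consequent are not, which is the normative pattern against which the students' responses are compared.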
255
The Evolutionary and Cognitive Basis of the Perception and Production of Dance. Brady, Adena Michelle. January 2012
Dance is a universal and ancient human behavior; however, our understanding of the basis of this behavior is surprisingly weak. In this dissertation, I explore the cognitive and evolutionary foundations of human dance, providing evidence of two ways in which the production and perception of dance actions are rooted in the functions of more general cognitive systems. In doing so, I aim both to inform our understanding of dance and to use the study of dance to elucidate broader issues in cognition. Chapter 1 demonstrates that the ability to entrain, or move in time with an auditory beat, is not unique to humans. In addition, across hundreds of species, I find that all animals able to entrain can also vocally imitate sound. This supports the hypothesis that entrainment relies on cognitive machinery that originally evolved to support vocal imitation. Chapter 2 investigates the perception of dance-like actions. Previous work shows that we infer the goals of observed actions by calculating their efficiency as a means to external effects, like reaching an object or location. However, dance actions typically lack an external effect or external goal. In two experiments, I show that for dance-like actions, adults infer that the agents' goal is simply to produce the movements themselves. Furthermore, this inference is driven by the actions' inefficiency as a means to external goals: the inefficiency effectively rules out external goals, making movement-based goals the best explanation. Thus, perception of both dance and non-dance actions appears to rely on the same type of efficiency-based goal inference. Chapter 3 builds on these findings, showing that the inference that the movements are the goal is closely related to our concept of dance. First, I find that participants view movement-based goals as more consistent with dance than with other activities. Second, I find that simply construing actions as having movement-based goals leads participants to view the actions as more dance-like, even when all participants have seen the exact same actions. Thus, even our categorization of actions as dance versus non-dance is rooted in the same cognitive processes that support our understanding of other intentional actions. (Psychology)
256
Value of information and the accuracy of discrete approximations. Ramakrishnan, Arjun. 03 January 2011
Value of information (VOI) analysis is one of the key features of decision analysis. This work provides a consistent and practical methodology for determining the VOI of proposed well tests in the presence of uncertainty. It shows that VOI analysis based on discretized versions of continuous probability distributions, embedded in conventional decision trees, can be very accurate provided the optimal method of discrete approximation is chosen, rather than resorting to methods such as Monte Carlo simulation; simplifying the probability calculations need not come at the cost of accuracy. Both the prior and posterior probability distributions are assumed to be continuous and are discretized to find the VOI, resulting in two stages of discretization in the decision tree. A further distinctive feature is that a decision is made between the two discrete approximations in the tree. This sets the problem apart from conventional discretized models, whose usual accuracy guidelines do not carry over because of that intervening decision.
The initial part of the work varies the number of points chosen in the discrete model and tests the resulting accuracy against different correlation coefficients between the information and the actual values. The latter part compares existing discretization methods and establishes the conditions under which each is optimal. The problem is treated comprehensively for both a risk-neutral and a risk-averse decision maker.
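As a minimal sketch of the kind of calculation involved, assuming made-up numbers rather than anything from the thesis: the value of imperfect information is the expected payoff when the decision is made after observing the test result minus the expected payoff of the best decision made without it, with the prior and the test reliability already reduced to discrete points.

```python
import numpy as np

# Hypothetical two-state example: a prospect is either "good" or "poor".
prior = np.array([0.4, 0.6])                  # P(good), P(poor)
payoff = {"drill": np.array([100.0, -40.0]),  # payoff of each action per state
          "walk": np.array([0.0, 0.0])}

# Discretized reliability of the proposed test:
# rows = true state, columns = test outcome ("positive", "negative").
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

# Best expected payoff without the test (decision on the prior alone).
value_without = max(payoff[a] @ prior for a in payoff)

# Expected payoff when the decision is made after seeing the test result.
evidence = likelihood.T @ prior               # P(outcome)
value_with = 0.0
for j, p_outcome in enumerate(evidence):
    posterior = likelihood[:, j] * prior / p_outcome
    value_with += p_outcome * max(payoff[a] @ posterior for a in payoff)

print("Value of information:", value_with - value_without)
```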
257
Parametric inference for time series based upon goodness-of-fit. Woo, Pao-sun (胡寶璇). January 2001
(Master of Philosophy, Statistics and Actuarial Science)
258
An analysis of the patterns of plausible inference proposed by George Polya. Pope, Milton Frank (1937-). January 1963
No description available.
259
Compression and Classification of Imagery. Tabesh, Ali. January 2006
Problems at the intersection of compression and statistical inference recur frequently because signal and image compression and classification algorithms are used together in many applications. This dissertation addresses two such problems: statistical inference on compressed data, and rate allocation for joint compression and classification. Features of the JPEG2000 standard make it possible to develop computationally efficient algorithms for inference directly on imagery compressed with this standard. We propose using the information content (IC) of wavelet subbands, defined as the number of bytes the JPEG2000 encoder spends to compress each subband, for content analysis. Applying statistical learning frameworks for detection and classification, we present experimental results for compressed-domain texture image classification and cut detection in video. Our results indicate that reasonable performance can be achieved while saving computational and bandwidth resources; IC features can also be used for preliminary analysis in the compressed domain to identify candidates for further analysis in the decompressed domain. In many applications of image compression, the compressed image is to be presented both to human observers and to statistical decision-making systems. In such applications, the fidelity criterion with respect to which the image is compressed must strike an appropriate compromise between the (possibly conflicting) image quality criteria of the human and machine observers. We present tractable distortion measures based on the Bhattacharyya distance (BD) and a new upper bound on the probability of error for quantized data that yield closed-form expressions for rate allocation to image subbands, and we show their efficacy in maintaining the aforementioned balance between compression and classification. The new bound offers two advantages over the BD: it yields closed-form rate-allocation solutions for problems involving correlated sources and for problems with more than two classes.
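As background for the distortion measures mentioned above, a small sketch (with made-up class statistics, not the dissertation's) of the Bhattacharyya distance between two univariate Gaussian class-conditional densities and the corresponding bound on the Bayes error for equal priors:

```python
import numpy as np

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussian densities."""
    avg_var = 0.5 * (var1 + var2)
    return (0.125 * (mu1 - mu2) ** 2 / avg_var
            + 0.5 * np.log(avg_var / np.sqrt(var1 * var2)))

# Hypothetical subband statistics for two texture classes.
mu1, var1 = 0.0, 1.0
mu2, var2 = 1.5, 2.0

db = bhattacharyya_gaussian(mu1, var1, mu2, var2)
# Bhattacharyya bound on the Bayes error for equal class priors.
error_bound = 0.5 * np.exp(-db)
print(f"Bhattacharyya distance: {db:.3f}, Bayes-error bound: {error_bound:.3f}")
```

Because the distance lower-bounds classification performance in this way, it can serve as a tractable surrogate for the classification term in a rate-allocation objective.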
260
Sensitivity Analysis of Untestable Assumptions in Causal Inference. Lundin, Mathias. January 2011
This thesis contributes to the research field of causal inference, in which the effect of a treatment on an outcome is of interest. Many such effects cannot be estimated through randomised experiments. For example, the effect of higher education on future income needs to be estimated using observational data. In the estimation, assumptions are made to render individuals who receive higher education comparable with those who do not, so that the effect becomes estimable. Another assumption often made in causal inference (in both randomised and nonrandomised studies) is that the treatment received by one individual has no effect on the outcomes of others; if this assumption is not met, the meaning of the causal effect of the treatment may be unclear. In the first paper, the effect of college choice on income is investigated using Swedish register data by comparing graduates from old and new Swedish universities. A semiparametric method of estimation is used, thereby relaxing functional assumptions on the data. An assumption often made in causal inference in observational studies is that individuals in different treatment groups are comparable, given that a set of pretreatment variables has been adjusted for in the analysis. This so-called unconfoundedness assumption is in principle not testable, and therefore, in the second paper, we propose a Bayesian sensitivity analysis of the unconfoundedness assumption; this analysis is then applied to the results of the first paper. In the third paper, we study profile likelihood as a tool for semiparametric estimation of the causal effect of a treatment, and a semiparametric version of the Bayesian sensitivity analysis proposed in Paper II is also carried out using profile likelihood. The last paper concerns the estimation of direct and indirect causal effects of a treatment in the presence of interference between units, i.e., where the treatment of one individual affects the outcomes of other individuals. We give unbiased estimators of these direct and indirect effects for situations where treatment probabilities vary between individuals, and we illustrate in a simulation study how direct and indirect causal effects can be estimated when treatment probabilities must be estimated from background information on individuals.
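To fix ideas about estimation under the unconfoundedness assumption discussed above, here is a minimal inverse-probability-weighting sketch on simulated data (an assumed illustration, not the thesis's semiparametric or Bayesian machinery), in which adjusting for the confounder via the propensity score removes the bias of the naive comparison:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated observational data: x confounds both treatment and outcome.
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-0.8 * x))         # true propensity score
t = rng.binomial(1, p_treat)
y = 2.0 * t + 1.5 * x + rng.normal(size=n)   # true treatment effect = 2

# Naive difference in means is biased because of confounding by x.
naive = y[t == 1].mean() - y[t == 0].mean()

# Under unconfoundedness given x, inverse-probability weighting with the
# (here, known) propensity score recovers the average treatment effect.
ipw = np.mean(t * y / p_treat) - np.mean((1 - t) * y / (1 - p_treat))

print(f"naive: {naive:.2f}, IPW: {ipw:.2f}  (true effect 2.00)")
```

A sensitivity analysis of the kind proposed in the thesis then asks how such an estimate would change if an unmeasured confounder, not captured by x, were present.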