About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
441

The eyes as a window to the mind: inferring cognitive state from gaze patterns

Boisvert, Jonathan 22 March 2016 (has links)
In seminal work, Yarbus examined the characteristic scanpaths that result when viewing an image, observing that scanpaths varied significantly depending on the question posed to the observer. While early efforts to test this hypothesis were equivocal, it has since been established that aspects of an observer's assigned task may be inferred from their gaze. In this thesis we examine two datasets that have not previously been considered for this problem, involving prediction of assigned task and of observer sentiment respectively. The first involves predicting the general task assigned to observers viewing images, and the second predicting subjective ratings recorded after viewing advertisements. The results present interesting observations on task groupings and affective dimensions of images, and on the value of various measurements (gaze-based or image-based) in making these predictions. The analysis also demonstrates the importance of how data are partitioned for predictive analysis, and the complementary nature of gaze-specific and image-derived features. / May 2016
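A minimal sketch of the kind of prediction pipeline such studies describe, assuming scikit-learn and entirely synthetic stand-in data (the feature set, dataset sizes, and classifier choice here are illustrative, not those of the thesis). It also illustrates the partitioning point above: holding out whole observers prevents the classifier from exploiting observer identity.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))              # stand-in gaze features per trial
y = rng.integers(0, 4, size=120)           # stand-in task labels (4 tasks)
observers = np.repeat(np.arange(12), 10)   # observer ID for each trial

# Partitioning matters: holding out whole observers, rather than random
# trials, avoids leakage between training and test splits.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, groups=observers, cv=GroupKFold(n_splits=4))
print(scores.mean())                       # chance level (~0.25) on random data
```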
442

Verification, Validation and Completeness Support for Metadata Traceability

Darr, Timothy, Fernandes, Ronald, Hamilton, John, Jones, Charles 10 1900 (has links)
ITC/USA 2010 Conference Proceedings / The Forty-Sixth Annual International Telemetering Conference and Technical Exhibition / October 25-28, 2010 / Town and Country Resort & Convention Center, San Diego, California / The complexity of modern test and evaluation (T&E) processes has resulted in an explosion of the quantity and diversity of metadata used to describe end-to-end T&E processes. Ideally, it would be possible to integrate metadata in such a way that disparate systems can seamlessly access the metadata and easily interoperate with other systems. Unfortunately, there are several barriers to achieving this goal: metadata is often designed for use with specific tools or specific purposes; metadata exists in a variety of formats (legacy, non-legacy, structured and unstructured metadata); and the same information is represented in multiple ways across different metadata formats.
443

New regression methods for measures of central tendency

Aristodemou, Katerina January 2014 (has links)
Measures of central tendency have been widely used for summarising statistical data, with the mean being the most popular summary statistic. However, in real-life applications it is not always the most representative measure of central location, especially when dealing with data which are skewed or contain outliers. Alternative statistics with less bias are the median and the mode. Median and quantile regression have been used in different fields to examine the effect of factors at different points of the distribution. Mode estimation, on the other hand, has found many applications in cases where the analysis focuses on obtaining information about the most typical value or pattern. This thesis demonstrates that the mode also plays an important role in the analysis of big data, which is becoming increasingly important in many sectors of the global economy. However, mode regression has not been widely applied, despite its clear conceptual benefit, due to the computational and theoretical limitations of the existing estimators. Similarly, despite the popularity of the binary quantile regression model, computationally straightforward estimation techniques do not exist. Driven by the demand for simple, well-founded and easy-to-implement inference tools, this thesis develops a series of new regression methods for mode and binary quantile regression. Chapter 2 deals with mode regression methods from the Bayesian perspective and presents one parametric and two non-parametric methods of inference. Chapter 3 demonstrates a mode-based, fast pattern-identification method for big data and proposes the first fully parametric mode regression method, which effectively uncovers the dependency of typical patterns on a number of covariates. The proposed approach is demonstrated through the analysis of a decade-long dataset on Body Mass Index and associated factors, taken from the Health Survey for England. Finally, Chapter 4 presents an alternative binary quantile regression approach, based on nonlinear least asymmetric weighted squares, which can be implemented using standard statistical packages and guarantees a unique solution.
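The robustness argument can be made concrete with a minimal sketch of median regression (the tau = 0.5 quantile) fitted by minimising the standard check (pinball) loss with SciPy. This is the textbook formulation, not the thesis's Bayesian or least-asymmetric-weighted-squares estimators, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    """Check (pinball) loss; minimising it yields the tau-th conditional quantile."""
    r = y - X @ beta
    return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 1.0 + 2.0 * X[:, 1] + rng.standard_t(df=3, size=n)  # heavy-tailed noise

# Median regression (tau = 0.5): robust to the outliers that can make
# the mean an unrepresentative measure of central location.
res = minimize(pinball_loss, x0=np.zeros(2), args=(X, y, 0.5),
               method="Nelder-Mead")
print(res.x)  # roughly [1.0, 2.0]
```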
444

Exact sampling and optimisation in statistical machine translation

Aziz, Wilker Ferreira January 2014 (has links)
In Statistical Machine Translation (SMT), inference needs to be performed over a high-complexity discrete distribution defined by the intersection between a translation hypergraph and a target language model. This distribution is too complex to be represented exactly, and one typically resorts to approximation techniques either to perform optimisation (the task of searching for the optimum translation) or sampling (the task of finding a subset of translations that is statistically representative of the goal distribution). Beam search is an example of an approximate optimisation technique, where maximisation is performed over a heuristically pruned representation of the goal distribution. For inference tasks other than optimisation, rather than finding a single optimum, one is really interested in obtaining a set of probabilistic samples from the distribution. This is the case in training, where one wishes to obtain unbiased estimates of expectations in order to fit the parameters of a model. Samples are also necessary in consensus decoding, where one chooses from a sample of likely translations the one that minimises a loss function. Due to the additional computational challenges posed by sampling, n-best lists, a by-product of optimisation, are typically used as a biased approximation to true probabilistic samples. A more direct procedure is to attempt to draw samples directly from the underlying distribution rather than rely on n-best list approximations. Markov Chain Monte Carlo (MCMC) methods, such as Gibbs sampling, offer a way to overcome the tractability issues in sampling; however, their convergence properties are hard to assess. That is, it is difficult to know when, if ever, an MCMC sampler is producing samples that are compatible with the goal distribution. Rejection sampling, a Monte Carlo (MC) method, is more fundamental and natural: it offers strong guarantees, such as unbiased samples, but is typically hard to design for distributions of the kind addressed in SMT, rendering the method intractable. A recent technique that stresses a unified view of the two types of inference tasks discussed here (optimisation and sampling) is the OS* approach. OS* can be seen as a cross between Adaptive Rejection Sampling (an MC method) and A* optimisation. In this view the intractable goal distribution is upper-bounded by a simpler (thus tractable) proxy distribution, which is then incrementally refined to be closer to the goal until the maximum is found, or until the sampling performance exceeds a certain level. This thesis introduces an approach to exact optimisation and exact sampling in SMT by addressing the tractability issues associated with the intersection between the translation hypergraph and the language model. The two forms of inference are handled in a unified framework based on the OS* approach. In short, an intractable goal distribution, over which one wishes to perform inference, is upper-bounded by tractable proposal distributions. A proposal represents a relaxed version of the complete space of weighted translation derivations, where relaxation happens with respect to the incorporation of the language model. These proposals give an optimistic view on the true model and allow for easier and faster search using standard dynamic programming techniques. In the OS* approach, such proposals are used to perform a form of adaptive rejection sampling.
In rejection sampling, samples are drawn from a proposal distribution and accepted or rejected as a function of the mismatch between the proposal and the goal. The technique is adaptive in that rejected samples are used to motivate a refinement of the upper-bound proposal that brings it closer to the goal, improving the rate of acceptance. Optimisation can be connected to an extreme form of sampling, thus the framework introduced here suits both exact optimisation and exact sampling. Exact optimisation means that the global maximum is found with a certificate of optimality. Exact sampling means that unbiased samples are independently drawn from the goal distribution. We show that by using this approach exact inference is feasible using only a fraction of the time and space that would be required by a full intersection, without recourse to pruning techniques that only provide approximate solutions. We also show that the vast majority of the entries (n-grams) in a language model can be summarised by shorter and optimistic entries. This means that the computational complexity of our approach is less sensitive to the order of the language model distribution than a full intersection would be. Particularly in the case of sampling, we show that it is possible to draw exact samples compatible with distributions which incorporate a high-order language model component from proxy distributions that are much simpler. In this thesis, exact inference is performed in the context of both hierarchical and phrase-based models of translation, the latter characterising a problem that is NP-complete in nature.
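The accept/reject mechanism described above can be illustrated on a toy continuous density (the thesis itself works over discrete weighted translation derivations, and its proposal is refined adaptively rather than fixed). A minimal sketch, assuming NumPy:

```python
import numpy as np

def rejection_sample(target_pdf, draw_proposal, proposal_pdf, M, n, rng):
    """Draw n exact samples from target_pdf, given an upper bound
    target_pdf(x) <= M * proposal_pdf(x) for all x."""
    out = []
    while len(out) < n:
        x = draw_proposal(rng)
        if rng.uniform() * M * proposal_pdf(x) <= target_pdf(x):
            out.append(x)            # accept; otherwise reject and redraw
    return np.array(out)

# Toy case: a standard normal target under a heavier-tailed Laplace proposal.
target = lambda x: np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
proposal_pdf = lambda x: 0.25 * np.exp(-np.abs(x) / 2.0)     # Laplace(0, 2)
draw_proposal = lambda rng: rng.laplace(0.0, 2.0)
M = 2.2  # any constant making M * proposal an upper bound on the target

rng = np.random.default_rng(0)
xs = rejection_sample(target, draw_proposal, proposal_pdf, M, 1000, rng)
print(xs.mean(), xs.std())           # near 0 and 1
```

The expected acceptance rate here is 1/M, which is why tightening the upper bound, as the adaptive refinement step above does, directly improves sampling performance.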
445

Instrumental variable and longitudinal structural equation modelling methods for causal mediation : the PACE trial of treatments for chronic fatigue syndrome

Goldsmith, Kimberley January 2014 (has links)
Background: Understanding complex psychological treatment mechanisms is important in order to refine and improve treatment. Mechanistic theories can be evaluated using mediation analysis methods. The Pacing, Graded Activity, and Cognitive Behaviour Therapy: A Randomised Evaluation (PACE) trial studied complex therapies for the treatment of chronic fatigue syndrome. The aim of the project was to study different mediation analysis methods using PACE trial data, and to make trial design recommendations based upon the findings. Methods: PACE trial data were described using summary statistics and correlation analyses. Mediation estimates were derived using: the product of coefficients approach; instrumental variable (IV) methods with randomisation-by-baseline-variable interactions as IVs; and dual-process longitudinal structural equation models (SEM). Monte Carlo simulation studies were conducted to further explore the behaviour of the IV estimators and to examine aspects of the SEM. Results: Cognitive and behavioural measures were mediators of the cognitive behavioural and graded exercise therapies in PACE. Results were robust when accounting for correlated measurement error and different SEM structures. The randomisation-by-baseline interaction IVs were weak, giving imprecise and sometimes extreme estimates, leaving their utility unclear. A flexible version of a latent change SEM with contemporaneous mediation effects and contemporaneous correlated measurement errors was the most appropriate longitudinal model. Conclusions: IV methods using interaction IVs are unlikely to be useful; designs with a randomised IV might be more suitable. Longitudinal SEM for mediation in clinical trials seems a promising approach. Mediation estimates from SEM were generally robust when allowing for correlated measurement error and for different model classes. Mediation analysis in trials should be longitudinal and should consider the number and timing of measures at the design stage. Using appropriate methods for studying mediation in trials will help clarify treatment mechanisms of action and allow for their refinement, maximizing the information gained from trials and benefiting patients.
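For readers unfamiliar with the product-of-coefficients approach named above, a minimal sketch on simulated data (illustrative numbers and plain OLS with no measurement-error correction; this is the naive estimator whose assumptions the trial's IV and SEM analyses are designed to relax):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
treat = rng.integers(0, 2, size=n).astype(float)             # randomised arm
mediator = 0.5 * treat + rng.normal(size=n)                  # true a-path = 0.5
outcome = 0.3 * treat + 0.8 * mediator + rng.normal(size=n)  # true b-path = 0.8

def ols(y, X):
    """Least-squares coefficients with an intercept prepended."""
    X1 = np.column_stack([np.ones_like(y), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

a = ols(mediator, treat)[1]                                  # treatment -> mediator
b = ols(outcome, np.column_stack([treat, mediator]))[2]      # mediator -> outcome
print("indirect (mediated) effect a*b:", a * b)              # close to 0.4
```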
446

Expert Systems in Data Acquisition

McCauley, Bob 10 1900 (has links)
International Telemetering Conference Proceedings / October 26-29, 1987 / Town and Country Hotel, San Diego, California / In an Independent Research and Development (IR&D) effort, the Telemetry Systems Operation (TSO) of Computer Sciences Corporation (CSC) sought to determine the feasibility of using Artificial Intelligence (AI) techniques in a real-time processing environment. Specifically, the use of an expert system to assist in telemetry data acquisition processing was studied. A prototype expert system was implemented with the purpose of monitoring F15 Vertical Short Take Off and Landing (VSTOL) aircraft engine tests in order to predict engine stalls. This prototype expert system was implemented on a Symbolics 3670 symbolic processor using Inference Corporation's Artificial Reasoning Tool (ART) expert system compiler/generator. The Symbolics computer was connected to a Gould/SEL 32/6750 real-time processor using a Flavors, Inc. Bus Link for real-time data transfer.
447

Branching Gaussian Process Models for Computer Vision

Simek, Kyle January 2016 (has links)
Bayesian methods provide a principled approach to some of the hardest problems in computer vision: low signal-to-noise ratios, ill-posed problems, and problems with missing data. This dissertation applies Bayesian modeling to infer multidimensional continuous manifolds (e.g., curves, surfaces) from image data using Gaussian process priors. Gaussian processes are ideal priors in this setting, providing a stochastic model over continuous functions while permitting efficient inference. We begin by introducing a formal mathematical representation of branching curvilinear structures called a curve tree, and we define a novel family of Gaussian processes over curve trees called branching Gaussian processes. We define two types of branching Gaussian processes and show how to extend them to branching surfaces and hypersurfaces. We then apply Gaussian processes in three computer vision applications. First, we perform 3D reconstruction of moving plants from 2D images. Using a branching Gaussian process prior, we recover high-quality 3D trees while being robust to plant motion and camera calibration error. Second, we perform multi-part segmentation of plant leaves from highly occluded silhouettes using a novel Gaussian process model for stochastic shape. Our method obtains good segmentations despite highly ambiguous shape evidence and minimal training data. Finally, we estimate 2D trees from microscope images of neurons with highly ambiguous branching structure. We first fit a tree to a blurred version of the image where structure is less ambiguous, then iteratively deform and expand the tree to fit finer images, using a branching Gaussian process regularizing prior for deformation. Our method infers natural tree topologies despite ambiguous branching and image data containing loops. Our work shows that Gaussian processes can be a powerful building block for modeling complex structure, and that they perform well in computer vision problems with significant noise and ambiguity.
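As background for the abstract above, a minimal sketch of ordinary Gaussian process regression, the building block that branching GPs extend to curve trees; the kernel, lengthscale, and data here are illustrative assumptions, not the dissertation's models:

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1D inputs."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

rng = np.random.default_rng(3)
x_train = np.linspace(0.0, 5.0, 20)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=20)   # noisy curve samples
x_test = np.linspace(0.0, 5.0, 100)

# Standard GP posterior mean: K(test, train) @ (K(train, train) + noise*I)^-1 y
K = rbf_kernel(x_train, x_train) + 0.1**2 * np.eye(20)
K_star = rbf_kernel(x_test, x_train)
posterior_mean = K_star @ np.linalg.solve(K, y_train)
print(np.max(np.abs(posterior_mean - np.sin(x_test))))  # small reconstruction error
```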
448

High fidelity micromechanics-based statistical analysis of composite material properties

Mustafa, Ghulam 08 April 2016 (has links)
Composite materials are being widely used in lightweight structural applications due to their high specific stiffness and strength properties. However, predicting their mechanical behaviour accurately is a difficult task because of the complicated nature of these heterogeneous materials. This behaviour is not easily captured by most existing macromechanics-based models. Designers compensate for the model unknowns in failure predictions by generating overly conservative designs with relatively simple ply stacking sequences, thereby mitigating many of the benefits promised by composites. The research presented in this dissertation was undertaken with the primary goal of providing efficient methodologies for use in the design of composite structures, considering inherent material variability and model shortcomings. A micromechanics-based methodology is proposed to simulate the stiffness, strength, and fatigue behaviour of composites. The computational micromechanics framework is based on the properties of the constituents of composite materials: the fiber, the matrix, and the fiber/matrix interface. This model helps the designer to understand in depth the failure modes in these materials and to design efficient structures utilizing arbitrary layups with a reduced requirement for supporting experimental testing. The only limiting factor in using a micromechanics model is the challenge of obtaining the constituent properties. The overall novelty of this dissertation is to calibrate these constituent properties by integrating the micromechanics approach with a Bayesian statistical model. The early research explored the probabilistic aspects of the constituent properties to calculate the stiffness characteristics of a unidirectional lamina. These stochastic stiffness properties were then considered as an input to analyze the wing box of a wind turbine blade. Results of this study provided a gateway to map constituent uncertainties to the top-level structure. Next, a stochastic first-ply-failure load method was developed based on micromechanics and Bayesian inference. Finally, probabilistic S-N curves of composite materials were calculated after fatigue model parameter calibration using Bayesian inference. Throughout this research, extensive experimental data sets from the literature have been used to calibrate and evaluate the proposed models. The micromechanics-based probabilistic framework formulated here is quite general, and is applied to the specific application of a wind turbine blade. The procedure may be easily generalized to deal with other structural applications such as storage tanks, pressure vessels, civil structural cladding, unmanned air vehicles, automotive bodies, etc., which can be explored in future work. / Graduate / 0548 / enginer315@gmail.com
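A minimal sketch of the Bayesian-calibration idea: a random-walk Metropolis sampler recovering constituent moduli from lamina-level measurements. The rule-of-mixtures "model", the flat positivity prior, and all numbers here are illustrative stand-ins for the dissertation's high-fidelity micromechanics, not its actual formulation.

```python
import numpy as np

def log_posterior(theta, data, model, sigma=1.0):
    """Gaussian likelihood around model(theta) with a flat positivity prior
    (an illustrative choice, not the dissertation's prior)."""
    if np.any(theta <= 0):
        return -np.inf
    resid = data - model(theta)
    return -0.5 * np.sum(resid**2) / sigma**2

def metropolis(theta0, steps, scale, data, model, rng):
    """Random-walk Metropolis sampler over the constituent parameters."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta, data, model)
    chain = []
    for _ in range(steps):
        prop = theta + scale * rng.normal(size=theta.size)
        lp_prop = log_posterior(prop, data, model)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop        # accept the proposed move
        chain.append(theta.copy())
    return np.array(chain)

# Stand-in "micromechanics": lamina moduli from fiber (Ef) and matrix (Em)
# moduli via rule-of-mixtures relations at 60% fiber volume fraction.
model = lambda th: np.array([0.6 * th[0] + 0.4 * th[1],           # E1
                             1.0 / (0.6 / th[0] + 0.4 / th[1])])  # E2
data = np.array([43.2, 7.05])   # "measured" lamina stiffnesses (GPa)

rng = np.random.default_rng(4)
chain = metropolis([60.0, 4.0], steps=20000, scale=0.5,
                   data=data, model=model, rng=rng)
print(chain[5000:].mean(axis=0))  # posterior means near Ef = 70, Em = 3 (GPa)
```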
449

The structure of logical consequence : proof-theoretic conceptions

Hjortland, Ole T. January 2010 (has links)
The model-theoretic analysis of the concept of logical consequence has come under heavy criticism in the last couple of decades. The present work looks at an alternative approach to logical consequence, one where the notion of inference takes center stage. Formally, the model-theoretic framework is exchanged for a proof-theoretic framework. It is argued that, contrary to the traditional view, proof-theoretic semantics is not revisionary, and should rather be seen as a formal semantics that can supplement model theory. Specifically, there are formal resources to provide a proof-theoretic semantics for both intuitionistic and classical logic. We develop a new perspective on proof-theoretic harmony for logical constants which incorporates elements from the substructural era of proof theory. We show that there is a semantic lacuna in the traditional accounts of harmony. A new theory of how inference rules determine the semantic content of logical constants is developed. The theory weds proof-theoretic and model-theoretic semantics by showing how proof-theoretic rules can induce truth-conditional clauses in Boolean and many-valued settings. It is argued that such a new approach to how rules determine meaning will ultimately assist our understanding of the a priori nature of logic.
450

Bayesian approaches for modeling protein biophysics

Hines, Keegan 18 September 2014 (has links)
Proteins are the fundamental unit of computation and signal processing in biological systems. A quantitative understanding of protein biophysics is of paramount importance, since even slight malfunction of proteins can lead to diverse and severe disease states. However, developing accurate and useful mechanistic models of protein function can be strikingly elusive. I demonstrate that the adoption of Bayesian statistical methods can greatly aid in modeling protein systems. I first discuss the pitfall of parameter non-identifiability and how a Bayesian approach to modeling can yield reliable and meaningful models of molecular systems. I then delve into a particular case of non-identifiability within the context of an emerging experimental technique called single molecule photobleaching. I show that the interpretation of this data is non-trivial and provide a rigorous inference model for the analysis of this pervasive experimental tool. Finally, I introduce the use of nonparametric Bayesian inference for the analysis of single molecule time series. These methods aim to circumvent problems of model selection and parameter identifiability and are demonstrated with diverse applications in single molecule biophysics. The adoption of sophisticated inference methods will lead to a more detailed understanding of biophysical systems. / text
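One nonparametric ingredient commonly used for problems like these is a Dirichlet process prior, which sidesteps choosing the number of states (e.g., photobleaching levels) in advance. A minimal truncated stick-breaking sketch, offered as background rather than as the thesis's specific construction:

```python
import numpy as np

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated stick-breaking draw of Dirichlet-process mixture weights:
    the number of states effectively used is inferred, not fixed a priori."""
    betas = rng.beta(1.0, alpha, size=n_atoms)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining

rng = np.random.default_rng(5)
weights = stick_breaking_weights(alpha=2.0, n_atoms=50, rng=rng)
print(weights.sum())           # close to 1; the truncation error is negligible
print((weights > 0.01).sum())  # "effective" number of states given real weight
```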
