About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Statistical inference in mixture models with random effects

Meddings, D. P. January 2014 (has links)
There is currently no asymptotic theory for statistical inference on the maximum likelihood estimators of the parameters in a mixture of linear mixed models (MLMMs). Despite this, many researchers assume the estimators are asymptotically normally distributed with covariance matrix given by the inverse of the information matrix. Mixture models create new identifiability problems that are not inherited from the underlying linear mixed model (LMM), and this subject has not been investigated for these models. Since identifiability is a prerequisite for the existence of a consistent estimator of the model parameters, this is an important area of research that has been neglected. MLMMs are mixture models with random effects, and they are typically used in medical and genetics settings where random heterogeneity in repeated measures data is observed between measurement units (people, genes), but where it is assumed that the units belong to one and only one of a finite number of sub-populations or components. This is expressed probabilistically by using sub-population-specific probability distribution functions, often called the component distribution functions. This thesis is motivated by the belief that the use of MLMMs in applied settings such as these is being held back by the lack of development of the statistical inference framework. Specifically, this thesis has the following primary objectives: (i) to investigate the quality of statistical inference provided by different information-matrix-based methods of confidence interval construction; (ii) to investigate the impact of component distribution function separation on the quality of statistical inference, and to propose a new method to quantify this separation; and (iii) to determine sufficient conditions for identifiability of MLMMs.
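For orientation, a K-component MLMM with Gaussian components can be written schematically (a generic formulation, not necessarily the exact parameterisation used in the thesis) as

\[
f(\mathbf{y}_i \mid \boldsymbol{\theta}) \;=\; \sum_{k=1}^{K} \pi_k\,
\phi\!\left(\mathbf{y}_i;\; X_i\boldsymbol{\beta}_k,\; Z_i D_k Z_i^{\top} + \sigma_k^{2} I\right),
\qquad \sum_{k=1}^{K}\pi_k = 1,
\]

where \(\phi\) is the multivariate normal density, \(X_i\) and \(Z_i\) are the fixed- and random-effects design matrices for unit \(i\), \(D_k\) is the random-effects covariance and \(\pi_k\) the mixing proportion of component \(k\). Identifiability asks whether distinct parameter values (beyond relabelling of the components) can produce the same mixture density \(f\).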
12

Evidence-based crime investigation : a Bayesian approach

Lund, E. January 2014 (has links)
In this dissertation I study the standard of evidence practiced in crime investigation. I concentrate on one recurrent decision problem: that of assessing and deciding the basic, causal/logical, evidential value of means of evidence in given criminal cases. I restrict further to problems where the means of evidence is based on imprints without transferred components (illustrated by bitemarks on human skin) and expert knowledge about such imprints (forensic odontology and medicine). The question is: which standard of induction can and should be required for crime investigative decisions in modern democracies? The answer depends on (a) the specification of “logical coherence” and (b) the analytical and institutional conditions and aims of crime investigation. Having established a minimum standard of evidence and the conditions and aims of crime investigation, I ask: (1) What is the current inductive procedure for determining the evidential value of means of evidence based on imprints in the form of bitemarks on human skin? (2) Does this procedure provide logically coherent justification according to the minimum standard of evidence identified in chapter 1? Two empirical studies suggest a procedure of the type “incomplete and open induction” and “no” to question (2). An alternative procedure, “complete and closed induction”, anchored in Bayesian Theory and Bayesian Inference Networks, is suggested, theoretically justified, and demonstrated in the last four chapters of the dissertation. The remaining question is this: the procedure provides logically coherent justification to the standard of evidence identified in chapter 1 and thus secures the basic shared epistemic aspects of trust-formation, but is it compatible with, and able to secure, the further and internally conflicting contextual/situational legal, social, and emotional aspects of trust-formation? The answer is a contingent “yes” – if it is restricted to the investigative phase and under-communicated during the last part of the trial phase: epistemic needs are as necessary for trust-formation as contextual/situational social and emotional needs, but the former are not directly compatible with the latter and must therefore be secured prior to the latter.
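The core updating step that Bayesian treatments of evidential value rest on can be sketched in odds form (a standard forensic formulation, not a reproduction of the dissertation's networks):

\[
\underbrace{\frac{P(H_p \mid E)}{P(H_d \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H_p)}{P(E \mid H_d)}}_{\text{likelihood ratio (evidential value)}}
\times
\underbrace{\frac{P(H_p)}{P(H_d)}}_{\text{prior odds}},
\]

where \(H_p\) and \(H_d\) are competing propositions (e.g. prosecution and defence accounts of the bitemark) and \(E\) the expert findings. A Bayesian inference network extends this single update to a graph of conditionally dependent propositions and findings, so that the joint evidential value can be propagated coherently.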
13

Decision making under uncertainty and competition for sustainable energy technologies

Maurovich Horvat, L. January 2015 (has links)
This dissertation addresses the main challenges faced in the transition to a more sustainable energy sector by applying modelling tools that could design more effective managerial responses and provide policy insights. To mitigate the impact of climate change, the electric power industry needs to reduce markedly its emissions of greenhouse gases. As energy consumption is set to increase in the foreseeable future, this can be achieved only through costly investments in more efficient conventional generation or in renewable energy resources. While more energy-efficient technologies are commercially available, the deregulation of most electricity industries implies that investment decisions need to be taken by private investors with government involvement limited to setting policy measures or designing market rules. Thus, it is desirable to understand how investment and operational decisions are to be made by decentralised entities that face uncertainty and competition. One of the most efficient thermal power technologies is cogeneration, or combined heat and power (CHP), which can recover heat that otherwise would be discarded from conventional generation. Cogeneration is particularly efficient when the recovered heat can be used in the vicinity of the combustion engine. Although governments are supporting on-site CHP generation through feed-in tariffs and favourable grid access, the adoption of small-scale electricity generation has been hindered by uncertain electricity and gas prices. While deterministic and real options studies have revealed distributed generation to be both economical and effective at reducing CO2 emissions, these analyses have not addressed the aspect of risk management. In order to overcome the barriers of financial uncertainties to investment, it is imperative to address the decision-making problems of a risk-averse energy consumer. Towards that end, we develop a multi-stage, stochastic mean-risk optimisation model for the long-term and medium-term risk management problems of a large consumer. We first show that installing a CHP unit not only results in both lower CO2 emissions and expected running cost but also leads to lower risk exposure. In essence, by investing in a CHP unit, a large consumer obtains the option to use on-site generation whenever the electricity price peaks, thereby reducing significantly its financial risk over the investment period. To provide further insights into risk management strategies with on-site generation, we examine also the medium-term operational problem of a large consumer. In this model, we include all available contracts from electricity and gas futures markets, and analyse their interactions with on-site generation. We conclude that by swapping the volatile electricity spot price for the less volatile gas spot price, on-site generation with CHP can lead to lower risk exposure even in the medium term, and it alters a risk-averse consumer’s demand for futures contracts. While extensive subsidies have triggered investments in renewable generation, these installations need to be accompanied by transmission expansion. The reason for this is that solar and wind energy output is intermittent, and attractive solar and wind sites are often located far away from demand centres. Thus, to integrate renewable generation into the grid system and to maintain a reliable and secure electricity supply, a vastly improved transmission network is crucial. 
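The mean-risk optimisation described above is typically of the following form (a generic mean-CVaR schematic; the exact formulation and stage structure used in the thesis may differ):

\[
\min_{x \in \mathcal{X}} \;(1-\lambda)\,\mathbb{E}\big[C(x,\omega)\big] \;+\; \lambda\,\mathrm{CVaR}_{\alpha}\big[C(x,\omega)\big],
\qquad
\mathrm{CVaR}_{\alpha}[Z] \;=\; \min_{\eta \in \mathbb{R}}\Big\{\eta + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(Z-\eta)^{+}\big]\Big\},
\]

where \(x\) collects the here-and-now decisions (e.g. CHP investment or operation and futures positions), \(\omega\) the uncertain electricity and gas prices, \(C(x,\omega)\) the resulting cost, \(\lambda \in [0,1]\) the risk-aversion weight and \(\alpha\) the confidence level of the conditional value-at-risk.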
Finding the optimal transmission line investments for a given network is already a very complex task, since these decisions must also take into account future demand and generation configurations, which now depend on private investors. To address these concerns, our third study models the problem of wind energy investment and transmission expansion jointly through a stochastic bi-level programming model under different market designs for transmission line investment. This enables the game-theoretic interaction between distinct decision makers, i.e., those investing in power plants and those constructing transmission lines, to be addressed directly. We find that under perfect competition only one of the wind power producers, the one with the lower capital cost, invests, and that it invests less under a profit-maximising merchant investor (MI) than under a welfare-maximising transmission system operator (TSO), as the MI reduces the transmission capacity to increase congestion rent. In addition, we note that regardless of whether the grid expansion is carried out by the TSO or by the MI, a higher proportion of wind energy is installed when power producers exercise market power. In effect, strategic withholding of generation capacity by producers prompts more transmission investment, since the TSO aims to increase welfare by subsidising wind and the MI creates more flow to maximise profit. Under perfect competition, a higher level of wind generation can be achieved only by mandating renewable portfolio standards (RPS), which in turn also results in increased transmission investment.
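Schematically, the bi-level structure can be written as (a stylised sketch; the thesis's model includes network constraints, scenarios and equilibrium conditions not shown here):

\[
\max_{\tau \in \mathcal{T}} \; U\big(\tau, x^{*}(\tau)\big)
\qquad \text{s.t.} \qquad
x^{*}(\tau) \in \arg\max_{x \in \mathcal{X}(\tau)} \; \Pi\big(x; \tau\big),
\]

where the upper level chooses the transmission expansion \(\tau\) (a welfare-maximising TSO or a congestion-rent-maximising MI, so the objective \(U\) differs between market designs) while anticipating the lower-level generation investment and dispatch response \(x^{*}(\tau)\) of the wind power producers, whose objective \(\Pi\) depends on whether they behave competitively or exercise market power.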
14

Dimension reduction for functional regression with application to ozone trend analysis

Park, A. Y. January 2014 (has links)
This thesis concerns solutions to the ill-posed problem in functional regression, where either covariates or responses lie in functional spaces. The regression coefficient in these functional settings lives in an infinite-dimensional space, so dimension reduction is commonly considered. In Chapter 2, to analyze trends in stratospheric ozone profiles, the profiles are regressed on time and relevant proxies (function-on-multivariate regression). To achieve dimension reduction, we employ Functional Principal Component Analysis (FPCA), and the projections of the profiles onto the PC basis are used in the subsequent statistical step to reveal the non-linear effects of the covariates on ozone. Variations in the influences and the trends across altitudes are found, which highlights the benefits of using the functional approach. When the PC basis is used for the regression coefficient, the subspace is chosen without regard to how well it helps prediction. In Chapter 3, we introduce a more efficient dimension reduction method, Functional Principal Fitted Component Regression (FPFCR), accounting for the response when choosing the components, based on an inverse regression. Our numerical studies provide insights into the possible advantages of the proposed approach: it leads to more parsimonious model selection than classical dimension reduction methods, which is particularly apparent in our brain image analysis. The solutions to the regression problem above are based on a frequentist perspective. In Chapter 4, we adopt a Bayesian viewpoint and propose Functional Bayesian Linear Regression (FBLR). We impose a Gaussian prior for the regression coefficient with the precision written in differential form. In addition, a Gamma prior is assumed for the precision of the regression error. We obtain the posterior of the regression parameter and further quantify its uncertainties via point-wise Bayesian credible regions.
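As a rough illustration of the FPCA projection step, the sketch below performs FPCA on densely observed curves and regresses the resulting PC scores on a time trend. The grid, data and number of components are invented for the example; the thesis's own implementation is not reproduced.

```python
import numpy as np

def fpca_scores(curves, n_components):
    """FPCA via eigendecomposition of the sample covariance of densely
    observed curves (rows = curves, columns = grid points)."""
    mean_curve = curves.mean(axis=0)
    centred = curves - mean_curve
    cov = centred.T @ centred / (curves.shape[0] - 1)
    eigval, eigvec = np.linalg.eigh(cov)              # ascending eigenvalues
    order = np.argsort(eigval)[::-1][:n_components]   # leading components
    basis = eigvec[:, order]                          # discretised PC functions
    scores = centred @ basis                          # projections of each curve
    return scores, basis, mean_curve

# toy profiles with a slow trend over an index such as time
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 50)
time = np.arange(100.0)
curves = (np.sin(2 * np.pi * grid)[None, :] * (1 + 0.01 * time)[:, None]
          + 0.1 * rng.standard_normal((100, 50)))

scores, basis, mean_curve = fpca_scores(curves, n_components=2)

# regress each component score on an intercept and a linear time trend
X = np.column_stack([np.ones_like(time), time])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(coef)   # rows: intercept / trend, columns: PC1, PC2
```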
15

Practical use of multiple imputation

Morris, T. P. January 2014 (has links)
Multiple imputation is a flexible technique for handling missing data that is widely used in medical research. Its properties are well understood for some simple settings but less so for the complex settings in which it is typically applied. The three research topics considered in this thesis concern incomplete continuous covariates when the analysis model involves nonlinear functions of one or more of these. Chapters 2–4 evaluate two imputation techniques known as predictive mean matching and local residual draws, which may protect against bias when the imputation model is misspecified. Following a review of the literature, I focus on how to match, the appropriate size of donor pool, and whether transformation can improve imputation. Neither method performs as well as hoped when the imputation model is misspecified, but both can offer some protection against imputation model misspecification. Chapter 5 investigates strategies for imputing the ratio of two variables. Various ‘active’ and ‘passive’ strategies are critiqued, applied to two datasets and compared in a simulation study. (‘Active’ indicates the ratio is imputed directly within a model; ‘passive’ means it is calculated externally to the imputation model.) Without prior transformation, passive imputation after imputing the numerator and denominator should be avoided, but other methods require less caution. Chapter 6 proposes techniques for combining multiple imputation with (multivariable) fractional polynomial methods. A new technique for imputing dimension-one fractional polynomials is developed and nested in a chained-equations procedure. Two candidate methods for estimating exponents in the fractional polynomial model, using Wald statistics and log-likelihoods, are assessed via simulation. Finally, the type I error and power are compared for model selection procedures based on Wald and likelihood-ratio type tests. Both methods can outperform complete-case analysis, with the Wald method marginally better than likelihood-ratio tests.
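For concreteness, a minimal sketch of predictive mean matching for a single incomplete continuous covariate. A proper multiple-imputation implementation would also draw the regression parameters and repeat across imputations; the donor-pool size and all names here are illustrative.

```python
import numpy as np

def pmm_impute(x, aux, k=5, rng=None):
    """One predictive-mean-matching draw for an incomplete covariate x,
    using fully observed auxiliary covariates aux (one column per covariate)."""
    rng = rng or np.random.default_rng()
    obs = ~np.isnan(x)
    design = np.column_stack([np.ones(len(x)), aux])
    beta, *_ = np.linalg.lstsq(design[obs], x[obs], rcond=None)
    pred = design @ beta                    # predicted means for all cases
    x_imp = x.copy()
    for i in np.flatnonzero(~obs):
        # donor pool: the k observed cases whose predicted mean is closest
        donors = np.argsort(np.abs(pred[obs] - pred[i]))[:k]
        x_imp[i] = x[obs][donors[rng.integers(len(donors))]]
    return x_imp

# toy data: x depends nonlinearly on z and is missing at random
rng = np.random.default_rng(1)
z = rng.standard_normal(200)
x = z ** 2 + 0.3 * rng.standard_normal(200)
x[rng.random(200) < 0.3] = np.nan
print(np.isnan(pmm_impute(x, z[:, None], k=5, rng=rng)).sum())   # 0 remaining
```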
16

Combining insights from mean and quantile regression : an application to spatio-temporal data

Liang, X. M. January 2014 (has links)
This thesis is concerned with the analysis of spatio-temporal data sets in which it is required to estimate the effects of (potentially several) covariates upon the distribution of a response variable. Regression analysis is particularly useful in the modelling of such data; we consider generalized additive models, and quantile regression models. Our contributions concern two aspects: estimation and inference. Accurate estimation and inference in the presence of dependence remains challenging. Here we consider a penalized spline framework. This provides a class of smoothers that are easy to fit, and allows for flexibility via the combination of different spline bases and penalties. Inference procedures may be misleading when failing to account for the dependence in the data; we develop inference tools to ensure valid statistical inference. The first part of the thesis extends generalized additive models by fitting smooth functions to the data incorporating all relevant covariates of space and time. The spatial dependence is accounted for by assuming independence during fitting and then adjusting the standard errors of parameter estimates to ensure valid inferences. The second part of the thesis concerns quantile regression; we introduce a simple modification and parameterization of the standard nonparametric quantile regression problem which can be exploited to determine a set of procedures which approximate the computationally demanding nonlinear optimization problem for quantile estimation. We also address the issue of smoothing parameter selection and inference of quantile regression estimates in the presence of dependence. This work is motivated by, and illustrated using, the southwest Western Australia rainfall data set as a case study. The developments in this thesis will be useful to practitioners who require a practical and computationally convenient way to compute nonparametric mean and quantile regression curves for spatio-temporal applications. Importantly, they are also readily implementable using existing statistical software.
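The nonparametric quantile regression problem referred to above is, schematically, a penalised check-loss fit (a generic formulation; the thesis's reparameterisation, spline basis and penalty details are not reproduced here):

\[
\hat{f}_\tau \;=\; \arg\min_{f}\; \sum_{i=1}^{n} \rho_\tau\big(y_i - f(x_i)\big) \;+\; \lambda \int \{f''(x)\}^{2}\,dx,
\qquad
\rho_\tau(u) \;=\; u\,\big\{\tau - \mathbb{1}(u<0)\big\},
\]

where \(\tau \in (0,1)\) is the quantile level, \(\rho_\tau\) the check (pinball) loss and \(\lambda\) a smoothing parameter; replacing the check loss by squared error recovers the penalised-spline mean regression used in the first part of the thesis.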
17

A study on the properties and applications of advanced MCMC methods for diffusion models

Pazos, E. A. January 2014 (has links)
The aim of this thesis is to find ways to make advanced Markov Chain Monte Carlo (MCMC) algorithms more efficient. Our framework is relevant for target distributions defined as changes of measure from Gaussian laws; we use this definition because it provides the flexibility to apply our methods to a wider range of problems, including models driven by Stochastic Differential Equations (SDEs). The advanced MCMC algorithms presented in this thesis are well-defined on the infinite-dimensional path space and exhibit superior properties in terms of computational complexity. A consequence of this well-definedness is that the algorithms have mesh-free mixing properties and their convergence time does not deteriorate when the dimension of the path increases. The contributions we make in this thesis are in four areas. First, we present a new proof of the well-posedness of the advanced Hybrid Monte Carlo (HMC) algorithm; this proof allows us to verify the validity of the required assumptions in several practical applications. Second, by comparing the computational costs of different algorithms analytically and with numerical examples, we show that the advanced Random Walk Metropolis and the Metropolis-adjusted Langevin algorithm (MALA) have similar complexity when applied to ‘long’ diffusion paths, whereas the HMC algorithm is more efficient than both. Third, we demonstrate that the Golightly-Wilkinson transformation can be applied to a wider range of applications than the typically used Lamperti transformation when using HMC algorithms to sample from complex target distributions, such as SDEs with general diffusion coefficients. Fourth, we implement a novel joint update scheme to sample from a path observed with error, where the path itself is driven by a fractional Brownian motion (fBm) instead of a Wiener process. Here HMC's scaling properties prove desirable, since the non-Markovian nature of fBm makes techniques such as blocking overly expensive. We achieve this through a careful use of the Davies-Harte algorithm, which provides the mapping between fBm and uncorrelated white noise and allows us to decouple the model parameters, which are a priori entangled with the high-dimensional latent variables. Finally, we show numerically that the proposed algorithm works efficiently and provide ample comparisons to corroborate this.
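For concreteness, a minimal numpy sketch of the circulant-embedding (Davies-Harte) construction that maps uncorrelated Gaussian noise to fractional Gaussian noise, whose cumulative sums give an fBm path. The sketch omits the negative-eigenvalue fallback and says nothing about how this mapping is embedded in the thesis's joint HMC update; names and parameter values are illustrative.

```python
import numpy as np

def davies_harte_fgn(n, hurst, rng=None):
    """Sample n steps of fractional Gaussian noise by circulant embedding."""
    rng = rng or np.random.default_rng()
    k = np.arange(n + 1)
    # autocovariance of unit-step fGn with Hurst index `hurst`
    gamma = 0.5 * ((k + 1.0) ** (2 * hurst) - 2.0 * k ** (2 * hurst)
                   + np.abs(k - 1.0) ** (2 * hurst))
    # first row of the 2n x 2n circulant embedding of the covariance
    row = np.concatenate([gamma[:n], [gamma[n]], gamma[1:n][::-1]])
    lam = np.fft.fft(row).real                     # eigenvalues (nonnegative for fGn)
    # complex Gaussian noise with unit variance per entry
    eps = (rng.standard_normal(2 * n) + 1j * rng.standard_normal(2 * n)) / np.sqrt(2.0)
    y = np.fft.fft(np.sqrt(np.maximum(lam, 0.0)) * eps)
    # the first n entries of the rescaled real part have covariance gamma
    return np.sqrt(1.0 / n) * y.real[:n]

fgn = davies_harte_fgn(1024, hurst=0.7, rng=np.random.default_rng(1))
fbm_path = np.cumsum(fgn)     # white noise -> fGn -> fBm path
```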
18

Multidimensional time series classification and its application to video activity recognition

Sengupta, Shreeya January 2015 (has links)
A collection of observations made sequentially through time is known as a time series. The order in which the observations in a time series are recorded is important and is a distinct characteristic of time series. A time series may have only one dimension (unidimensional) or multiple dimensions (multidimensional). The research presented in this thesis considers multidimensional time series. Financial stock market, video, medical (EEG and ECG) and speech data are all examples of multidimensional time series data. Analysis of multidimensional time series data can reveal underlying patterns, knowledge of which can benefit several time series applications. For example, rules derived by analysing stock data can help predict the behaviour of the stock market, and identifying the pattern of the strokes in signatures can aid signature verification. However, time series analysis is often hindered by the presence of variability in the series. Variability refers to the difference in the time series data generated at different points in time. It arises because of the stochasticity of the process generating the time series, non-stationarity of the time series, the presence of noise, and the variable sampling rate with which a time series is sampled. This research studies the effect of non-stationarity and variability on multidimensional time series analysis, with a particular focus on video activity recognition. The research firstly studies the effects of non-stationarity, one of the causes of variability, and of variability in general on time series analysis. The efficacy of several analysis models was evaluated on various time series problems. Results show that both non-stationarity and variability degrade the performance of the models, consequently affecting time series analysis. The research then concentrates on video data analysis, where space and time variabilities abound. The variability of the video content along the space and time dimensions is known as spatial and temporal variability respectively. New methods to handle or minimise the effect of spatial and temporal variabilities are proposed. The proposed methods are then assessed against existing methods which do not handle the spatial and temporal variabilities explicitly. The proposed methods perform better than the existing methods, indicating that better video activity recognition can be achieved by addressing the spatial and temporal variabilities.
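As an illustration of what handling temporal variability can mean in practice, the sketch below computes a dynamic time warping (DTW) distance between two multidimensional sequences. DTW is a standard device for absorbing differences in speed and is given here purely for orientation; it is not the method proposed in the thesis, which the abstract does not specify.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two multidimensional series
    (rows = time steps, columns = feature dimensions)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # stretch sequence b
                                 cost[i, j - 1],      # stretch sequence a
                                 cost[i - 1, j - 1])  # advance both
    return cost[n, m]

# two feature sequences of the "same activity" recorded at different speeds
rng = np.random.default_rng(2)
seq_a = np.cumsum(rng.standard_normal((40, 3)), axis=0)
seq_b = seq_a[::2] + 0.05 * rng.standard_normal((20, 3))
print(dtw_distance(seq_a, seq_b))
```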
19

Optimal dynamic treatment regimes : regret-regression method with myopic strategies

Mohamed, Nur Anisah January 2013 (has links)
Optimal dynamic treatment strategies provide a set of decision rules that are based on a patient’s history. We assume there is a sequence of decision times j = 1, 2, ..., K. At each time a measurement Sj of the state of the patient is obtained and then some action Aj is decided. The aim is to provide rules for action choice so as to maximise some final value Y. In this thesis we focus on the regret-regression method described by Henderson et al. (2009) and the regret approach to optimal dynamic treatment regimes proposed by Murphy (2003). The regret-regression method combines the regret function with regression modelling and is suitable for both long-term and myopic (short-term) strategies. We begin by describing and demonstrating the current theory using Murphy's method and Robins' G-estimation technique. Comparison between the regret-regression method and these two methods is possible, and it is found that regret-regression provides better estimation than Murphy's method or Robins' G-estimation. We next investigate misspecification of the Murphy and regret-regression models. We consider the effect of misspecifying the model that is assumed for the actions, which is required for the Murphy method, and the model for the states, which is required for the regret-regression approach. We also consider robustness of the fitting algorithms to starting values of the parameters. Diagnostic tests are available for model adequacy. An application to anticoagulant data is presented in detail. Myopic one-step- and two-step-ahead strategies are studied. Further investigation involves the use of Generalised Estimating Equations (GEEs) and Quadratic Inference Functions (QIF) for estimation. We also assess the robustness of both methods. Finally we consider the influence of individual observations on the parameter estimates.
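For orientation, the regret that both approaches build on can be written schematically (following Murphy, 2003; notation matches the Sj, Aj, Y of the abstract, and the exact modelling assumptions of the thesis are not reproduced):

\[
\mu_j\big(a_j \mid \bar{s}_j, \bar{a}_{j-1}\big)
= \mathbb{E}\big[Y \mid \bar{s}_j, \bar{a}_{j-1}, \text{optimal actions from time } j\big]
- \mathbb{E}\big[Y \mid \bar{s}_j, \bar{a}_{j-1}, A_j = a_j, \text{optimal actions from time } j+1\big],
\]

i.e. the expected loss in the final value Y incurred by taking action a_j at decision time j rather than the optimal action, assuming optimal decisions thereafter; the optimal regime is the one whose regrets are all zero. Regret-regression embeds these regrets in a regression model for Y together with a model for the evolving states.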
20

Reasoning with incomplete information : within the framework of Bayesian networks and influence diagrams

Enderwick, Tracey Claire January 2008 (has links)
Human cognitive limitations make it very difficult to process and rationalise information effectively in complex situations. To overcome this limitation, many analytical methods have been designed and applied to aid decision-makers in complex situations. In some cases the information gained is comprehensive and complete. However, very often information regarding the situation is incomplete and uncertain. In these cases it is necessary to reason with incomplete and uncertain information. The probabilistic graphical models known as Bayesian Networks and Influence Diagrams provide a powerful and increasingly popular framework to represent such situations. The research described here makes use of this framework to address a number of aspects relating to incomplete information. The methods presented are intended to provide support in measuring the completeness of information, assessing the trade-off of speed versus quality of decision-making, and incorporating the impact of unrevealed information as time progresses. Two measures are investigated to determine the completeness levels of influential observable information. One measure is based on mutual information. This measure is ultimately shown to fail, however, since it can result in a negative completeness value. The other measure focuses on the range reductions of either the probabilities (for the Bayesian Networks) or the utilities (for the Influence Diagrams) when observations are made. Analytical models were developed to determine the trade-off between waiting for more information and making an immediate decision. A number of experiments involving participants in imaginary decision-making scenarios were also conducted to gain an understanding of how people intuitively weight such choices. The value of unrevealed information was utilised by applying likelihood evidence. Unrevealed information relates to something we are looking for but have not yet found: the longer time passes without it being found, the more confident we can become that it is not actually there.
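A minimal numpy sketch of the likelihood-evidence idea in a two-node network. The states, numbers and the decay of the evidence weights over search time are invented for illustration and are not taken from the thesis.

```python
import numpy as np

# Hidden hypothesis H -> observable finding E. "Unrevealed information" enters
# as likelihood (virtual) evidence on E: a weight per state of E rather than a
# hard observation, tilting against "found" the longer the search stays empty.
prior_h = np.array([0.3, 0.7])              # P(H = present), P(H = absent)
p_e_given_h = np.array([[0.9, 0.1],         # P(E = found / not found | H = present)
                        [0.2, 0.8]])        # P(E = found / not found | H = absent)

def posterior_h(likelihood_e):
    """Posterior over H after applying likelihood evidence lambda(e) on E."""
    message = p_e_given_h @ likelihood_e    # sum_e P(e | h) * lambda(e)
    unnorm = prior_h * message
    return unnorm / unnorm.sum()

print(posterior_h(np.array([1.0, 1.0])))    # no search yet: posterior equals prior
print(posterior_h(np.array([0.3, 1.0])))    # searched for a while, nothing found
print(posterior_h(np.array([0.05, 1.0])))   # searched much longer, still nothing
```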
