71 |
Multiple imputation in the presence of a detection limit, with applications : an empirical approach / Liebenberg, Shawn Carl, January 2014
Scientists often encounter unobserved or missing measurements that are typically reported as less than a fixed detection limit. This occurs especially in the environmental sciences, where detection of low exposures is not possible due to limitations of the measuring instrument, and the resulting data are often referred to as type I and type II left-censored data. Observations lying below this detection limit are therefore often ignored, or `guessed', because they cannot be measured accurately. However, reliable estimates of the population parameters are nevertheless required to perform statistical analysis. The problem of dealing with values below a detection limit becomes increasingly complex when a large number of observations fall below this limit. Researchers are thus interested in developing statistically robust estimation procedures for dealing with left- or right-censored data sets (Singh and Nocerino, 2002). This study focuses on several main components of the problems mentioned above. The imputation of censored data below a fixed detection limit is studied, particularly using the maximum likelihood procedure of Cohen (1959) and several variants thereof, in combination with four new variations of the multiple imputation concept found in the literature. Furthermore, the focus also falls strongly on estimating the density of the resulting imputed, `complete' data set by applying various kernel density estimators. Bandwidth selection issues are not addressed in this study and are left for further research. The maximum likelihood estimation method of Cohen (1959) is compared with several variant methods, to establish which of these maximum likelihood estimation procedures for censored data estimates the population parameters of three chosen lognormal distributions most reliably, in terms of well-known discrepancy measures.
These methods are implemented in combination with four new multiple imputation procedures, respectively, to assess which of these nonparametric methods is most effective at imputing the 12 censored values below the detection limit, with regard to the global discrepancy measures mentioned above. Several variations of the Parzen-Rosenblatt kernel density estimate are fitted to the complete, filled-in data sets obtained from the previous methods, to establish which is the preferred data-driven method for estimating these densities. The primary focus of the current study is therefore the performance of the four chosen multiple imputation methods, together with recommendations of methods and procedural combinations for dealing with data in the presence of a detection limit. An extensive Monte Carlo simulation study was performed to compare the various methods and procedural combinations, and conclusions and recommendations regarding the best of these are made based on the study's results. / MSc (Statistics), North-West University, Potchefstroom Campus, 2014
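Cohen's (1959) maximum likelihood approach for type I left-censored lognormal data, followed by imputation of the censored observations from the fitted model, can be sketched as follows. This is an illustrative outline only, not the thesis' actual procedures; the data, detection limit, and number of imputations are invented for the example.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(0)

# Hypothetical example: lognormal sample, left-censored at a detection limit.
dl = 0.5                                    # assumed detection limit
x = rng.lognormal(mean=0.0, sigma=1.0, size=200)
observed = x[x >= dl]
n_cens = int(np.sum(x < dl))                # only the count is known below dl

def neg_loglik(theta):
    # Censored-normal likelihood on the log scale: density for observed
    # points, plus n_cens copies of the log-CDF at the detection limit.
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    ll_obs = stats.norm.logpdf(np.log(observed), mu, sigma).sum()
    ll_cens = n_cens * stats.norm.logcdf((np.log(dl) - mu) / sigma)
    return -(ll_obs + ll_cens)

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Multiple imputation: draw the censored values from the fitted lognormal,
# truncated to lie below the detection limit (5 imputed data sets).
b = (np.log(dl) - mu_hat) / sigma_hat
imputations = [
    np.exp(stats.truncnorm.rvs(-np.inf, b, loc=mu_hat, scale=sigma_hat,
                               size=n_cens, random_state=rng))
    for _ in range(5)
]
```

Each imputed set can then be combined with `observed` to form a `complete' sample for kernel density estimation.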
|
73 |
Metrics and Test Procedures for Data Quality Estimation in the Aeronautical Telemetry Channel / Hill, Terry, October 2015
ITC/USA 2015 Conference Proceedings / The Fifty-First Annual International Telemetering Conference and Technical Exhibition / October 26-29, 2015 / Bally's Hotel & Convention Center, Las Vegas, NV / There is great potential in using Best Source Selectors (BSS) to improve link availability in aeronautical telemetry applications. While the general notion that diverse data sources can be used to construct a consolidated stream of "better" data is well founded, there is no standardized means of determining the quality of the data streams being merged together. Absent this uniform quality data, the BSS has no analytically sound way of knowing which streams are better, or best. This problem is further exacerbated when one imagines that multiple vendors are developing data quality estimation schemes, with no standard definition of how to measure data quality. In this paper, we present measured performance for a specific Data Quality Metric (DQM) implementation, demonstrating that the signals present in the demodulator can be used to quickly and accurately measure the data quality, and we propose test methods for calibrating DQM over a wide variety of channel impairments. We also propose an efficient means of encapsulating this DQM information with the data, to simplify processing by the BSS. This work leads toward a potential standardization that would allow data quality estimators and best source selectors from multiple vendors to interoperate.
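The role of a best source selector can be illustrated with a minimal sketch: given copies of the same frame from several receivers, each tagged with a vendor-supplied data-quality metric, forward the copy with the best metric. The `Frame` fields and the quality scale are hypothetical, not from the paper or any standard.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    source: str      # hypothetical receiver identifier
    payload: bytes   # the received data
    quality: float   # data-quality metric; higher = better (assumed scale)

def select_best(frames):
    """Hard best-source selection: return the frame with the highest DQM."""
    return max(frames, key=lambda f: f.quality)

# Same logical frame as seen by three receivers with differing quality.
frames = [
    Frame("rx_A", b"\x01\x02", quality=0.91),
    Frame("rx_B", b"\x01\x03", quality=0.97),
    Frame("rx_C", b"\x00\x02", quality=0.62),
]
best = select_best(frames)
```

The paper's point is precisely that without a standardized, calibrated `quality` value, this comparison across vendors is not analytically sound.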
|
74 |
NON-COHERENTLY DETECTED FQPSK: RAPID SYNCHRONIZATION AND COMPATIBILITY WITH PCM/FM RECEIVERS / Park, Hyung Chul; Lee, Kwyro; Feher, Kamilo, October 2001
International Telemetering Conference Proceedings / October 22-25, 2001 / Riviera Hotel and Convention Center, Las Vegas, Nevada / A new class of non-coherent detection techniques for the recently standardized Feher-patented quadrature phase-shift keying (FQPSK) systems is proposed and studied by computer-aided design/simulation and verified by experimental hardware measurements. The theoretical concepts of the described non-coherent techniques are based on an interpretation of the instantaneous frequency deviation or phase-transition characteristics of the FQPSK-B modulated signal at the front end of the receiver. These are accomplished either by a Limiter-Discriminator (LD) or by a Limiter-Discriminator followed by Integrate-and-Dump (LD I&D). It is shown that significant BER performance improvements can be obtained by increasing the received signal's observation time over multiple symbols as well as by adopting trellis demodulation. For example, our simulation results show that a BER of 10^-4 can be obtained at an Eb/N0 of 12.7 dB.
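As a hedged illustration of how simulated BER-versus-Eb/N0 results such as the one quoted above are produced, the sketch below runs a Monte Carlo bit-error-rate estimate for coherent BPSK over AWGN, a far simpler system than the non-coherent FQPSK-B receivers studied in the paper, and compares it against the closed-form BPSK result.

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def ber_bpsk(ebn0_db, n_bits=200_000):
    """Monte Carlo BER for coherent BPSK over AWGN (illustrative only)."""
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                      # map {0,1} -> {-1,+1}, Eb = 1
    # Noise variance N0/2 per real dimension, with N0 = Eb / (Eb/N0).
    noise = rng.normal(scale=math.sqrt(1 / (2 * ebn0)), size=n_bits)
    decisions = (symbols + noise) > 0
    return np.mean(decisions != bits)

sim = ber_bpsk(6.0)
# Theoretical BPSK BER: 0.5 * erfc(sqrt(Eb/N0)).
theory = 0.5 * math.erfc(math.sqrt(10 ** 0.6))
```

Sweeping `ebn0_db` over a grid produces the familiar waterfall curve from which figures like "BER = 10^-4 at 12.7 dB" are read off.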
|
75 |
ModPET: Novel Applications of Scintillation Cameras to Preclinical PET / Moore, Stephen K., January 2011
We have designed, developed, and assessed a novel preclinical positron emission tomography (PET) imaging system named ModPET. The system was developed using modular gamma cameras, originally developed for SPECT applications at the Center for Gamma Ray Imaging (CGRI), but configured for PET imaging by enabling coincidence timing. A pair of cameras is mounted on a flexible system gantry that also allows for acquisition of optical images, such that PET images can be registered to an anatomical reference. Data are acquired in a super list-mode form where raw PMT signals and event times are accumulated in event lists for each camera. Event parameter estimation of position and energy is carried out with maximum likelihood methods using careful camera calibrations, accomplished with collimated beams of 511-keV photons and a new iterative mean-detector-response-function processing routine. Intrinsic lateral spatial resolution for 511-keV photons was found to be approximately 1.6 mm in each direction. Lists of coincidence pairs are found by comparing event times in the two independent camera lists, using a timing window of 30 nanoseconds. By bringing the 4.5-inch-square cameras into close proximity, with a 32-mm separation for mouse imaging, a solid angle coverage of ∼75% partially compensates for the relatively low stopping power of the 5-mm-thick NaI crystals to give a measured sensitivity of up to 0.7%. An NECR analysis yields 11,000 pairs per second with 84 μCi of activity. A list-mode MLEM reconstruction algorithm was developed to reconstruct objects in an 88 x 88 x 30 mm field of view. Tomographic resolution tests with a phantom suggest a lateral resolution of 1.5 mm and a slightly degraded resolution of 2.5 mm in the direction normal to the camera faces. The system can also be configured to provide (99m)Tc planar scintigraphy images. Selected biological studies of inflammation, apoptosis, tumor metabolism, and bone osteogenic activity are presented.
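The coincidence-pairing step described above — matching event times from two independent camera lists within a 30 ns window — can be sketched as a two-pointer scan over time-sorted lists. The greedy one-to-one matching and the example timestamps are illustrative assumptions, not the system's actual implementation.

```python
def coincidence_pairs(times_a, times_b, window):
    """Pair events from two time-sorted lists whose timestamps differ by
    at most `window`; each event is used at most once (greedy sketch)."""
    pairs, i, j = [], 0, 0
    while i < len(times_a) and j < len(times_b):
        dt = times_a[i] - times_b[j]
        if abs(dt) <= window:
            pairs.append((i, j))   # record indices of the coincident pair
            i += 1
            j += 1
        elif dt > window:
            j += 1                 # event in list B is too early; advance B
        else:
            i += 1                 # event in list A is too early; advance A
    return pairs

# Hypothetical timestamps in nanoseconds; 30 ns window as in the abstract.
a = [100, 250, 400, 900]
b = [110, 500, 910, 2000]
pairs = coincidence_pairs(a, b, window=30)
```

Only the pairs survive into reconstruction; unmatched singles (here, 250 and 400 in list A) are discarded.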
|
76 |
Parameter Estimation Techniques for Nonlinear Dynamic Models with Limited Data, Process Disturbances and Modeling Errors / Karimi, Hadiseh, 23 December 2013
In this thesis, appropriate statistical methods are studied for overcoming two types of problems that occur during parameter estimation in chemical engineering systems. The first problem is having too many parameters to estimate from limited available data, assuming that the model structure is correct; the second involves estimating unmeasured disturbances, assuming that enough data are available for parameter estimation. In the first part of this thesis, a model is developed to predict rates of undesirable reactions during the finishing stage of nylon 66 production. This model has too many parameters (56) to estimate reliably from the available data. Statistical techniques are used to determine that 43 of the 56 parameters should be estimated. The proposed model matches the data well. In the second part of this thesis, techniques are proposed for estimating parameters in Stochastic Differential Equations (SDEs). SDEs are fundamental dynamic models that take into account process disturbances and model mismatch. Three new approximate maximum likelihood methods are developed for estimating parameters in SDE models. First, an Approximate Expectation Maximization (AEM) algorithm is developed for estimating model parameters and process disturbance intensities when the measurement noise variance is known. Then, a Fully-Laplace Approximation Expectation Maximization (FLAEM) algorithm is proposed for simultaneous estimation of model parameters, process disturbance intensities, and measurement noise variances in nonlinear SDEs. Finally, a Laplace Approximation Maximum Likelihood Estimation (LAMLE) algorithm is developed for estimating measurement noise variances along with model parameters and disturbance intensities in nonlinear SDEs. The effectiveness of the proposed algorithms is compared with a maximum-likelihood based method.
For the CSTR examples studied, the proposed algorithms provide more accurate estimates for the parameters. Additionally, it is shown that the performance of LAMLE is superior to the performance of FLAEM. SDE models and associated parameter estimates obtained using the proposed techniques will help engineers who implement on-line state estimation and process monitoring schemes. / Thesis (Ph.D, Chemical Engineering) -- Queen's University, 2013-12-23 15:12:35.738
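A minimal sketch of the kind of problem these algorithms address: simulate an SDE by Euler-Maruyama and recover a drift parameter with the Euler approximate maximum likelihood estimator, which for this model reduces to least squares on the increments. This toy Ornstein-Uhlenbeck example is far simpler than the AEM/FLAEM/LAMLE algorithms of the thesis and is included only to fix ideas.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate dX = -theta*X dt + sigma dW by Euler-Maruyama (assumed toy model).
theta_true, sigma, dt, n = 2.0, 0.5, 0.01, 20_000
x = np.empty(n)
x[0] = 1.0
for k in range(n - 1):
    x[k + 1] = (x[k] - theta_true * x[k] * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())

# Euler approximate MLE of theta: the Gaussian transition likelihood of the
# discretized model is maximized by least squares of dx on -x dt.
dx = np.diff(x)
theta_hat = -np.sum(x[:-1] * dx) / (np.sum(x[:-1] ** 2) * dt)
```

Real process data adds measurement noise on top of the state, which is exactly why the thesis needs EM- and Laplace-approximation machinery rather than this direct estimator.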
|
77 |
Likelihood-Based Tests for Common and Idiosyncratic Unit Roots in the Exact Factor Model / Solberger, Martin, January 2013
Dynamic panel data models are widely used by econometricians to study the economics of, for example, people, firms, regions, or countries over time, by pooling information across the cross-section. Though much of the panel research concerns inference in stationary models, macroeconomic data such as GDP, prices, and interest rates typically trend over time and require, in one way or another, a nonstationary analysis. In time series analysis it is well established how autoregressive unit roots give rise to stochastic trends, implying that random shocks to a dynamic process are persistent rather than transitory. Because the implications of, say, government policy actions are fundamentally different if shocks to the economy are lasting than if they are temporary, there is now a vast number of univariate time series unit root tests available. Similarly, panel unit root tests have been designed to test for the presence of stochastic trends within a panel data set and the degree to which they are shared by the panel individuals. Today, growing data sets certainly offer new possibilities for panel data analysis, but they also pose new problems concerning double-indexed limit theory, unobserved heterogeneity, and cross-sectional dependencies. For example, economic shocks, such as technological innovations, are often global and make national aggregates cross-country dependent and related in international business cycles. To allow for strong cross-sectional dependence, panel unit root tests often assume that the unobserved panel errors follow a dynamic factor model. The errors will then contain one part that is shared by the panel individuals, a common component, and one part that is individual-specific, an idiosyncratic component. This is appealing from the perspective of economic theory, because unobserved heterogeneity may be driven by global common shocks, which are well captured by dynamic factor models.
Yet, only a handful of tests have been derived to test for unit roots in the common and in the idiosyncratic components separately. More importantly, likelihood-based methods, which are commonly used in classical factor analysis, have been ruled out for large dynamic factor models due to the considerable number of parameters. This thesis consists of four papers where we consider the exact factor model, in which the idiosyncratic components are mutually independent, and so any cross-sectional dependence is through the common factors only. Within this framework we derive some likelihood-based tests for common and idiosyncratic unit roots. In doing so we address an important issue for dynamic factor models, because likelihood-based tests, such as the Wald test, the likelihood ratio test, and the Lagrange multiplier test, are well-known to be asymptotically most powerful against local alternatives. Our approach is specific-to-general, meaning that we start with restrictions on the parameter space that allow us to use explicit maximum likelihood estimators. We then proceed with relaxing some of the assumptions, and consider a more general framework requiring numerical maximum likelihood estimation. By simulation we compare size and power of our tests with some established panel unit root tests. The simulations suggest that the likelihood-based tests are locally powerful and in some cases more robust in terms of size. / Solving Macroeconomic Problems Using Non-Stationary Panel Data
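As context for the univariate building block behind such tests, the sketch below computes the Dickey-Fuller t-statistic for the regression Δy_t = ρ·y_{t-1} + ε_t on a simulated random walk and on a stationary series. The thesis' likelihood-based tests operate on the common and idiosyncratic components of a factor model, not on raw series, so this toy example is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def df_tstat(y):
    """t-statistic on rho in the no-constant Dickey-Fuller regression
    dy_t = rho * y_{t-1} + e_t; strongly negative values reject a unit root."""
    dy, ylag = np.diff(y), y[:-1]
    rho = np.sum(ylag * dy) / np.sum(ylag ** 2)
    resid = dy - rho * ylag
    sigma2 = np.sum(resid ** 2) / (len(dy) - 1)
    se = np.sqrt(sigma2 / np.sum(ylag ** 2))
    return rho / se

random_walk = np.cumsum(rng.standard_normal(500))   # has a unit root
stationary = rng.standard_normal(500)               # no unit root
t_rw, t_st = df_tstat(random_walk), df_tstat(stationary)
```

Under the unit-root null the statistic follows the nonstandard Dickey-Fuller distribution rather than a t-distribution, which is why dedicated critical values (and, in panels, dedicated tests) are needed.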
|
78 |
Sensory Integration During Goal Directed Reaches: The Effects of Manipulating Target Availability / Khanafer, Sajida, 19 October 2012
When using visual and proprioceptive information to plan a reach, it has been proposed that the brain combines these cues to estimate the object's and/or limb's location. Specifically, according to the maximum-likelihood estimation (MLE) model, more reliable sensory inputs are assigned a greater weight (Ernst & Banks, 2002). In this research we examined whether the brain is able to adjust which sensory cue it weights the most. Specifically, we asked whether the brain changes how it weights sensory information when the availability of a visual cue is manipulated. Twenty-four healthy subjects reached to visual (V), proprioceptive (P), or visual + proprioceptive (VP) targets under different visual delay conditions (e.g., on V and VP trials the visual target was available for the entire reach, was removed at the go-signal, or was removed 1, 2, or 5 seconds before the go-signal). Subjects completed 5 blocks of trials, with 90 trials per block. For 12 subjects the visual delay was kept consistent within a block of trials, while for the other 12 subjects different visual delays were intermixed within a block. To establish which sensory cue subjects weighted the most, we compared endpoint positions achieved on V and P reaches to those on VP reaches. Results indicated that all subjects weighted sensory cues in accordance with the MLE model across all delay conditions, and that these weights were similar regardless of the visual delay. Moreover, while errors increased with longer visual delays, there was no change in reaching variance. Thus, manipulating the visual environment was not enough to change subjects' weighting strategy.
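The MLE cue-combination rule referenced above (Ernst & Banks, 2002) reduces to inverse-variance weighting and can be sketched directly; the example positions and variances are invented for illustration.

```python
def mle_combine(x_v, var_v, x_p, var_p):
    """Maximum-likelihood (inverse-variance) combination of a visual and a
    proprioceptive position estimate; the more reliable cue gets more weight."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)
    w_p = 1 - w_v
    est = w_v * x_v + w_p * x_p
    var = 1 / (1 / var_v + 1 / var_p)   # combined variance is always smaller
    return est, var, w_v

# Illustrative numbers: vision (variance 1) is more reliable than
# proprioception (variance 4), so it carries 80% of the weight.
est, var, w_v = mle_combine(x_v=10.0, var_v=1.0, x_p=14.0, var_p=4.0)
```

Comparing VP endpoints against the V-only and P-only endpoints, as in the study, amounts to estimating the empirical counterpart of `w_v`.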
|
79 |
SENSITIVITY ANALYSIS IN HANDLING DISCRETE DATA MISSING AT RANDOM IN HIERARCHICAL LINEAR MODELS VIA MULTIVARIATE NORMALITY / Zheng, Xiyu, 01 January 2016
Abstract
In a two-level hierarchical linear model (HLM2), the outcome as well as the covariates may have missing values at any of the levels. One way to analyze all available data is to estimate, by maximum likelihood (ML), a multivariate normal joint distribution of the variables subject to missingness (including the outcome) conditional on the completely observed covariates; draw multiple imputations (MI) of the missing values given the estimated joint model; and analyze the hierarchical model given the MI [1,2]. The assumption is that data are missing at random (MAR). While this method yields efficient estimation of the hierarchical model, it often estimates the model given discrete missing data that are handled under multivariate normality. In this thesis, we evaluate how robust it is to estimate a hierarchical linear model given discrete missing data by this method. We simulate incompletely observed data from a series of hierarchical linear models given discrete covariates MAR, estimate the models by the method, and assess the sensitivity of handling discrete missing data under the multivariate normal joint distribution by computing bias, root mean squared error, standard error, and coverage probability in the estimated hierarchical linear models via a series of simulation studies. Our aim is to evaluate the performance of the method in handling binary covariates MAR: we let the missing patterns of the level-1 and level-2 binary covariates depend on completely observed variables and assess how the method handles binary missing data given different success probabilities and missing rates.
Based on the simulation results, the missing data analysis is robust under certain parameter settings. The efficient analysis performs very well for estimation of level-1 fixed and random effects across varying success probabilities and missing rates. The level-2 binary covariate, however, is not well estimated when its missing rate is greater than 10%.
The rest of the thesis is organized as follows: Section 1 introduces the background, including conventional methods for hierarchical missing data analysis, different missing data mechanisms, and the innovation and significance of this study. Section 2 explains the efficient missing data method. Section 3 presents the sensitivity analysis of the missing data method and explains how we carry out the simulation study using SAS, the software package HLM7, and R. Section 4 illustrates the results and gives useful recommendations for researchers who want to use the missing data method for binary covariates MAR in HLM2. Section 5 presents an illustrative analysis of the National Growth and Health Study (NGHS) by the missing data method. The thesis ends with a list of useful references to guide future study and the simulation code we used.
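The core issue the thesis studies — imputing a binary covariate under a normal model and mapping the continuous draws back to {0, 1} — can be caricatured in a few lines. This toy sketch ignores the hierarchical structure and the conditional joint model; the marginal normal model, the 0.5 rounding threshold, and all settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Binary covariate with success probability p; 20% of values go missing
# completely at random for this toy example.
n, p = 500, 0.3
z = rng.binomial(1, p, size=n).astype(float)
miss = rng.random(n) < 0.2
z_obs = z.copy()
z_obs[miss] = np.nan

# Fit a (misspecified) normal model to the observed values, then create
# 5 imputed data sets by drawing from it and rounding back to {0, 1}.
mu, sd = np.nanmean(z_obs), np.nanstd(z_obs)
m_imputations = []
for _ in range(5):
    draw = rng.normal(mu, sd, size=int(miss.sum()))
    m_imputations.append((draw >= 0.5).astype(int))
```

The sensitivity analysis in the thesis asks, in effect, how much this normality-plus-rounding approximation distorts the downstream HLM2 estimates as `p` and the missing rate vary.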
|
80 |
Gaining Insight With Recursive Partitioning Of Generalized Linear Models / Rusch, Thomas; Zeileis, Achim (PDF)
Recursive partitioning algorithms separate a feature space into a set of disjoint rectangles. Then, usually, a constant is fitted in every partition. While this is a simple and intuitive approach, it may still lack interpretability as to how a specific relationship between dependent and independent variables may look. Or a certain model may be assumed or of interest, and there may be a number of candidate variables that non-linearly give rise to different model parameter values. We present an approach that combines generalized linear models with recursive partitioning, offering enhanced interpretability over classical trees as well as an explorative way to assess a candidate variable's influence on a parametric model. This method conducts recursive partitioning of the generalized linear model by (1) fitting the model to the data set, (2) testing for parameter instability over a set of partitioning variables, and (3) splitting the data set with respect to the variable associated with the highest instability. The outcome is a tree in which each terminal node is associated with a generalized linear model. We show the method's versatility and suitability for gaining additional insight into the relationship between dependent and independent variables with two examples, modelling voting behaviour and a failure model for debt amortization. / Series: Research Report Series / Department of Statistics and Mathematics
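A minimal caricature of the idea, assuming a Gaussian GLM (ordinary least squares) and a single split: fit the model in each candidate partition and pick the split point that most reduces the combined residual sum of squares. The real method replaces this exhaustive search with parameter-instability tests and recurses, so the sketch below is illustrative only; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data where the slope of y on x flips at z = 0.5, so a single
# global linear model fits poorly but two per-partition models fit well.
n = 400
z = rng.uniform(0, 1, n)                 # candidate partitioning variable
x = rng.normal(size=n)
slope = np.where(z < 0.5, 1.0, -1.0)
y = slope * x + 0.1 * rng.normal(size=n)

def rss(xs, ys):
    """Residual sum of squares of an intercept+slope least-squares fit."""
    X = np.column_stack([np.ones_like(xs), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return np.sum((ys - X @ beta) ** 2)

# Search candidate split points (quantiles of z) for the best partition.
candidates = np.quantile(z, np.linspace(0.1, 0.9, 17))
best_split = min(
    candidates,
    key=lambda c: rss(x[z < c], y[z < c]) + rss(x[z >= c], y[z >= c]),
)
```

Each terminal node then carries its own fitted model, exactly the kind of tree-of-GLMs output the report describes.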
|