461 |
Sufficient Dimension Reduction with Missing Data. XIA, QI. January 2017 (has links)
Existing sufficient dimension reduction (SDR) methods typically consider cases with no missing data. This dissertation proposes methods that extend SDR to settings where the response can be missing. The first part of the dissertation focuses on the seminal sliced inverse regression (SIR) approach proposed by Li (1991). We show that missing responses generally invalidate the inverse regressions under the missing-at-random mechanism. We then propose a simple and effective inverse probability weighting adjustment that restores the validity of SIR, and we introduce a marginal coordinate test for the adjusted estimator. The proposed method shares the simplicity of SIR and requires the linear conditional mean assumption. The second part of the dissertation proposes two new estimating equation procedures: a complete-case estimating equation approach and an inverse probability weighted estimating equation approach. The two approaches are applied to a family of dimension reduction methods that includes ordinary least squares, principal Hessian directions, and SIR. By solving the estimating equations, the two approaches avoid two assumptions common in the SDR literature: the linear conditional mean assumption and the constant conditional variance assumption. For all of the above methods, asymptotic properties are established, and strong finite-sample performance is demonstrated through extensive numerical studies as well as a real data analysis. In addition, existing estimators of the central mean space perform unevenly across different types of link functions. To address this limitation, a new hybrid SDR estimator is proposed that recovers the central mean space for a wide range of link functions. Based on the hybrid estimator, we further study the order determination procedure and the marginal coordinate test. The superior performance of the hybrid estimator over existing methods is demonstrated in simulation studies. The proposed procedures for responses missing at random can be readily adapted to the hybrid method. / Statistics
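As a rough illustration of the first part's idea, here is a numpy sketch of SIR with an inverse probability weighting adjustment. The propensity model (a logistic regression of the observation indicator on the covariates) and all function names are our own assumptions for the sketch, not the dissertation's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_sir(X, y, observed, n_slices=5, n_dir=1):
    """IPW-adjusted sliced inverse regression (sketch).

    X: (n, p) covariates; y: (n,) response, arbitrary where missing;
    observed: boolean mask of non-missing responses (MAR assumed).
    """
    n, p = X.shape
    # Estimate observation probabilities P(observed | X) for the IPW weights.
    prop = LogisticRegression().fit(X, observed).predict_proba(X)[:, 1]
    w = observed / prop          # inverse probability weights; zero if missing
    w = w / w.sum()

    # Standardize X using weighted moments.
    mu = w @ X
    Xc = X - mu
    Sigma = Xc.T @ (Xc * w[:, None])
    root_inv = np.linalg.inv(np.linalg.cholesky(Sigma)).T
    Z = Xc @ root_inv            # weighted-whitened covariates

    # Slice the observed responses and form weighted slice means of Z.
    idx = np.where(observed)[0]
    order = idx[np.argsort(y[idx])]
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        ws = w[chunk].sum()
        m = (w[chunk] @ Z[chunk]) / ws
        M += ws * np.outer(m, m)

    # Leading eigenvectors of M, mapped back to the original X scale.
    vals, vecs = np.linalg.eigh(M)
    return root_inv @ vecs[:, ::-1][:, :n_dir]
```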
|
462 |
Integer Least Squares Problem Application in MIMO systems: An LLL Reduction Aided Sphere Decoding Algorithm. Guo, Jin. 04 1900 (has links)
<p> Solving the integer least squares problem min_s ||Hs - x||^2, where the unknown vector s has integer entries and the coefficient matrix H and given vector x are real-valued, arises in many applications; it is equivalent to finding the lattice point closest to a given point, a problem known to be NP-hard. In multiple antenna systems, the received signal x is not arbitrary but a lattice point perturbed by an additive noise vector whose statistical properties are known. It has been shown that sphere decoding, in which the lattice points inside a hypersphere are enumerated and the one closest to the received signal is selected, used together with the maximum likelihood (ML) criterion, often yields near-optimal performance at cubic average complexity, although the worst-case complexity remains exponential. By using lattice basis reduction as a pre-processing step in sub-optimal decoding algorithms, we show that lattice reduction aided sphere decoding (LRSD) outperforms maximum likelihood sphere decoding (MLSD) in terms of symbol error rate (SER) and average running time. In the FIR (finite impulse response) MIMO channel, the channel matrix is Toeplitz, which lets us exploit the facts that all of its column vectors are linearly independent and that the matrix itself is often well-conditioned. </p> <p> In this thesis, we develop a lattice reduction aided sphere decoding algorithm along with an improved LLL algorithm, and we provide simulations showing that the new algorithm outperforms maximum likelihood sphere decoding. </p> <p> In chapter 1, we define our system model and establish the foundations for understanding the mathematical model, namely the integer least squares problem, and hence the choice of simulation data. In chapter 2, we describe the integer least squares problem and explore several ways of solving it, then introduce sphere decoding and maximum likelihood decoding. In chapter 3, we study the famous LLL reduction algorithm, named after Lenstra, Lenstra and Lovász, in detail and show an example of breaking the Merkle-Hellman knapsack code using LLL reduction. Finally, in chapter 4 we present the LLL reduction aided sphere decoding algorithm, the experimental setup, and simulation results against MLSD, followed by conclusions and directions for further research. </p> / Thesis / Master of Science (MSc)
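For reference, the reduction step at the heart of chapter 3 can be sketched compactly. Below is a textbook LLL reduction in Python (columns of B are the basis vectors, delta = 3/4); it recomputes the Gram-Schmidt data after every update for readability, whereas practical decoders update the coefficients incrementally, and it is not the thesis's improved variant.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL lattice basis reduction; columns of B are basis vectors."""
    B = B.astype(float).copy()
    n = B.shape[1]

    def gram_schmidt(B):
        Q = np.zeros_like(B)          # orthogonalized vectors b*_i
        mu = np.eye(n)                # Gram-Schmidt coefficients
        for i in range(n):
            Q[:, i] = B[:, i]
            for j in range(i):
                mu[i, j] = B[:, i] @ Q[:, j] / (Q[:, j] @ Q[:, j])
                Q[:, i] -= mu[i, j] * Q[:, j]
        return Q, mu

    Q, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size reduction: force |mu[k, j]| <= 1/2.
        for j in range(k - 1, -1, -1):
            q = round(mu[k, j])
            if q != 0:
                B[:, k] -= q * B[:, j]
                Q, mu = gram_schmidt(B)
        # Lovász condition: advance if satisfied, otherwise swap and back up.
        if Q[:, k] @ Q[:, k] >= (delta - mu[k, k - 1] ** 2) * (Q[:, k - 1] @ Q[:, k - 1]):
            k += 1
        else:
            B[:, [k - 1, k]] = B[:, [k, k - 1]]
            Q, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B
```

The reduced basis spans the same lattice, so a sphere decoder can search in the reduced coordinates and map the solution back through the accumulated unimodular transformation.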
|
463 |
Multivariate Analysis Applied to Discrete Part Manufacturing. Wallace, Darryl. 09 1900 (has links)
<p>The overall focus of this thesis is the implementation of a process monitoring system in a real manufacturing environment that uses multivariate analysis techniques to assess the state of the process. The process in question was the medium-to-high volume manufacturing of discrete aluminum parts using relatively simple machining processes involving two tools. This work can be broken down into three main sections.</p><p>The first section involved the modeling of temperature and thermal expansion measurements for real-time thermal error compensation. Thermal expansion of the Z-axis was measured indirectly through measurement of the two quality parameters related to this axis, using a custom gage designed for this part. A compensation strategy is proposed that holds the variation of the parts to ±0.02 mm, where the tolerance is ±0.05 mm.</p><p>The second section involved the modeling of process data from the parts, including vibration, current, and temperature signals from the machine. Modeling the process data with Principal Component Analysis (PCA), while unsuccessful in detecting minor simulated process faults, succeeded in detecting a mis-loaded part during regular production. Simple control charts using Hotelling's T^2 statistic and the Squared Prediction Error are illustrated. Modeling the quality data of good parts from the process data using Projection to Latent Structures by Partial Least Squares (PLS) did not provide very accurate fits; however, all of the predictions fell within the tolerance specifications.</p><p>The final section discusses the implementation of a process monitoring system in both manual and automatic production environments. A method for the integration and storage of process data with the Mitutoyo software MCOSMOS and MeasurLink® is described. All of the code for multivariate analysis and process monitoring was written in Matlab.</p> / Thesis / Master of Applied Science (MASc)
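To make the two control-chart statistics concrete, here is a small numpy sketch of Hotelling's T^2 and the Squared Prediction Error computed from a PCA model of in-control data. It is a generic illustration, not the thesis's Matlab implementation, and the control limits (typically F and chi-square approximations) are omitted.

```python
import numpy as np

def pca_monitor(X_train, X_new, n_comp=3):
    """Hotelling T^2 and SPE (Q) statistics for new observations,
    from a PCA model fit on in-control training data."""
    mu, sd = X_train.mean(0), X_train.std(0)
    Z = (X_train - mu) / sd
    # PCA via SVD of the standardized training data.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_comp].T                        # loadings (p x k)
    lam = (s[:n_comp] ** 2) / (len(Z) - 1)   # retained component variances

    Zn = (X_new - mu) / sd
    T = Zn @ P                               # scores of the new observations
    t2 = np.sum(T ** 2 / lam, axis=1)        # Hotelling's T^2
    resid = Zn - T @ P.T                     # part not explained by the model
    spe = np.sum(resid ** 2, axis=1)         # Squared Prediction Error (Q)
    return t2, spe
```

A point with large T^2 is unusual within the model plane; a point with large SPE breaks the correlation structure the model learned, which is how a mis-loaded part typically shows up.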
|
464 |
Automatic age and gender classification using supervised appearance model. Bukar, Ali M., Ugail, Hassan, Connah, David. 01 August 2016 (has links)
Yes / Age and gender classification are two important problems that have recently gained popularity in the research community due to their wide range of applications. Research has shown that both age and gender information are encoded in face shape and texture; hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for these problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner: PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, so it is no surprise if PCA discards valuable parts of the data that carry discriminatory features. To this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then applied to the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
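A minimal sketch of the core substitution, using scikit-learn's PLSRegression in place of PCA for supervised feature extraction. The data below are random placeholders, not the face shape and texture features used in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression

# X: (n, d) stacked appearance features; y: (n,) class labels
# (e.g., 0/1 gender codes). Placeholder synthetic data here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

# Supervised dimensionality reduction: PLS components are chosen to
# covary with the labels, unlike PCA's unsupervised components.
pls = PLSRegression(n_components=5).fit(X, y)
scores = pls.transform(X)                  # (n, 5) latent features

# Any downstream classifier can consume the PLS scores.
clf = LogisticRegression().fit(scores, y)
print("train accuracy:", clf.score(scores, y))
```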
|
465 |
Reweighted Discriminative Optimization for least-squares problems with point cloud registration. Zhao, Y., Tang, W., Feng, J., Wan, Tao Ruan, Xi, L. 26 March 2022 (has links)
Yes / Optimization plays a pivotal role in computer graphics and vision. Learning-based optimization algorithms have emerged as a powerful technique for solving problems robustly and accurately because they learn gradients from data without calculating the Jacobian and Hessian matrices. The key ingredient of these algorithms is the least-squares method, which formulates a general parametrized model of unconstrained optimization and drives a residual vector toward zero to approximate a solution. The method may suffer from undesirable local optima in many applications, especially in point cloud registration, where each element of the transformation vector has a different impact on the registration. In this paper, a Reweighted Discriminative Optimization (RDO) method is proposed. By assigning different weights to the components of the parameter vector, RDO accounts for the impact of each component and the asymmetric contributions of the components to the fitting results. The weights of the parameter vector are adjusted at each iteration according to the mean squared error of the fitting results over the parameter vector space. A theoretical analysis of the convergence of RDO is provided, and the benefits of RDO are demonstrated on 3D point cloud registration and multi-view stitching tasks. The experimental results show that RDO outperforms state-of-the-art registration methods in accuracy and robustness to perturbations, and improves further on non-weighted learning-based optimization.
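RDO itself learns sequences of update maps from training data; as a loose illustration of the component-wise weighting idea only, the sketch below scales each parameter's Gauss-Newton update by its own weight in a 2D rigid alignment with known correspondences. The weights w are fixed here, whereas RDO adapts them per iteration from the error behavior over the parameter space.

```python
import numpy as np

def weighted_rigid_align(P, Q, w=(1.0, 1.0, 1.0), iters=20):
    """Gauss-Newton 2D rigid alignment of P onto Q (known correspondences),
    with a per-component weight on the (theta, tx, ty) update step."""
    theta, t = 0.0, np.zeros(2)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])     # dR/dtheta
        r = (P @ R.T + t) - Q                  # residuals, (n, 2)
        # Jacobian rows: d r_i / d (theta, tx, ty), x/y components interleaved.
        J = np.zeros((2 * len(P), 3))
        J[:, 0] = (P @ dR.T).ravel()
        J[0::2, 1] = 1.0
        J[1::2, 2] = 1.0
        step = np.linalg.lstsq(J, r.ravel(), rcond=None)[0]
        # Component-wise reweighting of the update: rotation and translation
        # components need not move at the same rate.
        theta = theta - w[0] * step[0]
        t = t - np.array(w[1:]) * step[1:]
    return theta, t
```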
|
466 |
Feasible Generalized Least Squares: theory and applications. González Coya Sandoval, Emilio. 04 June 2024 (has links)
We study the Feasible Generalized Least-Squares (FGLS) estimation of the parameters of a linear regression model in which the errors are allowed to exhibit heteroskedasticity of unknown form and to be serially correlated. The main contribution is twofold: first, we aim to demystify the reasons often advanced for using OLS instead of FGLS by showing that the latter estimator is robust, and more efficient and precise. Second, we devise consistent FGLS procedures, robust to misspecification, which achieve a lower mean squared error (MSE), often close to that of the correctly specified infeasible GLS.
In the first chapter we restrict our attention to the case of independent heteroskedastic errors. We suggest a Lasso-based procedure to estimate the skedastic function of the residuals. This estimate is then used to construct a FGLS estimator. Using extensive Monte Carlo simulations, we show that this Lasso-based FGLS procedure has better finite-sample properties than OLS and other linear regression-based FGLS estimates. Moreover, the FGLS-Lasso estimate is robust to misspecification of both the functional form and the variables characterizing the skedastic function.
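A minimal sketch of such a two-step procedure, assuming the skedastic function is modeled on the log of squared OLS residuals to keep fitted variances positive; the chapter's actual choice of skedastic regressors and tuning may differ.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_fgls(X, y, alpha=0.1):
    """Two-step FGLS with a Lasso estimate of the skedastic function (sketch)."""
    # Step 1: OLS fit and residuals.
    Xc = np.column_stack([np.ones(len(y)), X])
    beta_ols, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    resid = y - Xc @ beta_ols

    # Step 2: model log squared residuals so fitted variances stay positive.
    skedastic = Lasso(alpha=alpha).fit(X, np.log(resid ** 2 + 1e-12))
    sigma2 = np.exp(skedastic.predict(X))

    # Step 3: weighted least squares with estimated inverse-variance weights.
    w = 1.0 / np.sqrt(sigma2)
    beta_fgls, *_ = np.linalg.lstsq(Xc * w[:, None], y * w, rcond=None)
    return beta_fgls
```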
The second chapter generalizes our investigation to the case of serially correlated errors. There are three main contributions: first, we show that GLS is consistent requiring only pre-determined regressors, whereas OLS requires exogenous regressors to be consistent. The second contribution is to show that GLS is much more robust than OLS; even a misspecified GLS correction can achieve a lower MSE than OLS. The third contribution is to devise a FGLS procedure that is valid whether or not the regressors are exogenous and achieves a MSE close to that of the correctly specified infeasible GLS. Extensive Monte Carlo experiments are conducted to assess the performance of our FGLS procedure against OLS in finite samples. FGLS achieves important reductions in MSE and variance relative to OLS.
In the third chapter we consider an empirical application: we re-examine the Uncovered Interest Parity (UIP) hypothesis, which states that the expected rate of return to speculation in the forward foreign exchange market is zero. We extend the FGLS procedure to a setting in which lagged dependent variables are included as regressors, thus providing a consistent and efficient framework for estimating the parameters of a general k-step-ahead linear forecasting equation. Finally, we apply our FGLS procedures to the analysis of the two main specifications used to test the UIP.
|
467 |
An Iterative Confidence Passing Approach for Parameter Estimation and Its Applications to MIMO Systems. Vasavada, Yash M. 17 July 2012 (has links)
This dissertation proposes an iterative confidence passing (ICP) approach for parameter estimation and describes three algorithms that follow from it. The three variations of the ICP approach are applied to (a) macrodiversity and user cooperation diversity reception problems, (b) the cooperative multipoint MIMO reception problem (pertinent to LTE Advanced system scenarios), and (c) the satellite beamforming problem. The first two of these applications are significant open DSP research problems currently being actively pursued in academia and industry. This dissertation demonstrates a significant performance improvement that the proposed ICP approach delivers over existing techniques.
The proposed ICP approach jointly estimates (and thereby separates) two sets of unknown parameters from the receiver measurements. For applications (a) and (b), one set of unknowns comprises the discrete-valued information-bearing transmitted symbols in a multi-channel communication system, and the other set is formed by the coefficients of a Rayleigh or Rician fading channel. Application (a) covers interference-free cooperative or macro diversity scenarios, for transmit or receive diversity. Application (b) covers MIMO systems with interference-rich reception. Finally, application (c) concerns an interference-free spacecraft array calibration system model in which both sets of unknowns are complex continuous-valued variables whose magnitudes follow the Rician distribution.
The algorithm described here is the outcome of an investigation into a difficult channel estimation problem. The difficulty arises because (i) the channel of interest is observed only intermittently, and (ii) the partial observations do not measure the channel of interest directly; they also depend on another unknown and uncorrelated set of complex-valued random variables.
The proposed ICP approach to these estimation problems is based on an iterative application of the Weighted Least Squares (WLS) method. The main novelty of the algorithm is a back-and-forth exchange of confidence (belief) values for the WLS estimates of the unknown parameters across iterations. The confidence values of previously obtained estimates are used to derive the estimation weights at the next iteration, which produces improved estimates with greater confidence. This iterative confidence (belief) passing yields a bootstrapping convergence to the parameter estimates.
Besides the ICP approach, several alternatives are considered for solving the above problems (a, b, and c). Performance simulations show that the ICP algorithm outperforms all of the other candidate approaches. The benefit is significant when the measurements (and the initial seed estimates) have non-uniform quality, e.g., when many of the measurements are unusable (due to shadowing or blockage) or missing (due to instrument failures). / Ph. D.
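A schematic sketch of the confidence-passing idea for a toy bilinear model Y ≈ h sᵀ + noise, alternating WLS between the two unknown sets and carrying an inverse-variance confidence from one step to the next. This is our own simplification for illustration, not the dissertation's algorithm; it ignores the discreteness of the symbols and the usual bilinear scale ambiguity.

```python
import numpy as np

def icp_bilinear(Y, noise_var, iters=10):
    """Alternating WLS for Y ~ h s^T + noise: estimate channel h and symbols s,
    passing a confidence (inverse variance) for each estimate between steps."""
    m, n = Y.shape
    h = Y[:, 0].copy()                        # crude seed from the first column
    conf_h = np.ones(m)                       # confidence in each entry of h
    for _ in range(iters):
        # Symbols given channel: per-row weights from the channel confidence.
        wh = conf_h * np.abs(h) ** 2
        s = (conf_h * np.conj(h)) @ Y / wh.sum()
        conf_s = np.full(n, wh.sum() / noise_var)
        # Channel given symbols: per-column weights from the symbol confidence.
        ws = conf_s * np.abs(s) ** 2
        h = Y @ (conf_s * np.conj(s)) / ws.sum()
        conf_h = np.full(m, ws.sum() / noise_var)
    return h, s   # defined up to the inherent scale ambiguity of the model
```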
|
468 |
A Class of Immersed Finite Element Spaces and Their Application to Forward and Inverse Interface Problems. Camp, Brian David. 08 December 2003 (has links)
A class of immersed finite element (IFE) spaces is developed for solving elliptic boundary value problems that have interfaces. IFE spaces are finite element approximation spaces which are based upon meshes that can be independent of interfaces in the domain. Three different quadratic IFE spaces and their related biquadratic IFE spaces are introduced here for the purposes of solving both forward and inverse elliptic interface problems in 1D and 2D. These different spaces are constructed by (i) using a hierarchical approach, (ii) imposing extra continuity requirements or (iii) using a local refinement technique. The interpolation properties of each space are tested against appropriate testing functions in 1D and 2D. The IFE spaces are also used to approximate the solution of a forward elliptic interface problem using the Galerkin finite element method and the mixed least squares finite element method. Finally, one appropriate space is selected to solve an inverse interface problem using either an output least squares approach or the least squares with mixed equation error method. / Ph. D.
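For contrast with the IFE idea, here is a minimal 1D fitted-mesh finite element solver for -(βu')' = f with a coefficient jump at x = α. This is the standard baseline that places a mesh node at the interface, not the IFE construction, which instead modifies the basis functions on interface elements so the mesh need not fit the interface.

```python
import numpy as np

def fitted_fem_1d(beta_minus, beta_plus, alpha, f, n=40):
    """Linear FEM for -(beta u')' = f on (0,1), u(0)=u(1)=0, with a
    coefficient jump at x=alpha, on an interface-fitted mesh."""
    # Insert a node at the interface; np.unique sorts and removes duplicates.
    nodes = np.unique(np.append(np.linspace(0, 1, n + 1), alpha))
    N = len(nodes)
    A = np.zeros((N, N))
    b = np.zeros(N)
    for e in range(N - 1):
        xl, xr = nodes[e], nodes[e + 1]
        mid, h = 0.5 * (xl + xr), xr - xl
        beta = beta_minus if mid < alpha else beta_plus
        A[e:e + 2, e:e + 2] += beta / h * np.array([[1, -1], [-1, 1]])
        b[e:e + 2] += 0.5 * h * f(mid)        # midpoint-rule load vector
    # Homogeneous Dirichlet boundary conditions.
    A[0, :], A[-1, :] = 0, 0
    A[0, 0] = A[-1, -1] = 1
    b[0] = b[-1] = 0
    return nodes, np.linalg.solve(A, b)

# Example: beta jumps from 1 to 10 at alpha = 1/3, constant load.
x, u = fitted_fem_1d(1.0, 10.0, 1/3, lambda t: 1.0)
```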
|
469 |
Protection Motivation Theory: Understanding the Determinants of Individual Security Behavior. Crossler, Robert E. 20 April 2009 (has links)
Individuals are considered the weakest link when it comes to securing a personal computer system. All of the technological solutions can be in place, but if individuals do not make appropriate security protection decisions, they introduce holes that technological solutions cannot protect against. This study investigates which personal characteristics influence differences in individual security behaviors, defined as behaviors to protect against security threats, by adapting Protection Motivation Theory to an information security context.
This study developed and validated an instrument to measure individual security behaviors. It then tested differences in these behaviors using the security research model, which was built from Protection Motivation Theory and consisted of perceived security vulnerability, perceived security threat, security self-efficacy, response efficacy, and protection cost. Participants, a sample of home computer users ranging in age from 20 to 83, provided 279 valid survey responses. The behaviors studied include using anti-virus software, utilizing access controls, backing up data, changing passwords frequently, securing access to personal computers, running software updates, securing wireless networks, using care when storing credit card information, educating others in one's house about security behaviors, using caution when following links in emails, running anti-spyware software, updating a computer's operating system, using firewalls, and using pop-up blocking software. Testing the security research model showed that different characteristics had different impacts depending on the behavior studied. Implications for information security researchers and practitioners are provided, along with ideas for future research. / Ph. D.
|
470 |
Statistical Methods for Reliability Data from Designed Experiments. Freeman, Laura J. 07 May 2010 (has links)
Product reliability is an important characteristic for manufacturers, engineers, and consumers alike. Industrial statisticians have been planning experiments for years to improve product quality and reliability. However, experts in reliability rarely have expertise in design of experiments (DOE) and the implications that experimental protocol has for data analysis, and statisticians who focus on DOE rarely work with reliability data. As a result, analysis methods for lifetime data from experimental designs more complex than a completely randomized design are extremely limited. This dissertation provides two new analysis methods for reliability data from life tests, focusing on data from a sub-sampling experimental design. The new analysis methods are illustrated on a popular reliability data set that contains sub-sampling. Monte Carlo simulation studies evaluate the capabilities of the new modeling methods and highlight the principles of experimental design in a reliability context. The dissertation provides multiple methods of statistical inference for the new analysis methods. Finally, implications for the reliability field are discussed, especially future applications of the new analysis methods. / Ph. D.
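As a baseline for the kind of data involved, here is a minimal Weibull life-data fit with scipy. It treats the lifetimes as i.i.d., which ignores exactly the sub-sampling correlation (multiple specimens per experimental unit) that the dissertation's methods are designed to handle; all numbers are simulated placeholders.

```python
import numpy as np
from scipy.stats import weibull_min

# Simulated lifetimes from a Weibull(shape=2, scale=100) population.
rng = np.random.default_rng(1)
lifetimes = weibull_min.rvs(2.0, scale=100.0, size=50, random_state=rng)

# Naive i.i.d. maximum likelihood fit, location fixed at zero.
shape, loc, scale = weibull_min.fit(lifetimes, floc=0)
print(f"shape={shape:.2f}, scale={scale:.1f}")
```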
|