71.
Parameter Identification for the Preisach Model of Hysteresis. Joseph, Daniel Scott (27 April 2001)
Hysteresis, defined as a rate-independent memory effect, is a phenomenon that occurs in many physical systems. The effect is sometimes desired, sometimes a nuisance, sometimes catastrophic, but in every case we must understand hysteresis if we are to better understand the system itself. While the study of hysteresis has been conducted by engineers, scientists and mathematicians, the contribution of mathematicians has at times been theoretically sound but impractical to implement. The goal of this work is to use sound mathematical theory to provide practical information on the subject.
The Preisach operator was developed to model hysteresis in magnetism. It is based on a continuous linear combination of relay operators weighted by a distribution function μ. A new method for approximating μ in a finite dimensional space is described. Guidelines are given for choosing the “best” finite dimensional space and a “most efficient” training set. Simulated and experimental data are also introduced to demonstrate the utility of this method.
In addition, the approximation of singular Preisach measures is explored. The types of singularities investigated are characterized by non-zero initial slopes of reversal curves. The difficulties of finding the "optimal" approximation in this case are detailed, along with a method for determining an approximation "close" to the optimal one. (Ph.D.)
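The continuous Preisach construction described above can be sketched as a finite sum of relay operators on a discretized half-plane. Everything below (the grid size, the uniform weights standing in for the distribution μ, and the triangular input sweep) is invented for illustration; the thesis itself concerns estimating those weights from data.

```python
# Minimal discrete Preisach sketch: a grid of relays with equal
# (hypothetical) weights replacing the distribution mu.

class Relay:
    """Elementary hysteresis relay with thresholds beta <= alpha."""
    def __init__(self, alpha, beta, state=-1):
        self.alpha, self.beta, self.state = alpha, beta, state

    def update(self, u):
        if u >= self.alpha:
            self.state = 1
        elif u <= self.beta:
            self.state = -1
        return self.state  # unchanged in between: this is the memory

def preisach_output(relays, weights, u):
    """Weighted sum of relay outputs: a finite approximation of the operator."""
    return sum(w * r.update(u) for r, w in zip(relays, weights))

# Relays on a grid over the half-plane beta < alpha, equal weights.
n = 10
pairs = [(a / n, b / n) for a in range(1, n + 1) for b in range(0, a)]
relays = [Relay(a, b) for a, b in pairs]
weights = [1.0 / len(pairs)] * len(pairs)

# Sweep the input up and then down: at the same input level the output
# differs between the two branches, i.e. the model exhibits hysteresis.
up = [preisach_output(relays, weights, u / 10) for u in range(11)]
down = [preisach_output(relays, weights, u / 10) for u in range(10, -1, -1)]
```

With uniform weights the two branches are antisymmetric; a non-uniform μ would shape the loop, which is exactly what the identification procedure recovers.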
72.
The Effect of Psychometric Parallelism among Predictors on the Efficiency of Equal Weights and Least Squares Weights in Multiple Regression. Zhang, Desheng (05 1900)
There are several conditions for applying equal weights as an alternative to least squares weights. Psychometric parallelism, one of these conditions, has been suggested as a necessary and sufficient condition for equal-weights aggregation. The purpose of this study is to investigate the effect of psychometric parallelism among predictors on the efficiency of equal weights and least squares weights. Target correlation matrices with 10,000 cases were simulated so that the matrices had varying degrees of psychometric parallelism. Five hundred samples were drawn from each population at each of six observation-to-predictor ratios: 5/1, 10/1, 20/1, 30/1, 40/1, and 50/1. Efficiency is interpreted as the accuracy and the predictive power of each weighting method. Accuracy is defined by the deviation between the population R² and the sample R²; predictive power refers to the population cross-validated R² and the population mean square error of prediction. The findings indicate no statistically significant relationship between the level of psychometric parallelism and the accuracy of least squares weights. In contrast, the correlation between the level of psychometric parallelism and the accuracy of equal weights is significantly negative. The minimum p value of the χ² test for psychometric parallelism at which equal weights outperform least squares weights varies with the conditions: it rises with the number of predictors and with the observation-to-predictor ratio, and falls with the magnitude of the intercorrelations among predictors. This study demonstrates that the most frequently used levels of significance, 0.05 and 0.01, are not the only p values suitable for testing the null hypothesis of psychometric parallelism among predictors when replacing least squares weights with equal weights.
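The contrast between the two weighting schemes can be illustrated on simulated data. The sketch below is hypothetical and far smaller than the study's design (two predictors, a single sample, no parallelism test): it only shows how the equal-weights and least-squares-weights composites are formed and compared by R².

```python
# Toy comparison of equal weights vs. least squares weights. The data
# are simulated so that the true weights actually are equal; all sizes
# and seeds are invented for this sketch.
import random

def r_squared(y, yhat):
    my = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - my) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

rng = random.Random(7)
n = 500
x1 = [rng.gauss(0.0, 1.0) for _ in range(n)]
x2 = [rng.gauss(0.0, 1.0) for _ in range(n)]
y = [a + b + rng.gauss(0.0, 1.0) for a, b in zip(x1, x2)]

# Equal-weights composite: simply sum the (already standardized) predictors.
yhat_equal = [a + b for a, b in zip(x1, x2)]

# Least squares weights from the 2x2 normal equations (no intercept,
# since all variables are centred at zero by construction).
s11 = sum(a * a for a in x1)
s12 = sum(a * b for a, b in zip(x1, x2))
s22 = sum(b * b for b in x2)
t1 = sum(a * c for a, c in zip(x1, y))
t2 = sum(b * c for b, c in zip(x2, y))
det = s11 * s22 - s12 * s12
w1 = (s22 * t1 - s12 * t2) / det
w2 = (s11 * t2 - s12 * t1) / det
yhat_ls = [w1 * a + w2 * b for a, b in zip(x1, x2)]
```

In-sample, the least squares composite can never have a lower R² than the equal-weights composite; the study's question is how the two compare out of sample under varying degrees of psychometric parallelism.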
73.
Ordinary least squares regression of ordered categorical data: inferential implications for practice. Larrabee, Beth R. (January 1900)
Master of Science / Department of Statistics / Nora Bello
Ordered categorical responses are frequently encountered in many disciplines. Examples of interest in agriculture include quality assessments, such as for soil or food products, and evaluation of lesion severity, such as teat-end status in dairy cattle. Ordered categorical responses are characterized by multiple categories or levels recorded on a ranked scale that convey relative order but are not informative of the magnitude of, or the proportionality between, levels. A number of statistically sound models for ordered categorical responses have been proposed, such as logistic regression and probit models, but these are commonly underutilized in practice. Instead, the ordinary least squares linear regression model is often employed with ordered categorical responses despite violation of basic model assumptions. In this study, the inferential implications of this approach are investigated using a simulation study that evaluates robustness based on realized Type I error rate and statistical power. The design of the simulation study is motivated by applied research cases reported in the literature. A variety of plausible scenarios were considered for simulation, including various shapes of the frequency distribution and different numbers of categories of the ordered categorical response. Using a real dataset on frequency of antimicrobial use in feedlots, I demonstrate the inferential performance of ordinary least squares linear regression on ordered categorical responses relative to a probit model.
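The practice under study, fitting ordinary least squares to an ordinal response treated as numeric, can be sketched in a few lines. The 5-point scores and the predictor values below are invented.

```python
# OLS fit of an ordered categorical response scored 1..5, treated as a
# numeric variable. Data are invented for this sketch.

def ols_fit(x, y):
    """Simple linear regression by least squares; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
scores = [1, 1, 2, 3, 3, 5]            # ordinal categories scored 1..5
slope, intercept = ols_fit(x, [float(s) for s in scores])
```

The fit produces a slope, but nothing in it respects the fact that the distance between categories 1 and 2 need not equal the distance between 4 and 5, which is the assumption violation the simulation study quantifies.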
74.
Real-Time Estimation of Aerodynamic Parameters. Larsson Cahlin, Sofia (January 2016)
Extensive testing is performed when a new aircraft is developed. Flight testing is costly and time consuming, but there are aspects of the process that can be made more efficient. A program that estimates aerodynamic parameters during flight could be used as a tool when deciding whether to continue or abort a flight, from either a safety or a data-collection perspective. The algorithm of such a program must function in real time, which for this application means a maximum delay of a couple of seconds, and it must handle telemetric data, which may have missing samples in the data stream. Here, a conceptual program for real-time estimation of aerodynamic parameters is developed. Two estimation methods and four methods for handling missing data are compared. The comparisons are performed using both simulated data and real flight test data. The first estimation method uses the least squares algorithm in the frequency domain and is based on the chirp z-transform. The second estimation method adds boundary terms in the frequency domain differentiation and instrumental variables to the first method. The added boundary terms result in better estimates at the beginning of the excitation, and the instrumental variables result in a smaller bias when the noise levels are high. The second method is therefore chosen for the conceptual program, as it is judged to perform better than the first. The sequential property of the transform ensures real-time functionality, and the program has a maximum delay of just above one second. The four compared methods for handling missing data are to discard the missing data, hold the previous value, use linear interpolation, or regard the missing samples as variations in the sample time. The linear interpolation method performs best on analytical data and is compared to the variable sample time method using simulated data.
The results of the comparison using simulated data vary depending on the other implementation choices, but neither method is found to give unbiased results. In the conceptual program, the variable sample time method is chosen, as it gives a lower variance and is preferable from an implementation point of view.
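Two of the four missing-sample strategies, holding the previous value and linear interpolation, can be sketched as follows; the short telemetry stream and the gap positions are invented for the illustration.

```python
# Two simple gap-filling strategies for a telemetry stream in which
# dropped samples appear as None. Values are invented for this sketch.

def hold_previous(samples):
    """Replace each missing sample with the most recent observed value."""
    out, last = [], None
    for s in samples:
        last = s if s is not None else last
        out.append(last)
    return out

def linear_interp(samples):
    """Fill interior runs of None by interpolating between the neighbours."""
    out = list(samples)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            lo, hi = out[i - 1], out[j]          # assumes an interior gap
            for k in range(i, j):
                t = (k - i + 1) / (j - i + 1)
                out[k] = lo + t * (hi - lo)
            i = j
        i += 1
    return out

stream = [0.0, 1.0, None, None, 4.0, 5.0]        # ramp with two dropped samples
```

On a ramp like this, interpolation recovers the signal exactly while hold-last-value flattens it, which is consistent with interpolation performing best on analytical data.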
75.
Estimation, model selection and evaluation of regression functions in a Least-squares Monte-Carlo framework. Danielsson, Johan; Gistvik, Gustav (January 2014)
This master thesis investigates one solution to the nested stochastic simulation problem that arises when the future value of a portfolio needs to be calculated. The solution investigated is the Least-squares Monte-Carlo method, in which regression is used to obtain a proxy function for the given portfolio value. We further investigate how to generate an optimal regression function that minimizes the number of terms in the regression function and reduces the risk of overfitting.
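The core of the Least-squares Monte-Carlo idea, fitting a regression proxy to simulated future values so that the inner simulation can be skipped, can be sketched with an ordinary polynomial least-squares fit. The state variable, the quadratic "future value", and the noise level below are all invented; the thesis concerns choosing the regression function, not this particular basis.

```python
# Fit a polynomial proxy to noisy simulated future values by ordinary
# least squares via the normal equations. All data are invented.
import random

def fit_polynomial(xs, ys, degree):
    """Least squares polynomial fit: solve X'X c = X'y directly."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting on the small m x m system.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(A[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = (b[r] - s) / A[r][r]
    return coeffs

def proxy(coeffs, x):
    """Evaluate the fitted proxy function at state x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

random.seed(1)
xs = [random.uniform(0.0, 2.0) for _ in range(200)]       # state variable
ys = [x * x - x + random.gauss(0.0, 0.1) for x in xs]     # noisy "future value"
coeffs = fit_polynomial(xs, ys, degree=2)
```

Once fitted, `proxy` replaces the inner simulation: evaluating a short polynomial instead of rerunning a full nested Monte-Carlo for each outer scenario.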
76.
Comparison of Two Vortex-in-cell Schemes Implemented to a Three-dimensional Temporal Mixing Layer. Sadek, Nabel (24 August 2012)
Numerical simulations are presented for three-dimensional viscous incompressible free shear flows. The numerical method is based on solving the vorticity equation using the Vortex-In-Cell method. In this method, the vorticity field is discretized into a finite set of Lagrangian elements (particles) and the computational domain is covered by an Eulerian mesh. The velocity field is computed on the mesh by solving a Poisson equation. The solution proceeds in time by advecting the particles with the flow. The second-order Adams-Bashforth method is used for time integration. Exchange of information between the Lagrangian particles and the Eulerian grid is carried out using the M'4 interpolation scheme. The classical inviscid scheme is enhanced to account for stretching and viscous effects. For that matter, two schemes are used. The first uses periodic remeshing of the vortex particles along with a fourth-order finite difference approximation for the partial derivatives of the stretching and viscous terms. In the second scheme, derivatives are approximated by least squares polynomials. The novelty of this work lies in using the moving least squares technique within the framework of the Vortex-In-Cell method and applying it to a three-dimensional temporal mixing layer. Comparisons of the mean flow and velocity statistics are made with experimental studies. The results confirm the validity of the present schemes. Both schemes also demonstrate the capability to qualitatively capture the significant flow scales, and allow physical insight to be gained into the development of instabilities and the formation of three-dimensional vortex structures. The two schemes show acceptably low numerical diffusion as well.
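The M'4 kernel used for the particle-grid exchange is commonly given in the piecewise form below; this is a 1-D sketch assuming the standard Monaghan M'4 definition, with the 3-D weights formed as tensor products.

```python
# The M'4 interpolation kernel (assumed standard Monaghan form) and the
# four-node weights it assigns when scattering a particle to the grid.

def m4_prime(x):
    r = abs(x)
    if r <= 1.0:
        return 1.0 - 2.5 * r * r + 1.5 * r ** 3
    if r <= 2.0:
        return 0.5 * (2.0 - r) ** 2 * (1.0 - r)
    return 0.0

def weights(t):
    """Weights for a particle at fractional position t in [0, 1) with
    respect to the four nearest grid nodes at -1, 0, 1 and 2."""
    return [m4_prime(t + 1.0), m4_prime(t), m4_prime(t - 1.0), m4_prime(t - 2.0)]
```

The kernel interpolates exactly (a particle sitting on a node contributes only to that node), has small negative lobes on its outer support, and its shifted copies sum to one, so total circulation is conserved in the transfer.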
77.
Stabilized Least Squares Migration. Ganssle, Graham (18 December 2015)
Before raw seismic data records are interpretable by geologists, geophysicists must process these data using a technique called migration. Migration spatially repositions the acoustic energy in a seismic record to its correct location in the subsurface. Traditional migration techniques use a transpose approximation to the true acoustic propagation operator. Conventional least squares migration uses a true inverse operator, but is limited in practice by the large size of modern seismic datasets. This research uses a new technique, called stabilized least squares migration, to correctly migrate seismic data records using a true inverse operator. Contrary to conventional least squares migration, the new technique tolerates errors of over ten percent in the underlying subsurface velocity model, a major limitation of conventional least squares migration. Stabilized least squares migration also decreases the number of iterations required by conventional least squares migration algorithms by an average of about three iterations on the sample data tested in this research.
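The difference between a transpose ("adjoint") migration and a least-squares migration can be seen on a toy linear system. The 3×3 operator below is invented and bears no relation to a real wave propagation operator; it only shows that applying the transpose does not recover the model, while iterating on the least-squares normal equations does.

```python
# Toy contrast: transpose migration vs. least squares migration on an
# invented 3x3 "modeling operator" L. All numbers are illustrative.

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

L = [[2.0, 1.0, 0.0],
     [0.0, 2.0, 1.0],
     [0.0, 0.0, 2.0]]
m_true = [1.0, -1.0, 0.5]
d = mat_vec(L, m_true)                      # synthetic "recorded" data

m_transpose = mat_vec(transpose(L), d)      # conventional: adjoint only

# Least-squares migration in miniature: minimize |L m - d|^2 by simple
# gradient iterations on the normal equations L'L m = L'd.
Lt = transpose(L)
m = [0.0, 0.0, 0.0]
step = 0.05
for _ in range(2000):
    r = [a - b for a, b in zip(mat_vec(L, m), d)]   # residual L m - d
    g = mat_vec(Lt, r)                              # gradient L'(L m - d)
    m = [mi - step * gi for mi, gi in zip(m, g)]
```

At seismic scale the iteration count, not the algebra, is the bottleneck, which is why reducing iterations, as the stabilized scheme does, matters.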
78.
Approximate replication of high-breakdown robust regression techniques. Zeileis, Achim; Kleiber, Christian (January 2008)
This paper demonstrates that even regression results obtained by techniques close to the standard ordinary least squares (OLS) method can be difficult to replicate if a stochastic model fitting algorithm is employed. (Research Report Series, Department of Statistics and Mathematics)
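The replication hazard can be illustrated with a deliberately oversimplified stochastic estimator based on random subsampling (hypothetical, and far cruder than any real high-breakdown method): two runs agree only when the random seed is fixed.

```python
# A toy random-subsampling estimator: the result depends on the RNG
# state, so reruns replicate only under a fixed seed. Data invented.
import random

def subsample_location(data, n_sub, rng, tries=10):
    """Location estimate from the tightest of a few random subsets."""
    best = None
    for _ in range(tries):
        sub = rng.sample(data, n_sub)
        m = sum(sub) / n_sub
        spread = sum((v - m) ** 2 for v in sub)
        if best is None or spread < best[0]:
            best = (spread, m)
    return best[1]

data = [1.0, 1.1, 0.9, 1.2, 0.8, 9.0]   # one gross outlier

est_a = subsample_location(data, 4, random.Random(42))
est_b = subsample_location(data, 4, random.Random(42))   # same seed: same result
```

Published results from such algorithms are reproducible only if the seed (and the generator, and often the exact subsampling order) is reported, which is the practical point of the paper.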
79.
Regression Analysis of University Giving Data. Jin, Yi (2 January 2007)
This project analyzed the giving data of Worcester Polytechnic Institute's alumni and other constituents (parents, friends, neighbors, etc.) from fiscal year 1983 to 2007 using a two-stage modeling approach. Logistic regression analysis was conducted in the first stage to predict the likelihood of giving for each constituent, followed by linear regression in the second stage to predict the amount of contribution to be expected from each contributor. A Box-Cox transformation was performed in the linear regression phase to ensure that the assumptions underlying the model hold. Due to the nature of the data, multiple imputation was performed on the missing information to validate the generalization of the models to a broader population. Concepts from the field of direct and database marketing, such as "score" and "lift", are also introduced in this report.
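The second-stage Box-Cox step can be sketched as a grid search for the transformation parameter λ that maximizes the usual profile log-likelihood; the gift amounts below are invented.

```python
# Box-Cox transformation with lambda chosen on a grid by maximizing the
# profile log-likelihood of a normal model. Gift amounts are invented.
import math

def boxcox(y, lam):
    if lam == 0:
        return [math.log(v) for v in y]
    return [(v ** lam - 1.0) / lam for v in y]

def boxcox_loglik(y, lam):
    """Profile log-likelihood of a normal model for the transformed data."""
    n = len(y)
    z = boxcox(y, lam)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lam - 1.0) * sum(math.log(v) for v in y)

gifts = [10.0, 25.0, 50.0, 100.0, 250.0, 500.0, 1000.0]  # invented amounts
grid = [i / 10.0 for i in range(-20, 21)]
best_lam = max(grid, key=lambda lam: boxcox_loglik(gifts, lam))
```

For heavily right-skewed amounts spanning orders of magnitude, as gift data typically are, the selected λ tends to land near zero, i.e. close to a log transform.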
80.
A novel decomposition structure for adaptive systems. January 1995
by Wan, Kwok Fai. Thesis (Ph.D.), Chinese University of Hong Kong, 1995. Includes bibliographical references (leaves 138-148).

Contents:
Chapter 1. Adaptive signal processing and its applications
  1.1. Introduction
  1.2. Applications of adaptive system
    1.2.1. Adaptive noise cancellation
    1.2.2. Adaptive echo cancellation
    1.2.3. Adaptive line enhancement
    1.2.4. Adaptive linear prediction
    1.2.5. Adaptive system identification
  1.3. Algorithms for adaptive systems
  1.4. Transform domain adaptive filtering
  1.5. The motivation and organization of the thesis
Chapter 2. Time domain split-path adaptive filter
  2.1. Adaptive transversal filter and the LMS algorithm
    2.1.1. Wiener-Hopf solution
    2.1.2. The LMS adaptive algorithm
  2.2. Split structure adaptive filtering
    2.2.1. Split structure of an adaptive filter
    2.2.2. Split-path structure for a non-symmetric adaptive filter
  2.3. Split-path adaptive median filtering
    2.3.1. Median filtering and median LMS algorithm
    2.3.2. The split-path median LMS (SPMLMS) algorithm
    2.3.3. Convergence analysis of SPMLMS
  2.4. Computer simulation examples
  2.5. Summary
Chapter 3. Multi-stage split structure adaptive filtering
  3.1. Introduction
  3.2. Split structure for a symmetric or an anti-symmetric adaptive filter
  3.3. Multi-stage split structure for an FIR adaptive filter
  3.4. Properties of the split structure LMS algorithm
  3.5. Full split-path adaptive algorithm for system identification
  3.6. Summary
Chapter 4. Transform domain split-path adaptive algorithms
  4.1. Introduction
  4.2. General description of transforms
    4.2.1. Fast Karhunen-Loeve transform
    4.2.2. Symmetric cosine transform
    4.2.3. Discrete sine transform
    4.2.4. Discrete cosine transform
    4.2.5. Discrete Hartley transform
    4.2.6. Discrete Walsh transform
  4.3. Transform domain adaptive filters
    4.3.1. Structure of transform domain adaptive filters
    4.3.2. Properties of transform domain adaptive filters
  4.4. Transform domain split-path LMS adaptive predictor
  4.5. Performance analysis of the TRSPAF
    4.5.1. Optimum Wiener solution
    4.5.2. Steady state MSE and convergence speed
  4.6. Computer simulation examples
  4.7. Summary
Chapter 5. Tracking optimal convergence factor for transform domain split-path adaptive algorithm
  5.1. Introduction
  5.2. The optimal convergence factors of TRSPAF
  5.3. Tracking optimal convergence factors for TRSPAF
    5.3.1. Tracking optimal convergence factor for gradient-based algorithms
    5.3.2. Tracking optimal convergence factors for LMS algorithm
  5.4. Comparison of optimal convergence factor tracking method with self-orthogonalizing method
  5.5. Computer simulation results
  5.6. Summary
Chapter 6. A unification between split-path adaptive filtering and discrete Walsh transform adaptation
  6.1. Introduction
  6.2. A new ordering of the Walsh functions
  6.3. Relationship between SM-ordered Walsh function and other Walsh functions
  6.4. Computer simulation results
  6.5. Summary
Chapter 7. Conclusion
References
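As a point of reference for the split-path structures the thesis develops, a plain LMS transversal filter for system identification (the baseline of Chapter 2) can be sketched as follows; the filter length, step size, unknown system, and input signal are all invented.

```python
# Plain LMS adaptive transversal filter identifying a short FIR system.
# All parameters below are invented for this sketch.
import random

def lms_identify(x, d, n_taps, mu):
    """Adapt weights w so that w . [x[k], x[k-1], ...] tracks d[k]."""
    w = [0.0] * n_taps
    for k in range(n_taps - 1, len(x)):
        recent = x[k - n_taps + 1:k + 1][::-1]   # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, recent))
        e = d[k] - y                             # a priori error
        w = [wi + mu * e * xi for wi, xi in zip(w, recent)]
    return w

rng = random.Random(0)
h = [0.5, -0.3, 0.2]                             # unknown FIR system
x = [rng.uniform(-1.0, 1.0) for _ in range(5000)]
d = [sum(hi * x[k - i] for i, hi in enumerate(h)) if k >= 2 else 0.0
     for k in range(len(x))]
w = lms_identify(x, d, n_taps=3, mu=0.05)        # w converges toward h
```

The split-path and transform domain variants in the thesis restructure this same update, splitting the filter into symmetric and anti-symmetric parts or preconditioning the input with an orthogonal transform, to speed up its convergence.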