141 |
Prediction and Anomaly Detection Techniques for Spatial DataLiu, Xutong 11 June 2013 (has links)
With increasing public sensitivity and concern about environmental issues, huge amounts of spatial data have been collected, from location-based social network applications to scientific measurements. This has encouraged the formation of large spatial data sets and generated considerable interest in identifying novel and meaningful patterns. Because spatial observations are typically correlated, the usual statistical assumption of independent observations is weakened, which complicates spatial analysis. This research focuses on the construction of efficient and effective approaches for three main mining tasks: spatial outlier detection, robust inference for spatial data sets, and spatial prediction for large multivariate non-Gaussian data.
Spatial outlier analysis, which aims at detecting abnormal objects in spatial contexts, can help extract important knowledge in many applications. Most existing approaches suffer from the well-known masking and swamping problems and still cannot satisfy requirements that have arisen recently. This research develops spatial outlier detection techniques along three lines: spatial numerical outlier detection, spatial categorical outlier detection, and identification of the number of spatial numerical outliers.
First, this report introduces random-walk-based approaches to identify spatial numerical outliers. Bipartite and exhaustive-combination weighted graphs are modeled based on spatial and/or non-spatial attributes, and random-walk techniques are then performed on the graphs to compute the relevance among objects; objects with low relevance are recognized as outliers. Second, an entropy-based method is proposed to estimate the optimal number of outliers. According to entropy theory, we expect that by incrementally removing outliers the entropy value will decrease sharply and reach a stable state once all the outliers have been removed. Finally, this research designs several methods based on the pair correlation function to detect spatial categorical outliers (SCOs) for both single- and multiple-attribute data. Within them, a pair correlation ratio (PCR) is defined and estimated for each pair of categorical combinations based on their co-occurrence frequency at different spatial distances. Observations with the lowest PCRs are diagnosed as potential SCOs.
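To make the random-walk idea concrete, here is a minimal sketch, not the thesis's actual algorithm: the similarity kernel, the damping constant, and the toy data are all invented for illustration. Edge weights combine spatial proximity and attribute similarity, a damped random walk yields stationary relevance scores, and the lowest-scoring objects are flagged.

```python
import numpy as np

def random_walk_outliers(coords, values, n_outliers=1, damping=0.9):
    """Toy sketch: relevance scores from a random walk on a weighted graph.

    Edge weights combine spatial proximity and attribute similarity;
    points with the lowest stationary relevance are flagged as outliers.
    """
    n = len(coords)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                d_sp = np.linalg.norm(coords[i] - coords[j])   # spatial distance
                d_at = abs(values[i] - values[j])              # attribute distance
                W[i, j] = np.exp(-d_sp) * np.exp(-d_at)
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(100):                        # power iteration with restart
        r = (1 - damping) / n + damping * (r @ P)
    return np.argsort(r)[:n_outliers]           # lowest relevance = outliers

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
values = np.array([1.0, 1.1, 0.9, 1.0, 10.0])   # point 4 is an attribute outlier
print(random_walk_outliers(coords, values, n_outliers=1))   # → [4]
```

The walk rarely transitions into the dissimilar point, so its stationary relevance stays near the restart floor and it sorts to the bottom.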
Spatial kriging is a widely used predictive model whose accuracy can be significantly compromised if the observations are contaminated by outliers. Also, due to spatial heterogeneity, observations are often of different types. The prediction of multivariate spatial processes plays an important role when there are cross-spatial dependencies between multiple responses. In addition, given the large volume of spatial data, prediction is computationally challenging. These considerations raise three research topics: (1) robust prediction for spatial data sets; (2) prediction of multivariate spatial observations; and (3) efficient processing of large data sets.
First, the robustness of the spatial kriging model can be systematically increased by integrating heavy-tailed distributions, but inference then becomes analytically intractable. Here, we present a novel Robust and Reduced-Rank Spatial Kriging Model (R$^3$-SKM), which is resilient to the influence of outliers and allows for fast spatial inference. Second, this research introduces a flexible hierarchical Bayesian framework that permits the simultaneous modeling of mixed-type variables. Specifically, the mixed-type attributes are mapped to latent numerical random variables that are multivariate Gaussian in nature. Finally, knot-based techniques are utilized to model the predictive process as a reduced-rank spatial process, which projects the process realizations of the spatial model onto a lower-dimensional subspace. This projection significantly reduces the computational cost. / Ph. D.
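A minimal numerical sketch of the knot-based, reduced-rank idea, assuming a simple exponential covariance in one dimension (this is generic predictive-process machinery, not the R$^3$-SKM itself): the n × n covariance is approximated through an m × m knot covariance with m ≪ n, so downstream linear algebra scales with m.

```python
import numpy as np

def exp_cov(a, b, phi=1.0, sigma2=1.0):
    """Exponential covariance between two sets of 1-D locations."""
    d = np.abs(a[:, None] - b[None, :])
    return sigma2 * np.exp(-phi * d)

s = np.linspace(0, 10, 200)        # n = 200 observation locations
knots = np.linspace(0, 10, 15)     # m = 15 knots, m << n

C_sk = exp_cov(s, knots)           # n x m cross-covariance
C_kk = exp_cov(knots, knots)       # m x m knot covariance
# Rank-m predictive-process approximation of the full n x n covariance
C_low = C_sk @ np.linalg.solve(C_kk, C_sk.T)

C_full = exp_cov(s, s)
rel_err = np.linalg.norm(C_full - C_low) / np.linalg.norm(C_full)
print(C_low.shape, round(rel_err, 3))
```

The approximation is exact at the knot locations and degrades smoothly between them; adding knots trades computation for fidelity.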
|
142 |
Control of Multigenerators for the All-Electric ShipBaez Rivera, Yamilka Isabel 30 April 2011 (has links)
The next generation of U.S. Navy ships will see the integration of the propulsion and electrical systems as part of the all-electric ship. This new architecture brings advantages and challenges. One of the challenges is to develop a stable power system that can ride through various issues such as faults or changes in load. While terrestrial systems have long been studied with respect to stability, the unique characteristics of the shipboard power system mean that not all of these results are directly applicable to the all-electric ship. Because of the new shipboard power system structure, more generators must be connected in parallel to supply the power needed. Control of parallel generators has been practiced for years in terrestrial systems; however, advanced control techniques have not been applied to the all-electric ship. The challenge is to apply an advanced control technique to the all-electric shipboard power system that will maintain the stability of multiple-generator systems, keeping in mind that the generators could be dissimilar in ratings. For that reason, the control techniques used to solve the problem need to be developed or adapted for test cases that are similar to the electric-ship configuration. This dissertation describes an effort to implement a robust control scheme on the all-electric ship. The proposed solution is to apply H∞ robust control as an advanced control technique, with realistic constraints, to keep the shipboard power system within stability margins during normal and abnormal operating scenarios. In this work, H∞ robust control has been developed in the form of state-space equations that are optimized using linear matrix inequalities. The developed H∞ controller has been implemented in different operating scenarios to validate its functionality and to compare it with another control technique.
Test-case results for one generator, two similar generators, and two dissimilar generators are described. Stability indicators have been determined and compared for various types of faults and for transients caused by removing and adding static and dynamic loads. The research provides the foundation for applications of advanced control techniques to the next-generation all-electric ship.
|
143 |
A response surface approach to data analysis in robust parameter designKim, Yoon G. 19 June 2006 (has links)
It has become obvious that combined arrays and a response surface approach can be effective tools in our quest to reduce process variability. An important aspect of quality improvement is to suppress the magnitude of the influence coming from subtle changes in noise factors. To model and control process variability induced by noise factors, we take a response surface approach. The derivatives of the standard response function with respect to the noise factors, i.e., the slopes of the response function in the direction of the noise factors, play an important role in the study of the minimum process variance. For a better understanding of the process variability, we study various properties of both biased and unbiased estimators of the process variance. Response surface modeling techniques, together with the ideas involved in modeling and estimating the variance as a function of the aforementioned derivatives, are valuable concepts in this study. In what follows, we describe the use of response surface methodology for situations in which noise factors are present. The approach is to combine Taguchi's notion of heterogeneous variability with standard design and modeling techniques available in response surface methodology. / Ph. D.
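The role of the noise-direction slopes can be illustrated with a small worked example using hypothetical coefficients (a single control factor x, a single noise factor z, and a control-by-noise interaction, all invented for illustration): the process variance is the squared noise-direction slope times the noise variance plus the error variance, and it is minimized where that slope vanishes.

```python
# Hypothetical response surface model with a control-by-noise interaction:
#   y(x, z) = b0 + b1*x + c1*z + d1*x*z + eps,  z ~ N(0, sz2),  eps ~ N(0, s2)
# The slope in the noise direction is dy/dz = c1 + d1*x, so
#   Var_z[y] = (c1 + d1*x)**2 * sz2 + s2,
# which is minimized at the x where the slope is zero: x* = -c1/d1.

b0, b1, c1, d1 = 10.0, 2.0, 1.5, -0.5   # assumed fitted coefficients
sz2, s2 = 1.0, 0.25                     # noise-factor and error variances

def process_variance(x):
    slope = c1 + d1 * x     # derivative of the response w.r.t. the noise factor
    return slope**2 * sz2 + s2

x_star = -c1 / d1           # robust control setting
print(x_star, process_variance(x_star))   # → 3.0 0.25
```

At the robust setting the transmitted noise variation is eliminated and only the irreducible error variance s2 remains, which is exactly the variance-modeling logic described above.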
|
144 |
Steering drift and wheel movement during braking: parameter sensitivity studiesKlaps, J., Day, Andrew J. January 2003 (has links)
Yes / In spite of the many significant improvements in car chassis design over the past two decades, steering drift during braking, where the driver must apply a corrective steering torque in order to maintain course, can still be experienced under certain conditions while driving. In the past, such drift, or 'pull', would have been attributed to side-to-side braking torque variation [1], but modern automotive friction brakes and friction materials are now able to provide braking torque with such high levels of consistency that side-to-side braking torque variation is no longer regarded as a cause of steering drift during braking. Consequently, other influences must be considered. This paper is the first of two papers to report on an experimental investigation into braking-related steering drift in motor vehicles. Parameters that might influence steering drift during braking include suspension compliance and steering offset, and these have been investigated to establish the sensitivity of steering drift to such parameters. The results indicate how wheel movement arising from compliance in the front suspension and steering system of a passenger car during braking can be responsible for steering drift during braking. Braking causes changes in wheel alignment which in turn affect the toe steer characteristics of each wheel and therefore the straight-line stability during braking. It is concluded that a robust design of suspension is possible in which side-to-side variation in toe steer is not affected by changes in suspension geometry during braking, and that the magnitude of these changes and the relationships between the braking forces and the suspension geometry and compliance require further investigation, which will be presented in the second paper of the two.
|
145 |
Worlds Collide through Gaussian Processes: Statistics, Geoscience and Mathematical ProgrammingChristianson, Ryan Beck 04 May 2023 (has links)
Gaussian process (GP) regression is the canonical method for nonlinear spatial modeling in the statistics and machine learning communities. Geostatisticians use a subtly different technique known as kriging. I highlight key similarities and differences between GPs and kriging through the use of large-scale gold mining data. Most importantly, GPs are largely hands-off, learning automatically from the data, whereas kriging requires an expert human in the loop to guide the analysis. To emphasize this, I show an imputation method for the left-censored values frequently seen in mining data. Geologists often ignore censored values because of the difficulty of imputing them with kriging, but GPs execute imputation with relative ease, leading to better estimates of the gold surface. My hope is that this research can serve as a springboard to encourage the mining community to consider GPs over kriging for their diverse utility after model fitting. Another common use of GPs that would be inefficient with kriging is Bayesian optimization (BO). Traditionally, BO is designed to find a global optimum by sequentially sampling from a function of interest using an acquisition function. When two or more local or global optima of the function of interest have similar objective values, it often makes sense to target the more "robust" solution with the wider domain of attraction. However, traditional BO weighs these solutions the same, favoring whichever has a slightly better objective value. By combining the idea of expected improvement (EI) from the BO community with mathematical programming's concept of an adversary, I introduce a novel algorithm that targets robust solutions, called robust expected improvement (REI). The adversary penalizes "peaked" areas of the objective function, making those values appear less desirable.
REI performs acquisitions using EI on the adversarial space, yielding data sets focused on the robust solution that exhibit EI's proven balance of exploration and exploitation. / Doctor of Philosophy / Since its origins in the 1940s, spatial statistics modeling has adapted to fit different communities. The geostatistics community developed with an emphasis on modeling mining operations and has further evolved to cover a slew of different applications, largely focused on two or three physical dimensions. The computer experiments community developed later, when physical experiments started moving into the virtual realm with advances in computer technology. While birthed from the same foundation, computer experimenters often look at problems in ten or even more dimensions. Due to these and other differences, each community tailored its methods to best fit its common problems. My research compares the modern instantiations of the differing methodologies on two sets of real gold mining data. Ultimately, I prefer the computer experiments methods for their ease of adaptation to downstream tasks at no cost to model performance. A statistical model is almost never a standalone development; it is created with a specific goal in mind. The first case I show of this is "imputation" of mining data. Mining data often have a detection threshold such that any observation with a very small mineral concentration is recorded at the threshold. Frequently, geostatisticians simply throw out these observations because they cause problems in modeling. Statisticians instead use the information that the concentration is low, combined with the rest of the fully observed data, to derive a best guess at the concentration of thresholded locations. Under the geostatistics framework this is cumbersome, but the computer experiments community considers imputation an easy extension. Another common modeling task is designing an experiment to best learn a surface.
The surface may be a gold deposit on Earth, an unknown virtual function, or really anything measurable. To do this, computer experimenters often use "active learning": sampling one point at a time, using that point to build a better-informed model which suggests a new point to sample, and repeating until a satisfactory number of points have been sampled. Geostatisticians often prefer "one-shot" experiments, deciding on all samples before collecting any; the geostatistics framework is thus not well suited to active learning. Here, active learning tries to find the "best" location on the surface, with either the maximum or minimum response. I adapt this problem by redefining "best" to mean a "robust" location, where the response does not change much even if the location is not perfectly specified. As an example, consider setting operating conditions for a factory. If two settings produce a similar amount of product, but one requires an exact pressure setting or else the factory blows up, the other is certainly preferred. To design experiments that find robust locations, I borrow ideas from the mathematical programming community to develop a novel method for robust active learning.
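The expected-improvement criterion that REI builds on can be sketched in a few lines. This is the standard EI formula for minimization, not the REI algorithm itself (which would first apply the adversarial penalty to the surface):

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Standard EI (minimization form) at a point with GP posterior
    mean `mu`, standard deviation `sigma`, and incumbent best `f_best`."""
    if sigma <= 0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal PDF
    return (f_best - mu) * Phi + sigma * phi

# At equal posterior means, higher uncertainty yields higher EI:
# this is the exploration half of the exploration/exploitation balance.
print(expected_improvement(mu=1.0, sigma=0.1, f_best=1.0))
print(expected_improvement(mu=1.0, sigma=1.0, f_best=1.0))
```

REI would evaluate this same criterion on the adversarially penalized objective, so "peaked" optima score lower and acquisitions concentrate around the broad, robust basin.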
|
146 |
Tuning robust control systems under parametric uncertaintyLaiseca, Mario January 1994 (has links)
No description available.
|
147 |
On the Estimation of Lower-end Quantiles from a Right-skewed DistributionWang, Hongjun 13 April 2010 (has links)
No description available.
|
148 |
The Generalized Linear Mixed Model for Finite Normal Mixtures with Application to Tendon Fibrilogenesis DataZhan, Tingting January 2012 (has links)
We propose the generalized linear mixed model for finite normal mixtures (GLMFM), as well as estimation procedures for the GLMFM, which are widely applicable to hierarchical data sets with a small number of individual units and multi-modal distributions at the lowest level of clustering. The modeling task is two-fold: (a) to model the lowest-level cluster as a finite mixture of normal distributions; and (b) to model the properly transformed mixture proportions, means, and standard deviations of the lowest-level cluster with a linear hierarchical structure. We propose robust generalized weighted likelihood estimators and a new cubic-inverse weight for the estimation of the finite mixture model (Zhan et al., 2011). We propose two robust methods for estimating the GLMFM, which accommodate contamination at all clustering levels: a standard two-stage approach (Chervoneva et al., 2011, co-authored) and a robust joint estimation. Our research was motivated by data obtained from the tendon fibril experiment reported in Zhang et al. (2006). Our statistical methodology is quite general and has potential application in a variety of relatively complex statistical modeling situations. / Statistics
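As a point of reference for the mixture part of the model, here is a plain (non-robust) EM fit of a two-component univariate normal mixture on synthetic data; the thesis's robust generalized weighted likelihood estimators would replace these vanilla maximum-likelihood updates, and the data here are invented:

```python
import numpy as np

def em_two_normals(x, iters=200):
    """Plain EM for a two-component univariate normal mixture
    (a minimal non-robust stand-in for the mixture-fitting step)."""
    pi, mu1, mu2, s1, s2 = 0.5, x.min(), x.max(), x.std(), x.std()
    for _ in range(iters):
        # E-step: responsibilities of component 1
        d1 = pi * np.exp(-0.5 * ((x - mu1) / s1) ** 2) / s1
        d2 = (1 - pi) * np.exp(-0.5 * ((x - mu2) / s2) ** 2) / s2
        r = d1 / (d1 + d2)
        # M-step: update proportion, means, and standard deviations
        pi = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - mu1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum())
    return pi, mu1, mu2

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(6, 1, 700)])
print(em_two_normals(x))   # roughly (0.3, 0.0, 6.0)
```

A robust weighted-likelihood variant would downweight observations that fit neither component, so a few gross outliers could not drag the component means or inflate the standard deviations.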
|
149 |
Parameter robust reduced-order control of flexible structuresJones, Stephen H. 13 October 2005 (has links)
This thesis generalizes the concept of internal feedback loop modeling, due to Tahk and Speyer, to arrive at two new LQG-based methods of parameter robust control. One component of the robustness procedure, common to both methods, is the application of an auxiliary cost functional penalty to desensitize the system to variations in selected parameters of the state-space model. The other component consists of the formulation of a fictitious noise model to accommodate the effect of these parameter variations.
The "frequency-domain method" utilizes knowledge of the system dynamics to create a frequency-shaped noise model with a power spectrum that approximates the frequency content of unknown error signals in the system due to parameter uncertainties. This design method requires augmentation of additional dynamics to the plant, which results in higher-dimensional full-order controllers. However, the controller design computations are identical to those of a standard LQG problem.
The "time-domain method" emulates the same error signals by means of a multiplicative white noise model which reflects the time-domain behavior of those signals. The resulting robust controller is of the same order as the standard LQG controller, although the design involves a more complex computational algorithm. The application of multiplicative white noise to the system model requires the solution of a system of four coupled equations - two modified Riccati equations and two modified Lyapunov equations.
In addition, the optimal projection equations are applied to both robustness methods to reduce the controller order with minimal loss in performance.
Comparisons are drawn between these and related robust control methods, and it is shown that the relative effectiveness of such methods is problem dependent. Parameter sensitivity analysis is carried out on a simply supported plate model subject to external disturbances. The appropriate robust controller is selected, and it is found to stabilize the plate with little sacrifice in performance. / Ph. D.
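To give a flavor of the computations involved, here is a sketch that solves a single, unmodified continuous Lyapunov equation via Kronecker linearization; the time-domain method above couples four modified Riccati and Lyapunov equations, which this toy (with an invented stable system matrix) does not attempt.

```python
import numpy as np

def lyap(A, Q):
    """Solve A P + P A^T + Q = 0 by Kronecker linearization
    (fine for small state dimensions; dense and O(n^6) in general)."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A P + P A^T) = (kron(A, I) + kron(I, A)) vec(P)
    M = np.kron(A, I) + np.kron(I, A)
    return np.linalg.solve(M, -Q.flatten()).reshape(n, n)

# Hypothetical stable closed-loop matrix (eigenvalues -1 and -2)
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
Q = np.eye(2)
P = lyap(A, Q)
print(np.allclose(A @ P + P @ A.T + Q, 0))   # residual vanishes
```

For a stable A and positive definite Q, the solution P is symmetric positive definite, which is the standard Lyapunov certificate of stability that the coupled design equations build upon.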
|
150 |
Cluster-Based Bounded Influence RegressionLawrence, David E. 14 August 2003 (has links)
In the field of linear regression analysis, a single outlier can dramatically influence ordinary least squares estimation, while low-breakdown procedures such as M regression and bounded influence regression may be unable to combat even a small percentage of outliers. A high-breakdown procedure such as least trimmed squares (LTS) regression can accommodate up to 50% of the data (in the limit) being outlying with respect to the general trend. Two available one-step improvement procedures based on LTS are Mallows 1-step (M1S) regression and Schweppe 1-step (S1S) regression (the current state-of-the-art method). Issues with these methods include (1) computational approximations and sub-sampling variability, (2) dramatic coefficient sensitivity to very slight differences in initial values, (3) internal instability when determining the general trend, and (4) performance in low-breakdown scenarios. A new high-breakdown regression procedure is introduced that addresses these issues and offers an insightful summary of the presence and structure of multivariate outliers. The proposed method blends a cluster analysis phase with a controlled bounded influence regression phase, and is hence referred to as cluster-based bounded influence regression, or CBI. The data space is represented via a special set of anchor points, and a collection of point-addition OLS regression estimators forms the basis of a metric used to define the similarity between any two observations. Cluster analysis then yields a main cluster "halfset" of observations, with the remaining observations forming one or more minor clusters. An initial regression estimator arises from the main cluster, and a multiple-point-addition DFFITS argument is used to carefully activate the minor clusters within a bounded influence regression framework. CBI achieves a 50% breakdown point; is regression, scale, and affine equivariant; and is asymptotically normal in distribution.
Case studies and Monte Carlo studies demonstrate the performance advantage of CBI over S1S and the other high breakdown methods regarding coefficient stability, scale estimation and standard errors. A dendrogram of the clustering process is one graphical display available for multivariate outlier detection. Overall, the proposed methodology represents advancement in the field of robust regression, offering a distinct philosophical viewpoint towards data analysis and the marriage of estimation with diagnostic summary. / Ph. D.
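For contrast with the high-breakdown baseline cited above, here is a toy least-trimmed-squares line fit using random elemental starts and concentration steps, a crude version of the FAST-LTS idea rather than the CBI procedure itself; the data and tuning constants are invented.

```python
import numpy as np

def lts_line(x, y, h=None, n_starts=50, c_steps=10, seed=0):
    """Crude least-trimmed-squares fit of y = a + b*x: random two-point
    elemental starts followed by concentration steps on the h best points."""
    rng = np.random.default_rng(seed)
    n = len(x)
    h = h or (n // 2 + 1)                       # trimmed subset size
    X = np.column_stack([np.ones(n), x])
    best = (np.inf, None)
    for _ in range(n_starts):
        idx = rng.choice(n, 2, replace=False)   # elemental (two-point) start
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        for _ in range(c_steps):                # concentration: refit on h best
            r2 = (y - X @ beta) ** 2
            keep = np.argsort(r2)[:h]
            beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()
        if obj < best[0]:
            best = (obj, beta)
    return best[1]

x = np.arange(20, dtype=float)
y = 1.0 + 2.0 * x
y[15:] = 0.0                     # 25% of the points grossly outlying
a, b = lts_line(x, y)
print(round(a, 2), round(b, 2))  # → 1.0 2.0
```

Ordinary least squares on the same data would be dragged far from the true line by the five corrupted points; the trimmed objective simply excludes them, which is the breakdown behavior the abstract describes.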
|