341

Efficient Algorithms for Mining Large Spatio-Temporal Data

Chen, Feng 21 January 2013 (has links)
Knowledge discovery on spatio-temporal datasets has attracted growing interest. Recent advances in remote sensing technology mean that massive amounts of spatio-temporal data are being collected, and their volume keeps increasing at an ever faster pace. It has become critical to design efficient algorithms for identifying novel and meaningful patterns in massive spatio-temporal datasets. Unlike other data sources, these data exhibit significant space-time statistical dependence, and the i.i.d. assumption is no longer valid. Exact modeling of space-time dependence causes model complexity to grow exponentially as the data size increases. This research focuses on constructing efficient and effective approaches based on approximate inference techniques for three main mining tasks: spatial outlier detection, robust spatio-temporal prediction, and novel applications to real-world problems.

Spatial novelty patterns, or spatial outliers, are data points whose characteristics differ markedly from those of their spatial neighbors. There are two major branches of spatial outlier detection methodology: global Kriging-based methods and local Laplacian-smoothing-based methods. The former requires exact modeling of spatial dependence, which is computationally expensive; the latter requires the i.i.d. assumption for the smoothed observations, which is not statistically sound. Both approaches are restricted to numerical data, but real-world applications often involve a variety of non-numerical data types, such as count, binary, nominal, and ordinal. In summary, the main research challenges are: 1) how much spatial dependence can be eliminated via Laplacian smoothing; 2) how to effectively and efficiently detect outliers in large numerical spatial datasets; 3) how to generalize numerical detection methods into a unified outlier detection framework suitable for large non-numerical datasets; 4) how to achieve accurate spatial prediction even when the training data has been contaminated by outliers; 5) how to handle spatio-temporal data in the preceding problems.

To address the first and second challenges, we mathematically validated the effectiveness of Laplacian smoothing in eliminating spatial autocorrelation. This work provides fundamental support for existing Laplacian-smoothing-based methods. We also discovered a nontrivial side effect of Laplacian smoothing: it introduces additional spatial variation into the data due to convolution effects. To capture this extra variability, we proposed a generalized local statistical model and designed two fast forward and backward outlier detection methods that achieve a better balance between computational efficiency and accuracy than most existing methods and are well suited to large numerical spatial datasets.

We addressed the third challenge by mapping non-numerical variables to latent numerical variables via a link function, such as the logit function used in logistic regression, and then utilizing error-buffer artificial variables, which follow a Student-t distribution, to capture the large deviations caused by outliers.
We proposed a unified statistical framework that integrates the advantages of the spatial generalized linear mixed model, the robust spatial linear model, reduced-rank dimension reduction, and Bayesian hierarchical modeling. A linear-time approximate inference algorithm was designed to infer the posterior distribution of the error-buffer artificial variables conditioned on the observations. We demonstrated that traditional numerical outlier detection methods can be applied directly to the estimated artificial variables for outlier detection. To the best of our knowledge, this is the first linear-time outlier detection algorithm that supports a variety of spatial attribute types, such as binary, count, ordinal, and nominal.

To address the fourth and fifth challenges, we proposed a robust version of the Spatio-Temporal Random Effects (STRE) model, namely the Robust STRE (R-STRE) model. The regular STRE model is a recently proposed statistical model for large spatio-temporal data with linear time complexity, but it is not well suited to non-Gaussian and contaminated datasets. This deficiency can be systematically addressed by increasing the robustness of the model using heavy-tailed distributions, such as the Huber, Laplace, or Student-t distribution, to model the measurement error instead of the traditional Gaussian. However, the resulting R-STRE model becomes analytically intractable, and direct application of approximate inference techniques still has cubic time complexity. To address this computational challenge, we reformulated the prediction problem as a maximum a posteriori (MAP) problem with a non-smooth objective function, transformed it into an equivalent quadratic programming problem, and developed an efficient interior-point numerical algorithm with near-linear complexity. This work presents the first near-linear-time robust prediction approach for large spatio-temporal datasets in both offline and online cases. / Ph. D.
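The abstract stops short of the algorithms themselves, but the local, smoothing-based branch of spatial outlier detection it contrasts with Kriging is easy to sketch: compare each observation with the mean of its k nearest spatial neighbors and flag standardized residuals beyond a cutoff. The neighborhood size, the z-score threshold, and the synthetic data below are illustrative assumptions, not details from the dissertation.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_spatial_outliers(coords, values, k=8, threshold=3.0):
    """Flag points whose value differs markedly from their spatial neighbors.

    A minimal local (smoothing-based) detector: each observation is compared
    with the mean of its k nearest neighbors, and the differences are
    standardized into z-scores. k and threshold are illustrative choices.
    """
    tree = cKDTree(coords)
    # Query k+1 neighbors because each point is its own nearest neighbor.
    _, idx = tree.query(coords, k=k + 1)
    neighbor_mean = values[idx[:, 1:]].mean(axis=1)
    diff = values - neighbor_mean            # residual after local smoothing
    z = (diff - diff.mean()) / diff.std()    # standardized residuals
    return np.abs(z) > threshold

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.standard_normal(500)
values[:5] += 4.0                            # inject five synthetic outliers
print(np.flatnonzero(local_spatial_outliers(coords, values))[:10])
```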
342

Discrete-Time Noncausal Linear Periodically Time-Varying Scaling for Robustness Analysis and Controller Synthesis

Hosoe, Yohei 24 September 2013 (has links)
Kyoto University / 0048 / New system, doctoral course / Doctor of Philosophy (Engineering) / Kou No. 17889 / Eng. Doc. No. 3798 / Shinsei||Ko||1581 (University Library) / 30709 / Department of Electrical Engineering, Graduate School of Engineering, Kyoto University / (Chief examiner) Professor Tomomichi Hagiwara, Professor Shinji Doi, Associate Professor Takashi Hisakado / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
343

Path-following Control of Container Ships

Zhao, Yang 25 July 2019 (has links)
No description available.
344

The Robustness of O'Brien's r Transformation to Non-Normality

Gordon, Carol J. (Carol Jean) 08 1900 (has links)
A Monte Carlo simulation technique was employed in this study to determine if the r transformation, a test of homogeneity of variance, affords adequate protection against Type I error over a range of equal sample sizes and number of groups when samples are obtained from normal and non-normal distributions. Additionally, this study sought to determine if the r transformation is more robust than Bartlett's chi-square to deviations from normality. Four populations were generated representing normal, uniform, symmetric leptokurtic, and skewed leptokurtic distributions. For each sample size (6, 12, 24, 48), number of groups (3, 4, 5, 7), and population distribution condition, the r transformation and Bartlett's chi-square were calculated. This procedure was replicated 1,000 times; the actual significance level was determined and compared to the nominal significance level of .05. On the basis of the analysis of the generated data, the following conclusions are drawn. First, the r transformation is generally robust to violations of normality when the size of the samples tested is twelve or larger. Second, in the instances where a significant difference occurred between the actual and nominal significance levels, the r transformation produced (a) conservative Type I error rates if the kurtosis of the parent population were 1.414 or less and (b) an inflated Type I error rate when the index of kurtosis was three. Third, the r transformation should not be used if sample size is smaller than twelve. Fourth, the r transformation is more robust in all instances to non-normality, but the Bartlett test is superior in controlling Type I error when samples are from a population with a normal distribution. In light of these conclusions, the r transformation may be used as a general utility test of homogeneity of variances when either the distribution of the parent population is unknown or is known to have a non-normal distribution, and the size of the equal samples is at least twelve.
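O'Brien's r transformation is not available in common libraries, but the Monte Carlo procedure described above is easy to sketch for Bartlett's chi-square, which SciPy does provide. Assuming equal group sizes and 1,000 replications as in the study, the empirical rejection rate under a true null estimates the actual significance level, to be compared against the nominal .05:

```python
import numpy as np
from scipy import stats

def empirical_type1_rate(dist_sampler, n=12, groups=4, reps=1000, alpha=0.05):
    """Estimate Bartlett's-test Type I error rate under a given distribution.

    All groups are drawn from the same distribution (the null hypothesis of
    equal variances is true), so the rejection fraction estimates the actual
    significance level, to be compared against the nominal alpha.
    """
    rng = np.random.default_rng(42)
    rejections = 0
    for _ in range(reps):
        samples = [dist_sampler(rng, n) for _ in range(groups)]
        if stats.bartlett(*samples).pvalue < alpha:
            rejections += 1
    return rejections / reps

# Normal vs. skewed leptokurtic (exponential) parent populations.
normal = lambda rng, n: rng.standard_normal(n)
skewed = lambda rng, n: rng.exponential(size=n)
print("normal parent:", empirical_type1_rate(normal))
print("skewed parent:", empirical_type1_rate(skewed))  # inflation expected
```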
345

Integrated Optimal and Robust Control of Spacecraft in Proximity Operations

Pan, Hejia 09 December 2011 (has links)
With the rapid growth of space activities and the advancement of aerospace science and technology, many autonomous space missions have proliferated in recent decades. Control of spacecraft in proximity operations is of great importance for accomplishing these missions. The research in this dissertation aims to provide a precise, efficient, optimal, and robust controller to ensure successful spacecraft proximity operations. This is a challenging control task since the problem involves highly nonlinear dynamics including translational motion, rotational motion, and flexible structure deformation and vibration. In addition, uncertainties in the system modeling parameters and disturbances make precise control more difficult. Four control design approaches are integrated to solve this challenging problem. The first approach is to consider the spacecraft rigid-body translational and rotational dynamics together with the flexible motion in one unified optimal control framework, so that the overall system performance and constraints can be addressed in one optimization process. The second approach is to formulate the robust control objectives into the optimal control cost function and prove the equivalence between the robust stabilization problem and the transformed optimal control problem. The third approach is to employ the θ-D technique, a novel optimal control method based on a perturbation solution to the Hamilton-Jacobi-Bellman equation, to solve the nonlinear optimal control problem obtained from the indirect robust control formulation. The resulting optimal control law can be obtained in closed form, which facilitates onboard implementation. The integration of these three approaches is called the integrated indirect robust control scheme. The fourth approach is to use the inverse optimal adaptive control method combined with the indirect robust control scheme to alleviate the conservativeness of the indirect robust control scheme by using online parameter estimation, such that adaptive, robust, and optimal properties can all be achieved. To show the effectiveness of the proposed control approaches, a six degree-of-freedom spacecraft proximity operation simulation is conducted and demonstrates satisfactory performance under various uncertainties and disturbances.
346

A Comparative Simulation Study of Robust Estimators of Standard Errors

Johnson, Natalie 10 July 2007 (has links) (PDF)
The estimation of standard errors is essential to statistical inference. Statistical variability is inherent within data but is usually of secondary interest; still, some options exist to deal with this variability. One approach is to carefully model the covariance structure. Another approach is robust estimation, in which the covariance structure is estimated from the data. White (1980) introduced a biased, but consistent, robust estimator. Long et al. (2000) added an adjustment factor to White's estimator to remove the bias of the original estimator. Through the use of simulations, this project compares restricted maximum likelihood (REML) with four robust estimation techniques: the Standard Robust Estimator (White 1980), the Long estimator (Long 2000), the Long estimator with a quantile adjustment (Kauermann 2001), and the empirical option of the MIXED procedure in SAS. The results of the simulation show small-sample and asymptotic properties of the five estimators. The REML procedure is modelled under the true covariance structure and is the most consistent of the five estimators, though it shows a slight small-sample bias as the number of repeated measures increases. The REML procedure may not be the best estimator when the covariance structure is in question. The Standard Robust Estimator is consistent, but it has an extreme downward bias for small sample sizes; it changes little when complexity is added to the covariance structure. The Long estimator is an unstable estimator: as complexity is introduced into the covariance structure, the coverage probability with the Long estimator increases. The Long estimator with the quantile adjustment works as designed by mimicking the Long estimator at an inflated quantile level. The empirical option of the MIXED procedure in SAS works well for homogeneous covariance structures, reducing the downward bias of the Standard Robust Estimator when the covariance structure is homogeneous.
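For context, here is a minimal sketch of White's (1980) heteroskedasticity-consistent "sandwich" estimator, the basis of the Standard Robust Estimator discussed above. The degrees-of-freedom adjustment shown is one common small-sample correction, not necessarily the Long et al. adjustment studied in the thesis:

```python
import numpy as np

def white_robust_se(X, y):
    """HC0 sandwich standard errors for OLS (White 1980).

    Bread = (X'X)^{-1}, meat = X' diag(e^2) X, where e are the OLS
    residuals. HC1 applies the n/(n-p) small-sample adjustment, in the
    spirit of the bias corrections discussed in the abstract.
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (resid[:, None] ** 2 * X)
    hc0 = XtX_inv @ meat @ XtX_inv
    hc1 = hc0 * n / (n - p)                  # degrees-of-freedom adjustment
    return beta, np.sqrt(np.diag(hc0)), np.sqrt(np.diag(hc1))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
# Heteroskedastic errors: variance grows with |x|, where OLS SEs mislead.
y = X @ np.array([1.0, 2.0]) + rng.normal(size=200) * (1 + np.abs(X[:, 1]))
beta, se_hc0, se_hc1 = white_robust_se(X, y)
print(beta, se_hc0, se_hc1)
```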
347

Parameter Estimation for the Lognormal Distribution

Ginos, Brenda Faith 13 November 2009 (has links) (PDF)
The lognormal distribution is useful in modeling continuous random variables which are greater than or equal to zero. Example scenarios in which the lognormal distribution is used include, among many others: in medicine, latent periods of infectious diseases; in environmental science, the distribution of particles, chemicals, and organisms in the environment; in linguistics, the number of letters per word and the number of words per sentence; and in economics, age of marriage, farm size, and income. The lognormal distribution is also useful in modeling data which would be considered normally distributed except for the fact that it may be more or less skewed (Limpert, Stahel, and Abbt 2001). Appropriately estimating the parameters of the lognormal distribution is vital for the study of these and other subjects. Depending on the values of its parameters, the lognormal distribution takes on various shapes, including a bell curve similar to the normal distribution. This paper contains a simulation study concerning the effectiveness of various estimators for the parameters of the lognormal distribution. A comparison is made between parameter estimators such as the Maximum Likelihood estimators, Method of Moments estimators, estimators by Serfling (2002), and estimators by Finney (1941). A simulation is conducted to determine which parameter estimators work better for various parameter combinations and sample sizes of the lognormal distribution. We find that the Maximum Likelihood and Finney estimators perform the best overall, with a preference given to Maximum Likelihood over the Finney estimators because of its simplicity. The Method of Moments estimators seem to perform best when σ is less than or equal to one, and the Serfling estimators are quite accurate in estimating μ but not σ in all regions studied. Finally, these parameter estimators are applied to a data set counting the number of words in each sentence for various documents, following which a review of each estimator's performance is conducted. Again, we find that the Maximum Likelihood estimators perform best for the given application, but that Serfling's estimators are preferred when outliers are present.
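For the two baseline estimators compared, the formulas are standard: Maximum Likelihood fits a normal distribution to log(x), while Method of Moments inverts the lognormal mean and variance. A minimal sketch follows (the Serfling and Finney estimators are more involved and not reproduced here):

```python
import numpy as np

def lognormal_mle(x):
    """Maximum likelihood estimates: fit a normal distribution to log(x)."""
    logx = np.log(x)
    mu = logx.mean()
    sigma = np.sqrt(((logx - mu) ** 2).mean())
    return mu, sigma

def lognormal_mom(x):
    """Method of moments estimates, inverting E[X] and Var[X] of a lognormal."""
    m, v = x.mean(), x.var()
    sigma2 = np.log(1.0 + v / m**2)
    mu = np.log(m) - sigma2 / 2.0
    return mu, np.sqrt(sigma2)

rng = np.random.default_rng(7)
x = rng.lognormal(mean=1.0, sigma=0.8, size=2000)
print("MLE:", lognormal_mle(x))   # both should be near (1.0, 0.8)
print("MoM:", lognormal_mom(x))
```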
348

Robustness in design of experiments in manufacturing course

Amana, Ahmed January 2022 (has links)
Design of experiments (DOE) is a statistical method for testing the effects of input factors on a process based on its responses or outputs. Since the influence of these factors and their interactions is studied from the process outputs, the quality of these outputs, or measurements, plays a significant role in reaching a correct statistical conclusion about the significance of factors and their interactions. Linear regression is one method that can be applied for DOE; the parameters of such a regression model are estimated by the ordinary least-squares (OLS) method. This method is sensitive to the presence of blunders in the measurements, meaning that blunders significantly affect the result of a regression using the OLS method. This research aims to perform a robustness analysis for some full factorial DOEs using different robust estimators as well as the Taguchi methodology. A full factorial DOE with three factors at three levels is studied, performed with both two and three replicates. Taguchi's approach is conducted by computing the signal-to-noise ratio (S/N) from the three replicates, where lower noise means a stronger signal. The robust estimators of Andrews, Cauchy, Fair, Huber, Logistic, Talwar, and Welsch are applied to the DOE in different setups, adding different types and percentages of blunders or gross errors to the data to assess the success rate of each. The number and size of the blunders in the measurements are two important factors influencing the success rate of a robust estimator. For evaluation, the measurements are contaminated by blunders at varying percentages of the data. Our study showed that the Talwar robust estimator is the best among the estimators tested, resisting contamination of up to 80% of the data by blunders. Consequently, the use of this estimator instead of OLS is recommended for DOE purposes. The comparison between Taguchi's method and the robust estimators showed that blunders affect the signal-to-noise ratio, as the signal is significantly changed by them, whilst the robust estimators suppress the blunders well, allowing the same conclusion to be drawn as with OLS on blunder-free data.
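M-estimators such as Huber and Talwar are typically computed by iteratively reweighted least squares (IRLS), downweighting observations with large standardized residuals; Talwar's weight function discards them outright, which is consistent with the resistance to heavy contamination reported above. A minimal sketch with standard tuning constants follows (the exact setups and contamination scheme of the study are not reproduced):

```python
import numpy as np

def irls(X, y, weight_fn, tol=1e-8, max_iter=50):
    """Iteratively reweighted least squares for a robust M-estimator.

    weight_fn maps standardized residuals to weights in [0, 1]; swapping it
    changes the estimator (Huber, Talwar, Welsch, ...).
    """
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS starting point
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust scale (MAD)
        w = weight_fn(r / s)
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

# Standard tuning constants; Talwar gives zero weight beyond its cutoff.
huber = lambda u, c=1.345: np.minimum(1.0, c / np.maximum(np.abs(u), 1e-12))
talwar = lambda u, c=2.795: (np.abs(u) <= c).astype(float)

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(60), rng.normal(size=60)])
y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=60)
y[:15] += 10.0                                     # 25% gross errors
print("Huber :", irls(X, y, huber))
print("Talwar:", irls(X, y, talwar))
```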
349

Robustness Analysis For Turbomachinery Stall Flutter

Forhad, Md Moinul 01 January 2011 (has links)
Flutter is an aeroelastic instability phenomenon that can result in serious damage to, or complete destruction of, a gas turbine blade structure due to high cycle fatigue. Although 90% of potential high cycle fatigue occurrences are uncovered during engine development, the remaining 10% account for one third of total engine development costs. Field experience has shown that during the last decades as much as 46% of fighter aircraft were not mission-capable in certain periods due to high cycle fatigue related mishaps. To assure reliable and safe operation, the potential for blade flutter must be eliminated from turbomachinery stages. However, even the most computationally intensive higher-order models of today are not able to predict flutter accurately. Moreover, there are uncertainties in the operational environment, and gas turbine parts degrade over time due to fouling, erosion, and corrosion, resulting in parametric uncertainties. Therefore, it is essential to design engines that are robust with respect to these possible uncertainties. In this thesis, the robustness of an axial compressor blade design is studied with respect to parametric uncertainties through Mu analysis. The nominal flutter model is adopted from [9]. This model was derived by matching a two-dimensional incompressible flow field across the flexible rotor and the rigid stator. The aerodynamic load on the blade is derived via control volume analysis. For use in the Mu analysis, the model, originally described by a set of partial differential equations, is first reduced to ordinary differential equations by the Fourier-series-based collocation method. The nominal model is then obtained by linearizing the resulting nonlinear ordinary differential equations. The uncertainties arising from the modeling assumptions and from imperfectly known parameters and coefficients are all modeled as parametric uncertainties through Monte Carlo simulation. Compared with other robustness analysis tools, such as H-infinity, the Mu analysis is less conservative and can handle both structured and unstructured perturbations. Finally, a genetic algorithm is used as an optimization tool to find ideal parameters that ensure the best performance in terms of damping out flutter. Simulation results show that the procedure described in this thesis can be effective in studying the flutter stability margin and can be used to guide gas turbine blade design.
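The Mu analysis itself requires a structured-uncertainty model of the linearized system and is beyond a short sketch, but the Monte Carlo treatment of parametric uncertainty mentioned above can be illustrated: sample the uncertain parameters, build the linearized state matrix, and record how often all eigenvalues stay in the open left half-plane. The toy two-state model and uncertainty ranges below are illustrative assumptions, not the thesis model from [9]:

```python
import numpy as np

def stability_margin_mc(nominal_fn, param_bounds, n_samples=5000, seed=0):
    """Monte Carlo screening of parametric uncertainty for a linear model.

    Samples parameters uniformly within bounds, builds the state matrix, and
    returns the fraction of samples whose eigenvalues all lie in the open
    left half-plane (stable). A crude complement to a formal Mu analysis.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(param_bounds).T
    stable = 0
    for _ in range(n_samples):
        p = rng.uniform(lo, hi)
        A = nominal_fn(p)
        if np.max(np.linalg.eigvals(A).real) < 0:
            stable += 1
    return stable / n_samples

# Toy 2-state aeroelastic-style model: p = (stiffness, damping), both uncertain.
def state_matrix(p):
    k, c = p
    return np.array([[0.0, 1.0],
                     [-k, -c]])

# +/-30% uncertainty around nominal k=4, c=0.4 (illustrative numbers).
bounds = [(2.8, 5.2), (0.28, 0.52)]
print("stable fraction:", stability_margin_mc(state_matrix, bounds))
```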
350

Modification of the Cal Poly Spacecraft Simulator System for Robust Control Law Verification

Kato, Tomoyuki 01 June 2014 (has links) (PDF)
The Cal Poly Spacecraft Dynamics Simulator, also known as the Pyramidal Reaction Wheel Platform (PRWP), is an air-bearing, four-reaction-wheel spacecraft simulator designed to simulate the low-gravity, frictionless conditions of the space environment and to test and validate spacecraft attitude control hardware and control laws through real-time motion tests. The PRWP system was modified to the new Mk.III configuration, which adopted the MATLAB xPC kernel for better real-time hardware control. The Litton LN-200 IMU was also integrated onto the PRWP, replacing the previous attitude sensor. Through the comparison of various control laws in motion tests, the Mk.III configuration was evaluated for robust control law verification capability. Two fixed-gain controllers, full-state feedback (FSFB) and a linear quadratic regulator with set-point control (LQRSP), and two adaptive controllers, a nonlinear direct model reference adaptive controller (NDMRAC) and adaptive output feedback (AOF), were each tested in three different cases of varying plant parameters to assess controller robustness through real-time motion tests. The first two test cases simulate PRWP inertia tensor variations. The third test case simulates uncertainty in the reaction wheel dynamics by slowing the response time of one of the four reaction wheels. The Mk.III motion tests were also compared with numerical simulations as well as the older Mk.II motion tests to confirm controller validation capability. The Mk.III test results confirmed certain patterns from the numerical simulations and the Mk.II test results. The test case simulating actuator dynamics uncertainty had the greatest effect on controller performance, as all four control laws experienced an increase in steady-state error. The Mk.III test results also confirmed that the NDMRAC outperformed the fixed-gain controllers.
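Of the controllers listed, the LQR is the simplest to sketch. Assuming a toy single-axis attitude model (the actual PRWP plant, inertia tensor, and gains are not given in this abstract), the following computes an LQR gain and spot-checks closed-loop stability under inertia mismatch, in the spirit of the inertia-variation test cases:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr(A, B, Q, R):
    """Continuous-time LQR gain: u = -Kx minimizing the quadratic cost.

    Solves the algebraic Riccati equation and returns K = R^{-1} B' P.
    """
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Toy single-axis spacecraft attitude model: state = [angle, rate],
# reaction wheel torque as input. The inertia value is illustrative only.
J = 1.2                                # kg m^2, hypothetical
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0 / J]])
K = lqr(A, B, Q=np.diag([10.0, 1.0]), R=np.array([[0.1]]))
print("gain:", K)

# Robustness spot-check in the spirit of the motion tests: perturb the
# inertia and confirm the closed loop built with the true plant stays stable.
for J_true in (0.7 * J, 1.3 * J):
    B_true = np.array([[0.0], [1.0 / J_true]])
    eig = np.linalg.eigvals(A - B_true @ K)
    print(f"J={J_true:.2f}: stable={np.all(eig.real < 0)}")
```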
