61

A Monte Carlo Study of the Robustness and Power Associated with Selected Tests of Variance Equality when Distributions are Non-Normal and Dissimilar in Form

Hardy, James C. (James Clifford) 08 1900 (has links)
When selecting a method for testing variance equality, a researcher should choose one that is robust to distribution non-normality and dissimilarity and that possesses sufficient power to detect departures from the equal-variance hypothesis. This Monte Carlo study examined the robustness and power of five tests of variance equality under specific conditions. The tests examined included one procedure proposed by O'Brien (1978), two by O'Brien (1979), and two by Conover, Johnson, and Johnson (1981). Specific conditions included assorted combinations of the following factors: k=2 and k=3 groups, normal and non-normal distributional forms, similar and dissimilar distributional forms, and equal and unequal sample sizes. Under the k=2 group condition, a total of 180 combinations were examined; a total of 54 combinations were examined under the k=3 group condition. The Type I error rates and statistical power estimates were based upon 1000 replications for each combination examined. Results of this study suggest that when sample sizes are relatively large, all five procedures are robust to distribution non-normality and dissimilarity, as well as being sufficiently powerful.
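The evaluation scheme this abstract describes, estimating Type I error rates and power from 1000 replications per condition, can be sketched in a few lines. The Python fragment below is an illustrative stand-in, not the thesis's code: scipy's median-centered Levene test is used in place of the O'Brien and Conover procedures, and the sample sizes, distributional forms, and variance ratio are assumptions chosen only for the example.

```python
import numpy as np
from scipy import stats

def rejection_rate(n1=20, n2=20, sd2=1.0, reps=1000, alpha=0.05, seed=0):
    """Share of replications in which the equal-variance hypothesis is
    rejected; sd2 = 1.0 estimates the Type I error rate, sd2 > 1 the power."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = rng.normal(0.0, 1.0, n1)                  # normal form, sd = 1
        y = sd2 * (rng.exponential(1.0, n2) - 1.0)    # skewed form, sd = sd2
        _, p = stats.levene(x, y, center="median")
        rejections += p < alpha
    return rejections / reps

print("Type I error estimate:", rejection_rate(sd2=1.0))
print("Power estimate (sd ratio 2):", rejection_rate(sd2=2.0))
```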
62

A failure detection system design methodology

Chow, Edward Yik January 1981 (has links)
Thesis (Sc.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1981. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND ENGINEERING. / Includes bibliographical references. / by Edward Yik Chow. / Sc.D.
63

Robust methods of finite element analysis : evaluation of non-linear, lower bound limit loads of plated structures and stiffening members /

Ralph, Freeman E., January 2000 (has links)
Thesis (M.Eng.)--Memorial University of Newfoundland, 2000. / Bibliography: p. 132-135.
64

Uncalibrated Vision-Based Control and Motion Planning of Robotic Arms in Unstructured Environments

Shademan, Azad Unknown Date
No description available.
65

Essays on theories and applications of spatial econometric models

Lin, Xu, January 2006 (has links)
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 114-119).
66

Signal Processing and Robust Statistics for Fault Detection in Photovoltaic Arrays

January 2012 (has links)
abstract: Photovoltaics (PV) is an important and rapidly growing area of research. With the advent of power system monitoring and communication technology collectively known as the "smart grid," an opportunity exists to apply signal processing techniques to monitoring and control of PV arrays. In this paper a monitoring system which provides real-time measurements of each PV module's voltage and current is considered. A fault detection algorithm formulated as a clustering problem and addressed using the robust minimum covariance determinant (MCD) estimator is described; its performance on simulated instances of arc and ground faults is evaluated. The algorithm is found to perform well on many types of faults commonly occurring in PV arrays. Among several types of detection algorithms considered, only the MCD shows high performance on both types of faults. / Dissertation/Thesis / M.S. Electrical Engineering 2012
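The detection step described above, fitting a robust MCD estimate to per-module measurements and flagging points with large robust distances, can be sketched with scikit-learn's MinCovDet. This is a minimal illustration under stated assumptions, not the thesis's algorithm; the simulated (voltage, current) features, the fault signature, and the chi-square cutoff are all assumptions made only for the example.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)

# Simulated per-module (voltage, current) operating points for a healthy array.
healthy = rng.multivariate_normal(mean=[30.0, 8.0],
                                  cov=[[0.4, 0.05], [0.05, 0.1]],
                                  size=200)

# A few faulted modules with depressed voltage and current (illustrative only).
faulted = rng.multivariate_normal(mean=[22.0, 5.0],
                                  cov=[[0.4, 0.05], [0.05, 0.1]],
                                  size=10)
X = np.vstack([healthy, faulted])

mcd = MinCovDet(random_state=0).fit(X)
d2 = mcd.mahalanobis(X)                      # squared robust distances
threshold = chi2.ppf(0.975, df=X.shape[1])   # common 97.5% chi-square cutoff
fault_flags = d2 > threshold
print(f"{fault_flags.sum()} of {len(X)} measurements flagged as faults")
```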
67

Využití numerické lineární algebry k urychlení výpočtu odhadů MCD / Exploiting numerical linear algebra to accelerate the computation of the MCD estimator

Sommerová, Kristýna January 2018 (has links)
This work deals with speeding up the algorithmization of the MCD estimator for detecting the mean and the covariance matrix of normally distributed multivariate data contaminated with outliers. First, the main idea of the estimator and its well-known approximation by the FastMCD algorithm are discussed. The main focus is placed on possibilities for speeding up the iteration step known as the C-step while maintaining the quality of the estimates. This proved to be problematic, if not impossible. The work therefore aims at creating a new implementation based on the C-step and the Jacobi method for eigenvalues. The proposed JacobiMCD algorithm is compared to FastMCD in terms of floating-point operation counts and results. In conclusion, JacobiMCD is not found to be fully equivalent to FastMCD, but it hints at the possibility of its use on larger problems. The numerical experiments suggest that the computation can indeed be quicker by an order of magnitude, while the quality of the results is close to that of FastMCD in some settings.
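For readers unfamiliar with the C-step that FastMCD iterates and that this thesis seeks to accelerate, a bare NumPy sketch follows. It uses a single random start and a fixed subset size h; real FastMCD uses many starts, partitioning of large data sets, and reweighting, none of which are shown here.

```python
import numpy as np

def c_step(X, subset_idx):
    """One concentration step: refit on the subset, then keep the h points
    with the smallest squared Mahalanobis distances to the refitted fit."""
    h = len(subset_idx)
    mean = X[subset_idx].mean(axis=0)
    cov = np.cov(X[subset_idx], rowvar=False)
    diff = X - mean
    d2 = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return np.argsort(d2)[:h]

def mcd_estimate(X, h, n_steps=20, seed=None):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=h, replace=False)
    for _ in range(n_steps):
        new_idx = c_step(X, idx)
        if set(new_idx) == set(idx):   # determinant can no longer decrease
            break
        idx = new_idx
    return X[idx].mean(axis=0), np.cov(X[idx], rowvar=False)

# Example: 5% of the points are gross outliers.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (95, 3)), rng.normal(8, 1, (5, 3))])
mean, cov = mcd_estimate(X, h=75, seed=1)
print("robust mean:", np.round(mean, 2))
```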
68

Segmentation de processus avec un bruit autorégressif / Segmenting processes with an autoregressive noise

Chakar, Souhil 22 September 2015 (has links)
Nous proposons d’étudier la méthodologie de la segmentation de processus avec un bruit autorégressif sous ses aspects théoriques et pratiques. Par « segmentation » on entend ici l’inférence de points de rupture multiples correspondant à des changements abrupts dans la moyenne de la série temporelle. Le point de vue adopté est de considérer les paramètres de l’autorégression comme des paramètres de nuisance, à prendre en compte dans l’inférence dans la mesure où cela améliore la segmentation. D’un point de vue théorique, le but est de conserver un certain nombre de propriétés asymptotiques de l’estimation des points de rupture et des paramètres propres à chaque segment. D’un point de vue pratique, on se doit de prendre en compte les limitations algorithmiques liées à la détermination de la segmentation optimale. La méthode proposée, doublement contrainte, est basée sur l’utilisation de techniques d’estimation robuste permettant l’estimation préalable des paramètres de l’autorégression, puis la décorrélation du processus, permettant ainsi de s’approcher du problème de la segmentation dans le cas d’observations indépendantes. Cette méthode permet l’utilisation d’algorithmes efficaces. Elle est assise sur des résultats asymptotiques que nous avons démontrés. Elle permet de proposer des critères de sélection du nombre de ruptures adaptés et fondés. Une étude de simulations vient l’illustrer. / We propose to study the segmentation of processes with autoregressive noise in both its theoretical and practical aspects. Here, “segmentation” means inferring multiple change-points corresponding to abrupt shifts in the mean of the time series. We treat the autoregression parameters as nuisance parameters, to be accounted for in the inference only insofar as this improves the segmentation. From a theoretical point of view, we aim to retain a number of asymptotic properties of the estimators of the change-points and of the segment-specific parameters. From a practical point of view, we must take into account the algorithmic constraints involved in determining the optimal segmentation. To meet these two requirements, the proposed method relies on robust estimation techniques for a preliminary estimation of the autoregression parameters, followed by decorrelation of the process, which brings the problem closer to segmentation with independent observations. This method allows the use of efficient algorithms. It rests on asymptotic results that we prove, and it yields adapted, well-founded criteria for selecting the number of change-points. A simulation study illustrates the method.
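A toy version of the two-stage idea, estimate the autoregression, decorrelate, then segment as if the observations were independent, is sketched below. Moment-based estimation on the differenced series stands in for the robust estimation developed in the thesis, only an AR(1) model is assumed, and only a single change-point is searched for; all numerical settings are illustrative.

```python
import numpy as np

def estimate_phi(x):
    """AR(1) coefficient from the lag-1 autocorrelation of the differenced
    series (for AR(1) noise this autocorrelation equals -(1 - phi)/2), a
    crude stand-in that is largely insensitive to isolated mean shifts."""
    d = np.diff(x)
    d = d - d.mean()
    r1 = np.dot(d[1:], d[:-1]) / np.dot(d, d)
    return 1.0 + 2.0 * r1

def best_single_change(y):
    """Index minimizing the within-segment sum of squares around two means."""
    costs = [((y[:t] - y[:t].mean()) ** 2).sum()
             + ((y[t:] - y[t:].mean()) ** 2).sum()
             for t in range(2, len(y) - 2)]
    return 2 + int(np.argmin(costs))

# Simulate AR(1) noise with a mean shift of +3 at t = 150.
rng = np.random.default_rng(2)
noise = np.zeros(300)
for t in range(1, 300):
    noise[t] = 0.6 * noise[t - 1] + rng.normal()
x = noise + np.where(np.arange(300) >= 150, 3.0, 0.0)

phi = estimate_phi(x)
y = x[1:] - phi * x[:-1]          # decorrelated (prewhitened) series
print("estimated AR coefficient:", round(phi, 2))
print("estimated change-point:", best_single_change(y) + 1)  # index in x
```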
69

New design comparison criteria in Taguchi's robust parameter design

Savarese, Paul Tenzing 06 June 2008 (has links)
Choice of an experimental design is an important concern for most researchers. Judicious selection of an experimental design is also a weighty matter in Robust Parameter Design (RPD). RPD seeks to choose the levels of fixed controllable variables that provide insensitivity (robustness) to the variability of a process induced by uncontrollable noise variables. We use the fact that in the RPD scenario interest lies primarily in the ability of a design to estimate the noise and control-by-noise interaction effects in the fitted model. These effects allow for effective estimation of the process variance, an understanding of which is necessary to achieve the goals of RPD. Possible designs for use in RPD are quite numerous. Standard designs such as crossed array designs, Plackett-Burman designs, combined array factorial designs and many second-order designs all vie for a place in the experimenter's tool kit. New criteria are developed based on classical optimality criteria for judging various designs with respect to their performance in RPD. Many different designs are studied and compared. Several first-order and many second-order designs such as the central composite designs, Box-Behnken designs, and hybrid designs are studied and compared via our criteria. Numerous scenarios involving different models and designs are considered; results and conclusions are presented regarding which designs are preferable for use in RPD. Also, a new design rotatability entity is introduced. Optimality conditions with respect to our criteria are studied. For designs which are rotatable by our new rotatability entity, conditions are given which lead to optimality for a number of the new design comparison criteria. Finally, a sequential design-augmentation algorithm was developed and programmed on a computer. By cultivating a unique mechanism, the algorithm implements a Ds-optimal strategy in selecting candidate points. Ds-optimality is likened to D-optimality on a subset of model parameters and is naturally suited to the RPD scenario. The algorithm can be used in either a sequential design-augmentation scenario or in a design-building scenario. Especially useful when a standard design does not exist to match the number of runs available to the researcher, the algorithm can be used to generate a design of the requisite size that should perform well in RPD. / Ph. D.
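The greedy candidate-selection idea behind such a Ds-optimal augmentation scheme can be illustrated in a few lines: among candidate runs, add the one that most increases det(M)/det(M22), where M is the full information matrix and M22 the block for the nuisance parameters. The two-factor control-by-noise model and the candidate grid below are illustrative assumptions, not the algorithm developed in the dissertation.

```python
import itertools
import numpy as np

def model_matrix(runs):
    """Columns: intercept, x (control), z (noise), x*z interaction."""
    x, z = runs[:, 0], runs[:, 1]
    return np.column_stack([np.ones(len(runs)), x, z, x * z])

def ds_criterion(X, interest_cols):
    """det(X'X) / det(X2'X2), with X2 the nuisance-parameter columns."""
    nuisance = [j for j in range(X.shape[1]) if j not in interest_cols]
    M = X.T @ X
    M22 = M[np.ix_(nuisance, nuisance)]
    return np.linalg.det(M) / np.linalg.det(M22)

def augment(current_runs, candidates, interest_cols, n_add):
    runs = current_runs.copy()
    for _ in range(n_add):
        scores = [ds_criterion(model_matrix(np.vstack([runs, c])), interest_cols)
                  for c in candidates]
        runs = np.vstack([runs, candidates[int(np.argmax(scores))]])
    return runs

# Start from a 2^2 factorial and add two runs from a 3-level candidate grid,
# targeting the noise main effect and control-by-noise interaction (cols 2, 3).
base = np.array(list(itertools.product([-1.0, 1.0], repeat=2)))
grid = np.array(list(itertools.product([-1.0, 0.0, 1.0], repeat=2)))
design = augment(base, grid, interest_cols=[2, 3], n_add=2)
print(design)
```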
70

A unified decision analysis framework for robust system design evaluation in the face of uncertainty

Duan, Chunming 06 June 2008 (has links)
Some engineered systems now in use are not adequately meeting the needs for which they were developed, nor are they very cost-effective in terms of consumer utilization. Many problems associated with unsatisfactory system performance and high life-cycle cost are the direct result of decisions made during early phases of system design. To develop quality systems, both engineering and management need fundamental principles and methodologies to guide decision making during system design and advanced planning. In order to provide for the efficient resolution of complex system design decisions involving uncertainty, human judgments, and value tradeoffs, an efficient and effective decision analysis framework is required. Experience indicates that an effective approach to improving the quality of detail designs is through the application of Genichi Taguchi's philosophy of robust design. How to apply Taguchi's philosophy of robust design to system design evaluation at the preliminary design stage is an open question. The goal of this research is to develop a unified decision analysis framework to support the need for developing better system designs in the face of various uncertainties. This goal is accomplished by adapting and integrating statistical decision theory, utility theory, elements of the systems engineering process, and Taguchi's philosophy of robust design. The result is a structured, systematic methodology for evaluating system design alternatives. The decision analysis framework consists of two parts: (1) decision analysis foundations, and (2) an integrated approach. Part I (Chapters 2 through 5) covers the foundations for design decision analysis in the face of uncertainty. This research begins with an examination of the life cycle of engineered systems and identification of the elements of the decision process of system design and development. After investigating various types of uncertainty involved in the process of system design, the concept of robust design is defined from the perspective of system life-cycle engineering. Some common measures for assessing the robustness of candidate system designs are then identified and examined. Then the problem of design evaluation in the face of uncertainty is studied within the context of decision theory. After classifying design decision problems into four categories, the structure of each type of problem, in terms of the sequence and causal relationships between various decisions and uncertain outcomes, is represented by a decision tree. Based upon statistical decision theory, the foundations for choosing a best design in the face of uncertainty are identified. The assumptions underlying common objective functions in design optimization are also investigated. Some confusion and controversy which surround Taguchi's robust design criteria (loss functions and signal-to-noise ratios) are addressed and clarified. Part II (Chapters 6 through 9) covers models and their application to design evaluation in the face of uncertainty. Based upon the decision analysis foundations, an integrated approach is developed and presented for resolving discrete decisions, continuous decisions, and decisions involving both uncertainty and multiple attributes. Application of the approach is illustrated by two hypothetical examples: bridge design and repairable equipment population system design. / Ph. D.
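The decision-theoretic core of such a framework, choosing the design alternative that maximizes expected utility over uncertain outcomes, reduces to a very small computation once probabilities and utilities have been elicited. The alternatives, probabilities, and utilities below are purely illustrative assumptions, not taken from the dissertation.

```python
# Each design alternative maps to (probability, utility) pairs for its
# uncertain outcomes; the preferred design maximizes expected utility.
designs = {
    "baseline": [(0.7, 0.9), (0.3, 0.4)],
    "robust":   [(0.9, 0.8), (0.1, 0.5)],
    "low_cost": [(0.5, 1.0), (0.5, 0.2)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(designs, key=lambda d: expected_utility(designs[d]))
for name, outcomes in designs.items():
    print(f"{name:9s} expected utility = {expected_utility(outcomes):.2f}")
print("preferred design:", best)
```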
