  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Microcomputer implementation of robust regression techniques /

Detwiler, Dana. January 1993 (has links)
Report (M.S.)--Virginia Polytechnic Institute and State University. M.S. 1993. / Abstract. Includes bibliographical references (leaf 54). Also available via the Internet.
22

The robust optimization of non-linear requirements models

Gay, Gregory January 2010 (has links)
Thesis (M.S.)--West Virginia University, 2010. / Title from document title page. Document formatted into pages; contains ix, 128 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 93-102).
23

Optimal mixed-level robust parameter designs

Hu, Jingjing 13 January 2016 (has links)
In this thesis, we propose a methodology for choosing optimal mixed-level fractional factorial robust parameter designs when experiments involve both qualitative and quantitative factors. We begin with a brief review of fractional factorial designs and two-level robust parameter designs to help the reader understand our method, and introduce the minimum aberration criterion, one of the most commonly used criteria for design selection. We then modify this criterion and develop two generalized minimum aberration criteria for selecting optimal mixed-level fractional factorial robust parameter designs. Finally, we implement an efficient computer program, construct a catalogue of 18-run optimal designs, and present some results. / February 2016
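As a toy illustration of the minimum aberration criterion in the simpler two-level regular case (a simplification of the mixed-level setting the thesis treats), the word-length pattern of a design can be computed from its generator words; the example design and generators below are illustrative, not taken from the thesis.

```python
from itertools import combinations

def word_length_pattern(generator_words, n_factors):
    """Word-length pattern (A3, A4, ...) of a regular two-level
    fractional factorial design. Each generator word is a frozenset of
    factor letters, e.g. frozenset('ABD') for the generator D = AB
    (defining word I = ABD)."""
    words = []
    for r in range(1, len(generator_words) + 1):
        for combo in combinations(generator_words, r):
            w = frozenset()
            for g in combo:
                w = w ^ g  # product of defining words = symmetric difference
            words.append(w)
    pattern = [0] * (n_factors + 1)
    for w in words:
        pattern[len(w)] += 1
    return pattern[3:]  # counts of defining words of length 3, 4, ...

# 2^(5-2) design with generators D = AB and E = AC:
# defining relation I = ABD = ACE = BCDE, so (A3, A4, A5) = (2, 1, 0)
wlp = word_length_pattern([frozenset('ABD'), frozenset('ACE')], 5)
```

Minimum aberration selects the design whose pattern is lexicographically smallest; generalized criteria, such as those the thesis develops, replace these integer counts with quantities defined for mixed-level and non-regular designs.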
24

Design of generalized PID controllers for linear multivariable plants

Boddy, C. L. January 1988 (has links)
No description available.
25

Extending linear grouping analysis and robust estimators for very large data sets

Harrington, Justin 11 1900 (has links)
Cluster analysis is the study of how to partition data into homogeneous subsets so that the partitioned data share some common characteristic. In one to three dimensions, the human eye can readily distinguish clusters of data when they are clearly separated. However, when there are more than three dimensions and/or the data are not clearly separated, an algorithm is required, equipped with a similarity metric that quantitatively measures the characteristic of interest. Linear Grouping Analysis (LGA, Van Aelst et al. 2006) is an algorithm for clustering data around hyperplanes, and is most appropriate when: 1) the variables are related/correlated, which results in clusters with an approximately linear structure; and 2) it is not natural to assume that one variable is a “response” and the remainder the “explanatories”. LGA measures the compactness within each cluster via the sum of squared orthogonal distances to hyperplanes formed from the data. In this dissertation, we extend the scope of problems to which LGA can be applied. The first extension relates to the linearity requirement inherent in LGA, and proposes a new method of non-linearly transforming the data into a feature space, using the kernel trick, such that in this space the data might then form linear clusters. A possible side effect of this transformation is that the dimension of the transformed space can be significantly larger than the number of observations in a given cluster, which causes problems with orthogonal regression. Therefore, we also introduce a new method for calculating the distance of an observation to a cluster when its covariance matrix is rank deficient. The second extension concerns the combinatorial problem of optimizing an LGA objective function, and adapts an existing algorithm, called BIRCH, to provide fast, approximate solutions, particularly when the data do not fit in memory.
We also provide solutions based on BIRCH for two other challenging optimization problems in the field of robust statistics, and demonstrate, via simulation studies as well as applications to real data sets, that the BIRCH solution compares favourably with existing state-of-the-art alternatives and in many cases finds a better solution. / Science, Faculty of / Statistics, Department of / Graduate
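The compactness measure at the heart of LGA — the sum of squared orthogonal distances from the points of a cluster to their best-fitting hyperplane — can be sketched via the singular value decomposition of the centered data. This is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def orthogonal_ss(X):
    """Sum of squared orthogonal distances from the rows of X to the
    hyperplane fitted through them by orthogonal regression.
    The optimal hyperplane passes through the mean with normal equal to
    the last right-singular vector, so the residual sum of squares is
    the smallest squared singular value of the centered data."""
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    return s[-1] ** 2

# Points lying exactly on a line in the plane have zero orthogonal residual;
# perturbing one point off the line makes the residual strictly positive.
on_line = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0], [3.0, 6.0]])
off_line = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0], [3.0, 0.0]])
```

An LGA-style iteration would assign each observation to the hyperplane with the smallest orthogonal distance and refit, minimizing the total of this quantity over clusters; the rank-deficient case discussed above arises when a cluster has fewer observations than dimensions, so the smallest singular value is identically zero.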
26

An Adjustable Robust Optimization Approach to Multi-objective Personnel Scheduling Under Uncertain Demand: A Case Study at a Pathology Department

Mahdavi, Roshanak 11 September 2020 (has links)
In this thesis, we address a multi-objective personnel scheduling problem in which personnel workload is uncertain, and propose a two-stage robust modelling approach with demand uncertainty. In the first stage, we model a multi-objective personnel scheduling problem without incorporating demand coverage; in the second stage, we minimize over- or under-staffing after the realization of the demand, given the assignments from the first stage. Two solution approaches are introduced for this model. The first solves the proposed model through a cutting-plane strategy known as the Benders dual cutting-plane method, and the second reformulates the problem based on strong duality theory. As a case study, the proposed model and the first solution approach are applied to an existing scheduling problem in the pathology department at The Ottawa Hospital. The proposed model is shown to reduce unmet demand while maintaining performance with respect to other metrics when compared against the deterministic alternative.
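The second-stage recourse in a model of this kind — penalizing over- or under-staffing once demand is realized — can be illustrated with a toy computation; the penalty weights and shift structure here are hypothetical, not from the thesis:

```python
def staffing_penalty(coverage, demand, c_under=2.0, c_over=1.0):
    """Second-stage cost after demand is realized, given first-stage
    staffing decisions. Unmet demand is typically penalized more
    heavily than idle capacity (hypothetical weights)."""
    under = sum(max(0, d - s) for s, d in zip(coverage, demand))
    over = sum(max(0, s - d) for s, d in zip(coverage, demand))
    return c_under * under + c_over * over

# Three shifts staffed with (3, 2, 4) people against realized demand (4, 1, 4):
# one person short on shift 1, one person idle on shift 2.
cost = staffing_penalty([3, 2, 4], [4, 1, 4])
```

An adjustable robust formulation chooses the first-stage schedule to minimize the worst such recourse cost over a demand uncertainty set, which is where the Benders-style cutting planes or the strong-duality reformulation come in.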
27

Robust Statistical Methods for Measurement Calibration in Large Electric Power Systems

Ghassemian, Alireza 14 October 1997 (has links)
The objective of the Remote Measurements Calibration (RMC) method is to minimize systematic errors through an appropriate scaling procedure. A new method for RMC has been developed. This method solves the problems of observability, multiplicity of solutions, and ambiguity of reference points associated with the method proposed by Adibi et al. [6-9]. The new algorithm uses the simulated annealing technique together with the matroid method to identify and minimize the number of RTUs (Remote Terminal Units) required to observe the system. After field calibration, these RTUs provide measurements that are used to estimate the whole state of the system. These estimates then serve as a reference for remotely calibrating the remaining RTUs. The calibration coefficients are estimated by means of a highly robust estimator, namely the Least Median of Squares (LMS) estimator. The calibration method is applicable to large systems by means of network tearing and dynamic programming. The number of field calibrations can be decreased further whenever multiple voltage measurements at the same buses are available. The procedure requires that the measurement biases be estimated from recorded metered values when buses, lines, or transformers are disconnected. It also requires the application of a robust comparative voltage calibration method. To this end, a modified Friedman test has been developed and its robustness characteristics investigated. / Ph. D.
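The Least Median of Squares estimator minimizes the median of the squared residuals rather than their sum, which lets it tolerate gross errors in up to roughly half the data. A minimal random-subsampling sketch for a simple linear fit (an illustration of the estimator, not the dissertation's implementation) is:

```python
import random
import numpy as np

def lms_fit(x, y, n_trials=500, seed=0):
    """Approximate Least Median of Squares line y = a + b*x.
    Candidate lines through random point pairs are scored by the
    median squared residual over all points; the best candidate wins."""
    rng = random.Random(seed)
    best = (float('inf'), 0.0, 0.0)
    for _ in range(n_trials):
        i, j = rng.sample(range(len(x)), 2)
        if x[i] == x[j]:
            continue  # vertical candidate line; skip
        b = (y[j] - y[i]) / (x[j] - x[i])
        a = y[i] - b * x[i]
        med = np.median([(yk - (a + b * xk)) ** 2 for xk, yk in zip(x, y)])
        if med < best[0]:
            best = (med, a, b)
    return best[1], best[2]

# 7 of 10 points follow y = 2x exactly; 3 gross outliers do not.
x = list(range(10))
y = [2.0 * v for v in x]
y[0], y[4], y[9] = 100.0, -50.0, 0.0
a, b = lms_fit(x, y)
```

Because the median ignores the largest residuals, the recovered line tracks the uncontaminated majority — the property that makes LMS suitable for calibration data containing occasional gross measurement errors.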
28

Robust Image Transmission over Wireless CDMA Channels Using Combined Error-Resilient Source Coding and Channel Error Control

Aïssa, Sonia January 1998 (has links)
No description available.
29

Robust controller design for lightly damped systems with feedback delay

Hung, Donald Lu-Cheng January 1991 (has links)
No description available.
30

Watermarking With Wavelet Transforms

Parker, Kristen Michelle 15 December 2007 (has links)
Digital watermarking algorithms based on wavelet transforms provide increased performance and perceptual quality. This thesis proposes two wavelet-based schemes: one robust and one fragile. Robust watermarks should withstand attacks, such as compression, while maintaining data integrity. The first approach presented is an algorithm that implements image watermarking in the domain of an overcomplete, or redundant, wavelet transform. Fragile watermarks, by contrast, are intended for applications in which any loss of image quality is unacceptable. In the second approach, data embedding in the domain of an integer wavelet transform is considered: an algorithm is proposed that uses a bilevel image coder to compress a chosen bitplane, thereby providing space in which to store a payload while guaranteeing perfect image recovery.
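As a toy sketch of additive watermarking in a wavelet domain — using a hand-rolled one-level Haar transform rather than the overcomplete transform the thesis proposes — a pseudo-random ±1 pattern can be added to the diagonal detail subband and detected by correlation with the known pattern:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns LL, LH, HL, HH subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def embed(img, wm, alpha=2.0):
    ll, lh, hl, hh = haar2(img)
    return ihaar2(ll, lh, hl, hh + alpha * wm)  # mark the HH subband

def detect(img, wm):
    hh = haar2(img)[3]
    return float(np.sum(hh * wm))  # correlate with the known pattern

rng = np.random.default_rng(1)
host = rng.uniform(0.0, 255.0, size=(8, 8))
wm = rng.choice([-1.0, 1.0], size=(4, 4))  # pseudo-random +/-1 watermark
marked = embed(host, wm)
```

Robustness comes from spreading the mark across many detail coefficients, so mild distortion only attenuates the correlation; a fragile scheme would instead embed in an integer transform so that any tampering destroys exact recovery.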
