31

Effective design augmentation for prediction

Rozum, Michael A. 03 August 2007 (has links)
In a typical response surface study, an experimenter will fit a first order model in the early stages of the study and obtain the path of steepest ascent. The path leads the experimenter out of this initial region of interest and into a new region of interest. The experimenter may fit another first order model here or, if curvature is believed to be present in the underlying system, a second order model. In the final stages of the study, the experimenter fits a second order model and typically contracts the region of interest as the levels of the factors that optimize the response are nearly determined. Due to the sequential nature of experimentation in a typical response surface study, the experimenter may want to augment an initial design with additional runs within the current region of interest. The little discussion that exists in the statistical literature suggests adding runs sequentially in a conditional D-optimal manner. Four prediction-oriented criteria, I_IV, I_SVr, I_SVr^ADJ, and G, and two estimation-oriented criteria, A and E, are studied here as other possible sequential design augmentation optimality criteria. Analytical properties of I_IV, I_SVr, and A are developed within the context of the design augmentation problem. I_SVr is found to be somewhat ineffective in actual sequential design augmentation situations. A new, more effective criterion, I_SVr^ADJ, is introduced and thoroughly developed. Software is developed which allows sequential design augmentation via these seven criteria. Unlike existing design augmentation software, all locations within the current region of interest are eligible for inclusion in the augmenting design (a continuous candidate list). Case studies were performed. For a first order model there was negligible difference in the prediction variance properties of the designs generated via sequential augmentation by D and by the best of the other criteria, I_IV, I_SVr^ADJ, and A. For a second order model, however, the designs generated via sequential augmentation by D place too few runs too late in the interior of the region of interest. Thus, designs generated via sequential augmentation by D yield inferior prediction variance properties to the designs generated via I_IV, I_SVr^ADJ, and A. The D-efficiencies of the designs generated via sequential augmentation by I_IV, I_SVr^ADJ, and A range from reasonable to fully D-optimal. Therefore, the I_IV and I_SVr^ADJ optimality criteria are recommended for sequential design augmentation when quality of prediction is more important than quality of estimation of the coefficients. / Ph. D.
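The criteria compared above are functionals of the augmented design's information matrix. As a hedged illustration of the idea (not the dissertation's software), the sketch below augments a 2^2 factorial with the single run that minimizes an IV-type average prediction variance over the region of interest; the first-order two-factor model, the grid stand-in for the continuous candidate list, and all numbers are assumptions for illustration only.

```python
# Illustrative sketch only (not the dissertation's software): choose one
# augmenting run for an existing first-order design by minimizing the
# average (integrated) prediction variance over the region of interest,
# approximated here by a grid of candidate points.
import numpy as np
from itertools import product

def model_matrix(points):
    """First-order model in two coded factors: columns [1, x1, x2]."""
    pts = np.atleast_2d(points)
    return np.column_stack([np.ones(len(pts)), pts])

def avg_prediction_variance(X, region):
    """N * average of x'(X'X)^-1 x over the region (an IV-type measure)."""
    XtX_inv = np.linalg.inv(X.T @ X)
    F = model_matrix(region)
    return X.shape[0] * np.mean(np.sum((F @ XtX_inv) * F, axis=1))

design = np.array([[-1.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])  # 2^2 factorial
grid = np.array(list(product(np.linspace(-1, 1, 21), repeat=2)))         # candidate list

scores = [avg_prediction_variance(model_matrix(np.vstack([design, c])), grid)
          for c in grid]
best = grid[int(np.argmin(scores))]
print("augmenting run (coded units):", best)
```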
32

Robust parameter optimization strategies in computer simulation experiments

Panis, Renato P. 06 June 2008 (has links)
An important consideration in computer simulation studies is the issue of model validity, the level of accuracy with which the simulation model represents the real world system under study. This dissertation addresses a major cause of model validity problems: the dissimilarity between the simulation model and the real system due to the dynamic nature of the real system that results from the presence of nonstationary stochastic processes within the system. This transitory characteristic of the system is typically not addressed in the search for an optimal solution. In reliability and quality control studies, it is known that optimizing with respect to the variance of the response is as important a concern as optimizing with respect to the average performance response. Genichi Taguchi has been instrumental in the advancement of this philosophy. His work has resulted in what is now popularly known as the Taguchi Methods for robust parameter design. Following Taguchi's philosophy, the goal of this research is to devise a framework for finding optimum operating levels for the controllable input factors in a stochastic system that are insensitive to internal sources of variation. Specifically, the model validity problem of nonstationary system behavior is viewed as a major internal cause of system variation. In this research the typical application of response surface methodology (RSM) to the problem of simulation optimization is examined. Simplifying assumptions that enable the use of RSM techniques are examined. The relaxation of these assumptions to address model validity leads to a modification of the RSM approach to properly handle the problem of optimization in the presence of nonstationarity. Taguchi's strategy and methods are then adapted and applied to this problem. Finally, dual-response RSM extensions of the Taguchi approach, which separately model the process performance mean and variance, are considered and suitably revised to address the same problem. A second cause of model validity problems is also considered: the random behavior of the supposedly controllable input factors to the system. A resolution to this source of model invalidity is proposed based on the methodology described above. / Ph. D.
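As a hedged sketch of the dual-response idea discussed in the abstract (invented data and target, not the dissertation's procedure), one second-order model can be fitted to the replicate means and another to the log of the replicate variances, after which control settings are chosen to minimize the predicted variance while holding the mean near a target:

```python
# Hedged sketch of dual-response robust parameter optimization: fit one
# second-order model to replicate means and another to the log of replicate
# variances, then search the region for settings that keep the mean near a
# target while minimizing predicted variance. All data, the target, and the
# tolerance are invented for illustration.
import numpy as np

def quad_terms(x):
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

points = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
reps = np.array([[10.1,  9.8, 10.3], [12.0, 12.4, 11.7], [ 9.0,  9.2,  8.9],
                 [11.2, 11.5, 11.0], [ 9.6,  9.9,  9.5], [10.4, 10.2, 10.6],
                 [ 9.4,  9.6,  9.3], [11.6, 11.9, 11.4], [10.5, 10.4, 10.6]])

F = np.array([quad_terms(p) for p in points])
beta_mean, *_ = np.linalg.lstsq(F, reps.mean(axis=1), rcond=None)
beta_lvar, *_ = np.linalg.lstsq(F, np.log(reps.var(axis=1, ddof=1)), rcond=None)

target, best, best_var = 10.5, None, np.inf
for x1 in np.linspace(-1, 1, 41):            # crude grid search over the region
    for x2 in np.linspace(-1, 1, 41):
        x = np.array([x1, x2])
        mean_hat = quad_terms(x) @ beta_mean
        var_hat = np.exp(quad_terms(x) @ beta_lvar)
        if abs(mean_hat - target) <= 0.5 and var_hat < best_var:
            best, best_var = x, var_hat
print("robust setting (coded units):", best, "predicted variance:", best_var)
```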
33

Simulation-optimization studies: under efficient simulation strategies, and a novel response surface methodology algorithm

Joshi, Shirish 06 June 2008 (has links)
While attempting to solve optimization problems, the lack of an explicit mathematical expression of the problem may preclude the application of the standard methods of optimization which prove valuable in an analytical framework. In such situations, computer simulations are used to obtain the mean response values for the required settings of the independent variables. Procedures for optimizing on the mean response values, which are in turn obtained through computer simulation experiments, are called simulation-optimization techniques. The focus of this work is on the simulation-optimization technique of response surface methodology (RSM). RSM is a collection of mathematical and statistical techniques for experimental optimization. Correlation induction strategies can be employed in RSM to achieve improved statistical inferences on experimental designs and sequential experimentation. Also, the search procedures currently employed by RSM algorithms can be improved by incorporating gradient deflection methods. This dissertation has three major goals: (a) develop analytical results to quantitatively express the gains of using the common random number (CRN) strategy of variance reduction over direct simulation (independent streams, or IS, strategy) at each stage of RSM, (b) develop a new RSM algorithm by incorporating gradient deflection methods into existing RSM algorithms, and (c) conduct extensive empirical studies to quantify: (i) the gains of using the CRN strategy over direct simulation in a standard RSM algorithm, and (ii) the gains of the new RSM algorithm over a standard existing RSM algorithm. / Ph. D.
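For readers unfamiliar with the CRN strategy, the toy sketch below (invented response function, not from the dissertation) shows why driving both simulated configurations with the same random streams shrinks the variance of the estimated difference relative to independent streams:

```python
# Hedged illustration of common random numbers (CRN): when comparing two
# configurations of a stochastic simulation, reusing the same random streams
# for both makes the noise cancel in their difference. The toy "simulation"
# below is invented purely for illustration.
import numpy as np

def simulate(theta, rng, n=200):
    """Toy stochastic response: noisy quadratic in the design variable."""
    noise = rng.normal(0.0, 1.0, n)
    return np.mean((theta - 2.0) ** 2 + noise)

diffs_is, diffs_crn = [], []
for rep in range(1000):
    # Independent streams: each configuration gets its own random numbers.
    d_is = (simulate(1.0, np.random.default_rng(2 * rep))
            - simulate(1.5, np.random.default_rng(2 * rep + 1)))
    # CRN: both configurations reuse the same seed, hence the same stream.
    d_crn = (simulate(1.0, np.random.default_rng(rep))
             - simulate(1.5, np.random.default_rng(rep)))
    diffs_is.append(d_is)
    diffs_crn.append(d_crn)

print("var(diff), independent streams:", np.var(diffs_is))
print("var(diff), common random numbers:", np.var(diffs_crn))  # much smaller
```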
34

Outliers and robust response surface designs

O'Gorman, Mary Ann January 1984 (has links)
A commonly occurring problem in response surface methodology is that of inconsistencies in the response variable. These inconsistencies, or maverick observations, are referred to here as outliers. Many models exist for describing these outliers. Two of these models, the mean shift and the variance inflation outlier models, are employed in this research. Several criteria are developed for determining when the outlying observation is detrimental to the analysis. These criteria all lead to the same condition, which is used to develop statistical tests of the null hypothesis that the outlier is not detrimental to the analysis. These results are extended to the multiple outlier case for both models. The robustness of response surface designs is also investigated. Robustness to outliers, missing data and errors in control is examined for first order models. The orthogonal designs with large second moments, such as the 2ᵏ factorial designs, are optimal in all three cases. In the second order case, robustness to outliers and to missing data is examined. Optimal design parameters are obtained by computer for the central composite, Box-Behnken, hybrid, small composite and equiradial designs. Similar results are seen for both robustness to outliers and to missing data. The central composite turns out to be the optimal design type, and of the two economical design types the small composite is preferred to the hybrid. / Ph. D.
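As a hedged illustration of the mean shift outlier model mentioned above (toy design and responses, not the dissertation's results), a suspect run can be given its own indicator column and the significance of that column's coefficient tested:

```python
# Hedged sketch of the mean-shift outlier model: the suspect observation gets
# an indicator column, and the t-statistic on that column (equivalently, the
# externally studentized residual) tests whether the shift is real.
# Design and responses are invented toy values.
import numpy as np

# 2^2 factorial with two centre runs; first-order model columns [1, x1, x2].
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1],
              [1, 0, 0], [1, 0, 0]], dtype=float)
y = np.array([8.2, 11.9, 9.1, 13.0, 10.4, 15.0])   # last run looks suspect

suspect = 5
Z = np.column_stack([X, np.eye(len(y))[:, suspect]])  # add mean-shift column

beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
resid = y - Z @ beta
dof = len(y) - Z.shape[1]
sigma2 = (resid @ resid) / dof
cov = sigma2 * np.linalg.inv(Z.T @ Z)
t_shift = beta[-1] / np.sqrt(cov[-1, -1])
print("estimated mean shift:", round(beta[-1], 2), " t-statistic:", round(t_shift, 2))
```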
35

Applications and optimization of response surface methodologies in high-pressure, high-temperature gauges

Hässig Fonseca, Santiago 05 July 2012 (has links)
High-Pressure, High-Temperature (HPHT) pressure gauges are commonly used in oil wells for pressure transient analysis. Mathematical models are used to relate input perturbation (e.g., flow rate transients) with output responses (e.g., pressure transients), and subsequently, solve an inverse problem that infers reservoir parameters. The indispensable use of pressure data in well testing motivates continued improvement in the accuracy (quality), sampling rate (quantity), and autonomy (lifetime) of pressure gauges. This body of work presents improvements in three areas of high-pressure, high-temperature quartz memory gauge technology: calibration accuracy, multi-tool signal alignment, and tool autonomy estimation. The discussion introduces the response surface methodology used to calibrate gauges, develops accuracy and autonomy estimates based on controlled tests, and where applicable, relies on field gauge drill stem test data to validate accuracy predictions. Specific contributions of this work include:
- Application of the unpaired sample t-test, a first in quartz sensor calibration, which resulted in a reduction of uncertainty in gauge metrology by a factor of 2.25, and improvements in absolute and relative tool accuracies of 33% and 56%, respectively. Greater accuracy yields more reliable data and a more sensitive characterization of well parameters.
- Post-processing of measurements from two or more tools using a dynamic time warp algorithm that mitigates gauge clock drifts (see the sketch after this list). Where manual alignment methods account only for linear shifts, the dynamic algorithm elastically corrects nonlinear misalignments accumulated throughout a job, with an accuracy limited only by the clock's time resolution.
- Empirical modeling of tool autonomy based on gauge selection, battery pack, sampling mode, and average well temperature. A first of its kind, the model distills autonomy into two independent parameters, each a function of the same two orthogonal factors: battery power capacity and gauge current consumption as functions of sampling mode and well temperature -- a premise that, for three or more gauge and battery models, reduces the design of future autonomy experiments by at least a factor of 1.5.
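A minimal sketch of the dynamic time warping idea behind the second contribution follows (the classic textbook DTW recurrence applied to synthetic signals, not the tool's code):

```python
# Hedged sketch (not the tool's code) of dynamic time warping: align two
# pressure records whose clocks drift nonlinearly by finding the
# minimum-cost elastic match between their samples.
import numpy as np

def dtw_path(a, b):
    """Classic O(n*m) DTW; returns total cost and the warping path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack to recover which samples of `a` map to which samples of `b`.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[n, m], path[::-1]

t = np.linspace(0, 1, 50)
gauge1 = np.sin(2 * np.pi * t)                 # reference record
gauge2 = np.sin(2 * np.pi * (t ** 1.2))        # same signal, drifting clock
cost, path = dtw_path(gauge1, gauge2)
print("alignment cost:", round(cost, 3), " matched pairs:", len(path))
```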
36

Analytical Fragility Curves for Highway Bridges in Moderate Seismic Zones

Nielson, Bryant G. 23 November 2005 (has links)
Historical seismic events such as the San Fernando earthquake of 1971 and the Loma Prieta earthquake of 1989 did much to highlight the vulnerabilities in many existing highway bridges. However, it was not until 1990 that this awareness extended to moderate seismic regions such as the Central and Southeastern United States (CSUS). This relatively long neglect of seismic issues pertaining to bridges in these moderate seismic zones has resulted in a portfolio of existing bridges with seismic deficiencies which must be assessed and addressed. An emerging decision tool, whose use is becoming increasingly popular in the assessment of this seismic risk, is the seismic fragility curve. Fragility curves are conditional probability statements which give the probability of a bridge reaching or exceeding a particular damage level for an earthquake of a given intensity level. As much research has been devoted to the implementation of fragility curves in risk assessment packages, a great need has arisen for bridge fragility curves which are reliable, particularly for bridges in moderate seismic zones. The purpose of this study is to use analytical methods to generate fragility curves for nine bridge classes which are most common to the CSUS. This is accomplished by first considering the existing bridge inventory and assessing typical characteristics and details from which detailed 3-D analytical models are created. The bridges are subjected to a suite of synthetic ground motions which were developed explicitly for the region. Probabilistic seismic demand models (PSDMs) are then generated using these analyses. From these PSDMs, fragility curves are generated by considering specific levels of damage which may be of interest. The fragility curves show that the most vulnerable of all nine bridge classes considered are those utilizing steel girders. Concrete girder bridges appear to be the next most vulnerable, followed by single span bridges of all types. Various sources of uncertainty are considered and tracked throughout this study, which allows for their direct implementation into existing seismic risk assessment packages.
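As a hedged illustration of how a PSDM is typically turned into a fragility curve (the common closed-form lognormal formulation; every parameter value below is invented, not a result of this study):

```python
# Hedged sketch of a lognormal fragility curve built from a PSDM: demand is
# lognormal in the intensity measure, capacity is lognormal, and the
# conditional damage probability has a closed form. Parameter values are
# illustrative placeholders only.
import numpy as np
from scipy.stats import norm

def fragility(im, a, b, beta_d, median_cap, beta_c):
    """P(demand >= capacity | IM) for a lognormal PSDM and lognormal capacity."""
    median_demand = np.exp(a) * im ** b          # ln(Sd) = a + b * ln(IM)
    beta = np.sqrt(beta_d ** 2 + beta_c ** 2)    # combined dispersion
    return norm.cdf(np.log(median_demand / median_cap) / beta)

pga = np.linspace(0.05, 1.0, 20)                 # intensity measure (g)
p_moderate = fragility(pga, a=0.8, b=1.1, beta_d=0.45,
                       median_cap=2.5, beta_c=0.35)
for g, p in zip(pga[::5], p_moderate[::5]):
    print(f"PGA = {g:.2f} g  ->  P(moderate damage) = {p:.2f}")
```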
37

Seismic Vulnerability Assessment of Retrofitted Bridges Using Probabilistic Methods

Padgett, Jamie Ellen 09 April 2007 (has links)
The central focus of this dissertation is a seismic vulnerability assessment of retrofitted bridges. The objective of this work is to establish a methodology for the development of system-level fragility curves for typical classes of retrofitted bridges using a probabilistic framework. These tools could provide valuable support for risk mitigation efforts in the region by quantifying the impact of retrofit on potential levels of damage over a range of earthquake intensities. The performance evaluation includes the development of high-fidelity three-dimensional nonlinear analytical models of bridges retrofitted with a range of retrofit measures, and characterization of the response under seismic loading. Sensitivity analyses were performed to establish an understanding of the appropriate level of uncertainty treatment needed to model, assess, and propagate sources of uncertainty inherent to a seismic performance evaluation for portfolios of structures. Seismic fragility curves are developed to depict the impact of various retrofit devices on the seismic vulnerability of bridge systems. This work provides the first set of fragility curves for a range of bridge types and retrofit measures. A framework for their use in decision making for the identification of viable retrofit measures, performance-based retrofit of bridges, and cost-benefit analyses is illustrated. The fragility curves developed as a part of this research will fill a major gap in existing seismic risk assessment software and enable decision makers to quantify the benefits of various retrofits.
38

An efficient technique for structural reliability with applications

Janajreh, Ibrahim Mustafa 28 July 2008 (has links)
An efficient reliability technique has been developed based on Response Surface Methodology (RSM) in conjunction with the First Order Second Moment (FOSM) reliability method. The technique is applied when the limit state function cannot be obtained explicitly in terms of the design variables, i.e., when the analysis is performed using numerical techniques such as finite elements. The technique has proven to be efficient because it can handle problems with large numbers of design variables and correlated as well as nonnormal random variables. When compared with analytical results, the method has shown excellent agreement. The technique contains a sensitivity analysis scheme which can be used to reduce the computation time while retaining nearly the same accuracy. This technique allows the extension of most finite element codes to account for probabilistic analysis, where statistical variations can be added to the design variables. An explicit solution for rocket motors consisting of propellant and steel case under environmental temperature variations is compared to the RSM technique. The method is then used for the analysis of rocket motors subjected to mechanical loads for which the stress analysis is performed using the finite element method. The technique is also applied to study the reliability of a laminated composite plate with geometric nonlinearity subjected to static and time dependent loadings. Different failure modes were considered as well as different meshes. Results have shown that when the relative size of the element is introduced into the probabilistic model, the same reliability value is obtained regardless of the number of elements in the mesh. This is advantageous because it allows the technique to be used for problems where the failure region is unknown. / Ph. D.
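A minimal sketch of the FOSM step follows, assuming a fitted quadratic response surface stands in for the implicit finite element limit state; the surrogate coefficients and input statistics are invented for illustration:

```python
# Hedged sketch of FOSM on a response-surface surrogate: a first-order Taylor
# expansion of the limit state g(X) about the means gives the reliability
# index. The surrogate and the input statistics below are invented.
import numpy as np

def g_surrogate(x):
    """Quadratic response-surface stand-in for an implicit limit state."""
    x1, x2 = x
    return 5.0 - 0.8 * x1 - 0.5 * x2 + 0.05 * x1 * x2 - 0.02 * x1 ** 2

mu = np.array([2.0, 3.0])        # means of the (assumed independent) variables
sigma = np.array([0.4, 0.6])     # their standard deviations

# Central-difference gradient of g at the mean point.
eps = 1e-6
grad = np.array([(g_surrogate(mu + eps * e) - g_surrogate(mu - eps * e)) / (2 * eps)
                 for e in np.eye(2)])

mean_g = g_surrogate(mu)
std_g = np.sqrt(np.sum((grad * sigma) ** 2))
beta = mean_g / std_g
print("FOSM reliability index beta =", round(beta, 3))
```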
39

The use of response surface methodology and artificial neural networks for the establishment of a design space for a sustained release salbutamol sulphate formulation

Chaibva, Faith Anesu January 2010 (has links)
Quality by Design (QbD) is a systematic approach that has been recommended as suitable for the development of quality pharmaceutical products. The QbD approach commences with the definition of a quality target drug profile and predetermined objectives that are then used to direct the formulation development process with an emphasis on understanding the pharmaceutical science and manufacturing principles that apply to a product. The design space is directly linked to the use of QbD for formulation development and is a multidimensional combination and interaction of input variables and process parameters that have been demonstrated to provide an assurance of quality. The objective of these studies was to apply the principles of QbD as a framework for the optimisation of a sustained release (SR) formulation of salbutamol sulphate (SBS), and for the establishment of a design space using Response Surface Methodology (RSM) and Artificial Neural Networks (ANN). SBS is a short-acting β₂ agonist that is used for the management of asthma and chronic obstructive pulmonary disease (COPD). The use of an SR formulation of SBS may provide clinical benefits in the management of these respiratory disorders. Ashtalin® 8 ER (Cipla Ltd., Mumbai, Maharashtra, India) was selected as a reference formulation for use in these studies. An Ishikawa or Cause and Effect diagram was used to determine the impact of formulation and process factors that have the potential to affect product quality. Key areas of concern that must be monitored include the raw materials, the manufacturing equipment and processes, and the analytical and assessment methods employed. The conditions in the laboratory and manufacturing processes were carefully monitored and recorded for any deviation from protocol, and equipment for assessment of dosage form performance, including dissolution equipment, balances and hardness testers, underwent regular maintenance. Preliminary studies to assess the potential utility of Methocel® K100M, alone and in combination with other matrix forming polymers, revealed that the combination of this polymer with xanthan gum and Carbopol® has the potential to modulate the release of SBS at a specific rate for a period of 12 hr. A central composite design using Methocel® K100M, xanthan gum, Carbopol® 974P and Surelease® as the granulating fluid was constructed to fully evaluate the impact of these formulation variables on the rate and extent of SBS release from manufactured formulations. The results revealed that although Methocel® K100M and xanthan gum had the greatest retardant effect on drug release, interactions between the polymers used in the study were also important determinants of the measurable responses. An ANN model was trained for optimisation using the data generated from a central composite study. The efficiency of the network was optimised by assessing the impact of the number of nodes in the hidden layer using a three-layer Multi-Layer Perceptron (MLP). The results revealed that a network with nine nodes in the hidden layer had the best predictive ability, suitable for application to formulation optimisation studies. Pharmaceutical optimisation was conducted using both the RSM and the trained ANN models. The results from the two optimisation procedures yielded two different formulation compositions that were subjected to in vitro dissolution testing using USP Apparatus 3.
The results revealed that, although the formulation compositions derived from the two optimisation procedures were different, both solutions gave reproducible results with dissolution profiles similar to that of the reference formulation. RSM and ANN were further investigated as possible means of establishing a design space for formulation compositions that would result in dosage forms with in vitro release profiles comparable to that of the reference product. Constraint plots were used to determine the bounds of the formulation variables that would result in the manufacture of dosage forms with the desired release profile. ANN simulations with hypothetical formulations that were generated within a small region of the experimental domain were investigated as a means of understanding the impact of varying the composition of the formulation on resultant dissolution profiles. Although both methods were suitable for the establishment of a design space, the use of ANN may be better suited for this purpose because of the manner in which ANN handles data. As more information about the behaviour of a formulation and its processes is generated during the product lifecycle, ANN may be used to evaluate the impact of formulation and process variables on measurable responses. ANN is therefore recommended as suitable for the optimisation of pharmaceutical formulations and the establishment of a design space in line with the ICH Pharmaceutical Development [1], Quality Risk Management [2], and Pharmaceutical Quality Systems [3] guidelines.
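As a hedged illustration of the three-layer MLP with nine hidden nodes described above (invented training data; scikit-learn's MLPRegressor stands in for whatever ANN software the study actually used):

```python
# Hedged sketch of a three-layer MLP with nine hidden nodes mapping coded
# formulation composition (Methocel K100M, xanthan gum, Carbopol, Surelease
# levels) to percent drug released at two time points. The training data are
# invented placeholders, not the study's results.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 4))          # coded levels of 4 excipients
# Toy responses: % released at 1 h and 12 h, slower release with more polymer.
y = np.column_stack([60 - 30 * X[:, :2].sum(axis=1) + rng.normal(0, 2, 30),
                     100 - 15 * X[:, :3].sum(axis=1) + rng.normal(0, 2, 30)])

model = MLPRegressor(hidden_layer_sizes=(9,), activation="logistic",
                     max_iter=5000, random_state=0)
model.fit(X, y)

candidate = np.array([[0.5, 0.3, 0.2, 0.4]])     # hypothetical formulation
print("predicted % released at 1 h and 12 h:", model.predict(candidate).round(1))
```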
40

Global Resource Management of Response Surface Methodology

Miller, Michael Chad 04 March 2014 (has links)
Statistical research can be more difficult to plan than other kinds of projects, since the research must adapt as knowledge is gained. This dissertation establishes a formal language and methodology for designing experimental research strategies with limited resources. It is a mathematically rigorous extension of a sequential and adaptive form of statistical research called response surface methodology. It uses sponsor-given information, conditions, and resource constraints to decompose an overall project into individual stages. At each stage, a "parent" decision-maker determines which experimental design to run for its stage of research and adapts to the feedback from that research's potential "children", each of which deals with a different possible state of knowledge resulting from the experimentation of the "parent". The research of this dissertation extends the real-world rigor of the statistical field of design of experiments to develop a deterministic, adaptive algorithm that produces reproducible, testable, defensible, resource-constrained multi-stage experimental schedules without having to spend physical resources.
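As a loose toy rendering of the parent/child stage decomposition described above (the stage costs, outcome labels, and even budget split are all invented assumptions, not the dissertation's rules), a parent stage spends part of a resource budget on an experiment and then branches to a child schedule for each possible knowledge outcome:

```python
# Invented toy sketch of multi-stage, resource-constrained planning: a parent
# stage spends part of a budget on experimental runs, then branches to a child
# schedule for each possible knowledge outcome. All labels, costs, and the
# even budget split are illustrative assumptions.
def plan_stages(budget, depth=0, max_depth=2, runs_per_stage=6, cost_per_run=1.0):
    """Return a nested, reproducible schedule: (action, {outcome: child, ...})."""
    stage_cost = runs_per_stage * cost_per_run
    if depth >= max_depth or budget < stage_cost:
        return ("stop", round(budget, 1))                 # leaf: budget exhausted
    remaining = budget - stage_cost
    outcomes = ("curvature_detected", "steepest_ascent_continues")
    children = {o: plan_stages(remaining / len(outcomes), depth + 1, max_depth)
                for o in outcomes}
    return (f"run {runs_per_stage} experiments", children)

print(plan_stages(budget=20.0))
```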
