91 |
An Efficient Robust Concept Exploration Method and Sequential Exploratory Experimental Design. Lin, Yao, 31 August 2004.
Experimentation and approximation are essential for efficiency and effectiveness in concurrent engineering analyses of large-scale complex systems. The approximation-based design strategy is not fully utilized in industrial applications, in which designers have to deal with multi-disciplinary, multi-variable, multi-response, and multi-objective analyses using very complicated and expensive-to-run computer analysis codes or physical experiments. With current experimental design and metamodeling techniques, it is difficult for engineers to develop acceptable metamodels for irregular responses and to achieve good design solutions in large design spaces at low cost. To circumvent this problem, engineers tend either to adopt low-fidelity simulations or models, with which important response properties may be lost, or to restrict the study to very small design spaces. Information from expensive physical or computer experiments is often used for validation in late design stages rather than as an analysis tool in early-stage design. This increases the likelihood of expensive re-design and lengthens time-to-market.
In this dissertation, two methods, the Sequential Exploratory Experimental Design (SEED) and the Efficient Robust Concept Exploration Method (E-RCEM), are developed to address these problems. The SEED and E-RCEM methods help develop acceptable metamodels for irregular responses with expensive experiments and achieve satisficing design solutions in large design spaces with limited computational or monetary resources. It is verified that more accurate metamodels are developed and better design solutions are achieved with SEED and E-RCEM than with traditional approximation-based design methods. SEED and E-RCEM facilitate the full use of the simulation- and approximation-based design strategy in engineering and scientific applications.
Several preliminary approaches for metamodel validation with additional validation points are proposed in this dissertation, after verifying that the most widely used method, leave-one-out cross-validation, is theoretically inappropriate for testing the accuracy of metamodels. A comparison of the performance of kriging and MARS metamodels is also carried out. A sequential metamodeling approach is then proposed to utilize different types of metamodels along the design timeline.
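For illustration, the sketch below estimates a metamodel's prediction error with the leave-one-out procedure discussed above, refitting the surrogate with each sample point held out in turn; it assumes scikit-learn's Gaussian-process regressor as a stand-in for a kriging metamodel and a toy analytical response, neither of which comes from the dissertation.

```python
# Minimal sketch of leave-one-out cross-validation for a kriging-style
# metamodel; scikit-learn's GaussianProcessRegressor stands in for the
# surrogate actually used in the dissertation (an assumption).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(20, 2))        # 20 sample points in 2 design variables
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2         # toy "expensive" response

errors = []
for train_idx, test_idx in LeaveOneOut().split(X):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
    gp.fit(X[train_idx], y[train_idx])         # refit with one point held out
    pred = gp.predict(X[test_idx])
    errors.append((pred[0] - y[test_idx][0]) ** 2)

print("LOO root-mean-square error:", np.sqrt(np.mean(errors)))
```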
Several single-variable and two-variable examples, together with two engineering examples, the design of pressure vessels and the design of unit cells for linear cellular alloys, are used in this dissertation to support these studies.
|
92 |
Contributions to variable selection for mean modeling and variance modeling in computer experiments. Adiga, Nagesh, 17 January 2012.
This thesis consists of two parts. The first part reviews Variable Search, a variable selection procedure for mean modeling. The second part deals with variance modeling for robust parameter design in computer experiments.
In the first chapter of the thesis, the Variable Search (VS) technique developed by Shainin (1988) is reviewed. VS has received considerable attention from experimenters in industry. It uses the experimenters' knowledge about the process, in terms of good and bad settings and their importance. In this technique, a few experiments are first conducted at the best and worst settings of the variables to ascertain that they are indeed different from each other. Experiments are then conducted sequentially in two stages, swapping and capping, to determine the significance of the variables one at a time. Finally, after all the significant variables have been identified, the model is fit and the best settings are determined.
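The sketch below illustrates one plausible reading of the swapping stage (an assumption made here for illustration, not Shainin's exact procedure or the analysis in this thesis): each variable is swapped, one at a time, between the presumed best and worst recipes, and it is flagged as important when the swap moves the response beyond a decision limit.

```python
# Illustrative sketch of the swapping stage of Variable Search (VS); the
# decision rule below is a simplified stand-in, not Shainin's exact procedure.
def swap_stage(process, best, worst, limit):
    """Flag variables whose best/worst swap moves the response past `limit`.

    process : callable taking a dict of settings and returning a response
    best, worst : dicts of the presumed best and worst level of each variable
    limit : decision limit derived from the replicated best/worst runs
    """
    y_best, y_worst = process(best), process(worst)
    important = []
    for var in best:
        swapped_best = {**best, var: worst[var]}    # best recipe, one variable at worst
        swapped_worst = {**worst, var: best[var]}   # worst recipe, one variable at best
        # A variable matters if swapping it alone pulls the response
        # noticeably away from the corresponding extreme.
        if abs(process(swapped_best) - y_best) > limit or \
           abs(process(swapped_worst) - y_worst) > limit:
            important.append(var)
    return important

# Hypothetical toy process: only x1 and x3 actually affect the response.
toy = lambda s: 10 * s["x1"] + 5 * s["x3"] + 0.1 * s["x2"]
print(swap_stage(toy, best={"x1": 1, "x2": 1, "x3": 1},
                 worst={"x1": 0, "x2": 0, "x3": 0}, limit=2.0))
```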
The VS technique has not been analyzed thoroughly. In this thesis, we analyze each stage of the method mathematically. Each stage is formulated as a hypothesis test, and its performance is expressed in terms of the model parameters. The performance of the VS technique as a whole is then expressed as a function of the performance of each stage. On this basis, it is possible to compare its performance with that of traditional techniques.
The second and third chapters of my thesis deal with variance modeling for robust parameter design in computer experiments. Computer experiments based on engineering models can be used to explore process behavior when physical experiments (e.g., fabrication of nanoparticles) are costly or time consuming. Robust parameter design (RPD) is a key technique for improving process repeatability. The absence of replicates in computer experiments (e.g., space-filling designs (SFD)) makes it challenging to locate an RPD solution. Recently, there have been studies of RPD issues in computer experiments (e.g., Bates et al. (2005), Chen et al. (2006), Dellino et al. (2010, 2011), Giovagnoli and Romano (2008)). The transmitted variance model (TVM) proposed by Shoemaker and Tsui (1993) for physical experiments can be applied in computer simulations. The approaches above rely heavily on the estimated mean model because they obtain expressions for variance directly from mean models or use them to generate replicates. Variance modeling based on some form of replicates relies on the estimated mean model to a lesser extent. To the best of our knowledge, there is no rigorous research on the variance modeling needed for RPD in computer experiments.
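For context, the transmitted variance idea referred to above is usually written as a first-order expansion of the response in the noise factors; the standard textbook form is shown below and may differ in detail from the formulation in Shoemaker and Tsui (1993).

```latex
% First-order transmitted variance approximation (standard textbook form):
% response y = f(\mathbf{x}, \mathbf{z}) with control factors x and noise factors z.
\operatorname{Var}[\, y \mid \mathbf{x} \,] \;\approx\;
  \sum_{j} \left( \left. \frac{\partial f(\mathbf{x}, \mathbf{z})}{\partial z_j} \right|_{\mathbf{z}=\boldsymbol{\mu}_z} \right)^{2} \sigma_{z_j}^{2}
  \;+\; \sigma_{\varepsilon}^{2}
```

Control-factor settings x that flatten the response in the noise directions therefore reduce the transmitted variance, which is the basis for locating an RPD solution.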
We develop procedures for identifying variance models. First, we explore procedures for forming groups of pseudo replicates for variance modeling. A formal variance change-point procedure is developed to rigorously determine the replicate groups. Next, the variance model is identified and estimated through a three-step variable selection procedure. Properties of the proposed method are investigated under various conditions through analytical and empirical studies. In particular, the impact of correlated responses on performance is discussed.
|
93 |
Development of Cleaning-in-Place Procedures for Protein A Chromatography Resins using Design of Experiments and High Throughput Screening Technologies. Tengliden, Hanna, January 2008.
Robust and efficient cleaning procedures for protein A chromatography resins used in the production of monoclonal antibody based biopharmaceuticals are crucial for safe and cost-efficient processes. In this master thesis, the effect of different cleaning regimes on the ligand stability of two protein A derived media, MabSelect™ and MabSelect SuRe™, has been investigated. A 96-well format was used for preliminary screening of different cleaning agents, contact times and temperatures. NaCl as a ligand stabilizer during cleaning-in-place (CIP) was also included as a parameter. For optimal throughput and efficiency of screening, Rectangular Experimental Design for Multi-Unit Platforms (RED-MUP) and a TECAN robotic platform were utilized. To verify the screening results, selected conditions were run in column format using the parallel chromatography system ÄKTAxpress™. In the efficiency study, where manual preparation of CIP solutions was compared with an automated mode performed on the TECAN platform, the total process time was eight hours for the manual mode versus three days for the automated mode. However, the measured time included the learning process for the TECAN platform, and for further preparations the automated mode is the superior choice. The study confirmed the higher alkaline stability of MabSelect SuRe compared with MabSelect: after exposure to 0.55 M NaOH for 24 h, MabSelect SuRe still retained 90% of its initial capacity, whereas MabSelect retained 60% of its initial binding capacity. When CIP with 10 mM NaOH was performed at 40 °C, the capacity of MabSelect was reduced by half, while MabSelect SuRe still had a binding capacity of 80%. The 96-well screening showed that the addition of NaCl during CIP had a significant positive effect on the stability of MabSelect, but this needs to be verified in column format. The correlation between results from screening in 96-well filter plates and column format was good.
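To make the screening set-up concrete, the sketch below lays out a small full-factorial CIP screen (cleaning agent concentration, contact time, temperature, NaCl level) across a 96-well plate; the factor levels are hypothetical and the RED-MUP layout used in the thesis is more elaborate.

```python
# Hypothetical full-factorial CIP screening laid out on a 96-well plate;
# factor levels are illustrative, not the ones used in the thesis.
from itertools import product

naoh_mM  = [10, 100, 500]          # cleaning agent concentration
time_min = [15, 60]                # contact time
temp_C   = [22, 40]                # incubation temperature
nacl_M   = [0.0, 1.0]              # NaCl as potential ligand stabilizer

rows, cols = "ABCDEFGH", range(1, 13)
wells = [f"{r}{c}" for r in rows for c in cols]          # A1 ... H12

runs = list(product(naoh_mM, time_min, temp_C, nacl_M))  # 3*2*2*2 = 24 combinations
plate = {well: combo for well, combo in zip(wells, runs)}

for well, (naoh, t, temp, nacl) in list(plate.items())[:4]:
    print(f"{well}: {naoh} mM NaOH, {t} min, {temp} °C, {nacl} M NaCl")
```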
|
95 |
A Preclinical Assessment of Lithium to Enhance Fracture Healing. Bernick, Joshua Hart, 21 November 2013.
Delayed or impaired bone healing occurs in 5-10% of all fractures, yet cost-effective solutions to enhance the healing process are limited. Lithium, a current treatment for bipolar disorder, is not clinically indicated for use in fracture management, but has been reported to positively influence bone biology. The objective of this study was to identify lithium administration parameters that maximize bone healing in a preclinical rodent femur fracture model. Using a three-factor, two-level design of experiments (DOE) approach, bone healing was assessed through mechanical testing and μCT image analysis. Significant improvements in healing were found for a low-dose, later-onset, longer-duration treatment combination, with onset identified as the most influential parameter. The positive results from this DOE screening focus the optimization phase on further investigation of the onset component of treatment and form a crucial foundation for future studies evaluating the role of lithium in fracture healing.
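As a sketch of the three-factor, two-level screening described above, the code below builds the 2^3 design in coded units (dose, onset, duration) and estimates main effects from a set of responses; the response values are invented for illustration and are not the study's data.

```python
# Sketch of a 2^3 (three-factor, two-level) screening design in coded units;
# the healing responses are made up for illustration only.
import itertools
import numpy as np

factors = ["dose", "onset", "duration"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))  # 8 runs

# Hypothetical healing scores for the 8 runs (e.g., torsional strength).
y = np.array([3.1, 2.8, 5.0, 4.6, 3.4, 3.0, 5.9, 5.2])

# Main effect of each factor = mean(y at +1) - mean(y at -1).
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")
```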
|
96 |
Modeling of Molecular Weight Distributions in Ziegler-Natta Catalyzed Ethylene Copolymerizations. Thompson, Duncan, 29 May 2009.
The objective of this work is to develop mathematical models to predict molecular weight distributions (MWDs) of ethylene copolymers produced in an industrial gas-phase reactor using a Ziegler-Natta (Z-N) catalyst. Because of the multi-site nature of Z-N catalysts, models of Z-N catalyzed copolymerization tend to be very large and have many parameters that need to be estimated. It is important that the data available for parameter estimation be used effectively and that a suitable balance be achieved between modeling rigour and simplification.
In the thesis, deconvolution analysis is used to understand how the polymer produced by the various types of active sites on the Z-N catalyst responds to changes in reactor operating conditions. This analysis reveals which reactions are important in determining the MWD and also shows that some types of active sites exhibit similar behavior and can therefore share some kinetic parameters. With this knowledge, a simplified model is developed to predict MWDs of ethylene/hexene copolymers produced at 90 °C. Estimates of the parameters in this isothermal model provide good initial guesses for parameter estimation in a subsequent, more complex model.
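As an illustration of the deconvolution step, the sketch below fits a measured MWD (on a log M axis) with a weighted sum of Flory most-probable distributions, one per assumed site type; the number of site types, the synthetic data, and the use of scipy's least-squares routine are assumptions for illustration, not details taken from the thesis.

```python
# Sketch: deconvolute a GPC-style MWD into Flory most-probable components,
# one per assumed catalyst site type. Data and site count are synthetic.
import numpy as np
from scipy.optimize import least_squares

def flory(logM, Mn):
    """Weight distribution on a log10(M) axis for one site type."""
    M = 10.0 ** logM
    return np.log(10.0) * (M / Mn) ** 2 * np.exp(-M / Mn)

def model(params, logM, n_sites):
    w = np.abs(params[:n_sites])        # site mass fractions (kept positive)
    Mn = 10.0 ** params[n_sites:]       # site Mn values, fitted on a log scale
    return sum(wi * flory(logM, Mni) for wi, Mni in zip(w, Mn))

logM = np.linspace(2.5, 6.5, 200)
# Synthetic "measured" MWD built from three sites (for demonstration only).
measured = 0.2 * flory(logM, 5e3) + 0.5 * flory(logM, 3e4) + 0.3 * flory(logM, 2e5)

n_sites = 3
x0 = np.concatenate([np.full(n_sites, 1.0 / n_sites), np.log10([1e4, 5e4, 1e5])])
fit = least_squares(lambda p: model(p, logM, n_sites) - measured, x0)
print("fitted Mn per site:", np.round(10.0 ** fit.x[n_sites:], -2))
```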
The isothermal model is extended to account for the effects of butene and temperature. Estimability analysis and cross-validation are used to determine which parameters should be estimated from the available industrial data set. Twenty model parameters are estimated so that the model provides good predictions of MWD and comonomer incorporation. Finally, D-, A-, and V-optimal experimental designs for improving the quality of the model predictions are determined. Difficulties with local minima are addressed and a comparison of the optimality criteria is presented. / Thesis (Ph.D., Chemical Engineering) -- Queen's University, 2009.
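For reference, the three alphabetic criteria compared here are commonly defined through the information matrix built from the model's parameter sensitivities F; the standard textbook forms are sketched below, and the exact variants used in the thesis may differ.

```latex
% Standard optimality criteria for an experimental design \xi with sensitivity matrix F:
\text{D-optimal:}\; \max_{\xi}\, \det\!\big(F^{\mathsf T} F\big) \qquad
\text{A-optimal:}\; \min_{\xi}\, \operatorname{tr}\!\big[(F^{\mathsf T} F)^{-1}\big] \qquad
\text{V-optimal:}\; \min_{\xi}\, \operatorname{avg}_{x \in \mathcal{X}}\; f(x)^{\mathsf T} \big(F^{\mathsf T} F\big)^{-1} f(x)
```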
|
97 |
Design of Experiments for Large Scale Catalytic Systems. Kumar, Siddhartha, Unknown Date.
No description available.
|
99 |
A Fault-Based Model of Fault Localization Techniques. Hays, Mark A, 01 January 2014.
Every day, ordinary people depend on software working properly. We take it for granted: from banking software to railroad switching software, flight control software, and software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main activity used to ensure software quality is testing. Often it is the only quality assurance activity undertaken, making it that much more important.
In a typical experiment studying fault localization techniques, a researcher will deliberately seed a fault (intentionally breaking the functionality of some source code) in the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily, so there is potential for bias in their selection. Previous researchers have established an ontology for understanding and expressing this bias, called fault size. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future.
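To make the notion of fault size concrete, the sketch below uses one simple proxy (assumed here for illustration, not the thesis's probabilistic model): the fraction of test inputs on which a seeded mutant's behavior differs from the original, so that "large" faults are those almost every test exposes.

```python
# Sketch of a simple fault-size proxy: the fraction of test inputs on which
# a mutant's output differs from the original code's output. This is an
# illustrative stand-in for the model developed in the dissertation.
def original(x):
    return x * x + 1

def mutant(x):
    return x * x - 1        # seeded fault: '+' operator mutated to '-'

def fault_size(orig, mut, test_inputs):
    """Fraction of test inputs on which the mutant's output differs."""
    detecting = sum(1 for x in test_inputs if orig(x) != mut(x))
    return detecting / len(test_inputs)

tests = range(-10, 11)
print(f"estimated fault size: {fault_size(original, mutant, tests):.2f}")
```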
While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification. This is worrisome because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques. The results of an evaluation of MeansTest suggest that it performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of researchers' results. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
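The sketch below shows the general kind of decision MeansTest automates, not the actual MeansTest algorithm: a normality check is used to choose between a parametric and a non-parametric comparison of two groups of scores.

```python
# Illustrative sketch of automatically choosing a two-sample location test;
# this is NOT the MeansTest algorithm from the dissertation, only an example
# of the kind of decision it automates.
from scipy import stats

def compare_means(a, b, alpha=0.05):
    """Pick Welch's t-test when both samples look normal, else Mann-Whitney U."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        result = stats.ttest_ind(a, b, equal_var=False)
        return "welch_t", result.pvalue
    result = stats.mannwhitneyu(a, b, alternative="two-sided")
    return "mann_whitney_u", result.pvalue

# Hypothetical localization scores from two techniques.
scores_a = [0.62, 0.58, 0.71, 0.66, 0.60, 0.69, 0.64, 0.67]
scores_b = [0.52, 0.49, 0.55, 0.47, 0.51, 0.53, 0.50, 0.54]
print(compare_means(scores_a, scores_b))
```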
|
100 |
Model Refinement and Reduction for the Nitroxide-Mediated Radical Polymerization of Styrene with Applications on the Model-Based Design of Experiments. Hazlett, Mark Daniel, 21 September 2012.
Polystyrene (PS) is an important commodity polymer. In its most commonly used form, PS is a high molecular weight linear polymer, typically produced through free-radical polymerization, which is a well understood and robust process. This process produces a high molecular weight, clear thermoplastic that is hard, rigid and has good thermal and melt flow properties for use in moldings, extrusions and films. However, polystyrene produced through the free radical process has a very broad molecular weight distribution, which can lead to poor performance in some applications.
To this end, nitroxide-mediated radical polymerization (NMRP) can synthesize materials with a much more consistently defined molecular architecture and lower polydispersity than other methods. NMRP involves radical polymerization in the presence of a nitroxide mediator, usually a stable radical that can bind to and deactivate the growing polymer chain. This "ties up" some of the free radicals, forming a dynamic equilibrium between active and dormant species through a reversible coupling process.
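The reversible coupling described here is commonly written as an activation/deactivation equilibrium; the notation below is the generic textbook form rather than anything specific to this thesis.

```latex
% Generic NMRP activation/deactivation equilibrium (textbook form):
\mathrm{P}_n\text{--}\mathrm{T}
  \;\underset{k_{\mathrm{c}}}{\overset{k_{\mathrm{d}}}{\rightleftharpoons}}\;
  \mathrm{P}_n^{\bullet} + \mathrm{T}^{\bullet},
\qquad
K = \frac{k_{\mathrm{d}}}{k_{\mathrm{c}}}
  = \frac{[\mathrm{P}^{\bullet}][\mathrm{T}^{\bullet}]}{[\mathrm{P}\text{--}\mathrm{T}]}
```

The concentration of active radicals, and hence the polymerization rate and livingness, is therefore governed by the excess of free nitroxide T• (the persistent radical effect).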
NMRP can be conducted through one of two different processes: (1) the bimolecular process, which is initiated with a conventional peroxide initiator (e.g., BPO) in the presence of a stable nitroxide radical (e.g., TEMPO) that can reversibly bind with the growing polymer radical chain, and (2) the unimolecular process, in which a nitroxyl ether is introduced to the system and then decomposes to generate both the initiator and mediator radicals.
Based on previous research in the group, which included experimental investigations of both unimolecular and bimolecular NMRP under various conditions, it was possible to build on an earlier model and develop an improved, detailed mechanistic model. In addition, certain parameters were seen to have little impact on overall model performance, suggesting that they could be removed, which also reduced the complexity of the model. Model predictions were compared with experimental data both from within the group and from the general literature, and the trends were verified.
Further work was done on the development of a further reduced model and on testing these different levels of model complexity against data. The aim of this analysis was to develop a model that captures the key process responses in a simple, easy-to-implement manner with accuracy comparable to the complete models. Because of its lower complexity, this substantially reduced model would be a much likelier candidate for use in on-line applications.
Application of these different model levels to model-based D-optimal design of experiments was then pursued, with results compared with those generated by a parallel Bayesian design project conducted within the group. Additional work was done using a different optimality criterion, targeted at reducing the parameter correlation that can arise in D-optimal designs.
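As a sketch of what a model-based D-optimal selection involves, the code below greedily picks experimental runs from a candidate grid to maximize the determinant of the information matrix of a simple linear-in-parameters surrogate; the polymerization model itself is far more complex, so this is only a structural illustration with hypothetical factors.

```python
# Greedy sketch of D-optimal run selection on a candidate grid for a simple
# linear-in-parameters model; illustrative only, not the thesis's procedure.
import itertools
import numpy as np

def features(T, I):
    """Hypothetical sensitivity vector: intercept, temperature, initiator, interaction."""
    return np.array([1.0, T, I, T * I])

# Candidate experimental conditions (coded temperature and initiator level).
candidates = [features(T, I) for T, I in itertools.product(np.linspace(-1, 1, 5), repeat=2)]

chosen, ridge = [], 1e-8 * np.eye(4)        # tiny ridge keeps early determinants finite
for _ in range(6):                          # select 6 runs
    def logdet_with(f):
        F = np.array(chosen + [f])
        return np.linalg.slogdet(F.T @ F + ridge)[1]
    chosen.append(max(candidates, key=logdet_with))

print("selected (coded) runs:")
print(np.array(chosen)[:, 1:3])             # columns: temperature, initiator level
```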
Finally, conclusions and recommendations for future work were made, including a detailed explanation of how a model similar to the ones described in this thesis could be used in the optimal selection of sensors and the design of experiments.
|