91

Contributions to variable selection for mean modeling and variance modeling in computer experiments

Adiga, Nagesh 17 January 2012 (has links)
This thesis consists of two parts. The first part reviews Variable Search, a variable selection procedure for mean modeling. The second part deals with variance modeling for robust parameter design in computer experiments.

In the first chapter, the Variable Search (VS) technique developed by Shainin (1988) is reviewed. VS has received considerable attention from experimenters in industry. It uses the experimenters' knowledge about the process, in terms of good and bad settings and their importance. In this technique, a few experiments are first conducted at the best and worst settings of the variables to ascertain that they are indeed different from each other. Experiments are then conducted sequentially in two stages, namely swapping and capping, to determine the significance of the variables one at a time. Finally, after all the significant variables have been identified, the model is fit and the best settings are determined. The VS technique has not been analyzed thoroughly. In this thesis, each stage of the method is analyzed mathematically. Each stage is formulated as a hypothesis test, and its performance is expressed in terms of the model parameters. The performance of the VS technique is then expressed as a function of the performances of the individual stages. On this basis, its performance can be compared with that of traditional techniques.

The second and third chapters deal with variance modeling for robust parameter design in computer experiments. Computer experiments based on engineering models may be used to explore process behavior when physical experiments (e.g., fabrication of nanoparticles) are costly or time consuming. Robust parameter design (RPD) is a key technique for improving process repeatability. The absence of replicates in computer experiments (e.g., space-filling designs) is a challenge in locating the RPD solution. Recently, there have been studies (e.g., Bates et al. (2005), Chen et al. (2006), Dellino et al. (2010, 2011), Giovagnoli and Romano (2008)) of RPD issues in computer experiments. The transmitted variance model (TVM) proposed by Shoemaker and Tsui (1993) for physical experiments can also be applied in computer simulations. The approaches above rely heavily on the estimated mean model because they obtain expressions for the variance directly from the mean model or use it to generate replicates. Variance modeling based on some form of replicates relies on the estimated mean model to a lesser extent. To the best of our knowledge, there is no rigorous research on the variance modeling needed for RPD in computer experiments. We develop procedures for identifying variance models. First, we explore procedures for deciding groups of pseudo-replicates for variance modeling. A formal variance change-point procedure is developed to rigorously determine the replicate groups. Next, the variance model is identified and estimated through a three-step variable selection procedure. Properties of the proposed method are investigated under various conditions through analytical and empirical studies. In particular, the impact of correlated responses on performance is discussed.
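
As a rough illustration of the hypothesis-test framing used for the VS stages, the sketch below checks whether responses at the best and worst settings actually differ before the swapping stage begins. The data, the decision rule (a Welch t-test standing in for Shainin's original D:d-ratio rule), and the function names are assumptions for illustration only.

```python
# Minimal sketch of the initial stage of Variable Search (VS): confirm that the
# "best" and "worst" settings produce different responses before proceeding to
# the swapping/capping stages. Names and data are illustrative only.
import numpy as np
from scipy import stats

def settings_differ(y_best, y_worst, alpha=0.05):
    """Two-sample Welch t-test on replicate responses at the two extreme settings.

    Shainin's procedure uses a D:d ratio rather than a t-test; the t-test here is
    a stand-in to show the hypothesis-test framing analyzed in the thesis.
    """
    t_stat, p_value = stats.ttest_ind(y_best, y_worst, equal_var=False)
    return p_value < alpha, p_value

# Example: three replicate runs at each extreme setting (hypothetical data).
rng = np.random.default_rng(0)
y_best = 10 + rng.normal(0, 0.5, size=3)
y_worst = 7 + rng.normal(0, 0.5, size=3)
print(settings_differ(y_best, y_worst))
```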
92

Development of Cleaning-in-Place Procedures for Protein A Chromatography Resins using Design of Experiments and High Throughput Screening Technologies

Tengliden, Hanna January 2008 (has links)
Robust and efficient cleaning procedures for protein A chromatography resins used in the production of monoclonal antibody-based biopharmaceuticals are crucial for safe and cost-efficient processes. In this master's thesis, the effect of different cleaning regimes on the ligand stability of two protein A-derived media, MabSelect™ and MabSelect SuRe™, has been investigated. A 96-well format was used for preliminary screening of different cleaning agents, contact times, and temperatures. NaCl as a ligand stabilizer during cleaning-in-place (CIP) was also included as a parameter. For optimal throughput and screening efficiency, the Rectangular Experimental Design for Multi-Unit Platforms (RED-MUP) and a TECAN robotic platform were utilized. To verify the screening results, selected conditions were run in column format using the parallel chromatography system ÄKTAxpress™. In the efficiency study, where manual preparation of the CIP solutions was compared with an automated mode performed on the TECAN platform, the total process times were eight hours and three days, respectively. However, the measured time included the learning process for the TECAN platform, and for further preparations the automated mode is the superior choice. The study confirmed the higher alkaline stability of MabSelect SuRe compared with MabSelect. After exposure to 0.55 M NaOH for 24 h, MabSelect SuRe still retained 90% of its initial capacity, whereas MabSelect retained 60% of its initial binding capacity. When CIP with 10 mM NaOH was performed at 40 °C, MabSelect lost half of its capacity while MabSelect SuRe still had a binding capacity of 80%. The 96-well screening showed that adding NaCl during CIP had a significant positive effect on the stability of MabSelect, but this needs to be verified in column format. The correlation between results from screening in 96-well filter plates and in column format was good.
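
For readers unfamiliar with laying out a factorial screen on a multi-unit platform, the sketch below enumerates a hypothetical full-factorial CIP screening design (cleaning agent concentration, contact time, temperature, NaCl addition) and maps it onto 96-well positions. The factor levels and the layout scheme are illustrative assumptions, not the RED-MUP design actually used in the thesis.

```python
# Minimal sketch of arranging a full-factorial CIP screen in a 96-well plate,
# in the spirit of the RED-MUP approach described above. Factor levels are
# hypothetical placeholders, not the thesis's actual design.
from itertools import product
from string import ascii_uppercase

naoh_conc_M = [0.01, 0.1, 0.55]      # cleaning agent concentration (assumed levels)
contact_time_h = [0.25, 1, 24]       # contact time (assumed levels)
temperature_C = [22, 40]             # incubation temperature (assumed levels)
nacl_added = [False, True]           # NaCl as ligand stabilizer

conditions = list(product(naoh_conc_M, contact_time_h, temperature_C, nacl_added))
assert len(conditions) <= 96, "design must fit one 96-well plate"

# Map each condition to a well ID (A1, A2, ..., H12).
wells = [f"{row}{col}" for row in ascii_uppercase[:8] for col in range(1, 13)]
layout = dict(zip(wells, conditions))
for well, cond in list(layout.items())[:5]:
    print(well, cond)
```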
94

A Preclinical Assessment of Lithium to Enhance Fracture Healing

Bernick, Joshua Hart 21 November 2013 (has links)
Delayed or impaired bone healing occurs in 5-10% of all fractures, yet cost-effective solutions to enhance the healing process are limited. Lithium, a current treatment for bipolar disorder, is not clinically indicated for use in fracture management but has been reported to positively influence bone biology. The objective of this study was to identify lithium administration parameters that maximize bone healing in a preclinical rodent femur fracture model. Using a three-factor, two-level design of experiments (DOE) approach, bone healing was assessed through mechanical testing and µCT image analysis. Significant improvements in healing were found for a low-dose, later-onset, longer-duration treatment combination, with onset identified as the most influential parameter. The positive results from this DOE screening focus the optimization phase on further investigation of the onset component of treatment, and form a crucial foundation for future studies evaluating the role of lithium in fracture healing.
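
The sketch below shows the general shape of a three-factor, two-level (2^3) DOE analysis of the kind described above, with dose, onset, and duration as coded factors and main effects computed from the run averages. The response values are hypothetical placeholders; the thesis's data and analysis are not reproduced here.

```python
# Minimal sketch of a 2^3 factorial design and main-effect estimation,
# mirroring the dose / onset / duration screening described above.
import numpy as np
from itertools import product

factors = ["dose", "onset", "duration"]
design = np.array(list(product([-1, 1], repeat=3)))   # 8 coded runs

# Hypothetical healing responses (e.g., torsional strength) for the 8 runs.
response = np.array([4.1, 4.3, 5.0, 5.6, 4.2, 4.4, 5.2, 6.1])

# Main effect of each factor: mean response at +1 minus mean response at -1.
for j, name in enumerate(factors):
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"{name}: main effect = {effect:.2f}")
```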
95

Modeling of Molecular Weight Distributions in Ziegler-Natta Catalyzed Ethylene Copolymerizations

Thompson, Duncan 29 May 2009 (has links)
The objective of this work is to develop mathematical models to predict the molecular weight distributions (MWDs) of ethylene copolymers produced in an industrial gas-phase reactor using a Ziegler-Natta (Z-N) catalyst. Because of the multi-site nature of Z-N catalysts, models of Z-N catalyzed copolymerization tend to be very large and have many parameters that need to be estimated. It is important that the data available for parameter estimation be used effectively, and that a suitable balance be achieved between modeling rigour and simplification. In the thesis, deconvolution analysis is used to gain an understanding of how the polymer produced by various types of active sites on the Z-N catalyst responds to changes in the reactor operating conditions. This analysis reveals which reactions are important in determining the MWD and also shows that some types of active sites share similar behavior and can therefore share some kinetic parameters. With this knowledge, a simplified model is developed to predict the MWDs of ethylene/hexene copolymers produced at 90 °C. Estimates of the parameters in this isothermal model provide good initial guesses for parameter estimation in a subsequent, more complex model. The isothermal model is extended to account for the effects of butene and temperature. Estimability analysis and cross-validation are used to determine which parameters should be estimated from the available industrial data set. Twenty model parameters are estimated so that the model provides good predictions of MWD and comonomer incorporation. Finally, D-, A-, and V-optimal experimental designs for improving the quality of the model predictions are determined. Difficulties with local minima are addressed and a comparison of the optimality criteria is presented. / Thesis (Ph.D., Chemical Engineering) -- Queen's University, 2009-05-28
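
Deconvolution of an MWD for a multi-site Z-N catalyst is commonly performed by fitting a sum of Flory most-probable distributions, one per site type. The sketch below shows that general idea with synthetic data and an assumed number of sites; it is not the thesis's fitted model, and the site parameters are illustrative.

```python
# Minimal sketch of deconvolving a GPC molecular weight distribution into a
# sum of Flory most-probable distributions (one per catalyst site type).
import numpy as np
from scipy.optimize import nnls

def flory_weight_logM(log10_n, tau):
    """Weight-fraction contribution of one site, plotted against log10(chain length)."""
    n = 10.0 ** log10_n
    return np.log(10.0) * (n * tau) ** 2 * np.exp(-n * tau)

log10_n = np.linspace(1.5, 6.0, 200)                  # chain-length axis
taus = 1.0 / np.array([200, 1500, 10000, 60000])      # trial site parameters (assumed)

# Build the basis matrix and solve for non-negative site mass fractions.
basis = np.column_stack([flory_weight_logM(log10_n, t) for t in taus])
w_measured = basis @ np.array([0.2, 0.4, 0.3, 0.1])   # synthetic "measured" MWD
mass_fractions, _ = nnls(basis, w_measured)
print(mass_fractions / mass_fractions.sum())
```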
96

Design of Experiments for Large Scale Catalytic Systems

Kumar, Siddhartha Unknown Date
No description available.
98

A Fault-Based Model of Fault Localization Techniques

Hays, Mark A 01 January 2014 (has links)
Every day, ordinary people depend on software working properly. We take it for granted: from banking software, to railroad switching software, to flight control software, to software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique used to ensure the quality of software is testing. Often it is the only quality assurance activity undertaken, making it that much more important.

In a typical experiment studying fault localization techniques, a researcher will intentionally seed a fault (intentionally breaking the functionality of some source code) in the hope that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily, so there is potential for bias in the selection of the faults. Previous researchers have established an ontology for understanding or expressing this bias, called fault size. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future.

While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification. This is a worrisome situation because it can lead to incorrect conclusions about the significance of research. This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques. The results of an evaluation of MeansTest suggest that it performs well relative to its peers. This research then surveys recent work in software testing, using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
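
The abstract does not give the internals of MeansTest, but the general idea of automating test selection can be sketched as checking distributional assumptions before choosing between a parametric and a non-parametric comparison of means. The sketch below is a hypothetical stand-in under that assumption, not the MeansTest algorithm itself; the function name and data are illustrative.

```python
# Minimal sketch of automating the choice of a two-sample test for comparing
# means, in the spirit of (but not identical to) MeansTest as described above.
from scipy import stats

def compare_means(sample_a, sample_b, alpha=0.05):
    """Pick a t-test when both samples look normal, otherwise Mann-Whitney U."""
    normal_a = stats.shapiro(sample_a).pvalue > alpha
    normal_b = stats.shapiro(sample_b).pvalue > alpha
    if normal_a and normal_b:
        name, result = "Welch t-test", stats.ttest_ind(sample_a, sample_b, equal_var=False)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(sample_a, sample_b)
    return name, result.pvalue

# Example with hypothetical fault-localization scores from two techniques.
print(compare_means([0.9, 0.8, 0.85, 0.95, 0.7], [0.6, 0.55, 0.65, 0.5, 0.58]))
```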
99

Model Refinement and Reduction for the Nitroxide-Mediated Radical Polymerization of Styrene with Applications on the Model-Based Design of Experiments

Hazlett, Mark Daniel 21 September 2012 (has links)
Polystyrene (PS) is an important commodity polymer. In its most commonly used form, PS is a high molecular weight linear polymer, typically produced through free-radical polymerization, which is a well understood and robust process. This process produces a high molecular weight, clear thermoplastic that is hard, rigid, and has good thermal and melt flow properties for use in moldings, extrusions, and films. However, polystyrene produced through the free-radical process has a very broad molecular weight distribution, which can lead to poor performance in some applications. Nitroxide-mediated radical polymerization (NMRP), by contrast, can synthesize materials with a much more consistently defined molecular architecture and lower polydispersity than other methods.

NMRP involves radical polymerization in the presence of a nitroxide mediator, usually a stable radical that can bind to and disable the growing polymer chain. This "ties up" some of the free radicals, forming a dynamic equilibrium between active and dormant species through a reversible coupling process. NMRP can be conducted through one of two processes: (1) the bimolecular process, which is initiated with a conventional peroxide initiator (e.g., BPO) in the presence of a stable nitroxide radical (e.g., TEMPO) that can reversibly bind with the growing polymer radical chain, and (2) the unimolecular process, in which a nitroxyl ether is introduced to the system and then degrades to create both the initiator and mediator radicals.

Based on previous research in the group, which included experimental investigations of both unimolecular and bimolecular NMRP under various conditions, it was possible to build on an earlier model and develop an improved, detailed mechanistic model. Additionally, certain parameters in the model were seen to have little impact on the overall model performance, suggesting that their removal would be appropriate and would also reduce the complexity of the model. Model predictions were compared with experimental data both from within the group and from the general literature, and trends were verified. Further work was done on the development of an additionally reduced model and on the testing of these different levels of model complexity against data. The aim of this analysis was to develop a model that captures the key process responses in a simple, easy-to-implement manner with accuracy comparable to the complete models. Due to its lower complexity, this substantially reduced model would be a much likelier candidate for use in on-line applications.

These different model levels were then applied to the model-based D-optimal design of experiments, with results compared to those generated by a parallel Bayesian design project conducted within the group. Additional work was done using a different optimality criterion, targeted at reducing the amount of parameter correlation that may be seen in D-optimal designs. Finally, conclusions and recommendations for future work were made, including a detailed explanation of how a model similar to the ones described in this thesis could be used in the optimal selection of sensors and design of experiments.
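
The activation/deactivation equilibrium described above can be written as a small ODE system. The sketch below integrates a highly simplified version (no termination, no thermal self-initiation), with rate constants and initial concentrations that are illustrative assumptions rather than the thesis's fitted values.

```python
# Minimal sketch of the reversible activation/deactivation equilibrium at the
# heart of NMRP: dormant chains (P-T) release active radicals (P*) that
# propagate and are trapped again by free nitroxide (T*).
import numpy as np
from scipy.integrate import solve_ivp

k_act, k_deact, k_p = 1e-3, 1e7, 2e3   # 1/s, L/mol/s, L/mol/s (illustrative values)
monomer0 = 8.7                          # mol/L, roughly bulk styrene

def rhs(t, y):
    dormant, active, nitroxide, monomer = y
    activation = k_act * dormant
    deactivation = k_deact * active * nitroxide
    return [
        -activation + deactivation,          # d[P-T]/dt
        activation - deactivation,           # d[P*]/dt
        activation - deactivation,           # d[T*]/dt
        -k_p * active * monomer,             # d[M]/dt
    ]

# Start with dormant chains, a small excess of free nitroxide, and monomer.
sol = solve_ivp(rhs, (0.0, 3600.0), [0.02, 0.0, 0.001, monomer0], method="LSODA")
print("conversion after 1 h:", 1 - sol.y[3, -1] / monomer0)
```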
100

Model Discrimination Using Markov Chain Monte Carlo Methods

Masoumi, Samira 24 April 2013 (has links)
Model discrimination deals with situations in which there are several candidate models available to represent a system. The objective is to find the "best" model among rival models with respect to prediction of system behavior. Empirical and mechanistic models are two important categories of models. Mechanistic models are developed based on physical mechanisms. These types of models can be applied for prediction purposes, but they are also developed to gain improved understanding of the underlying physical mechanism or to estimate physico-chemical parameters of interest. When model discrimination is applied to mechanistic models, the main goal is typically to determine the "correct" underlying physical mechanism. This study focuses on mechanistic models and presents a model discrimination procedure applicable to mechanistic models for the purpose of studying the underlying physical mechanism.

Obtaining the data needed from the real system is one of the challenges, particularly in applications where experiments are expensive or time consuming. Therefore, it is beneficial to get the maximum possible information from the real system using the fewest possible experiments. In this research, a new approach to model discrimination is presented that takes advantage of Monte Carlo (MC) methods. It combines a design of experiments (DOE) method with an adaptation of MC model selection methods to obtain a sequential Bayesian Markov chain Monte Carlo model discrimination framework that is general and usable for a wide range of model discrimination problems. The procedure has been applied to chemical engineering case studies, and the promising results are discussed. Four case studies are presented: order of reaction, rate of Fe(III) formation, copolymerization, and RAFT polymerization. The first three benchmark problems allowed us to refine the proposed approach. Moreover, applying the sequential Bayesian Monte Carlo model discrimination framework to the RAFT problem made a contribution to the polymer community by recommending an approach to selecting the correct mechanism.
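
The core quantity in Bayesian model discrimination is the posterior probability of each rival model given the data. The sketch below illustrates that idea for two hypothetical rate laws (first- versus second-order decay), using a crude Monte Carlo estimate of each model's marginal likelihood from prior samples; the thesis's sequential MCMC framework with experimental design is considerably more elaborate, and all data and priors here are assumptions.

```python
# Minimal sketch of Bayesian discrimination between two rival mechanistic
# models via Monte Carlo estimates of their marginal likelihoods.
import numpy as np

rng = np.random.default_rng(1)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
c0, sigma = 1.0, 0.02
y_obs = c0 * np.exp(-0.3 * t) + rng.normal(0, sigma, t.size)   # synthetic data

def first_order(k):  return c0 * np.exp(-k * t)
def second_order(k): return c0 / (1.0 + k * c0 * t)

def log_marginal_likelihood(model, n_samples=20000):
    k_prior = rng.uniform(0.01, 1.0, n_samples)                # flat prior on rate constant
    resid = y_obs - np.array([model(k) for k in k_prior])
    log_lik = (-0.5 * np.sum((resid / sigma) ** 2, axis=1)
               - t.size * np.log(sigma * np.sqrt(2 * np.pi)))
    return np.log(np.mean(np.exp(log_lik - log_lik.max()))) + log_lik.max()

# Equal prior model probabilities assumed.
log_evidence = np.array([log_marginal_likelihood(m) for m in (first_order, second_order)])
posterior = np.exp(log_evidence - log_evidence.max())
posterior /= posterior.sum()
print("P(first order), P(second order):", posterior)
```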
