1 |
Robust Parameter Design for Automatically Controlled Systems and Nanostructure Synthesis. Dasgupta, Tirthankar (25 June 2007)
This research focuses on developing comprehensive frameworks for robust parameter design methodology for dynamic systems with automatic control and for the synthesis of nanostructures.
In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa; an integrated approach is therefore necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that the experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered, and the best proportional-integral and minimum mean squared error control strategies are developed using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. The idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored.
To meet the needs of large-scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, in large quantities and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of the process variables around their set values, are derived from the fitted models using Monte Carlo simulations. The second part of the research deals with the development of an experimental design methodology tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design, called Sequential Minimum Energy Design (SMED), is proposed for exploring the best process conditions for the synthesis of nanowires. SMED is a novel approach to generating sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
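As a rough illustration of the type of analysis described in the first part, the sketch below fits a logistic model linking the probability of a desired morphology to two hypothetical process variables and scores candidate settings by Monte Carlo simulation of noise around the set values. It simplifies the thesis's multinomial GLM to a binary outcome and uses made-up data, variable names and noise levels; it is not the author's fitting algorithm or optimization procedure.

```python
# Illustrative sketch (hypothetical data): link morphology probability to process
# variables and pick a setting that is robust to noise around the set values.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform([700.0, 1.0], [900.0, 3.0], size=(200, 2))   # temperature (C), pressure (torr)
true_logit = 0.02 * (X[:, 0] - 800.0) - 1.5 * (X[:, 1] - 2.0)
y = (true_logit + rng.normal(0, 1, 200) > 0).astype(int)      # 1 = desired morphology, 0 = other

model = LogisticRegression().fit(X, y)   # binary stand-in for the multinomial GLM

def robust_score(setting, sd=(5.0, 0.1), n_mc=2000):
    """Mean probability of the desired morphology under noise in the set values."""
    noise = rng.normal(0.0, sd, size=(n_mc, 2))
    return model.predict_proba(setting + noise)[:, 1].mean()

candidates = np.mgrid[700:900:21j, 1.0:3.0:21j].reshape(2, -1).T
best = candidates[np.argmax([robust_score(c) for c in candidates])]
print("robust optimum (temperature, pressure):", best)
```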
|
2 |
Multiple testing problems in classical clinical trial and adaptive designs. Deng, Xuan (07 November 2018)
Multiplicity issues arise prevalently in a variety of situations in clinical trials, and statistical methods for multiple testing have gradually gained importance with the increasing number of complex clinical trial designs. In general, two types of multiple testing can be performed (Dmitrienko et al., 2009): union-intersection testing (UIT) and intersection-union testing (IUT). The UIT is of interest in this dissertation; thus, the familywise error rate (FWER) is required to be controlled in the strong sense.
A number of methods have been developed for controlling the FWER, including single-step and stepwise procedures. In single-step approaches, such as the simple Bonferroni method, the rejection decision for a hypothesis does not depend on the decisions for any other hypotheses. Single-step approaches can be improved in terms of power through stepwise approaches, while still controlling the desired error rate. It is also possible to improve these procedures with a parametric approach. In the first project, we developed a new and powerful single-step progressive parametric multiple (SPPM) testing procedure for correlated normal test statistics. Through simulation studies, we demonstrate that SPPM improves power substantially when the correlation is moderate and/or the magnitudes of the effect sizes are similar.
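To make the contrast between the Bonferroni adjustment and a parametric adjustment concrete, the following sketch computes both single-step critical values for equicorrelated normal test statistics by simulation. It is a generic illustration under an assumed common correlation, not the SPPM procedure itself.

```python
# Illustrative sketch: single-step critical values for m correlated normal test
# statistics. Bonferroni ignores the correlation; a parametric adjustment calibrates
# against the joint (here equicorrelated) null distribution by simulation.
import numpy as np
from scipy import stats

m, rho, alpha = 5, 0.5, 0.05
rng = np.random.default_rng(1)

c_bonf = stats.norm.ppf(1 - alpha / (2 * m))                   # two-sided Bonferroni

cov = np.full((m, m), rho) + (1 - rho) * np.eye(m)             # assumed correlation matrix
Z = rng.multivariate_normal(np.zeros(m), cov, size=200_000)
c_param = np.quantile(np.abs(Z).max(axis=1), 1 - alpha)        # (1-alpha) quantile of max|Z_i|

print(f"Bonferroni c = {c_bonf:.3f}, parametric c = {c_param:.3f}")   # parametric c is smaller
```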
Group sequential designs (GSDs) are clinical trials allowing interim looks with the possibility of early termination due to efficacy, harm or futility, which can reduce the overall costs and timelines for the development of a new drug. However, repeated looks at the data also raise multiplicity issues and can inflate the type I error rate. Proper treatments of this error inflation have been discussed widely (Pocock, 1977; O'Brien and Fleming, 1979; Wang and Tsiatis, 1987; Lan and DeMets, 1983). Most of the literature about GSDs focuses on a single endpoint; GSDs with multiple endpoints, however, have also received considerable attention. The main focus of our second project is a GSD with multiple primary endpoints, in which the trial is to evaluate whether at least one of the endpoints is statistically significant. In this study design, multiplicity issues arise from repeated interims and multiple endpoints, so appropriate adjustments must be made to control the type I error rate. Our second purpose here is to show that the combination of multiple endpoints and repeated interim analyses can lead to a more powerful design. Via the multivariate normal distribution, a method is proposed that allows simultaneous consideration of the interim analyses and all clinical endpoints. The new approach is derived from the closure principle, so it controls the type I error rate in the strong sense. We evaluate the power under different scenarios and show that it compares favorably to other methods when the correlation among endpoints is non-zero.
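A small simulation illustrates the multiplicity problem that this project addresses. Under equal information increments, the interim and final test statistics for one endpoint are jointly normal with Cov(Z_j, Z_k) = sqrt(j/k) for j <= k; the sketch below estimates the type I error of unadjusted repeated looks and then searches for a single constant boundary that restores the nominal level. It is an illustration of the inflation and its correction, not the proposed closed-testing procedure for multiple endpoints.

```python
# Illustrative sketch: type I error inflation from K repeated looks on one endpoint.
# Under H0 with equal information increments, Cov(Z_j, Z_k) = sqrt(j/k) for j <= k.
import numpy as np

K, alpha, n_sim = 4, 0.025, 200_000
rng = np.random.default_rng(2)

k = np.arange(1, K + 1)
cov = np.sqrt(np.minimum.outer(k, k) / np.maximum.outer(k, k))
Z = rng.multivariate_normal(np.zeros(K), cov, size=n_sim)

naive = 1.96                                                    # unadjusted one-sided boundary
print("unadjusted FWER:", (Z > naive).any(axis=1).mean())       # well above 0.025

# Crude search for a single (Pocock-type) constant boundary that restores alpha.
grid = np.linspace(2.0, 2.6, 121)
adj = grid[np.argmin(np.abs([(Z > c).any(axis=1).mean() - alpha for c in grid]))]
print("adjusted constant boundary:", round(adj, 3))
```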
In the group sequential design framework, another interesting topic is the multiple arm multiple stage (MAMS) design, where multiple arms are involved in the trial at the beginning, with flexibility about treatment selection or stopping decisions during the interim analyses. One of the major hurdles of MAMS is the computational cost, which grows with the number of arms and interim looks. Various designs have been implemented to overcome this difficulty (Thall et al., 1988; Schaid et al., 1990; Follmann et al., 1994; Stallard and Todd, 2003; Stallard and Friede, 2008; Magirr et al., 2012; Wason et al., 2017) while still controlling the FWER against the potential inflation from multiple arm comparisons and multiple interim tests. Here, we consider a more flexible drop-the-loser design that allows safety information to inform treatment selection without a pre-specified arm-dropping mechanism, while still retaining reasonably high power. Two different types of stopping boundaries are proposed for such a design. The sample size is also adjustable if the winner arm is dropped due to safety considerations.
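A toy two-stage drop-the-loser simulation, in the spirit of (but much simpler than) the design discussed above, is sketched below: the better-performing experimental arm at the interim continues alongside the control, and a naive final test pools both stages, which makes the selection-induced error inflation visible. The selection rule, sample sizes and effect sizes are illustrative only.

```python
# Illustrative sketch: two-stage drop-the-loser trial, two experimental arms vs control.
# Stage 1 randomises to all arms; the arm with the larger interim effect continues.
# A naive final test that pools both stages shows the selection-induced inflation.
import numpy as np
from scipy import stats

def simulate_trial(deltas=(0.0, 0.0), n1=50, n2=50, rng=None):
    rng = rng or np.random.default_rng()
    ctrl1 = rng.normal(0, 1, n1)
    arms1 = [rng.normal(d, 1, n1) for d in deltas]
    keep = int(np.argmax([a.mean() - ctrl1.mean() for a in arms1]))   # drop the loser
    ctrl = np.concatenate([ctrl1, rng.normal(0, 1, n2)])
    arm = np.concatenate([arms1[keep], rng.normal(deltas[keep], 1, n2)])
    return stats.ttest_ind(arm, ctrl, alternative="greater").pvalue

rng = np.random.default_rng(3)
pvals = np.array([simulate_trial(rng=rng) for _ in range(5000)])      # both arms null here
print("type I error at nominal 0.025:", (pvals < 0.025).mean())       # inflated by selection
```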
|
3 |
The use and non-use of sports supplements: A mixed methods study among people exercising at gyms. Moberg, Kajsa (January 2017)
Sports supplements include nutritional supplements and ergogenic aids and are widely used in gym culture. Previous research has examined predictors of supplement use, but lacks insight into why these patterns appear. The objective was to examine predictors of sports supplement use among people exercising at gyms and to explore how sports supplements are used, perceived and viewed among a group of regular gym users. A mixed methods explanatory sequential design was used. In phase 1, an online cross-sectional survey was conducted. Phase 2 consisted of six semi-structured interviews exploring why sports supplements are used and not used, as well as expectations and beliefs regarding sports supplements among training individuals. In total, 85 individuals participated in the survey, of whom 68 percent used sports supplements regularly, but no predictors from previous research could be confirmed. The interviews showed that supplements were used for convenience and to ensure a sufficient nutritional intake, while non-users expressed a lack of knowledge and believed supplements to be inefficient and unnecessary. No predictors of sports supplement use were confirmed, but both users and non-users value health responsibility highly in their decisions about supplement use. To users, sports supplements are efficient and convenient dietary complements and replacements. Non-users regard sports supplements as unnecessary, inefficient and less enjoyable than food. Due to the small sample size, more studies are needed within the field in order to fully understand the role of sports supplements in the target group.
|
4 |
Experiences of employees in a non-profit organisation: the role of psychological capital. Theron, Lorette (January 2015)
Research regarding employee well-being has generally been neglected in the non-profit organisation (NPO) sector. In many respects NPOs function similarly to for-profit organisations, but they face challenges such as greater financial constraints. Despite these difficulties, many people choose to work at and remain employed with NPOs. The NPO sector is expanding at a rapid pace and therefore needs to recruit and retain people more effectively without spending too many resources. The objective of this study was to investigate the role of psychological capital (PsyCap) in the decision to work in the NPO sector, and to determine further reasons for choosing and remaining in this sector. An explanatory sequential mixed method design was used with an availability sample (N = 108) of employees at an NPO in the social services sector in the Gauteng and North West provinces. In the quantitative study, the Psychological Capital Questionnaire (PCQ) was used as the measuring instrument. The qualitative study entailed semi-structured interviews with participants with lower (n = 8) and higher (n = 8) PsyCap. The results indicated that NPO employees had a higher level of PsyCap. Differences with regard to their preference to work at an NPO were found between individuals with higher and lower levels of PsyCap, specifically pertaining to the reasons for joining an NPO, motivation, meaning, fulfilment and viewing their work as a calling. No clear inconsistencies with regard to rewards and the choice of working in the NPO, public and private sectors were found among individuals with higher and lower PsyCap. The main reasons influencing the decision to work at an NPO were altruism, type of rewards, job satisfaction, organisational factors, positive social influence, and experiencing work at an NPO as a calling. The study addresses the lack of research on employee well-being in the NPO sector and extends PsyCap research to NPOs. Characteristics of employees who choose to work in NPOs are emphasised. Recommendations for the organisation and suggestions for future research are presented. / MA (Industrial Psychology)--North-West University, Vaal Triangle Campus, 2015
|
5 |
A comparison of adaptive designs in clinical trials: when multiple treatments are tested in multiple stages. Park, Sukyung (09 October 2014)
In recent years, there has been increasing interest in adaptive designs for clinical trials. As opposed to conventional designs, adaptive designs allow flexible design adaptation in the middle of a trial based on accumulated data. Although various models have been developed from both frequentist and Bayesian perspectives, the relative statistical performance of adaptive designs is somewhat controversial, and little is known about that of Bayesian adaptive designs. Most comparison studies have also focused on a single experimental treatment rather than multiple experimental treatments. In this report, frequentist and Bayesian adaptive designs are compared in terms of statistical power through a simulation study, assuming the situation in which multiple experimental treatments are tested in multiple stages. The designs included in the current study are the group sequential design (frequentist), an adaptive design based on combination tests (frequentist), and a Bayesian adaptive design. Based upon the results under multiple scenarios, the Bayesian adaptive design showed the highest power, and the design based on combination tests performed better than group sequential designs when proper interim adaptation could be conducted to increase power.
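The following is a highly simplified sketch of the Bayesian style of adaptation compared in this report: binary outcomes, Beta posteriors, interim arm-dropping and early stopping based on posterior probabilities. The thresholds, stage sizes and response rates are invented for illustration and do not correspond to the report's simulation settings.

```python
# Illustrative sketch: Bayesian adaptive trial, two experimental arms vs control,
# binary outcomes with Beta(1, 1) priors. Interim rules drop poor arms and allow
# early stopping for efficacy; all thresholds and rates are invented.
import numpy as np

rng = np.random.default_rng(4)
p_true = {"control": 0.30, "A": 0.45, "B": 0.32}
successes = {k: 0 for k in p_true}
n = {k: 0 for k in p_true}
active = ["A", "B"]

def prob_beats_control(arm, draws=10_000):
    post = lambda k: rng.beta(1 + successes[k], 1 + n[k] - successes[k], draws)
    return (post(arm) > post("control")).mean()

for stage in range(3):                                   # three stages, 30 patients per arm
    for k in ["control"] + active:
        outcomes = rng.binomial(1, p_true[k], 30)
        successes[k] += outcomes.sum()
        n[k] += 30
    active = [a for a in active if prob_beats_control(a) > 0.10]      # drop unpromising arms
    if any(prob_beats_control(a) > 0.975 for a in active):            # stop early for efficacy
        print(f"stopped for efficacy after stage {stage + 1}")
        break

print("P(arm beats control):", {a: round(prob_beats_control(a), 3) for a in active})
```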
|
6 |
Dismembering the Multi-Armed Bandit. Keaton, Timothy J (14 August 2019)
The multi-armed bandit (MAB) problem refers to the task of sequentially assigning treatments to experimental units so as to identify the best treatment(s) while controlling the opportunity cost of further investigation. Many algorithms have been developed that attempt to balance this trade-off between exploiting the seemingly optimum treatment and exploring the other treatments. The selection of an MAB algorithm for implementation in a particular context is often performed by comparing candidate algorithms in terms of their abilities to control the expected regret of exploration versus exploitation. This singular criterion of mean regret is insufficient for many practical problems, and therefore an additional criterion that should be considered is control of the variance, or risk, of regret.

This work provides an overview of how the existing prominent MAB algorithms handle both criteria. We additionally investigate the effects of incorporating prior information into an algorithm's model, including how sharing information across treatments affects the mean and variance of regret.

A unified and accessible framework does not currently exist for constructing MAB algorithms that control both of these criteria. To this end, we develop such a framework based on the two elementary concepts of dismemberment of treatments and a designed learning phase prior to dismemberment. These concepts can be incorporated into existing MAB algorithms to effectively yield new algorithms that better control the expectation and variance of regret. We demonstrate the utility of our framework by constructing new variants of the Thompson sampler that involve a small number of simple tuning parameters. As we illustrate in simulation and case studies, these new algorithms are implemented in a straightforward manner and achieve improved control of both regret criteria compared to the traditional Thompson sampler. Ultimately, our consideration of additional criteria besides expected regret illuminates novel insights into the multi-armed bandit problem.

Finally, we present visualization methods, and a corresponding R Shiny app for their practical execution, that can yield insights into the comparative performances of popular MAB algorithms. Our visualizations illuminate the frequentist dynamics of these algorithms in terms of how they perform the exploration-exploitation trade-off over their populations of realizations, as well as the algorithms' relative regret behaviors. The constructions of our visualizations facilitate a straightforward understanding of complicated MAB algorithms, so that our visualizations and app can serve as unique and interesting pedagogical tools for students and instructors of experimental design.
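As a concrete reference point for the regret criteria discussed above, a minimal Thompson sampler for Bernoulli arms is sketched below, summarising cumulative regret across replications by both its mean and its standard deviation. This is the plain Thompson sampler, not the dismemberment-based variants developed in the thesis, and the arm probabilities and horizon are arbitrary.

```python
# Illustrative sketch: plain Thompson sampling on Bernoulli arms, with cumulative
# regret summarised by both its mean and its variability across replications.
import numpy as np

def thompson_regret(p=(0.4, 0.5, 0.6), horizon=2000, rng=None):
    rng = rng or np.random.default_rng()
    wins = np.ones(len(p))                      # Beta(1, 1) priors
    losses = np.ones(len(p))
    regret = 0.0
    for _ in range(horizon):
        arm = int(np.argmax(rng.beta(wins, losses)))        # sample from each posterior
        reward = rng.binomial(1, p[arm])
        wins[arm] += reward
        losses[arm] += 1 - reward
        regret += max(p) - p[arm]                           # expected (pseudo-)regret
    return regret

rng = np.random.default_rng(5)
reps = np.array([thompson_regret(rng=rng) for _ in range(500)])
print(f"mean regret = {reps.mean():.1f}, regret std dev = {reps.std():.1f}")
```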
|
7 |
Robust design using sequential computer experiments. Gupta, Abhishek (30 September 2004)
Modern engineering design tends to use computer simulations such as Finite Element Analysis (FEA) to replace physical experiments when evaluating a quality response, e.g., the stress level in a phone packaging process. The use of computer models has certain advantages over running physical experiments, such as lower cost, the ease of trying out different design alternatives, and a greater impact on product design. However, due to the complexity of FEA codes, it can be computationally expensive to calculate the quality response function over a large number of combinations of design and environmental factors. Traditional experimental design and response surface methodology, which were developed for physical experiments in the presence of random errors, are not very effective in dealing with deterministic FEA simulation outputs. In this thesis, we utilize a spatial statistical method (i.e., a Kriging model) for analyzing deterministic computer simulation-based experiments. Subsequently, we devise a sequential strategy that allows us to explore the whole response surface in an efficient way. The overall number of computer experiments is markedly reduced compared with traditional response surface methodology. The proposed methodology is illustrated using an electronic packaging example.
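A small sketch of the two ingredients named in the abstract follows: a Kriging (Gaussian-process) surrogate and a sequential rule that adds the point of highest predictive uncertainty. The one-dimensional test function, kernel and stopping rule are placeholders rather than the thesis's FEA response or its actual sequential criterion.

```python
# Illustrative sketch: Kriging (Gaussian process) surrogate of a cheap stand-in for
# a deterministic simulation, with sequential runs added where predictive sd is largest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):                       # placeholder for a deterministic FEA code
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(6)
X = rng.uniform(0, 2, (5, 1))                      # small initial design
y = expensive_simulation(X).ravel()
grid = np.linspace(0, 2, 201).reshape(-1, 1)

for _ in range(10):                                # each stage adds the most uncertain point
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True).fit(X, y)
    _, sd = gp.predict(grid, return_std=True)
    x_new = grid[np.argmax(sd)].reshape(1, 1)
    X = np.vstack([X, x_new])
    y = np.append(y, expensive_simulation(x_new).ravel())

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True).fit(X, y)
print("design size:", len(X),
      "| max remaining predictive sd:", gp.predict(grid, return_std=True)[1].max().round(4))
```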
|
8 |
AN APPROACH FOR FINDING A GENERAL APPROXIMATION TO THE GROUP SEQUENTIAL BOOTSTRAP TEST. Ekstedt, Douglas (January 2022)
Randomized experiments are regarded as the gold standard for estimating causal effects. Commonly, a single test is performed using a fixed sample size. However, observations may also arrive sequentially, and for economic and ethical reasons it may be desirable to terminate the trial early. The group sequential design allows for interim analyses and early stopping of a trial without the need for continuous monitoring of the accumulating data. The implementation of a group sequential procedure requires that the test statistic observed at each wave of testing have a known or asymptotically known sampling distribution. This thesis investigates an approach for finding a general approximation to the group sequential bootstrap test for test statistics with unknown or analytically intractable sampling distributions; there is currently no bootstrap version of the group sequential test. The approach involves approximating the covariance structure of the test statistics over time, but not their marginal sampling distributions, with that of a normal test statistic. The evaluation is performed with a Monte Carlo simulation study in which the achieved significance level is compared with the nominal level. Evidence from the Monte Carlo simulations suggests that the approach performs well for test statistics whose sampling distributions are close to a normal distribution.
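A minimal sketch of the kind of Monte Carlo check described above: a statistic with a non-normal, analytically awkward sampling distribution (the sample median of lognormal data) is monitored at two looks using a bootstrap standard error, the boundary is borrowed from the normal-theory group sequential design (the Pocock constant of about 2.178 for two looks at two-sided 5 percent), and the achieved significance level is compared with the nominal one. The statistic, looks and simulation sizes are illustrative, not the thesis's.

```python
# Illustrative sketch: achieved vs nominal significance for a bootstrap-standardised
# statistic (sample median of lognormal data) monitored at two looks, using the
# normal-theory Pocock constant (~2.178 for K = 2 looks, two-sided alpha = 0.05).
import numpy as np

rng = np.random.default_rng(7)
looks = (50, 100)
c_pocock = 2.178
median_0 = 1.0                     # lognormal(0, 1) has median exp(0) = 1 under H0

def boot_z(x, n_boot=300):
    med = np.median(x)
    boot = np.array([np.median(rng.choice(x, size=len(x))) for _ in range(n_boot)])
    return (med - median_0) / boot.std()

n_sim, rejections = 1000, 0
for _ in range(n_sim):
    data = rng.lognormal(0.0, 1.0, max(looks))
    if any(abs(boot_z(data[:n])) > c_pocock for n in looks):
        rejections += 1

print("achieved significance level:", rejections / n_sim, "(nominal 0.05)")
```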
|