291 |
Methods for evaluating dropout attrition in survey data
Hochheimer, Camille J, 01 January 2019
As researchers increasingly use web-based surveys, the ease of dropping out in the online setting is a growing issue in ensuring data quality. One theory is that dropout or attrition occurs in phases that can be generalized to phases of high dropout and phases of stable use. Several methods for detecting these phases are explored. First, existing methods and user-specified thresholds are applied to survey data, where a significant change in the dropout rate between two questions is interpreted as the start or end of a high-dropout phase. Next, survey dropout is considered as a time-to-event outcome, and tests within change-point hazard models are introduced. Performance of these change-point hazard models is compared. Finally, all methods are applied to survey data on patient cancer screening preferences, testing the null hypothesis of no phases of attrition (no change-points) against the alternative hypothesis that distinct attrition phases exist (at least one change-point).
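As an illustration of the change-point idea (not the thesis' actual tests, whose null distributions require special care at the change-point), a minimal likelihood-ratio sketch for one change-point in a piecewise-constant dropout hazard might look as follows; the data, the question-30 horizon, and the phase structure are all invented:

```python
# Hypothetical sketch: likelihood-ratio scan for one change-point in a
# piecewise-constant (exponential) dropout hazard, with censoring for
# respondents who finish the survey. Not the thesis' exact procedure.
import numpy as np

rng = np.random.default_rng(0)

# Simulated "time to dropout" measured in question numbers:
# a high-dropout phase early on, then a stable phase.
n = 500
t = np.where(rng.random(n) < 0.4, rng.exponential(3, n), 5 + rng.exponential(20, n))
t = np.clip(t, 0, 30)                      # survey ends at question 30
event = t < 30                             # False = completed survey (censored)

def exp_loglik(times, events):
    """Log-likelihood of a constant-hazard model, allowing censoring."""
    d, exposure = events.sum(), times.sum()
    if d == 0:
        return 0.0                         # supremum as the hazard tends to zero
    lam = d / exposure                     # MLE of the hazard rate
    return d * np.log(lam) - lam * exposure

ll0 = exp_loglik(t, event)                 # null model: a single attrition phase

# Alternative: one change-point tau, with separate hazards before and after.
best = max(
    (exp_loglik(np.minimum(t, tau), event & (t <= tau))
     + exp_loglik(np.maximum(t - tau, 0), event & (t > tau)), tau)
    for tau in range(1, 30)
)
lr = 2 * (best[0] - ll0)
print(f"estimated change-point ~ question {best[1]}, LR statistic = {lr:.1f}")
```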
|
292 |
SPC and DOE in production of organic electronics
Nilsson, Marcus; Ruth, Johan, January 2006
At Acreo AB in Norrköping, Sweden, research and development in the field of organic electronics has been conducted since 1998. Several electronic devices and systems have been realized, and in late 2003 a commercial printing press was installed to test large-scale production of these devices. Prior to the summer of 2005 the project made significant progress. As a step towards industrialisation, the variability and yield of the printing process needed to be studied, and a decision was taken to implement Statistical Process Control (SPC) and Design of Experiments (DOE) to evaluate and improve the process.

SPC has been implemented on the EC-patterning step in the process. A total of 26 samples were taken during the period October-December 2005, and an X-bar chart and an s chart were constructed from them. The charts clearly show that the process is not in statistical control. Investigations of what causes the variation in the process have been performed, and the following root causes of variation have been found: PEDOT:PSS-substrate sheet resistance and poorly cleaned screen-printing drums. After removing points affected by these root causes, the process is still not in control; further investigations are needed, and examples of where to go next are presented in the report.

In the DOE part, a four-factor full factorial experiment was performed. The goal of the experiment was to find how different factors affect the switch time and lifetime of an electrochromic display. The four factors investigated were: Electrolyte, Additive, Web speed and Encapsulation. All statistical analysis was performed using Minitab 14. The analysis of measurements from one day and seven days after printing showed that:

- Changing the electrolyte from E230 to E235 has a small effect on the switch time
- Adding additives Add1 and Add2 decreases the switch time after 1 and 7 days
- Increasing web speed decreases the switch time after 1 and 7 days
- Encapsulation before the UV step decreases the switch time after 7 days
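For orientation on the SPC step, a minimal sketch of how X-bar and s chart limits are computed from 26 subgroups follows; the data, the subgroup size of 5, and the measured quantity are assumptions, not Acreo's actual measurements:

```python
# Minimal X-bar and s control-chart computation, assuming subgroups of
# size 5; A3, B3, B4 are the standard SPC constants for n = 5.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(100, 4, size=(26, 5))   # 26 subgroups, e.g. sheet resistance

xbar = samples.mean(axis=1)                  # subgroup means
s = samples.std(axis=1, ddof=1)              # subgroup standard deviations
xbarbar, sbar = xbar.mean(), s.mean()

A3, B3, B4 = 1.427, 0.0, 2.089               # control-chart constants for n = 5
ucl_x, lcl_x = xbarbar + A3 * sbar, xbarbar - A3 * sbar
print(f"X-bar chart: CL={xbarbar:.2f}, UCL={ucl_x:.2f}, LCL={lcl_x:.2f}")
print(f"s chart:     CL={sbar:.2f}, UCL={B4*sbar:.2f}, LCL={B3*sbar:.2f}")
print("subgroups out of control:", np.where((xbar > ucl_x) | (xbar < lcl_x))[0])
```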
|
293 |
Experimental Designs at the Crossroads of Drug Discovery
Olsson, Ing-Marie, January 2006
New techniques and approaches for organic synthesis, purification and biological testing are enabling pharmaceutical industries to produce and test increasing numbers of compounds every year. Surprisingly, this has not led to more new drugs reaching the market, prompting two questions: why is there not a better correlation between their efforts and output, and can it be improved? One possible way to make the drug discovery process more efficient is to ensure, at an early stage, that the tested compounds are diverse, representative and of high quality. In addition, the biological evaluation systems have to be relevant and reliable. The diversity of the tested compounds could be ensured, and the reliability of the biological assays improved, by using Design of Experiments (DOE) more frequently and effectively. However, DOE currently offers insufficient options for these purposes, so there is a need for new, tailor-made DOE strategies. The aim of the work underlying this thesis was to develop and evaluate DOE approaches for diverse compound selection and efficient assay optimisation. This resulted in the publication of two new DOE strategies, D-optimal Onion Design (DOOD) and Rectangular Experimental Designs for Multi-Unit Platforms (RED-MUP), both of which are extensions to established experimental designs.

D-optimal Onion Design (DOOD) is an extension of D-optimal design: the set of possible objects that could be selected is divided into layers, and D-optimal selection is applied to each layer. DOOD enables model-based, but not model-dependent, selections in discrete spaces, since the selections are not only based on the D-optimality criterion but are also biased by the experimenter's prior knowledge and specific needs. Hence, DOOD selections provide controlled diversity.

Assay development and optimisation can be a major bottleneck restricting the progress of a project. Although DOE is a recognised tool for optimising experimental systems, there has been widespread unwillingness to use it for assay optimisation, mostly because of the difficulties involved in performing experiments according to designs in 96-, 384- and 1536-well formats. The RED-MUP framework maps classical experimental designs orthogonally onto rectangular experimental platforms, which facilitates the execution of DOE on these platforms and hence provides an efficient tool for assay optimisation.

In combination, these two strategies can lead to higher information content in the data received from biological evaluations, providing essential information for well-grounded decisions on the future of a project, and can help researchers identify the best routes to take at the crossroads linking the biological and chemical elements of drug discovery programs.
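As a rough illustration of the DOOD idea, and not the published algorithm itself, one can split a candidate set into concentric layers and run a greedy D-optimal-style selection inside each layer; the descriptors, layer count and selection sizes below are invented:

```python
# Illustrative greedy approximation of an onion design: partition candidates
# into layers by distance from the centre, then pick a subset per layer by
# greedily maximising det(X'X). A simplification, not the published DOOD.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 4))                   # candidate compounds, 4 descriptors

r = np.linalg.norm(X - X.mean(0), axis=1)
layers = np.digitize(r, np.quantile(r, [1/3, 2/3]))   # 3 concentric layers

def greedy_d_optimal(cands, k, ridge=1e-6):
    """Greedily add the candidate that most increases det(X'X + ridge*I)."""
    chosen = []
    for _ in range(k):
        dets = []
        for i in range(len(cands)):
            if i in chosen:
                dets.append(-np.inf)
                continue
            trial = cands[chosen + [i]]
            M = trial.T @ trial + ridge * np.eye(cands.shape[1])
            dets.append(np.linalg.slogdet(M)[1])  # log-det, numerically stable
        chosen.append(int(np.argmax(dets)))
    return chosen

for layer in range(3):
    idx = np.where(layers == layer)[0]
    picks = greedy_d_optimal(X[idx], k=5)
    print(f"layer {layer}: selected candidates {idx[picks]}")
```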
|
294 |
Multivariate profiling of metabolites in human disease: Method evaluation and application to prostate cancer
Thysell, Elin, January 2012
There is an ever-increasing need for new technologies to identify molecular markers for early diagnosis of fatal diseases, allowing efficient treatment. There is also great value in finding patterns of metabolites, proteins or genes altered in relation to specific disease conditions, to gain a deeper understanding of the underlying mechanisms of disease development. If successful, scientific achievements in this field could, apart from early diagnosis, lead to the development of new drugs, treatments or preventions for many serious diseases.

Metabolites are low molecular weight compounds involved in the chemical reactions taking place in the cells of living organisms to uphold life, i.e. metabolism. The research field of metabolomics investigates the relationship between metabolite alterations and biochemical mechanisms, e.g. disease processes. To understand these associations, hundreds of metabolites present in a sample are quantified using sensitive bioanalytical techniques. In this way a unique chemical fingerprint is obtained for each sample, providing an instant picture of the current state of the studied system. This fingerprint can then be utilized for the discovery of biomarkers or biomarker patterns of biological and clinical relevance.

In this thesis the focus is on evaluation and application of strategies for studying metabolic alterations in human tissues associated with disease. A chemometric methodology for processing and modeling of gas chromatography-mass spectrometry (GC-MS) based metabolomics data is designed for developing predictive systems: for generation of representative data, validation and result verification, and diagnosis and screening of large sample sets. The developed strategies were specifically applied for identification of metabolite markers and metabolic pathways associated with prostate cancer disease progression. The long-term goal was to detect new sensitive diagnostic and prognostic markers, which ultimately could be used to differentiate between indolent and aggressive tumors at diagnosis and thus aid in the development of personalized treatments.

Our main finding so far is the detection of high levels of cholesterol in prostate cancer bone metastases. This, in combination with previously presented results, suggests cholesterol as a potentially interesting therapeutic target for advanced prostate cancer. Furthermore, we detected metabolic alterations in plasma associated with metastasis development. These results were further explored in prospective samples, attempting to verify some of the identified metabolites as potential prognostic markers.
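A hedged sketch of the kind of chemometric classification workflow described, with scikit-learn's PLS regression standing in for the OPLS-type models typically used in this field, and with simulated data in place of GC-MS profiles:

```python
# Sketch: PLS regression on a binary class label (a common stand-in for
# PLS-DA), cross-validated, then ranking metabolites by PLS weight.
# Simulated data only; not the thesis' actual pipeline or results.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n, p = 60, 200                         # 60 samples, 200 resolved metabolites
y = np.repeat([0, 1], n // 2)          # 0 = benign, 1 = tumour (labels hypothetical)
X = rng.normal(size=(n, p))
X[y == 1, :10] += 1.0                  # ten metabolites shifted in the tumour group

pls = PLSRegression(n_components=2, scale=True)
y_cv = cross_val_predict(pls, X, y.astype(float), cv=7).ravel()
print(f"7-fold CV accuracy: {((y_cv > 0.5) == y).mean():.2f}")

# Candidate markers: metabolites with large absolute weights on component 1.
pls.fit(X, y.astype(float))
top = np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:10]
print("candidate marker metabolites:", top)
```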
|
295 |
Metabolomics studies of ALS: a multivariate search for clues about a devastating disease
Wuolikainen, Anna, January 2009
Amyotrophic lateral sclerosis (ALS), also known as Charcot’s disease, motor neuron disease (MND) and Lou Gehrig’s disease, is a deadly, adult-onset neurodegenerative disorder characterized by progressive loss of upper and lower motor neurons, resulting in evolving paresis of the linked muscles. ALS is defined by classical features of the disease, but may present as a wide spectrum of phenotypes. About 10% of all ALS cases have been reported as familial (FALS), of which about 20% have been associated with mutations in the gene encoding CuZn superoxide dismutase (SOD1). The remaining cases are regarded as sporadic (SALS).

Research has advanced our understanding of the disease, but the cause is still unknown, no reliable diagnostic test exists, no cure has been found, and the current therapies are unsatisfactory. Riluzole (Rilutek®) is the only registered drug for the treatment of ALS; it has shown only a modest effect in prolonging life, and its mechanism of action is not yet fully understood. ALS is diagnosed by excluding diseases with similar symptoms. At an early stage, there are numerous possible diseases that may present with similar symptoms, making the diagnostic procedure cumbersome, extensive and time-consuming, with a significant risk of misdiagnosis. Biomarkers that can be developed into a diagnostic test for ALS are therefore needed. The high number of unsuccessful attempts at finding a single disease-specific marker, in combination with the complexity of the disease, indicates that a pattern of several markers is more likely to provide a diagnostic signature for ALS.

Metabolomics, in combination with chemometrics, can be a useful tool with which to study human disease. Metabolomics can screen for small molecules in biofluids such as cerebrospinal fluid (CSF), and chemometrics provides structure and tools for handling the types of data metabolomics generates. In this thesis, ALS has been studied using a combination of metabolomics and chemometrics. Collection and storage of CSF in relation to metabolite stability have been extensively evaluated. Protocols for metabolomics on CSF samples have been proposed, used and evaluated. In addition, a new data-processing feature allowing new samples to be predicted into existing models has been tested, evaluated and used for metabolomics on blood and CSF.

A panel of potential biomarkers has been generated for ALS and subtypes of ALS. An overall decrease in metabolite concentration was found for subjects with ALS compared to their matched controls; glutamic acid was one of the metabolites found to be decreased in patients with ALS. A larger metabolic heterogeneity was detected among SALS cases compared to FALS. This was also reflected in models of SALS and FALS against their respective matched controls, where no significant difference from control was found for SALS, while the FALS samples significantly differed from their matched controls. Significantly deviating metabolic patterns were also found between ALS subjects carrying different mutations in the gene encoding SOD1.
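The "predict new samples into existing models" step can be illustrated, under the assumption of a simple PCA model and simulated data, by projecting a new batch into the fitted model and checking its distance to the model:

```python
# Illustrative projection of new samples into an existing multivariate model:
# fit PCA on the original CSF profiles, project new samples, and use the
# residual (distance to model) to judge fit. Simulated data only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X_train = rng.normal(size=(40, 120))          # original metabolite profiles
X_new = rng.normal(size=(5, 120)) + 0.5       # new batch, slightly shifted

pca = PCA(n_components=3).fit(X_train)
scores_new = pca.transform(X_new)             # positions in the existing model

# Distance to model: the part of each new sample the model cannot explain.
resid = X_new - pca.inverse_transform(scores_new)
dmod = np.linalg.norm(resid, axis=1)
ref = np.linalg.norm(X_train - pca.inverse_transform(pca.transform(X_train)), axis=1)
print("flagged as poorly fitted:", np.where(dmod > np.quantile(ref, 0.95))[0])
```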
|
296 |
Robust inference of gene regulatory networks: System properties, variable selection, subnetworks, and design of experiments
Nordling, Torbjörn E. M., January 2013
In this thesis, inference of biological networks from in vivo data generated by perturbation experiments is considered, i.e. deduction of the causal interactions that exist among the observed variables. Knowledge of such regulatory influences is essential in biology.

A system property, interampatteness, is introduced that explains why the variation in existing gene expression data is concentrated to a few “characteristic modes” or “eigengenes”, and why previously inferred models have a large number of false positive and false negative links. An interampatte system is characterized by strong INTERactions enabling simultaneous AMPlification and ATTEnuation of different signals, and we show that perturbation of individual state variables, e.g. genes, typically leads to ill-conditioned data with both characteristic and weak modes. The weak modes are typically dominated by measurement noise due to poor excitation, and their existence hampers network reconstruction. The excitation problem is solved by iterative design of correlated multi-gene perturbation experiments that counteract the intrinsic signal attenuation of the system: the next perturbation should be designed such that the expected response practically spans an additional dimension of the state space. The proposed design is numerically demonstrated for the Snf1 signalling pathway in S. cerevisiae.

The impact of unperturbed and unobserved latent state variables, which exist in any real biological system, on the inferred network and on the required experimental set-up for network inference is analysed. Their existence implies that, in general, a subnetwork of pseudo-direct causal regulatory influences, accounting for all environmental effects, is inferred. In principle, the number of latent states and the number of different paths between the nodes of the network can be estimated, but their identity cannot be determined unless they are observed or perturbed directly.

Network inference is recognized as a variable/model selection problem and solved by considering all possible models of a specified class that can explain the data at a desired significance level, and by classifying only the links present in all of these models as existing. As shown, these links can be determined without any parameter estimation by reformulating the variable selection problem as a robust rank problem. Solution of the rank problem enables assignment of confidence to individual interactions, without resorting to any approximation or asymptotic results. This is demonstrated by reverse engineering of the synthetic IRMA gene regulatory network from published data. A previously unknown activation of transcription of SWI5 by CBF1 in the IRMA strain of S. cerevisiae is proven to exist, which serves to illustrate that even the accumulated knowledge of well-studied genes is incomplete.
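A toy numpy sketch of the experiment-design principle, with an invented system matrix in place of a real pathway: weakly excited directions appear as small singular values of the response data, and the next correlated multi-gene perturbation is aimed so that its expected response spans the weakest mode:

```python
# Toy sketch of SVD-guided perturbation design; the "interampatte" system
# matrix A and the perturbation scaling are invented for illustration.
import numpy as np

rng = np.random.default_rng(5)

# Strong interactions give an ill-conditioned steady-state response map.
n = 6
A = rng.normal(size=(n, n)) + 5 * np.outer(rng.normal(size=n), rng.normal(size=n))

P = np.eye(n)[:, :4]              # four single-gene perturbations done so far
Y = np.linalg.solve(A, P)         # steady-state responses (noise-free toy model)

U, svals, Vt = np.linalg.svd(Y, full_matrices=True)
print("singular values of the response data:", np.round(svals, 3))

# Aim the next perturbation so the expected response spans the weak mode:
weak_dir = U[:, -1]                       # response direction not yet excited
p_next = A @ weak_dir                     # perturbation producing that response
p_next /= np.abs(p_next).max()            # scale to a feasible strength
print("suggested correlated multi-gene perturbation:", np.round(p_next, 2))
```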
|
297 |
Analysis Of The Influence Of Non-machining Process Parameters On Product Quality By Experimental Design And Statistical Analysis
Yurtseven, Saygin, 01 September 2003
This thesis illustrates analysis of the influence of non-machining processes on product quality by experimental design and statistical analysis. As the object of the analysis, dishwasher production in the Arcelik dishwasher plant is examined. Sheet metal forming constitutes the greatest portion of the production cost, and using the Pareto analysis technique, four of the twenty-six pieces are selected for investigation: the U sheet, L sheet, inner door and side panel of the dishwasher. The production processes of these pieces are defined with the help of flow diagrams. Brainstorming and cause-and-effect diagrams are used to determine which non-machining process parameters can cause pieces to be scrapped; these parameters are then used as control factors in the experimental design. Taguchi's L16(2^15) orthogonal array, Taguchi's L16(2^15) orthogonal array with S/N transformation, and a 2^(8-4) fractional factorial design are employed. With repetitions and confirmation experiments, the effective parameters are determined and their optimum levels defined, yielding improvements in scrap quantity and production quality.
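For orientation, a hedged sketch of this style of analysis: an L16(2^15)-type orthogonal array built from a 2^4 full factorial (its 15 columns are all products of the four base columns), a simulated scrap-related response, and main effects ranked by a smaller-is-better S/N ratio. The factor names and effect sizes are invented, not the thesis' values:

```python
# Sketch of a Taguchi-style analysis on a 16-run, 15-column orthogonal array.
import itertools
import numpy as np

base = np.array(list(itertools.product([-1, 1], repeat=4)))   # 16 runs x 4 cols
cols = [np.prod(base[:, list(s)], axis=1)
        for k in range(1, 5) for s in itertools.combinations(range(4), k)]
L16 = np.column_stack(cols)                                   # 16 x 15, orthogonal

rng = np.random.default_rng(6)
# Four hypothetical forming-process factors assigned to 4 of the 15 columns.
factors = {"blank_holder": 0, "lubrication": 1, "die_clearance": 2, "feed_rate": 4}
signal = 5 - 1.5 * L16[:, 0] + 0.8 * L16[:, 2]                # invented true effects
y = signal[:, None] + rng.normal(0, 0.5, (16, 3))             # 3 replicates per run

sn = -10 * np.log10((y ** 2).mean(axis=1))     # smaller-is-better S/N ratio
for name, c in factors.items():
    effect = sn[L16[:, c] == 1].mean() - sn[L16[:, c] == -1].mean()
    print(f"{name:>13}: S/N effect = {effect:+.2f} dB")
```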
|
299 |
A Systematic Process for Adaptive Concept Exploration
Nixon, Janel Nicole, 29 November 2006
This thesis presents a method for streamlining the process of obtaining and interpreting quantitative data for the purpose of creating a low-fidelity modeling and simulation environment. By providing a more efficient means of obtaining such information, quantitative analyses become much more practical for decision-making in the very early stages of design, where traditionally they are viewed as too expensive and cumbersome for concept evaluation.
The method developed to address this need uses a Systematic Process for Adaptive Concept Exploration (SPACE). In the SPACE method, design space exploration occurs in a sequential fashion; as data is acquired, the sampling scheme adapts to the specific problem at hand. Previously gathered data is used to make inferences about the nature of the problem so that future samples can be taken from the more interesting portions of the design space. Furthermore, the SPACE method identifies those analyses that have significant impacts on the relationships being modeled, so that effort can be focused on acquiring only the most pertinent information.
The results show that a tailored data set and an informed model structure work together to provide a meaningful quantitative representation of the system while relying on only a small amount of resources. In comparison to more traditional modeling and simulation approaches, the SPACE method provides a more accurate representation of the system using fewer resources to generate that representation. For this reason, the SPACE method acts as an enabler for decision-making in the very early design stages, where the desire is to base design decisions on quantitative information without wasting valuable resources obtaining unnecessarily high-fidelity information about all the candidate solutions. Thus, the approach enables concept selection to be based on parametric, quantitative data so that informed, unbiased decisions can be made.
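A compact sketch of the sequential, adaptive sampling loop described above, with a Gaussian process surrogate and a maximum-variance criterion standing in for SPACE's actual adaptive scheme; the analysis function and design space are invented:

```python
# Sequential design sketch: fit a cheap surrogate to the runs so far,
# then place the next run where the surrogate is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_analysis(x):          # stand-in for a costly design analysis
    return np.sin(3 * x) + 0.3 * x ** 2

rng = np.random.default_rng(7)
X = rng.uniform(0, 3, 4)[:, None]   # a handful of initial runs
y = expensive_analysis(X).ravel()
grid = np.linspace(0, 3, 200)[:, None]

for it in range(8):                 # adaptive phase: one new run per iteration
    gp = GaussianProcessRegressor(RBF(0.5), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(sd)]    # sample where the model is least certain
    X = np.vstack([X, [x_next]])
    y = np.append(y, expensive_analysis(x_next))

gp = GaussianProcessRegressor(RBF(0.5), alpha=1e-6, normalize_y=True).fit(X, y)
print(f"{len(X)} runs; max predictive std now "
      f"{gp.predict(grid, return_std=True)[1].max():.3f}")
```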
|
300 |
A Hierarchical History Matching Method and its Applications
Yin, Jichao, December 2011
Modern reservoir management typically involves simulations of geological models to predict future recovery estimates, providing the economic assessment of different field development strategies. Integrating reservoir data is a vital step in developing reliable reservoir performance models. Currently, the most effective strategies for traditional manual history matching follow a structured approach with a sequence of adjustments from global to regional parameters, followed by local changes in model properties. In contrast, many recent automatic history matching methods utilize parameter sensitivities or gradients to directly update the fine-scale reservoir properties, often ignoring geological consistency. There is therefore a need to combine elements of all of these scales in a seamless manner.
We present a hierarchical streamline-assisted history matching method within a framework of global and local updates. A probabilistic approach, consisting of design of experiments, response surface methodology and a genetic algorithm, is used to understand the uncertainty in the large-scale static and dynamic parameters. This global update step is followed by a streamline-based model calibration of high-resolution reservoir heterogeneity, a local update step that assimilates dynamic production data.
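A hedged sketch of such a global step, with a toy misfit function in place of a full reservoir simulation and scipy's differential evolution standing in for the genetic algorithm; the parameter names and design are invented:

```python
# Global-step sketch: run a small design over two large-scale parameters,
# fit a quadratic response surface to the history-match misfit, and let an
# evolutionary optimiser search the fitted surface.
import numpy as np
from scipy.optimize import differential_evolution

def misfit(theta):                  # stand-in for a full reservoir simulation
    kv_kh, aquifer = theta
    return (kv_kh - 0.3) ** 2 + 0.5 * (aquifer - 2.0) ** 2

# Design of experiments: a 5 x 5 grid over the two global parameters.
kv, aq = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 4, 5))
D = np.column_stack([kv.ravel(), aq.ravel()])
z = np.array([misfit(d) for d in D])

# Quadratic response surface fitted by least squares.
def basis(t):
    x, y = t[..., 0], t[..., 1]
    return np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=-1)

coef, *_ = np.linalg.lstsq(basis(D), z, rcond=None)
surface = lambda t: basis(np.atleast_2d(t))[0] @ coef

res = differential_evolution(surface, bounds=[(0, 1), (0, 4)], seed=8)
print("surface optimum:", np.round(res.x, 3), "(true optimum: [0.3, 2.0])")
```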
We apply the global genetic calibration to an unconventional shale gas reservoir; specifically, we include the stimulated reservoir volume (SRV) as a constraint term in the data integration to improve history matching and reduce prediction uncertainty. We introduce a novel approach for efficiently computing well drainage volumes for shale gas wells with multistage fractures and fracture clusters, and we filter stochastic shale gas reservoir models by comparing the computed drainage volume with the measured SRV within specified confidence limits.
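In sketch form, the filtering idea reduces to an acceptance test per stochastic model; the volumes and confidence band below are invented:

```python
# Keep only those stochastic models whose computed drainage volume falls
# within a confidence band around the measured SRV. Numbers are invented.
import numpy as np

rng = np.random.default_rng(9)
drainage_volumes = rng.lognormal(mean=4.0, sigma=0.3, size=100)  # one per model
srv_measured, srv_sd = 60.0, 8.0          # e.g. a microseismic SRV estimate

z = 1.96                                  # ~95% confidence band
keep = np.abs(drainage_volumes - srv_measured) <= z * srv_sd
print(f"{keep.sum()} of {len(keep)} models retained for prediction")
```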
Finally, we demonstrate the value of integrating downhole temperature measurements as a coarse-scale constraint during streamline-based history matching of dynamic production data. We first derive coarse-scale permeability trends in the reservoir from the temperature data. This coarse information is then downscaled into fine-scale permeability by sequential Gaussian simulation with block kriging, and updated by local-scale streamline-based history matching.
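A simplified 1-D stand-in for the downscaling step, not the actual SGSIM/block-kriging implementation; the coarse trend, covariance range and grid sizes are invented:

```python
# Sequential Gaussian simulation of fine-scale log-permeability with an
# exponential covariance, then a crude block correction so each coarse
# block reproduces the trend derived from temperature data.
import numpy as np

rng = np.random.default_rng(10)
n_fine, block = 100, 20
coarse_trend = np.array([3.0, 3.5, 4.5, 4.0, 3.2])      # log-perm per coarse block

cov = lambda h: np.exp(-np.abs(h) / 10.0)                # exponential covariance
C = cov(np.subtract.outer(np.arange(n_fine), np.arange(n_fine)))

# Sequential simulation: each cell drawn from its Gaussian conditional
# given all previously simulated cells (simple kriging with zero mean).
z = np.zeros(n_fine)
for i in range(n_fine):
    if i == 0:
        z[i] = rng.normal(0, 1)
        continue
    w = np.linalg.solve(C[:i, :i], C[:i, i])             # kriging weights
    mu = w @ z[:i]
    var = max(C[i, i] - w @ C[:i, i], 1e-10)
    z[i] = rng.normal(mu, np.sqrt(var))

# Block correction: shift each block so its mean honours the coarse trend.
logk = z.reshape(5, block)
logk += (coarse_trend - logk.mean(axis=1))[:, None]
print("block means:", np.round(logk.mean(axis=1), 2))    # matches coarse_trend
```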
The power and utility of our approaches have been demonstrated using both synthetic and field examples.
|