1 |
NEPTUNE: an interactive system for the design of experiments : CRANIUM: a versatile expert system implementation tool. Darmi, M. January 1992
No description available.
|
2 |
Statistical quality evaluation for optimising process steps in aluminium anodising : Evaluating pickling of aluminium details in the pickling bath. Jungmalm, Kerstin January 2015
The aim of this thesis was to evaluate how much material is removed (pickled) from aluminium details in the pickling bath before anodisation in an anodisation process. Since no earlier studies of pickling before anodisation were available, statistical experimental design was chosen as the planning tool: it was used to plan the experiments in an organised way and to evaluate how the pickling process responds to the main effects and the interaction effects. Literature studies in reference books were carried out on how aluminium is prepared and how the anodisation process works. Three methods were designed. Method 1 was based on a fractional factorial design with four design variables: temperature, sodium hydroxide concentration, aluminium concentration and the time the details were submerged in the pickling bath. The details were cut from a square profile pipe and nine experiments were performed. The pickling was measured in two ways, both with a dial indicator: first by comparing the pickled surface with a reference surface before and after treatment, and second by measuring across the edge between the pickled surface and the reference surface. A statistical control calculation of the surface smoothness of the square profile pipes gave a standard deviation of 11 µm. Method 2 was based on a full factorial design with two design variables, temperature and immersion time. All details were homogeneous and seven experiments were performed. The pickling was measured with the same two dial-indicator methods as in method 1. A control calculation of the surface smoothness of the homogeneous details gave a standard deviation of 14 µm. Method 3 was designed differently: a single experiment was performed in which only the immersion time was varied, using homogeneous aluminium cubes. The pickling was determined by weighing the details on an analytical balance before and after pickling and converting the mass loss to a pickling depth in µm in two ways, first using the atomic radius of aluminium and second using the size of the aluminium unit cell. The first two methods gave very different results from the third. Method 1 produced highly scattered values with large dispersion, so no pickling could be detected; method 2 gave very similar results, again with no pickling detectable with any confidence. Method 3 gave a theoretically calculated pickling depth, after 1 minute in the pickling bath, of 1.52 µm based on the atomic radius of aluminium and 1.62 µm based on the unit cell.
After 3 minutes in the pickling bath, the calculation based on the atomic radius gave a pickling depth of 4.51 µm, and the calculation based on the unit cell gave 4.79 µm.
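As an illustration of the method 3 calculation, the sketch below (Python, with made-up numbers for the mass loss and cube size) converts a measured mass loss to an average pickling depth via the density obtained from the FCC unit cell of aluminium; the thesis performs an equivalent conversion using the atomic radius or the unit cell size.

```python
# Sketch: convert the mass loss of a cubic aluminium detail to an average pickling
# depth, using the FCC unit cell of aluminium to obtain the bulk density.
# The edge length and mass loss below are made-up example numbers.
A_CELL = 4.0495e-8        # Al FCC lattice parameter, cm
ATOMS_PER_CELL = 4        # atoms per FCC unit cell
M_AL = 26.982             # molar mass of Al, g/mol
N_A = 6.022e23            # Avogadro's number, 1/mol

density = ATOMS_PER_CELL * M_AL / (N_A * A_CELL**3)   # ~2.70 g/cm^3

def pickling_depth_um(mass_loss_g: float, edge_cm: float) -> float:
    """Average thickness removed in micrometres, assuming uniform attack on all six faces."""
    exposed_area = 6 * edge_cm**2                # cm^2
    volume_removed = mass_loss_g / density       # cm^3
    return volume_removed / exposed_area * 1e4   # cm -> µm

print(pickling_depth_um(mass_loss_g=0.0123, edge_cm=2.0))   # ~1.9 µm
```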
|
3 |
Nonlinear design of geophysical surveys and processing strategies. Guest, Thomas January 2010
The principal aim of all scientific experiments is to infer knowledge about a set of parameters of interest through the process of data collection and analysis. In the geosciences, large sums of money are spent on the data analysis stage but much less attention is focussed on the data collection stage. Statistical experimental design (SED), a mature field of statistics, uses mathematically rigorous methods to optimise the data collection stage so as to maximise the amount of information recorded about the parameters of interest. The uptake of SED methods in geophysics has been limited because the majority of SED research is based on linear and linearised theories, whereas most geophysical methods are highly nonlinear, so the developed methods are not robust. Nonlinear SED methods are computationally demanding; the methods that exist to date therefore either restrict the designs to be very simplistic or are computationally infeasible, and so cannot be used in an industrial setting. In this thesis, I first show that it is possible to design industry-scale experiments for highly nonlinear problems within a computationally tractable time frame. Using an entropy-based method constructed on a Bayesian framework, I introduce an iteratively-constructive method that reduces the computational demand by introducing one new datum at a time to the design. The method reduces the multidimensional design space to a single-dimensional space at each iteration by fixing the experimental setup of the previous iteration. Both a synthetic experiment using a highly nonlinear parameter-data relationship and a seismic amplitude versus offset (AVO) experiment are used to illustrate that the results produced by the iteratively-constructive method closely match the results of a global design method at a fraction of the computational cost. This new method thus extends the class of iterative design methods to nonlinear problems, and makes fully nonlinear design methods applicable to higher-dimensional, industrial-scale problems. Using the new iteratively-constructive method, I show how optimal trace profiles for processing amplitude versus angle (AVA) surveys that account for all prior petrophysical information about the target reservoir can be generated using totally nonlinear methods. I examine how the optimal selections change as our prior knowledge of the rock parameters and reservoir fluid content changes, and assess which of the prior parameters has the largest effect on the selected traces. The results show that optimal profiles are far more sensitive to prior information about reservoir porosity than to information about saturating fluid properties. By applying ray tracing methods, the AVA results can be used to design optimal processing profiles from seismic datasets, for multiple targets each with different prior model uncertainties. Although the iteratively-constructive method can be used to design the data collection stage, it has been used here to select optimal data subsets post-survey. Using a nonlinear Bayesian SED method, I show how industrial-scale amplitude versus offset (AVO) data collection surveys can be constructed to maximise the information content contained in AVO crossplots, the principal source of petrophysical information from seismic surveys.
The results show that the optimal design is highly dependent on the model parameters when a low number of receivers is used, but that a single optimal design exists for the complete range of parameters once the number of receivers is increased above a threshold value. However, when acquisition and processing costs are considered I find that, in the case of AVO experiments, a design with constant spatial receiver separation is close to optimal. This explains why regularly-spaced 2D seismic surveys have performed so well historically, not only from the point of view of noise attenuation and imaging, in which homogeneous data coverage confers distinct advantages, but also in providing data to constrain subsurface petrophysical information. Finally, I discuss the implications of the new methods developed and assess which areas of geophysics would benefit from applying SED methods during the design stage.
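The sketch below illustrates the iteratively-constructive idea on a toy problem (Python): one design point is added at a time, previously chosen points are held fixed, and each candidate is scored by a Gaussian-approximation entropy of the predicted data over a prior ensemble. The forward function, prior and entropy proxy are placeholders, not the AVO physics or the exact entropy estimator used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear forward model: predicted datum as a function of the model
# parameter m and a design point x (stands in for the AVO physics).
def forward(m, x):
    return np.sin(m * x) + m * x**2

prior_samples = rng.normal(1.0, 0.3, size=500)   # ensemble drawn from the prior on m
candidates = np.linspace(0.0, 2.0, 101)          # admissible design points

def score(design, noise_var=1e-3):
    """Gaussian approximation to the entropy of the predicted data vector over the
    prior ensemble (plus observation noise): 0.5 * logdet of the predictive covariance.
    A simple stand-in for the fully nonlinear entropy criterion of the thesis."""
    preds = np.array([forward(prior_samples, x) for x in design])   # (n_design, n_samples)
    cov = np.cov(preds) + noise_var * np.eye(len(design))
    return 0.5 * np.linalg.slogdet(2 * np.pi * np.e * cov)[1]

# Iteratively-constructive (greedy) design: introduce one new datum at a time,
# keeping the experimental setup chosen in previous iterations fixed.
design = []
for _ in range(5):
    best_x = max(candidates, key=lambda x: score(design + [x]))
    design.append(float(best_x))

print("greedy design points:", design)
```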
|
4 |
Statistical optimisation of medium constituent variables for biogas production from N-acetylglucosamine by Clostridium beijerinckii and Clostridium paraputrificum. Owoh, Barnabas Chinyere January 2014
Statistically based experimental designs were applied to optimise the medium constituents for biogas production by Clostridium beijerinckii and Clostridium paraputrificum using N-acetylglucosamine as the carbon source. The important medium constituents influencing the total biogas produced, identified by the Plackett-Burman method, were FeSO4.7H2O and initial pH for C. beijerinckii cultures, and N-acetylglucosamine, L-cysteine.HCl.H2O and MgCl2 for C. paraputrificum cultures. A one-factor optimisation design was applied to find the L-cysteine.HCl.H2O concentration required to achieve an anaerobic environment for optimum total biogas production by C. beijerinckii. The method of steepest ascent was then employed to locate the optimal region of the significant medium variables. Using the Box-Behnken method, the experimental results showed significant linear effects of the independent variables on total biogas volume: N-acetylglucosamine for C. beijerinckii cultures, and N-acetylglucosamine, L-cysteine.HCl.H2O and MgCl2 for C. paraputrificum cultures. Significant curvature (quadratic) effects of N-acetylglucosamine and L-cysteine.HCl.H2O were identified for C. paraputrificum cultures. There were no significant interaction effects between medium constituent variables on the resulting biogas volume. The optimal conditions for the maximum volume of biogas produced were 21 g/l of N-acetylglucosamine, 0.1 g/l of FeSO4.7H2O and an initial pH of 6.11 for C. beijerinckii cultures, and 29 g/l of N-acetylglucosamine, 0.27 g/l of L-cysteine.HCl.H2O and 0.4 g/l of MgCl2 for C. paraputrificum cultures. Using this statistical optimisation strategy, the total biogas volume from N-acetylglucosamine utilisation increased from 150 ml/l to 6533 ml/l in the C. beijerinckii cultures and from 100 ml/l to 5350 ml/l in the C. paraputrificum cultures. The maximum bio-hydrogen yield from N-acetylglucosamine was 2.55 mol H2/mol N-acetylglucosamine for C. paraputrificum and 2.43 mol H2/mol N-acetylglucosamine for C. beijerinckii.
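A minimal sketch of the Box-Behnken step described above (Python, hypothetical response values rather than the thesis's data): build the three-factor Box-Behnken runs in coded units and fit the full quadratic model, whose linear, interaction and squared coefficients correspond to the main, interaction and curvature effects discussed in the abstract.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Box-Behnken design for three coded factors (x1, x2, x3): every +/-1 combination
# of each factor pair with the third factor at its centre level, plus centre replicates.
runs = []
for i, j in itertools.combinations(range(3), 2):
    for a, b in itertools.product((-1.0, 1.0), repeat=2):
        run = [0.0, 0.0, 0.0]
        run[i], run[j] = a, b
        runs.append(run)
runs += [[0.0, 0.0, 0.0]] * 3
X = np.array(runs)                               # 15 runs x 3 factors

# Hypothetical biogas volumes: strong linear effects, curvature in x1, no interactions.
y = 5000 + 800 * X[:, 0] + 300 * X[:, 1] - 400 * X[:, 0]**2 + rng.normal(0, 50, len(X))

# Full quadratic model matrix: intercept, linear, two-factor interaction and squared terms.
columns = [np.ones(len(X))] + [X[:, k] for k in range(3)]
columns += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(3), 2)]
columns += [X[:, k]**2 for k in range(3)]
M = np.column_stack(columns)

coef, *_ = np.linalg.lstsq(M, y, rcond=None)
for name, c in zip(["1", "x1", "x2", "x3", "x1*x2", "x1*x3", "x2*x3",
                    "x1^2", "x2^2", "x3^2"], coef):
    print(f"{name:>6}: {c:8.1f}")
```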
|
5 |
Modelling and optimisation of flexible PVC compound formulation for mine cables. Fechter, Reinhard Heinrich January 2017
The thermal stability, fire retardancy and basic mechanical properties, as functions of the mass fractions of the poly(vinyl chloride) (PVC) compound ingredients, can be modelled using second-order Scheffé polynomials. The empirical models for each response variable can be determined using statistical experimental design. The particular models for each response variable, selected for predictive ability using k-fold cross-validation, can be interpreted through statistical analysis of the model terms. This analysis can reveal synergistic or antagonistic interactions between ingredients, some of which have not been reported in the literature. The interaction terms in the models also mean that the effect of a given ingredient depends on the mass fractions of the other ingredients. Sensitivity analysis can be used to examine the overall effect of a change in a particular formulation on the response variables. The empirical models can be used to minimise the cost of the PVC compound by varying the formulation; the optimum formulation is a function of the costs of the various ingredients and the limits placed on the response variables. To analyse the system as a whole, parametric analysis can be used. The number of different parametric analyses that can be performed is very large and depends on the specific questions that need to be answered. Parametric analysis can be used to gain insight into the complex behaviour of the system under changing requirements, as a decision-making tool in a commercial environment, or to assess the completeness of the different measuring techniques used to describe the thermal stability and fire retardancy of the PVC compound. Statistical experimental design makes the above methods possible, leading to significant savings in time and labour compared with reaching the same conclusions through traditional one-factor-at-a-time experiments in which the phr of a single ingredient is changed. It is recommended that the data generated for this investigation be analysed in more detail using the methods outlined here; this can be facilitated by making the analysis (and therefore the data itself) more accessible through a usable interface. The data set can also be expanded to include new ingredients with very few additional experiments. If a PVC compound containing none of the ingredients used in this investigation is of interest, a new, separate data set needs to be generated by following the same procedure. In fact, the method used in this investigation can be generalised to optimise the proportions of the ingredients of any mixture.
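As a sketch of the modelling step: a second-order Scheffé polynomial for a q-component mixture contains only linear blending terms and pairwise products, since the intercept and pure quadratic terms are redundant when the mass fractions sum to one. The example below (Python, with a hypothetical three-ingredient mixture and simulated responses) fits such a model by least squares; it is not the thesis's actual formulation data or software.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
q = 3                                    # e.g. resin, plasticiser, filler (illustration only)

# Hypothetical mixture runs: mass fractions of the q ingredients, each row summing to 1.
X = rng.dirichlet(np.ones(q), size=20)

# Hypothetical response with a synergistic interaction between ingredients 1 and 2.
y = 10 * X[:, 0] + 6 * X[:, 1] + 4 * X[:, 2] + 15 * X[:, 0] * X[:, 1] \
    + rng.normal(0, 0.2, len(X))

# Second-order Scheffé polynomial: linear blending terms plus all pairwise products
# (no intercept or pure quadratic terms -- they are absorbed by the sum-to-one constraint).
pairs = list(itertools.combinations(range(q), 2))
M = np.column_stack([X] + [X[:, i] * X[:, j] for i, j in pairs])

coef, *_ = np.linalg.lstsq(M, y, rcond=None)
print("blending coefficients:", coef[:q].round(2))
print("binary (synergy/antagonism) coefficients:", dict(zip(pairs, coef[q:].round(2))))
```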
|
6 |
Multivariate methods in tablet formulation. Gabrielsson, Jon January 2004
This thesis describes the application of multivariate methods in a novel approach to the formulation of tablets for direct compression. It begins with a brief historical review, followed by a basic introduction to key aspects of tablet formulation and multivariate data analysis. The bulk of the thesis is concerned with the novel approach, in which excipients were characterised in terms of multiple physical or (in most cases) spectral variables. By applying Principal Component Analysis (PCA) the descriptive variables are summarized into a few latent variables, usually termed scores or principal properties (PP’s). In this way the number of descriptive variables is dramatically reduced and the excipients are described by orthogonal continuous variables. This means that the PP’s can be used as ordinary variables in a statistical experimental design. The combination of latent variables and experimental design is termed multivariate design or experimental design in PP’s. Using multivariate design many excipients can be included in screening experiments with relatively few experiments.
The outcome of experiments designed to evaluate the effects of differences in excipient composition of formulations for direct compression is, of course, tablets with various properties. Once these properties, e.g. disintegration time and tensile strength, have been determined with standardised tests, quantitative relationships between descriptive variables and tablet properties can be established using Partial Least Squares Projections to Latent Structures (PLS) analysis. The obtained models can then be used for different purposes, depending on the objective of the research, such as evaluating the influence of the constituents of the formulation or optimisation of a certain tablet property.
Several examples of applications of the described methods are presented. Except in the first study, in which the feasibility of this approach was first tested, the disintegration time of the tablets has been studied more carefully than other responses. Additional experiments have been performed in order to obtain a specific disintegration time. Studies of mixtures of excipients with the same primary function have also been performed to obtain certain PP’s. Such mixture experiments also provide a straightforward approach to additional experiments where an interesting area of the PP space can be studied in more detail. The robustness of a formulation with respect to normal batch-to-batch variability has also been studied.
The presented approach to tablet formulation offers several interesting alternatives, for both planning and evaluating experiments.
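A minimal sketch of the multivariate-design step (Python, hypothetical descriptor data rather than the thesis's excipient characterisations): PCA compresses many excipient descriptors into a few principal properties, and the resulting scores can then be treated as ordinary factors, for example by choosing excipients whose scores lie near the corners of the PP space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical descriptor matrix: 12 candidate excipients x 40 physical/spectral variables.
descriptors = rng.normal(size=(12, 40))

# Principal properties (PP's): a few orthogonal latent variables summarising the descriptors.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(descriptors))

# The PP scores can now act as ordinary factors in an experimental design, e.g. by
# selecting the excipients whose scores lie closest to the corners of the PP space
# as the low/high settings of a screening design.
corners = [(a, b) for a in (scores[:, 0].min(), scores[:, 0].max())
                  for b in (scores[:, 1].min(), scores[:, 1].max())]
chosen = [int(np.argmin(((scores - np.array(c))**2).sum(axis=1))) for c in corners]
print("excipients spanning the PP space:", chosen)
```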
|
7 |
Aspects of Optimisation of Separation of Drugs by Chemometrics. Harang, Valérie January 2003
Statistical experimental designs have been used for method development and optimisation of separation. Two reversed phase HPLC methods were optimised. Parameters such as the pH, the amount of tetrabutylammonium (TBA; co-ion) and the gradient slope (acetonitrile) were investigated and optimised for separation of erythromycin A and eight related compounds. In the second method, a statistical experimental design was used, where the amounts of acetonitrile and octane sulphonate (OSA; counter ion) and the buffer concentration were studied, and generation of an α-plot with chromatogram simulations optimised the separation of six analytes.
The partial filling technique was used in capillary electrophoresis to introduce the chiral selector Cel7A. The effect of the pH, the ionic strength and the amount of acetonitrile on the separation and the peak shape of R- and S-propranolol were investigated.
Microemulsion electrokinetic chromatography (MEEKC) is a technique similar to micellar electrokinetic chromatography (MEKC), except that the microemulsion has a core of tiny droplets of oil inside the micelles. A large number of factors can be varied when using this technique. A screening design using the amounts of sodium dodecyl sulphate (SDS), Brij 35, 1-butanol and 2-propanol, the buffer concentration and the temperature as factors revealed that the amounts of SDS and 2-propanol were the most important factors for migration time and selectivity manipulation of eight different compounds varying in charge and hydrophobicity. SDS and 2-propanol in the MEEKC method were further investigated in a three-level full factorial design analysing 29 different compounds sorted into five different groups. Different optimisation strategies were evaluated such as generating response surface plots of the selectivity/resolution of the most critical pair of peaks, employing chromatographic functions, simplex optimisation in MODDE and 3D resolution maps in DryLab™.
Molecular descriptors were fitted in a PLS model to retention data from the three-level full factorial design of the MEEKC system. Two different test sets were used to study the predictive ability of the training set. It was concluded that 86 – 89% of the retention data could be predicted correctly for new molecules (80 – 120% of the experimental values) with different settings of SDS and 2-propanol.
Statistical experimental designs and chemometrics are valuable tools for the development and optimisation of analytical methods. The same chemometric strategies can be employed for all types of separation techniques.
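The sketch below illustrates the PLS prediction step in the same spirit as the final study (Python, simulated descriptors and retention data rather than the thesis's MEEKC measurements): a PLS model is trained on molecular descriptors and judged by the fraction of test retentions predicted within 80 – 120% of the observed values.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)

# Simulated data: molecular descriptors for a training set of 20 analytes and a
# test set of 9, with retention values around 10 (illustration only).
n_desc = 15
w = rng.normal(0, 0.2, n_desc)
X_train = rng.normal(size=(20, n_desc))
y_train = 10 + X_train @ w + rng.normal(0, 0.3, 20)
X_test = rng.normal(size=(9, n_desc))
y_test = 10 + X_test @ w + rng.normal(0, 0.3, 9)

pls = PLSRegression(n_components=3).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

# "Correctly predicted" in the spirit of the abstract: within 80-120 % of the observed value.
within = np.abs(y_pred / y_test - 1) <= 0.20
print(f"{within.mean():.0%} of test retentions predicted within 80-120 %")
```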
|
8 |
Mapping the consequences of physical exercise and nutrition on human health : A predictive metabolomics approach. Chorell, Elin January 2011
Human health is a complex and wide-ranging subject that extends far beyond nutrition and physical exercise. Still, these factors have a huge impact on global health through their ability to prevent disease and thus promote health. To identify health risks and benefits, it is therefore necessary to reveal the underlying mechanisms of nutrition and exercise, which in many cases follow a complex chain of events. As a consequence, current health research is generating massive amounts of data from anthropometric parameters, genes, proteins, small molecules (metabolites) and so on, with the intent of understanding these mechanisms. For the study of health responses, especially those related to physical exercise and nutrition, alterations in small molecules (metabolites) are in most cases immediate and located close to the phenotypic level, and could therefore provide early signs of metabolic imbalances. Since there are roughly as many different responses to exercise and nutrients as there are humans, this quest is highly multifaceted and benefits from interpreting treatment effects on a general as well as an individual level. This thesis involves the application of chemometric methods to the study of global metabolic reactions, i.e. metabolomics, in a strategy coined predictive metabolomics. Through predictive metabolomics, an extensive hypothesis-free biological interpretation has been carried out of metabolite patterns in blood, acquired using gas chromatography-mass spectrometry (GC-MS), related to physical exercise, nutrition and diet, all in the context of human health. In addition, the chemometric methodology has computational benefits for extracting relevant information from information-rich data as well as for interpreting general treatment effects and individual responses, as exemplified throughout this work. Health concerns all stages of life; this thesis therefore presents a strategic framework combined with comprehensive interpretations of metabolite patterns throughout life. It covers a broad range of human studies revealing metabolic patterns related to the impact of physical exercise, macronutrient modulation and different fitness status in young healthy males, short- and long-term dietary treatments in overweight postmenopausal women, and metabolic responses related to probiotics treatment and early development in infants. The studies included in the thesis revealed metabolic patterns potentially indicative of an anti-catabolic response to macronutrients in the early recovery phase following exercise. Moreover, moderate differences in the metabolome associated with cardiorespiratory fitness level were detected, which could be linked to variation in the inflammatory and antioxidative defense systems. The work also highlighted mechanistic information that could be connected to diet-related weight loss in overweight and obese postmenopausal women, in relation to both short- and long-term dietary effects based on different macronutrient compositions. Finally, alterations were observed in metabolic profiles in relation to probiotics treatment in the second half of infancy, suggesting possible health benefits of probiotic supplementation at an early age.
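As a simplified illustration of the predictive-metabolomics idea (Python, simulated GC-MS-like data; the thesis uses OPLS-based chemometrics rather than this plain PLS-DA stand-in), the model is judged by cross-validated prediction of group membership rather than by its fit to the training data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)

# Simulated GC-MS-like data: 40 subjects x 120 metabolite peaks, two treatment groups,
# with a genuine group effect confined to the first five metabolites.
X = rng.normal(size=(40, 120))
y = np.repeat([0.0, 1.0], 20)          # 0 = control, 1 = treatment
X[y == 1, :5] += 1.0

# PLS-DA stand-in: regress the class label on the metabolite matrix and judge the model
# by cross-validated prediction, not by how well it fits the training data.
y_cv = cross_val_predict(PLSRegression(n_components=2), X, y, cv=5).ravel()
accuracy = ((y_cv > 0.5) == (y == 1)).mean()
print(f"cross-validated classification accuracy: {accuracy:.0%}")
```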
|