About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

Benchmarked Hard Disk Drive Performance Characterization and Optimization Based on Design of Experiments Techniques

Lin, Yu-Wei 01 June 2010 (has links) (PDF)
This paper describes an experimental study, based on Design of Experiments (DOE) within defined factor domains, of the simultaneous factor effects on benchmarked hard disk drive performance, and proposes well-organized statistical models for optimization. The numerical relations in the resulting models make it possible to predict benchmarked disk performance as a function of the significant factors and to optimize the relevant criteria as needed. The experimental data sets were validated to be in satisfactory agreement with the predicted values by analyzing the response surface plots, contour plots, model equations, and optimization plots. The adequacy of the model equations was further verified on a prior-generation disk drive from the same model family. The retained solutions for potential industrialization were the concluded response surface models of the benchmarked disk performance optimizations. This comprehensive benchmarked performance modeling procedure for hard disk drives not only saves the experimental cost of physical modeling but also leads to hard-to-find quality-improvement solutions for manufacturing decisions.
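
As an illustration of the response-surface step this abstract alludes to, the sketch below fits a second-order polynomial model to benchmark scores from a two-factor design and solves for the stationary point. It is a minimal stand-in, not the author's code: the factor names, settings, and scores are hypothetical.

```python
# Minimal response-surface sketch: fit the quadratic model
# y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
# to benchmark scores from a two-factor design, then locate the
# stationary point of the fitted surface.
import numpy as np

# Hypothetical coded factor settings (e.g., cache size, queue depth)
# from a face-centred central composite design, with benchmark scores.
x1 = np.array([-1, -1, 1, 1, -1, 1, 0, 0, 0])
x2 = np.array([-1, 1, -1, 1, 0, 0, -1, 1, 0])
y = np.array([71, 74, 80, 86, 73, 84, 75, 81, 83], dtype=float)

# Design matrix for the full quadratic response surface model.
X = np.column_stack([np.ones_like(y), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12, b11, b22 = beta

# Stationary point: solve grad(y) = 0 for the fitted quadratic.
H = np.array([[2 * b11, b12], [b12, 2 * b22]])  # Hessian of the fit
g = np.array([b1, b2])
x_stat = np.linalg.solve(H, -g)
print("coefficients:", np.round(beta, 3))
print("stationary point (coded units):", np.round(x_stat, 3))
```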
2

Budget-constrained experimental optimization

Roshandelpoor, Athar 27 May 2021 (has links)
Many problems of design and operation in science and engineering can be formulated as the optimization of a properly defined performance/objective function over a design space. This thesis considers optimization problems where information about the performance function can be obtained only through experimentation/function evaluation, in other words, the optimization of black-box functions. Furthermore, it is assumed that the optimization is performed with a limited budget, namely, where only a limited number of function evaluations are feasible. Two classes of optimization approaches are considered. The first, consisting of Design of Experiments (DOE) and Response Surface Methodology (RSM), explores the design space locally by identifying directions of improvement and incrementally moving towards the optimum. The second, referred to as Bayesian Optimization (BO), corresponds to a global search of the design space based on a stochastic model of the function over the design space that is updated after each experimentation/function evaluation. Two independent projects related to these optimization approaches are reported in the thesis. The first, the result of a collaborative effort with experimental and computational material scientists, involves adaptations of the above approaches to two specific new materials development problems: the first was to develop an integrated computational-statistical-experimental methodology for the calibration of an activated carbon adsorption bed; the second was the application and modification of existing DOE approaches in a highly data-limited environment. The second project is a new contribution to the methodology of Bayesian Optimization that significantly generalizes a non-myopic approach to BO. BO algorithms vary in their choice of stochastic model of the unknown objective function, referred to as the surrogate model, and of the so-called acquisition function, which often represents an expected utility of sampling at various points of the design space. Various myopic BO approaches, which evaluate the benefit of taking only a single sample from the objective function, have been considered in the literature. More recently, a number of non-myopic approaches have been proposed that go beyond evaluating the benefit of a single sample. In this thesis, a non-myopic algorithm, referred to as the z* policy, is considered that takes a different approach to evaluating the benefits of sampling. The resulting search approach is motivated by a non-myopic index policy in a sequential sampling problem that is shown to be optimal in a non-adaptive setting. An analysis of the z* policy is presented, and it is placed within the broader context of non-myopic policies. Finally, empirical evaluations show that in some instances the z* policy outperforms a number of other commonly used myopic and non-myopic policies.
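
The sketch below illustrates the basic budget-constrained BO loop the abstract describes: a Gaussian-process surrogate is refit after each function evaluation and an acquisition function picks the next sample. It uses expected improvement, a common myopic baseline of the kind the thesis compares against; the z* policy itself is not implemented here, and the objective, kernel, and budget are all assumptions for illustration.

```python
# Budget-constrained Bayesian optimization sketch: RBF-kernel GP
# surrogate + expected-improvement acquisition over a 1-D grid.
import numpy as np
from scipy.stats import norm

def black_box(x):                       # hypothetical expensive function
    return -np.sin(3 * x) - x**2 + 0.7 * x

def rbf(a, b, ls=0.3):                  # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = np.diag(Kss - Ks.T @ sol).clip(1e-12)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, 3)               # small initial design
y = black_box(X)
grid = np.linspace(-1, 2, 400)

for _ in range(7):                      # budget: 7 more evaluations
    mu, sigma = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X, y = np.append(X, x_next), np.append(y, black_box(x_next))

print(f"best x = {X[np.argmax(y)]:.3f}, best y = {y.max():.3f}")
```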
3

Analysis of the Design and Operation of Mix-Bank Resequencing Areas

Subramanian, Arunkumar 11 December 2004 (has links)
Automotive assembly plants work from a pre-planned job sequence in order to optimize the performance of the assembly line. However, the job sequence becomes scrambled due to factors such as plant layout, process design, variability, and uncertainty. Assembly plants use either a mix-bank or an automatic storage and retrieval system to regenerate the sequence before final assembly. A mix-bank, which is a set of parallel lanes, is the most common method used in the automotive industry to reconstruct the sequence. Only the first vehicles in the lanes are available for sequencing in a mix-bank set-up; hence, the lane selection policy and the lane configuration of a mix-bank play crucial roles in recreating the sequence. This thesis addresses the problem of identifying a superior lane selection policy for a mix-bank re-sequencing area. Simulation models of a re-sequencing area are used to evaluate lane selection policies, and varying the lane configurations and the nature of the incoming sequence tests the effectiveness of the selection policies.
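
A minimal sketch of the kind of simulation the abstract describes follows. The lane count, lane depth, scrambling model, and the greedy "pull the smallest front job" lane selection policy are all hypothetical choices, not the thesis's actual models.

```python
# Mix-bank resequencing sketch: jobs arrive locally scrambled, are
# placed into parallel lanes, and are released from lane fronts.
import random
from collections import deque

def simulate(n_jobs=60, n_lanes=4, lane_depth=6, scramble=8, seed=1):
    rng = random.Random(seed)
    arrivals = list(range(n_jobs))       # planned sequence 0..n-1
    for i in range(n_jobs):              # local scrambling by the plant
        j = min(n_jobs - 1, i + rng.randrange(scramble))
        arrivals[i], arrivals[j] = arrivals[j], arrivals[i]

    lanes = [deque() for _ in range(n_lanes)]
    output = []

    def release():
        # Lane selection policy: pull the front vehicle with the
        # smallest sequence number among all lane fronts.
        lane = min((l for l in lanes if l), key=lambda l: l[0])
        output.append(lane.popleft())

    for job in arrivals:
        if all(len(l) >= lane_depth for l in lanes):
            release()                    # make room when the bank is full
        # Fill policy (hypothetical): put the job in the shortest lane.
        min((l for l in lanes if len(l) < lane_depth), key=len).append(job)
    while any(lanes):                    # drain the bank at the end
        release()

    ordered = sum(a < b for a, b in zip(output, output[1:]))
    return ordered / (len(output) - 1)   # fraction of in-order pairs

print(f"sequence quality: {simulate():.2%}")
```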
4

Optimization of the polishing procedure using a robot assisted polishing equipment

Gagnolet, Marielle January 2009 (has links)
Today, manual polishing is the most common method for improving the surface finish of moulds and dies for e.g. plastic injection moulding, although it is a cumbersome and time-consuming process. Automated robots are therefore being developed to speed up and secure the final result of this important finishing step.

The purpose of this thesis is to investigate the influence of different parameters on the polishing of a steel grade called Mirrax ESR (Uddeholm Tooling AB) using a Design of Experiments. The report starts with a brief description of mechanical polishing (the techniques and polishing mechanisms) and ends with the optimization of the polishing procedure on a polishing machine, the Strecon RAP-200 made by Strecon A/S.

Even though not all runs of the Design of Experiments could be carried out, the surfaces studied revealed information about the importance of the preceding process (turning marks not removed) and about the link between the appearance of the surfaces and the roughness parameters.
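
For readers unfamiliar with the method, a two-level full factorial design like the one behind such a study can be enumerated in a few lines; the factor names and levels below are hypothetical, not Uddeholm's actual polishing parameters.

```python
# Two-level full factorial design sketch for polishing trials.
from itertools import product

factors = {
    "force_N":   (5, 15),      # hypothetical polishing force levels
    "speed_rpm": (200, 600),   # hypothetical tool speed levels
    "grit":      (800, 1200),  # hypothetical abrasive grit levels
}
# Every combination of low/high levels: 2^3 = 8 runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
for i, run in enumerate(runs, 1):
    print(f"run {i}: {run}")
```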
5

Bayesian Experimental Design Framework Applied to Complex Polymerization Processes

Nabifar, Afsaneh 26 June 2012 (has links)
The Bayesian design approach is an experimental design technique which has the same objectives as standard experimental (full or fractional factorial) designs but with significant practical benefits over standard design methods. The most important advantage of the Bayesian design approach is that it incorporates prior knowledge about the process into the design to suggest a set of future experiments in an optimal, sequential and iterative fashion. Since for many complex polymerizations prior information is available, either in the form of experimental data or mathematical models, use of a Bayesian design methodology could be highly beneficial. Hence, exploiting this technique could hopefully lead to optimal performance in fewer trials, thus saving time and money. In this thesis, the basic steps and capabilities/benefits of the Bayesian design approach will be illustrated. To demonstrate the significant benefits of the Bayesian design approach and its superiority to the currently practised (standard) design of experiments, case studies drawn from representative complex polymerization processes, covering both batch and continuous processes, are presented. These include examples from nitroxide-mediated radical polymerization of styrene (bulk homopolymerization in the batch mode), continuous production of nitrile rubber in a train of CSTRs (emulsion copolymerization in the continuous mode), and cross-linking nitroxide-mediated radical copolymerization of styrene and divinyl benzene (bulk copolymerization in the batch mode, with cross-linking). All these case studies address important, yet practical, issues in not only the study of polymerization kinetics but also, in general, in process engineering and improvement. Since the Bayesian design technique is perfectly general, it can be potentially applied to other polymerization variants or any other chemical engineering process in general. Some of the advantages of the Bayesian methodology highlighted through its application to complex polymerization scenarios are: improvements with respect to information content retrieved from process data, relative ease in changing factor levels mid-way through the experimentation, flexibility with factor ranges, overall “cost”-effectiveness (time and effort/resources) with respect to the number of experiments, and flexibility with respect to source and quality of prior knowledge (screening experiments versus models and/or combinations). The most important novelty of the Bayesian approach is the simplicity and the natural way with which it follows the logic of the sequential model building paradigm, taking full advantage of the researcher’s expertise and information (knowledge about the process or product) prior to the design, and invoking enhanced information content measures (the Fisher Information matrix is maximized, which corresponds to minimizing the variances and reducing the 95% joint confidence regions, hence improving the precision of the parameter estimates). In addition, the Bayesian analysis is amenable to a series of statistical diagnostic tests that one can carry out in parallel. These diagnostic tests serve to quantify the relative importance of the parameters (intimately related to the significance of the estimated factor effects) and their interactions, as well as the quality of prior knowledge (in other words, the adequacy of the model or the expert’s opinions used to generate the prior information, as the case might be). 
In all the case studies described in this thesis, the general benefits of the Bayesian design were as described above. More specifically, for the most complex of the examples, namely the cross-linking nitroxide-mediated radical polymerization (NMRP) of styrene and divinyl benzene, the investigations that followed the Bayesian-designed experiments led to even more interesting detailed kinetic and polymer characterization studies, which cover the second part of this thesis. This detailed synthesis, characterization and modeling effort, triggered by the Bayesian approach, set out to investigate whether the cross-linked polymer network synthesized under controlled radical polymerization (CRP) conditions had a more homogeneous structure than the network produced by regular free radical polymerization (FRP). In preparation for the identification of network homogeneity indicators based on polymer properties, the cross-linking kinetics of nitroxide-mediated radical polymerization of styrene (STY) in the presence of a small amount of divinyl benzene (DVB; as the cross-linker) and N-tert-butyl-N-(2-methyl-1-phenylpropyl)-O-(1-phenylethyl) hydroxylamine (I-TIPNO; as the unimolecular initiator) were investigated in detail, and the results were contrasted with the regular FRP of STY/DVB and the homopolymerization of STY in the presence of I-TIPNO as reference systems. The effects of [DVB], [I-TIPNO] and [DVB]/[I-TIPNO] on rate, molecular weights, gel content and swelling index were investigated. In parallel with the experimental investigations, a detailed mathematical model was developed and validated with the respective experimental data. The model predictions not only followed the general experimental trends very well but were also in good agreement with the experimental observations. Pursuing a more reliable indicator of network homogeneity, the corresponding branched and cross-linked polymers were characterized. Thermo-mechanical analysis was used in an attempt to distinguish polymer networks synthesized through FRP from those synthesized through NMRP. Results from both Differential Scanning Calorimetry (DSC) and Dynamic Mechanical Analysis (DMA) showed that, at the same cross-link density and conversion level, polymer networks produced by FRP and NMRP indeed exhibit comparable structures. Overall, a wealth of process information was generated by this practical experimental design technique, with minimal experimental effort compared to previous (undesigned) efforts, and their often poorly founded claims, in the literature.
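
As a rough sketch of the sequential, information-maximizing step described above (not the thesis's actual Bayesian machinery), the following code augments a set of prior runs one experiment at a time by choosing the candidate point that maximizes the determinant of the Fisher information matrix X'X for an assumed two-factor model with interaction.

```python
# Sequential D-optimal design augmentation sketch: the Fisher
# information of a linear model is F = X'X, so each new run is the
# candidate that maximizes det(F) of the augmented design.
import numpy as np
from itertools import product

def model_row(x1, x2):
    # Assumed model: intercept + main effects + interaction.
    return np.array([1.0, x1, x2, x1 * x2])

# Prior knowledge: runs already performed (coded units, hypothetical).
X_prior = np.array([model_row(-1, -1), model_row(1, 1), model_row(-1, 1)])

candidates = list(product([-1, 0, 1], repeat=2))   # 3x3 candidate grid

def next_run(X):
    dets = [np.linalg.det(np.vstack([X, model_row(*c)]).T
                          @ np.vstack([X, model_row(*c)]))
            for c in candidates]
    return candidates[int(np.argmax(dets))]

X = X_prior
for _ in range(3):                                 # design 3 more runs
    c = next_run(X)
    print("next suggested run:", c)
    X = np.vstack([X, model_row(*c)])
```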
6

Thermal Optimization of Veo+ Projectors (thesis work at Optea AB) : Trying to reduce noise of the Veo+ projector by DOE (Design of Experiment) tests to find an optimal solution for the fan algorithm while considering the thermal specifics of the unit

Hizli, Cem January 2010 (has links)
The Veo+ projector uses a cooling system consisting of a fan and blowers. This system cools the electronic components of the device and the projector lamp, but it generates a high noise level. To lower this noise, the rotational (rpm) speeds of the fan and blowers must be decreased; however, lowering the speeds results in higher temperatures throughout the device. These higher temperatures must still be kept within the thermal design specifications of the electronic components. The purpose of this thesis work is to find an optimal solution with lower fan and blower rpm speeds while keeping the temperatures of the various parts of the device (the touch temperature of the enclosure and the electronic components) within the temperature design limits. Before testing the device to find the optimum state, the design limits of the device are determined. Then, using design of experiment methods such as Taguchi, the optimum state for the device within the design specifications is obtained. Finally, additional tests are run at the optimum state to demonstrate a fan algorithm as a final solution. Thermocouples are used to measure the component temperatures during the experiments.
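
A minimal sketch of the Taguchi-style analysis step follows. The L4 orthogonal array is the standard choice for three two-level factors, but the rpm levels and sound measurements are invented for illustration; the real study must additionally check that each candidate setting respects the thermal limits.

```python
# Taguchi-style analysis sketch: an L4 orthogonal array for three
# two-level factors, scored with the smaller-the-better
# signal-to-noise ratio S/N = -10*log10(mean(y^2)).
import numpy as np

L4 = np.array([[0, 0, 0],      # columns: fan, blower1, blower2
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
levels = {"fan": (1800, 2400),       # hypothetical rpm settings
          "blower1": (2000, 2800),
          "blower2": (2000, 2800)}

# Hypothetical noise measurements (dBA), two replicates per run.
y = np.array([[38.1, 38.4], [36.9, 37.2], [35.8, 36.0], [36.5, 36.8]])
sn = -10 * np.log10((y**2).mean(axis=1))

# Main effect of each factor on S/N: mean at level 1 minus level 0
# (a more positive effect means the high level is quieter).
for col, name in enumerate(levels):
    effect = sn[L4[:, col] == 1].mean() - sn[L4[:, col] == 0].mean()
    print(f"{name}: S/N effect = {effect:+.3f} dB")
```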
7

General blending models for mixture experiments : design and analysis

Brown, Liam John January 2014 (has links)
It is felt that the position of the Scheffé polynomials as the primary, and sometimes sole, recourse for practitioners of mixture experiments leads to a lack of enquiry regarding the type of blending behaviour used to describe the response, and that this can be detrimental to achieving experimental objectives. Consequently, a new class of models and new experimental designs are proposed, allowing a more thorough exploration of the experimental region with respect to different blending behaviours, especially those not associated with established models for mixtures, in particular the Scheffé polynomials. The proposed General Blending Models for Mixtures (GBMM) are a powerful tool allowing a broad range of blending behaviour to be described, including the behaviours of the Scheffé polynomials (and their reparameterisations) and of Becker's models. The potential benefits to be gained from their application include greater model parsimony and increased interpretability. Through this class of models, a practitioner can set aside the assumptions inherent in choosing to model with the Scheffé polynomials and instead adopt a more open approach, flexible to many different types of behaviour. These models are presented alongside a fitting procedure, implementing a stepwise regression approach to the estimation of partially linear models with multiple nonlinear terms. The new class of models has been used to develop designs which allow the response surface to be explored fully with respect to the range of blending behaviours the GBMM may describe. These designs may additionally be targeted at exploring deviation from the behaviour described by the established models, and as such may be thought of as possessing an enhanced optimality with respect to those models. They possess good properties with respect to standard optimality criteria while also being robust against model uncertainty.
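
For context, the Scheffé quadratic model that the thesis takes as its point of departure can be fitted as an ordinary no-intercept regression over the mixture simplex. The sketch below uses a made-up three-component data set; the GBMM itself, which generalizes the fixed quadratic blending terms, is not implemented here.

```python
# Scheffé quadratic mixture model sketch. For q=3 components with
# x1 + x2 + x3 = 1 (no intercept):
#   y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3
import numpy as np

# Simplex-lattice {3,2} design points plus the overall centroid.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [.5, .5, 0], [.5, 0, .5], [0, .5, .5],
              [1/3, 1/3, 1/3]])
y = np.array([4.2, 5.1, 3.8, 6.0, 4.4, 5.3, 5.2])   # hypothetical

def scheffe_terms(X):
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1*x2, x1*x3, x2*x3])

beta, *_ = np.linalg.lstsq(scheffe_terms(X), y, rcond=None)
labels = ["b1", "b2", "b3", "b12", "b13", "b23"]
print(dict(zip(labels, np.round(beta, 3))))
# Binary blending terms b_ij > 0 indicate synergism, < 0 antagonism;
# the GBMM replaces these fixed quadratic terms with more general forms.
```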
8

Design and Manufacture of Molding Compounds for High Reliability Microelectronics in Extreme Conditions

Garcia, Andres 12 1900 (has links)
The use of electronics across consumer and industrial applications continues to widen, ranging from medical instrumentation that can directly affect a patient's life, to downhole sensors for oil and gas, aerospace, aeronautics, and automotive electronics. The increased power density and harsh environments make the reliability of the packaging a vital part of the reliability of the device. The growing importance of analog devices in these applications, with their requirements for high-voltage and high-temperature resilience, is creating challenges that have not been dealt with before. In particular, packaging where insulative properties are vital uses polymer resins modified with ceramic fillers. The distinct dielectric properties of the resin and the filler result in charge storage, and the release of the resulting polarization currents in the composite has had unpredictable consequences for reliability. The objective of this effort is therefore to investigate a technique that can measure the polarization in filled polymer resins and to evaluate reliable molding compounds.

A valuable approach to measuring polarization in polymers, where charge release is tied to the glass transition, is the thermally stimulated depolarization current (TSDC) technique. In this dissertation a new TSDC measurement system was designed and fabricated. The instrument is an assembly of several components automated via a LabVIEW program, giving the user the flexibility to test different dielectric compounds at high temperatures and high voltages. Temperature control is achieved through dry-air convection heating at a very slow rate, enabling controlled heating and cooling. Charge trapping and de-trapping processes were investigated in order to obtain information on insulating polymeric composites and how to optimize them.

A number of material properties were investigated. First, polarization due to charges on the filler was investigated using composites containing charged and uncharged particles, using quartz and ion-exchanged montmorillonite silicates in an epoxy matrix. The thermally activated charge release shows a difference according to the composite's characteristics and preparation, indicating that the trap levels depend on the de-trapping process and on the chemical nature of the trap site. Using a numerical approach to the release spectra, a model was developed to extract, through short-time testing, important parameters such as the glass transition temperature, residual polarization, depolarization peak, window polarization, and activation energy of relaxations. Second, a design of mold compounds combining manufacturing (molding temperature), geometric (packaging material thickness), and composition (filler amount and size) effects was developed using a novel design of experiments approach. The statistical DOE enabled the determination of which causes, both as individual variables and in combination, should be considered when designing a mold compound with minimal polarization. Finally, the DOE approach was used to develop a reliable high-temperature molding compound using combinations of thermally conductive and non-conductive fillers of different shapes. Through this systematic approach to developing a measurement technique and designing a mold compound that addresses the multiple impacts on packaging reliability, the dissertation provides an approach to the design, selection, performance, and durability of molding compounds.
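
A small sketch of one analysis step mentioned above, the extraction of an activation energy from a TSDC trace, is given below using the common initial-rise method on a synthetic current signal; the numbers are invented and this is not the dissertation's LabVIEW system.

```python
# TSDC initial-rise sketch: on the low-temperature flank of a
# depolarization peak, I(T) ~ exp(-Ea / (k_B * T)), so a linear fit
# of ln(I) against 1/T yields the activation energy Ea.
import numpy as np

K_B = 8.617e-5                        # Boltzmann constant, eV/K

# Synthetic current on the rising flank of a peak (Ea to recover).
Ea_true = 0.95                        # eV
T = np.linspace(330, 360, 15)         # temperature sweep, K
noise = 1 + 0.01 * np.random.default_rng(0).standard_normal(T.size)
I = 1e-3 * np.exp(-Ea_true / (K_B * T)) * noise

slope, _ = np.polyfit(1.0 / T, np.log(I), 1)   # slope = -Ea / k_B
print(f"estimated Ea = {-slope * K_B:.3f} eV")  # ~0.95 expected
```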
