31

Multi-sensor Optimization Of The Simultaneous Turning And Boring Operation

Deane, Erick Johan 01 January 2011 (has links)
To remain competitive in today’s demanding economy, manufacturers face increasing pressure to improve productivity and reduce scrap. Traditional metal removal processes such as turning and boring remain among the most widely used techniques for fabricating metal products. Although the essential metal removal process is unchanged, advances in technology have improved monitoring of the process, allowing reductions in power consumption, tool wear, and total cost of production. Replacing used CNC lathes from the 1980s in a manufacturing facility may prove costly, so finding a method to modernize the lathes is vital. This research covers Phases I and II of a three-phase project whose final goal is to optimize the simultaneous turning and boring operation of a CNC lathe. From the optimization results it will be possible to build an adaptive controller that produces parts rapidly while minimizing tool wear and machinist interaction with the lathe. Phase I of the project selected the sensors used to monitor the operation and designed a program architecture that allows simultaneous data collection from the selected sensors at high sampling rates. Signals monitored during the operation included force, temperature, vibration, sound, acoustic emissions, power, and metalworking fluid flow rates. Phase II focused on using the Response Surface Method to build empirical models for the various responses and to optimize the simultaneous cutting process. The simultaneous turning and boring process was defined by four factors: spindle speed, feed rate, outer-diameter depth of cut, and inner-diameter depth of cut. A total of four sets of experiments were performed. The first set screened the experimental region to determine whether the cutting parameters were feasible. The next three sets of experiments used Central Composite Designs to build empirical models of each desired response in terms of the four factors and to optimize the process. Each design of experiments was compared with the others to validate that the results were accurate within the experimental region. Using the Response Surface Method, optimal machining parameter settings were obtained; the search for optimal process parameter settings used the desirability function. Applying the results of this research in the manufacturing facility will reduce power consumption, shorten production time, and decrease the total cost of each part.
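As a rough illustration of the workflow this abstract describes (fit second-order models to central composite design data, then search for factor settings that maximize an overall desirability), the sketch below uses two coded factors and invented responses; the factor names, data values, and desirability limits are hypothetical and are not taken from the dissertation.

```python
# Sketch: desirability-based optimization over fitted second-order (CCD) models.
# The factors, responses, data, and desirability limits below are hypothetical
# illustrations; they are not values from the dissertation.
import numpy as np
from scipy.optimize import minimize

def quad_terms(x1, x2):
    """Second-order model terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Coded two-factor central composite design (axial distance 1.414, 3 center runs).
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])

rng = np.random.default_rng(0)
# Hypothetical responses: tool wear (to be minimized) and removal rate (maximized).
wear = 0.30 + 0.10 * X[:, 0] + 0.05 * X[:, 1] + 0.08 * X[:, 0]**2 + rng.normal(0, 0.01, len(X))
mrr = 5.0 + 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.3 * X[:, 0]**2 + rng.normal(0, 0.05, len(X))

M = quad_terms(X[:, 0], X[:, 1])
b_wear = np.linalg.lstsq(M, wear, rcond=None)[0]   # least-squares fit of each surface
b_mrr = np.linalg.lstsq(M, mrr, rcond=None)[0]

def predict(b, x):
    """Evaluate a fitted quadratic surface at a single coded point x = (x1, x2)."""
    return (quad_terms(np.array([x[0]]), np.array([x[1]])) @ b)[0]

def desirability(y, low, high, maximize):
    """One-sided linear desirability on [0, 1]."""
    d = (y - low) / (high - low) if maximize else (high - y) / (high - low)
    return min(max(d, 1e-6), 1.0)

def neg_overall(x):
    d_wear = desirability(predict(b_wear, x), 0.2, 0.6, maximize=False)
    d_mrr = desirability(predict(b_mrr, x), 3.0, 7.0, maximize=True)
    return -np.sqrt(d_wear * d_mrr)     # geometric mean of individual desirabilities

result = minimize(neg_overall, x0=np.zeros(2), bounds=[(-1.414, 1.414)] * 2)
print("coded optimum:", result.x, "overall desirability:", -result.fun)
```

The study itself used four factors and responses measured on the lathe; only the fit-then-maximize-desirability structure is the same here.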
32

Construction and properties of Box-Behnken designs

Jo, Jinnam 01 February 2006 (has links)
Box-Behnken designs are used to estimate parameters in a second-order response surface model (Box and Behnken, 1960). These designs are formed by combining ideas from incomplete block designs (BIBD or PBIBD) and factorial experiments, specifically 2<sup>k</sup> full or 2<sup>k-1</sup> fractional factorials. In this dissertation, a more general mathematical formulation of the Box-Behnken method is provided, a general expression for the coefficient matrix in the least squares analysis for estimating the parameters in the second-order model is derived, and the properties of Box-Behnken designs with respect to the estimability of all parameters in a second-order model are investigated when 2<sup>k</sup> full factorials are used. The results show that for all pure quadratic coefficients to be estimable, the PBIB(m) design has to be chosen such that its incidence matrix is of full rank, and for all mixed quadratic coefficients to be estimable, the PBIB(m) design has to be chosen such that the parameters λ₁, λ₂, ..., λ<sub>m</sub> are all greater than zero. In order to reduce the number of experimental points, the use of 2<sup>k-1</sup> fractional factorials instead of 2<sup>k</sup> full factorials is considered. Of particular interest and importance are separate considerations of fractions of resolutions III, IV, and V. The construction of Box-Behnken designs using such fractions is described, and the properties of the designs concerning estimability of regression coefficients are investigated. Designs obtained from resolution V fractions have the same properties as those using full factorials, whereas resolution III and IV fractions may lead to non-estimability of certain coefficients and to correlated estimators. The final topic is concerned with Box-Behnken designs in which treatments are applied to experimental units sequentially in time or space and in which there may exist a linear trend effect. For this situation, one wants to find appropriate run orders for a linear trend-free Box-Behnken design, so that a simple technique, analysis of variance, can be used to remove the linear trend effect instead of a more complicated technique, analysis of covariance. Construction methods for linear trend-free Box-Behnken designs are introduced for different values of the block size k of the underlying PBIB design. For k = 2 or 3, it may not always be possible to find linear trend-free Box-Behnken designs; however, for k ≥ 4 linear trend-free Box-Behnken designs can always be constructed. / Ph. D.
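For reference, the second-order response surface model referred to throughout this abstract, written in standard RSM notation (not copied from the dissertation), is

```latex
y = \beta_0 + \sum_{i=1}^{k}\beta_i x_i
    + \sum_{i=1}^{k}\beta_{ii} x_i^{2}
    + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon .
```

The pure quadratic coefficients are the β<sub>ii</sub> terms and the mixed quadratic coefficients are the β<sub>ij</sub> terms; the estimability conditions stated above concern these two groups.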
33

Simulation-optimization studies: under efficient simulation strategies, and a novel response surface methodology algorithm

Joshi, Shirish 06 June 2008 (has links)
While attempting to solve optimization problems, the lack of an explicit mathematical expression of the problem may preclude the application of the standard methods of optimization that prove valuable in an analytical framework. In such situations, computer simulations are used to obtain the mean response values for the required settings of the independent variables. Procedures for optimizing on the mean response values, which are in turn obtained through computer simulation experiments, are called simulation-optimization techniques. The focus of this work is on the simulation-optimization technique of response surface methodology (RSM). RSM is a collection of mathematical and statistical techniques for experimental optimization. Correlation induction strategies can be employed in RSM to achieve improved statistical inference in experimental designs and sequential experimentation. Also, the search procedures currently employed by RSM algorithms can be improved by incorporating gradient deflection methods. This dissertation has three major goals: (a) develop analytical results to quantitatively express the gains of using the common random number (CRN) strategy of variance reduction over direct simulation (independent streams, or IS, strategy) at each stage of RSM, (b) develop a new RSM algorithm by incorporating gradient deflection methods into existing RSM algorithms, and (c) conduct extensive empirical studies to quantify: (i) the gains of using the CRN strategy over direct simulation in a standard RSM algorithm, and (ii) the gains of the new RSM algorithm over a standard existing RSM algorithm. / Ph. D.
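A minimal sketch of the CRN idea behind goal (a), using a simple single-server queue as the simulated system; the queue model and parameter values are illustrative only and do not come from the dissertation.

```python
# Sketch: common random numbers (CRN) vs independent streams (IS) for estimating
# the difference in mean response between two system configurations.
# The queue model and parameters are illustrative, not from the dissertation.
import numpy as np

def mean_wait(service_rate, arrival_u, service_u, arrival_rate=0.8):
    """Lindley recursion for the mean waiting time in a single-server FIFO queue,
    driven by explicit uniform random numbers so streams can be shared."""
    inter = -np.log(arrival_u) / arrival_rate       # exponential interarrival times
    serv = -np.log(service_u) / service_rate        # exponential service times
    w = np.zeros(len(inter))
    for i in range(1, len(inter)):
        w[i] = max(0.0, w[i - 1] + serv[i - 1] - inter[i])
    return w.mean()

rng = np.random.default_rng(1)
n_reps, n_cust = 200, 500
diff_crn, diff_is = [], []
for _ in range(n_reps):
    a_u, s_u = rng.random(n_cust), rng.random(n_cust)
    # CRN: both configurations see exactly the same random numbers.
    diff_crn.append(mean_wait(1.0, a_u, s_u) - mean_wait(1.2, a_u, s_u))
    # IS: the second configuration gets fresh, independent random numbers.
    a_u2, s_u2 = rng.random(n_cust), rng.random(n_cust)
    diff_is.append(mean_wait(1.0, a_u, s_u) - mean_wait(1.2, a_u2, s_u2))

print("variance of estimated difference, CRN:", np.var(diff_crn))
print("variance of estimated difference, IS :", np.var(diff_is))
```

Because both configurations respond similarly to the same random inputs, their responses are positively correlated under CRN, so the variance of the estimated difference is typically much smaller than with independent streams.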
34

Robust parameter optimization strategies in computer simulation experiments

Panis, Renato P. 06 June 2008 (has links)
An important consideration in computer simulation studies is the issue of model validity, the level of accuracy with which the simulation model represents the real-world system under study. This dissertation addresses a major cause of model validity problems: the dissimilarity between the simulation model and the real system due to the dynamic nature of the real system, which results from the presence of nonstationary stochastic processes within the system. This transitory characteristic of the system is typically not addressed in the search for an optimal solution. In reliability and quality control studies, it is known that optimizing with respect to the variance of the response is as important a concern as optimizing with respect to the average performance response. Genichi Taguchi has been instrumental in the advancement of this philosophy. His work has resulted in what is now popularly known as the Taguchi Methods for robust parameter design. Following Taguchi's philosophy, the goal of this research is to devise a framework for finding optimum operating levels for the controllable input factors of a stochastic system that are insensitive to internal sources of variation. Specifically, the model validity problem of nonstationary system behavior is viewed as a major internal cause of system variation. This research examines the typical application of response surface methodology (RSM) to the problem of simulation optimization and the simplifying assumptions that enable the use of RSM techniques. Relaxing these assumptions to address model validity leads to a modification of the RSM approach that properly handles optimization in the presence of nonstationarity. Taguchi's strategy and methods are then adapted and applied to this problem. Finally, dual-response RSM extensions of the Taguchi approach, which model the process performance mean and variance separately, are considered and suitably revised to address the same problem. A second cause of model validity problems is considered as well: the random behavior of the supposedly controllable input factors to the system. A resolution to this source of model invalidity is proposed based on the methodology described above. / Ph. D.
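One common way to write the dual-response formulation mentioned in the last paragraph, given here in generic textbook form rather than the dissertation's exact statement, is

```latex
\min_{\mathbf{x}\in R}\ \hat{\sigma}(\mathbf{x})
\quad \text{subject to} \quad \hat{\mu}(\mathbf{x}) = T ,
```

where μ̂(x) and σ̂(x) are second-order surfaces fitted to the sample mean and sample standard deviation of the replicated simulation responses, R is the experimental region, and T is a target value for the mean response.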
35

Outliers and robust response surface designs

O'Gorman, Mary Ann January 1984 (has links)
A commonly occurring problem in response surface methodology is that of inconsistencies in the response variable. These inconsistencies, or maverick observations, are referred to here as outliers. Many models exist for describing these outliers. Two of these models, the mean shift and the variance inflation outlier models, are employed in this research. Several criteria are developed for determining when the outlying observation is detrimental to the analysis. These criteria all lead to the same condition, which is used to develop statistical tests of the null hypothesis that the outlier is not detrimental to the analysis. These results are extended to the multiple-outlier case for both models. The robustness of response surface designs is also investigated. Robustness to outliers, missing data, and errors in control is examined for first-order models. The orthogonal designs with large second moments, such as the 2ᵏ factorial designs, are optimal in all three cases. In the second-order case, robustness to outliers and to missing data is examined. Optimal design parameters are obtained by computer for the central composite, Box-Behnken, hybrid, small composite, and equiradial designs. Similar results are seen for robustness to outliers and robustness to missing data. The central composite turns out to be the optimal design type, and of the two economical design types, the small composite is preferred to the hybrid. / Ph. D.
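Stated generically (the dissertation's own parameterization may differ), the two outlier models for a single suspect observation i are

```latex
\text{Mean shift:}\qquad y_i = \mathbf{x}_i'\boldsymbol{\beta} + \Delta + \varepsilon_i,
\qquad \varepsilon_i \sim N(0,\ \sigma^{2}),
\qquad\qquad
\text{Variance inflation:}\qquad y_i = \mathbf{x}_i'\boldsymbol{\beta} + \varepsilon_i,
\qquad \varepsilon_i \sim N(0,\ b\,\sigma^{2}),\quad b > 1,
```

with all remaining observations following the usual model y<sub>j</sub> = x<sub>j</sub>'β + ε<sub>j</sub>, ε<sub>j</sub> ~ N(0, σ²).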
36

Effective design augmentation for prediction

Rozum, Michael A. 03 August 2007 (has links)
In a typical response surface study, an experimenter will fit a first-order model in the early stages of the study and obtain the path of steepest ascent. The path leads the experimenter out of this initial region of interest and into a new region of interest. The experimenter may fit another first-order model here or, if curvature is believed to be present in the underlying system, a second-order model. In the final stages of the study, the experimenter fits a second-order model and typically contracts the region of interest as the levels of the factors that optimize the response are nearly determined. Due to the sequential nature of experimentation in a typical response surface study, the experimenter may find himself/herself wanting to augment some initial design with additional runs within the current region of interest. The little discussion that exists in the statistical literature suggests adding runs sequentially in a conditional D-optimal manner. Four prediction-oriented criteria, I<sub>IV</sub>, I<sub>SV</sub><sub>r</sub>, I<sub>SV</sub><sub>r</sub><sup>ADJ</sup>, and G, and two estimation-oriented criteria, A and E, are studied here as other possible sequential design augmentation optimality criteria. Analytical properties of I<sub>IV</sub>, I<sub>SV</sub><sub>r</sub>, and A are developed within the context of the design augmentation problem. I<sub>SV</sub><sub>r</sub> is found to be somewhat ineffective in actual sequential design augmentation situations. A new, more effective criterion, I<sub>SV</sub><sub>r</sub><sup>ADJ</sup>, is introduced and thoroughly developed. Software is developed which allows sequential design augmentation via these seven criteria. Unlike existing design augmentation software, it makes all locations within the current region of interest eligible for inclusion in the augmenting design (a continuous candidate list). Case studies were performed. For a first-order model there was negligible difference in the prediction variance properties of the designs generated via sequential augmentation by D and by the best of the other criteria, I<sub>IV</sub>, I<sub>SV</sub><sub>r</sub><sup>ADJ</sup>, and A. For a second-order model, however, the designs generated via sequential augmentation by D place too few runs, too late, in the interior of the region of interest. Thus, designs generated via sequential augmentation by D yield prediction variance properties inferior to those of the designs generated via I<sub>IV</sub>, I<sub>SV</sub><sub>r</sub><sup>ADJ</sup>, and A. The D-efficiencies of the designs generated via sequential augmentation by I<sub>IV</sub>, I<sub>SV</sub><sub>r</sub><sup>ADJ</sup>, and A range from reasonable to fully D-optimal. Therefore, the I<sub>IV</sub> and I<sub>SV</sub><sub>r</sub><sup>ADJ</sup> optimality criteria are recommended for sequential design augmentation when quality of prediction is more important than quality of estimation of the coefficients. / Ph. D.
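The conditional D-optimal augmentation that the literature suggests, and against which the other criteria are compared, can be sketched as a greedy search; the Python sketch below uses a discrete candidate grid for simplicity, whereas the software described above uses a continuous candidate list, and all numbers are illustrative.

```python
# Sketch: greedy conditional D-optimal augmentation of an existing design.
# At each step the candidate that maximizes det(X'X) of the augmented
# second-order model matrix is added. Illustrative only; a discrete grid
# stands in for the continuous candidate list described in the abstract.
import itertools
import numpy as np

def second_order_row(point):
    """Model terms 1, x1, x2, x1*x2, x1^2, x2^2 for a two-factor quadratic model."""
    x1, x2 = point
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def augment_d_optimal(design, candidates, n_add):
    design = list(design)
    for _ in range(n_add):
        X = np.array([second_order_row(p) for p in design])
        best_point, best_det = None, -np.inf
        for c in candidates:
            Xa = np.vstack([X, second_order_row(c)])
            d = np.linalg.det(Xa.T @ Xa)      # D criterion for the augmented design
            if d > best_det:
                best_det, best_point = d, c
        design.append(best_point)
    return design

# Existing runs: a 2^2 factorial plus a center point.
initial = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, 0)]
# Candidate grid over the current region of interest.
grid = [(a, b) for a, b in itertools.product(np.linspace(-1, 1, 11), repeat=2)]
print(augment_d_optimal(initial, grid, n_add=3))
```

Prediction-oriented criteria of the I type replace the determinant in this loop with an integrated (average) prediction variance over the region of interest, which is why they tend to place augmenting runs in the interior earlier than D does.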
37

Applications and optimization of response surface methodologies in high-pressure, high-temperature gauges

Hässig Fonseca, Santiago 05 July 2012 (has links)
High-Pressure, High-Temperature (HPHT) pressure gauges are commonly used in oil wells for pressure transient analysis. Mathematical models are used to relate input perturbations (e.g., flow rate transients) with output responses (e.g., pressure transients) and, subsequently, to solve an inverse problem that infers reservoir parameters. The indispensable use of pressure data in well testing motivates continued improvement in the accuracy (quality), sampling rate (quantity), and autonomy (lifetime) of pressure gauges. This body of work presents improvements in three areas of high-pressure, high-temperature quartz memory gauge technology: calibration accuracy, multi-tool signal alignment, and tool autonomy estimation. The discussion introduces the response surface methodology used to calibrate gauges, develops accuracy and autonomy estimates based on controlled tests, and, where applicable, relies on field gauge drill stem test data to validate accuracy predictions. Specific contributions of this work include:
- Application of the unpaired sample t-test, a first in quartz sensor calibration, which resulted in a reduction of uncertainty in gauge metrology by a factor of 2.25 and improvements in absolute and relative tool accuracies of 33% and 56%, respectively. Greater accuracy yields more reliable data and a more sensitive characterization of well parameters.
- Post-processing of measurements from two or more tools using a dynamic time warp algorithm that mitigates gauge clock drifts. Where manual alignment methods account only for linear shifts, the dynamic algorithm elastically corrects nonlinear misalignments accumulated throughout a job with an accuracy that is limited only by the clock's time resolution.
- Empirical modeling of tool autonomy based on gauge selection, battery pack, sampling mode, and average well temperature. A first of its kind, the model distills autonomy into two independent parameters, each a function of the same two orthogonal factors: battery power capacity and gauge current consumption as functions of sampling mode and well temperature -- a premise that, for three or more gauge and battery models, reduces the design of future autonomy experiments by at least a factor of 1.5.
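The dynamic time warp alignment in the second contribution can be sketched with the classic dynamic-programming recurrence below; this is the generic textbook DTW, not the author's production implementation, and the two signals are synthetic stand-ins for gauge records.

```python
# Sketch: classic dynamic time warping (DTW) between two 1-D signals.
# Generic textbook algorithm, shown only to illustrate the alignment idea.
import numpy as np

def dtw_path(a, b):
    """Return the accumulated-cost matrix and the optimal warping path
    aligning signal a to signal b under absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from the end to recover the elastic alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[1:, 1:], path[::-1]

# Two pressure-like records whose clocks drift relative to each other.
t = np.linspace(0, 10, 200)
gauge1 = np.sin(t)
gauge2 = np.sin(t * 1.03 + 0.1)          # slightly stretched and shifted copy
cost, path = dtw_path(gauge1, gauge2)
print("alignment cost:", cost[-1, -1], "path length:", len(path))
```

In practice the two records would be the time series of two gauges run in tandem; the warping path gives the elastic correspondence used to correct the accumulated, nonlinear clock drift.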
38

Analytical Fragility Curves for Highway Bridges in Moderate Seismic Zones

Nielson, Bryant G. 23 November 2005 (has links)
Historical seismic events such as the San Fernando earthquake of 1971 and the Loma Prieta earthquake of 1989 did much to highlight the vulnerabilities of many existing highway bridges. However, it was not until 1990 that this awareness extended to moderate seismic regions such as the Central and Southeastern United States (CSUS). This relatively long neglect of seismic issues pertaining to bridges in these moderate seismic zones has resulted in a portfolio of existing bridges with seismic deficiencies which must be assessed and addressed. An emerging decision tool, whose use is becoming increasingly popular in the assessment of this seismic risk, is the seismic fragility curve. Fragility curves are conditional probability statements that give the probability of a bridge reaching or exceeding a particular damage level for an earthquake of a given intensity level. As much research has been devoted to the implementation of fragility curves in risk assessment packages, a great need has arisen for bridge fragility curves that are reliable, particularly for bridges in moderate seismic zones. The purpose of this study is to use analytical methods to generate fragility curves for the nine bridge classes that are most common in the CSUS. This is accomplished by first considering the existing bridge inventory and assessing typical characteristics and details, from which detailed 3-D analytical models are created. The bridges are subjected to a suite of synthetic ground motions which were developed explicitly for the region. Probabilistic seismic demand models (PSDMs) are then generated using these analyses. From these PSDMs, fragility curves are generated by considering specific levels of damage which may be of interest. The fragility curves show that the most vulnerable of the nine bridge classes considered are those utilizing steel girders. Concrete girder bridges appear to be the next most vulnerable, followed by single-span bridges of all types. Various sources of uncertainty are considered and tracked throughout this study, which allows for their direct implementation into existing seismic risk assessment packages.
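The fragility formulation typically paired with PSDMs of this kind, written here in its common generic form (the dissertation's exact notation may differ), combines a power-law demand model with a lognormal capacity:

```latex
\ln S_d = \ln a + b\,\ln(\mathrm{IM}),
\qquad
P\big[D \ge C \mid \mathrm{IM}\big]
  = \Phi\!\left(
      \frac{\ln\!\big(S_d/S_c\big)}
           {\sqrt{\beta_{d\mid \mathrm{IM}}^{2} + \beta_c^{2}}}
    \right),
```

where S<sub>d</sub> and β<sub>d|IM</sub> are the median and dispersion of the seismic demand at intensity measure IM, S<sub>c</sub> and β<sub>c</sub> are the median and dispersion of the capacity associated with the damage state of interest, and Φ is the standard normal cumulative distribution function.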
39

Seismic Vulnerability Assessment of Retrofitted Bridges Using Probabilistic Methods

Padgett, Jamie Ellen 09 April 2007 (has links)
The central focus of this dissertation is a seismic vulnerability assessment of retrofitted bridges. The objective of this work is to establish a methodology for the development of system-level fragility curves for typical classes of retrofitted bridges using a probabilistic framework. These tools could provide valuable support for risk mitigation efforts in the region by quantifying the impact of retrofit on potential levels of damage over a range of earthquake intensities. The performance evaluation includes the development of high-fidelity three-dimensional nonlinear analytical models of bridges retrofitted with a range of retrofit measures and characterization of their response under seismic loading. Sensitivity analyses were performed to establish an understanding of the appropriate level of uncertainty treatment needed to model, assess, and propagate the sources of uncertainty inherent to a seismic performance evaluation for portfolios of structures. Seismic fragility curves are developed to depict the impact of various retrofit devices on the seismic vulnerability of bridge systems. This work provides the first set of fragility curves for a range of bridge types and retrofit measures. Frameworks for their use in decision making for the identification of viable retrofit measures, performance-based retrofit of bridges, and cost-benefit analyses are illustrated. The fragility curves developed as a part of this research will fill a major gap in existing seismic risk assessment software and enable decision makers to quantify the benefits of various retrofits.
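One generic way of relating component and system fragilities for a series-type bridge system, given only as a standard first-order bound and not as the dissertation's specific procedure, is

```latex
\max_i \, P\big[F_i \mid \mathrm{IM}\big]
  \;\le\; P\big[F_{\mathrm{sys}} \mid \mathrm{IM}\big]
  \;\le\; 1 - \prod_i \Big(1 - P\big[F_i \mid \mathrm{IM}\big]\Big),
```

where F<sub>i</sub> denotes component i reaching the damage state of interest; the lower and upper bounds correspond to fully correlated and fully independent component demands, respectively.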
40

An efficient technique for structural reliability with applications

Janajreh, Ibrahim Mustafa 28 July 2008 (has links)
An efficient reliability technique has been developed based on Response Surface Methodology (RSM) in conjunction with the First Order Second Moment (FOSM) reliability method. The technique is applied when the limit state function cannot be obtained explicitly in terms of the design variables, i.e., when the analysis is performed using numerical techniques such as finite elements. The technique has proven to be efficient because it can handle problems with large numbers of design variables and with correlated as well as nonnormal random variables. When compared with analytical results, the method has shown excellent agreement. The technique contains a sensitivity analysis scheme which can be used to reduce the computation time while maintaining nearly the same accuracy. This technique allows most finite element codes to be extended to account for probabilistic analysis, where statistical variations can be added to the design variables. An explicit solution for rocket motors consisting of propellant and a steel case under environmental temperature variations is compared to the RSM technique. The method is then used for the analysis of rocket motors subjected to mechanical loads for which the stress analysis is performed using the finite element method. The technique is also applied to study the reliability of a laminated composite plate with geometric nonlinearity subjected to static and time-dependent loadings. Different failure modes as well as different meshes were considered. Results have shown that when the relative size of the element is introduced into the probabilistic model, the same reliability value is obtained regardless of the number of elements in the mesh. This is advantageous because it allows the technique to be used for problems where the failure region is unknown. / Ph. D.
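For reference, the mean-value FOSM approximation that underlies the reliability index when the limit state g(X) is replaced by a fitted response surface (generic form; the dissertation may use a more refined variant such as the Hasofer-Lind index) is

```latex
g(\mathbf{X}) \approx g(\boldsymbol{\mu})
  + \sum_{i} \left.\frac{\partial g}{\partial X_i}\right|_{\boldsymbol{\mu}} (X_i - \mu_i),
\qquad
\beta = \frac{g(\boldsymbol{\mu})}
             {\sqrt{\sum_i \sum_j
               \left.\frac{\partial g}{\partial X_i}\right|_{\boldsymbol{\mu}}
               \left.\frac{\partial g}{\partial X_j}\right|_{\boldsymbol{\mu}}
               \mathrm{Cov}(X_i, X_j)}},
```

with the failure probability approximated by P<sub>f</sub> ≈ Φ(−β) when g(X) is approximately normally distributed; correlated design variables enter through the covariance terms in the denominator.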
