171 |
Design equation for the lip opening of a film extrusion die
Jones, Andrea Cecil 07 January 2008 (has links)
No description available.
|
172 |
Theoretically and computationally improving branch and bound through multivariate branching with internal cutting planes
Lee, Jin Hua January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer Programs (IP) are a class of discrete optimization problems that are utilized
commercially to improve the function of various systems. Implementation is often
aimed at reaching optimal financial objectives with constraints on resources and operation.
While incredibly beneficial, IPs are NP-complete, and many practical instances cannot be solved in a reasonable amount of time.
Branch and bound (BB) is the primary method employed to solve IPs to optimality.
BB is an exhaustive approach to enumerating all potential integer solutions for a given
IP. By utilizing a hierarchical tree structure to tabulate progression of enumeration, BB
can guarantee an optimal solution in finite time. However, BB can take an exponential
number of iterations to solve an IP. Computationally, this can result in a tree structure
that exceeds a computer’s memory capacity, or a prohibitively long solution time.
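As a point of reference for the QHBA modifications described next, here is a minimal sketch of classical LP-based branch and bound in Python. It uses standard binary branching, not the quaternary hyperplane scheme this thesis proposes, and the problem data at the bottom are hypothetical.

```python
# Minimal LP-relaxation branch and bound for a small maximization IP.
# Standard binary branching on one fractional variable per node; the
# example data at the bottom are hypothetical.
import math
from scipy.optimize import linprog

def branch_and_bound(c, A, b, bounds):
    """Maximize c.x subject to A.x <= b, x integer, via LP-based BB."""
    best_val, best_x = -math.inf, None
    stack = [bounds]                      # each node = per-variable bounds
    while stack:
        node = stack.pop()
        res = linprog([-ci for ci in c], A_ub=A, b_ub=b,
                      bounds=node, method="highs")
        if not res.success or -res.fun <= best_val:
            continue                      # infeasible, or pruned by bound
        frac = [(i, xi) for i, xi in enumerate(res.x)
                if abs(xi - round(xi)) > 1e-6]
        if not frac:                      # all-integer: candidate solution
            best_val, best_x = -res.fun, res.x
            continue
        i, xi = frac[0]                   # branch on first fractional var
        lo, hi = node[i]
        left, right = list(node), list(node)
        left[i] = (lo, math.floor(xi))    # branch: x_i <= floor(xi)
        right[i] = (math.ceil(xi), hi)    # branch: x_i >= ceil(xi)
        stack += [left, right]
    return best_val, best_x

# Tiny example: max 5x + 4y s.t. 6x + 4y <= 24, x + 2y <= 6, x, y >= 0
print(branch_and_bound([5, 4], [[6, 4], [1, 2]], [24, 6],
                       [(0, None), (0, None)]))
```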
This thesis introduces a modified version of BB called the Quaternary Hyperplane
Branching Algorithm (QHBA). QHBA employs a quaternary branching scheme, utilizes
hyperplane branching constraints, and generates internal cutting planes to increase efficiency.
These advancements make QHBA a theoretical improvement over traditional BB.
It can also be shown that QHBA guarantees an optimal solution
in a finite number of iterations. A short computational study shows that QHBA results
in a 26.7% decrease in solution times when compared to CPLEX, a commercially
available IP solver.
|
173 |
Optimizing defensive alignments in baseball through integer programming and simulation
Becker, Kyle William January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Baseball is an incredibly complex game in which managers have numerous decisions to make. Managers control both the offense and the defense of a team. Some have ruined their team’s chances of victory by removing their star pitcher too soon or leaving him in too long; managers also pinch hit for batters or pinch run for base runners to set up a “favorable match-up,” such as a left-handed pitcher versus a right-handed batter. This research’s goal is to aid managers by providing an optimal positioning of defensive players on the field for a particular batter.
In baseball, every ball that is hit onto the field of play can be an out if the fielders are positioned correctly. By positioning the fielders optimally, a team directly reduces the number of runs it gives up, which increases its chances of a win.
This research describes an integer program that determines the optimal location of defensive players. The integer program is based on a random set of hits that the batter has produced in the past, and it minimizes the expected cost associated with each hit, where the cost is defined by a penalty (single, double, or triple) or a benefit (out). By solving this integer program in OPL Studio 4.2, a commercial integer programming package, an optimal defensive positioning is derived for use against this batter.
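The structure of such a positioning IP can be sketched as follows, with PuLP standing in for OPL Studio. The zones, sampled hits, and penalties are hypothetical, and a hit is treated as an out whenever a fielder covers its landing zone.

```python
# Hedged sketch of a defensive-positioning IP. Hypothetical data:
# each sampled hit lands in a zone and carries a penalty (1 = single,
# 2 = double, 3 = triple); a covered zone turns the hit into an out.
import pulp

zones = range(6)                                  # candidate field positions
hits = [(0, 1), (2, 1), (2, 2), (4, 3), (5, 1)]   # (zone, penalty) pairs
n_fielders = 3

prob = pulp.LpProblem("defense", pulp.LpMinimize)
place = pulp.LpVariable.dicts("place", zones, cat="Binary")
missed = pulp.LpVariable.dicts("missed", range(len(hits)), cat="Binary")

for h, (z, pen) in enumerate(hits):               # a hit is missed unless
    prob += missed[h] >= 1 - place[z]             # its zone is covered
prob += pulp.lpSum(place[z] for z in zones) <= n_fielders
prob += pulp.lpSum(pen * missed[h] for h, (z, pen) in enumerate(hits))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("cover zones:", [z for z in zones if place[z].value() == 1])
```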
To test this defense against other standard defenses currently used by MLB teams, a simulation was created. The simulation uses Derek Jeter’s actual statistics, including his 2009 regular-season hit chart. It selects a hit at random according to the hit chart and determines the outcome (single, double, out, double play, etc.). Once the simulation is complete, a printout shows the batter’s statistics, including his average and slugging percentage.
By comparing the optimized defensive alignment with some commonly used major league alignments, it can be shown that the optimal alignment would decrease Jeter’s batting average by nearly 13% and his slugging percentage by 35%. It is my opinion that managers should use this tool to help them win more games. These defenses can be seamlessly implemented by any coach or team.
|
174 |
Fitting the Weibull and lognormal log-linear models to accelerated life test data
Wang, Wendai January 1999 (has links)
Accelerated life tests (ALTs), in which more than one stress is often involved, have become widely used in today's industries to obtain time-to-failure and reliability information at normal use conditions. Tests are conducted at higher-than-normal stress levels to shorten the test duration, and a physical-statistical model is needed to extrapolate the results from test conditions to usage conditions. The generalized Weibull and lognormal log-linear models, as two general ALT families, cover almost all ALT models currently in use in reliability engineering for this purpose. However, the development of multiple-stress ALTs has been hindered by the difficulty of performing adequate and satisfactory model fitting. This study presents extensive research on both point and interval estimates of the model parameters. The maximum likelihood estimate (MLE), the first choice for point estimation, has desirable statistical properties and a well-developed theory. Because of the complexity of the models and data patterns, a robust and efficient algorithm is essential to successful implementation of ML estimation; unfortunately, current methods become impractical, and no effective and practical approach had previously been developed for the generalized Weibull and lognormal log-linear models. A new approach to obtaining ML point estimates of the parameters for both models, which takes advantage of the generalized linear model (GLM) framework, is proposed and extensively studied in this research. The algorithm is numerically stable, easily programmed, and, notably, not strongly dependent on starting values. The proposed method may serve as a general approach to obtaining MLEs for ALT and other models that have two sets of unknown parameters, one in the mean function and the other in the variance function. Based on recent research, likelihood ratio (LR) confidence intervals are generally the best among the available approximate confidence methods. The LR confidence bound method is successfully applied here to calculate confidence limits on reliability under use conditions, and the study establishes a general method for calculating LR confidence limits on a function of the unknown parameters. The procedures for point and interval estimates have been developed, and their virtues are demonstrated with several numerical examples of actual accelerated life test data.
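The Weibull log-linear fit can be illustrated with a minimal sketch that assumes a constant shape parameter k and a log-linear scale eta(s) = exp(b0 + b1*s). The stress levels and failure times are hypothetical, and direct numerical optimization stands in for the GLM-based algorithm the study proposes.

```python
# Hedged sketch: ML fit of a Weibull log-linear ALT model with constant
# shape k and scale eta(s) = exp(b0 + b1*s). Data below are hypothetical.
import numpy as np
from scipy.optimize import minimize

stress = np.array([1.0, 1.0, 1.5, 1.5, 2.0, 2.0])          # scaled stress
times = np.array([950., 1100., 400., 520., 150., 210.])    # failure times

def neg_loglik(theta):
    b0, b1, log_k = theta
    k = np.exp(log_k)                        # keep shape positive
    eta = np.exp(b0 + b1 * stress)           # log-linear life-stress relation
    z = times / eta
    # Weibull log-pdf: log(k/eta) + (k-1) log(t/eta) - (t/eta)^k
    return -np.sum(np.log(k / eta) + (k - 1) * np.log(z) - z**k)

fit = minimize(neg_loglik, x0=[7.0, -1.0, 0.0], method="Nelder-Mead")
b0, b1, k = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"b0={b0:.2f}  b1={b1:.2f}  shape k={k:.2f}")
print("predicted scale at use stress s=0.5:", np.exp(b0 + b1 * 0.5))
```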
|
175 |
Performance evaluation of manufacturing systems using stochastic activity networks
Shah, Hemal Vinodchandra, 1967- January 1991 (has links)
In this thesis, Stochastic Activity Networks (SANs), an extension of Petri nets, are used for performance evaluation of manufacturing systems. In our formalism, a manufacturing system is represented hierarchically in three layers: the manufacturing flow layer, the control layer, and the network layer, and SAN models are constructed for each. To simplify the understanding of manufacturing flow, a new graphical representation, the Manufacturing Flow Network (MFN), has been developed; converting an MFN into SAN models simplifies modeling of the manufacturing flow layer. When the MFN at the product level is very complex, a decomposition technique is applied to reduce model complexity, and its accuracy is demonstrated under specific conditions. Finally, a performance evaluation of a sample manufacturing system is presented, using simulation to solve the model. Performance variables of interest, such as machine utilization, machine availability, and operation queue length, are discussed.
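The kind of performance measures the SAN simulation produces can be illustrated with a toy single-machine model. The arrival and service rates below are hypothetical, and this plain event loop is a generic stand-in for a SAN solver, not the thesis's formalism.

```python
# Toy discrete-event simulation of one machine: estimates the machine
# utilization and mean queue length named in the abstract. Rates are
# hypothetical (an M/M/1-style setup).
import random

random.seed(1)
T, t = 100_000.0, 0.0                 # horizon and simulation clock
arrival_rate, service_rate = 0.8, 1.0
next_arrival = random.expovariate(arrival_rate)
next_done = float("inf")              # inf means the machine is idle
queue, busy_time, qlen_area, last = 0, 0.0, 0.0, 0.0

while t < T:
    t = min(next_arrival, next_done)
    qlen_area += queue * (t - last)                  # time-weighted queue
    if next_done < float("inf"):                     # machine was busy
        busy_time += t - last
    last = t
    if t == next_arrival:                            # a job arrives
        next_arrival = t + random.expovariate(arrival_rate)
        if next_done == float("inf"):                # machine free: serve it
            next_done = t + random.expovariate(service_rate)
        else:
            queue += 1                               # otherwise it waits
    else:                                            # a job finishes
        if queue > 0:
            queue -= 1
            next_done = t + random.expovariate(service_rate)
        else:
            next_done = float("inf")

print(f"utilization ~ {busy_time / T:.2f}, "
      f"mean queue length ~ {qlen_area / T:.2f}")
```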
|
176 |
Intersymbol interference characterization and equalization for high density optical data storage
Gupta, Sunil, 1966- January 1992 (links)
In optical data storage, intersymbol interference (ISI) is one of the limiting factors for high storage density. To reduce ISI, we need to understand its characteristics and apply appropriate signal processing techniques. In this thesis, we characterize ISI for various data patterns and compare the signal-to-noise ratio (SNR) with and without ISI equalization. To characterize ISI, we sample the readback waveforms and process the data with a computer program. To compare SNR with and without equalization, we record a pseudo-random sequence and compare the detected marks with the original marks. We also compute SNR using the additive interleaving detection (AID) technique without ISI equalization. Our studies show that ISI becomes more significant as mark widths shrink, and that equalization can increase SNR by 3-9 dB. More interestingly, AID gives good SNR even without ISI equalization, which suggests that AID offers a simple route to higher storage density.
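The ISI-and-equalization idea can be illustrated with a short, hedged sketch: a pseudo-random mark sequence is smeared by a hypothetical channel response (neighboring marks overlap), and a least-squares FIR equalizer, a generic stand-in for whatever equalizer the thesis used, recovers the symbols.

```python
# Hedged ISI illustration: +/-1 marks pass through a hypothetical channel
# with strong neighbor echoes, plus noise; an 11-tap least-squares FIR
# equalizer trained on the first 500 symbols restores detectability.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2000) * 2 - 1           # pseudo-random +/-1 marks
channel = np.array([0.5, 1.0, 0.6])               # hypothetical ISI response
rx = np.convolve(bits, channel, mode="same") + 0.1 * rng.normal(size=bits.size)

taps, half = 11, 5                                # train equalizer by LS
X = np.array([rx[i - half:i + half + 1] for i in range(half, 500)])
w, *_ = np.linalg.lstsq(X, bits[half:500], rcond=None)

eq = np.convolve(rx, w[::-1], mode="same")        # apply equalizer
errors_raw = np.sum(np.sign(rx) != bits)
errors_eq = np.sum(np.sign(eq) != bits)
print(f"symbol errors: {errors_raw} without equalization, {errors_eq} with")
```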
|
177 |
Models for a carbon constrained, reliable biofuel supply chain network design and management
Marufuzzaman, Mohammad 01 October 2014 (has links)
This dissertation studies two important problems in the field of biomass supply chain networks. In the first part, we study the impact of different carbon regulatory policies, such as carbon cap, carbon tax, carbon cap-and-trade, and carbon offset mechanisms, on the design and management of a biofuel supply chain network under both deterministic and stochastic settings. These mathematical models identify locations and production capacities for biocrude production plants by exploring the trade-offs among transportation costs, facility investment costs, and emissions. The model is solved using a modified L-shaped algorithm, with the state of Mississippi as a testing ground. A number of observations are made about the impact of each policy on the biofuel supply chain network.

In the second part, we study the impact of intermodal hub disruption on a biofuel supply chain network. We present a mathematical model that designs a multimodal transportation network for a biofuel supply chain system in which intermodal hubs are subject to site-dependent probabilistic disruptions. The disruption probabilities of the intermodal hubs are estimated with a probabilistic model developed from real-world data. We further extend this model to a mixed-integer nonlinear program that allocates intermodal hubs dynamically to cope with biomass supply fluctuations and to hedge against natural disasters, and we develop a rolling-horizon-based Benders decomposition algorithm to solve this challenging NP-hard problem. Numerical experiments show that the proposed algorithm can solve large-scale problem instances to near-optimal solutions in reasonable time. We apply the models to a case study using data from the southeastern U.S. Finally, a number of managerial insights are drawn about the impact of intermodal-related risk on supply chain performance.
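The deterministic design model in the first part can be illustrated with a toy facility-location MILP. All data below are hypothetical, and a plain PuLP/CBC solve stands in for the modified L-shaped algorithm.

```python
# Toy carbon-constrained plant-location model: open biocrude plants and
# route biomass to minimize investment plus transportation cost, subject
# to a cap on shipping emissions. All numbers are hypothetical.
import pulp

plants, farms = ["P1", "P2"], ["F1", "F2", "F3"]
fixed = {"P1": 100, "P2": 80}                    # plant investment cost
cap = {"P1": 60, "P2": 50}                       # plant capacity
supply = {"F1": 30, "F2": 25, "F3": 20}          # biomass to ship
tcost = {("F1", "P1"): 2, ("F1", "P2"): 4, ("F2", "P1"): 3,
         ("F2", "P2"): 1, ("F3", "P1"): 5, ("F3", "P2"): 2}
emis = {k: 0.5 * v for k, v in tcost.items()}    # shipping emissions
carbon_cap = 70                                  # loose in this toy instance

m = pulp.LpProblem("biofuel", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", plants, cat="Binary")
ship = pulp.LpVariable.dicts("ship", list(tcost), lowBound=0)

m += pulp.lpSum(fixed[p] * open_[p] for p in plants) + \
     pulp.lpSum(tcost[a] * ship[a] for a in tcost)
for f in farms:                                  # all biomass must ship out
    m += pulp.lpSum(ship[(f, p)] for p in plants) == supply[f]
for p in plants:                                 # capacity only if opened
    m += pulp.lpSum(ship[(f, p)] for f in farms) <= cap[p] * open_[p]
m += pulp.lpSum(emis[a] * ship[a] for a in tcost) <= carbon_cap

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({p: open_[p].value() for p in plants}, pulp.value(m.objective))
```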
|
178 |
Blood Glucose Management in Streptozotocin-Induced Diabetic Rats by Artificial Neural Network Based Model Predictive Control
Bahremand, Saeid 12 March 2017 (links)
Diabetes is a group of metabolic diseases in which the pancreas does not produce enough insulin or the body does not properly respond to the insulin produced, resulting in high blood sugar levels over a prolonged period. There are several types of diabetes, but the most common forms are type 1 and type 2. Type 1 diabetes mellitus (T1DM) can occur at any age but is most commonly diagnosed from infancy to the late 30s. In T1DM the pancreas produces little to no insulin because the body's immune system destroys the insulin-producing cells. Those diagnosed must inject insulin several times every day or continually infuse it through a pump, as well as manage their diet and exercise habits. If not treated appropriately, the disease can cause serious complications such as cardiovascular disease, stroke, kidney failure, foot ulcers, and damage to the eyes.

During the past decade, researchers have developed the artificial pancreas (AP) to ease the management of diabetes. The AP has three components: a continuous glucose monitor (CGM), an insulin pump, and a closed-loop control algorithm. Researchers have developed algorithms for blood glucose level (BGL) control based on techniques such as Proportional Integral Derivative (PID) control and Model Predictive Control (MPC); however, variability in metabolism between or within individuals hinders reliable control.

This study aims to develop an adaptive algorithm using Artificial Neural Network based Model Predictive Control (NN-MPC) to perform proper insulin injections according to BGL predictions in diabetic rats, as groundwork for implementing the NN-MPC algorithm on real subjects. BGL data collected from diabetic rats using a CGM are combined with other inputs, such as insulin injections and meal information, to develop a virtual plant model based on the mathematical model of glucose-insulin homeostasis proposed by Lombarte et al. Since that model describes healthy rats, a revised version with three additional equations representing diabetic rats is used to generate data for training the ANN, which is applicable to identifying the dynamics and glycemic regulation of rats. The trained ANN is coupled with the MPC algorithm to control the BGL of the plant model within the normal range of 100 to 130 mg/dl by injecting appropriate amounts of insulin. The ANN performed well, with less than 5 mg/dl error (2%) for 5-minute predictions and about 15 mg/dl error (7%) for 30-minute predictions. In addition, the NN-MPC algorithm kept the BGL of the diabetic rats within the normal range more than 90 percent of the time, without hyper- or hypoglycaemia.
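The NN-MPC loop can be illustrated with a minimal sketch. Everything here is a hedged assumption: the toy plant is not the Lombarte model, the dose range and dynamics are invented, and a small scikit-learn network stands in for the thesis's ANN.

```python
# Hedged NN-MPC sketch: a small neural net learns one-step BGL dynamics
# from simulated data, then a controller picks the insulin dose whose
# rolled-forward prediction stays closest to the 100-130 mg/dl band.
# The plant below is a toy stand-in, NOT the Lombarte model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def plant(bgl, insulin):                  # hypothetical glucose dynamics
    return bgl + 0.05 * (180 - bgl) - 8.0 * insulin + rng.normal(0, 1)

def features(bgl, u):
    return [bgl / 100.0, u]               # crude scaling helps the small NN

X, y, bgl = [], [], 250.0                 # excite the plant with random doses
for _ in range(2000):
    u = rng.uniform(0, 2)
    nxt = plant(bgl, u)
    X.append(features(bgl, u)); y.append(nxt / 100.0)
    bgl = float(np.clip(nxt, 60, 400))
nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
nn.fit(np.array(X), np.array(y))

def mpc_step(bgl, horizon=6, target=115.0):
    """Pick the constant dose minimizing predicted deviation from target."""
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(0, 2, 21):       # candidate insulin rates
        g, cost = bgl, 0.0
        for _ in range(horizon):          # roll the NN model forward
            g = 100.0 * nn.predict([features(g, u)])[0]
            cost += (g - target) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

bgl = 250.0                               # closed-loop run on the toy plant
for _ in range(30):
    bgl = float(np.clip(plant(bgl, mpc_step(bgl)), 60, 400))
print(f"BGL after 30 steps: {bgl:.0f} mg/dl")
```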
|
179 |
Integrated product and process design for resin transfer molded (RTM) parts
Unknown Date (links)
Composite materials have gained increasing attention in the past several years due to their superior mechanical properties and improved strength-to-weight ratio over traditional materials. With this focus on composite materials, a concentration on resin transfer molding (RTM) has followed. RTM is an attractive processing method due to its potential for providing consistently superior parts at a lower cost than other manufacturing techniques.

The resin transfer molding process involves a large number of variables that are linked to the design of the component, the selection and formulation of the constituent materials, such as resin and fiber, and the design of the mold and molding process. These variables are strongly related to the system performance, for example mold filling time, and RTM product quality. The need for understanding the impact of RTM product and process design variables on part quality and process performance is crucial. This is accomplished through an integrated product and process design (IPPD) approach. Genetic algorithms (GA), in conjunction with the cascade correlation neural network architecture (CCA-NN), are utilized for the following purposes: (1) to establish a working model that predicts performance and quality measures in RTM given a set of product and process design parameters, and (2) to determine the optimal settings of the product and process design parameters to enhance the RTM process and improve part quality.

Optimum design of RTM product and process design variables will result in high quality parts and enhance the efficiency and robustness of the RTM process. An intelligent, adaptive process control procedure yields consistently high quality parts in the presence of interactions and nonlinearities among RTM parameters.

The proposed research outlines two major tasks, (1) the integration of modeling and simulation technologies that support an integrated product and process design (IPPD) approach, and (2) the intelligent, adaptive control of the RTM process. The goal of the proposed research is to achieve optimum design of RTM parts through the development of a robust process and an intelligent, adaptive process control procedure. This is the vision underlying this research.

Source: Dissertation Abstracts International, Volume: 56-10, Section: B, page: 5703.
Major Professor: Ben Wang.
Thesis (Ph.D.)--The Florida State University, 1995.
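The GA-over-surrogate idea in this abstract can be illustrated with a short, hedged sketch: the quality function below is a hypothetical surrogate standing in for the trained CCA-NN, and the parameter names (injection pressure, resin temperature) are illustrative assumptions, not the dissertation's actual variables.

```python
# Hedged GA sketch: a hypothetical quality surrogate (standing in for the
# trained cascade-correlation network) scores a (pressure, temperature)
# setting, and a small genetic algorithm searches for the best setting.
import numpy as np

rng = np.random.default_rng(0)

def quality(x):                           # surrogate for the CCA-NN model
    p, t = x                              # hypothetical pressure, temperature
    return -((p - 3.2) ** 2 + 0.5 * (t - 80.0) ** 2 / 100.0)

lo, hi = np.array([1.0, 60.0]), np.array([6.0, 120.0])
pop = rng.uniform(lo, hi, size=(30, 2))   # initial random designs

for gen in range(60):
    scores = np.array([quality(x) for x in pop])
    parents = pop[np.argsort(scores)[-10:]]       # keep the 10 fittest
    kids = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10, size=2)]  # pick two parents
        w = rng.uniform(size=2)
        child = w * a + (1 - w) * b               # blend crossover
        child += rng.normal(0, 0.05, size=2) * (hi - lo)  # mutation
        kids.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([quality(x) for x in pop])]
print(f"best setting: pressure={best[0]:.2f}, temperature={best[1]:.1f}")
```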
|
180 |
Integration of machinery condition monitoring and reliability modeling: A prelude to predictive maintenance
Unknown Date (links)
Condition Based Maintenance (CBM) is a philosophical approach that uses the most cost-effective methodology for the performance of machinery maintenance. The idea is to ensure maximum operational life and minimum downtime of machinery within predefined cost, safety, and availability constraints. When machinery life extension is a major consideration, the CBM approach usually involves predictive maintenance. In this research a two-level approach for predictive maintenance has been defined: (1) to develop a Condition Monitoring and Diagnostic System (CMDS) for machine fault detection and maintenance suggestion, and (2) to develop a machine performance estimation model for machine reliability modeling and failure rate analysis. The objective is to provide a new and practicable solution for condition-based predictive maintenance.

In this research, artificial neural network (ANN) technologies and analytical models have been investigated and incorporated to increase the effectiveness and efficiency of CMDS. Several advanced vibration trending methods have been studied and used to quantify machine operating conditions. An on-line, multi-channel condition monitoring procedure has been developed and coded. The major technique used for fault diagnostics is a modified ARTMAP neural network. In the second part of this research a new method of obtaining maintenance information has been developed. A Cerebellar Model Articulation Controller (CMAC) neural network has been employed to estimate and quantify machine performance. By combining reliability theory with a real-time, on-line CMAC Performance Estimation Model (CMAC-PEM), machine reliability statistics such as failure rate and mean time between failures (MTBF) can be calculated. CMAC-PEM may provide a practicable solution for condition-based predictive maintenance since it estimates machine reliability measures on-line. In addition, the Weibull Proportional Hazards Model (WPHM) has been implemented as a proven tool to verify CMAC-PEM results. Real-world data obtained from a bearing fault experiment and a bearing deterioration process were used to test the proposed methodologies.

Essentially, this research presents an innovative method to synthesize low-level information, such as vibration signals, with high-level information, such as reliability statistics, to form a rigorous theoretical base for condition-based predictive maintenance.

Source: Dissertation Abstracts International, Volume: 56-04, Section: B, page: 2251.
Major Professor: Hsu-Pin (Ben) Wang.
Thesis (Ph.D.)--The Florida State University, 1995.
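A hedged sketch of the reliability-modeling half of this approach: hypothetical bearing failure times are fit to a Weibull distribution, from which the MTBF and failure rate are computed. This offline fit merely illustrates the statistics involved; it is not the on-line CMAC-PEM estimator.

```python
# Hedged sketch: fit a Weibull distribution to hypothetical bearing
# failure times and report the MTBF and instantaneous failure rate that
# feed a predictive-maintenance decision.
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

failure_times = np.array([310., 420., 515., 610., 640., 720., 810., 905.])

k, loc, eta = weibull_min.fit(failure_times, floc=0)   # shape, loc, scale
mtbf = eta * gamma(1 + 1 / k)                          # Weibull mean life

t = 600.0                                              # current age, hours
hazard = (k / eta) * (t / eta) ** (k - 1)              # failure rate h(t)
print(f"shape k={k:.2f}, scale eta={eta:.0f} h, MTBF={mtbf:.0f} h")
print(f"failure rate at {t:.0f} h: {hazard:.2e} per hour")
```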
|