  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Using lean principles and simulation to enhance the effectiveness of a failure analysis laboratory in a manufacturing environment

Tashtoush, Tariq Husni. January 2009 (has links)
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009. / Includes bibliographical references.
12

The application of constraint management to a simulated manufacturing environment

Van der Merwe, Karl Robert January 2005 (has links)
South Africa endorsed a world trade accord ratified in Geneva on December 13, 1993. To promote world economic growth, the General Agreement on Tariffs and Trade (GATT) aimed to slash duties on 8000 categories of manufactured goods. Tariff barriers have declined significantly and are now approaching trivial levels (Hill, 1999, p163). Unfortunately, South Africa is ranked near the bottom of the World Competitiveness Report (Cheales, 1995, p8). Increased foreign competition has caused many South African companies to search for techniques that will improve their overall performance. Unless these techniques can be identified and implemented timeously, some companies may fail to remain competitive. This research addresses the lack of awareness and utilisation, in the Eastern Cape (SA), of two techniques used successfully in the USA and elsewhere in developed countries, namely Constraint Management and simulation. The overall objective was to develop a method of convincing industry management of the benefits of the two techniques. The approach adopted was to use simulation to demonstrate the value of Constraint Management. To achieve this objective a comprehensive literature survey was performed to determine the logic of each technique and the associated benefits. The next step was to determine the levels of awareness among industry practitioners and managers. Managers, engineers and academics were requested to complete questionnaires in order to determine awareness and utilisation of each technique, as well as factors that prevented the application of both. The simulation modeling process was examined in order to verify the logic of simulation, and a model of a manufacturing system was developed. Constraint Management principles were then applied to the model in a series of experiments. This process was then developed into a manual that could be used to address the lack of awareness and utilisation of both Constraint Management and simulation.
The manual was tested on a group of BTech students and industry practitioners in order to establish whether its use would be effective in raising awareness, understanding and utilisation. The outcome was positive and it was established that this technique was effective.
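The core Constraint Management idea the dissertation demonstrates through simulation — that throughput of a serial line is governed by its constraint, so elevating the constraint (and nothing else) raises output — can be illustrated with a minimal deterministic sketch. This is not the author's model; the station capacities below are invented for illustration.

```python
def simulate_line(capacities, periods=1000):
    """Simulate a serial production line: each station processes up to its
    per-period capacity from the buffer in front of it. Returns average
    finished units per period, which settles at the constraint's capacity."""
    buffers = [0] * len(capacities)   # work-in-process ahead of each station
    finished = 0
    for _ in range(periods):
        buffers[0] += 10              # raw material released each period
        for i, cap in enumerate(capacities):
            done = min(buffers[i], cap)
            buffers[i] -= done
            if i + 1 < len(capacities):
                buffers[i + 1] += done
            else:
                finished += done
    return finished / periods

# Throughput tracks the constraint (capacity 3), not the average capacity;
# elevating only the constraint station lifts the whole line.
base = simulate_line([8, 3, 7])       # constrained line
elevated = simulate_line([8, 5, 7])   # "elevate the constraint"
```

A run of this sketch shows `elevated` exceeding `base` even though only one station changed, which is the point the manual makes to industry practitioners.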
13

Freeway Control Via Ramp Metering: Development of a Basic Building Block for an On-Ramp, Discrete, Stochastic, Mesoscopic, Simulation Model within a Contextual Systems Approach

Alkadri, Mohamed Yaser 01 January 1991 (has links)
One of the most effective measures of congestion control on freeways has been ramp metering, where vehicle entry to the freeway is regulated by traffic signals (meters). Meters are run with calibrated influx rates to prevent highway saturation. However, recent observations of some metering sites in San Diego, CA indicate that metering, during peak hour demand, is helping freeway flow while sometimes creating considerable traffic back-ups on local streets, transferring congestion problems from the freeway to intersections. Metering problems stem largely from the difficulty of designing an integrated, dynamic metering scheme that responds not only to changing freeway conditions but also to fluctuating demand throughout the ramp network; a scheme whose objective is to maintain adequate freeway throughput as well as minimize disproportionate ramp delays and queue overspills onto surface streets. Simulation modeling is a versatile, convenient, relatively inexpensive and safe systems analysis tool for evaluating alternative strategies to achieve the above objective. The objective of this research was to establish a basic building block for a discrete system simulation model, ONRAMP, based on a stochastic, mesoscopic, queueing approach. ONRAMP is for modeling entrance ramp geometry, vehicular generation, platooning and arrivals, queueing activities, meters and metering rates. The architecture of ONRAMP's molecular unit is designed in a fashion so that it can be, with some model calibration, duplicated for a number of ramps and, if necessary, integrated into some other larger freeway network models. The SLAM II simulation language is used for computer implementation. ONRAMP has been developed and partly validated using data from eight ramps at Interstate-8 in San Diego. From a systems perspective, simulation will be short-sighted and problem analysis incomplete unless the other non-technical metering problems are explored and considered.
These problems include the impacts of signalizing entrance ramps on the vitality of adjacent intersections, land use and development, "fair" geographic distribution of meters and metering rates throughout the freeway corridor, public acceptance and enforcement, and the role and influence of organizations in charge of decision making in this regard. Therefore, an outline of a contextual systems approach for problem analysis is suggested. Benefits and problems of freeway control via ramp metering, both operational short-term and strategic long-term, are discussed in two dimensions: global (freeway) and local (intersection). The results of a pilot study which includes interviews with field experts and law enforcement officials and a small motorist survey are presented.
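The discrete, stochastic, mesoscopic queueing approach described above can be sketched in a few lines. This is a Python analogue, not the SLAM II ONRAMP model itself: Poisson vehicle arrivals queue at a meter that releases one vehicle per fixed headway, and the arrival rate and headways below are invented for illustration.

```python
import random

def simulate_ramp(arrival_rate, meter_headway, horizon=3600.0, seed=1):
    """Next-event simulation of one metered on-ramp.
    arrival_rate: vehicles/second (Poisson); meter_headway: seconds between
    green indications. Returns (vehicles released, maximum queue length)."""
    rng = random.Random(seed)
    t = 0.0
    next_arrival = rng.expovariate(arrival_rate)
    next_release = 0.0
    queue, max_queue, released = 0, 0, 0
    while t < horizon:
        if next_arrival <= next_release:      # next event: a vehicle arrives
            t = next_arrival
            queue += 1
            max_queue = max(max_queue, queue)
            next_arrival = t + rng.expovariate(arrival_rate)
        else:                                  # next event: meter turns green
            t = next_release
            if queue > 0:
                queue -= 1
                released += 1
            next_release = t + meter_headway
    return released, max_queue

# A tighter metering rate protects the freeway but grows the ramp queue --
# the surface-street spillback problem the abstract describes.
_, q_loose = simulate_ramp(arrival_rate=0.2, meter_headway=4.0)
_, q_tight = simulate_ramp(arrival_rate=0.2, meter_headway=10.0)
```

Running the two scenarios shows `q_tight` far exceeding `q_loose`: when the release rate drops below the arrival rate, the queue grows without bound, which is exactly the congestion transfer from freeway to intersection noted above.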
14

A finite element model for stress analysis of underground openings /

Chau, Kam Shing Patrick January 1988 (has links)
No description available.
15

Development of an interactive simulation game for ISE-5204 Manufacturing Systems Engineering

Ketelhohn, Niels 24 March 2009 (has links)
The purpose of this research was to take the first steps in the creation of a simulation game, tailored for the needs of ISE-5204 Manufacturing Systems Engineering, that will provide students with the opportunity of applying their knowledge in realistic situations. The needs of ISE-5204 were established based on the course material and on interviews with appropriate faculty members. A game review showed that there is not a game available which combines all of the characteristics desirable to fit these needs. Therefore a new simulation game is needed for use in the course. This research developed a simulation game framework, unique in driving a strategic business type game by low level production decisions. The framework consists of three components: conceptual, organizational and structural frameworks. The conceptual framework is based on a competitive game with a multiproduct environment, with operational decisions being the driving force. The organizational framework specifies that periodic decisions are made by competing student companies and input into the game for production simulation and generation of status reports. The structural framework specifies that a discrete, next event simulation model of shop floor operation is used to model the production system and create output reports. A prototype model demonstrated the feasibility of running a high level strategic game by low level production modeling. Three competing companies were simulated for three production periods. Each company made decisions that were representative of a different strategy. Simulation outputs were indicative of the behavior characterized by the company decisions and inputs. / Master of Science
16

A computer simulation for the design of percussive hydraulic drills

Hunt, Clive Wilfred 11 September 2013 (has links)
Thesis (M.Sc.(Engineering))--University of the Witwatersrand, 1990.
17

The simulated effect of the lightning first short stroke current on a multi-layered cylindrical model of the human leg

Lee, Yuan-chun Harry January 2015 (has links)
A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, 2015 / This research investigates the effects of the frequency components of the lightning First Short Stroke (FSS) on the current pathway through human tissues using frequency domain analysis. A Double Exponential Function (DEF) is developed to model the FSS with frequency components in the range 10 Hz to 100 kHz. Human tissues are simulated using Finite Element Analysis (FEA) in COMSOL with two types of models: a Single Layer Cylindrical Model (SLCM) and a Multi-layered Cylindrical Model (MLCM). The SLCM models 54 human tissues independently and the MLCM models the human leg with five tissue layers: bone marrow, cortical bone, muscle, blood and fat. Three aspects are analysed: current density, complex impedance and power dissipation. From the SLCM results, aqueous tissues have the lowest impedances and tissue heat dissipation is proportional to tissue impedance. Results from the MLCM show that 85% of the FSS current flows through muscle, 11% flows through blood, 3.5% through fat and the rest through cortical bone and bone marrow. From the results, frequency-dependent equivalent circuit models consisting of resistors and capacitors connected in series are proposed. The simulation results are correlated with three main clinical symptoms of lightning injuries: neurological, cardiovascular and external burns. The results of this work are applicable to the analysis of High Voltage (HV) injuries at power frequencies. / MT2017
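The Double Exponential Function mentioned above has the standard form i(t) = I0·(e^(-αt) − e^(-βt)), whose peak time follows analytically from di/dt = 0. A minimal sketch, with illustrative parameter values that are not the ones fitted in the dissertation:

```python
import math

# Illustrative DEF parameters (assumptions, not the thesis values):
# I0 scales a ~30 kA peak; alpha and beta set decay and rise times.
I0, ALPHA, BETA = 30e3, 11354.0, 647265.0

def def_current(t):
    """Double Exponential Function i(t) = I0 * (exp(-alpha*t) - exp(-beta*t))."""
    return I0 * (math.exp(-ALPHA * t) - math.exp(-BETA * t))

# Setting di/dt = 0 gives the closed-form peak time:
t_peak = math.log(BETA / ALPHA) / (BETA - ALPHA)
```

With these values the waveform starts at zero, rises quickly to its single peak near `t_peak` (a few microseconds), and decays slowly, which is the unidirectional impulse shape used for frequency-domain analysis of the FSS.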
18

Post-silicon Functional Validation with Virtual Prototypes

Cong, Kai 03 June 2015 (has links)
Post-silicon validation has become a critical stage in the system-on-chip (SoC) development cycle, driven by increasing design complexity, higher level of integration and decreasing time-to-market. According to recent reports, post-silicon validation effort comprises more than 50% of the overall development effort of a 65 nm SoC. Though post-silicon validation covers many aspects ranging from electronic properties of hardware to performance and power consumption of whole systems, a central task remains validating functional correctness of both hardware and its integration with software. There are several key challenges to achieving accelerated and low-cost post-silicon functional validation. First, there is only limited silicon observability and controllability; second, there is no good test coverage estimation over a silicon device; third, it is difficult to generate good post-silicon tests before a silicon device is available; fourth, there is no effective software robustness testing approach to ensure the quality of hardware/software integration. We propose a systematic approach to accelerating post-silicon functional validation with virtual prototypes. Post-silicon test coverage is estimated in the pre-silicon stage by evaluating the test cases on the virtual prototypes. Such analysis is first conducted on the initial test suite assembled by the user and subsequently on the expanded test suite which includes test cases that are automatically generated. Based on the coverage statistics of the initial test suite on the virtual prototypes, test cases are automatically generated to improve the test coverage. In the post-silicon stage, our approach supports coverage evaluation of test cases on silicon devices to ensure fidelity of early coverage evaluation. The generated test cases are issued to silicon devices to detect inconsistencies between virtual prototypes and silicon devices using conformance checking.
We further extend the test case generation framework to generate and inject fault scenarios with virtual prototypes for driver robustness testing. Besides virtual prototype-based fault injection, an automatic driver fault injection approach is developed to support runtime fault generation and injection for driver robustness testing. Since virtual prototypes enable early driver development, our automatic driver fault injection approach can be applied to driver testing in both pre-silicon and post-silicon stages. For preliminary evaluation, we have applied our coverage evaluation and test generation to several network adapters and their virtual prototypes. We have conducted coverage analysis for a suite of common tests on both the virtual prototypes and silicon devices. The results show that our approach can estimate the test coverage with high fidelity. Based on the coverage estimation, we have employed our automatic test generation approach to generate additional tests. When the generated test cases were issued to both virtual prototypes and silicon devices, we observed significant coverage improvement. We also detected 20 inconsistencies between virtual prototypes and silicon devices, each of which reveals a virtual prototype or silicon device defect. After we applied the virtual prototype-based fault injection approach to virtual prototypes for three widely-used network adapters, we generated and injected thousands of fault scenarios and found 2 driver bugs. For automatic driver fault injection, we have applied our approach to 12 widely used drivers with either virtual prototypes or silicon devices. After testing all these drivers, we found 28 distinct bugs.
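The fault-injection idea above can be pictured with a toy sketch: a wrapper around a device model corrupts a fraction of register reads so a driver's error paths get exercised. This is a hypothetical illustration, not the dissertation's virtual-prototype framework; the register map and fault model are invented.

```python
import random

class FaultInjectingDevice:
    """Toy virtual-prototype fault injector: wraps a register file and
    corrupts a configurable fraction of reads to stress driver error paths."""

    def __init__(self, registers, fault_rate=0.1, seed=0):
        self.regs = dict(registers)       # addr -> value (the "golden" model)
        self.rng = random.Random(seed)
        self.fault_rate = fault_rate
        self.injected = 0                 # count of injected fault scenarios

    def read(self, addr):
        value = self.regs[addr]
        if self.rng.random() < self.fault_rate:
            self.injected += 1
            return value ^ 0xFFFFFFFF     # inject a fault: flip all 32 bits
        return value

# A driver under test would read through the wrapper; most reads are clean,
# but some return corrupted values it must handle gracefully.
dev = FaultInjectingDevice({0x0: 0xDEADBEEF, 0x4: 0x0}, fault_rate=0.5)
samples = [dev.read(0x0) for _ in range(100)]
```

In a real setup the injection points, fault models (stuck bits, timeouts, spurious interrupts) and trigger conditions would be generated automatically, as the abstract describes; the wrapper above only shows the mechanism.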
19

Fast and robust phase behavior modeling for compositional reservoir simulation

Li, Yinghui, 1976- 29 August 2008 (has links)
A significant percentage of computational time in compositional simulations is spent performing flash calculations to determine the equilibrium compositions of hydrocarbon phases in situ. Flash calculations must be done at each time step for each grid block; thus billions of such calculations are possible. It would be very important to reduce the computational time of flash calculations significantly so that more grid blocks or components may be used. In this dissertation, three different methods are developed that yield fast, robust and accurate phase behavior calculations useful for compositional simulation and other applications. The first approach is to express the mixing rule in equations-of-state (EOS) so that a flash calculation is at most a function of six variables, often referred to as reduced parameters, regardless of the number of pseudocomponents. This is done without sacrificing accuracy and with improved robustness compared with the conventional method. This approach is extended for flash calculations with three or more phases. The reduced method is also derived for use in stability analysis, yielding significant speedup. The second approach improves flash calculations when K-values are assumed constant. We developed a new continuous objective function with improved linearity and specified a small window in which the equilibrium compositions must lie. The calculation speed and robustness of the constant K-value flash are significantly improved. This new approach replaces the Rachford-Rice procedure that is embedded in the conventional flash calculations. In the last approach, a limited compositional model for ternary systems is developed using a novel transformation method. In this method, all tie lines in ternary systems are first transformed to a new compositional space where all tie lines are made parallel. The binodal curves in the transformed space are regressed with any accurate function. 
Equilibrium phase behavior calculations are then done in this transformed space non-iteratively. The compositions in the transformed space are translated back to the actual compositional space. The new method is very fast and robust because no iteration is required and thus always converges even at the critical point because it is a direct method. The implementation of some of these approaches into compositional simulators, for example UTCOMP or GPAS, shows that they are faster than conventional flash calculations, without sacrificing simulation accuracy. For example, the implementation of the transformation method into UTCOMP shows that the new method is more than ten times faster than conventional flash calculations.
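The conventional constant-K flash that the second approach improves upon solves the Rachford-Rice equation, sum_i z_i(K_i − 1)/(1 + V(K_i − 1)) = 0, for the vapor fraction V. A minimal sketch of that conventional procedure (bisection between the asymptotes; the feed and K-values are invented two-phase examples, not data from the dissertation):

```python
def rachford_rice(z, K, iters=200):
    """Solve sum_i z_i*(K_i - 1)/(1 + V*(K_i - 1)) = 0 for vapor fraction V
    by bisection, assuming constant K-values with max(K) > 1 > min(K).
    The objective is monotonically decreasing between its two poles."""
    def f(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo = 1.0 / (1.0 - max(K)) + 1e-9   # pole set by most volatile component
    hi = 1.0 / (1.0 - min(K)) - 1e-9   # pole set by least volatile component
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Equimolar binary with K = (2, 0.5): the analytic solution is V = 0.5.
V = rachford_rice([0.5, 0.5], [2.0, 0.5])
# Liquid compositions then follow from x_i = z_i / (1 + V*(K_i - 1)).
x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip([0.5, 0.5], [2.0, 0.5])]
```

Because this root-finding loop runs inside every grid block at every time step, replacing it with the improved objective function (or with the non-iterative transformed-space method) is what yields the speedups the abstract reports.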
20

Diesel engine performance modelling using neural networks

Rawlins, Mark Steve January 2005 (has links)
Thesis (D.Tech.: Mechanical Engineering)--Dept. of Mechanical Engineering, Durban Institute of Technology, 2005. xxi, 265 leaves / The aim of this study is to develop, using neural networks, a model to aid the performance monitoring of operational diesel engines in industrial settings. Feed-forward and modular neural network-based models are created for the prediction of the specific fuel consumption on any normally aspirated direct injection four-stroke diesel engine. The predictive capability of each model is compared to that of a published quadratic method. Since engine performance maps are difficult and time consuming to develop, there is a general scarcity of these maps, thereby limiting the effectiveness of any engine monitoring program that aims to manage the fuel consumption of an operational engine. Current methods applied for engine consumption prediction are either too complex or fail to account for specific engine characteristics that could make engine fuel consumption monitoring simple and general in application. This study addresses these issues by providing a neural network-based predictive model that requires two measured operational parameters, the engine speed and torque, and five known engine parameters: rated power, rated and minimum specific fuel consumption, bore and stroke. The neural networks are trained using the performance maps of eight commercially available diesel engines, with one entire map being held out of sample for assessment of model generalisation performance and application validation. The model inputs are defined using the domain expertise approach to neural network input specification. This approach requires a thorough review of the operational and design parameters affecting engine fuel consumption performance and the development of specific parameters that both scale and normalize engine performance for comparative purposes.
Network architecture and learning rate parameters are optimized using a genetic algorithm-based global search method together with a locally adaptive learning algorithm for weight optimization. Network training errors are statistically verified and the neural network test responses are validation tested using both white and black box validation principles. The validation tests are constructed to enable assessment of the confidence that can be associated with the model for its intended purpose. Comparison of the modular network with the feed-forward network indicates that they learn the underlying function differently, with the modular network displaying improved generalisation on the test data set. Both networks demonstrate improved predictive performance over the published quadratic method. The modular network is the only model accepted as verified and validated for application implementation. The significance of this work is that fuel consumption monitoring can be effectively applied to operational diesel engines using a neural network-based model, the consequence of which is improved long term energy efficiency. Further, a methodology is demonstrated for the development and validation testing of modular neural networks for diesel engine performance prediction.
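The basic ingredient of the models above, a feed-forward network mapping normalized speed and torque to specific fuel consumption, can be sketched with a tiny NumPy network. This is only an illustration of the network type: the thesis's models use more inputs, modular architectures, and genetic-algorithm-tuned training, and the bowl-shaped toy data below stands in for the real engine maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "engine map": normalized speed and torque in, a convex bowl out,
# loosely mimicking a specific-fuel-consumption surface (invented data).
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = ((X[:, 0] - 0.6) ** 2 + (X[:, 1] - 0.7) ** 2)[:, None]

# One hidden tanh layer trained by plain batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    err = H @ W2 + b2 - y               # prediction error
    losses.append(float(np.mean(err ** 2)))
    dH = (err @ W2.T) * (1.0 - H ** 2)  # backprop through tanh
    W2 -= lr * (H.T @ err) / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ dH) / len(X); b1 -= lr * dH.mean(0)
```

Training drives the mean squared error well below that of a constant predictor, showing that even this minimal architecture can learn a smooth consumption surface from two operating inputs; the thesis layers genetic search, locally adaptive learning and modular decomposition on top of this idea.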
