61 |
A hybrid reconfigurable computer integrated manufacturing cell for mass customisation. Hassan, N. January 2011
Mass producing custom products requires an innovative type of manufacturing environment. Present manufacturing environments do not possess the flexibility to generate mass-produced custom products. A manufacturer's rapid response in producing these custom products in relation to demand yields several benefits from both a customer and a financial perspective, yet current reconfigurable manufacturing environments are neither financially feasible nor viable to implement. To provide a solution for the production of mass-customised products, research can facilitate the development of a distinctive hybrid manufacturing cell composed of characteristics inherent in existing manufacturing paradigms.
Research and development of this distinctive hybrid manufacturing cell forms an environment in which Computer Integrated Manufacturing (CIM) cells operate within a Reconfigurable Manufacturing environment. The development of this Hybrid Reconfigurable Computer Integrated Manufacturing (HRCIM) cell resulted in functionalities that enabled the production of mass-customised products. The manufacturing characteristics of the HRCIM cell were composed of key Reconfigurable Manufacturing System (RMS) features and CIM capabilities.
This project required hardware to be used in developing an integrated HRCIM cell. The cell consisted of storage systems, material handling equipment and processing stations. Specific material handling equipment was enhanced in its functionality by incorporating RMS characteristics into its existing structure. The hardware behaviour was coordinated by software, which facilitated the autonomous behaviour of the HRCIM cell derived from the mechatronic approach. The software was composed of HRCIM events defined by its unique programming language. Highlighted software functionalities included prioritisation scheduling driven by customer order input. Performance data, extracted from each type of equipment, were used to parameterise a simulated HRCIM cell. During operation, the cell was frequently introduced to an irregular flow of different product geometries with different processing requirements; this irregularity represented mass customisation. The simulated HRCIM cell provided detailed manufacturing results, the most significant of which were storage times, queueing times and cycle times. / Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2011.
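As a hedged illustration of the kind of prioritisation scheduling driven by customer order input that this abstract mentions, the sketch below uses a priority queue to dispatch customised orders to a processing station; the priority rule, order attributes and names are assumptions for illustration, not the HRCIM software's actual scheduling logic.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Order:
    priority: int                       # lower value dispatches first
    order_id: str = field(compare=False)
    geometry: str = field(compare=False)
    due_in_min: int = field(compare=False)

def dispatch(orders):
    """Release customised orders to the cell in priority sequence."""
    queue = list(orders)
    heapq.heapify(queue)
    while queue:
        job = heapq.heappop(queue)
        print(f"dispatch {job.order_id}: geometry={job.geometry}, due in {job.due_in_min} min")

# Illustrative customer orders with assumed priorities (e.g. derived from due date)
dispatch([
    Order(priority=2, order_id="C-102", geometry="cylindrical", due_in_min=90),
    Order(priority=1, order_id="C-101", geometry="rectangular", due_in_min=45),
    Order(priority=3, order_id="C-103", geometry="hexagonal", due_in_min=180),
])
```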
|
62 |
NUMERICAL ANALYSIS OF DROPLET FORMATION AND TRANSPORT OF A HIGHLY VISCOUS LIQUID. Wang, Peiding. 01 January 2014
The drop-on-demand (DOD) inkjet print-head holds a major share of the market due to the simplicity and feasibility of a miniature system. The efficiency of droplet generation from a DOD print-head is a result of several factors, including viscosity, surface tension, nozzle size, density, and the driving waveform (wave shape, frequency, and amplitude). Three dimensionless groups play key roles in the formation and behavior of liquid jets and drops: the Reynolds number, the Weber number and the Ohnesorge number. These dimensionless groups provide some bounds on the "printability" of the liquid. An adequate understanding of these parameters is essential to improve droplet quality and to provide guidelines for process optimization. This thesis research describes the application of computational fluid dynamics (CFD) to simulate the creation and evolution of droplet generation and transport for a highly viscous Newtonian fluid. The flow field is governed by the unsteady Navier-Stokes equations, and the Volume of Fluid (VOF) model is used to solve this multi-phase (liquid-gas) problem.
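As a hedged illustration of how these dimensionless groups bound printability, the sketch below computes Re, We, Oh and the inverse Ohnesorge number Z = 1/Oh for assumed fluid properties and jetting conditions; the property values and the commonly quoted printable window 1 < Z < 10 are assumptions for illustration, not values from the thesis.

```python
import math

def jetting_numbers(rho, mu, sigma, v, d):
    """Dimensionless groups for a liquid jetted from a nozzle.

    rho   -- density [kg/m^3]
    mu    -- dynamic viscosity [Pa.s]
    sigma -- surface tension [N/m]
    v     -- jet velocity [m/s]
    d     -- nozzle diameter [m]
    """
    Re = rho * v * d / mu                 # inertia / viscous forces
    We = rho * v**2 * d / sigma           # inertia / surface tension
    Oh = math.sqrt(We) / Re               # = mu / sqrt(rho * sigma * d)
    Z = 1.0 / Oh                          # inverse Ohnesorge ("printability") number
    return Re, We, Oh, Z

# Illustrative values for a viscous Newtonian ink (assumed, not from the thesis)
Re, We, Oh, Z = jetting_numbers(rho=1100.0, mu=0.02, sigma=0.035, v=6.0, d=50e-6)
print(f"Re = {Re:.1f}, We = {We:.1f}, Oh = {Oh:.3f}, Z = {Z:.2f}")
print("within the often-quoted printable window 1 < Z < 10:", 1.0 < Z < 10.0)
```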
|
63 |
Vapour-liquid equilibrium measurements at moderate pressures using a semi-automatic glass recirculating still. Lilwanth, Hitesh. 15 September 2014
Vapour-liquid equilibrium (VLE) data of high accuracy and reliability are essential in the development and optimization of separation and chemical processes. This study addresses the growing demand for precise VLE data at low to moderate pressures through the development of a semi-automated, computer-aided dynamic glass still. The modified dynamic glass still of Joseph et al. (2001) was employed to achieve precise measurement of phase equilibrium data over a pressure range of 0 to 500 kPa.
The study involved the assembly and commissioning of a new moderate-pressure dynamic still and various peripheral apparatus. The digital measurement and control systems were developed in the object-oriented graphical programming language LabVIEW. The digital proportional controller with integral action developed by Eitelberg (2009) was adapted for the control of pressure and temperature. Pressure and temperature measurements were obtained using a WIKA TXM pressure transducer and a Pt-100 temperature sensor respectively.
The calculated combined standard uncertainties in pressure measurements were ±0.005 kPa, ±0.013 kPa and ±0.15 kPa for the 0-10 kPa, 10-100 kPa and 100-500 kPa pressure ranges respectively. A combined standard uncertainty in temperature of ±0.02 K was calculated.
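As a minimal illustration of how such combined standard uncertainties are typically formed (the usual root-sum-of-squares combination of independent contributions), the sketch below combines assumed calibration, repeatability and resolution components for a low-range pressure reading; the component values are placeholders, not the uncertainty budget of this work.

```python
import math

def combined_standard_uncertainty(components):
    """Root-sum-of-squares combination of independent standard uncertainties."""
    return math.sqrt(sum(u**2 for u in components))

# Assumed contributions for a low-range pressure measurement [kPa] (illustrative only)
u_calibration = 0.003    # transducer calibration against a reference standard
u_repeatability = 0.003  # scatter of repeated readings at steady state
u_resolution = 0.002     # display/ADC resolution contribution
u_c = combined_standard_uncertainty([u_calibration, u_repeatability, u_resolution])
print(f"combined standard uncertainty u_c = {u_c:.4f} kPa")
```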
The published data of Joseph et al. (2001) for the cyclohexane (1) + ethanol (2) system at 40 kPa and of Gmehling et al. (1995) for the 1-hexene (1) + N-methyl-2-pyrrolidone (NMP) (2) system at 363.15 K served as test systems. NMP is one of the most commonly used solvents in the chemical industry due to properties such as its low volatility and its thermal and chemical stability. Isothermal measurements of the 1-hexene (1) + NMP (2) system were therefore conducted at 373.15 K, constituting new VLE data. A further system, 1-propanol (1) + 2-butanol (2), was measured isothermally at 393.15 K.
The measured data were regressed using the combined and direct methods. For the direct method, the Peng-Robinson (1976) and Soave-Redlich-Kwong (1972) equations of state were combined with the Wong-Sandler (1992) mixing rules in conjunction with a Gibbs excess energy model. The Wilson (1964) and NRTL (Renon and Prausnitz, 1968) activity coefficient models were chosen to describe the liquid-phase non-idealities, while the vapour-phase non-ideality was described with the virial equation of state using the Hayden and O'Connell (1975) correlation. Thermodynamic consistency of the measured data was confirmed using the point test of Van Ness et al. (1973) and the direct test of Van Ness (1995). / M.Sc.Eng. University of KwaZulu-Natal, Durban 2014.
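For orientation, the sketch below shows a bubble-pressure calculation for a binary system using the Wilson activity-coefficient model with an ideal vapour phase, i.e. a simplification of the combined (gamma-phi) approach that omits the virial vapour-phase correction used in the thesis; the Wilson parameters and vapour pressures are placeholders, not regressed values from this work.

```python
import math

def wilson_gammas(x1, L12, L21):
    """Liquid-phase activity coefficients from the binary Wilson model."""
    x2 = 1.0 - x1
    term = L12 / (x1 + L12 * x2) - L21 / (L21 * x1 + x2)
    ln_g1 = -math.log(x1 + L12 * x2) + x2 * term
    ln_g2 = -math.log(x2 + L21 * x1) - x1 * term
    return math.exp(ln_g1), math.exp(ln_g2)

def bubble_pressure(x1, p1_sat, p2_sat, L12, L21):
    """Modified Raoult's law with an ideal vapour phase (simplified gamma-phi)."""
    g1, g2 = wilson_gammas(x1, L12, L21)
    x2 = 1.0 - x1
    P = x1 * g1 * p1_sat + x2 * g2 * p2_sat
    y1 = x1 * g1 * p1_sat / P
    return P, y1

# Placeholder Wilson parameters and pure-component vapour pressures [kPa]
P, y1 = bubble_pressure(x1=0.4, p1_sat=24.6, p2_sat=17.9, L12=0.18, L21=0.45)
print(f"bubble pressure = {P:.2f} kPa, vapour composition y1 = {y1:.3f}")
```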
|
64 |
Distributed control synthesis for manufacturing systems using customers’ decision behaviour for mass customisation. Walker, Anthony John. January 2013
The mass customisation manufacturing (MCM) paradigm has created a problem in manufacturing control implementation, as each individual customer has the potential to disrupt the operations of production. The aim of this study was to characterise the manufacturing effects of customers’ decisions in product configuration, in order to research steady state control requirements and work-in-process distributions for effective MCM operations. A research method involving both analytic and empirical reasoning was used in characterising the distributed control environment of manufacturing systems involved in MCM.
Sequences of job arrivals into each manufacturing system, due to customers’ decisions in product configuration, were analysed as Bernoulli processes. A customer model based on this analysis captured the correlation in product configuration decisions over time. Closed-form analytic models were developed from first principles, describing the steady state behaviour of flow-controlled manufacturing systems under a generalised clearing policy and uncorrelated job arrival sequences. Empirical analysis of data sets obtained through discrete event simulation was used to adjust the models to account for more complex cases involving multiple job types and varying correlation. Characteristic response surfaces were shown to exist over the domains of manufacturing system load and job arrival sequence correlation.
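As a hedged sketch of how correlated Bernoulli job-arrival sequences of the kind analysed here can be generated, the example below uses a simple persistence (Markov-style) rule whose parameter controls the correlation between successive product-configuration decisions; the model structure and parameter values are illustrative assumptions, not the customer model developed in the thesis.

```python
import random

def correlated_bernoulli(n, p, persistence, seed=0):
    """Generate a {0,1} job-arrival sequence with long-run rate ~p.

    persistence in [0, 1): probability of repeating the previous decision,
    so higher values yield stronger correlation between successive arrivals.
    """
    rng = random.Random(seed)
    seq = []
    prev = 1 if rng.random() < p else 0
    for _ in range(n):
        if rng.random() < persistence:
            nxt = prev                           # repeat previous configuration decision
        else:
            nxt = 1 if rng.random() < p else 0   # fresh Bernoulli(p) draw
        seq.append(nxt)
        prev = nxt
    return seq

seq = correlated_bernoulli(n=10000, p=0.3, persistence=0.7)
rate = sum(seq) / len(seq)
# lag-1 sample autocorrelation as a rough check of the induced correlation
mean = rate
num = sum((seq[i] - mean) * (seq[i + 1] - mean) for i in range(len(seq) - 1))
den = sum((s - mean) ** 2 for s in seq)
print(f"arrival rate = {rate:.3f}, lag-1 autocorrelation = {num / den:.3f}")
```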
A novel manufacturing flow control method, termed biased minimum feedback (BMF), was developed. BMF was shown to possess the capability to distribute work-in-process within the entire manufacturing facility through work-in-process regulation at each manufacturing system, so as to increase the performance of downstream assembly stations fed from parallel upstream processing stations. A case study in the production of a configurable product was used to present an application for the models and methods developed during this research. The models were shown to be useful in predicting steady state control requirements to increase manufacturing performance. / Thesis (Ph.D.)-University of KwaZulu-Natal, Durban, 2013.
|
65 |
Forecasting water resources variables using artificial neural networks. Bowden, G. J. January 2003 (PDF)
Thesis (Ph.D.)--University of Adelaide, School of Civil and Environmental Engineering, 2003. / "February 2003." Corrigenda for, inserted at back. Includes bibliographical references (leaves 475-524).
|
66 |
Automatic CAD model processing for downstream applications. Patel, Paresh S., January 2005
Thesis (Ph.D.) -- Mississippi State University. Computational Engineering. / Title from title screen. Includes bibliographical references.
|
67 |
Optimisation of an operating policy for variable speed pumps using genetic algorithms. Eusuff, M. Muzaffar. January 1995 (PDF)
Undertaken in conjunction with JUMP (Joint Universities Masters Programme in Hydrology and Water Resources). Bibliography: leaves 76-83. Establishes a methodology using genetic algorithms to find the optimum operating policy for variable speed pumps in a water supply network over a period of 24 hours.
|
68 |
Investigation of methodologies for fault detection and diagnosis in electric power system protection. Adewole, Adeyemi Charles. January 2012
Thesis submitted in fulfilment of the requirements for the degree Master of Technology: Electrical Engineering in the Faculty of Engineering at the Cape Peninsula University of Technology, 2012. / The widespread deregulation and restructuring of electric power utilities throughout the world and the surge in competition amongst utility companies have brought about the desire for improved economic efficiency of electric utilities and the provision of better service to energy consumers. These end users are usually connected to the distribution network. Thus, there is growing research interest in distribution network fault detection and diagnosis algorithms for reducing the down-time due to faults, so as to improve the reliability indices of utility companies and enhance the availability of power supply to customers.
The application of signal processing and computational intelligence techniques in power system protection, automation, and control cannot be overemphasized. This research work focuses on the power system distribution network and is aimed at the development of versatile algorithms capable of accurate fault detection and diagnosis of all fault types, operating in balanced/unbalanced distribution networks under varying fault resistances, fault inception angles, load angles, and system operating conditions.
Therefore, different simulation scenarios encompassing various fault types at several locations with different load angles, fault resistances, fault inception angles, capacitor switching, and load switching were applied to the IEEE 34 Node Test Feeder in order to generate the data needed. In particular, the effects of system changes were investigated by integrating various Distributed Generators (DGs) into the distribution feeder. The length of the feeder was also extended and investigations carried out. This was implemented by modelling the IEEE 34-node benchmark test feeder in DIgSILENT PowerFactory (DPF).
In the course of this research, a hybrid combination of Discrete Wavelet Transform (DWT), decision-taking rule-based, and Artificial Neural Network (ANN) algorithms for electric power distribution network fault detection and diagnosis was developed. The integrated algorithms were capable of fault detection, fault type classification, identification of the faulted line segment, and fault location.
Several scenarios were simulated in the test feeder. The resulting waveforms were exported as ASCII or COMTRADE files to MATLAB for DWT signal processing. Experiments with various DWT mother wavelets were carried out on the waveforms obtained from the simulations; in particular, Daubechies db-2, db-3, db-4, db-5 and db-8, as well as Coiflet-3 and Symlet-4 mother wavelets, were considered. The energy and entropy of the detail coefficients for each decomposition level, based on a sampling frequency of 7.68 kHz, were analysed. The best decomposition level for the diagnostic tasks was then selected based on the analysis of the wavelet energies and entropy in each level of decomposition. Consequently, level-1 db-4 detail coefficients were selected for the fault detection task, while level-5 db-4 detail coefficients were used to compute the wavelet entropy per unit indices, which were then used for the fault classification, fault section identification, and fault location tasks.
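As a hedged sketch of this kind of wavelet-based feature extraction, the example below decomposes a sampled waveform with Daubechies db-4 wavelets using PyWavelets and computes the energy and a Shannon-type entropy of the detail coefficients at each level; the test signal, the 7.68 kHz sampling, and the exact entropy definition are illustrative assumptions rather than the thesis's implementation.

```python
import numpy as np
import pywt

def wavelet_energy_entropy(signal, wavelet="db4", level=5):
    """Energy and Shannon-type entropy of detail coefficients per decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_n, cD_n, ..., cD_1]
    features = {}
    for i, detail in enumerate(coeffs[1:], start=1):
        lvl = level - i + 1                       # coeffs[1] is the deepest detail level
        energy = np.sum(detail**2)
        p = detail**2 / energy                    # normalised coefficient energies
        entropy = -np.sum(p * np.log(p + 1e-12))  # Shannon-type entropy
        features[f"D{lvl}"] = (energy, entropy)
    return features

# Illustrative test signal: 50 Hz fundamental sampled at 7.68 kHz with an added disturbance
fs = 7680.0
t = np.arange(0, 0.2, 1.0 / fs)
signal = np.sin(2 * np.pi * 50 * t)
signal[t > 0.1] += 0.5 * np.sin(2 * np.pi * 750 * t[t > 0.1])  # assumed fault-like component
for lvl, (energy, entropy) in wavelet_energy_entropy(signal).items():
    print(f"{lvl}: energy = {energy:.3f}, entropy = {entropy:.3f}")
```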
Decision-taking rule-based algorithms were used for the fault detection and fault classification tasks. The fault detection task verifies whether a fault did indeed occur, while the fault classification task determines the fault class and the faulted phase(s). ANNs were used for the fault section identification and fault location tasks. For the fault section identification task, the ANNs were trained for pattern classification to identify the lateral or segment affected by the fault, whereas the fault location ANNs were trained for function approximation to predict the location of the fault from the substation in kilometres.
Also, the IEEE 13 Node Benchmark Test Feeder was modelled in RSCAD software and batch-mode simulations were carried out using the Real-Time Digital Simulator (RTDS) as a proof of concept for the proposed method, to demonstrate its scalability and to further validate the developed algorithms. The COMTRADE files of disturbance records retrieved from an external IED connected in closed loop with the RTDS and the runtime simulation waveforms were used as test inputs to the developed Hybrid Fault Detection and Diagnosis (HFDD) method.
Comparison of the entropy-based method with statistical methods based on standard deviation and Mean Absolute Deviation (MAD) showed that the entropy-based method is very reliable, accurate, and robust. Results of preliminary studies showed that the proposed HFDD method can be applied to any power system network irrespective of changes in the operating characteristics; however, certain decision indices would change, and the decision-taking rules and ANN algorithms would need to be updated.
The HFDD method is promising and would serve as a useful decision support tool for system operators and engineers to aid them in fault diagnosis thereby helping to reduce system down-time and improve the reliability and availability of electric power supply.
Key words: Artificial neural network, discrete wavelet transform, distribution network, fault simulation, fault detection and diagnosis, power system protection, RTDS.
|
69 |
Data-Driven Adaptive Reynolds-Averaged Navier-Stokes k-ω Models for Turbulent Flow-Field Simulations. Li, Zhiyong. 01 January 2017
Data-driven adaptive algorithms are explored as a means of increasing the accuracy of Reynolds-averaged turbulence models. This dissertation presents two new data-driven adaptive computational models for simulating turbulent flow where partial but incomplete measurement data are available. These models automatically adjust (i.e., adapt) the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and a set of prescribed measurement data.
The first approach is the data-driven adaptive RANS k-ω (D-DARK) model. It is validated with three canonical flow geometries: pipe flow, the backward-facing step, and flow around an airfoil. For all three test cases, the D-DARK model improves agreement with experimental data in comparison with the results from a non-adaptive RANS k-ω model that uses standard values of the closure coefficients.
The second approach is the Retrospective Cost Adaptation (RCA) k-ω model. The key enabling technology is retrospective cost adaptation, which was developed for real-time adaptive control but is used in this work for data-driven model adaptation. The algorithm conducts an optimization that seeks to minimize a surrogate performance measure and, by extension, the real flow-field error. The advantage of the RCA approach over the D-DARK approach is that it is capable of adapting to unsteady measurements. The RCA-RANS k-ω model is verified with a statistically steady test case (pipe flow) as well as two unsteady test cases: vortex shedding from a surface-mounted cube and flow around a square cylinder. The RCA-RANS k-ω model effectively adapts to both averaged steady and unsteady measurement data.
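As a heavily simplified, hedged illustration of the general idea of data-driven closure-coefficient adaptation (not the D-DARK or RCA algorithms themselves), the sketch below tunes a single model coefficient by gradient descent so that a toy model's prediction matches sparse synthetic "measurement" data; every quantity here is a placeholder.

```python
import numpy as np

# Toy "model": the prediction depends on a single closure-like coefficient beta
def model_prediction(beta, x):
    return beta * x**2          # stand-in for a term controlled by a closure coefficient

# Sparse synthetic "measurements" generated with an unknown true coefficient
x_meas = np.array([0.2, 0.5, 0.8])
beta_true = 0.09
y_meas = model_prediction(beta_true, x_meas)

# Adapt the coefficient to minimise the squared mismatch with the measurements
beta = 0.05                     # standard/initial value
lr = 0.5
for _ in range(200):
    residual = model_prediction(beta, x_meas) - y_meas
    grad = 2.0 * np.sum(residual * x_meas**2)   # d/dbeta of the sum of squared residuals
    beta -= lr * grad
print(f"adapted coefficient = {beta:.4f} (target {beta_true})")
```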
|
70 |
The Development of a Hybrid Optimization Algorithm for the Evaluation and Optimization of the Asynchronous Pulse Unit. Inclan, Eric. 01 January 2014
The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms under a single meta-heuristic so that the hybrid algorithm is more robust, computationally efficient, and/or accurate than its constituent algorithms. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, it presents an improved Method of Characteristics code with novel boundary conditions, which characterizes pipelines better than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
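As a hedged sketch of the general idea of a meta-heuristic that switches between constituent optimizers based on recent performance (not the search-vector heuristic proposed in the thesis), the example below alternates between a global random-restart move and a local perturbation move, crediting whichever has improved the objective most; all names and parameter choices are illustrative assumptions.

```python
import random

def hybrid_minimize(f, x0, bounds, iters=500, seed=0):
    """Toy hybrid optimizer: picks the constituent move that has recently helped most."""
    rng = random.Random(seed)
    lo, hi = bounds
    x_best, f_best = list(x0), f(x0)
    credit = {"global": 1.0, "local": 1.0}       # running credit per constituent strategy

    def global_move(_):
        return [rng.uniform(lo, hi) for _ in x_best]          # uniform random restart

    def local_move(x):
        return [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo)))) for xi in x]

    moves = {"global": global_move, "local": local_move}
    for _ in range(iters):
        # favour the strategy with the larger accumulated credit (soft, randomized selection)
        name = max(credit, key=lambda k: credit[k] * rng.uniform(0.5, 1.0))
        x_new = moves[name](x_best)
        f_new = f(x_new)
        if f_new < f_best:
            credit[name] += f_best - f_new       # reward the strategy that improved
            x_best, f_best = x_new, f_new
        else:
            credit[name] *= 0.99                 # slowly discount unproductive strategies
    return x_best, f_best

# Example: minimise a shifted sphere function over [-5, 5]^3
sphere = lambda x: sum((xi - 1.0) ** 2 for xi in x)
x_opt, f_opt = hybrid_minimize(sphere, x0=[0.0, 0.0, 0.0], bounds=(-5.0, 5.0))
print(f"best f = {f_opt:.4f} at {['%.3f' % xi for xi in x_opt]}")
```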
|