11. A hierarchical, integrated process-, resource- and object-model
Heinl, Hans, January 2001
No description available.

12. Complexity characteristics and measurement within engineering systems
Read, Craig, January 2008
Complexity is a significant factor in the development of new products and systems; generally speaking, the higher the complexity, the more difficult products and systems are going to be to design and develop. There are a number of different factors that influence complexity within systems, namely: interoperability; upgradability; adaptability; evolving requirements; system size; automation requirements; performance requirements; support requirements; sustainability; reliability; the need for increased product lifespan; and finally, the length of time systems take to develop. There is, at present, no common language to describe complexity within engineered systems; this language needs to be developed in order to help industry cope with increasing product complexity and thus meet customer demands. This thesis represents a start in the development of that language, and thus an understanding of systems complexity. The thesis offers a framework for complexity analysis within systems, one which identifies some of the key complexity characteristics that need to be taken into consideration, and which embraces complexity problems, definitions, concepts and classifications, origins and coping mechanisms. It has also been developed in terms of a measurement approach, thereby allowing for a meaningful comparison between products and an understanding of the complexities within them. This framework was developed using information collected from academic literature and from more specific case studies. Each complexity characteristic was investigated, and the interactions between characteristics were identified; these interactions allow us to understand complexity and help to develop a common language. The thesis develops a measurement technique that quantifies various complexity characteristics in terms of the framework laid down, thus enabling a quantified understanding of complexity within systems. This new measurement approach was tested on a set of recent case studies, and the complexity characteristics produced by the measurement technique were, in turn, tested against attributes of the system. The framework itself is always evolving, incorporating new complexity characteristics; such evolution can only further our understanding of complexity. Further work is recommended to explore and integrate the approach demonstrated in this thesis into an automated tool and to test its robustness, along with continual development of other elements of the framework, such as a classification of complexity.
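The thesis's measurement technique is not reproduced in the abstract, but a minimal sketch conveys the general idea of aggregating per-characteristic ratings into a single comparable score; the characteristic names, weights and 0-10 scale below are hypothetical illustrations, not the actual technique.

    # Hypothetical sketch of a characteristic-based complexity score;
    # characteristics, scale and weights are illustrative only.
    CHARACTERISTICS = {
        # name: relative weight (sums to 1.0)
        "interoperability": 0.2,
        "upgradability": 0.1,
        "system_size": 0.3,
        "automation": 0.2,
        "support": 0.2,
    }

    def complexity_score(ratings):
        """Aggregate per-characteristic ratings (0-10) into one score."""
        return sum(CHARACTERISTICS[name] * ratings[name]
                   for name in CHARACTERISTICS)

    product_a = {"interoperability": 7, "upgradability": 3,
                 "system_size": 8, "automation": 5, "support": 4}
    print(complexity_score(product_a))  # 5.9 on the 0-10 scale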

13. Product modularity: a multi-objective configuration approach
Lee, Michael, January 2010
Product modularity is often seen as a means by which a product system can be decomposed into smaller, more manageable chunks in order to better manage design, manufacturing and after-sales complexity. The most common approach is to decompose the product down to component level and then group the components to form modules. The rationale for module grouping can vary, from the more technical physical and functional component interactions, to any number of strategic objectives such as variety, maintenance and recycling. The problem lies with the complexity of product modularity under these multiple (often conflicting) objectives. The research in this thesis presents a holistic multi-objective computer-aided modularity optimisation (CAMO) framework. The framework consists of four main steps: 1) product decomposition; 2) interaction analysis; 3) formation of modular architectures; and 4) scenario analysis. In summary of these steps: the product is first decomposed into a number of basic components by analysis of both the physical and functional product domains. The various dependencies and strategic similarities that occur between the product's components are then analysed and entered into a number of interaction matrices. A specially developed multi-objective grouping genetic algorithm (MOGGA) then searches the matrices and provides a whole set of alternative (yet optimal) modular product configurations. The solution set is then evaluated and explored (scenario analysis) using the principles of the Analytic Hierarchy Process. A software prototype has been created for the CAMO framework using Visual Basic to create a multi-objective genetic algorithm (GA) based optimiser within an Excel environment. A case study has been followed to demonstrate the various steps of the framework and make comparisons with previous works. Unlike previous works, which have used simplistic optimisation algorithms and have in general only considered a limited number of modularisation objectives, the developed framework provides a true multi-objective approach to the product modularisation problem.
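As an illustration of the group encoding that underlies grouping genetic algorithms such as MOGGA, the sketch below assigns each component to a module and scores a configuration by the interactions kept inside modules. The interaction matrix is invented, and a plain random search stands in for the GA's selection, crossover and mutation operators.

    import random

    # Illustrative sketch of group encoding: gene i gives the module
    # of component i. Interaction values are invented for illustration.
    N = 6  # components
    interaction = [[0, 3, 1, 0, 0, 0],
                   [3, 0, 2, 0, 1, 0],
                   [1, 2, 0, 0, 0, 0],
                   [0, 0, 0, 0, 3, 2],
                   [0, 1, 0, 3, 0, 3],
                   [0, 0, 0, 2, 3, 0]]

    def fitness(assignment):
        # Sum of interactions between components in the same module.
        return sum(interaction[i][j]
                   for i in range(N) for j in range(i + 1, N)
                   if assignment[i] == assignment[j])

    # Random-search stand-in for the GA's evolutionary loop.
    best = max((tuple(random.randrange(2) for _ in range(N))
                for _ in range(500)), key=fitness)
    print(best, fitness(best))  # e.g. (0,0,0,1,1,1) with fitness 14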

14. Novel modelling and decomposition for overall refinery optimisation and debottlenecking
Zhang, Nan, January 2000
No description available.

15. Acoustic condition monitoring in industrial environments
Jiang, Jian, January 2008
Condition monitoring (CM) based maintenance is an effective way to ensure a machine's reliability during its useful life, hence reducing the occurrence of unexpected accidents and bringing economic benefits. Awareness of condition monitoring has grown in recent years, and so more attention has been paid to the subject. Although traditional condition monitoring systems (using vibration, speed measurement, temperature monitoring, etc.) are still very effective in particular fields, they have some deficiencies, such as difficulty in implementation and the inability to provide remote and contactless measurement, which limit the wide application of condition monitoring systems. Acoustic measurements do not have these deficiencies, and they also provide rich information. One major drawback limiting the use of acoustic measurements in industry, however, is that the signal can be seriously influenced by environmental factors such as reverberation and background noise. Fortunately, developments in hardware technology and signal processing have shown a potential for reducing these influences by using array technologies. However, large scale acoustic arrays are too expensive to be applied in many situations, and so small scale arrays offer a cost-effective alternative. This thesis concentrates on developing a new condition monitoring system using a small scale acoustic array. This system first localises the acoustic source, which is the object of condition monitoring, in the industrial environment. Then, based upon this localisation, the system uses beam-forming to enhance the signal propagating from the source direction and attenuate signals from other directions. Since reverberation signals often propagate from directions other than the source, this procedure has the potential to reduce the effects of reverberation. Other pre-processing methods, such as band-pass filters and temporal averaging, are then used to further enhance the signal to noise ratio. Finally, the processed signals are used to monitor the condition of the desired machine and diagnose the potential faults. The thesis is divided into four parts. In part 1, the motivation for this project and the reasons why the acoustic method and small scale array technologies are used are introduced. In part 2, the technologies relating to the establishment of this new condition monitoring system are investigated; the contents include 1) investigation of the array's configuration, 2) time delay estimation algorithms used in industrial environments, and 3) other issues relating to acoustic condition monitoring using the new system. In part 3, the new system is established and used to monitor the condition and diagnose the potential faults of a 4-cylinder diesel engine in industrial environments. The experimental results show that this system is effective in this practical application and has potential to be developed further. In part 4, a summary of this thesis is given.
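A minimal delay-and-sum beamformer conveys the array principle described above: each channel is shifted by the delay implied by a steering direction and the channels are averaged, reinforcing the source and attenuating off-axis noise and reverberation. The sample rate, array geometry and steering convention below are assumptions, not the thesis's exact processing chain.

    import numpy as np

    fs = 48000           # sample rate (Hz), assumed
    c = 343.0            # speed of sound (m/s)
    mic_x = np.array([0.0, 0.05, 0.10, 0.15])  # linear array positions (m)

    def delay_and_sum(channels, angle_deg):
        """channels: (n_mics, n_samples) array; angle from broadside."""
        delays = mic_x * np.sin(np.radians(angle_deg)) / c  # seconds
        shifts = np.round(delays * fs).astype(int)          # whole samples
        aligned = [np.roll(ch, -s) for ch, s in zip(channels, shifts)]
        return np.mean(aligned, axis=0)

    # Usage: scan candidate angles; the steered output with maximum
    # power indicates the source direction.
    # powers = [np.mean(delay_and_sum(x, a)**2) for a in range(-90, 91, 5)]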

16. Analysis and optimisation of total site utility systems
Shang, Z., January 2000
No description available.

17. Knowledge-based product support systems
Lagos, Nikolaos, January 2007
This research helps bridge the gap between conventional product support, where the support system is considered a stand-alone application, and the new paradigm of a responsive one, where the support system frequently communicates with its environment and reacts to stimuli. This new paradigm would enable product support knowledge to be captured, stored, processed, and updated automatically, being delivered to the users when, where and in the form they need it. The research reported in this thesis first defines Product Support Systems (PRSSs) as electronic means that provide accurate and up-to-date information to the user in a coherent and personalised manner. Product support knowledge is then identified as the integration of product, task, user, and support documentation knowledge. Next, the thesis focuses on an ontology-based model of the structure, relations, and attributes of product support knowledge. In that model, product support virtual documentation (PSVD) is presented as an aggregation of Information Objects (IOs) and Information Object Clusters (IOCs). The description of PSVD is followed by an analysis of the relation between IOs, IOCs, and domain knowledge. Then, the thesis builds on the ontology-based representation of product support knowledge and explores the synergy between product support, problem solving, and knowledge engineering. As a result, a structured problem solving approach is introduced that combines case-based adaptation and model-based generation techniques. Based on that approach, a knowledge engineering framework for product support systems is developed. A conceptual model of context-aware product support systems that extends the framework is then introduced. The conceptual model includes an ontology-based representation of knowledge related to the users, their activities, the support environment, and the device being used. An approach to semi-automatically integrating design and documentation data is also proposed as part of the context-aware product support system development process.
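A small sketch of the IO/IOC idea described above, with invented class and attribute names: it shows only the aggregation of information objects into clusters and their link to domain topics, not the thesis's full ontology.

    from dataclasses import dataclass, field

    @dataclass
    class InformationObject:
        ident: str
        content: str
        topic: str          # domain concept the IO documents (assumed link)

    @dataclass
    class InformationObjectCluster:
        ident: str
        objects: list = field(default_factory=list)

        def for_topic(self, topic):
            # Retrieve the IOs relevant to one domain concept.
            return [io for io in self.objects if io.topic == topic]

    ioc = InformationObjectCluster("maintenance-guide")
    ioc.objects.append(InformationObject("io-1", "Replace filter...", "filter"))
    print(ioc.for_topic("filter"))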

18. Laser milling: surface integrity, removal strategies and process accuracy
Petkov, Petko, January 2011
Laser milling is capable of processing a large range of materials which are not machinable with conventional manufacturing processes. Engineering materials such as glass, metals and ceramics can be machined without requiring expensive special tools and without any limitations on the 3D complexity of the component. Laser milling is still in its infancy. Laser material interactions are not yet fully understood. Much effort in research and development of the available laser sources is still needed. Ultrafast lasers are beginning to be applied. They can offer more precise machining without the thermal damage that accompanies long-pulse laser manufacturing. Laser pulse duration and its effect on resulting surface integrity have been studied, as well as material removal strategy and process accuracy. In order to characterise the resulting surface after laser ablation, the heat affected zone is usually specified. In most cases, visual inspection would be performed without further analysis, resulting in variance of the findings attributed to the operator. A new methodology was required to accurately and impartially assess the heat penetration and quantify the findings. Based on material grain refinement, a comprehensive new methodology was created. By monitoring the changes in grain sizes, a chart of the heat penetration could be created accurately with automated routines. Surface integrity is a critical factor for many applications, and a methodology based on analysis of grain refinement in the vicinity of the processed area would create a full map of the changes happening after laser ablation. Furthermore, the impact of the laser pulse duration is studied utilising the above mentioned development. Further to the surface roughness and heat affected zone, an in-depth analysis was completed on the micro hardness of the material in order to create a comprehensive chart of the changes induced by the laser milling process. Material removal is based on the overlapping of single craters, and the way the craters overlap is referred to as material removal strategy. Generally there are many strategies formulated for material removal but none of them takes into account the specifics of laser milling. Based on surface orientation, dimensions and feature importance, an assessment of material removal strategies is presented. Although ‘laser milling’ is a term used for a number of material removal processes, there are significant differences between them. New strategies for material removal are formulated and reported based on surface topography and orientation. Advanced programming is realised using a commercially available generic CAM package but taking into account the specifics of the laser milling process. The accuracy of the laser milling process depends on the laser-material interaction, and also on the machine hardware, control system and software. Most of the factors affecting accuracy cannot be changed once the machine is built, but there are some that can be optimised to improve process accuracy. The laser source with its characteristics is as important as the material being processed. The relationship between pulse duration, pulse shape and accuracy of the process was demonstrated through a series of experiments designed to expose the correlation between, and impact of, these parameters.
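On the crater-overlap point: pulse overlap along the scan direction is commonly estimated as O = 1 - v / (f * d), where v is the scan speed, f the pulse repetition rate and d the effective crater diameter. The numbers below are illustrative, not values from the thesis.

    # Commonly used pulse-overlap estimate; parameter values are invented.
    v = 200.0      # scan speed (mm/s)
    f = 20000.0    # pulse repetition rate (Hz)
    d = 0.03       # effective crater diameter (mm)

    overlap = 1 - v / (f * d)   # spacing between pulses is v/f
    print(f"{overlap:.0%}")     # ~67% overlap between successive craters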

19. Data clustering using the Bees Algorithm and the Kd-tree structure
Al-Jabbouli, Hasan, January 2009
Data clustering has been studied intensively during the past decade. The K-means and C-means algorithms are among the most popular clustering techniques. The former algorithm is suitable for 'crisp' clustering and the latter, for 'fuzzy' clustering. Clustering using the K-means or C-means algorithms is generally fast and produces good results. Although these algorithms have been successfully implemented in several areas, they still have a number of limitations. The main aim of this work is to develop flexible data management strategies to address some of those limitations and improve the performance of the algorithms. The first part of the thesis introduces improvements to the K-means algorithm. A flexible data structure was applied to help the algorithm to find stable results and to decrease the number of nearest neighbour queries needed to assign data points to clusters. The method has overcome most of the deficiencies of the K-means algorithm. The second and third parts of the thesis present two new clustering algorithms that are capable of locating near optimal solutions efficiently. The proposed algorithms combine the simplicity of the K-means algorithm and the C-means algorithm with the capability of a new optimisation method called the Bees Algorithm to avoid local optima in crisp and fuzzy clustering, respectively. Experimental results for different data sets have demonstrated that the new clustering algorithms produce better performances than those of other algorithms based upon combining an evolutionary optimisation tool and the K-means and C-means clustering methods. The fourth part of this thesis presents an improvement to the basic Bees Algorithm by applying the concept of recursion to reduce the randomness of its local search procedure. The improved Bees Algorithm was applied to crisp and fuzzy data clustering of several data sets. The results obtained confirm the superior performance of the new algorithm.
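A bare-bones sketch of the Bees Algorithm idea applied to clustering: candidate solutions (sets of cluster centres) are sampled by scout bees, the best sites are refined by recruited bees searching their neighbourhoods, and the remaining scouts keep exploring at random. The 1-D data set and all parameters are illustrative, and the thesis's Kd-tree and recursion refinements are omitted.

    import random

    data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]   # toy 1-D data, two clusters
    K = 2

    def cost(centres):
        # Sum of distances from each point to its nearest centre.
        return sum(min(abs(x - c) for c in centres) for x in data)

    def random_site():
        return [random.uniform(min(data), max(data)) for _ in range(K)]

    sites = [random_site() for _ in range(10)]           # scout bees
    for _ in range(50):
        sites.sort(key=cost)
        elite = sites[:3]                                # best sites
        neighbours = [[c + random.gauss(0, 0.2) for c in s]
                      for s in elite for _ in range(5)]  # recruited bees
        sites = elite + neighbours + [random_site() for _ in range(4)]

    print(sorted(min(sites, key=cost)))                  # ~[1.0, 5.0]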

20. Optimum allocation of inspection stations in multistage manufacturing processes by using Max-Min Ant System
Shetwan, Ali Gassim M., January 2013
In multistage manufacturing processes it is common to locate inspection stations after some or all of the processing workstations. The purpose of the inspection is to reduce the total manufacturing cost resulting from unidentified defective items being processed unnecessarily through subsequent manufacturing operations. This total cost is the sum of the costs of production, inspection and failures (during production and after shipment). Introducing inspection stations into a serial multistage manufacturing process, although constituting an additional cost, is expected to be a profitable course of action. Specifically, at some positions the associated inspection costs will be recovered from the benefits realised through the detection of defective items, before wasting additional cost by continuing to process them. In this research, a novel general cost model for allocating a limited number of inspection stations in serial multistage manufacturing processes is formulated. In the allocation of inspection stations (AOIS) problem, as the number of workstations increases, the number of inspection station allocation possibilities increases exponentially. To identify the appropriate approach for the AOIS problem, different optimisation methods are investigated. The MAX-MIN Ant System (MMAS) algorithm is proposed as a novel approach to explore AOIS in serial multistage manufacturing processes. MMAS is an ant colony optimisation algorithm that was designed originally to begin with an explorative search phase and, subsequently, to make a slow transition to the intensive exploitation of the best solutions found during the search, by allowing only one ant to update the pheromone trails. Two novel forms of heuristic information for the MMAS algorithm are created. This heuristic information is exploited as a novel means to guide ants to build reasonably good solutions from the very beginning of the search. To improve the performance of the MMAS algorithm, six well-known local search methods suitable for the AOIS problem are used. Selecting relevant parameter values for the MMAS algorithm can have a great impact on the algorithm's performance. As a result, a method for tuning the most influential parameter values for the MMAS algorithm is developed. The contribution of this research is that, for the first time, a methodology using MMAS to solve the AOIS problem in serial multistage manufacturing processes has been developed. The methodology takes into account the constraints on inspection resources, in terms of a limited number of inspection stations. As a result, the total manufacturing cost of a product can be reduced, while maintaining the quality of the product. Four numerical experiments are conducted to assess the MMAS algorithm for the AOIS problem. The performance of the MMAS algorithm is compared with a number of other methods, including the complete enumeration method (CEM), a rule of thumb, a pure random search algorithm, particle swarm optimisation, simulated annealing and a genetic algorithm. The experimental results show that the effectiveness of the MMAS algorithm lies in its considerably shorter execution time and robustness. Further, in certain conditions the results obtained by the MMAS algorithm are identical to those of the CEM. In addition, the results show that applying local search to the MMAS algorithm significantly improves the performance of the algorithm. The results also demonstrate that it is essential to use heuristic information with the MMAS algorithm for the AOIS problem in order to obtain a high-quality solution. It was found that the main parameters of MMAS, namely the pheromone trail intensity, heuristic information and pheromone evaporation rate, become less sensitive within the specified range as the number of workstations is significantly increased.
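A simplified MAX-MIN Ant System sketch for an AOIS-style problem follows; the cost model, defect rates and parameter values are invented, but it keeps the MMAS signatures described above: only the best ant deposits pheromone, and trails are clamped between lower and upper bounds.

    import random

    N, M = 8, 3                   # workstations; inspection stations allowed
    TAU_MIN, TAU_MAX, RHO = 0.1, 5.0, 0.1
    tau = [1.0] * N               # pheromone for "inspect after station i"
    defect_rate = [0.01, 0.08, 0.02, 0.10, 0.03, 0.02, 0.09, 0.01]

    def total_cost(positions):
        # Toy cost model: undetected defectives keep consuming processing
        # cost downstream until an inspection station removes them.
        cost, pending = 0.0, 0.0
        for i in range(N):
            pending += defect_rate[i]
            cost += pending
            if i in positions:
                pending = 0.0
        return cost

    def construct():
        # One ant builds a solution: M distinct positions, pheromone-biased.
        remaining, chosen = list(range(N)), []
        for _ in range(M):
            pos = random.choices(remaining,
                                 weights=[tau[i] for i in remaining], k=1)[0]
            remaining.remove(pos)
            chosen.append(pos)
        return chosen

    best, best_cost = None, float("inf")
    for _ in range(200):
        for _ant in range(10):
            sol = construct()
            c = total_cost(sol)
            if c < best_cost:
                best, best_cost = sorted(sol), c
        tau = [max(TAU_MIN, t * (1 - RHO)) for t in tau]   # evaporation
        for p in best:                                     # best ant only
            tau[p] = min(TAU_MAX, tau[p] + 1.0 / best_cost)

    print(best, round(best_cost, 3))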