231 |
The applications of the boundary element method in high voltage systems. Jones, S. P., January 1995.
This thesis describes the development and enhancement of the theory of the Boundary Element Method (BEM) as applied to electrostatic systems. The objective of this study was the creation of a user-friendly suite of computer codes based on the BEM algorithms. The work starts by examining the integral equivalent of Poisson's equation, which is the basic building block of the method. A brief study of the basic algorithms involved in the method is carried out, together with a review of the different kinds of surface elements that can be found in high voltage applications. This is followed by descriptions of the computational methods used to solve for the electric potentials, field components and field derivatives at any point in space. A major enhancement to the code has introduced the ability to calculate the self and mutual capacitances of any metallic bodies present in the high voltage system considered. Another major enhancement allows solutions to be obtained in the presence of space charge, which may take the form of rings, disks and/or a continuous finite volume distribution. During this work a means was also developed of analysing systems where charge accumulation has occurred on some or all dielectric surfaces present within the system. To test the applicability of the overall package, a variety of classical problems have been used to compare the BEM solutions against well-known results. Comparisons have also been made for more complex high voltage systems, where the BEM solutions have been compared with solutions obtained using other common numerical techniques, e.g. the Finite Element Method or the Charge Simulation Method.
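For orientation only (the notation below is generic, not necessarily the thesis's own), the integral equivalent of Poisson's equation that the abstract refers to expresses the potential at a field point in terms of charge on the electrode and dielectric surfaces plus any volume space charge:

```latex
\phi(\mathbf{r}) \;=\; \frac{1}{4\pi\varepsilon_{0}}
  \oint_{S} \frac{\sigma(\mathbf{r}')}{\lvert \mathbf{r}-\mathbf{r}' \rvert}\, \mathrm{d}S'
\;+\; \frac{1}{4\pi\varepsilon_{0}}
  \int_{V} \frac{\rho(\mathbf{r}')}{\lvert \mathbf{r}-\mathbf{r}' \rvert}\, \mathrm{d}V'
```

In a BEM code the surfaces S are discretised into elements carrying unknown charge densities; enforcing the boundary conditions (prescribed potentials on conductors, flux continuity at dielectric interfaces) at collocation points yields a linear system for the element charges, after which potentials and field components anywhere in space follow by evaluating and differentiating the integrals.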
|
232 |
The visualisation of regular three dimensional data. Jones, M. W., January 1995.
This work is a thorough investigation of the visualisation of regular three-dimensional data. The main contributions are new methods for: (1) reconstructing surfaces from contour data; (2) constructing voxel data from triangular meshes; (3) real-time manipulation through the use of cut planes; and (4) ultra-high-quality, accurate rendering. Other work presented reduces the number of calculations required during volume rendering, reduces the number of cubes that need to be considered during surface tiling, and combines particle systems and blobby models with high-quality, computationally efficient rendering. All these methods offer new solutions to, and improve existing methods for, the construction, manipulation and visualisation of volume data. In addition to these new methods, this work acts as a review of and guide to current state-of-the-art research, and gives in-depth details of implementations and results of well-known methods. Comparisons of both computational expense and image quality are made using these results, and serve as a basis for choosing a visualisation technique given the resources available and the presentation of the data required. Reviews of each main visualisation topic are presented; in particular, the review of volume rendering methods covers much of the recent research. Complementing this is a comparison of many alternative viewing models and efficiency tricks, in the most thorough investigation to this researcher's knowledge. During the course of this research many existing methods have been implemented efficiently, in particular the surface tiling technique and a method for measuring the distance between a point and a 3D triangle.
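The last-mentioned computation, the distance between a point and a 3D triangle, is well defined even though the abstract gives no detail. As an illustration only, here is the textbook region-based closest-point test from the real-time collision detection literature, not necessarily the thesis's own implementation:

```python
def point_triangle_distance(p, a, b, c):
    """Euclidean distance from point p to triangle (a, b, c) in 3D.

    Classifies p against the triangle's Voronoi regions (vertices,
    edges, interior) and returns the distance to the closest feature."""
    sub = lambda u, v: (u[0] - v[0], u[1] - v[1], u[2] - v[2])
    dot = lambda u, v: u[0] * v[0] + u[1] * v[1] + u[2] * v[2]
    dist = lambda u, v: dot(sub(u, v), sub(u, v)) ** 0.5
    walk = lambda u, t, d: (u[0] + t * d[0], u[1] + t * d[1], u[2] + t * d[2])

    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:                  # closest to vertex a
        return dist(p, a)
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:                 # closest to vertex b
        return dist(p, b)
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:      # closest to edge ab
        return dist(p, walk(a, d1 / (d1 - d3), ab))
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:                 # closest to vertex c
        return dist(p, c)
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:      # closest to edge ac
        return dist(p, walk(a, d2 / (d2 - d6), ac))
    va = d3 * d6 - d5 * d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:   # closest to edge bc
        t = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return dist(p, walk(b, t, sub(c, b)))
    denom = va + vb + vc                     # interior: project onto the plane
    return dist(p, walk(walk(a, vb / denom, ab), vc / denom, ac))
```

In a tiling or rendering inner loop one would normally return the squared distance and defer the square root until it is actually needed.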
|
233 |
Power semiconductor devices : two-dimensional simulation and compact model development. Li, Z. M., January 1997.
The mixed-mode modelling methodology is essential to extend the benefits of computer-aided design (CAD) techniques to the design of power electronic circuits. The mixed-mode environment can be established in two ways: one is to adopt device circuit models into a two-dimensional device simulator; the other is to introduce finite-element device models into a circuit simulator. For the first method, an effective numerical algorithm has been proposed for adopting advanced compact models for power devices into a general-purpose two-dimensional mixed-mode device/circuit simulator. A suite of compact models for the major power devices is implemented in the simulator and validated. For the second method, simplified numerical models for power diodes and insulated gate bipolar transistors (IGBTs) are proposed for the circuit simulator Saber. Comparisons of the new models with other compact models and with experimental results show good agreement. It is also shown that, in some cases, the speed of the numerical models can be increased enormously, to become comparable with that of other compact models, by using specific numerical techniques. Techniques for measuring fast power devices and extracting their parameters are studied and enhanced; a commercial IGBT device is measured, the parameter extraction procedure applied, and the device characterised. Two-dimensional device simulation has been used extensively for understanding the devices and to help in the compact model development. A two-dimensional numerical analysis of the quasi-saturation phenomenon in the vertical double-diffused MOS (VDMOS) device is carried out and an improved design proposed.
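The thesis's compact models and the Saber environment are not reproduced here. Purely as an illustration of the circuit side of mixed-mode simulation, consider a resistor in series with a diode described by the textbook exponential compact model; the component values and the simple Newton solver below are this sketch's assumptions, not the thesis's:

```python
import math

def solve_diode_circuit(vs=5.0, r=1000.0, i_s=1e-12, vt=0.025):
    """Newton iteration for the node voltage v of a series
    source-resistor-diode circuit: i_s*(exp(v/vt) - 1) = (vs - v)/r.

    This mirrors, in miniature, what a circuit simulator does when a
    compact device model is embedded in the surrounding network."""
    v = 0.6                                   # initial guess near diode turn-on
    for _ in range(100):
        f = i_s * (math.exp(v / vt) - 1.0) - (vs - v) / r   # current mismatch
        df = (i_s / vt) * math.exp(v / vt) + 1.0 / r        # its derivative
        step = f / df
        v -= step
        if abs(step) < 1e-12:                 # converged
            break
    return v
```

A full mixed-mode simulator replaces the one-line device equation with a numerically solved device model while keeping exactly this kind of outer Newton loop over the circuit equations.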
|
234 |
An approach for the causal analysis in casting processes based on probabilistic analysis, neural network and design optimisation. Ransing, R. S., January 1995.
For causal analysis in a manufacturing process, many statistical techniques are available to assist an engineer in decision making. Later, expert system techniques were also proposed to address some routine decision-making problems. However, these systems faced the criticism that they do not generate any new information, i.e. the user only gets the information that is already encoded in the database and rulebase. The algorithms proposed in this thesis are expected to generate new information even for an expert user. A two-pronged strategy has been adopted for the causal analysis. The first strategy analyses the influence of causes on the occurrence of defects by recognising patterns in the rejection data and by using previous diagnostic examples. The program learns from examples as well as mistakes, and at the same time explains or justifies its results. In the second strategy, the causal relationship is analysed by numerical simulation of the process. If a defect is predicted, the algorithm automatically redesigns the process or die until the casting is simulated as defect-free. A new scheme has been proposed for representing the causal relationship. Causes are categorised into two groups, viz. rootcauses and metacauses. Rootcauses are the actual process, design and material parameters which can be directly controlled. Metacauses denote the scientific rationale relating rootcauses to defects. Representing the causal knowledge in such a network form opened two research directions for the diagnostic analysis: the first based on probabilistic analysis and the second on neural networks. In both cases the network was constrained to the defect-metacause-rootcause relationship, and the necessary modifications were made to the conventional approaches.
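The abstract does not give the thesis's actual network or probabilities. As an illustration only of the defect-metacause-rootcause structure, the sketch below propagates weights through such a two-layer network and ranks rootcauses; all defect names, weights and the additive combination rule are this sketch's assumptions:

```python
def rank_rootcauses(defect, meta_given_defect, root_given_meta):
    """Score rootcauses for an observed defect by propagating weights
    through a defect -> metacause -> rootcause network, summing over
    all metacause paths, and ranking in descending order of score."""
    scores = {}
    for meta, w_meta in meta_given_defect.get(defect, {}).items():
        for root, w_root in root_given_meta.get(meta, {}).items():
            scores[root] = scores.get(root, 0.0) + w_meta * w_root
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical casting example; the weights are invented for illustration.
META = {"shrinkage": {"inadequate feeding": 0.7, "hot spot": 0.3}}
ROOT = {
    "inadequate feeding": {"riser too small": 0.8, "pouring temperature": 0.2},
    "hot spot": {"pouring temperature": 0.6, "chill placement": 0.4},
}
```

A rootcause reachable through several metacauses ("pouring temperature" above) accumulates evidence from every path, which is the diagnostic benefit of the network representation.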
|
235 |
Models and computer based support tools for conceptual design in mechanical engineering. Al-Salka, M. A., January 1995.
The aim of this thesis is to find a valid and practical model for a computer-based conceptual design support system. A prerequisite to building any design support system is a proper understanding of the design process; hence a survey of current prescriptive, descriptive, and computational models of the design process is presented. Initially, it was thought that the various stages of the conceptual design phase, such as function decomposition, solution generation, and evaluation, could be addressed independently. However, from the initial investigations in this research programme it was soon realised that these stages are heavily inter-related and that the conceptual design phase should therefore preferably be studied as a whole. Established prescriptive models of the design process (design methodologies) have been carefully studied and evaluated with the help of a number of case design problems. A number of limitations have been encountered with these models, and therefore a more realistic design model, which overcomes some of the limitations of established methodologies, is proposed. Based on the insights gained from the theoretical studies and the empirical investigations of other researchers, a model of an interactive design support system is described and fully explored. A novel high-level programming language, which can easily be used to define procedural models of engineering design, has been devised; this language is referred to in this thesis as the Design Procedures Programming Language (DPPL). A prototype Conceptual Design Support and Analysis System (CODSAS), which embodies the above model, has been implemented. CODSAS provides an integrated, consistent framework for carrying out (conceptual) engineering design.
|
236 |
The use of hypertext in handling process engineering design data. Kift, M. H., January 1995.
This thesis investigates aspects of data management associated with process engineering design, with a view to proposing improved methods and models for data storage, manipulation and representation, on which a foundation for new process design environments may be based. Several aspects of integrated process design database systems are examined, including data models, data handling, program control and integration. Process design is an increasingly complex activity, and this complexity is compounded by moves towards concurrent engineering and "round-the-clock" working in design offices around the world. It is evident that better management of advanced data types and data control is needed to support the design activity. The main focus of this research is the use of hypertext in an integrated engineering database system for handling complex information during process design. The research investigates the potential that hypertext offers for easy control of information and the powerful tools it provides for constructing data relationships. In addition, this research investigates the strengths and weaknesses of current database models for engineering and hypertext usage, leading to recommendations on the models best suited to these tasks. Finally, methods of data and program integration in current integrated systems are examined. Several new methods of integration are investigated, using advanced features of the Microsoft Windows operating system; these aim to achieve close integration of both engineering and conventional applications. The ideas proposed by the research are manifested in the form of computer packages, which are used in the thesis to demonstrate, by means of realistic examples, the use of hypertext for handling the design process, the employment of an advanced data model for handling complex multimedia information, and program integration.
|
237 |
Processor interactions in multiprocessor digital filtering studies. Artym, R., January 1984.
No description available.
|
238 |
Interactive multi-media systems for science and rheology. Deliyannis, I., January 2003.
This work is concerned with the construction of multimedia systems for scientific and rheological case-studies, introducing the element of interactivity. The base-data are often the end-result of simulation (CFD) or of experimental investigations, utilised in some post-processed form that calls upon visualisation. Other issues addressed include the organisation and presentation of data into meaningful and easily-accessible forms; this reveals multifaceted presentation requirements. In this respect, current commercial presentation applications have been found lacking: they rarely offer cross-platform compatibility, provide limited programmability, are restrictive on data-interaction, and support only static WWW-delivery. Hence, a substantive goal has been to deliver comprehensive multimedia-system functionality, both as a stand-alone system and over the WWW, via a single, multi-purpose implementation. A multimedia framework is employed to embrace the interactive and visual presentation of a series of scientific and rheological case-studies with large conventional (static) and multi-mode (dynamic) data-sets. Graphs are utilised to enable rigorous multimedia system (MMS) design, effortless updates and unproblematic future expansion. Multimedia environments (MMEs) facilitate rapid prototyping, and a data-oriented approach is advocated. As a result, the customised presentation-attributes vary widely from case-study to case-study, touching upon aspects such as user-interface, interaction, data-content organisation and presentation-styles. Case-studies include educational and scientific/industrially-related content. A film introducing the scientific area of non-Newtonian fluids is digitally constructed, using solid-modelling animations, simulation data, filmed experiments, music, voiceover, and transitions. Content-delivery for multiple media is implemented, and includes interactive WWW or CD/DVD multi-language versions and static (videotape, DV) modes.
Next, a historical multimedia (MM) presentation (History of Rheology MMS) is generated. This features an informal user-interface and dynamic rendering of content, making extensive use of client-server technology for interactive stream-delivery over the WWW. Synchronisation is demanded for multiple streamed data-types, such as voiceover, music, experimental and CFD-sourced data, animated graphs, and rendered wire-frame graphics. Scientific/industrial data-sets of increasing complexity are featured.
|
239 |
On polynomial expansions in modelling and smoothing transect data. Heatley, P., January 1999.
The focus of this study is the modelling and smoothing of transect data. The former requires parameter estimation in statistical models for data with spatial dependence or correlation, while the latter requires the study of penalty functions specifying the degree of smoothing required. Both aspects typically involve the computation of determinants and inverses of matrices, and such calculations are computationally expensive if reliable estimation of parameters is required. This investigation addresses such practical difficulties by developing computationally efficient and robust frameworks for parameter estimation in these related fields of study. For the first, we consider the estimation of correlation and related parameters using the likelihoods of the spatial linear model. Here, spatial observations are regarded as dependent observations in d-dimensional space, with special emphasis on d=1. The correlation structure is assumed to be of known functional form, depending on a finite number of parameters and the relative locations of the data points. Thus, for particular correlation functions and sampling configurations, it is possible to express quantities in the likelihood functions as polynomials in the parameters of the correlation function. This facilitates efficient estimation of the parameters and therefore permits a comprehensive examination of the likelihood functions. As it is well known that these functions can, in certain circumstances, be seriously multimodal, our results have both theoretical and practical interest, providing useful assessments of the behaviour of the likelihoods together with details on the application of the computationally efficient polynomial-based estimation framework. The second area of discussion involves the application of the polynomial framework to the estimation of parameters in difference-based methodologies for the smoothing of transect data.
The polynomial framework yields useful formulae for quantities in the penalty functions, from which a smoothing parameter is selected. Then, by numerical examination, we assess the consistency of the degree of smoothing prescribed by various criteria, again for data which exhibit some spatial dependence.
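To make the computational issue concrete (in standard notation, not necessarily the thesis's own): for the spatial linear model $y \sim N(X\beta,\, \sigma^{2} V(\theta))$ the log-likelihood is

```latex
\ell(\beta,\sigma^{2},\theta)
  = -\frac{n}{2}\log\!\left(2\pi\sigma^{2}\right)
    -\frac{1}{2}\log\lvert V(\theta)\rvert
    -\frac{1}{2\sigma^{2}}\,(y - X\beta)^{\mathsf{T}}\, V(\theta)^{-1}\, (y - X\beta)
```

and the expensive terms are $\log\lvert V\rvert$ and the quadratic form. For example, on a regular one-dimensional transect with the AR(1)-type correlation $V_{ij} = \rho^{\lvert i-j\rvert}$, one has $\lvert V\rvert = (1-\rho^{2})^{n-1}$ and $V^{-1}$ is tridiagonal with entries polynomial in $\rho$ up to a factor $1/(1-\rho^{2})$, which is exactly the kind of reduction a polynomial-based estimation framework can exploit.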
|
240 |
A class and object library for parallel discrete event simulation : with application to manufacturing systems and communication network systems. Huang, L. S. F., January 1998.
In parallel with the development of base technologies for discrete event simulation, there has been an ever-increasing trend towards using object-oriented programming (OOP) techniques to realise simulations. Object-oriented programming provides outstanding benefits for software development, in that classes and objects can be reused, extended, and easily maintained for the simulation of discrete event systems. This thesis describes the implementation of discrete event simulation using a library of classes and objects running under a parallel and distributed computing environment. The proposed system is focused on applications related to manufacturing control systems and communication network systems. Nevertheless, the object-oriented infrastructure of the proposed system will accommodate future development for applications in distributed computer control systems, traffic systems, start-up and shut-down procedures of industrial plants, and many other safety-critical applications. Based on the object-oriented philosophy, a hierarchy of classes and objects has been developed to represent the behaviour of such entities and their interactions as a discrete event system. From this library of reusable classes and objects, developed in the C++ programming language, the behaviour and functionality of the designated application system can be modelled. The efficiency and applicability of the developed class and object library are then demonstrated by the simulation of a variety of discrete event systems. For parallel implementation, an object distribution mechanism is applied to assign one or more application objects to each process. By sending and receiving messages through the Parallel Virtual Machine (PVM) architecture, the multiple tasks of an application can co-operate to solve a problem in parallel. Several concepts in the realisation of a parallel discrete event system simulation are also discussed in this thesis.
By integrating the necessary software engineering technologies into the proposed system infrastructure, a general-purpose library of classes and objects for parallel discrete event system simulation is realised. A high degree of parallelisation is achieved through the Parallel Virtual Machine framework. Industrial applications are simulated to demonstrate the versatility of the developed class and object library for parallel discrete event simulation.
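The thesis's library itself is in C++ and distributed over PVM. As a deliberately minimal, sequential illustration of the kernel any such library is built around (the class and method names here are this sketch's own, not the thesis's), a discrete event simulator reduces to a time-ordered event queue:

```python
import heapq

class Simulator:
    """Minimal sequential discrete event simulation kernel: events are
    (time, sequence-number, handler) tuples kept in a heap and executed
    in non-decreasing time order."""

    def __init__(self):
        self._queue = []
        self._seq = 0      # tie-breaker so simultaneous events keep FIFO order
        self.now = 0.0     # current simulation time

    def schedule(self, delay, handler):
        """Schedule handler(sim) to fire `delay` time units from now."""
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler))
        self._seq += 1

    def run(self, until=float("inf")):
        """Process events in time order until the queue empties or `until`."""
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._queue)
            handler(self)
```

An entity such as a machine in a manufacturing model would be an object whose handlers schedule its own future events; a parallel version replaces the single queue with per-process queues kept consistent by exchanging time-stamped messages, which is the role PVM plays in the thesis's system.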
|