1. Design for manufacturability : a feature-based agent-driven approach

Jacquel, Dominique January 2000
This thesis presents a feature-based design system called MADSfm (MultiAgent Design System for manufacturability), which allows the creation of 2½D mechanical components, performs on-the-fly manufacturability analysis and even solves common problems automatically. The system uses the multiagent paradigm at the feature level to create a new type of active product model: each feature inside the product model is embodied by an autonomous agent capable of communicating with its peers, building an image of its world, assessing its fitness in this world and modifying its own geometry to guarantee its manufacturability. The model therefore becomes a community of motivated, pro-active agents able to carry out simple tasks on behalf of the user. The underlying activity inside the agent community leads to a global emergent behaviour that ensures component manufacturability. In turn, this emergent behaviour reduces the amount of global component knowledge needed to perform manufacturability analysis. Most activity can take place locally at each individual feature, making the approach robust and a good candidate for parallelisation and distribution. The agent-driven approach to feature-based design and manufacturability analysis ensures robust, manufacturable designs with shorter lead times, bringing substantial cost savings.
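
The agent-per-feature idea can be suggested with a minimal sketch. The class and attribute names below are hypothetical, not MADSfm's; the sketch only illustrates a feature assessing its own manufacturability and repairing its geometry locally:

```python
# Minimal sketch of the agent-per-feature idea (hypothetical names, not
# MADSfm's code): each feature checks its own manufacturability and
# repairs its geometry locally, with no global analysis step.
class FeatureAgent:
    def __init__(self, name, width, min_tool_width):
        self.name = name
        self.width = width                     # current geometry, mm
        self.min_tool_width = min_tool_width   # smallest cutter that fits, mm

    def is_manufacturable(self):
        return self.width >= self.min_tool_width

    def repair(self):
        # the agent modifies its own geometry to guarantee manufacturability
        if not self.is_manufacturable():
            self.width = self.min_tool_width

class ProductModel:
    """The product model is simply the community of feature agents."""
    def __init__(self, features):
        self.features = features

    def settle(self):
        # each agent acts locally; manufacturability emerges globally
        for agent in self.features:
            agent.repair()

slot = FeatureAgent('slot', width=1.5, min_tool_width=2.0)
ProductModel([slot]).settle()
print(slot.width)   # 2.0 -- the slot widened itself so a cutter fits
```
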
2. The computation of multiple roots of a polynomial using structure preserving matrix methods

Hasan, Madina January 2011
Solving polynomial equations is a fundamental problem in many fields of engineering and science. It has been studied by many researchers and excellent algorithms have been proposed, but the computation of the roots of ill-conditioned polynomials still attracts attention. In particular, a small round-off error due to floating-point arithmetic is sufficient to break up a multiple root of a polynomial into a cluster of simple, closely spaced roots, and the problem becomes more complicated if neighbouring roots are themselves closely spaced. This thesis develops a root finder that computes the multiple roots of an inexact polynomial whose coefficients are corrupted by noise. Its theoretical development involves structured matrix methods, parameter optimisation using linear programming, and the solution of least squares equality and nonlinear least squares problems. The root solver differs from classical methods in that it first computes the multiplicities of the roots, after which the roots themselves are computed. Experimental results show that it gives very good results without prior knowledge of the noise level imposed on the coefficients of the polynomial.
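
The multiplicity-first strategy has a classical exact-arithmetic analogue: repeated GCDs of a polynomial with its derivative expose the multiplicity structure before any root is computed. The SymPy sketch below illustrates only that idea; the thesis replaces the exact GCD computations with structured matrix methods precisely so that the approach survives noisy coefficients:

```python
# Exact-arithmetic sketch of the multiplicity-first idea: repeated GCDs
# of p with its derivative reveal each root's multiplicity before the
# roots themselves are computed. Illustrative only -- the thesis uses
# structured matrix methods to make this robust to noisy coefficients.
from sympy import Poly, gcd, symbols

x = symbols('x')

def roots_by_multiplicity(p):
    """Return {multiplicity: [roots]} for an exact polynomial p."""
    g = [Poly(p, x)]
    while g[-1].degree() > 0:
        g.append(gcd(g[-1], g[-1].diff(x)))   # strips one multiplicity level
    f = [g[i].quo(g[i + 1]) for i in range(len(g) - 1)]
    f.append(Poly(1, x))
    out = {}
    for m in range(1, len(f)):
        fm = f[m - 1].quo(f[m])               # roots of fm have multiplicity m
        if fm.degree() > 0:
            out[m] = fm.nroots()
    return out

# (x - 1)^3 (x + 2): one triple root and one simple root
print(roots_by_multiplicity((x - 1)**3 * (x + 2)))   # {1: [-2.0], 3: [1.0]}
```
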
3. Application of parallel computers to particle physics

Booth, Stephen Peter January 1992
This thesis describes lattice gauge theories and discusses methods used to simulate them stochastically. The use of parallel computers for these simulations is discussed in depth. Various pseudo-random number generator algorithms are reviewed, and their implementation on parallel systems is investigated. The strong-coupling phase transition of non-compact lattice QED is investigated, and the phase diagram of strong-coupling non-compact lattice QED with an additional four-fermion interaction is deduced using a series of dynamical fermion simulations. The mass dependence of the system is investigated for non-compact QED and along the β = 2.0 axis, which is close to a system with only four-fermion interactions. These results are compared with solutions of the gap equation in order to determine whether the data are consistent with a mean-field interpretation. An interpolation technique intended to improve the utilisation of the available data is investigated. The simulation program is also described in detail as a case study of a parallel implementation of a lattice gauge theory. The implementation of QCD on an i860-based parallel computer is described in depth, including a description of how code is optimised for the i860, an analysis of the time-critical portions of the code and a discussion of how these routines were implemented. Timings for these routines are given, and some results from these simulations are presented.
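
On the random-number question, the essential requirement is that every processor draws from a stream that cannot overlap any other. A minimal modern sketch of one standard solution, jump-ahead partitioning of a single sequence (using NumPy's PCG64 rather than the generators reviewed in the thesis):

```python
# Sketch of jump-ahead stream partitioning for parallel simulation:
# each processor's stream is the base sequence advanced by a disjoint
# block, so streams cannot overlap. NumPy's PCG64 is used purely for
# illustration; it is not one of the generators reviewed in the thesis.
import numpy as np

def worker_rng(seed, rank):
    """Stream for processor `rank`: the base sequence jumped ahead by
    `rank` blocks of 2**127 draws."""
    return np.random.Generator(np.random.PCG64(seed).jumped(rank))

rngs = [worker_rng(seed=42, rank=r) for r in range(4)]   # one per processor
print([rng.random() for rng in rngs])                    # distinct, reproducible
```
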
4. Approximate procedures for simulation and synthesis of nonideal separation systems

Matias, Teresa do Rosario Senos January 1997
Simulation and synthesis of nonideal separation systems is computationally intensive. The main reasons are the time spent calculating physical properties, which cannot be assumed constant throughout the calculation, and the elaborate methods required for full rigorous simulation and design of distillation units. The present work looks at two different ways of reducing computing time in steady-state simulation and synthesis of nonideal separation systems:
• Use of approximate models for physical property calculation.
• Use of 'shortcut' procedures, which are thermodynamically rigorous, in simulation and synthesis of nonideal distillation.
Approximate models are derived for the liquid activity coefficient and for relative volatilities within a simplified flash unit. The liquid activity coefficient models include a Margules-like equation generalised to multicomponent mixtures and other equations in the form of rational functions. They are tested with several nonideal ternary mixtures, and it is shown how their behaviour changes across the ternary composition diagram. The development of simplified flash units with approximate physical properties is carried out in a dual-level flowsheeting environment. One level solves the material balance assuming given fixed relative volatilities; the other approximates the physical property values based on rigorous bubble-point data obtained from a rigorous physical property package, using an 'ideal' correction to calculate the vapour-liquid equilibrium conditions. It is shown how the two levels can be used in different arrangements, by converging them simultaneously or one within the other. The performance of the dual-level flowsheeting arrangements is tested using the Cavett problem structure for several mixtures and compared against the conventional method, in which the flash is performed directly by the rigorous physical property package. Finally, a rigorous shortcut procedure has been developed for designing nonideal distillation processes. The procedure is based on a nonideal variation of the Fenske equation with rigorous physical properties, solved iteratively, and is implemented in a package for automated synthesis incorporating heat integration. An example case is studied and the synthesis results are compared with a full rigorous simulation of the same process. It is shown for the first time how a rigorous shortcut procedure can be used in synthesis to produce results that consider heat integration in the initial stages of design, within a reasonable amount of time.
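
For reference, the classical constant-relative-volatility Fenske equation that the thesis's nonideal, iterative variant generalises can be computed directly (a sketch of the textbook form, not of the thesis's procedure):

```python
# Classical Fenske equation: minimum equilibrium stages at total reflux
# for a light-key (LK) / heavy-key (HK) split, assuming a constant
# average relative volatility. The thesis's variant replaces this
# constant-alpha assumption with rigorous properties, solved iteratively.
import math

def fenske_min_stages(x_lk_d, x_hk_d, x_lk_b, x_hk_b, alpha_avg):
    """d = distillate mole fractions, b = bottoms mole fractions."""
    separation = (x_lk_d / x_hk_d) * (x_hk_b / x_lk_b)
    return math.log(separation) / math.log(alpha_avg)

# 95/5 light-key split overhead, 5/95 in the bottoms, average alpha = 2.5
print(fenske_min_stages(0.95, 0.05, 0.05, 0.95, 2.5))   # ~6.4 stages
```
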
5. Knowledge based techniques in plant design for safety

Waters, Anthony January 1991
This thesis is concerned with computer support for the Loss Prevention activities that take place during process design. The scope is deliberately wide because the central problem is to 'get the design right'. This in turn requires consideration of 'What is a good design?' and 'How can one represent designs computationally?'. Added to this are strategic issues of how a plant design should proceed and how safety techniques can best be integrated into the overall design structure. One therefore needs to address both the fundamental questions concerning the nature of reasoning and the database/communication problems of managing a large design project. An example problem from each area is explored in this thesis, which has a common introduction but then divides into two parts. The first part is devoted to 'bottom-up qualitative reasoning', which tries to predict system behaviour without performing detailed numerical simulations. The aim was to capture the sorts of reasoning that a HAZOP team might use whilst working through a set of failure scenarios. This problem is addressed both by the use of rules to directly represent causality and by the use of qualitative simulation, a technique from Artificial Intelligence (AI). The inferencing power of the latter approach is shown to be far superior, but severe efficiency problems result. The second part addresses the problem of checking the validity and satisfaction of the designer's intentions at all stages of design. To explore this, the author has extended the features of an AI toolkit called Knowledge Craft to allow its use as a database management system for design knowledge. The resulting software, the Constraint Tools System, has three main components:
1. The Constraint System, used to create constraints and to control their application throughout the design process.
2. The Propagating-relation System, used to specify mappings in data-sharing situations and to automatically update equivalent values.
3. The Refinement Manager, which uses Knowledge Craft contexts to represent hierarchical design.
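
The flavour of the bottom-up qualitative reasoning can be conveyed with a toy sign-propagation example (hypothetical plant variables; full qualitative simulation also tracks landmark values and time, which this sketch omits):

```python
# Toy sketch of HAZOP-style qualitative reasoning: deviations '+', '-',
# '0' are pushed through signed causal links instead of running any
# numerical simulation. Plant variables here are hypothetical.
CAUSES = {                                   # (cause, effect): link sign
    ('feed_flow', 'vessel_level'): '+',      # more feed raises the level
    ('vessel_level', 'outlet_pressure'): '+',
    ('cooling_flow', 'temperature'): '-',
}

def combine(deviation, link_sign):
    """Qualitative product: a '-' link inverts the deviation."""
    if deviation == '0':
        return '0'
    return deviation if link_sign == '+' else ('+' if deviation == '-' else '-')

def propagate(variable, deviation):
    """Trace the downstream consequences of one guideword deviation."""
    for (cause, effect), sign in CAUSES.items():
        if cause == variable:
            result = combine(deviation, sign)
            print(f'{variable} {deviation}  =>  {effect} {result}')
            propagate(effect, result)

propagate('feed_flow', '+')   # the 'MORE flow' guideword prints:
# feed_flow +  =>  vessel_level +
# vessel_level +  =>  outlet_pressure +
```
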
6. The application of parallel computation to process simulation for the structured design of IC fabrication processes

Alexander, Walter James Cunningham January 1992
The ability of semiconductor process simulation to analyse the physical effects of individual fabrication steps and their interaction within an entire process has gained increasing recognition within the industry. Simulation has been applied to the synthesis of nominal operating points and has offered substantial reductions in both time and expenditure compared with experimental runs in this role. Semiconductor companies are also realising that both performance and manufacturability must be designed into new technologies from their inception. This concept of Design for Manufacturability (DFM) can be implemented by linking process simulation with statistically based experimental design and analysis tools. However, neither the software framework nor the underlying computational resource currently exists to provide the level of system integration required to support DFM within a commercial environment. The thesis first introduces a method for enhancing the performance of process simulation software by exploiting the parallel computing power of the INMOS transputer. A parallel implementation of the one-dimensional simulator SUPREM-II has been developed which demonstrates the computational performance that is economically attainable and readily scalable using this technology. The system has then been extended to provide a fully functional DFM environment by automatically integrating the parallel process simulation capability with the experimental design and analysis software RS/1. A review of parallel computing systems, semiconductor fabrication control, process simulation and experimental design/analysis is also provided to complement the presentation of the original contributions outlined above.
7. Feature based design : integration of CAD and CAM

Chaharbaghi, H. January 1992
The product model is the primary source of input for a process planning system. The basic requirement of a product modeller in the area of Generative Computer-Aided Process Planning (GCAPP) is to generate a complete, exact, unambiguous 3D product representation which is directly accessible to automated planning. Such a representation must include geometry, material specification, surface finish data, features and their relationships to other features, and tolerances. The representation has to be complete, since in an automated environment interactive input of missing information at a later stage must be avoided: automation of process planning requires the product data to be extracted from the product model without human interaction. With respect to these requirements, a principle called Feature Based Design has been investigated and developed. This method provides a part description at the design stage which is suitable for a GCAPP system. If a fully automated GCAPP is consolidated into the system, a real integration of CAD and CAM will be achieved, supporting the simultaneous-engineering philosophy as well as providing a true foundation for CIM.
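
What such a complete, unambiguous feature-based representation might look like as a data structure is sketched below (the names are illustrative, not taken from the thesis):

```python
# Hypothetical sketch of a feature-based product representation for
# generative CAPP: each feature carries geometry, tolerance, surface
# finish and its relationships to other features, so process planning
# needs no interactive input at a later stage.
from dataclasses import dataclass, field

@dataclass
class Feature:
    kind: str                    # e.g. 'hole', 'slot', 'pocket'
    dimensions: dict             # nominal geometry, mm
    tolerance: float             # mm
    surface_finish: float        # Ra, micrometres
    parents: list = field(default_factory=list)  # features it depends on

@dataclass
class Product:
    material: str                # material specification
    features: list

face = Feature('face', {'length': 120.0, 'width': 80.0}, 0.1, 3.2)
hole = Feature('hole', {'diameter': 10.0, 'depth': 25.0}, 0.05, 1.6,
               parents=[face])   # drilled into the machined face
part = Product(material='EN8 steel', features=[face, hole])
```
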
8. Integration of software tools to aid the implementation of a DFM strategy

Nilsen, Vidar K. January 2000
This thesis reports on the design and operation of three software tools that have been developed to integrate commercial analysis packages into an existing TCAD framework. FASTT (Factory Simulation in Total TCAD) and CASTT (Cost AnalysiS in Total TCAD) automate the creation and simulation of factory and cost-of-ownership models respectively, and the extraction of results from them, whilst MASTT (MAnufacturing execution System in Total TCAD) ensures that up-to-date modelling data is readily available. Together they enable faster and simpler analysis of manufacturing issues than traditional model-building techniques, enhancing the existing development tool set and for the first time allowing manufacturing analysis to become a routine part of process development. The thesis introduces the background to process development, the existing tool set and the packages integrated by FASTT, CASTT and MASTT. Examples are used to illustrate the ease of use of the software tools and to highlight their potential. These include: the use of FASTT to identify potential production bottlenecks and capacity; the identification of low-cost production for a range of potential process options using CASTT; cycle time, throughput and cost-of-ownership analysis using both FASTT and CASTT to highlight the manufacturing differences of alternative dielectric process steps; and finally, the role of MASTT during TCAD analysis in identifying corrective processing after wafers have received an incorrect implant.
9. Portable lattice QCD software for massively parallel processor systems

Stanford, Nicholas Paul January 1994
Quantum Chromodynamics (QCD), which models the interactions of quarks and gluons, forms part of the Standard Model, currently the best theoretical framework of unified particle interactions. Lattice QCD is a method of simulating the theory of QCD in a discretised form on computers. This approach to particle physics is vitally important for providing comparisons with experimental measurements and for predicting new particle properties. Lattice QCD requires very high-performance computers, the latest generation of which are known as Massively Parallel Processors (MPPs). These are available in two main architectures: Multiple Instruction Multiple Data (MIMD) and Single Instruction Multiple Data (SIMD). We present a suite of lattice QCD software intended to be portable across all currently available MPP platforms. This is achieved by utilising emerging standards in parallel programming languages: subset High Performance Fortran for SIMD machines, and the PVM message-passing package, with provision for the forthcoming Message Passing Interface (MPI) standard, for MIMD machines. Software engineering techniques are used to design and document a package which delivers a high output of physics results without a large investment in optimisation for new platforms, while still preserving the major requirements of reduced memory demands, increased speed and user understanding. Detailed procedures for testing the package and validating results are presented, without which there could be little confidence in the physics generated. To evaluate the efficiency of the software suite we present timings for important code sections generated on a range of MPP platforms.
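
The core communication pattern of such a message-passing lattice code is the exchange of boundary ('halo') sites between neighbouring processors. A minimal sketch with mpi4py, purely for illustration; the thesis's suite uses PVM (with provision for MPI) from Fortran:

```python
# Minimal halo exchange for a 1-D domain decomposition with periodic
# boundaries, the basic communication pattern of a MIMD lattice code.
# mpi4py is used for illustration only. Run with: mpirun -np 4 python halo.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size   # periodic lattice

local = np.full(8, float(rank))   # this processor's slice of lattice sites
halo_left = np.empty(1)           # ghost site from the left neighbour
halo_right = np.empty(1)          # ghost site from the right neighbour

# send our right edge rightwards while receiving our left halo, and vice versa
comm.Sendrecv(local[-1:], dest=right, recvbuf=halo_left, source=left)
comm.Sendrecv(local[:1], dest=left, recvbuf=halo_right, source=right)

print(f'rank {rank}: halo_left={halo_left[0]}, halo_right={halo_right[0]}')
```
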
10. Investigation of object oriented programming techniques for embedded generation switchgear design

McCabe, Paul Robert January 2000
An area of particular concern is the design of electrical switchgear within embedded generation systems. Currently, the electrical designs for embedded generation installations are prepared without the use of any specialist design tools, software-based or otherwise. This situation renders the switchgear design process reliant upon bespoke, ill-defined and unoptimised methods. Such practices are labour-intensive, error-prone and require substantial expertise in all aspects of power protection and distribution systems. This thesis investigates the use of object-oriented programming to develop a software tool that assists with the complete design and specification of embedded generation switchgear. By capturing the design rationale in such a tool, fast, accurate and checked switchgear designs may be produced by developers without extensive previous switchgear design experience. The thesis describes how the design process for switchgear may be rationalised on the basis of design methodology, taking account of legal and regulatory codes of practice such as G59. A complete and general review of artificial intelligence techniques and programming paradigms considered suitable for capturing design reasoning is presented. All aspects of the switchgear design are modelled, including protection and instrumentation equipment, auxiliary power supplies and sundry components. Internal component selection, connection and loading are automated, ensuring that a complete, fully specified switchgear installation is produced. The techniques investigated illustrate that cooperating, interacting networks of software objects may be used to assist with and perform switchgear design, and they could readily be adapted for use in other, non-related design domains within which complex interdependent architectures and relationships exist.
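
The object-oriented approach can be suggested with a small hypothetical sketch: component objects that carry their own ratings, so that a design rule can select and check equipment automatically (names and ratings are illustrative, not from the thesis):

```python
# Hypothetical sketch (not the thesis's classes): switchgear components
# are objects that know their own ratings, so selection against a
# design rule can be automated and checked.
from dataclasses import dataclass

@dataclass
class CircuitBreaker:
    model: str
    rated_current: float       # amperes
    breaking_capacity: float   # kiloamperes

CATALOGUE = [
    CircuitBreaker('CB-100', 100.0, 10.0),
    CircuitBreaker('CB-250', 250.0, 25.0),
    CircuitBreaker('CB-400', 400.0, 36.0),
]

def select_breaker(load_current, fault_level, margin=1.25):
    """Pick the smallest breaker whose ratings cover the circuit,
    applying a safety margin to the load current (the design rule)."""
    for cb in sorted(CATALOGUE, key=lambda c: c.rated_current):
        if (cb.rated_current >= margin * load_current
                and cb.breaking_capacity >= fault_level):
            return cb
    raise ValueError('no catalogue breaker satisfies the duty')

print(select_breaker(load_current=180.0, fault_level=20.0).model)   # CB-250
```
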
