311

Conceptual design of a commercial-Tokamak-hybrid-reactor fueling system

Matney, Kenneth Dale, Commercial Tokamak Hybrid Reactor. January 2011 (has links)
Digitized by Kansas Correctional Industries
312

A systematic, experimental methodology for design optimization

Ritchie, Paul Andrew, 1960- January 1988 (has links)
Much attention has been directed at off-line quality control techniques in the recent literature. This study is a refinement of and an enhancement to one such technique, the Taguchi Method, for determining the optimum settings of design parameters in a product or process. In place of the signal-to-noise ratio, the mean square error (MSE) of each quality characteristic of interest is used. Polynomial models describing the mean response and the variance are fitted to the observed data using statistical methods. The settings of the design parameters are determined by minimizing a multicriterion objective consisting of the MSE of each quality characteristic of interest. Minimum bias central composite designs are used during the data collection step to determine the parameter settings at which observations are to be taken. The development of minimum bias designs for various cases is included, and a detailed example is given.
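To make the abstract's procedure concrete, here is a minimal sketch, not taken from the thesis, of MSE-based parameter selection for a single design parameter and a single quality characteristic: polynomial models are fitted to the observed means and variances, and the setting minimising MSE(x) = variance(x) + (mean(x) - target)^2 is picked. The data, the quadratic model order and the target value are illustrative assumptions; with several quality characteristics the individual MSEs would be combined into one multicriterion objective.

```python
import numpy as np

# Illustrative data: replicated observations of one quality characteristic
# at several settings of a single design parameter x (hypothetical values).
x_obs = np.repeat([0.0, 0.5, 1.0, 1.5, 2.0], 4)
y_obs = 5.0 + 2.0 * x_obs - 0.8 * x_obs**2 + np.random.default_rng(1).normal(0, 0.3, x_obs.size)
target = 6.0  # desired value of the quality characteristic

# Fit polynomial models for the mean response and the variance (per setting).
levels = np.unique(x_obs)
means = np.array([y_obs[x_obs == x].mean() for x in levels])
variances = np.array([y_obs[x_obs == x].var(ddof=1) for x in levels])
mean_model = np.poly1d(np.polyfit(levels, means, 2))
var_model = np.poly1d(np.polyfit(levels, variances, 2))

# MSE(x) = variance(x) + (mean(x) - target)^2; minimise over a grid of settings.
grid = np.linspace(levels.min(), levels.max(), 201)
mse = np.clip(var_model(grid), 0, None) + (mean_model(grid) - target) ** 2
best = grid[np.argmin(mse)]
print(f"setting that minimises the estimated MSE: x = {best:.2f}")
```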
313

Methodologies for the optimization of fibre-reinforced composite structures with manufacturing uncertainties

Hamilton, Ryan Jason January 2006 (has links)
Thesis (M.Tech.: Mechanical Engineering)--Dept. of Mechanical Engineering, Durban University of Technology, 2006. xv, iii, 108 leaves / Fibre Reinforced Plastics (FRPs) have been used in many practical structural applications because of their excellent strength and weight characteristics, as well as the ability to tailor their properties to the requirements of a given application. This tailorability, however, can make designing with FRPs extremely challenging, particularly when the number of design variables in the design space is large; for example, determining the ply orientations and the material properties optimally is typically difficult without a considered approach. Optimization of composite structures with respect to the ply angles is necessary to realize the full potential of fibre-reinforced materials. Evaluating the fitness of each candidate in the design space and selecting the most efficient one can be very time consuming and costly. Structures composed of composite materials often contain components which may be modelled as, for example, rectangular plates or cylindrical shells. Modelling components such as plates is useful as a means of simplifying elements of structures, which saves time and thus cost. Variations in manufacturing processes and user environment may affect the quality and performance of a product. It is usually beneficial to account for such variances or tolerances in the design process, and sometimes it is crucial, particularly when their effect is significant. The work conducted within this project focused on methodologies for optimally designing fibre-reinforced laminated composite structures with the effects of manufacturing tolerances included. For this study it is assumed that any tolerance value within the tolerance band is equally likely to occur, and the techniques are therefore aimed at designing for the worst-case scenario. This thesis thus discusses four new procedures for the optimization of composite structures with the effects of manufacturing uncertainties included.
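The worst-case treatment of tolerances described above can be sketched as follows; the ±3° ply-angle tolerance, the candidate layups and the stiffness-like objective are purely illustrative stand-ins for the laminate analyses used in the thesis.

```python
import itertools
import numpy as np

TOL = 3.0  # assumed ply-angle manufacturing tolerance, +/- degrees

def performance(angles_deg):
    """Illustrative stand-in for a laminate analysis (e.g. a stiffness measure).
    A real study would call a classical-lamination-theory or FE routine here."""
    a = np.radians(angles_deg)
    return float(np.sum(np.cos(2 * a) ** 2))

def worst_case(angles_deg):
    """Evaluate a candidate at every corner of the tolerance band and keep the
    worst value -- the 'design for the worst-case scenario' idea in the abstract."""
    corners = itertools.product(*[(t - TOL, t + TOL) for t in angles_deg])
    return min(performance(np.array(c)) for c in corners)

# Crude search over a few candidate layups (ply angles in degrees).
candidates = [(0, 45, 90), (0, 30, 60), (15, 45, 75), (0, 0, 90)]
best = max(candidates, key=worst_case)
print("best worst-case layup:", best, "value:", round(worst_case(best), 3))
```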
314

Taguchi methods in internal combustion engine optimisation

Green, Jeremy James 12 1900 (has links)
Thesis (MScEng)--University of Stellenbosch, 2001. / ENGLISH ABSTRACT: Statistical experimental design techniques are powerful tools that are often approached with suspicion and apprehension by experimenters. The trend is to avoid any statistically structured and designed experimentation programme, and rather to use the traditional method of following one's "gut feel". This approach, more often than not, will supply a satisfactory solution, but there is so much more information available for the same amount of effort. This thesis strives to outline the method and application of the Taguchi methodology of experimental design. The Taguchi method is a practical, statistical experimental design technique that does not rely on the designer's knowledge of the complex statistics typically needed to design experimental programmes, a fact that tends to exclude design of experiments from the average engineer's toolbox. The essence of the statistical design of experiments is this: the traditional method of varying one variable at a time and investigating its effect on an output is no longer sufficient. Instead, all the input variables are varied at the same time in a structured manner. The output trends resulting from each input variable are then statistically extracted from the data in the midst of the variation. The Taguchi method achieves this by designing experiments where every level of every input variable occurs an equal number of times with every level of every other input variable. The experimental designs are represented in orthogonal arrays that are chosen and populated by the experimenter by following a simple procedure. Four case studies are worked through in this text and, where possible, compared to the "traditional" approach to the same problem. The case studies show the additional information and time savings available with the Taguchi method, as well as clearly indicating the importance of using a stable system on which to do the experiments. The Taguchi method generated more information in fewer experiments than the traditional approaches, as well as allowing analysis of problems too complex to analyse without a statistical design of the experimentation procedure. / AFRIKAANSE OPSOMMING [translated]: Statistical experimental design techniques are exceptionally powerful instruments that are often regarded with suspicion by experimenters. The tendency is to avoid any statistically structured and designed experimental programme and rather to use the traditional method, which relies on one's intuition. This approach will often give a satisfactory solution, but far more information is obtainable for the same amount of effort when the Taguchi method is used. This thesis aims to present the method and application of the Taguchi methodology of experimental design. The Taguchi method is a practical statistical experimental design technique that does not rely on the designer's knowledge of the complex statistics needed to design experimental programmes; this complex statistics also tends to exclude experimental design from the average engineer's skill set. The essence of statistical experimental design is the following: the traditional method of varying one variable at a time to investigate its effect on the output is insufficient. Instead, all the input variables are varied simultaneously in a structured manner.
The trends of each variable are then statistically extracted from the data amid the variation of all the other variables. The Taguchi method achieves the designed experiments by having every level of every input variable occur an equal number of times with every level of every other input variable. This is represented by orthogonal arrays that are chosen and populated by following a simple procedure. Four case studies were worked through and, where possible, compared with the traditional approach to the same problem. The case studies show how valuable the additional information from the Taguchi method applications is. They also emphasise the importance of a stable system on which the experiments rest. The Taguchi method supplied more information with fewer experiments than the traditional approaches, and also allowed problems too complex to analyse without a statistical design of the experimental procedure to be solved.
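As a small illustration of the orthogonal-array idea in the abstract above, the sketch below uses the standard L4 array for three two-level factors and computes the main effect of each factor from four hypothetical responses; the response values are invented for the example.

```python
import numpy as np

# Standard L4 orthogonal array: 3 two-level factors in 4 runs.  Every level of
# every factor occurs equally often with every level of every other factor.
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])

# Hypothetical measured responses for the four runs (e.g. an engine output).
y = np.array([12.1, 14.0, 13.2, 15.3])

# Main effect of each factor: mean response at level 2 minus mean at level 1.
for j, name in enumerate(["A", "B", "C"]):
    effect = y[L4[:, j] == 2].mean() - y[L4[:, j] == 1].mean()
    print(f"factor {name}: main effect = {effect:+.2f}")
```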
315

Compact reliability and maintenance modeling of complex repairable systems

Valenzuela Vega, Rene Cristian 22 May 2014 (has links)
Maintenance models are critical for evaluating alternative maintenance policies for modern engineering systems. A poorly selected policy can result in excessive life-cycle costs as well as unnecessary risk of catastrophic failure of the system. Economic dependence refers to the difference between the cost of combining the maintenance of a number of components and the cost of performing the same maintenance actions individually; maintenance that takes advantage of this difference is often called opportunistic. A large number of components and economic inter-dependence are two pervasive characteristics of modern engineering systems that make the modeling of their maintenance processes particularly challenging. Simulation is able to handle both of these characteristics computationally, but the complexity, especially from the model verification perspective, becomes overwhelming as the number of components increases. This research introduces a new procedure for maintenance models of multi-unit repairable systems with economic dependence among their components and under opportunistic maintenance policies. The procedure is based on the stochastic-Petri-nets-with-aging-tokens modeling framework; it uses a component-level modeling approach to overcome the state explosion of the model, combined with a novel order-reduction scheme that effectively combines the impact of the other components into a single distribution. The justification for the scheme is provided, its accuracy is assessed, and applications to systems of realistic complexity are considered.
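The abstract's Petri-net-based procedure is not reproduced here, but the economic dependence it exploits can be illustrated with a toy event-driven simulation: every failure stop incurs a shared set-up cost, and an opportunistic policy uses that stop to renew other aged components as well. The Weibull life distribution, the costs and the age limit are all assumed values chosen only to make the comparison visible.

```python
import random

random.seed(0)
SETUP, REPAIR, HORIZON = 10.0, 3.0, 2000.0
N_COMP, AGE_LIMIT = 4, 12.0
SCALE, SHAPE = 20.0, 2.5   # Weibull lives with wear-out (shape > 1), so early renewal can pay off

def draw_life():
    return random.weibullvariate(SCALE, SHAPE)

def cost_rate(opportunistic):
    """Event-driven simulation: each failure stop incurs one shared set-up cost;
    the opportunistic policy additionally renews components older than AGE_LIMIT."""
    t, cost = 0.0, 0.0
    ages = [0.0] * N_COMP
    lives = [draw_life() for _ in range(N_COMP)]
    while True:
        dt = min(l - a for l, a in zip(lives, ages))  # time to next failure
        t += dt
        if t > HORIZON:
            return cost / HORIZON
        ages = [a + dt for a in ages]
        cost += SETUP
        for i in range(N_COMP):
            if ages[i] >= lives[i] - 1e-9 or (opportunistic and ages[i] > AGE_LIMIT):
                cost += REPAIR
                ages[i], lives[i] = 0.0, draw_life()

print("cost rate, separate repairs   :", round(cost_rate(False), 3))
print("cost rate, opportunistic joint:", round(cost_rate(True), 3))
```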
316

Comparative evaluation of the model-centred and the application-centred design approach in civil engineering software

Sinske, A. N. (Alexander Nicholas) 12 1900 (has links)
Thesis (PhD)--University of Stellenbosch, 2002. / ENGLISH ABSTRACT: In this dissertation the traditional model-centred (MC) design approach for the development of software in the civil engineering field is compared to a newly developed application-centred (AC) design approach. In the MC design, software models play the central role. A software model maps part of the world, for example its visualization or analysis, onto the memory space of the computer. Characteristic of the MC design is that the identifiers of objects are unique and persistent only within the name scope of a model, and that the classes which define the objects are components of the model. In the AC design all objects of the engineering task are collected in an application. The identifiers of the objects are unique and persistent within the name scope of the application, and classes are no longer components of a model but components of the software platform. This means that an object can be a part of several models. It is investigated whether the demands on information and communication in modern civil engineering processes can be satisfied using the MC design approach. The investigation is based on the evaluation of existing software for the analysis and design of a sewer reticulation system of realistic dimensions and complexity. Structural, quantitative, as well as engineering complexity criteria are used to evaluate the design. For the evaluation of the quantitative criteria, in addition to the actual Duration of Execution, a User Interaction Count, the Persistent Data Size, and a Basic Instruction Count based on a source code complexity analysis are introduced. The analysis of the MC design shows that the solution of an engineering task requires several models. The interaction between the models proves to be complicated and inflexible due to the limitation of object identifier scope: the engineer is restricted to the concepts of the software developer, who must provide static bridges between models in the form of data files or software transformers. The concept of the AC design approach is then presented and implemented in a new software application written in Java. This application is also extended for the distributed computing scenario. New basic classes are defined to manage the static and dynamic behaviour of objects, and to ensure the consistent and persistent state of objects in the application. The same structural and quantitative analyses are performed using the same test data sets as for the MC application. It is shown that the AC design approach is superior to the MC design approach with respect to structural, quantitative and engineering complexity criteria. With respect to the design structure, the limitation of object identifier scope, and thus the requirement for bridges between models, falls away, which is of particular value for the distributed computing scenario. Although the new object management routines introduce an overhead in the duration of execution for the AC design compared to a hypothetical MC design with only one model and no software bridges, the advantages of the design structure outweigh this potential disadvantage. / AFRIKAANSE OPSOMMING [translated]: In this dissertation the traditional model-centred (MC) design approach for the development of software for the civil engineering field is compared with a newly developed application-centred (AC) design approach. In the MC design, software models play a central role.
A software model maps part of the world, for example its visualization or analysis, onto the memory space of the computer. Characteristics of the MC design are that the identifiers of objects are unique and persistent only within the name scope of a model, and that the classes that define the objects are components of the model. In the AC design all objects of the engineering task are gathered in an application. The identifiers of the objects are unique and persistent within the name scope of the application, and classes are no longer components of the model but components of the software platform. This means that an object can form part of a number of models. It is investigated whether the MC design approach can meet the requirements placed on information and communication in modern civil engineering processes. The investigation is based on the evaluation of existing software for the analysis and design of a sewer reticulation system of realistic dimensions and complexity. Structural, quantitative, as well as engineering complexity criteria are used to evaluate the design. For the evaluation of the quantitative criteria, a user interaction count, the persistent data size and a basic instruction count based on a source code complexity analysis are introduced in addition to the duration of execution. The analysis of the MC design shows that the solution of engineering tasks requires a number of models. The interaction between the models proves to be complex and inflexible as a result of the limitation of object identifier scope: the engineer is restricted to the concepts of the software developer, who must provide static bridges between models in the form of files or software transformers. The AC design approach is then proposed and implemented in a new software application written in Java. The application is also extended for distributed processing in the computer network. New base classes are defined to manage the static and dynamic behaviour of objects and to ensure the consistent and persistent state of objects in the application. The same structural and quantitative analyses are performed with the same test data sets as for the MC design. It is shown that the AC design approach surpasses the MC design approach with respect to the structural, quantitative and engineering complexity criteria. With respect to the design structure, the limitation of object identifier scope, and thus the requirement for bridges between models, falls away, which is particularly advantageous for distributed processing in the computer network. Although the new object management routines in the AC design result in longer execution times compared with a hypothetical MC design containing only one model and no software bridges, the advantages of the design structure outweigh the potential disadvantages.
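A conceptual sketch of the application-centred idea follows, in Python rather than the Java used in the dissertation: object identifiers are unique and persistent in the name scope of the application, so the same object can participate in several models without data-file or transformer bridges. The class and attribute names are illustrative.

```python
class Application:
    """Holds all engineering objects; identifiers are unique application-wide."""
    def __init__(self):
        self._objects = {}

    def register(self, oid, obj):
        if oid in self._objects:
            raise KeyError(f"identifier {oid!r} already in use")
        self._objects[oid] = obj
        return obj

    def get(self, oid):
        return self._objects[oid]

class Pipe:
    """Example engineering object (illustrative attributes)."""
    def __init__(self, length_m, diameter_m):
        self.length_m, self.diameter_m = length_m, diameter_m

app = Application()
p = app.register("pipe-17", Pipe(length_m=50.0, diameter_m=0.3))

# Models are just views that reference application objects by identifier; the
# same pipe participates in a hydraulic and a visualisation model, with no
# data file or software transformer bridging the two.
hydraulic_model = {"pipe-17"}
visual_model = {"pipe-17"}
assert app.get("pipe-17") is p
```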
317

The effects of empathic experience design techniques on product design innovation

Saunders, Matthew Nelson 05 November 2010 (has links)
The effects of empathic experience design (EED) on the product design process are investigated through a series of product redesign experimental studies. As defined, empathic experience design is the simulation of the experiences of a lead user, or someone who uses a product in an extreme condition. To better understand product innovation, the link between creativity in engineering design and commercial market success is explored through the literature, and a study of award-winning products is performed to analyze current trends in innovation. The findings suggest that products are becoming increasingly innovative in the ways in which they interact with users and their surroundings, and that a gap exists between the tools currently available for engineers to innovate and the types of innovations present in award-winning products. The application of EED to a concept generation study shows that empathic experiences while interacting with a prototype result in more innovative concepts than typical interactions do. The experimental group also saw an increase in user-interaction innovations and a decrease in technical feasibility. The application of EED to a customer needs study compares the effect of empathic experiences in an articulated-use interview setting. The EED interviews discovered 2.5 times as many latent customer needs as the control group. A slight decrease in the breadth of topics covered was also seen, but this was compensated for when EED was used in conjunction with categorical questioning. Overall, the use of empathic experience design is shown to increase the level of innovation throughout the product design process.
318

Measurement evaluation and FEM simulation of bridge dynamics

Andersson, Andreas, Malm, Richard January 2004 (has links)
The aim of this thesis is to analyse the effects of train-induced vibrations in a steel Langer beam bridge. A case study of a bridge over the river Ljungan in Ånge has been made by analysing measurements and comparing the results with a finite element model in ABAQUS. The critical details of the bridge are the hangers that are connected to the arches and the main beams. A stabilising system had been installed in order to reduce the vibrations, which would lead to an increased service life of the bridge.
Initially, the background to this thesis and a description of the studied bridge are presented. An introduction to the theories that have been applied is given, and the modelling procedure in ABAQUS is described.
The measurements investigated the induced strain and accelerations in the hangers. The natural frequencies, the corresponding damping coefficients and the displacements these vibrations lead to have been evaluated, as have the vibration-induced stresses, which could lead to fatigue. The measurements were made after the existing stabilising system had been dismantled, and as a result the risk of fatigue is excessive. The results were separated into two parts: train passage and free vibrations. This shows that the free vibrations contribute more, and a longer life expectancy could be achieved by introducing dampers to reduce the amplitude of the free vibrations.
The finite element modelling is divided into four categories: general static analysis, eigenvalue analysis, dynamic analysis and a detailed analysis of the turnbuckle in the hangers. The deflection of the bridge and the initial stresses due to gravity load were evaluated in the static analysis. The eigenfrequencies were extracted in an eigenvalue analysis, both for the hangers and for the global modes of the bridge. The main part of the finite element modelling involves the dynamic simulation of the train passing the bridge. The model shows that the longer hangers vibrate excessively during the train passage because of resonance. An analysis of a model with a stabilising system shows that the vibrations are damped in the direction along the bridge but are instead increased in the perpendicular direction. The results from the model agree with the measured data when dealing with stresses. When comparing the results concerning the displacement of the hangers, accurate filtering must be applied to obtain similar results.
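As a pointer to the kind of eigenvalue results discussed above, the sketch below computes the natural frequencies of a hanger idealised as a taut string, f_n = (n / 2L) * sqrt(T / m); it also shows why the longer hangers end up with the lower frequencies that are prone to resonance during train passage. The tension, mass and length values are illustrative and not taken from the Ljungan bridge measurements.

```python
import numpy as np

def hanger_frequencies(length_m, tension_N, mass_per_m, n_modes=3):
    """Natural frequencies of a hanger idealised as a taut string:
    f_n = (n / (2 L)) * sqrt(T / m).  Bending stiffness is neglected."""
    n = np.arange(1, n_modes + 1)
    return n / (2.0 * length_m) * np.sqrt(tension_N / mass_per_m)

# Illustrative hanger data (not taken from the thesis measurements).
for L, label in [(6.0, "short hanger"), (14.0, "long hanger")]:
    f = hanger_frequencies(L, tension_N=400e3, mass_per_m=60.0)
    print(label, "first modes [Hz]:", np.round(f, 2))
```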
319

Simulation of turbocharged SI-engines - with focus on the turbine

Westin, Fredrik January 2005 (has links)
The aim is to share experience gained when simulating (and doing measurements on) the turbocharged SI-engine, as well as to describe the limits of the current state of the technology. In addition, an overview of current boosting systems is provided.
The target readers of this text are engineers in the engine industry and in academia who will come into contact with, or are experienced in, 1D engine performance simulation and/or boosting systems. The text therefore assumes general knowledge about engines.
The papers included in the thesis are, in reverse chronological order:
[8] SAE 2005-XX-XXX, Calculation accuracy of pulsating flow through the turbine of SI-engine turbochargers - Part 2: Measurements, simulation correlations and conclusions, Westin & Ångström. To be submitted to the 2005 SAE Powertrain and Fluid Systems Conference in San Antonio.
[7] SAE 2005-01-2113, Optimization of Turbocharged Engines' Transient Response with Application on a Formula SAE / Student engine, Westin & Ångström. Approved for publication at the 2005 SAE Spring Fuels and Lubricants Meeting in Rio de Janeiro.
[6] SAE 2005-01-0222, Calculation accuracy of pulsating flow through the turbine of SI-engine turbochargers - Part 1: Calculations for choice of turbines with different flow characteristics, Westin & Ångström. Published at the 2005 SAE World Congress in Detroit, April 11-14, 2005.
[5] SAE 2004-01-0996, Heat Losses from the Turbine of a Turbocharged SI-Engine - Measurements and Simulation, Westin, Rosenqvist & Ångström. Presented at the 2004 SAE World Congress in Detroit, March 8-11, 2004.
[4] SAE 2003-01-3124, Simulation of a turbocharged SI-engine with two software and comparison with measured data, Westin & Ångström. Presented at the 2003 SAE Powertrain and Fluid Systems Conference in Pittsburgh.
[3] SIA C06, Correlation between engine simulations and measured data - experiences gained with 1D-simulations of turbocharged SI-engines, Westin, Elmqvist & Ångström. Presented at the SIA International Congress "SIMULATION, an essential tool for risk management in industrial product development" in Poissy, Paris, September 17-18, 2003.
[2] IMechE C602/029/2002, A method of investigating the on-engine turbine efficiency combining experiments and modelling, Westin & Ångström. Presented at the 7th International Conference on Turbochargers and Turbocharging in London, 14-15 May, 2002.
[1] SAE 2000-01-2840, The Influence of Residual Gases on Knock in Turbocharged SI-Engines, Westin, Grandin & Ångström. Presented at the SAE International Fall Fuels and Lubricants Meeting in Baltimore, October 16-19, 2000.
The first step in the investigation of the simulation accuracy was to model the engine as accurately as possible and to correlate it against measurements that were as accurate as possible. That work is covered in chapters 3 and 5 and in paper no. 3 in the list above. The scientific contribution here is to isolate the main inaccuracy to the simulation of turbine efficiency.
In order to have anything to compare the simulated turbine efficiency against, a method was developed that enables calculation of the crank-angle-resolved (CA-resolved) on-engine turbine efficiency from measured data, with a little support from a few simulated properties. That work was published in papers 2 and 8 and is the main scope of chapter 6 in the thesis. The scientific contributions here are several:
· The application on a running SI-engine is a first.
· It was proven that CA-resolution is absolutely necessary in order to have a physically and mathematically valid expression for the turbine efficiency. A new definition of the time-varying efficiency is developed.
· It tests an approach to cover possible mass accumulation in the turbine housing.
· It reveals that the common method for incorporating bearing losses, a constant mechanical efficiency, is too crude.
The next step was to investigate whether different commercial codes differ in their results, even though they use the same theoretical foundation. That work is presented in chapter 4, which corresponds to paper 4. This work has given useful input to the industry in the process of choosing simulation tools.
The next theory to test was whether heat losses were a major reason for the simulation inaccuracy. The scientific contribution in this part of the work was a model for the heat transport within the turbocharger that was developed, calibrated and incorporated in the simulations. It was concluded that heat losses contributed only a minor part of the inaccuracy, but that they were a major reason for a common simulation error in the turbine outlet temperature, which is very important when trying to simulate catalyst light-off. This work was published in paper 5 and is covered in chapter 7.
Chapter 8, and papers 6 and 8, cover the last investigation of this work. It is a broad study of the impact of design changes of both manifolds and turbines on simulation accuracy as well as on engine performance. The scientific contribution here is that the common theory that the simulation inaccuracy is proportional to the pulsation amplitude of the flow is shown to be invalid. It was shown that the reaction was of minor importance for the efficiency of the turbine in the pulsating engine environment. Furthermore, it presents a method to calculate internal flow properties in the turbine by using steady-flow design software in a quasi-steady procedure. Of more direct use for the industry is important information on how to design the manifolds, and it sheds more light on how the turbine works under unsteady flow, for instance that the throat area is the single most important property of the turbine and that the system is far more sensitive to this parameter than to any other design parameter of the turbine. Furthermore, it was proven that the variation among individual turbines is of minor importance, and that the simulation error was of similar magnitude for different turbine manufacturers.
Paper 7, and chapter 9, cover a simulation exercise where the transient performance of turbocharged engines is optimised with the help of factorial designs. It sorts out the relative importance of several design parameters of turbocharged engines and gives the industry important information on where to put the majority of the work in order to maximise efficiency in the optimisation process.
Overall, the work presented in this thesis has established a method for calibration of models to measured data in a sequence that makes the process efficient and accurate. It has been shown that the use of controllers in this process can save time and effort tenfold or more.
When designing turbocharged engines, the residual gas is a very important factor: it affects both knock sensitivity and volumetric efficiency. The flow in the cylinder is by its nature more than one-dimensional and is therefore not physically modelled in 1D codes; it is modelled through models of perfect mixing or perfect displacement, or a certain mix between them. Before the actual project started, the amount of residual gases in an engine was measured and its influence on knock was established and quantified. This was the scope of paper 1. This information has been useful when interpreting the model results throughout the entire work.
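A heavily simplified sketch of a crank-angle-resolved turbine efficiency follows: the instantaneous isentropic efficiency is the shaft power divided by the ideal power of an isentropic expansion over the measured pressure ratio. The method in papers 2 and 8 additionally handles mass accumulation in the turbine housing and bearing losses, which are omitted here, and all traces below are synthetic placeholders rather than measured data.

```python
import numpy as np

GAMMA, CP = 1.33, 1150.0  # exhaust-gas properties, assumed constant (J/(kg K))

def turbine_efficiency(mdot, T_in, p_in, p_out, shaft_power):
    """Instantaneous isentropic efficiency: actual shaft power divided by the
    ideal power of an isentropic expansion over the measured pressure ratio."""
    ideal = mdot * CP * T_in * (1.0 - (p_out / p_in) ** ((GAMMA - 1.0) / GAMMA))
    return shaft_power / np.maximum(ideal, 1e-9)

# Placeholder crank-angle-resolved traces (one engine cycle, 1-deg resolution).
ca = np.arange(0, 720)
mdot = 0.03 * (1.0 + 0.6 * np.sin(np.radians(3 * ca)))   # kg/s
T_in = 1100.0 + 50.0 * np.sin(np.radians(3 * ca))        # K
p_in = 1.8e5 + 0.5e5 * np.sin(np.radians(3 * ca))        # Pa
p_out = np.full_like(ca, 1.1e5, dtype=float)             # Pa
shaft = 9e3 * (1.0 + 0.3 * np.sin(np.radians(3 * ca)))   # W

eta = turbine_efficiency(mdot, T_in, p_in, p_out, shaft)
print("cycle-mean instantaneous efficiency:", round(float(eta.mean()), 3))
```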
320

Specifications extraction and synthesis: Their correlations with preliminary design.

Umaretiya, Jagdish R. January 1990 (has links)
This report addresses research applied towards the automation of the engineering design process, in particular the structural design process. The three important stages of the structural design process are: specifications, preliminary design and detailed design. An iterative redesign architecture of the structural design process lends itself to automation. The automation of structural design can improve both cost and reliability, and enhance the productivity of human designers. To the extent that the assumptions involved in the design process are explicitly represented and automatically enforced, the design errors resulting from violated assumptions can be avoided. Artificial Intelligence (AI) addresses the automation of complex and knowledge-intensive tasks such as the structural design process, and involves the development of Knowledge Based Expert Systems (KBES). There are several tools, also known as expert shells, and languages available for the development of knowledge-based expert systems. A general purpose language, LISP, is very popular among researchers in AI and is used here as the environment for the development of the KBES for the structural design process. The resulting system, called Expert-SEISD, is very generic in nature. The Expert-SEISD is composed of the user interface, the inference engine, domain-specific knowledge and data bases, and a knowledge acquisition module. The present domain of the Expert-SEISD encompasses the design of structural components such as beams and plates. The knowledge acquisition module is developed to facilitate the incorporation of new capabilities (knowledge or data) for beams, plates and new structural components. Decision making is an integral part of any design process. A decision-making model suitable for the specifications extraction and preliminary design phases of the structural design process is proposed and developed based on the theory of fuzzy sets. The methods developed here are evaluated and compared with similar methods available in the literature. The new method, based on the union of fuzzy sets and contrast intensification, was found suitable for the proposed model and was implemented as a separate module in the Expert-SEISD. A session with the Expert-SEISD is presented to demonstrate its capabilities for beam and plate design and knowledge acquisition.
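The decision-making operators named in the abstract, the union of fuzzy sets and contrast intensification, can be sketched with their standard definitions: the union takes the element-wise maximum of the membership functions, and contrast intensification maps a membership grade μ to 2μ² for μ ≤ 0.5 and to 1 − 2(1 − μ)² otherwise. The candidate designs and membership grades below are illustrative only.

```python
import numpy as np

def fuzzy_union(mu_a, mu_b):
    """Standard fuzzy union: membership is the element-wise maximum."""
    return np.maximum(mu_a, mu_b)

def contrast_intensification(mu):
    """INT(A): push memberships above 0.5 up and those below 0.5 down,
    sharpening the distinction between strong and weak candidates."""
    mu = np.asarray(mu, dtype=float)
    return np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu) ** 2)

# Illustrative membership grades of three candidate designs against two criteria.
strength_ok = np.array([0.55, 0.80, 0.40])
cost_ok     = np.array([0.70, 0.45, 0.60])

score = contrast_intensification(fuzzy_union(strength_ok, cost_ok))
print("preferred candidate:", int(np.argmax(score)), "scores:", np.round(score, 3))
```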
