401

Sistema inteligente para determinação de limite de crédito / Intelligent system for determination of credit limit

Dacy Câmara Lobosco 12 April 2013
This dissertation addresses the automatic stipulation of credit limits for corporate customers using Computational Intelligence techniques, specifically artificial neural networks (ANNs). In credit analysis, the two most critical situations are the initial release of credit, according to the customer's profile, and the maintenance of that limit over time, according to the customer's history. The work automates the stipulation of the credit limit by implementing an ANN that learns from situations already observed with other customers of similar profile and that can make decisions based on the credit policy elicited from a credit analyst. The goal is to make the credit process safer for the lender, since a correct credit analysis of a customer considerably reduces default rates and keeps sales at an optimal level. The customer registration system was implemented in VB.Net and the ANNs were trained in MatLab. A case study shows how the software is applied to credit analysis. The results obtained with the ANN techniques were satisfactory, indicating an efficient approach to determining credit limits.
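A minimal sketch of the kind of model described above, written in Python rather than the VB.Net/MatLab stack used in the dissertation; the file name, feature names and hyperparameters are assumptions made only for illustration:

```python
# Illustrative sketch only: the thesis trains its ANN in MatLab behind a VB.Net
# front end; this recreates the general idea in Python. The CSV layout, feature
# names and hyperparameters are assumptions, not taken from the work.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical customer records: profile attributes plus the limit the
# credit analyst actually granted (the behaviour the network should learn).
data = pd.read_csv("customers.csv")          # assumed file
features = ["annual_revenue", "years_as_customer", "late_payments", "sector_risk"]
X, y = data[features], data["approved_limit"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out customers:", model.score(X_test, y_test))
```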
402

Protótipo de um conjunto de sistemas especialistas para operação, monitoração e manutenção de subestações. / Prototype of a set of expert systems for substation operation, monitoring and maintenance.

Jose Aquiles Baesso Grimoni 27 April 1994
This work presents a prototype set of expert systems to assist in the operation, monitoring and maintenance-performance evaluation of an electrical energy substation. The proposed expert systems have four basic functions: substation alarm processing, which filters and ranks alarms during disturbances to ease the operator's analysis and actions; fault location, through the interaction between the generated alarms (facts), rules relating alarms to causes (knowledge base) and the structure that chains this set of rules (inference engine); substation reconfiguration for transferring loads between circuits, using switching sequences generated by search algorithms and by rules tied to the electrical limits of the network equipment; and analysis of maintenance performance based on the concept of an operative merit index applied to substations. The prototype was developed in PROLOG, a language oriented toward the declarative treatment of information. Data from a substation of a São Paulo utility company were used to develop the knowledge bases, the overall architecture and its communication. The tests carried out showed promising results with a high hit rate, indicating that the system is a reliable seed for further refinement. The work ends by presenting these possibilities for improvement and new applications.
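The abstract describes the classic facts / knowledge-base / inference-engine split; the sketch below illustrates that split with a toy forward-chaining rule set in Python (the thesis itself uses PROLOG), with invented alarm names and causes:

```python
# Toy facts / rules / inference-engine split in Python rather than PROLOG.
# Alarm names and causes are invented for illustration only.
RULES = [
    # (set of alarms that must all be present, concluded cause)
    ({"breaker_52_open", "overcurrent_51_feeder_3"}, "fault on feeder 3"),
    ({"transformer_temp_high", "cooling_fan_failure"}, "transformer overheating"),
    ({"breaker_52_open"}, "breaker opened (cause undetermined)"),
]

def diagnose(alarms):
    """Forward chaining: fire every rule whose antecedent alarms are all present."""
    alarms = set(alarms)
    conclusions = [cause for antecedent, cause in RULES if antecedent <= alarms]
    return conclusions or ["no known cause for this alarm pattern"]

print(diagnose({"breaker_52_open", "overcurrent_51_feeder_3"}))
# ['fault on feeder 3', 'breaker opened (cause undetermined)']
```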
403

Desenvolvimento de um sistema especialista para seleção de componentes mecânicos. / Development of an expert system for the selection of mechanical components.

Weber, Cláudio José 06 October 2017
Selecting mechanical components is not an easy task and requires substantial know-how and experience. Current selection tools work in isolation from one another and do not take into account the requirements that a component's interface imposes on the component to which it will be coupled, nor, consequently, the manufacturing and logistics costs involved in making the interfaces compatible. To support this process, a method is proposed for developing an ES (Expert System) for mechanical component selection that considers the application requirements and also addresses the shortcomings above. The ES further takes into account the design guidelines and the manufacturing resources of the plant and, in parallel with the selection process, can also size the selected components and the interfaces of the parts to which they will be coupled. Knowledge acquisition is one of the main stages in developing an ES and is considered one of the most important for its success; because of this, an alternative method is proposed that allows knowledge to be acquired in a systematic and organized way by the knowledge engineer and the domain expert for use in building the ES database. The ES developed with the knowledge acquisition method proposed in this work is validated through two example cases taken from the design of a paper processing machine. Validation is carried out by experts, who assess the suitability of the components selected by the ES against the current design. The results show a substantial reduction in design costs arising from the selected components, as well as savings in the time spent by designers on this selection process.
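As a rough illustration of selection driven by application and interface requirements, a toy Python sketch follows; the component catalogue, attributes and selection criterion are invented and do not come from the thesis:

```python
# Toy constraint-based component selection; the bearing catalogue and the
# requirements are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Bearing:
    name: str
    bore_mm: float          # interface requirement: must match the shaft
    dynamic_load_kN: float  # application requirement
    max_rpm: int

CATALOGUE = [
    Bearing("6204", 20.0, 12.7, 18000),
    Bearing("6205", 25.0, 14.0, 15000),
    Bearing("6305", 25.0, 22.5, 13000),
]

def select(shaft_diameter_mm, required_load_kN, speed_rpm):
    """Keep only components satisfying both interface and application constraints,
    then prefer the lowest-rated one that still qualifies."""
    feasible = [b for b in CATALOGUE
                if b.bore_mm == shaft_diameter_mm
                and b.dynamic_load_kN >= required_load_kN
                and b.max_rpm >= speed_rpm]
    return min(feasible, key=lambda b: b.dynamic_load_kN, default=None)

print(select(shaft_diameter_mm=25.0, required_load_kN=15.0, speed_rpm=9000))
# Bearing '6305' is returned; '6205' fails the load requirement, '6204' the bore
```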
404

Medical decision support systems based on machine learning

Chi, Chih-Lin 01 July 2009
This dissertation discusses three problems from different areas of medical research and their machine learning solutions. Each solution is a distinct type of decision support system, and they share three common properties: personalized healthcare decision support, reduced use of medical resources, and improved outcomes. The first decision support system assists individual hospital selection. It can help a user make the best decision in terms of the combination of mortality, complication rate, and travel distance. Both machine learning and optimization techniques are utilized: machine learning methods, such as Support Vector Machines, learn a decision function, which is then transformed into an objective function, and optimization methods find the values of the decision variables that reach the desired outcome with the most confidence. The second decision support system assists diagnostic decisions in a sequential decision-making setting by finding the most promising tests and suggesting a diagnosis. The system can speed up the diagnostic process, reduce overuse of medical tests, save costs, and improve the accuracy of diagnosis. In this study, the system finds the test most likely to confirm a diagnosis based on the pre-test probability computed from the patient's information, including symptoms and the results of previous tests. If the patient's post-test disease probability rises above the treatment threshold, the diagnosis is confirmed; if it falls low enough, the disease is ruled out. Otherwise, the patient needs more tests to help make a decision, so the system recommends the next optimal test and repeats the process. The third decision support system recommends the best lifestyle changes for an individual to lower the risk of cardiovascular disease (CVD). As in the hospital recommendation system, machine learning and optimization are combined to capture the relationship between lifestyle and CVD, and recommendations are then generated from individual factors including preference and physical condition. The results demonstrate several recommendation strategies: a whole plan of lifestyle changes, a package of n lifestyle changes, and a compensatory plan (one that compensates for unwanted lifestyle changes or real-world limitations).
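The sequential testing loop described above can be illustrated with a standard Bayesian post-test update and decision thresholds; the numbers below are invented, and the thesis itself estimates these quantities with machine learning:

```python
# Hedged illustration of sequential testing: update a pre-test probability with
# a test's likelihood ratio (Bayes' rule on the odds scale) and compare it with
# treatment / rule-out thresholds. All numbers are invented.
def post_test_probability(pre_test_p, sensitivity, specificity, positive_result):
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    lr = sensitivity / (1.0 - specificity) if positive_result else (1.0 - sensitivity) / specificity
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

TREAT_THRESHOLD = 0.80      # above this: confirm the diagnosis / treat
RULE_OUT_THRESHOLD = 0.05   # below this: rule the disease out

p = 0.30                                   # assumed pre-test probability from patient data
p = post_test_probability(p, sensitivity=0.90, specificity=0.85, positive_result=True)
print(f"post-test probability: {p:.2f}")   # ~0.72, still between the two thresholds,
                                           # so another test would be recommended
```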
405

Legal knowledge engineering methodology for large-scale expert systems

Gray, Pamela N., University of Western Sydney, College of Business, School of Law January 2007
Legal knowledge engineering methodology for epistemologically sound, large scale legal expert systems is developed in this dissertation. A specific meta-epistemological method is posed for the transformation of legal domain epistemology to large scale legal expert systems; the method has five stages: 1. domain epistemology; 2. computational domain epistemology; 3. shell epistemology; 4. programming epistemology; and 5. application epistemology and ontology. The nature of legal epistemology is defined in terms of a deep model that divides the information of the ontology of legal possibilities into the three sorts of logic premises, namely, (1) rules of law for extended deduction, (2) material facts of cases for induction that establishes rule antecedents, and (3) reasons for rules, including justifications, explanations or criticisms of rules, for abduction. Extended deduction is distinguished for automation, and provides a map for locating, relatively, associated induction and abduction. Added to this is a communication system that involves issues of cognition and justice in the legal system. The Appendix sets out a sample of draft rule maps of the United Nations Convention on Contracts for the International Sale of Goods, known as the Vienna Convention, to illustrate that the substantive epistemology of the international law can be mapped to the generic epistemology of the shell. This thesis deflects the ontological solution back to the earlier rule-based, case-based and logic advances, with a definition of artificial legal intelligence that rests on legal epistemology; added to the definition is a transparent communication system of a user interface, including an interactive visualisation of rule maps, and the heuristics that process input and produce output to give effect to the legal intelligence of an application. The additions include an epistemological use of the ontology of legal possibilities to complete legal logic, for the purposes of processing specific legal applications. While the specific meta-epistemological methodology distinguishes domain epistemology from the epistemologies of artificial legal intelligence, namely computational domain epistemology, program design epistemology, programming epistemology and application epistemology, the prototypes illustrate the use of those distinctions, and the synthesis effected by that use. The thesis develops the Jurisprudence of Legal Knowledge Engineering by an artificial metaphysics. / Doctor of Philosophy (Ph.D)
406

Incremental knowledge acquisition for natural language processing

Pham, Son Bao, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006
Linguistic patterns have been used widely in shallow methods to develop numerous NLP applications. Approaches for acquiring linguistic patterns can be broadly categorised into three groups: supervised learning, unsupervised learning and manual methods. In supervised learning approaches, a large annotated training corpus is required for the learning algorithms to achieve decent results. However, annotated corpora are expensive to obtain and usually available only for established tasks. Unsupervised learning approaches usually start with a few seed examples and gather some statistics based on a large unannotated corpus to detect new examples that are similar to the seed ones. Most of these approaches either populate lexicons for predefined patterns or learn new patterns for extracting general factual information; hence they are applicable to only a limited number of tasks. Manually creating linguistic patterns has the advantage of utilising an expert's knowledge to overcome the scarcity of annotated data. In tasks with no annotated data available, the manual way seems to be the only choice. One typical problem that occurs with manual approaches is that the combination of multiple patterns, possibly being used at different stages of processing, often causes unintended side effects. Existing approaches, however, do not focus on the practical problem of acquiring those patterns but rather on how to use linguistic patterns for processing text. A systematic way to support the process of manually acquiring linguistic patterns in an efficient manner is long overdue. This thesis presents KAFTIE, an incremental knowledge acquisition framework that strongly supports experts in creating linguistic patterns manually for various NLP tasks. KAFTIE addresses difficulties in manually constructing knowledge bases of linguistic patterns, or rules in general, often faced in existing approaches by: (1) offering a systematic way to create new patterns while ensuring they are consistent; (2) alleviating the difficulty in choosing the right level of generality when creating a new pattern; (3) suggesting how existing patterns can be modified to improve the knowledge base's performance; (4) making the effort in creating a new pattern, or modifying an existing pattern, independent of the knowledge base's size. KAFTIE, therefore, makes it possible for experts to efficiently build large knowledge bases for complex tasks. This thesis also presents the KAFDIS framework for discourse processing using new representation formalisms: the level-of-detail tree and the discourse structure graph.
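The abstract does not spell out KAFTIE's internal rule format, so the following Python sketch only illustrates the general idea of incremental acquisition: each correcting rule is attached as an exception to the rule that fired wrongly, so behaviour on earlier cases is preserved. The patterns and labels are invented:

```python
# Generic sketch of incremental pattern acquisition, not KAFTIE's actual
# mechanism: a new rule is added as an exception in the context where an
# existing rule misfired. Patterns here are plain regexes.
import re

class Rule:
    def __init__(self, pattern, label):
        self.pattern, self.label = re.compile(pattern), label
        self.exceptions = []           # rules added later to correct this one

    def classify(self, text):
        if not self.pattern.search(text):
            return None
        for exc in self.exceptions:    # a matching exception overrides the parent
            result = exc.classify(text)
            if result is not None:
                return result
        return self.label

root = Rule(r"results?\b.*?(show|indicate)", "POSITIVE_FINDING")
# The expert sees a misclassified sentence and adds an exception in context:
root.exceptions.append(Rule(r"do(es)? not (show|indicate)", "NEGATIVE_FINDING"))

print(root.classify("The results show a clear improvement."))      # POSITIVE_FINDING
print(root.classify("The results do not show any improvement."))   # NEGATIVE_FINDING
```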
407

Perspectives on belief and change

Aucher, Guillaume January 2008
This thesis is about logical models of belief (and knowledge) representation and belief change. This means that we propose logical systems which are intended to represent how agents perceive a situation and reason about it, and how they update their beliefs about this situation when events occur. These agents can be machines, robots, human beings... but they are assumed to be somehow autonomous. The way a fixed situation is perceived by agents can be represented by statements about the agents' beliefs: for example "agent A believes that the door of the room is open" or "agent A believes that her colleague is busy this afternoon". "Logical systems" means that agents can reason about the situation and their beliefs about it: if agent A believes that her colleague is busy this afternoon then agent A infers that he will not visit her this afternoon. We moreover often assume that our situations involve several agents that interact with each other. So these agents have beliefs about the situation (such as "the door is open") but also about the other agents' beliefs: for example agent A might believe that agent B believes that the door is open. These kinds of beliefs are called higher-order beliefs. Epistemic logic [Hintikka, 1962; Fagin et al., 1995; Meyer and van der Hoek, 1995], the logic of belief and knowledge, can capture all these phenomena and will be our main starting point to model such fixed ("static") situations. Uncertainty can of course be expressed by beliefs and knowledge: for example agent A being uncertain whether her colleague is busy this afternoon can be expressed by "agent A does not know whether her colleague is busy this afternoon". But we sometimes need to enrich and refine the representation of uncertainty: for example, even if agent A does not know whether her colleague is busy this afternoon, she might consider it more probable that he is actually busy. So other logics have been developed to deal more adequately with the representation of uncertainty, such as probabilistic logic, fuzzy logic or possibilistic logic, and we will refer to some of them in this thesis (see [Halpern, 2003] for a survey on reasoning about uncertainty). But things become more complex when we introduce events and change in the picture. Issues arise even if we assume that there is a single agent. Indeed, if the incoming information conveyed by the event is coherent with the agent's beliefs then the agent can just add it to her beliefs. But if the incoming information contradicts the agent's beliefs then the agent has somehow to revise her beliefs, and as it turns out there is no obvious way to decide what should be her resulting beliefs. Solving this problem was the goal of the logic-based belief revision theory developed by Alchourrón, Gärdenfors and Makinson (to which we will refer by the term AGM) [Alchourrón et al., 1985; Gärdenfors, 1988; Gärdenfors and Rott, 1995]. Their idea is to introduce "rationality postulates" that specify which belief revision operations can be considered as being "rational" or reasonable, and then to propose specific revision operations that fulfill these postulates. However, AGM does not consider situations where the agent might also have some uncertainty about the incoming information: for example agent A might be uncertain due to some noise whether her colleague told her that he would visit her on Tuesday or on Thursday. In this thesis we also investigate this kind of phenomenon.
Things are even more complex in a multi-agent setting because the way agents update their beliefs depends not only on their beliefs about the event itself but also on their beliefs about the way the other agents perceived the event (and so about the other agents' beliefs about the event). For example, during a private announcement of a piece of information to agent A the beliefs of the other agents actually do not change because they believe nothing is actually happening; but during a public announcement all the agents' beliefs might change because they all believe that an announcement has been made. Such subtleties have been dealt with in a field called dynamic epistemic logic [Gerbrandy and Groeneveld, 1997; Baltag et al., 1998; van Ditmarsch et al., 2007b]. The idea is to represent by an event model how the event is perceived by the agents and then to define a formal update mechanism that specifies how the agents update their beliefs according to this event model and their previous representation of the situation. Finally, the issues concerning belief revision that we raised in the single agent case are still present in the multi-agent case. So this thesis is more generally about information and information change. However, we will not deal with problems of how to store information in machines or how to actually communicate information. Such problems have been dealt with in information theory [Cover and Thomas, 1991] and Kolmogorov complexity theory [Li and Vitányi, 1993]. We will just assume that such mechanisms are already available and start our investigations from there. Studying and proposing logical models for belief change and belief representation has applications in several areas. First in artificial intelligence, where machines or robots need to have a formal representation of the surrounding world (which might involve other agents), and formal mechanisms to update this representation when they receive incoming information. Such formalisms are crucial if we want to design autonomous agents, able to act autonomously in the real world or in a virtual world (such as on the internet). Indeed, the representation of the surrounding world is essential for a robot in order to reason about the world, plan actions in order to achieve goals... and it must be able to update and revise its representation of the world itself in order to cope autonomously with unexpected events. Second in game theory (and consequently in economics), where we need to model games involving several agents (players) having beliefs about the game and about the other agents' beliefs (such as agent A believes that agent B has the ace of spades, or agent A believes that agent B believes that agent A has the ace of hearts...), and how they update their representation of the game when events (such as showing privately a card or putting a card on the table) occur. Third in cognitive psychology, where we need to model as accurately as possible the epistemic states of human agents and the dynamics of belief and knowledge in order to explain and describe cognitive processes. The thesis is organized as follows. In Chapter 2, we first recall epistemic logic. Then we observe that representing an epistemic situation involving several agents depends very much on the modeling point of view one takes. For example, in a poker game the representation of the game will be different depending on whether the modeler is a poker player playing in the game or the card dealer who knows exactly what the players' cards are.
In this thesis, we will carefully distinguish these different modeling approaches and the different kinds of formalisms they give rise to. In fact, the interpretation of a formalism relies quite a lot on the nature of these modeling points of view. Classically, in epistemic logic, the models built are supposed to be correct and represent the situation from an external and objective point of view. We call this modeling approach the perfect external approach. In Chapter 2, we study the modeling point of view of a particular modeler-agent involved in the situation with other agents (and so having a possibly erroneous perception of the situation). We call this modeling approach the internal approach. We propose a logical formalism based on epistemic logic that this agent uses to represent "for herself" the surrounding world. We then set some formal connections between the internal approach and the (perfect) external approach. Finally we axiomatize our logical formalism and show that the resulting logic is decidable. In Chapter 3, we first recall dynamic epistemic logic as viewed by Baltag, Moss and Solecki (to which we will refer by the term BMS). Then we study in which cases seriality of the accessibility relations of epistemic models is preserved during an update, first for the full updated model and then for generated submodels of the full updated model. Finally, observing that the BMS formalism follows the (perfect) external approach, we propose an internal version of it, just as we proposed an internal version of epistemic logic in Chapter 2. In Chapter 4, we still follow the internal approach and study the particular case where the event is a private announcement. We first show, thanks to our study in Chapter 3, that in a multi-agent setting, expanding in the AGM style corresponds to performing a private announcement in the BMS style. This indicates that generalizing AGM belief revision theory to a multi-agent setting amounts to studying private announcement. We then generalize the AGM representation theorems to the multi-agent case. Afterwards, in the spirit of the AGM approach, we go beyond the AGM postulates and investigate multi-agent rationality postulates specific to our multi-agent setting, inspired by the fact that the kind of phenomenon we study is private announcement. Finally we provide an example of a revision operation that we apply to a concrete example. In Chapter 5, we follow the (perfect) external approach and enrich the BMS formalism with probabilities. This enables us to provide a fine-grained account of how human agents interpret events involving uncertainty and how they revise their beliefs. Afterwards, we review different principles for the notion of knowledge that have been proposed in the literature and show how some principles that we argue to be reasonable ones can all be captured in our rich and expressive formalism. Finally, we extend our general formalism to a multi-agent setting. In Chapter 6, we still follow the (perfect) external approach and enrich our dynamic epistemic language with converse events. This language is interpreted on structures with accessibility relations for both beliefs and events, unlike the BMS formalism where events and beliefs are not on the same formal level. Then we propose principles relating events and beliefs and provide a complete characterization, which yields a new logic EDL.
Finally, we show that BMS can be translated into our new logic EDL thanks to the converse operator: this device enables us to translate the structure of the event model directly within a particular axiomatization of EDL, without having to refer to a particular event model in the language (as done in BMS). In Chapter 7 we summarize our results and give an overview of remaining technical issues and some desiderata for future directions of research. Parts of this thesis are based on publications, but we emphasize that they have been entirely rewritten in order to make this thesis an integrated whole. Sections 4.2.2 and 4.3 of Chapter 4 are based on [Aucher, 2008]. Sections 5.2, 5.3 and 5.5 of Chapter 5 are based on [Aucher, 2007]. Chapter 6 is based on [Aucher and Herzig, 2007].
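A minimal sketch of the update idea discussed in this abstract, using standard public-announcement semantics (restricting a toy epistemic model to the worlds where the announced fact holds); it is not the thesis's own internal or probabilistic formalism, and the worlds, atoms and agents are invented:

```python
# Toy epistemic model and the effect of a public announcement: keep only the
# worlds where the announced fact holds and restrict the accessibility
# relations accordingly. Standard public-announcement logic, for illustration.

# Worlds and what is true in them (one atom: "door_open").
worlds = {"w1": {"door_open": True}, "w2": {"door_open": False}}

# Agent A cannot tell the worlds apart; agent B can.
access = {
    "A": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")},
    "B": {("w1", "w1"), ("w2", "w2")},
}

def believes(agent, world, atom):
    """Agent believes atom at world iff atom holds in every world the agent considers possible."""
    return all(worlds[v][atom] for (u, v) in access[agent] if u == world)

def announce(atom):
    """Public announcement of atom: delete the worlds where it is false."""
    global worlds, access
    worlds = {w: val for w, val in worlds.items() if val[atom]}
    access = {a: {(u, v) for (u, v) in rel if u in worlds and v in worlds}
              for a, rel in access.items()}

print(believes("A", "w1", "door_open"))   # False: A still considers w2 possible
announce("door_open")
print(believes("A", "w1", "door_open"))   # True: after the announcement only w1 remains
```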
408

A reference architecture for holonic execution in manufacturing enterprises

Jarvis, Jacqueline January 2007
On the basis of extensive practical experience in the development of agent-based systems for manufacturing execution and agent-based systems in general, a reference model for holonic execution known as HERA (Holonic Execution Reference Architecture) is developed. The model is characterised by a focus on holarchy - Koestler's Janus Effect (Koestler, 1967) is explicitly captured. However, the Holonic Manufacturing Systems (HMS) Consortium's view of a holon having an information processing part and a physical part (Brennan and Norrie, 2003) is also present. We refer to these parts as the behaviour and the embodiment respectively.
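The abstract only names the two parts of a holon (behaviour and embodiment) and the whole/part Janus duality, so the following is a speculative structural sketch in Python, not HERA's actual reference architecture:

```python
# Speculative sketch of a holon with a behaviour (information-processing) part,
# an embodiment (physical) part, and sub-holons reflecting the Janus effect.
# Names and example holons are invented.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Holon:
    name: str
    behaviour: str                      # information-processing part (e.g. scheduling logic)
    embodiment: Optional[str] = None    # physical part (e.g. a machine); None for pure software holons
    parts: List["Holon"] = field(default_factory=list)   # a holon is also a whole made of holons

    def add(self, child: "Holon") -> None:
        self.parts.append(child)

cell = Holon("assembly_cell", behaviour="dispatch and sequence orders")
cell.add(Holon("robot_1", behaviour="execute pick-and-place tasks", embodiment="6-axis robot"))
cell.add(Holon("conveyor", behaviour="report pallet positions", embodiment="belt conveyor"))

print(cell.name, "contains", [p.name for p in cell.parts])
```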
409

Diagnostics, prognostics and fault simulation for rolling element bearings

Sawalhi, Nader, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW January 2007
Vibration signals generated from spalled elements in rolling element bearings (REBs) are investigated in this thesis. A novel signal-processing algorithm to diagnose localized faults in rolling element bearings has been developed and tested on a variety of signals. The algorithm is based on Spectral Kurtosis (SK), which has special qualities for detecting REBs faults. The algorithm includes three steps. It starts by pre-whitening the signal's power spectral density using an autoregressive (AR) model. The impulses, which are contained in the residual of the AR model, are then enhanced using the minimum entropy deconvolution (MED) technique, which effectively deconvolves the effect of the transmission path and clarifies the impulses. Finally the output of the MED filter is decomposed using complex Morlet wavelets and the SK is calculated to select the best filter for the envelope analysis. Results show the superiority of the developed algorithm and its effectiveness in extracting fault features from the raw vibration signal. The problem of modelling the vibration signals from a spalled bearing in a gearbox environment is discussed. This problem has been addressed through the incorporation of a time varying, non-linear stiffness bearing model into a previously developed gear model. It has the new capacity of modeling localized faults and extended faults in the different components of the bearing. The simulated signals were found to have the same basic characteristics as measured signals, and moreover were found to have a characteristic seen in the measured signals, and also referred to in the literature, of double pulses corresponding to entry into and exit from a localized fault, which could be made more evident by the MED technique. The simulation model is useful for producing typical fault signals from gearboxes to test new diagnostic algorithms, and also prognostic algorithms. The thesis provides two main tools (SK algorithm and the gear bearing simulation model), which could be effectively employed to develop a successful prognostic model.
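A simplified, partial sketch of the envelope-analysis end of the pipeline described above: spectral kurtosis computed from an STFT selects the most impulsive band, which is band-pass filtered and envelope-analysed. The AR pre-whitening and MED steps of the thesis are omitted, and all parameters and the synthetic signal are illustrative guesses:

```python
# Simplified sketch: SK-based band selection followed by envelope analysis.
# The thesis's AR pre-whitening and MED stages are omitted; fs, nperseg and
# bandwidth are illustrative guesses, and the test signal is synthetic.
import numpy as np
from scipy import signal

def spectral_kurtosis(x, fs, nperseg=256):
    f, _, Z = signal.stft(x, fs=fs, nperseg=nperseg)
    mag2 = np.abs(Z) ** 2
    sk = np.mean(mag2 ** 2, axis=1) / np.mean(mag2, axis=1) ** 2 - 2.0
    return f, sk

def envelope_spectrum(x, fs, centre, bandwidth=2000.0):
    lo = max(centre - bandwidth / 2, 1.0)
    hi = min(centre + bandwidth / 2, fs / 2 - 1.0)
    b, a = signal.butter(4, [lo, hi], btype="bandpass", fs=fs)
    band = signal.filtfilt(b, a, x)
    env = np.abs(signal.hilbert(band))            # envelope of the impulsive band
    spec = np.abs(np.fft.rfft(env - env.mean()))
    return np.fft.rfftfreq(env.size, 1.0 / fs), spec

# Synthetic test signal: repetitive impulses (a "fault") buried in noise.
fs = 48000
x = np.random.randn(fs)                            # 1 second of noise
x[::480] += 8.0                                    # impulse every 10 ms, i.e. a 100 Hz fault rate

f, sk = spectral_kurtosis(x, fs)
centre = f[1:-1][np.argmax(sk[1:-1])]              # skip DC and Nyquist bins
freqs, spec = envelope_spectrum(x, fs, centre)
print("SK-selected band centre: %.0f Hz" % centre)
print("strongest envelope line: %.1f Hz" % freqs[1:][np.argmax(spec[1:])])
```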
410

Establishment of a database for tool life performance

Vom Braucke, Troy S., tvombraucke@swin.edu.au January 2004
The cutting tool industry has evolved over the last half century to the point where an increasing range and complexity of cutting tools are available for metal machining. This has highlighted a need for an intelligent, user-friendly system of tool selection and recommendation that can also provide predictive economic performance data for engineers and end-users alike. Such an 'expert system' was developed for a local manufacturer of cutting tools in the form of a relational database to be accessed over the Internet. A number of predictive performance models for various machining processes were reviewed; however, they did not encompass the wide range of variables encountered in metal machining, so adapting these existing models for an expert system was judged to be economically prohibitive at this time. Interrogation of published expert systems from cutting tool manufacturers showed the knowledge-engineered principle to be a common approach to transferring economic and technological information to an end-user, its key advantage being the flexibility to allow further improvements as new knowledge is gained. As such, a relational database was built on the knowledge-engineered principle, based on skilled craft-oriented knowledge, to establish an expert system for the selection and performance assessment of cutting tools. An investigation into tapping of austenitic stainless steels was undertaken to develop part of a larger expert system. The expert system was then interrogated in this specific area in order to challenge, by experiment, the skilled craft-oriented knowledge in this area. The experimental results were incorporated into the database where appropriate, providing a user-friendly working expert system for intelligent cutting tool selection, recommendation and performance data.
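A toy sketch of the relational-database idea described above, using Python's built-in sqlite3; the schema, tool names and cutting data are invented for illustration only:

```python
# Toy relational tool-recommendation query; schema and every row are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tools (
    name TEXT, operation TEXT, material TEXT,
    speed_m_min REAL, expected_life_holes INTEGER)""")
con.executemany("INSERT INTO tools VALUES (?, ?, ?, ?, ?)", [
    ("Spiral-flute tap, TiN coated", "tapping", "austenitic stainless", 8.0, 900),
    ("Spiral-point tap, uncoated",   "tapping", "austenitic stainless", 12.0, 400),
    ("Forming tap, TiCN coated",     "tapping", "carbon steel",        15.0, 2500),
])

def recommend(operation, material):
    """Return matching tools, best expected life first."""
    cur = con.execute(
        "SELECT name, speed_m_min, expected_life_holes FROM tools "
        "WHERE operation = ? AND material = ? ORDER BY expected_life_holes DESC",
        (operation, material))
    return cur.fetchall()

for name, speed, life in recommend("tapping", "austenitic stainless"):
    print(f"{name}: {speed} m/min, ~{life} holes")
```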
