21

Expressed sequence tag clustering using commercial gaming hardware

Van Deventer, Charl 16 April 2014 (has links)
M.Ing. (Electrical and Electronic Engineering) / Bioinformatics is one of the most rapidly advancing sciences today. It is a scientific domain that applies modern computing and information technologies to biology, the study of life itself: it involves documenting and analysing genetics, proteins, viruses, bacteria, cancer, and hereditary traits and diseases, as well as researching cures and treatments for a whole range of health threats. The growth of bioinformatics, and of theoretical and experimental developments in biology, can largely be linked to the IT explosion, which gives the field more powerful and cheaper processing options, limited only by the steady yet significant improvements promised by Moore's Law [3]. This IT explosion has also driven significant advances in the high-consumer-demand area of computer graphics hardware, or GPUs (Graphics Processing Units). Consumer demand has advanced GPUs far faster than classical CPUs (Central Processing Units), outpacing CPU performance improvements by a large margin. As of early 2010, the fastest available PC processor (Intel Core i7 980 XE) had a theoretical performance of 107.55 GFLOPS [4], while GPUs with TFLOPS (1000 GFLOPS) of performance have been commercially available since 2008 (ATI HD4800). While typically used only for graphical rendering, modern innovations have greatly increased GPU flexibility and have given rise to the field of GPGPU (General-Purpose GPU) computing, which allows graphics processors to be applied to non-graphics applications. By using GPU processing power to solve bioinformatics problems, the field can theoretically be boosted once again, increasing the computational power available to scientists by an order of magnitude or more...
22

Marc integrador de les capacitats de Soft-Computing i de Knowledge Discovery dels Mapes Autoorganitzatius en el Raonament Basat en Casos [An integrating framework for the Soft-Computing and Knowledge Discovery capabilities of Self-Organizing Maps in Case-Based Reasoning]

Fornells Herrera, Albert 14 December 2007 (has links)
Case-Based Reasoning (CBR) is a machine learning approach that solves new problems by identifying analogies with previously solved ones. Thus, the organization, access, and management of this knowledge are crucial for achieving successful results. Nevertheless, most real problems present huge amounts of complex data with uncertain and partial knowledge, and CBR performance suffers from the complexity of managing it. For this reason, a new research topic has appeared in recent years to tackle this problem: Soft-Computing and Intelligent Information Retrieval. This is the point where this thesis was born. Among the wide variety of Soft-Computing techniques for managing complex data, Self-Organizing Maps (SOM) stand out for their ability to group data according to patterns using the relations hidden in the data. This capability has been used in a range of previous works, where the CBR case memory has been organized with a SOM to improve case retrieval. The goal of this thesis is to go a step beyond the simple combination of CBR and SOM: it introduces the Soft-Computing and Knowledge Discovery capabilities of SOM into all the steps of CBR to enrich them with the discovered knowledge. Furthermore, complexity measures appear in this context as a mechanism to model the performance of SOM according to data topology. The achievement of this goal can be split into four points: (1) the definition of a methodology for determining the best way of retrieving cases, taking into account data complexity and user requirements; (2) the improvement of classification reliability through the relations between cases and clusters; (3) the promotion of explanatory capabilities by means of generated symbolic explanations; (4) incremental and semi-supervised maintenance of the SOM-organized case memory. All these points are integrated in the SOMCBR framework, which has been widely tested on datasets from the UCI Repository and from medical and telematic domains. Additionally, the thesis tackles two secondary research lines arising from the requirements of the projects in which it was developed. First, the definition of domain-specific similarity functions is analyzed using a variant of Evolutionary Computation called Grammatical Evolution (GE). Second, the definition of cooperation schemes between heterogeneous systems, to improve the reliability of their joint response, is also analyzed with GE. Both lines are developed in two frameworks, BRAIN and MGE respectively, which are also evaluated on the same datasets.
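A minimal sketch of the general idea described above (not the SOMCBR framework itself), assuming a plain NumPy environment: a small self-organizing map is trained on the case descriptions, each case is indexed by its best-matching unit, and retrieval first locates the winning cluster and then compares the query only against the cases inside it. The toy case memory and all names are hypothetical.

```python
# Minimal sketch (not SOMCBR): organize a CBR case memory with a small
# self-organizing map, then retrieve cases from the winning cluster only.
import numpy as np

rng = np.random.default_rng(0)

def train_som(cases, grid=(3, 3), iters=500, lr0=0.5, sigma0=1.5):
    """Train a tiny SOM; returns prototype vectors of shape (grid[0]*grid[1], d)."""
    n_units = grid[0] * grid[1]
    d = cases.shape[1]
    proto = rng.uniform(cases.min(0), cases.max(0), size=(n_units, d))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    for t in range(iters):
        x = cases[rng.integers(len(cases))]
        bmu = np.argmin(((proto - x) ** 2).sum(1))          # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        proto += lr * h[:, None] * (x - proto)              # pull neighbourhood toward x
    return proto

def retrieve(query, cases, solutions, proto, k=3):
    """CBR retrieval: find the winning cluster, then the k nearest cases inside it."""
    labels = np.argmin(((cases[:, None, :] - proto[None]) ** 2).sum(2), axis=1)
    bmu = np.argmin(((proto - query) ** 2).sum(1))
    members = np.where(labels == bmu)[0]
    if members.size == 0:                                   # fall back to the whole memory
        members = np.arange(len(cases))
    order = members[np.argsort(((cases[members] - query) ** 2).sum(1))][:k]
    return [(int(i), int(solutions[i])) for i in order]

# Toy case memory: feature vectors with an attached (hypothetical) class label.
cases = rng.normal(size=(200, 4))
solutions = (cases[:, 0] > 0).astype(int)
proto = train_som(cases)
print(retrieve(rng.normal(size=4), cases, solutions, proto))
```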
23

Soft computing based spatial analysis of earthquake triggered coherent landslides

Turel, Mesut 08 November 2011 (has links)
Earthquake-triggered landslides cause loss of life; destroy structures, roads, power lines, and pipelines; and therefore have a direct impact on the social and economic life of the affected region. The damage and fatalities directly related to strong ground shaking and fault rupture are sometimes exceeded by those caused by earthquake-triggered landslides. Even though future earthquakes can hardly be predicted, areas that are highly susceptible to landslide hazards can be identified. For deterministic slope-stability and earthquake-induced landslide analysis based on geographical information systems (GIS), the grid-cell approach has commonly been used in conjunction with the relatively simple infinite slope model. The infinite slope model together with Newmark's displacement analysis has been widely used to create seismic landslide susceptibility maps. The infinite slope model gives reliable results for surficial landslides with depth-to-length ratios smaller than 0.1; it cannot, however, satisfactorily analyze deep-seated coherent landslides. In reality, coherent landslides are common and are a major cause of property damage and fatalities. For coherent landslides, two- or three-dimensional models are required to accurately analyze both the static and dynamic performance of slopes. These models are rarely used in GIS-based landslide hazard zonation because they are computationally expensive compared to one-dimensional infinite slope models. Building metamodels from data obtained in computer experiments, and using computationally inexpensive predictions based on these metamodels, is widely used in several engineering applications. With these soft computing methods, design variables are carefully chosen using a design of experiments (DOE) methodology to cover a predetermined range of values, and computer experiments are performed at the chosen points. The design variables and the responses from the computer simulations are then combined to construct functional relationships (metamodels) between the inputs and the outputs. In this study, Support Vector Machines (SVM) and Artificial Neural Networks (ANN) are used to predict the static and seismic responses of slopes. In order to integrate the soft computing methods with GIS for coherent landslide hazard analysis, an automatic slope-profile delineation method based on Digital Elevation Models is developed. The integrated framework is evaluated using a case study of the 1989 Loma Prieta, CA earthquake (Mw = 6.9). A seismic landslide hazard analysis is also performed for the same region for a future scenario earthquake (Mw = 7.03) on the San Andreas Fault.
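The metamodeling workflow sketched below is a hedged illustration of the approach the abstract describes, not the thesis code: slope parameters are sampled with a Latin hypercube design, a response is computed for each sample, and an SVM surrogate is fitted so that new grid cells can be evaluated cheaply. The "expensive" response here is the dry infinite-slope factor of safety, used only as a runnable stand-in for the 2-D/3-D analyses; the parameter ranges are illustrative assumptions.

```python
# Hedged sketch of the metamodel idea: DOE sampling, an inexpensive stand-in
# response, and an SVM surrogate queried for a new "grid cell".
import numpy as np
from scipy.stats import qmc
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def factor_of_safety(c, phi_deg, gamma, depth, slope_deg):
    """Dry infinite-slope factor of safety (stand-in for the expensive model)."""
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    return c / (gamma * depth * np.sin(beta) * np.cos(beta)) + np.tan(phi) / np.tan(beta)

# Design of experiments: Latin hypercube over cohesion, friction angle,
# unit weight, failure depth and slope angle (illustrative ranges).
lower = np.array([ 2.0, 20.0, 16.0, 1.0, 10.0])
upper = np.array([30.0, 40.0, 22.0, 5.0, 45.0])
X = qmc.scale(qmc.LatinHypercube(d=5, seed=1).random(300), lower, upper)
y = factor_of_safety(*X.T)

surrogate = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
surrogate.fit(X, y)

# Query the metamodel for a new "grid cell" instead of re-running the full analysis.
cell = np.array([[12.0, 30.0, 19.0, 2.5, 28.0]])
print("surrogate FS:", surrogate.predict(cell)[0],
      "analytic FS:", factor_of_safety(*cell.T)[0])
```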
24

Hybrid soft computing : architecture optimization and applications

Abraham, Ajith, 1968- January 2002 (has links)
Abstract not available
25

Soft computing approaches to uncertainty propagation in environmental risk management

Kumar, Vikas 19 June 2008 (has links)
Real-world problems, especially those involving natural systems, are complex and composed of many nondeterministic, non-linearly coupled components. In dealing with such systems, one has to face a high degree of uncertainty and tolerate imprecision. Classical system models based on numerical analysis, crisp logic, or binary logic have the characteristics of precision and categoricity and are classified as hard computing approaches. In contrast, soft computing approaches such as probabilistic reasoning, fuzzy logic, and artificial neural networks have the characteristics of approximation and dispositionality. Whereas in hard computing imprecision and uncertainty are undesirable properties, in soft computing the tolerance for imprecision and uncertainty is exploited to achieve tractability, lower computational cost, effective communication, and a high Machine Intelligence Quotient (MIQ). This thesis explores the use of different soft computing approaches to handle uncertainty in environmental risk management. The work is divided into three parts comprising five papers. In the first part, different uncertainty propagation methods are investigated. The first is a generalized fuzzy α-cut method based on the concept of the transformation method; a case study of uncertainty analysis of pollutant transport in the subsurface shows the utility of this approach and its superiority over conventional methods of uncertainty modelling. A second method is proposed to manage uncertainty and variability together in risk models: a new hybrid approach combining probability theory and fuzzy set theory called Fuzzy Latin Hypercube Sampling (FLHS). An important property of this method is its ability to separate randomness and imprecision, which increases the quality of the information. A fuzzified statistical summary of the model results gives indices of sensitivity and uncertainty that relate the effects of variability and uncertainty in the input variables to the model predictions. The feasibility of the method is validated by analyzing the total variance in the calculation of incremental lifetime risks due to polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/F) for residents living in the surroundings of a municipal solid waste incinerator (MSWI) in the Basque Country, Spain. The second part of the thesis deals with the use of artificial intelligence techniques for generating environmental indices. The first paper focuses on the development of a Hazard Index (HI) using the persistence, bioaccumulation, and toxicity properties of a large number of organic and inorganic pollutants. To derive this index, Self-Organizing Maps (SOM) are used, providing a hazard ranking for each compound. Subsequently, an Integral Risk Index is developed taking into account the HI and the concentrations of all pollutants in soil samples collected in the target area. Finally, a risk map is elaborated by representing the spatial distribution of the Integral Risk Index with a Geographic Information System (GIS). The second paper improves on the first: a new approach called the Neuro-Probabilistic HI is developed by combining SOM and Monte Carlo analysis, which accounts for the uncertainty associated with the contaminants' characteristic values. This new index appears to be an adequate tool for risk assessment processes. In both studies, the methods are validated through their implementation in the industrial chemical/petrochemical area of Tarragona. The third part of the thesis deals with a decision-making framework for environmental risk management. Here, an integrated fuzzy relation analysis (IFRA) model is proposed for risk assessment involving multiple criteria. The fuzzy risk-analysis model comprehensively evaluates all risks associated with contaminated systems resulting from more than one toxic chemical. The model is an integrated view of uncertainty techniques based on multi-valued mappings, fuzzy relations, and the fuzzy analytical hierarchy process. Integrating system simulation and risk analysis in a fuzzy framework allows system-modelling uncertainty and subjective risk criteria to be incorporated, and it is shown that a broad integration of fuzzy system simulation and fuzzy risk analysis is possible. In conclusion, this study broadly demonstrates the usefulness of soft computing approaches in environmental risk analysis. The proposed methods could significantly advance the practice of risk analysis by effectively addressing critical issues in the uncertainty propagation problem.
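A minimal sketch of α-cut uncertainty propagation in the spirit of the generalized fuzzy α-cut/transformation method mentioned above (not the thesis implementation): triangular fuzzy inputs are cut at several membership levels, the model is evaluated on the corners of each interval box, and the envelope of the results gives the α-cuts of the fuzzy output. The toy "transport" model and its parameters are hypothetical; corner sampling is exact only for monotone models.

```python
# Hedged sketch of fuzzy alpha-cut propagation through a toy model.
import itertools
import numpy as np

def alpha_cut(tri, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, m, b) at level alpha."""
    a, m, b = tri
    return a + alpha * (m - a), b - alpha * (b - m)

def propagate(model, fuzzy_inputs, levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Return {alpha: (lo, hi)} for the model output (corner sampling per cut;
    the full transformation method would also sample interior points)."""
    out = {}
    for alpha in levels:
        boxes = [alpha_cut(tri, alpha) for tri in fuzzy_inputs]
        values = [model(*corner) for corner in itertools.product(*boxes)]
        out[alpha] = (min(values), max(values))
    return out

# Toy "transport" model: concentration decaying with distance, driven by an
# uncertain source strength and an uncertain decay rate (both illustrative).
model = lambda source, decay: source * np.exp(-decay * 100.0)

fuzzy_source = (80.0, 100.0, 130.0)   # triangular: (min, mode, max)
fuzzy_decay = (0.005, 0.01, 0.02)

for alpha, (lo, hi) in propagate(model, (fuzzy_source, fuzzy_decay)).items():
    print(f"alpha={alpha:.2f}: [{lo:.2f}, {hi:.2f}]")
```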
26

A neural network construction method for surrogate modeling of physics-based analysis

Sung, Woong Je 04 April 2012 (has links)
A connectivity-adjusting learning algorithm, Optimal Brain Growth (OBG), was proposed. In contrast to conventional training methods for Artificial Neural Networks (ANN), which focus on weight-only optimization, the OBG method trains both the weights and the connectivity of a network in a single training process. The standard Back-Propagation (BP) algorithm was extended to exploit the error-gradient information of latent connections whose current weight is zero. Based on this, the OBG algorithm makes a rational decision between further adjusting an existing connection weight and creating a new connection with zero weight. The training efficiency of a growing network is maintained by freezing stabilized connections in the subsequent optimization process. A stabilized computational unit is also decomposed into two units, and a particular set of decomposition rules guarantees a seamless local re-initialization of the training trajectory. The OBG method was tested on multiple canonical regression and classification problems and on surrogate modeling of the pressure distribution on transonic airfoils. It showed improved learning capability, in a computationally efficient manner, compared to conventional weight-only training using connectivity-fixed Multilayer Perceptrons (MLPs).
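The following is a heavily simplified, hedged sketch of the connectivity-growing idea rather than the OBG algorithm itself (it omits connection freezing and unit decomposition): a 0/1 mask marks active input-to-hidden connections, back-propagation computes gradients for latent (masked) connections as well, and the latent connection with the dominant error gradient is activated at zero weight. All sizes and data are toy values.

```python
# Hedged, simplified connectivity-growing sketch (not OBG itself).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 6, 8, 1, 0.05

X = rng.normal(size=(256, n_in))
y = np.sin(X[:, :2].sum(1, keepdims=True))           # toy regression target

W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
mask = (rng.random(W1.shape) < 0.3).astype(float)     # start sparsely connected
W1 *= mask                                            # latent connections start at zero
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))

for epoch in range(500):
    H = np.tanh(X @ (W1 * mask))                      # hidden activations
    out = H @ W2
    err = out - y
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1.0 - H ** 2)
    gW1 = X.T @ gH / len(X)                           # gradients for ALL connections
    W2 -= lr * gW2
    W1 -= lr * gW1 * mask                             # only active weights move
    latent = gW1 * (1.0 - mask)
    if np.abs(latent).max() > np.abs(gW1 * mask).max():
        i, j = np.unravel_index(np.abs(latent).argmax(), latent.shape)
        mask[i, j] = 1.0                              # grow: create the new connection

print("active connections:", int(mask.sum()), "final MSE:", float((err ** 2).mean()))
```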
27

Seedlet Technology for anomaly detection

Patton, Michael Dean. January 2002 (has links)
Thesis (Ph. D.)--Mississippi State University. Department of Electrical and Computer Engineering. / Includes bibliographical references.
28

A HYBRID FUZZY/GENETIC ALGORITHM FOR INTRUSION DETECTION IN RFID SYSTEMS

Geta, Gemechu 16 November 2011 (has links)
Various established and emerging applications of RFID technology have been and are being implemented by companies in different parts of the world. However, RFID technology raises a variety of security and privacy concerns, as it is prone to attacks such as eavesdropping, denial of service, tag cloning, and user tracking. This is mainly because RFID tags, specifically low-cost tags, have too little computational capability to support complex cryptographic algorithms. Tag cloning is a key problem to be considered since it leads to severe economic losses. One possible approach to address tag cloning is an intrusion detection system. Intrusion detection systems in RFID networks, on top of the existing lightweight cryptographic algorithms, provide an additional layer of protection where other security mechanisms may fail. This thesis presents an intrusion detection mechanism that detects anomalies caused by one or more cloned RFID tags in the system. We make use of a Hybrid Fuzzy Genetics-Based Machine Learning algorithm to design an intrusion detection model from RFID system-generated event logs. For training and evaluation of the proposed approach, part of the RFID system-generated dataset provided by the University of Tasmania’s School of Computing and Information Systems was used, in addition to simulated datasets. The results of our experiments show that the model can achieve high detection rates and low false positive rates when identifying anomalies caused by one or more cloned tags. In addition, the model yields linguistically interpretable rules that can be used to support decision making during the detection of anomalies caused by cloned tags.
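A toy, hedged sketch of a fuzzy genetics-based rule learner in the spirit described above (not the thesis implementation): a genetic algorithm evolves the antecedent of a single fuzzy if-then rule that flags cloned-tag behaviour from two hypothetical, normalized event-log features. The features, membership functions, and simulated data are illustrative assumptions.

```python
# Hedged toy sketch: GA search over the antecedent of one fuzzy if-then rule.
import numpy as np

rng = np.random.default_rng(3)

def membership(x, label):                      # label: 1=low, 2=medium, 3=high
    """Triangular membership functions for {low, medium, high} on [0, 1]."""
    centers = {1: 0.0, 2: 0.5, 3: 1.0}
    return np.clip(1.0 - np.abs(x - centers[label]) / 0.5, 0.0, 1.0)

def firing_strength(X, rule):
    """Min-AND of the memberships; gene 0 means 'don't care' for that feature."""
    strength = np.ones(len(X))
    for j, gene in enumerate(rule):
        if gene != 0:
            strength = np.minimum(strength, membership(X[:, j], gene))
    return strength

def fitness(rule, X, y, threshold=0.5):
    pred = firing_strength(X, rule) >= threshold
    return (pred == y).mean()                  # plain accuracy on the toy data

# Toy event-log features (normalized): read rate and location spread.
# Cloned tags are simulated as "high read rate AND high location spread".
X = rng.random((400, 2))
y = (X[:, 0] > 0.7) & (X[:, 1] > 0.6)

pop = rng.integers(0, 4, size=(30, 2))         # 30 candidate antecedents
for gen in range(40):
    fit = np.array([fitness(r, X, y) for r in pop])
    parents = pop[np.argsort(fit)[-10:]]       # truncation selection
    children = parents[rng.integers(0, 10, size=(30, 2)), np.arange(2)]  # uniform crossover
    mutate = rng.random(children.shape) < 0.1
    children[mutate] = rng.integers(0, 4, size=mutate.sum())
    pop = children

best = pop[np.argmax([fitness(r, X, y) for r in pop])]
labels = {0: "don't care", 1: "low", 2: "medium", 3: "high"}
print("best rule: IF rate is", labels[best[0]], "AND spread is", labels[best[1]],
      "THEN cloned;  accuracy =", fitness(best, X, y))
```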
29

New methods of mathematical modeling of human behavior in the manual tracking task

George, Gary R. January 2008 (has links)
Thesis (Ph. D.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Mechanical Engineering, 2008. / Includes bibliographical references.
30

FORMALIZATION AND IMPLEMENTATION OF GENERALIZED CONSTRAINT LANGUAGE FOR REALIZATION OF COMPUTING WITH WORDS

Sahebkar Khorasani, Elham Sahebkar 01 December 2012 (has links)
The Generalized Constraint Language (GCL), introduced by Zadeh, is the essence of Computing with Words (CW). It provides an agenda for representing the meaning of imprecise words and phrases in natural language and introduces advanced techniques for reasoning on imprecise knowledge. Despite its fundamental role, the definition of GCL has remained informal since its introduction by Zadeh and, to our knowledge, no attempt has been made to formalize GCL or to build a working GCL deduction system. In this dissertation, two main interrelated objectives are pursued. First, the syntax and semantics of GCL are formalized in a logical setting; the notion of soundness of a GCL argument is defined, and Zadeh's inference rules are proven sound in the defined language. Second, a CW Expert System Shell (CWSHELL) is implemented as the realization of a GCL deduction system. The CWSHELL software allows users to express their knowledge in terms of GCL formulas and pose queries to a GCL knowledge base. The richness of the GCL language allows CWSHELL to greatly surpass current fuzzy logic expert systems in both its knowledge representation and its reasoning capabilities. While many available fuzzy logic toolboxes can only represent knowledge in terms of fuzzy if-then rules, CWSHELL goes beyond simple fuzzy conditional statements and performs a chain of reasoning on complex fuzzy propositions containing generalized constraints, fuzzy arithmetic expressions, fuzzy quantifiers, and fuzzy relations. To explore the application of CWSHELL, a realistic case study is developed to compute an auto insurance premium from an imprecise knowledge base. The alpha version of CWSHELL, along with the case study and documentation, is available for download at http://cwjess.cs.siu.edu/.
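A hedged sketch of one Computing-with-Words deduction step of the kind CWSHELL chains together (not the CWSHELL software itself): an observed possibilistic constraint on X and a fuzzy if-then rule are combined by the compositional rule of inference (sup-min composition) to obtain the induced constraint on Y. The universes and fuzzy sets, loosely modelled on the auto-insurance case study, are illustrative assumptions.

```python
# Hedged sketch of generalized modus ponens over discretized universes.
import numpy as np

x_dom = np.linspace(0, 50, 101)     # e.g. driver age (illustrative)
y_dom = np.linspace(0, 2000, 101)   # e.g. insurance premium (illustrative)

def trap(u, a, b, c, d):
    """Trapezoidal membership function on universe u."""
    return np.clip(np.minimum((u - a) / max(b - a, 1e-9),
                              (d - u) / max(d - c, 1e-9)), 0.0, 1.0)

A_young = trap(x_dom, 16, 18, 25, 30)        # rule antecedent: "age is young"
B_high = trap(y_dom, 1200, 1400, 2000, 2100) # rule consequent: "premium is high"
A_obs = trap(x_dom, 20, 22, 24, 26)          # observed constraint: "age is about 23"

# Fuzzy relation of the rule via the min implication, R(x, y) = min(A(x), B(y)).
R = np.minimum.outer(A_young, B_high)

# Compositional rule of inference: B'(y) = sup_x min(A_obs(x), R(x, y)).
B_induced = np.max(np.minimum(A_obs[:, None], R), axis=0)

peak = y_dom[B_induced.argmax()]
print(f"induced constraint on Y peaks near {peak:.0f} "
      f"with possibility {B_induced.max():.2f}")
```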
