11

知識管理中推論機制之研究–應用在信用卡行銷 / A Study of Inference Mechanisms in Knowledge Management: An Application to Credit Card Marketing

葛世豪 (Ko, Shih Hao). Unknown date.
In personal banking, competition among credit card products is intense, and marketing staff design numerous promotional campaigns to encourage cardholders to spend. To support the design of new campaigns and the evaluation of campaign benefits, bank IT departments typically keep adding plug-in programs on top of the existing system. Every new campaign then requires another plug-in, which threatens the stability of the existing system, increases its complexity, and raises the difficulty and cost of maintenance. This research applies symbolic logic and object-oriented concepts to analyze how marketing staff design credit card campaigns, and proposes a rule model and a fact model. The rule model provides a general symbolic-logic architecture and expresses the logic in a computer-processable form through the XML-based RuleML standard; the fact model defines a general fact-template architecture that standardizes the mapping between facts and the underlying database. A rule-based inference engine was built into a working prototype to validate the feasibility of the approach. By structuring and storing the marketing staff's knowledge in the rule and fact models, the approach makes the logic symbols reusable and extensible, provides flexibility and guidelines for future modification, and removes the need to add a new plug-in system for every new campaign.
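The rule-and-fact separation described in this abstract can be illustrated with a minimal sketch: campaign rules live as data rather than as plug-in code, and a small engine matches them against transaction facts. All campaign names, fields, and thresholds below are invented for illustration and are not from the thesis.

```python
# Minimal sketch of a rule-based campaign engine: rules are data, not plug-ins.
# Campaign names, fields, and thresholds are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                          # campaign identifier
    condition: Callable[[dict], bool]  # predicate over a "fact" (transaction)
    reward: str                        # benefit granted when the rule fires

RULES = [
    Rule("weekend_dining_cashback",
         lambda f: f["category"] == "dining" and f["weekday"] in ("Sat", "Sun"),
         "5% cashback"),
    Rule("high_spend_bonus",
         lambda f: f["amount"] >= 3000,
         "500 bonus points"),
]

def applicable_rewards(fact: dict) -> list[str]:
    """Return the rewards of every rule whose condition matches the fact."""
    return [r.reward for r in RULES if r.condition(fact)]

fact = {"category": "dining", "weekday": "Sat", "amount": 3200}
print(applicable_rewards(fact))  # ['5% cashback', '500 bonus points']
```

Adding a new campaign under this design means appending a `Rule` entry (or, in the thesis, a RuleML document) rather than writing a new plug-in system.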
12

USING RULE-BASED METHODS AND MACHINE LEARNING FOR SHORT ANSWER SCORING

Pihlqvist, Fredrik; Mulongo, Benedith. January 2018.
Automatic scoring of short free-text answers is an area that spans natural language processing and machine learning: machine learning is used to predict the correctness of candidate answers, and natural language processing to analyze the text and extract important underlying relationships. Several approximate solutions exist today, ranging from rule-based methods to machine learning methods. This study examines how automatic answer scoring can be addressed by combining the two: it implements a rule-based method, a machine learning method, and a final combination of both for a given dataset. The combined method is evaluated by its performance relative to the rule-based and machine learning methods alone. The results show no increase in accuracy for the combined method compared to the machine learning method by itself. However, the combined method achieves nearly the same accuracy while using only a small amount of labeled data, which is a positive result. Further investigation in this area is needed; this thesis is only a small contribution of new approaches and methods to automatic short answer scoring.
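A minimal sketch of the combining idea follows: a rule score blended with an ML probability. The keyword-overlap rule, the TF-IDF plus logistic-regression model, and the equal blend weight are all assumptions for illustration, not the thesis's actual setup.

```python
# Sketch of combining a rule-based score with an ML prediction for
# short-answer scoring. Rules, model, and blend weight are illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def rule_score(answer: str, required_keywords: set[str]) -> float:
    """Fraction of required keywords present in the answer."""
    tokens = set(answer.lower().split())
    return len(required_keywords & tokens) / len(required_keywords)

# Tiny labeled set: 1 = correct answer, 0 = incorrect.
train_answers = ["photosynthesis converts light to chemical energy",
                 "plants eat soil to grow",
                 "light energy becomes chemical energy in plants",
                 "the sun is very hot"]
train_labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(train_answers)
model = LogisticRegression().fit(X, train_labels)

def combined_score(answer: str, keywords: set[str], w: float = 0.5) -> float:
    """Blend the rule score and the model's probability of correctness."""
    ml = model.predict_proba(vec.transform([answer]))[0, 1]
    return w * rule_score(answer, keywords) + (1 - w) * ml

print(combined_score("light is converted to chemical energy",
                     {"light", "chemical", "energy"}))
```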
13

Metodologia para aferição do nível de maturidade associado à interoperabilidade técnica nas ações de Governo Eletrônico / Assessment methodology for E-Government technical interoperability maturity level

Corrêa, Andreiwid Sheffer. 23 November 2012.
The unstructured and unplanned implementation of technological solutions wastes resources and stands as a barrier to realizing the potential benefits of information and communication technologies. The problem worsens when the managers of these technologies work in public administration, where structural issues leave room for solutions that are merely temporary, strictly proprietary, experimental, or doomed to obsolescence, resulting in interoperability problems. The potential damage thus extends beyond financial cost and compromises the expected social return. To avoid this problem, several countries are developing and adopting government interoperability frameworks to guide their electronic-government initiatives. These frameworks document successful solutions for the technical, semantic, and organizational dimensions of interoperability and reflect each government's understanding of the best path forward. For the technical dimension, however, there has been no way to evaluate the effective use of these frameworks or to assess how interoperable the solutions actually are. This work proposes a maturity model for technical interoperability that measures the use of interoperability standards and helps software and systems engineers, as well as practitioners in general, focus their efforts on technologies recommended by good practice. The model is based on the e-PING architecture, the Brazilian interoperability standard. In addition, the work develops a rule-based system that applies fuzzy logic to support the evaluation of adherence to the model. To verify the model's feasibility and validate the system, a real scenario serves as the basis for the interoperability analysis.
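A minimal sketch of how fuzzy logic can grade adherence to a maturity model: triangular membership functions map a standards-adoption percentage onto low/medium/high maturity, and the strongest membership wins. The breakpoints are invented for illustration and are not taken from e-PING.

```python
# Sketch of fuzzy maturity scoring with triangular membership functions.
# The low/medium/high breakpoints are illustrative, not e-PING's.

def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def maturity(adoption_pct: float) -> str:
    """Classify a 0-100 standards-adoption percentage by strongest membership."""
    memberships = {
        "low":    triangular(adoption_pct, -1, 0, 50),
        "medium": triangular(adoption_pct, 25, 50, 75),
        "high":   triangular(adoption_pct, 50, 100, 101),
    }
    return max(memberships, key=memberships.get)

print(maturity(62))  # 'medium': 62% sits closest to the medium peak
```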
14

建立本體式財務會計資訊系統 / Construct an Ontology-based Financial Accounting Information System

黃炳榮. Unknown date.
The financial accounting information system (AIS) is a fundamental and essential enterprise system: it provides internal and external users with information on business performance, supporting management by internal operators and investment decisions by external investors. Because regulations change frequently and business strategies evolve, however, accounting information systems face high update and maintenance costs. This research proposes an ontology-based system architecture. It first adapts the REA model proposed by W.E. McCarthy in 1982 and uses ontology-engineering methods to build a financial accounting ontology that describes business processes and accounting knowledge. Rule-based system techniques are then used to build access rules and an interface on top of the ontology, which together present the system's appearance and functions. When regulations or requirements change, only the ontology content is modified, flexibly updating the processes and operating rules within the system. This design reduces maintenance cost and increases flexibility.
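A minimal sketch of the REA (Resource-Event-Agent) pattern the ontology builds on: an economic event links a resource to the agents involved, and posting logic is read from a rule table rather than hard-coded, so a regulatory change means editing the table (or, in the thesis, the ontology) instead of the code. The account names and the single posting rule are illustrative assumptions.

```python
# Sketch of the REA pattern with posting rules kept as data.
# Account names and the "sale" rule are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Resource:
    name: str           # e.g. cash, inventory

@dataclass
class Agent:
    name: str           # e.g. the enterprise, a customer

@dataclass
class Event:
    kind: str           # e.g. "sale"
    resource: Resource
    amount: float
    provider: Agent
    receiver: Agent

# Changing accounting treatment means editing this table, not the code.
POSTING_RULES = {"sale": ("debit: cash", "credit: sales revenue")}

def journal_entry(e: Event) -> list[str]:
    """Produce debit/credit lines for an event from the rule table."""
    debit, credit = POSTING_RULES[e.kind]
    return [f"{debit} {e.amount}", f"{credit} {e.amount}"]

sale = Event("sale", Resource("cash"), 1000.0, Agent("customer"), Agent("firm"))
print(journal_entry(sale))  # ['debit: cash 1000.0', 'credit: sales revenue 1000.0']
```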
15

Sistema Especialista para Supressão Online de Alarmes em Processos Industriais / Expert System for Online Alarm Suppression in Industrial Processes

Souza, Danilo Curvelo de. 01 February 2013.
Operating industrial processes is becoming more complex every day, and one factor contributing to this growth in complexity is the integration of new technologies and intelligent solutions into the industry, such as decision support systems. This dissertation develops a decision support system based on a computational tool known as an expert system. The main goal is to make operation more reliable and secure while maximizing the amount of information relevant to each situation, using an expert system whose rules are designed for a particular area of expertise. For modeling these rules, a high-level environment is proposed that allows rules to be created and manipulated easily through visual programming. Despite its wide range of possible applications, this dissertation focuses on real-time alarm filtering during operation, validated in a case study based on a real scenario from an industrial plant of an oil and gas refinery.
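A minimal sketch of rule-based alarm suppression: each rule names a parent alarm that, while active, explains away its consequential child alarms, so the operator sees only the likely root causes. The tag names and parent/child pairs are invented for illustration.

```python
# Sketch of online alarm filtering with suppression rules.
# Tag names and parent/child relationships are illustrative assumptions.

SUPPRESSION_RULES = {
    # parent alarm          -> alarms it explains away while active
    "PUMP_101_TRIP":        {"FLOW_LOW_101", "PRESSURE_LOW_101"},
    "POWER_FEED_A_FAILURE": {"PUMP_101_TRIP", "COMPRESSOR_201_TRIP"},
}

def filter_alarms(active: set[str]) -> set[str]:
    """Return only the alarms not explained by another active parent alarm."""
    suppressed = set()
    for parent, children in SUPPRESSION_RULES.items():
        if parent in active:
            suppressed |= children & active
    return active - suppressed

burst = {"PUMP_101_TRIP", "FLOW_LOW_101", "PRESSURE_LOW_101", "TANK_HIGH_301"}
print(filter_alarms(burst))  # {'PUMP_101_TRIP', 'TANK_HIGH_301'}
```

In a real deployment the rule set would be authored in the visual environment the dissertation describes and evaluated continuously against the live alarm stream.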
16

Visualization of tabular data on mobile devices / Visualisering av tabulär data på mobila enheter

Caspár, Sophia. January 2018.
This thesis evaluates various ways of displaying tabular data on mobile devices using different responsive table solutions. It also presents a tool that helps web developers and designers choose and implement a suitable table approach. The proposed solution is a web system called The Visualizing Wizard, which lets the user answer some questions about the intended table and then generates a recommended responsive table solution based on the answers. The system uses a rule-based approach via Prolog to match the answers against a set of rules and produce an appropriate result. To determine which table solutions suit which types of data, a statistical analysis and user tests were performed. The statistical analysis identifies the most common table approaches and data types used on various websites; the results indicate that solutions such as "squish", "collapse by rows", "click" and "scroll" are the most common, and that the most common table categories are product comparison, product offerings, sports, and stock market/statistics. This information was used to design user tests that collected feedback and opinions. The data and statistics gathered from the user tests were mapped into sets of rules answering the question of which responsive table solution is more appropriate for which type of data; these rules form the foundation of The Visualizing Wizard.
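The thesis performs the answer-to-rule matching in Prolog; the following Python sketch mimics that matching step. The question keys and recommended table patterns are invented for illustration.

```python
# Sketch of the wizard's rule matching: each rule pairs conditions over the
# questionnaire answers with a recommended responsive-table solution.
# Question keys and recommendations are illustrative assumptions.

RULES = [
    # (conditions over the answers, recommended responsive-table solution)
    ({"columns": "many", "comparison_needed": True},  "scroll"),
    ({"columns": "many", "comparison_needed": False}, "collapse by rows"),
    ({"columns": "few"},                              "squish"),
]

def recommend(answers: dict) -> str:
    """Return the solution of the first rule whose conditions all hold."""
    for conditions, solution in RULES:
        if all(answers.get(k) == v for k, v in conditions.items()):
            return solution
    return "no recommendation"

print(recommend({"columns": "many", "comparison_needed": True}))  # 'scroll'
```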
17

Le repérage automatique des entités nommées dans la langue arabe : vers la création d'un système à base de règles / Automatic named-entity recognition in Arabic: towards the creation of a rule-based system

Zaghouani, Wajdi. January 2009.
Thesis digitized by the Records Management and Archives Division of the Université de Montréal.
18

A scalable evolutionary learning classifier system for knowledge discovery in stream data mining

Dam, Hai Huong. Information Technology & Electrical Engineering, Australian Defence Force Academy, UNSW. January 2008.
Data mining (DM) is the process of finding patterns and relationships in databases. Breakthroughs in computer technology have triggered massive growth in the data collected and maintained by organisations. In many applications these data arrive continuously, in large volumes, as a sequence of instances known as a data stream, and mining them is known as stream data mining. Because of the volume of data arriving in a stream, each record is normally expected to be processed only once. Moreover, this processing may be carried out simultaneously at different sites in the organisation, making the problem distributed in nature. Distributed stream data mining poses many challenges to the data mining community, including scalability and coping with changes in the underlying concept over time. In this thesis, the author hypothesizes that learning classifier systems (LCSs), a class of classification algorithms, have the potential to work efficiently in distributed stream data mining. LCSs are incremental learners, and being evolutionary based they are inherently adaptive. However, they suffer from two main drawbacks that hinder their use as fast data mining algorithms. First, they require a large population size, which slows the processing of arriving instances. Second, they require many parameter settings, some of them very sensitive to the nature of the learning problem, which makes it difficult to choose the right setup for totally unknown problems. The aim of this thesis is to attack these two problems in LCS, with a specific focus on UCS, a supervised evolutionary learning classifier system. UCS is chosen because it has been tested extensively on classification tasks and is the supervised version of XCS, a state-of-the-art LCS. The thesis first introduces an architectural design for a distributed stream data mining system; the problems UCS faces in a distributed data stream task are confirmed through a large number of experiments with UCS and the proposed design. To overcome the problem of large population sizes, the thesis proposes using a neural network to represent the action in UCS. This new system, called NLCS, was validated experimentally using a small fixed population size and showed a large reduction in the population size needed to learn the underlying concept in the data. An adaptive version of NLCS called ANCS is then introduced, which dynamically controls the population size of NLCS. A comprehensive analysis of the behaviour of ANCS revealed interesting patterns in the behaviour of the parameters, which motivated an ensemble version of the algorithm with nine nodes, each using a different parameter setting; together the nodes cover all the patterns of behaviour observed in the system, and a voting gate combines their outputs. The resulting ensemble requires no parameter setting and showed better performance on all datasets tested. The thesis concludes by testing ANCS in the architectural design for distributed environments proposed earlier. The contributions of the thesis are: (1) reducing the UCS population size by an order of magnitude using a neural representation; (2) introducing a mechanism for adapting the population size; (3) proposing an ensemble method that does not require parameter setting; and, primarily, (4) showing that the proposed LCS can work efficiently for distributed stream data mining tasks.
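A minimal sketch of the ensemble's voting gate: nine nodes, each standing in for an ANCS instance run with a different parameter setting, classify an instance, and a simple majority decides. The threshold classifiers below are illustrative stand-ins, not actual ANCS nodes.

```python
# Sketch of a nine-node majority voting gate. The threshold classifiers
# stand in for ANCS instances with different parameter settings.

from collections import Counter

def make_node(threshold: float):
    """Stand-in node: classifies an instance by one feature threshold."""
    return lambda x: 1 if x["feature"] > threshold else 0

# Nine nodes, one per parameter setting covered by the ensemble.
nodes = [make_node(t / 10) for t in range(1, 10)]

def voting_gate(x: dict) -> int:
    """Majority vote across all nine nodes."""
    votes = Counter(node(x) for node in nodes)
    return votes.most_common(1)[0][0]

print(voting_gate({"feature": 0.55}))  # five of nine nodes vote 1 -> 1
```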