511
Interactive event-based intelligent scheduling. Zhang, Xiaomei. 04 June 2008
This study presents an integrated event-based scheduling model based on an object-oriented method and a knowledge-based methodology. To complete the model, the integration of vision and scheduling systems is taken a step further, particularly with regard to event processing, data integration and interface design. Building on this, three knowledge-based domain schedulers are presented as scheduling control mechanisms. To complete the integrated scheduling system, scheduling strategies and methods for general environments are developed further, and a wide knowledge base model is introduced. Finally, a case study based on the management and manufacturing environments of Omega Holdings Ltd is conducted using the proposed scheduling model. The author hopes that the integrated event-based scheduling system will serve as an effective scheduling tool for manufacturing and industrial-management environments alike. The thesis comprises three sections. The first provides an overview of the scheduling literature, including scheduling types, methods and technologies in a manufacturing environment; it also discusses current approaches to scheduling and their respective limitations, introduces an integrated model for interactive event-based intelligent scheduling, and gives a detailed function analysis of the model based on its architecture. The second section holds the key to the thesis: it discusses knowledge-based domain schedulers for interactive scheduling and the implementation of three such schedulers based on an object-oriented concept and event-based scheduling strategies, after which the model of a wide integrated knowledge base is developed further. Finally, an interactive event-based intelligent scheduling system is developed for a dynamic manufacturing environment, and the proposed scheduling tool and system are evaluated. A case study undertaken in an existing holding company illustrates how to realise interactive event-based intelligent scheduling and how to improve the management function in a dynamic environment. The thesis culminates in a summary of the pros and cons of the proposed system, and concludes with possible areas for future research, such as multilayer scheduling in a distributed environment. / Prof. E.M. Ehlers
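As a rough illustration of the event-based control idea only (not the thesis's actual model), the following minimal Python sketch shows a knowledge-based domain scheduler that dispatches incoming events to registered rules; every class, event kind and rule here is invented:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Event:
    kind: str        # e.g. "machine_breakdown", "rush_order" (illustrative)
    payload: dict

class DomainScheduler:
    """A knowledge-based domain scheduler: rules map event kinds to actions."""
    def __init__(self, name: str):
        self.name = name
        self.rules: dict[str, Callable[[Event], str]] = {}

    def rule(self, kind: str):
        """Register a scheduling rule for one kind of event."""
        def register(fn: Callable[[Event], str]) -> Callable[[Event], str]:
            self.rules[kind] = fn
            return fn
        return register

    def handle(self, event: Event) -> Optional[str]:
        action = self.rules.get(event.kind)
        return action(event) if action else None

shop_floor = DomainScheduler("shop-floor")

@shop_floor.rule("machine_breakdown")
def reroute(event: Event) -> str:
    return f"reassign queued jobs from {event.payload['machine']} to backup line"

print(shop_floor.handle(Event("machine_breakdown", {"machine": "M3"})))
```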
512
Economic modelling using computational intelligence techniques. Khoza, Msizi Smiso. 09 December 2013
M.Ing. (Electrical & Electronic Engineering Science) / Economic modelling tools have gained popularity in recent years due to the increasing need for greater knowledge to assist policy makers and economists. A number of computational intelligence approaches have been proposed for economic modelling. Most of these approaches focus on the accuracy of prediction, and little research has investigated the interpretability of the decisions derived from these systems. This work proposes the use of computational intelligence techniques (rough set theory (RST) and the multi-layer perceptron (MLP) model) to model the South African economy. RST is a rule-based technique suitable for analysing vague, uncertain and imprecise data: it extracts rules from the data to model the system, and these rules are used both for prediction and for interpreting the decision process. The fewer the rules, the easier the model is to interpret. The performance of RST depends on the discretization technique employed; equal-frequency binning (EFB), Boolean reasoning (BR), entropy partitioning (EP) and the Naïve algorithm (NA) were each used to develop an RST model. The model trained on EFB data performs better than the models trained on BR and EP data. RST was used to model South Africa's financial sector, where accuracies of 86.8%, 57.7%, 64.5% and 43% were achieved for EFB, BR, EP and NA respectively. This work also proposes an ensemble of rough set theory and the multi-layer perceptron model to model the South African economy, in which a prediction of the direction of the gross domestic product is presented. This work further proposes the use of an auto-associative neural network to impute missing economic data. The auto-associative neural network imputed the ten variables, or attributes, used in the prediction model: Construction contractors rating lack of skilled labour as constraint, Tertiary economic sector contribution to GDP, Income velocity of circulation of money, Total manufacturing production volume, Manufacturing firms rating lack of skilled labour as constraint, Total asset value of banking industry, Nominal unit labour cost, Total mass of Platinum Group Metals (PGMs) mined, Total revenue from sale of PGMs, and Gross Domestic Expenditure (GDE). The level of imputation accuracy achieved varied with the attribute, ranging from 85.9% to 98.7%.
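For readers unfamiliar with the EFB step named above, a minimal sketch of equal-frequency binning follows; the function name, the synthetic series and the bin count are invented for illustration, and the thesis's own procedure may differ in detail:

```python
import numpy as np

def equal_frequency_bins(values: np.ndarray, n_bins: int) -> np.ndarray:
    """Discretise a continuous attribute into bins holding roughly equal counts."""
    edges = np.percentile(values, np.linspace(0, 100, n_bins + 1))
    edges = np.unique(edges)                 # tied data can collapse edges
    # Interior edges act as cut points; labels run from 0 to n_bins - 1.
    return np.digitize(values, edges[1:-1], right=True)

rng = np.random.default_rng(0)
gdp_growth = rng.normal(2.5, 1.0, size=200)  # stand-in for a real series
labels = equal_frequency_bins(gdp_growth, 4)
print(np.bincount(labels))                   # roughly 50 observations per bin
```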
513
An implementation framework for knowledge-based engineering projects. Mvudi, Yannick. 27 May 2013
M.Ing. (Engineering Management) / The growing need for customized solutions and faster product delivery obliges the product development industry to develop new strategies that enable the rapid and flexible design of products. Several design approaches have been developed to address this issue; one such is Knowledge-Based Engineering (KBE), a design technique that enables the automation of the design process. The approach uses computational intelligence to capture the design rules related to a product family in order to generate automatically customized designs adapted to particular customer requirements. Knowledge-Based Engineering is also used to facilitate design evaluation activities such as finite element analysis (FEA) and computational fluid dynamics (CFD) as part of multi-disciplinary design optimization (MDO). The application of this approach has led to impressive results, mostly in the automotive and aeronautical industries; owing to this method, some companies have managed to reduce the duration of the design process by 90%. Despite these excellent results, very few companies make use of the approach in their design process. A review of the relevant literature showed that the lack of a standard, easy-to-use implementation methodology is one of the major obstacles to the expansion of Knowledge-Based Engineering. The knowledge processing phase constitutes one of the main challenges of the KBE implementation process: it consists of extracting and documenting the knowledge embedded in the design team in order to convert it into program code. Available methodologies such as MOKA and KNOMAD do not provide easy-to-use methods to represent the design knowledge in a form that is readily programmed. The lack of a preliminary stage that justifies the adequacy of KBE for a particular design process is another important gap identified in the literature. This dissertation discusses a detailed method that addresses the issues of knowledge processing and suitability analysis in KBE implementation. The suggested knowledge processing method is based on the Work Breakdown Structure (WBS), which is used widely in the systems engineering approach and consists of a very logical classification of the design knowledge. The strength of this method lies in its ability to represent the design knowledge in a form that is understandable to both engineers and programmers; such a representation shortens the knowledge processing phase and facilitates the knowledge programming phase. Regarding the rationale for choosing KBE, a detailed suitability assessment method is proposed.
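A minimal sketch of the WBS idea in Python follows; the hierarchy and the design rules attached to its leaves are invented for illustration and are not taken from the dissertation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WBSNode:
    """One element of design knowledge placed in a work-breakdown hierarchy."""
    name: str
    rule: Optional[str] = None               # a captured design rule, if any
    children: list["WBSNode"] = field(default_factory=list)

    def walk(self, depth: int = 0):
        """Yield (depth, node) pairs in depth-first order."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

# Invented example: breaking a wing design down to rule-bearing leaves.
wing = WBSNode("wing", children=[
    WBSNode("spar", rule="spar depth = 12% of local chord"),
    WBSNode("rib",  rule="rib pitch no greater than 600 mm"),
])

for depth, node in wing.walk():
    print("  " * depth + node.name + (f": {node.rule}" if node.rule else ""))
```

The point of such a structure is that an engineer can validate the rules while a programmer traverses the same tree to generate code, which is the bridging role the dissertation attributes to the WBS representation.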
514
Assessing the suitability of holonic control to the commodity petrochemical industry. Niemand, Marinus. 04 May 2005
Dissertation (MEng)--University of Pretoria, 2005. / Chemical Engineering / unrestricted
515
Cogitator: a parallel, fuzzy, database-driven expert system. Baise, Paul. 08 October 2012
The quest to build anthropomorphic machines has led researchers to focus on knowledge and the manipulation thereof. Recently, the expert system was proposed as a solution, working well in small, well-understood domains. However, these initial attempts highlighted the tedious process associated with building systems to display intelligence, the most notable obstacle being the Knowledge Acquisition Bottleneck. Attempts to circumvent this problem have led researchers to propose the use of machine learning databases as a source of knowledge, and attempts to utilise databases as sources of knowledge have in turn led to the development of Database-Driven Expert Systems. Furthermore, it has been ascertained that a requisite for intelligent systems is powerful computation. In response to these problems and proposals, a new type of database-driven expert system, Cogitator, is proposed. It is shown to circumvent the Knowledge Acquisition Bottleneck and to possess many other advantages over both traditional expert systems and connectionist systems, whilst suffering only minor disadvantages.
516
Trust on the semantic web. Cloran, Russell Andrew. 07 August 2006
The Semantic Web is a vision to create a “web of knowledge”: an extension of the Web as we know it, creating an information space usable by machines in very rich ways. The technologies which make up the Semantic Web allow machines to reason across information gathered from the Web, presenting only relevant results and inferences to the user. Users of the Web in its current form assess the credibility of the information they gather in a number of different ways. If processing happens without the user being able to check the source and credibility of each piece of information used, the user must be able to trust that the machine has used trustworthy information at each step; the machine should therefore be able to assess automatically the credibility of each piece of information it gathers from the Web. A case study on advanced checks for website credibility is presented, and the site examined is found to be credible, despite failing many of the checks described. A website with a backend based on RDF technologies was constructed, yielding a better understanding of RDF technologies and good knowledge of the RAP and Redland RDF application frameworks. The second aim of constructing the website was to gather information for testing various trust metrics; however, the website did not gain widespread support, and not enough data was gathered for this. Techniques for presenting RDF data to users, developed during website development, are also discussed. Experiences in gathering RDF data are presented next: a scutter was successfully developed, and the data smushed to create a database in which uniquely identifiable objects are linked, even where gathered from different sources. Finally, the use of digital signatures as a means of linking an author and the content produced by that author is presented. RDF/XML canonicalisation is discussed in the provision of ideal cryptographic checking of RDF graphs, rather than simply checking at the document level, and the notion of canonicalisation on the semantic, structural and syntactic levels is proposed. A combination of an existing canonicalisation algorithm and a restricted RDF/XML dialect is presented as a solution to the RDF/XML canonicalisation problem. We conclude that a trusted Semantic Web is possible, with buy-in from publishing and consuming parties.
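A minimal sketch of the smushing step follows, assuming rdflib and FOAF's mbox as the inverse functional property; the input file names are hypothetical, and chains of duplicate nodes are not handled, so this is a simplification of what a real scutter pipeline would need:

```python
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

def smush(g: Graph, ifp) -> Graph:
    """Merge nodes sharing a value for an inverse functional property (IFP)."""
    canonical = {}   # IFP value -> first node seen carrying that value
    replace = {}     # duplicate node -> its canonical stand-in
    for node, value in g.subject_objects(ifp):
        if value in canonical and canonical[value] != node:
            replace[node] = canonical[value]
        else:
            canonical[value] = node
    merged = Graph()
    for s, p, o in g:
        merged.add((replace.get(s, s), p, replace.get(o, o)))
    return merged

g = Graph()
g.parse("crawl_a.rdf")   # hypothetical files produced by the scutter
g.parse("crawl_b.rdf")
people = smush(g, FOAF.mbox)
```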
517
Towards a framework for building security operation centers. Jacobs, Pierre Conrad. January 2015
In this thesis a framework for Security Operation Centers (SOCs) is proposed. It was developed by utilising Systems Engineering best practices combined with industry-accepted standards and frameworks, such as the TM Forum’s eTOM framework, CoBIT, ITIL and ISO/IEC 27002:2005. The framework encompasses the design considerations, the operational considerations and the means to measure the effectiveness and efficiency of SOCs. The intent is to give consumers guidance on how to compare and measure the capabilities of SOCs provided by disparate service providers, and to give service providers (internal and external) a framework to use when building and improving their offerings. Providing a consistent, measurable and guaranteed service to customers is becoming more important as the focus on holistic management of security increases, which has in turn increased the number of both internal and managed service provider solutions. While some frameworks exist for designing, building and operating specific security technologies used within SOCs, we did not find any comprehensive framework for designing, building and managing SOCs. Consequently, consumers of SOCs do not enjoy a consistent experience from vendors, and may experience inconsistent services from geographically dispersed offerings provided by the same vendor.
518
A knowledge-based system for estimating the duration of cast in place concrete activities. Diaz Zarate, Gerardo Daniel. 01 January 1992
No description available.
519
Knowledge representation and problem solving for an intelligent tutoring system. Li, Vincent. January 1990
As part of an effort to develop an intelligent tutoring system, a set of knowledge representation frameworks was proposed to represent expert domain knowledge. A general representation of time points and temporal relations was developed to facilitate temporal concept deductions as well as the explanation capabilities vital in an intelligent advisor system. Conventional representations of time use a single-referenced timeline and assign a single unique value to the time of occurrence of an event. They fail to capture the notion of events, such as changes in signal states in microcomputer systems, which do not occur at precise points in time, but rather over a range of time with some probability distribution. Time is, fundamentally, a relative quantity; in conventional representations, this relative relation is implicitly defined against a fixed reference, "time-zero", on the timeline. This definition is insufficient if an explanation of the temporal relations is to be constructed. The proposed representation of time solves these two problems by representing a time point as a time-range and making the reference point explicit.

An architecture of the system was also proposed to provide a means of integrating various modules as the system evolves, as well as a modular development approach. A production-rule EXPERT based on the rule framework used in the Graphic Interactive LISP tutor (GIL) [44, 45], an intelligent tutor for LISP programming, was implemented to demonstrate the inference process using this time-point representation. The EXPERT is goal-driven and is intended to be an integral part of a complete intelligent tutoring system. / Applied Science, Faculty of / Electrical and Computer Engineering, Department of / Graduate
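A minimal sketch of the time-point-as-range idea follows, assuming Python; the field names, units and comparison predicates are invented for illustration and are not the thesis's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class TimePoint:
    """A time 'point' kept as a range relative to an explicit reference event,
    rather than a single value on an implicit time-zero timeline."""
    reference: str    # the named reference event, made explicit
    earliest: float   # lower bound on occurrence (e.g. nanoseconds after it)
    latest: float     # upper bound on occurrence

    def surely_before(self, other: "TimePoint") -> bool:
        assert self.reference == other.reference, "need a common reference"
        return self.latest < other.earliest

    def possibly_before(self, other: "TimePoint") -> bool:
        assert self.reference == other.reference, "need a common reference"
        return self.earliest < other.latest

# A signal that settles over a range, as in the microcomputer example above.
data_valid = TimePoint("CLK rising edge", 5.0, 12.0)
latch      = TimePoint("CLK rising edge", 15.0, 20.0)
print(data_valid.surely_before(latch))    # True: the ranges cannot overlap
```

Making the reference an explicit field is what lets an advisor explain a deduction ("data is valid 5 to 12 ns after the clock edge, the latch fires no earlier than 15 ns after it"), instead of comparing opaque absolute values.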
520
Developing conceptual frameworks for structuring legal knowledge to build knowledge-based systems. Deedman, Galvin Charles. 05 1900
This dissertation adopts an interdisciplinary approach to the field of law and artificial intelligence. It argues that the conceptual structuring of legal knowledge within an appropriate theoretical framework is of primary importance when building knowledge-based systems. While technical considerations also play a role, they must take second place to an in-depth understanding of the law.

Two alternative methods of structuring legal knowledge in very different domains are used to explore the thesis. A deep-structure approach is used on nervous shock, a rather obscure area of the law of negligence. A script-based method is applied to impaired driving, a well-known part of the criminal law. A knowledge-based system is implemented in each area. The two systems, Nervous Shock Advisor (NSA) and Impaired Driving Advisor (IDA), and the methodologies they embody, are described and contrasted.

In light of the work undertaken, consideration is given to the feasibility of lawyers without much technical knowledge using general-purpose tools to build knowledge-based systems for themselves. / Graduate and Postdoctoral Studies / Graduate
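A minimal sketch of how a script-based method might apply to impaired driving follows, assuming Python; the offence elements listed are invented for illustration and are not IDA's actual rule base:

```python
# Slots stand in for the elements of the offence; a script is satisfied
# only when the facts establish every slot.
impaired_driving = [
    "care_or_control_of_vehicle",
    "ability_impaired_or_over_limit",
    "on_a_road_or_public_place",
]

def advise(elements: list[str], facts: dict[str, bool]) -> str:
    """Report whether the facts fill every slot of the offence script."""
    missing = [e for e in elements if not facts.get(e)]
    if not missing:
        return "All elements present: the offence appears made out."
    return "Elements not established: " + ", ".join(missing)

facts = {
    "care_or_control_of_vehicle": True,
    "ability_impaired_or_over_limit": True,
}
print(advise(impaired_driving, facts))   # flags the unproven element
```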