421 |
Determining critical success factors for implementation of on-line registration systems
Thompson, Robyn Cindy January 2017 (has links)
Submitted in fulfillment of the requirements for the degree of the Master of Information Technology, Durban University of Technology, Durban, South Africa, 2017. / The task of identifying Critical Success Factors (CSFs) for the successful implementation of Enterprise Resource Planning (ERP) systems has become an important problem in information systems (IS) research. The need to identify CSFs is evident from the failures often associated with ERP system implementation in corporate organisations. Investigating and identifying CSFs will help cut the cost of implementing ERP systems in organisations by giving higher precedence to the most critical factors. The literature indicates that some factors of ERP system implementation labelled as critical are, in most cases, not critical for achieving success in the ERP system implementation. It can be argued that the inherent prediction error in the identification of CSFs is associated with the method employed for identifying criticality. Several researchers have asserted that many of the studies on CSFs have based their findings on content analysis to identify and classify implementation factors of ERP systems as critical or not, rather than on empirical findings. This intrinsic drawback has led researchers to suggest the use of sound scientific methods, such as structural equation modelling, to identify CSFs and so help guide the implementation of ERP systems in organisations. However, because of the limitations of existing findings, the problem of identifying CSFs remains, in general, far from resolved.
The overarching aim of this study was to determine the factors that are deemed critical for the successful implementation of the on-line registration system, as an archetype of an ERP system, at higher education institutions (HEIs). It was necessary, firstly, to identify common factors that have a significant impact on ERP system implementation and, secondly, to ascertain whether the identified factors are applicable in HEI settings, particularly to the on-line registration system. The study undertakes an in-depth exploration of the implementation of an on-line registration system, with the identified factors forming the precursor to unearthing those factors that are critical to the success of implementing on-line registration systems. The study adopted a post-positivist mixed-methods approach to identify and verify CSFs of on-line registration system implementation, taking into consideration higher-order relationships between the factors. Data gathering took place using expert judgement, with the involvement of role players in the implementation of on-line registration systems. The ADVIAN classification method provides the analytic tool for identifying factors that are deemed critical for the successful implementation of on-line registration systems.
The results reveal the existence of various dimensions of criticality, with organisational culture and ERP strategy and implementation methodology emerging as critical factors, while the driving factors for implementation include ERP vendor support and guidance, senior and top management support, a project plan with agreed objectives and goals, project management to implement the plan, and project leadership. It is established that the driven factors that should be observed when intervention measures are implemented include change management, post-implementation evaluation, software testing and troubleshooting, user training and user involvement. It is hoped that the CSFs identified in this study will contribute to the under-researched area of ERP and pragmatically aid the improvement of a process area that is in desperate need of business process re-engineering at HEIs. / M
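The ADVIAN-style analysis summarised above can be sketched in a few lines: a cross-impact matrix yields an active sum (influence exerted) and a passive sum (influence received) per factor, from which driving, driven and critical roles fall out. The factor names, scores and simplified indices below are illustrative only, not the study's data.

```python
# Sketch of an ADVIAN-style classification from a cross-impact matrix.
# Factor names and impact scores are hypothetical; the real ADVIAN method
# uses normalised integration/activity/passivity indices.
factors = ["org_culture", "erp_strategy", "vendor_support", "change_mgmt"]

# impact[i][j]: how strongly factor i influences factor j (0..3)
impact = [
    [0, 3, 1, 2],
    [2, 0, 1, 3],
    [3, 2, 0, 1],
    [1, 1, 0, 0],
]

def classify(factors, impact):
    n = len(factors)
    active = [sum(impact[i]) for i in range(n)]                        # row sums: influence exerted
    passive = [sum(impact[i][j] for i in range(n)) for j in range(n)]  # column sums: influence received
    results = {}
    for i, name in enumerate(factors):
        criticality = active[i] * passive[i]    # simplified: high on both axes = critical
        if active[i] > passive[i]:
            role = "driving"
        elif active[i] < passive[i]:
            role = "driven"
        else:
            role = "balanced"
        results[name] = {"active": active[i], "passive": passive[i],
                         "criticality": criticality, "role": role}
    return results
```

With these toy scores, vendor support comes out as driving and change management as driven, mirroring the pattern the abstract reports.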
|
422 |
OVR : a novel architecture for voice-based applications / Ontologies, VoiceXML and Reasoners
Maema, Mathe 01 April 2011 (has links)
Despite the inherent limitation of accessing information serially, voice applications are growing in popularity as computing technologies advance. This is a positive development, because voice communication offers a number of benefits over other forms of communication. For example, voice may be better for delivering services to users whose eyes and hands may be engaged in other activities (e.g. driving) or to semi-literate or illiterate users. This thesis proposes a knowledge-based architecture for building voice applications to help reduce the limitations of serial access to information. The proposed architecture, called OVR (Ontologies, VoiceXML and Reasoners), uses a rich backend that represents knowledge via ontologies and utilises reasoning engines to reason with it, in order to generate intelligent behaviour. Ontologies were chosen over other knowledge representation formalisms because of their expressivity and executable format, and because current trends suggest a general shift towards the use of ontologies in many systems used for information storing and sharing. For the frontend, this architecture uses VoiceXML, the emerging de facto standard for voice automated applications. A functional prototype was built for an initial validation of the architecture. The system is a simple voice application to help locate information about service providers that offer HIV (Human Immunodeficiency Virus) testing. We called this implementation HTLS (HIV Testing Locator System). The functional prototype was implemented using a number of technologies. OWL API, a Java interface designed to facilitate manipulation of ontologies authored in OWL, was used to build a customised query interface for HTLS. The Pellet reasoner was used for supporting queries to the knowledge base, and Drools (the JBoss rule engine) was used for processing dialog rules. VXI was used as the VoiceXML browser, and an experimental softswitch called iLanga served as the bridge to the telephony system.
(At the heart of iLanga is Asterisk, a well-known PBX-in-a-box.) HTLS behaved properly under system testing, providing the initial validation sought for OVR.
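As a rough sketch of the HTLS idea (the real system answers VoiceXML dialogs from an OWL ontology via the OWL API and the Pellet reasoner), a plain dictionary can stand in for the knowledge base; the provider names and towns below are invented.

```python
# Illustrative stand-in for the OVR backend: a dict plays the knowledge base
# and a function plays the query interface. The real HTLS queries an OWL
# ontology with a reasoner; provider names and towns here are made up.
KB = {
    ("hiv_testing", "Grahamstown"): ["Settlers Day Clinic", "Raphael Centre"],
    ("hiv_testing", "Port Alfred"): ["Port Alfred Hospital"],
}

def find_providers(service, town):
    """Answer a single voice-dialog query against the knowledge base."""
    return KB.get((service, town), [])

def dialog_turn(town):
    """Render one spoken response of the locator dialog."""
    providers = find_providers("hiv_testing", town)
    if not providers:
        return f"Sorry, I found no HIV testing sites in {town}."
    return f"In {town} you can get tested at: " + ", ".join(providers) + "."
```

The point of the ontology-backed version is that `find_providers` would be answered by a reasoner, so implicit facts (e.g. a clinic classified under a broader service category) are returned without being stored explicitly.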
|
423 |
A knowledge-oriented, context-sensitive architectural framework for service deployment in marginalized rural communities
Thinyane, Mamello P January 2009 (has links)
The notion of a global knowledge society is somewhat of a misnomer, because large portions of the global community are not participants in this global knowledge society, which is driven and shaped by, and socio-technically biased towards, a small fraction of the global population. Information and Communication Technology (ICT) is culture-sensitive, and this is a dynamic that is largely ignored in the majority of ICT for Development (ICT4D) interventions, leading to the flaw of technological determinism and ultimately to the failure of the undertaken projects. The deployment of ICT solutions, in particular in the context of ICT4D, must be informed by the cultural and socio-technical profile of the deployment environments, and the solutions themselves must be developed with a focus on context-sensitivity and ethnocentricity. In this thesis, we investigate the viability of a software architectural framework for the development of ICT solutions that are context-sensitive and ethnocentric, and so aligned with the cultural and social dynamics within the environment of deployment. The conceptual framework, named PIASK, defines five tiers (presentation, interaction, access, social networking, and knowledge base) which allow for: behavioural completeness of the layer components; a modular and functionally decoupled architecture; and the flexibility to situate and contextualize the developed applications along the dimensions of the User Interface (UI), interaction modalities, usage metaphors, underlying Indigenous Knowledge (IK), and access protocols. We have developed a proof-of-concept service platform, called KnowNet, based on the PIASK architecture. KnowNet is built around the knowledge base layer, which consists of domain ontologies that encapsulate the knowledge in the platform, with an intrinsic flexibility to access secondary knowledge repositories.
The domain ontologies constructed (as examples) are for the provisioning of eServices to support societal activities (e.g. commerce, health, agriculture, medicine) within the rural and marginalized area of Dwesa, in the Eastern Cape province of South Africa. The social networking layer allows for situating the platform within the local social systems. Heterogeneity of user profiles and multiplicity of end-user devices are handled through the access and presentation components, and the service logic is implemented by the interaction components. This service platform validates the PIASK architecture for end-to-end provisioning of multi-modal, heterogeneous, ontology-based services. The development of KnowNet was informed on one hand by the latest trends within service architectures, semantic web technologies and social applications, and on the other hand by context considerations based on the profile (IK systems dynamics, infrastructure, usability requirements) of the Dwesa community. The realization of the service platform is based on the JADE Multi-Agent System (MAS), and this shows the applicability and adequacy of MASs for service deployment in a rural context, while providing key advantages such as platform fault-tolerance, robustness and flexibility. While the context of conceptualization of PIASK and the implementation of KnowNet is that of rurality and of ICT4D, the applicability of the architecture extends to other similarly heterogeneous and context-sensitive domains. KnowNet has been validated for functional and technical adequacy, and we have also undertaken an initial pre-validation for social context sensitivity.
We observe that the five-tier PIASK architecture provides an adequate framework for developing context-sensitive and ethnocentric software: by functionally separating and making explicit the social networking and access tier components, while still maintaining the traditional separation of presentation, business logic and data components.
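The five decoupled PIASK tiers can be caricatured as a chain of components, each talking only to its neighbour so that any layer can be swapped to suit the local context. Class and method names here are illustrative, not taken from the KnowNet code base.

```python
# Sketch of the five PIASK tiers as functionally decoupled components; each
# tier depends only on the one below it. Names and data are illustrative.
class KnowledgeBase:          # domain ontologies
    def lookup(self, topic):
        return {"crafts": "Local beadwork sellers: ..."}.get(topic, "no entry")

class SocialNetworking:       # situates queries in the local social system
    def __init__(self, kb): self.kb = kb
    def query(self, user, topic): return self.kb.lookup(topic)

class Access:                 # protocol adaptation (e.g. HTTP, SMS, voice)
    def __init__(self, sn): self.sn = sn
    def handle(self, user, topic): return self.sn.query(user, topic)

class Interaction:            # service logic / usage metaphors
    def __init__(self, access): self.access = access
    def run(self, user, topic): return self.access.handle(user, topic)

class Presentation:           # renders for the end-user device
    def __init__(self, ia): self.ia = ia
    def render(self, user, topic): return f"[{user}] {self.ia.run(user, topic)}"

# Wire the stack top-down; any tier can be replaced without touching the rest.
stack = Presentation(Interaction(Access(SocialNetworking(KnowledgeBase()))))
```

Because each tier holds only a reference to its neighbour, swapping, say, the presentation tier for a voice renderer leaves the other four untouched, which is the decoupling property the architecture claims.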
|
424 |
Developing A Dialogue Based Knowledge Acquisition Method For Automatically Acquiring Expert Knowledge To Diagnose Mechanical Assemblies
Madhusudanan, N 12 1900 (links) (PDF)
Mechanical assembly is an important step during product realization, an integrative process that brings together the parts of the assembly, the people performing the assembly and the various technologies involved. Assembly planning involves deciding on the assembly sequence, the tooling and the processes to be used, and should enable the actual assembly process to be as effective as possible. Assembly plans may have to be revised due to issues arising during assembly. Many of these revisions can be avoided at the planning stage if assembly planners have prior knowledge of these issues and how to resolve them. General guidelines for making assembly easier (e.g. Design for Assembly) are usually suited to mass-manufactured assemblies and are applied where similar issues are faced regularly. However, for very specific issues that are unique to certain domains, such as aircraft assembly, only expert knowledge in that domain can identify and resolve the issues.
Assembly experts are the sources of knowledge for identifying and resolving these issues. If assembly planners could receive assembly experts' advice about the potential issues and resolutions likely to arise in a given assembly situation, they could use this advice to revise the assembly plan and avoid the issues. This link between assembly experts and planners can be provided using knowledge-based systems, which contain a knowledge base to store experts' knowledge and an inference engine that derives conclusions using this knowledge. However, knowledge acquisition for such systems is a difficult process with substantial resistance to being automated. Methods reported in the literature propose various ways of addressing the problem of automating knowledge acquisition, but their many limitations have motivated the research work reported in this thesis. This thesis proposes a dialogue-like method of questioning assembly experts to acquire their knowledge automatically. The questions are asked in the context of an assembly situation shown to the expert, who is asked to identify potential issues and suggest resolutions; the knowledge required for diagnosing these issues and their resolutions is thereby captured during the interviews. This knowledge is translated into rules for a knowledge-based system, which can then be used to advise assembly planners about potential issues and solutions in an assembly situation.
After a manual verification, the questioning procedure has been implemented in software as a tool named EXpert Knowledge Acquisition and Validation (ExKAV). A preliminary evaluation of ExKAV has been carried out, in which assembly experts interacted with the tool using the researcher as an intermediary. The results of these sessions are discussed in the thesis and assessed against the original research objectives. The current limitations of the procedure and its implementation are highlighted, and potential directions for improving the knowledge acquisition process are discussed.
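The core idea, turning an expert's answers about a shown situation into diagnostic rules, can be sketched as follows; the rule fields and the sample assembly issue are invented for illustration, not taken from ExKAV.

```python
# Sketch of translating one expert-interview exchange into an if-then rule
# and firing such rules against a new assembly situation. The condition
# format, field names and sample issue are invented for illustration.
def answers_to_rule(situation, issue, resolution):
    """Record one expert exchange: shown situation, identified issue, advice."""
    return {"if": situation, "then_issue": issue, "advise": resolution}

rules = [
    answers_to_rule({"clearance_mm": ("<", 5), "tool": "torque wrench"},
                    "tool cannot reach fastener",
                    "re-sequence: fit fastener before mating the panel"),
]

def diagnose(situation, rules):
    """Fire every rule whose conditions hold in the given situation."""
    advice = []
    for r in rules:
        ok = True
        for key, cond in r["if"].items():
            if isinstance(cond, tuple):            # ("<", 5) style numeric test
                op, val = cond
                ok &= (situation.get(key, float("inf")) < val) if op == "<" else False
            else:                                  # exact-match test
                ok &= situation.get(key) == cond
        if ok:
            advice.append((r["then_issue"], r["advise"]))
    return advice
```

A planner-facing system would run `diagnose` over the planned situation and surface the fired rules as warnings before the plan is released to the shop floor.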
|
425 |
An Approach Towards Self-Supervised Classification Using Cyc
Coursey, Kino High 12 1900 (links)
Due to the long duration required for manual knowledge entry by human knowledge engineers, it is desirable to find methods to automatically acquire knowledge about the world by accessing online information. In this work I examine using the Cyc ontology to guide the creation of Naïve Bayes classifiers to provide knowledge about items described in Wikipedia articles. Given an initial set of Wikipedia articles, the system uses the ontology to create positive and negative training sets for the classifiers in each category. The order in which classifiers are generated and used to test articles is also guided by the ontology. The research conducted shows that a system can be created that utilizes statistical text classification methods to extract information from an ad-hoc generated information source like Wikipedia for use in a formal semantic ontology like Cyc. Benefits and limitations of the system are discussed along with future work.
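The ontology-guided training-set idea can be illustrated with a toy example: sibling categories in the hierarchy supply negative examples for each category's Naïve Bayes classifier. The taxonomy and article snippets below are made up, and the classifier is reduced to Laplace-smoothed log-odds.

```python
# Toy illustration: siblings in a taxonomy provide the negative training set
# for a category's Naive Bayes classifier. Taxonomy and texts are invented.
import math
from collections import Counter

taxonomy = {"Animal": ["Dog", "Bird"]}          # parent -> sibling categories
articles = {
    "Dog":  ["the dog barked at the mail man", "a loyal dog breed"],
    "Bird": ["the bird sang in the tree", "a migratory bird species"],
}

def train(pos_docs, neg_docs):
    """Count word frequencies for the positive and negative classes."""
    pos, neg = Counter(), Counter()
    for d in pos_docs: pos.update(d.split())
    for d in neg_docs: neg.update(d.split())
    vocab = set(pos) | set(neg)
    return pos, neg, len(vocab)

def log_odds(text, model):
    """Positive score => the text looks like the positive category."""
    pos, neg, v = model
    n_pos, n_neg = sum(pos.values()), sum(neg.values())
    score = 0.0
    for w in text.split():                      # Laplace-smoothed word odds
        score += math.log((pos[w] + 1) / (n_pos + v))
        score -= math.log((neg[w] + 1) / (n_neg + v))
    return score

# Siblings of "Dog" under the taxonomy supply its negative examples.
dog_model = train(articles["Dog"], articles["Bird"])
```

The ontology's role is exactly the last line: it decides *which* documents count as negatives, so no human has to label them.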
|
426 |
Sistema especialista para a trefilação a frio de barras de aço / Expert system for steel bar drawing
Gomes, Ivan Alexandre Cotrick, 1960- 28 August 2018 (links)
Advisor: Sérgio Tonini Button / Dissertation (master's) - Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica
Previous issue date: 2015 / Abstract: The main difficulty in the plastic forming process of cold drawing steel bars is the tool design. Even today, designers rely on charts and tables built over time through experimental successes and failures. Such collections of information are usually restricted to companies or to a few technicians within their private libraries. Because of the difficulty of access to the practical information needed to design tools and more complex processes, the manufacture of cold drawn profiles has lost ground to other, sometimes more expensive, production methods. This work compiles charts and tables collected over my 30 years in cold drawing tool design - either from my own experience or from results inherited from the private collections of other technicians - and also the scarce bibliography on the subject, organizing a database for modelling a method that assertively assists the design tasks for cold drawing steel bar tools, using commercial software. The system is not final; the program is open source and documented in detail so that it can be analysed and improved, with points of attention indicated where information should be inserted or changed, making it easy to adapt to the particularities of the production process where it is applied / Mestrado / Materiais e Processos de Fabricação / Mestre em Engenharia Mecânica
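The kind of chart-and-table knowledge the work compiles lends itself to simple interpolated lookup; the sketch below picks a die semi-angle for a given area reduction, with placeholder values rather than the thesis data.

```python
# Illustrative table lookup in the spirit of the compiled drawing charts:
# interpolate a suggested die semi-angle for a given area reduction.
# The numbers below are placeholders, not values from the thesis.
TABLE = [  # (area reduction %, suggested die semi-angle in degrees)
    (10, 6.0),
    (20, 8.0),
    (30, 10.0),
]

def die_semi_angle(reduction_pct):
    """Linearly interpolate the suggested semi-angle, clamping at the ends."""
    pts = sorted(TABLE)
    if reduction_pct <= pts[0][0]:
        return pts[0][1]
    if reduction_pct >= pts[-1][0]:
        return pts[-1][1]
    for (r0, a0), (r1, a1) in zip(pts, pts[1:]):
        if r0 <= reduction_pct <= r1:
            t = (reduction_pct - r0) / (r1 - r0)
            return a0 + t * (a1 - a0)
```

An expert-system layer would wrap many such tables and chain their lookups with rules (material, friction, pass sequence) rather than leaving each chart in a private notebook.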
|
427 |
Issues of civil liability arising from the use of expert systems
Alheit, Karin 08 1900 (links)
Computers have become indispensable in all walks of life, causing people to rely increasingly on their accurate performance. Defective computer programs, the incorrect use of computer programs and the non-use of computer programs can cause serious damage. Expert systems are an application of artificial intelligence techniques whereby the human reasoning process is simulated in a computer system, enabling the system to act as a human expert when executing a task. Expert systems are used by professional users as an aid in reaching a decision and by non-professional users to solve a problem or to decide upon a specific course of action. As such they can be compared to a consumer product through which professional services are sold. The various parties that may possibly be held liable in the event of damage suffered through the use of expert systems fall into two main groups, namely the producers and the users. Because standard-form computer contracts frequently exempt liability for any consequential loss, the injured user may often have only a delictual action at her disposal. The fault-based delictual actions in South African law give inadequate protection to unsuspecting software users who incur personal and property damage through the use of defective expert systems, since it is almost impossible for an unsophisticated injured party to prove the negligence of the software developer during the technical production process. For this reason it is recommended that software liability be grounded on strict liability, in analogy to the European Directive on Liability for Defective Products. It is also pointed out that software standards and quality assurance procedures have a major role to play in the determination of the elements of wrongfulness and negligence in software liability, and that the software industry should be accorded professional status to ensure a safe standard of computer programming. / Private Law / LL.D.
|
428 |
The construction and use of an ontology to support a simulation environment performing countermeasure evaluation for military aircraft
Lombard, Orpha Cornelia 05 1900 (links)
This dissertation describes a research study conducted to determine the benefits and use of ontology technologies to support a simulation environment that evaluates countermeasures employed to protect military aircraft.
Within the military, aircraft represent a significant investment, and these valuable assets need to be protected against various threats, such as man-portable air-defence systems. To counter attacks from these threats, countermeasures are developed, deployed and evaluated using modelling and simulation techniques. The system described in this research simulates real-world scenarios of aircraft, missiles and countermeasures in order to assist in the evaluation of infra-red countermeasures against missiles in specified scenarios.
Traditional ontology has its origin in philosophy, describing what exists and how objects relate to each other. The use of formal ontologies in Computer Science has brought new possibilities for the modelling and representation of information and knowledge in several domains. These advantages also apply to military information systems, where ontologies support the complex nature of military information. After weighing ontologies and their advantages against the requirements for enhancements to the simulation system, an ontology was constructed by following a formal development methodology. Design research, combined with an adaptive development methodology, was conducted in a unique way, thereby contributing to establishing design research as a formal research methodology. The ontology was constructed to capture the knowledge of the simulation system environment, and its use supports the functions of the simulation system in the domain.
The research study contributes to better communication among the people involved in the simulation studies, accomplished through a shared vocabulary and a knowledge base for the domain. These contributions affirm that ontologies can be successfully used to support military simulation systems. / Computing / M. Tech. (Information Technology)
|
429 |
Socio-semantic conversational information access
Sahay, Saurav 15 November 2011 (links)
The main contributions of this thesis revolve around the development of an integrated conversational recommendation system, combining data and information models with community networks and interactions to leverage multi-modal information access. We have developed a real-time conversational information access community agent that leverages community knowledge by pushing relevant recommendations to users of the community. The recommendations are delivered in the form of web resources, past conversations and people to connect to. The information agent (cobot, for community/collaborative bot) monitors the community conversations and is 'aware' of users' preferences, implicitly capturing their short-term and long-term knowledge models from conversations. The agent draws on health and medical domain knowledge to extract concepts, associations and relationships between concepts; formulates queries for semantic search; and provides socio-semantic recommendations in the conversation after applying various relevance filters to the candidate results. The agent also takes into account users' verbal intentions in conversations when making recommendation decisions.
One of the goals of this thesis is to develop an innovative approach to delivering relevant information using a combination of social networking, information aggregation, semantic search and recommendation techniques. The idea is to facilitate timely and relevant social information access by mixing past community specific conversational knowledge and web information access to recommend and connect users with relevant information.
Language and interaction create usable memories, useful for making decisions about what actions to take and what information to retain. Cobot leverages these interactions to maintain users' episodic and long-term semantic models. The agent analyzes these memory structures to match and recommend users in conversations according to the contextual information need. Social feedback on the recommendations is registered in the system so that the algorithms can promote community-preferred, contextually relevant resources.
The nodes of the semantic memory are frequent concepts extracted from users' interactions. The concepts are connected by associations that develop when concepts co-occur frequently. Over time, as the user participates in more interactions, new concepts are added to the semantic memory. Different conversational facets are matched against episodic memories, and a spreading activation search over the semantic net is performed to generate the top candidate user recommendations for the conversation.
The unifying themes in this thesis revolve around the informational and social aspects of a unified information access architecture that integrates semantic extraction and indexing with user modeling and recommendations.
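The spreading activation search mentioned above can be sketched over a small concept network; the graph, association weights and decay factor below are invented for illustration.

```python
# Minimal spreading-activation sketch over a weighted concept network, in the
# spirit of matching conversation facets to a user's semantic memory.
# Graph, weights, decay and threshold are illustrative, not from the thesis.
def spread(graph, seeds, decay=0.5, threshold=0.05):
    """Propagate activation from seed concepts along weighted associations."""
    activation = dict(seeds)                 # concept -> accumulated activation
    frontier = list(seeds)
    while frontier:
        node, energy = frontier.pop()
        for neighbour, weight in graph.get(node, []):
            passed = energy * weight * decay
            if passed > threshold:           # prune negligible activation
                activation[neighbour] = activation.get(neighbour, 0.0) + passed
                frontier.append((neighbour, passed))
    return activation

graph = {
    "diabetes": [("insulin", 0.9), ("diet", 0.6)],
    "insulin":  [("dosage", 0.8)],
}
# Seed with a concept extracted from the current conversation facet.
act = spread(graph, [("diabetes", 1.0)])
```

Concepts with the highest resulting activation would then be matched against other users' memories to rank candidate people and resources to recommend.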
|
430 |
Risk-based proactive availability management - attaining high performance and resilience with dynamic self-management in Enterprise Distributed Systems
Cai, Zhongtang 10 January 2008 (links)
Complex distributed systems have come to play a serious role in industry and society: distributed information-flow systems that continuously acquire, manipulate and disseminate information across an enterprise's distributed sites and machines, and distributed server applications co-deployed in one or more shared data centers, each with different performance and availability requirements that vary over time and compete for the shared resources. Consequently, it has become more important for enterprise-scale IT infrastructure to provide timely and sustained, reliable delivery and processing of service requests. Despite more than 30 years of progress in distributed computer connectivity, availability and reliability, this has not become easier [ReliableDistributedSys], for many reasons: the increasing complexity of enterprise-scale computing infrastructure; the distributed nature of these systems, which makes them prone to failures, for example because of inevitable Heisenbugs in complex distributed systems; the need to consider diverse and complex business objectives and policies, including risk preferences and attitudes, in enterprise computing; conflicts between performance and availability, and the varying importance of sub-systems in an enterprise's distributed infrastructure as they compete for resources in today's typically shared environments; and the best-effort nature of resources such as network resources, which makes resource availability itself an issue.
This thesis proposes a novel business-policy-driven, risk-based automated availability management approach, which uses an automated decision engine to make availability decisions and meet business policies while optimizing overall system utility, uses utility theory to capture users' risk attitudes, and addresses the potentially conflicting business goals and resource demands in enterprise-scale distributed systems.
For critical and complex enterprise applications, since a key contributor to application utility is the time taken to recover from failures, we develop a novel proactive fault tolerance approach, which uses online methods for failure prediction to dynamically determine the acceptable amounts of additional processing and communication resources to be used (i.e., costs) to attain certain levels of utility with acceptable delays in failure recovery.
Since resource availability itself is often not guaranteed in typical shared enterprise IT environments, this thesis provides IQ-Paths with probabilistic service guarantees, to address the dynamic network behavior of realistic enterprise computing environments. The risk-based formulation is used as an effective way to link the operational guarantees expressed by utility and enforced by the PGOS algorithm with the higher-level business objectives sought by end users.
Together, this thesis proposes a novel availability management framework and methods for large-scale enterprise applications and systems, with the goal of providing different levels of performance and availability guarantees for multiple applications and sub-systems in a complex shared distributed computing infrastructure. More specifically, this thesis addresses the following problems. For data center environments: (1) how to provide availability management for applications and systems that vary both in resource requirements and in their importance to the enterprise, based both on operational-level quantities and on business-level objectives; (2) how to deal with managerial policies such as risk attitude; and (3) how to deal with the tradeoff between performance and availability, given limited resources in a typical data center. Since realistic business settings extend beyond single data centers, a second set of problems addressed in this thesis concerns predictable and reliable operation in wide-area settings. For such systems, we explore (4) how to provide high availability in widely distributed operational systems with low-cost fault tolerance mechanisms, and (5) how to provide probabilistic service guarantees given best-effort network resources.
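The risk-based decision at the heart of the proactive approach can be caricatured as an expected-utility comparison: spend resources on mitigation only when the predicted failure probability makes that cheaper than paying for recovery. The utilities, costs and probabilities below are illustrative numbers, not the thesis's model.

```python
# Sketch of a utility-driven proactive fault-tolerance decision: given an
# online failure-probability estimate, activate extra replication only when
# its expected utility beats doing nothing. All numbers are illustrative.
def expected_utility(p_fail, recovery_loss, action_cost, mitigated_loss):
    """Expected utility (negative cost) of each option."""
    u_do_nothing = -p_fail * recovery_loss              # risk the full recovery delay
    u_mitigate = -action_cost - p_fail * mitigated_loss  # pay up front, fail cheaply
    return u_do_nothing, u_mitigate

def decide(p_fail, recovery_loss=100.0, action_cost=5.0, mitigated_loss=10.0):
    """Pick the option with the higher expected utility."""
    nothing, mitigate = expected_utility(p_fail, recovery_loss,
                                         action_cost, mitigated_loss)
    return "replicate" if mitigate > nothing else "wait"
```

A risk-averse utility function would curve these linear losses, so the same predictor output triggers mitigation at a lower failure probability, which is how user risk attitudes enter the decision.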
|