111

Autonomous interface agents for assessing computer programs utilizing the Microsoft Windows 32-bit application programming interface.

Joubert, Gideon Francois 02 June 2008 (has links)
In order for an agent to be considered both an interface agent and autonomous, it follows that there must be some part of the interface that the agent must operate in an autonomous fashion. The user must be able to directly observe autonomous actions of the agent, and the agent must be able to observe actions taken autonomously by the user in the interface. The ability of a software agent to operate the same interface operated by the human user, and the ability of a software agent to act independently of, and concurrently with, the human user will become increasingly important characteristics of human-computer interaction. Agents will observe what human users do when they interact with interfaces, and provide assistance by manipulating the interface themselves while the user is thinking or performing other operations. Increasingly, applications will be designed to be operated simultaneously by users and their agents [1]. This study is motivated by the need to solve a problem of human resource optimization in the first-year informatics practical course as presented by the R.A.U. Standard Bank Academy for Information Technology. The major aim is the development of a prototype system capable of automatically grading first-year Microsoft Visual Basic.Net applications. The prototype system will ultimately render assistants obsolete in the grading process and free them to help students with problems related to the informatics course. Developing the envisaged prototype requires much preliminary reading on artificial intelligence and its applications, more specifically autonomous interface agent architecture. Case-based reasoning and machine learning have been identified as having great potential and applicability in the development and implementation of the envisaged prototype, and for this reason these topics will provide a foundation on which to build this dissertation. / Ehlers, E.M., Prof.
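
To make the idea of operating the user's own interface concrete, the following is a minimal sketch (Windows only) of how an agent might observe and drive a Win32 window through Python's ctypes bindings to user32.dll. The window title and button caption are hypothetical placeholders; this illustrates the general Win32 approach, not the thesis's actual grading system.

import ctypes

user32 = ctypes.windll.user32
BM_CLICK = 0x00F5  # standard Win32 message that presses a button

def read_window_title(hwnd):
    # Observe: read a window's text, just as the user would see it.
    buf = ctypes.create_unicode_buffer(256)
    user32.GetWindowTextW(hwnd, buf, 256)
    return buf.value

def click_button(parent_title, button_caption):
    # Act: locate a button inside a top-level window and click it.
    hwnd = user32.FindWindowW(None, parent_title)
    if not hwnd:
        raise RuntimeError(f"window {parent_title!r} not found")
    hbtn = user32.FindWindowExW(hwnd, None, "Button", button_caption)
    if not hbtn:
        raise RuntimeError(f"button {button_caption!r} not found")
    user32.SendMessageW(hbtn, BM_CLICK, 0, 0)

# e.g. click_button("Form1", "OK") to exercise a student's form under test

An agent built this way can both poll the interface for what the user (or the program under test) has done and perform the same actions itself, which is exactly the dual observe/act requirement stated above.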
112

An investigation of feature weighting algorithms and validation techniques using blind analysis for analogy-based estimation

Sigweni, Boyce B. January 2016 (has links)
Context: Software effort estimation is a very important component of the software development life cycle. It underpins activities such as planning, maintenance and bidding. Therefore, it has triggered much research over the past four decades, including many machine learning approaches. One popular approach, which has the benefit of accessible reasoning, is analogy-based estimation. Machine learning, including analogy, is known to benefit significantly from feature selection/weighting. Unfortunately, feature weighting search is an NP-hard problem and therefore computationally very demanding, if not intractable. Objective: One objective of this research is therefore to develop an efficient and effective feature weighting algorithm for estimation by analogy. However, a major challenge for the effort estimation research community is that experimental results tend to be contradictory and also lack reliability. This has been paralleled by a recent awareness of how bias can impact research results, which is a contributory reason why software effort estimation is still an open problem. Consequently, the second objective is to investigate research methods that might lead to more reliable results, focusing on blinding methods to reduce researcher bias. Method: In order to build on the most promising feature weighting algorithms, I conduct a systematic literature review. From this I develop a novel and efficient feature weighting algorithm. This is experimentally evaluated, comparing three feature weighting approaches with a naive benchmark using two industrial data sets. Using these experiments, I explore blind analysis as a technique to reduce bias. Results: The systematic literature review identified 19 relevant primary studies. A meta-analysis of the selected studies using a one-sample sign test (p = 0.0003) shows a positive effect for feature weighting in general compared with ordinary analogy-based estimation (ABE); that is, feature weighting is a worthwhile technique for improving ABE. Nevertheless, the results remain imperfect, so there is still much scope for improvement. My experience shows that blinding can be a relatively straightforward procedure. I also highlight various statistical analysis decisions which ought not to be guided by the hunt for statistical significance, and show that results can be inverted merely through a seemingly inconsequential statistical nicety. After analysing results from 483 software projects from two separate industrial data sets, I conclude that the proposed technique improves accuracy over standard feature subset selection (FSS) and traditional case-based reasoning (CBR) when using pseudo time-series validation. Interestingly, there is no strong evidence of superior performance for the new technique when traditional validation techniques (jackknifing) are used, although it is more efficient. Conclusion: There are two main findings. (i) Feature weighting techniques are promising for software effort estimation, but they need to be tailored to the target cases for their potential to be adequately exploited: although the findings show that allowing weights to differ in different parts of the instance space ('local' regions) may improve effort estimation results, the majority of studies in software effort estimation (SEE) do not take this into consideration. (ii) Whilst there are minor challenges and some limits to the degree of blinding possible, blind analysis is a very practical and easy-to-implement method that supports more objective analysis of experimental results. Therefore I argue that blind analysis should become the norm for analysing software engineering experiments.
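
For readers unfamiliar with analogy-based estimation, the core mechanism is a weighted nearest-neighbour lookup over past projects. The sketch below shows that mechanism only; the feature names, weights and project data are invented for illustration and are not the algorithm or data sets from this thesis.

import math

def weighted_distance(a, b, weights):
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def abe_predict(query, cases, weights, k=3):
    # cases: list of (feature_vector, actual_effort) pairs from past projects
    ranked = sorted(cases, key=lambda c: weighted_distance(query, c[0], weights))
    nearest = ranked[:k]
    return sum(effort for _, effort in nearest) / len(nearest)

# Hypothetical projects: (size_kloc, team_size, complexity) -> person-months
history = [
    ((10.0, 4, 2), 24.0),
    ((25.0, 6, 3), 60.0),
    ((5.0, 2, 1), 9.0),
    ((40.0, 8, 3), 110.0),
]
weights = (1.0, 0.5, 2.0)  # e.g. the output of a feature-weighting search
print(abe_predict((12.0, 4, 2), history, weights, k=2))

Feature subset selection (FSS) is the special case where every weight is forced to 0 or 1, which is one way to see why continuous weighting has more room to improve accuracy, and why searching for good weights is so much more expensive.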
113

A Mix Testing Process Integrating Two Manual Testing Approaches : Exploratory Testing and Test Case Based Testing

Shah, Syed Muhammad Ali, Alvi, Usman Sattar January 2010 (has links)
Software testing is a key phase in the software development life cycle. Testing objectives correspond to the discovery and detection of faults, which can be attained using manual or automated testing approaches. In this thesis, we are mainly concerned with manual test approaches. The most commonly used manual testing approaches in the software industry are the Exploratory Testing (ET) approach and the Test Case Based Testing (TCBT) approach. TCBT is primarily used by software testers to formalize and guide their testing tasks and set the theoretical principles for testing. ET, on the other hand, is simultaneous learning, test design, and test execution. Software testing might benefit from an intelligent combination of these approaches; however, there is no evidence of any formal process that accommodates the use of both test approaches in combination. This thesis presents a process for Mix Testing (MT) based on the strengths and weaknesses of both test approaches, identified through a systematic literature review and interviews with testers in a software organization. The new process is defined by mapping the weaknesses of one approach to the strengths of the other. Static validation of the MT process through interviews in the software organization suggested that MT can resolve the problems of both test approaches to some extent. Furthermore, MT was validated by conducting an experiment in an industrial setting. The analysis of the experimental results indicated that MT has better defect detection than TCBT but less than ET. In addition, the results also indicate that MT provides functionality coverage equal to that of ET and TCBT.
114

Multi-sensor Information Fusion for Classification of Driver's Physiological Sensor Data

Barua, Shaibal January 2013 (has links)
Physiological sensor signal analysis is common practice in the medical domain for the diagnosis and classification of various physiological conditions. Clinicians frequently use physiological sensor signals to diagnose an individual's psychophysiological parameters, i.e., stress, tiredness, and fatigue. However, parameters obtained from physiological sensors can vary with an individual's age, gender, physical condition, etc., and analysing data from a single sensor could mislead the diagnosis. Today, one proposition is that sensor signal fusion can provide a more reliable and efficient outcome than data from a single sensor, and it is becoming significant in numerous diagnostic fields, including medical diagnosis and classification. Case-Based Reasoning (CBR) is another well-established and recognized method in the health sciences. Here, an entropy-based algorithm, Multivariate Multiscale Entropy analysis, has been selected to fuse multiple sensor signals. Other physiological sensor measurements are also taken into consideration for system evaluation. A CBR system is proposed to classify 'healthy' and 'stressed' persons using both the fused features and other physiological features, i.e., Heart Rate Variability (HRV), Respiratory Sinus Arrhythmia (RSA), and Finger Temperature (FT). The evaluation and performance analysis of the system have been carried out, and the results of classification based on data fusion and on physiological measurements are presented in this thesis.
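
The abstract names Multivariate Multiscale Entropy as the fusion algorithm; below is a simplified, naive sketch of the two univariate building blocks that the multivariate variant generalizes, namely coarse-graining and sample entropy. The parameter defaults (m = 2, fixed tolerance r) follow common convention and are not taken from the thesis.

import math

def coarse_grain(signal, scale):
    # Average non-overlapping windows of length `scale`.
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    # SampEn = -ln(A/B), where B and A count template pairs of length
    # m and m+1 matching within tolerance r (Chebyshev distance).
    # r is conventionally set to 0.2 * std of the original signal.
    def count_matches(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    c += 1
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return float("inf") if a == 0 or b == 0 else -math.log(a / b)

def multiscale_entropy(signal, max_scale=5, m=2, r=0.2):
    return [sample_entropy(coarse_grain(signal, s), m, r)
            for s in range(1, max_scale + 1)]

The resulting entropy-per-scale profile is one fused feature vector a CBR system could store in each case alongside the HRV, RSA and FT features mentioned above.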
115

Case Based Reasoning method for analysis of Physiological sensor data

Islam, Asif Moinul January 2012 (has links)
Remote healthcare is a demanding and emergent research area. Rising healthcare costs in developed countries have pushed policy makers to look for alternatives to the traditional healthcare model. Although advances in sensor technology, the arrival of devices such as smartphones, and improvements in artificial intelligence have brought remote healthcare close to reality, plenty of issues remain to be solved before it becomes a commonly used healthcare model. In this thesis, two vital physiological parameters, pulse rate and oxygen saturation, were studied to unearth patterns using the Case-Based Reasoning technique. A three-tiered application focusing on remote healthcare was developed. The results of the thesis could serve as a starting point for further research on the two above-mentioned physiological parameters in order to detect anomalous health conditions.
117

Case-Based Argumentation in Agent Societies

Heras Barberá, Stella María 02 November 2011 (has links)
Nowadays, complex computer systems can be viewed in terms of the services they offer and the entities that interact to provide or consume those services. Open multi-agent systems, where agents can enter or leave the system, interact, and dynamically form groups (agent coalitions or organizations) to solve problems, have been proposed as a suitable technology to implement this new computing paradigm. However, the high dynamism of these systems requires that agents have a way of harmonizing the conflicts that arise when they have to collaborate and coordinate their activities. In these situations, agents need a mechanism to argue efficiently (to persuade other agents to accept their points of view, to negotiate the terms of a contract, etc.) and to reach agreements. Argumentation is a natural and effective means of addressing conflicts and contradictions in knowledge. By engaging in argumentation dialogues, agents can reach agreements with other agents. In an open multi-agent system, agents may form societies that link them through dependency relations. These relations may emerge from their interactions or be predefined by the system. In addition, agents may hold a set of individual or social values, inherited from the groups they belong to, that they want to promote. The dependencies between agents, the groups they belong to, and their individual and social values define the agent's social context. This context has a decisive influence on the way an agent can argue and reach agreements with other agents. Therefore, the social context of agents should have a decisive influence on the computational representation of their arguments and on the argument management process. / Heras Barberá, SM. (2011). Case-Based Argumentation in Agent Societies [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/12497 / Palancia
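
One well-known way to make value-driven conflict resolution concrete, in the spirit of (though not identical to) the framework developed in this thesis, is value-based argumentation: an attack between arguments only succeeds if the audience does not rank the attacked argument's value strictly higher. The sketch below is an illustration under that assumption; the class names and values are invented.

from dataclasses import dataclass

@dataclass
class Argument:
    claim: str
    promoted_value: str  # e.g. "safety", "economy"

def attack_succeeds(attacker, target, value_order):
    # value_order lists the audience's values, most preferred first.
    # The attack fails only if the target promotes a strictly
    # higher-ranked value than the attacker does.
    rank = {v: i for i, v in enumerate(value_order)}
    return rank[target.promoted_value] >= rank[attacker.promoted_value]

a1 = Argument("accept the contract", "economy")
a2 = Argument("reject the contract", "safety")

# For an agent whose society ranks safety above economy, a1's attack
# on a2 fails, so a2 survives the conflict:
print(attack_succeeds(a1, a2, ["safety", "economy"]))  # False

In a case-based variant, the arguments themselves, and the value preferences of the audience, would be retrieved from previous argumentation experiences rather than hand-coded.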
118

Modélisation intégratrice du traitement BigData / Integrative modeling of Big Data processing

Hashem, Hadi 19 September 2016 (has links)
Nowadays, multiple actors of Internet technology produce very large amounts of data. Sensors, social media, and e-commerce all generate information that grows in real time along the 3 Vs of Gartner: Volume, Velocity, and Variety. To exploit this data efficiently and sustainably, it is important to respect the dynamics of its chronological evolution by means of two main approaches: first, polymorphism, i.e., a dynamic model able to support type changes at any moment without processing failures; and second, support for data volatility by means of an intelligent model that considers only key data that are salient and valuable at a specific moment, instead of processing the whole volume of current and historical data. The primary goal of this study is to establish, based on these approaches, an integrative vision of the data life cycle in three steps: (1) data synthesis, by selecting the key values of the micro-data acquired by the various operators at the source; (2) data fusion, by sorting and duplicating the selected key values following a de-normalization approach in order to process the data faster; and (3) transformation into a specific map-of-maps-of-maps format, via Hadoop in the standard MapReduce process, in order to obtain the graph defined in the application layer. The study is further supported by a software prototype implementing the modelling operators described above, resulting in a modelling toolbox comparable to a CASE tool that allows assisted set-up of one or more BigData processing chains.
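
A minimal stand-in for the three steps described above, written outside Hadoop so it runs anywhere: a map phase that keeps only selected key values, an explicit shuffle, and a reduce phase that folds everything into a nested map of maps of maps. The schema (source -> day -> metric) and the averaging fusion rule are illustrative assumptions, not the thesis's actual operators.

from collections import defaultdict

records = [
    {"source": "sensor-A", "day": "2016-01-01", "metric": "temp", "value": 21.5},
    {"source": "sensor-A", "day": "2016-01-01", "metric": "temp", "value": 22.1},
    {"source": "shop", "day": "2016-01-01", "metric": "orders", "value": 3},
]

def map_phase(record):
    # Synthesis: emit only the selected key values of each micro-record.
    yield (record["source"], record["day"], record["metric"]), record["value"]

def reduce_phase(grouped):
    # Transformation: fold key/value groups into a map of maps of maps.
    out = defaultdict(lambda: defaultdict(dict))
    for (source, day, metric), values in grouped.items():
        out[source][day][metric] = sum(values) / len(values)  # one fusion choice
    return out

grouped = defaultdict(list)
for r in records:
    for key, value in map_phase(r):
        grouped[key].append(value)  # the shuffle step Hadoop would perform
print(reduce_phase(grouped))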
119

Multipurpose Case-Based Reasoning System, Using Natural Language Processing

Augustsson, Christopher January 2021 (has links)
Working as a field technician of any sort can often be challenging. You frequently find yourself alone with a machine you have limited knowledge about, and the only support you have is the user manual. As a result, it is not uncommon for companies to aid technicians with a knowledge base, often built around some SharePoint site. Unfortunately, such sites quickly get cluttered with so much information that users are left overwhelmed. Case-based reasoning (CBR), a form of problem-solving technology that uses previous cases to help users solve new problems they encounter, could benefit the field technician. But for a CBR system to work with a wide variety of machines, it must be dynamic in nature and handle multiple data types. By developing a prototype focused on case retrieval, based on .NET Core and MySQL, this report lays the foundation for a highly dynamic CBR system that uses natural language processing to map case attributes during case retrieval. The system's accuracy is validated using datasets from UCI and Kaggle, and a dataset created specifically for this report shows the system to be robust.
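
The key retrieval idea, mapping a query's attribute names onto a stored case's attributes even when they do not match exactly, can be sketched with fuzzy string matching standing in for the NLP component. The attribute names, cutoff and scoring rule below are illustrative assumptions, not the report's implementation.

import difflib

def map_attributes(query_attrs, case_attrs, cutoff=0.5):
    # Return {query attribute -> best-matching case attribute}.
    mapping = {}
    for q in query_attrs:
        match = difflib.get_close_matches(q, case_attrs, n=1, cutoff=cutoff)
        if match:
            mapping[q] = match[0]
    return mapping

def similarity(query, case):
    # Fraction of mapped attributes whose values agree.
    mapping = map_attributes(list(query), list(case))
    if not mapping:
        return 0.0
    hits = sum(1 for q, c in mapping.items() if query[q] == case[c])
    return hits / len(mapping)

query = {"error code": "E42", "machine model": "X200"}
case = {"error_code": "E42", "model": "X200", "fix": "replace fuse"}
print(similarity(query, case))  # matches despite differing attribute names

A production system would swap the fuzzy matcher for a proper NLP similarity (e.g. embeddings), but the retrieval shape, scoring each stored case against the query and returning the best, stays the same.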
120

On case-based learnability of languages

Globig, Christoph, Jantke, Klaus P., Lange, Steffen, Sakakibara, Yasubumi 17 January 2019 (has links)
Case-based reasoning is deemed an important technology for alleviating the bottleneck of knowledge acquisition in Artificial Intelligence (AI). In case-based reasoning, knowledge is represented in the form of particular cases together with an appropriate similarity measure, rather than in any form of rules. The case-based reasoning paradigm adopts the view that an AI system changes dynamically during its life cycle, which immediately leads to learning considerations. Within the present paper, we investigate the problem of case-based learning of indexable classes of formal languages. Prior to learning considerations, we study the problem of case-based representability and show that every indexable class is case-based representable with respect to a fixed similarity measure. Next, we investigate several models of case-based learning and systematically analyse their strengths as well as their limitations. Finally, the general approach to case-based learnability of indexable classes of formal languages is prototypically applied to so-called containment decision lists, since they seem particularly tailored to case-based knowledge processing.
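
One common way to read "case-based representation" operationally is as a nearest-case classifier: a string belongs to the represented language iff its best similarity to a positive case is at least its best similarity to any negative case. The sketch below uses difflib's sequence-similarity ratio as an arbitrary illustrative measure; the paper's results concern which similarity measures and case bases can represent which language classes, not this particular choice.

import difflib

def sim(a, b):
    return difflib.SequenceMatcher(None, a, b).ratio()

def accepts(word, positive_cases, negative_cases):
    # word is in the represented language iff some positive case is at
    # least as similar to it as every negative case.
    best_pos = max((sim(word, c) for c in positive_cases), default=0.0)
    best_neg = max((sim(word, c) for c in negative_cases), default=0.0)
    return best_pos >= best_neg

# Toy case base, aiming at strings over {a, b} that start with "ab":
pos = ["ab", "abb", "abab"]
neg = ["ba", "bb", "baa"]
print(accepts("abba", pos, neg))  # True under this measure
print(accepts("baab", pos, neg))  # False under this measure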
