171.
Knowledge representation and stochastic multi-agent plan recognition. Suzic, Robert. January 2005.
To incorporate new technical advances into the military domain and make those processes more efficient in accuracy, time and cost, a new concept of Network Centric Warfare has been introduced in the US military forces. In Sweden a similar concept has been studied under the name Network Based Defence (NBD). Here we present one of the methodologies, called tactical plan recognition, that is aimed at supporting NBD in the future. Advances in sensor technology and modelling produce large sets of data for decision makers. To achieve decision superiority, decision makers have to act agilely with proper, adequate and relevant information (data aggregates) available. Information fusion is a process aimed at supporting decision makers' situation awareness. It involves combining data and information from disparate sources with prior information or knowledge to obtain an improved state estimate about an agent or phenomenon. Plan recognition is the term given to the process of inferring an agent's intentions from a set of actions, and is intended to support decision making. The aim of this work has been to introduce a methodology where prior (empirical) knowledge (e.g. behaviour, environment and organization) is represented and combined with sensor data to recognize the plans/behaviours of an agent or group of agents. We call this methodology multi-agent plan recognition. It includes knowledge representation as well as imprecise and statistical inference issues. Successful plan recognition in large-scale systems depends heavily on the data that is supplied. We therefore introduce a bridge between plan recognition and sensor management, where the results of our plan recognition are reused to control, and give focus of attention to, the sensors that are supposed to acquire the most important/relevant information.
Here we combine different theoretical methods (Bayesian Networks, Unified Modeling Language and Plan Recognition) and apply them to tactical military situations for ground forces. The results achieved from several proof-of-concept models show that it is possible to model and recognize the behaviour of tank units.
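The statistical-inference step at the heart of such a methodology can be illustrated with a naive Bayes update over candidate plans. The plans, actions, and probabilities below are invented for illustration and are not taken from the thesis:

```python
# Minimal Bayesian plan-recognition sketch: infer the most likely plan
# of an observed agent from a sequence of actions. Plan names, actions
# and probabilities are illustrative assumptions, not data from the thesis.

# Prior belief over candidate plans.
priors = {"attack": 0.3, "defend": 0.3, "retreat": 0.4}

# P(action | plan): how likely each observable action is under each plan.
likelihoods = {
    "attack":  {"advance": 0.7, "hold": 0.2, "withdraw": 0.1},
    "defend":  {"advance": 0.1, "hold": 0.8, "withdraw": 0.1},
    "retreat": {"advance": 0.1, "hold": 0.2, "withdraw": 0.7},
}

def recognize(observed_actions, priors, likelihoods):
    """Return the posterior over plans after the observed action sequence."""
    posterior = dict(priors)
    for action in observed_actions:
        # Bayes update: multiply by the likelihood of the observed action.
        for plan in posterior:
            posterior[plan] *= likelihoods[plan][action]
        # Normalize so the beliefs sum to one.
        total = sum(posterior.values())
        posterior = {plan: p / total for plan, p in posterior.items()}
    return posterior

beliefs = recognize(["advance", "advance", "hold"], priors, likelihoods)
print(max(beliefs, key=beliefs.get))  # → attack
```

A real tactical model would replace the flat likelihood tables with a Bayesian network over behaviour, environment and organization, but the update rule is the same.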
172.
UNIFYING DISTILLATION WITH PERSONALIZATION IN FEDERATED LEARNING. Siddharth Divi (10725357). 29 April 2021.
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data. In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients. In this paper, we address this problem with PersFL, a discrete two-stage personalized learning algorithm. In the first stage, PersFL finds the optimal teacher model of each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from optimal teachers into each user's local model. The teacher model provides each client with a rich, high-level representation that the client can easily adapt to its local model, which overcomes the statistical heterogeneity present at different clients. We evaluate PersFL on the CIFAR-10 and MNIST datasets using three data-splitting strategies to control the diversity between clients' data distributions.

We empirically show that PersFL outperforms FedAvg and three state-of-the-art personalization methods, pFedMe, Per-FedAvg and FedPer, on the majority of data splits with minimal communication cost. Further, we study the performance of PersFL under different distillation objectives, how this performance is affected by the equitable notion of fairness among clients, and the number of required communication rounds. We also build an evaluation framework with the following modules: Data Generator, Federated Model Generation, and Evaluation Metrics. We introduce new metrics for the domain of personalized FL and split these metrics into two perspectives: Performance and Fairness. We analyze the performance of all the personalized algorithms by applying these metrics to answer the following questions: which personalization algorithm performs the best in terms of accuracy across all the users, and which personalization algorithm is the fairest? Finally, we make the code for this work available at https://tinyurl.com/1hp9ywfa for public use and validation.
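The aggregation baseline that PersFL personalizes on top of, FedAvg's weighted averaging of client parameters, can be sketched as follows. This is a generic illustration of the averaging step, not the authors' code:

```python
import numpy as np

# Minimal FedAvg aggregation sketch: the server averages client model
# parameters, weighted by each client's number of local samples.

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameter vectors."""
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    stacked = np.stack(client_weights)          # shape: (n_clients, n_params)
    return coeffs @ stacked                     # convex combination of models

# Three clients with simple 2-parameter models and unequal data sizes.
clients = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
global_model = fedavg(clients, client_sizes=[10, 10, 20])
print(global_model)  # → [0.75 0.75]
```

Under heterogeneous client data this single averaged model is exactly what generalizes poorly per client, which is the gap the two-stage teacher/distillation scheme addresses.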
173.
TASK DETECTORS FOR PROGRESSIVE SYSTEMS. Maxwell Joseph Jacobson (10669431). 30 April 2021.
While methods like learning-without-forgetting [11] and elastic weight consolidation [22] accomplish high-quality transfer learning while mitigating catastrophic forgetting, progressive techniques such as DeepMind's progressive neural network accomplish this while completely nullifying forgetting. However, progressive systems like this strictly require task labels during test time. In this paper, I introduce a novel task recognizer, built from anomaly-detection autoencoders, that is capable of detecting the nature of the required task from input data. Alongside a progressive neural network or other progressive learning system, this task-aware network is capable of operating without task labels during run time while maintaining any catastrophic-forgetting reduction measures implemented by the task model.
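The selection rule behind such a task recognizer can be sketched simply: one autoencoder per task, and the input is routed to the task whose autoencoder reconstructs it with the lowest error. The toy "autoencoders" below are fixed projections standing in for trained models; this is an illustration of the routing logic, not the thesis implementation:

```python
import numpy as np

# Toy task-detector sketch: one autoencoder per task; at run time the
# task whose autoencoder reconstructs the input best is selected.

def make_projector(axis):
    """A stand-in autoencoder that reconstructs only the given axis."""
    def reconstruct(x):
        out = np.zeros_like(x)
        out[axis] = x[axis]
        return out
    return reconstruct

autoencoders = {"task_a": make_projector(0), "task_b": make_projector(1)}

def detect_task(x, autoencoders):
    """Pick the task whose autoencoder reconstructs x with least error."""
    errors = {
        task: float(np.sum((x - ae(x)) ** 2))
        for task, ae in autoencoders.items()
    }
    return min(errors, key=errors.get)

# An input that lives almost entirely on axis 0 looks like task_a data.
print(detect_task(np.array([5.0, 0.1]), autoencoders))  # → task_a
```

The same rule lets a progressive network choose which column to activate without being told the task label.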
174.
Approximating Operators and Semantics for Abstract Dialectical Frameworks. Strass, Hannes. 31 January 2013.
We provide a systematic in-depth study of the semantics of abstract dialectical frameworks (ADFs), a recent generalisation of Dung's abstract argumentation frameworks. This is done by associating with an ADF its characteristic one-step consequence operator and defining various semantics for ADFs as different fixpoints of this operator. We first show that several existing semantical notions are faithfully captured by our definition, then proceed to define new ADF semantics and show that they are proper generalisations of existing argumentation semantics from the literature. Most remarkably, this operator-based approach allows us to compare ADFs to related nonmonotonic formalisms like Dung argumentation frameworks and propositional logic programs. We use polynomial, faithful and modular translations to relate the formalisms, and our results show that both abstract argumentation frameworks and abstract dialectical frameworks are at most as expressive as propositional normal logic programs.
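The operator-based view can be illustrated with the standard Kleene iteration: start from the least element and apply a monotone one-step operator until a fixpoint is reached. The acceptance conditions below are a made-up conjunctive special case chosen to keep the operator monotone, not an implementation of ADF semantics proper:

```python
# Generic least-fixpoint sketch for a monotone one-step consequence
# operator over sets of statements. Real ADFs attach an arbitrary
# Boolean acceptance condition to each statement; here each statement
# is accepted once all statements in its condition set are.

conditions = {
    "a": set(),           # accepted unconditionally
    "b": {"a"},           # accepted once a is
    "c": {"a", "b"},      # accepted once a and b are
    "d": {"e"},           # blocked: e never becomes accepted
}

def step(accepted):
    """One-step consequence operator: everything now derivable."""
    return {s for s, cond in conditions.items() if cond <= accepted}

def least_fixpoint(step):
    """Kleene iteration from the empty set up to the least fixpoint."""
    current = set()
    while True:
        nxt = step(current)
        if nxt == current:
            return current
        current = nxt

print(sorted(least_fixpoint(step)))  # → ['a', 'b', 'c']
```

Different ADF semantics then correspond to different fixpoints of (approximations of) such an operator, not only the least one.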
175.
Analyzing the Computational Complexity of Abstract Dialectical Frameworks via Approximation Fixpoint Theory. Straß, Hannes; Wallner, Johannes Peter. 22 January 2014.
Abstract dialectical frameworks (ADFs) have recently been proposed as a versatile generalization of Dung's abstract argumentation frameworks (AFs). In this paper, we present a comprehensive analysis of the computational complexity of ADFs. Our results show that while ADFs are one level up in the polynomial hierarchy compared to AFs, there is a useful subclass of ADFs which is as complex as AFs while arguably offering more modeling capacities. As a technical vehicle, we employ the approximation fixpoint theory of Denecker, Marek and Truszczyński, thus showing that it is also a useful tool for complexity analysis of operator-based semantics.
176.
AI-powered systems biology models to study human disease. Wennan Chang (12355921). 23 April 2022.
The fast advance of high-throughput technology has reinforced the biomedical research ecosystem with highly scaled and commercialized data acquisition standards, which provide us with an unprecedented opportunity to interrogate biology in novel and creative ways. However, unraveling high-dimensional data in practice is difficult due to the following challenges: 1) how to handle outliers and data contamination; 2) how to address the curse of dimensionality; 3) how to utilize occasionally provided auxiliary information, such as an external phenotype observation or spatial coordinate; 4) how to derive the unknown non-linear relationship between observed data and underlying mechanisms in a complex biological system such as the human metabolic network.

In light of the above challenges, this thesis focuses on two research directions, for which we have proposed a series of statistical learning and AI-empowered systems biology models. The thesis separates into two parts. The first part focuses on identifying latent low-dimensional subspaces in high-dimensional biomedical data. First, we proposed CAT, a robust mixture regression method that detects outliers and estimates parameters simultaneously. Then, we proposed CSMR, which studies the heterogeneous relationship between high-dimensional genetic features and a phenotype with penalized mixture regression. Last, we proposed SRMR, which investigates mixture linear relationships over a spatial domain. The second part focuses on the non-linear relationship underlying human metabolic flux estimation in a complex biological system. We proposed the first method in this domain that can robustly estimate the flux distribution of a metabolic network at the resolution of individual cells.
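The mixture-regression machinery shared by CAT, CSMR and SRMR can be sketched with a plain two-component EM fit. This is a generic textbook illustration with a fixed noise scale and no penalties or robustness tweaks, not the thesis code:

```python
import numpy as np

# Minimal EM sketch for a two-component mixture of linear regressions:
# alternate soft assignment of points to components (E-step) with
# weighted least squares per component (M-step).

def em_mixture_regression(x, y, n_iter=50, sigma=0.5):
    """Fit y ~ beta_k * x for two components; return sorted slopes."""
    betas = np.array([0.5, -0.5])                        # initial slopes
    for _ in range(n_iter):
        # E-step: responsibilities from Gaussian residual likelihoods.
        resid = y[None, :] - betas[:, None] * x[None, :]  # shape (2, n)
        logp = -0.5 * (resid / sigma) ** 2
        p = np.exp(logp - logp.max(axis=0))               # stabilized
        r = p / p.sum(axis=0)                             # shape (2, n)
        # M-step: weighted least squares for each component's slope.
        betas = (r * x * y).sum(axis=1) / (r * x * x).sum(axis=1)
    return np.sort(betas)

# Noise-free data drawn from two lines: y = 2x and y = -x.
x = np.array([1.0, 2.0, 3.0, 1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0, -1.0, -2.0, -3.0])
print(em_mixture_regression(x, y))  # ≈ [-1.  2.]
```

CAT adds outlier handling on top of this loop, CSMR adds penalization for high-dimensional features, and SRMR constrains the component memberships over a spatial domain.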
177.
Towards accountable decision aiding: explanations for the aggregation of preferences. Belahcene, Khaled. 05 December 2018.
We consider providing a decision aiding process with tools aimed at complying with the demands of accountability. Decision makers, seeking support, provide preference information in the form of reference cases that illustrate their views on the way of taking conflicting points of view into account. The analyst, who provides the support, assumes a generic representation of reasoning with preferences and fits the aggregation procedure to the preference information. We assume a robust elicitation process, where the recommendations stemming from the fitted procedure can be deduced from dialectical elements. We are therefore interested in solving an inverse problem concerning the model, and in deriving explanations that are, if possible, sound, complete, and easy to compute and to understand.
We address two distinct forms of reasoning: one aimed at comparing pairs of alternatives with an additive value model, the other aimed at sorting alternatives into ordered categories with a noncompensatory model.
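The first form of reasoning, pairwise comparison under an additive value model, can be sketched in a few lines: an alternative's overall value is the sum of per-criterion marginal values, and preference reduces to comparing these sums. The criteria and marginal value functions below are illustrative assumptions, not from the thesis:

```python
# Additive value model sketch: overall value is the sum of marginal
# values, one per criterion; pairwise preference compares these sums.

marginal_values = {
    "price":   lambda v: 1.0 - v / 100.0,   # cheaper is better
    "quality": lambda v: v / 10.0,          # higher grade is better
}

def overall_value(alternative):
    """Sum of marginal values across criteria (the additive model)."""
    return sum(f(alternative[c]) for c, f in marginal_values.items())

def prefer(a, b):
    """Return the preferred alternative under the additive model."""
    return a if overall_value(a) >= overall_value(b) else b

cheap_ok   = {"name": "A", "price": 20.0, "quality": 6.0}
pricey_top = {"name": "B", "price": 80.0, "quality": 9.0}
print(prefer(cheap_ok, pricey_top)["name"])  # → A
```

An explanation for such a recommendation can then be phrased as a decomposition of the value difference into per-criterion contributions, which is what makes the additive form attractive for accountability.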
178.
On the Computation of Common Subsumers in Description Logics. Turhan, Anni-Yasmin. 08 October 2007.
Description logic (DL) knowledge bases are often built by users with expertise in the application domain but little expertise in logic. To support these users when building their knowledge bases, a number of extension methods have been proposed that provide the user with concept descriptions as a starting point for new concept definitions. The inference service central to several of these approaches is the computation of (least) common subsumers of concept descriptions. In case disjunction of concepts can be expressed in the DL under consideration, the least common subsumer (lcs) is just the disjunction of the input concepts. Such a trivial lcs is of little use as a starting point for a new concept definition to be edited by the user. To address this problem, we propose two approaches to obtain "meaningful" common subsumers in the presence of disjunction, tailored to two different methods of extending DL knowledge bases. More precisely, we devise computation methods for the approximation-based approach and for the customization of DL knowledge bases, extend these methods to DLs with number restrictions, and discuss their efficient implementation.
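For a disjunction-free language the lcs is genuinely informative, and the classic construction can be sketched for a toy EL-style language: keep the shared atomic names and, for each shared role, the pairwise lcs of the existential fillers. This is a simplified illustration of the standard product construction, not the approximation or customization algorithms of the thesis:

```python
# lcs sketch for a toy EL-style language: a concept is a pair
# (set of atomic names, {role: [filler concepts]}).

def lcs(c1, c2):
    """Least common subsumer of two concepts via the product construction."""
    atoms1, roles1 = c1
    atoms2, roles2 = c2
    atoms = atoms1 & atoms2                       # shared atomic names
    roles = {}
    for role in roles1.keys() & roles2.keys():    # shared roles only
        # Every pair of fillers contributes a common-subsumer filler.
        roles[role] = [lcs(f1, f2)
                       for f1 in roles1[role] for f2 in roles2[role]]
    return (atoms, roles)

# Mother = Person ⊓ Female ⊓ ∃has_child.Person
mother = ({"Person", "Female"}, {"has_child": [({"Person"}, {})]})
# Father = Person ⊓ Male ⊓ ∃has_child.Person
father = ({"Person", "Male"}, {"has_child": [({"Person"}, {})]})

atoms, roles = lcs(mother, father)
print(atoms)  # → {'Person'}  (lcs ≡ Person ⊓ ∃has_child.Person)
```

With disjunction available this construction collapses into Mother ⊔ Father, which is exactly the trivial, unhelpful lcs the thesis works around.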
179.
Domain-specific knowledge graph construction from Swedish and English news articles. Krupinska, Aleksandra. January 2023.
In the current age, new textual information emerges constantly, and there is a challenge in processing and structuring it. Moreover, the information is often expressed in many different languages, but the discourse tends to be dominated by English, which may lead to overlooking important, specific knowledge in less well-resourced languages. Knowledge graphs have been proposed as a way of structuring unstructured data, making it machine-readable and available for further processing. Researchers have also emphasized the potential bilateral benefits of combining knowledge in low- and well-resourced languages. In this thesis, I combine the two goals of structuring textual data with the help of knowledge graphs and including multilingual information in an effort to achieve a more accurate knowledge representation. The purpose of the project is to investigate whether the information about three Swedish companies known worldwide (H&M, Spotify, and Ikea) in Swedish and English data sources is the same, and how combining the two sources can be beneficial. Following a natural language processing (NLP) pipeline consisting of such tasks as coreference resolution, entity linking, and relation extraction, a knowledge graph is constructed from Swedish and English news articles about the companies. Refinement techniques are applied to improve the graph. The constructed knowledge graph is analyzed with respect to the overlap of extracted entities and the complementarity of information. Different variants of the graph are further evaluated by human raters. A number of queries illustrate the capabilities of the constructed knowledge graph. The evaluation of the graph shows that the topics covered in the two information sources differ substantially. Only a small number of entities occur in both languages. Combining the two sources can, therefore, contribute to a richer and more connected knowledge graph.
The adopted refinement techniques increase the connectedness of the graph. Human evaluators consistently chose the Swedish side of the data as more relevant for the considered questions, which highlights the importance of not limiting research to more easily available and processed English data.
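The overlap and complementarity analyses described above can be sketched with a tiny triple store, one set of (subject, relation, object) triples per language source. The triples below are invented examples, not extractions from the thesis data:

```python
# Tiny knowledge-graph sketch: triples per language source, plus the
# two queries discussed above: entity overlap and merged coverage.

triples_sv = {
    ("Spotify", "headquartered_in", "Stockholm"),
    ("Ikea", "founded_by", "Ingvar Kamprad"),
}
triples_en = {
    ("Spotify", "headquartered_in", "Stockholm"),
    ("Spotify", "founded_in", "2006"),
}

def entities(triples):
    """All subjects and objects mentioned in a set of triples."""
    return {s for s, _, _ in triples} | {o for _, _, o in triples}

# Entity overlap between the two language sources.
shared = entities(triples_sv) & entities(triples_en)
print(sorted(shared))  # → ['Spotify', 'Stockholm']

# Merging the sources yields facts neither side has alone.
merged = triples_sv | triples_en
print(len(merged))  # → 3
```

The same set operations scale up to the thesis setting, where the small size of the shared-entity set is precisely the evidence that the two language sources complement rather than duplicate each other.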
180.
Syntax-based Concept Extraction For Question Answering. Glinos, Demetrios. 01 January 2006.
Question answering (QA) stands squarely along the path from document retrieval to text understanding. As an area of research interest, it serves as a proving ground where strategies for document processing, knowledge representation, question analysis, and answer extraction may be evaluated in real world information extraction contexts. The task is to go beyond the representation of text documents as "bags of words" or data blobs that can be scanned for keyword combinations and word collocations in the manner of internet search engines. Instead, the goal is to recognize and extract the semantic content of the text, and to organize it in a manner that supports reasoning about the concepts represented. The issue presented is how to obtain and query such a structure without either a predefined set of concepts or a predefined set of relationships among concepts. This research investigates a means for acquiring from text documents both the underlying concepts and their interrelationships. Specifically, a syntax-based formalism for representing atomic propositions that are extracted from text documents is presented, together with a method for constructing a network of concept nodes for indexing such logical forms based on the discourse entities they contain. It is shown that meaningful questions can be decomposed into Boolean combinations of question patterns using the same formalism, with free variables representing the desired answers. It is further shown that this formalism can be used for robust question answering using the concept network and WordNet synonym, hypernym, hyponym, and antonym relationships. This formalism was implemented in the Semantic Extractor (SEMEX) research tool and was tested against the factoid questions from the 2005 Text Retrieval Conference (TREC), which operated upon the AQUAINT corpus of newswire documents. 
After adjusting for the limitations of the tool and the document set, correct answers were found for approximately fifty percent of the questions analyzed, which compares favorably with other question answering systems.
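The decomposition of a question into patterns with free variables, matched against stored atomic propositions, can be sketched with a small unification-style matcher. The propositions and pattern syntax are illustrative, not SEMEX's actual logical-form format:

```python
# Sketch of matching question patterns with free variables against a
# base of atomic propositions; '?'-prefixed terms are variables that
# stand for the desired answer.

facts = [
    ("invented", "Bell", "telephone"),
    ("invented", "Edison", "phonograph"),
    ("born_in", "Bell", "Edinburgh"),
]

def match(pattern, fact):
    """Unify a pattern against one fact; return a variable-binding
    dict, or None if the match fails."""
    if len(pattern) != len(fact):
        return None
    bindings = {}
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if bindings.get(p, f) != f:   # a variable must bind once
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

def answer(pattern, facts):
    """All bindings of the pattern's free variables over the fact base."""
    return [b for fact in facts if (b := match(pattern, fact)) is not None]

# "Who invented the telephone?" becomes a pattern with one free variable.
print(answer(("invented", "?who", "telephone"), facts))  # → [{'?who': 'Bell'}]
```

Boolean combinations of such patterns, expanded with WordNet synonym, hypernym, hyponym, and antonym relationships, then give the robust matching behaviour described above.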