  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Bridging The Gap Between Autonomous Skill Learning And Task-Specific Planning

Sen, Shiraj 01 February 2013 (has links)
Skill acquisition and task-specific planning are essential components of any robot system, yet they have long been studied in isolation. This, I contend, is due to the lack of a common representational framework. I present a holistic approach to planning robot behavior, in which previously acquired skills represent control knowledge (and objects) directly, and this background knowledge is used to build plans in the space of control actions. Actions in this framework are closed-loop controllers constructed from combinations of sensors, effectors, and potential functions. I will show how robots can use reinforcement learning techniques to acquire sensorimotor programs. The agent then builds a functional model of its interactions with the world as distributions over the acquired skills. In addition, I present two planning algorithms that can reason about a task using these functional models. These algorithms are then applied to a variety of tasks, such as object recognition and object manipulation, on two different robot platforms.
182

Causal Reasoning in Equivalence Classes

Amin Jaber (14227610) 07 December 2022 (has links)
<p>Causality is central to scientific inquiry across many disciplines, including epidemiology, medicine, and economics, to name a few. Researchers are usually interested not only in knowing how two events are correlated, but also in whether one causes the other and, if so, how. In general, scientific practice seeks not just a surface description of the observed data, but deeper explanations, such as predicting the effects of interventions. The answer to such questions does not lie in the data alone and requires a qualitative understanding of the underlying data-generating process; knowledge that is articulated in a causal diagram.</p> <p>And yet, delineating the true, underlying causal diagram requires knowledge and assumptions that are usually not available in many non-trivial and large-scale situations. Hence, this dissertation develops the necessary theory and algorithms for realizing a data-driven framework for causal inference. More specifically, this work provides fundamental treatments of the following research questions:</p> <p><br></p> <p><strong>Effect Identification under Markov Equivalence.</strong> One common task in many data science applications is to answer questions about the effect of new interventions, like: 'what would happen to <em>Y</em> while observing <em>Z=z</em> if we force <em>X</em> to take the value <em>x</em>?'. Formally, this is known as <em>causal effect identification</em>, where the goal is to determine whether a post-interventional distribution is computable from the combination of an observational distribution and assumptions about the underlying domain represented by a causal diagram. In this dissertation, we assume as the input of the task a less informative structure known as a partial ancestral graph (PAG), which represents a Markov equivalence class of causal diagrams, learnable from observational data. 
We develop tools and algorithms for this relaxed setting and give necessary and sufficient conditions that characterize the identifiable effects.</p> <p><br></p> <p><strong>Causal Discovery from Interventions.</strong> A causal diagram imposes constraints on the data it generates; conditional independences are one such example. Given a mixture of observational and experimental data, the goal is to leverage the constraints imprinted in the data to infer the set of causal diagrams that are compatible with them. In this work, we consider soft interventions, in which the mechanism of an intervened variable is modified without fully eliminating the effect of its direct causes, and investigate two settings, in which the targets of the interventions are either known or unknown to the data scientist. Accordingly, we introduce the first general graphical characterizations for testing whether two causal diagrams are indistinguishable given the constraints in the available data. We also develop algorithms that, given a mixture of observational and interventional data, learn a representation of the equivalence class.</p>
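As an illustrative aside (not code from the dissertation), the simplest instance of effect identification is the classic back-door adjustment, where P(y | do(x)) = Σ_z P(y | x, z) P(z) is computed directly from an observational distribution; the variable names and the toy distribution below are invented for the example:

```python
from collections import defaultdict

def backdoor_adjustment(joint, x, y):
    """Estimate P(y | do(x)) = sum_z P(y | x, z) * P(z) from a joint
    observational distribution given as {(x_val, y_val, z_val): prob},
    assuming Z satisfies the back-door criterion relative to (X, Y)."""
    p_z = defaultdict(float)     # marginal P(z)
    p_xz = defaultdict(float)    # marginal P(x, z)
    p_xyz = defaultdict(float)   # joint P(x, y, z)
    for (xv, yv, zv), p in joint.items():
        p_z[zv] += p
        p_xz[(xv, zv)] += p
        p_xyz[(xv, yv, zv)] += p
    total = 0.0
    for zv, pz in p_z.items():
        if p_xz[(x, zv)] > 0:    # P(y | x, z) = P(x, y, z) / P(x, z)
            total += p_xyz[(x, y, zv)] / p_xz[(x, zv)] * pz
    return total

# Toy observational distribution over binary X, Y, Z (Z confounds X and Y).
joint = {
    (0, 0, 0): 0.20, (0, 1, 0): 0.05, (1, 0, 0): 0.10, (1, 1, 0): 0.15,
    (0, 0, 1): 0.05, (0, 1, 1): 0.15, (1, 0, 1): 0.10, (1, 1, 1): 0.20,
}
print(backdoor_adjustment(joint, x=1, y=1))  # ≈ 0.633 for this toy distribution
```

The dissertation's setting is considerably harder: the causal diagram itself is unknown and only its Markov equivalence class (a PAG) is available, so an adjustment set like Z above cannot simply be read off a known graph.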
183

An Investigation Into ALM as a Knowledge Representation Library Language

Lloyd, Benjamin Tyler 15 December 2022 (has links)
No description available.
184

Motivating a linguistically orientated model for a conversational software agent

Panesar, Kulvinder 07 October 2020 (has links)
Yes / This paper presents a critical evaluation framework for a linguistically orientated conversational software agent (CSA) (Panesar, 2017). The CSA prototype investigates the integration, intersection and interface of language, knowledge, speech act constructions (SAC) based on a grammatical object (Nolan, 2014), the belief-desire-intention (BDI) sub-model (Rao and Georgeff, 1995) and dialogue management (DM) for natural language processing (NLP). A long-standing issue in NLP CSA systems is refining the accuracy of interpretation to provide realistic dialogue that supports human-to-computer communication. The prototype comprises three phase models: (1) a linguistic model based on a functional linguistic theory, Role and Reference Grammar (RRG) (Van Valin Jr, 2005); (2) an agent cognitive model with two inner models: (a) a knowledge representation model employing conceptual graphs serialised to the Resource Description Framework (RDF), and (b) a planning model underpinned by BDI concepts (Wooldridge, 2013), intentionality (Searle, 1983) and rational interaction (Cohen and Levesque, 1990); and (3) a dialogue model employing common ground (Stalnaker, 2002). The evaluation approach for this Java-based prototype and its phase models is multi-faceted, driven by grammatical testing (English language utterances), software engineering practice and agent practice. A set of evaluation criteria is grouped per phase model, and the testing framework aims to test the interface, intersection and integration of all phase models and their inner models. This approach encompasses performance checks at the internal processing stages of each model as well as post-implementation assessments of the goals of RRG and RRG-specific tests. 
The empirical evaluations demonstrate that the CSA is a proof of concept, demonstrating RRG's fitness for purpose in describing and explaining linguistic phenomena, language processing and knowledge, and its computational adequacy. By contrast, the evaluations identify the complexity of lower-level computational mappings from natural language, via the agent, to the ontology, with semantic gaps that are further addressed by a lexical bridging consideration (Panesar, 2017).
185

An Evaluation of a Linguistically Motivated Conversational Software Agent Framework

Panesar, Kulvinder 05 October 2020 (has links)
yes / This paper presents a critical evaluation framework for a linguistically motivated conversational software agent (CSA). The CSA prototype investigates the integration, intersection and interface of language, knowledge, speech act constructions (SAC) based on a grammatical object, the belief-desire-intention (BDI) sub-model and dialogue management (DM) for natural language processing (NLP). A long-standing issue in NLP CSA systems is refining the accuracy of interpretation to provide realistic dialogue that supports human-to-computer communication. The prototype comprises three phase models: (1) a linguistic model based on a functional linguistic theory, Role and Reference Grammar (RRG); (2) an agent cognitive model with two inner models: (a) a knowledge representation model and (b) a planning model underpinned by BDI concepts, intentionality and rational interaction; and (3) a dialogue model. The evaluation strategy for this Java-based prototype is multi-faceted, driven by grammatical testing (English language utterances), software engineering practice and agent practice. A set of evaluation criteria is grouped per phase model, and the testing framework aims to test the interface, intersection and integration of all phase models. The empirical evaluations demonstrate that the CSA is a proof of concept, demonstrating RRG's fitness for purpose in describing and explaining linguistic phenomena, language processing and knowledge, and its computational adequacy. By contrast, the evaluations identify the complexity of lower-level computational mappings from natural language, via the agent, to the ontology, with semantic gaps that are further addressed by a lexical bridging solution.
186

Context-Sensitive Description Logics in Dynamic Settings

Tirtarasa, Satyadharma 12 April 2024 (has links)
The role-based paradigm has been introduced for the design of adaptive and context-sensitive software systems. Naturally, a system built on top of this paradigm is expected to thrive in dynamic environments, so reasoning services over temporal aspects are essential in such a system. To represent context-dependent domains, various extensions of Description Logics (DLs) with context have been introduced and studied. We focus on the family of Contextualized Description Logics (ConDLs), which have been shown to be capable of representing role-based modelling languages while retaining decidability. However, reasoning problems over dynamic settings in these logics remain largely unexplored.
187

A Knowledge Map-Centric Feedback-Based Approach to Information Modeling and Academic Assessment

Castles, Ricky Thomas 24 February 2010 (has links)
The structure of education has changed dramatically in the last few decades. Despite major changes in how students learn, there has not been as dramatic a shift in how student learning is assessed. Standard letter grades are still the paradigm for evaluating a student's mastery of course content, and the grade point average is still one of the largest factors in judging a graduate's academic aptitude. This research presents a modern approach to modeling knowledge and evaluating students. Based upon the model of a closed-loop feedback controller, it considers education as a system in which an instructor determines the set of knowledge he or she wishes to impart to students, the instruction method acts as a transfer function, and evaluation methods serve as sensors providing feedback on the subset of the information students have learned. This method uses comprehensive concept maps to depict all of the concepts and relationships an educator intends to cover, and student maps to depict the subset of knowledge that students have mastered. Concept inventories are used as an assessment tool to determine, at the conceptual level, what students have learned. Each question in the concept inventory is coupled with one or more components of a comprehensive concept map, and based upon the answers students give to concept inventory questions, those components may or may not appear in a student's knowledge map. The level of knowledge a student demonstrates of each concept and relationship is presented in his or her student map using a color scheme tied to the levels of learning in Bloom's taxonomy. Topological principles are used to establish metrics that quantify the distance between two students' knowledge maps and the distance between a student's knowledge map and the corresponding comprehensive concept map. A method is also developed for forming aggregate maps representative of the knowledge of a group of students. 
Aggregate maps can be formed for entire classes of students or for various demographics, including race and gender. XML schemas are used throughout this research to encapsulate the information in both comprehensive maps and student maps and to store correlations between concept inventory questions and the corresponding comprehensive map components. Three software packages have been developed: one to store concept inventories in an XML schema, one to process student responses to concept inventory questions and generate the resulting student maps, and one to generate aggregate maps. The methods presented herein have been applied to two learning units that are part of two freshman engineering courses at Virginia Tech. Example student maps and aggregate maps are included for these course units. / Ph. D.
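One way to realize the map-distance idea above (a sketch for illustration, not the dissertation's actual metric) is to represent each map as a set of (concept, relation, concept) triples and take the Jaccard distance between the sets; the electrical-circuits triples are invented:

```python
def map_distance(map_a, map_b):
    """Jaccard distance between two concept maps, each given as a set of
    (concept, relation, concept) triples: 0 = identical, 1 = disjoint."""
    union = map_a | map_b
    if not union:
        return 0.0  # two empty maps are trivially identical
    return 1.0 - len(map_a & map_b) / len(union)

# Comprehensive map: everything the instructor intends to cover (invented).
comprehensive = {
    ("voltage", "drives", "current"),
    ("current", "measured-in", "amperes"),
    ("resistance", "limits", "current"),
}
# Student map: the subset the student demonstrated on the concept inventory.
student = {
    ("voltage", "drives", "current"),
    ("resistance", "limits", "current"),
}
print(map_distance(student, comprehensive))  # 1 - 2/3 ≈ 0.333
```

The same function measures student-to-student distance, and an aggregate map could be formed by keeping the triples that appear in at least a threshold fraction of a group's student maps.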
188

Functional linguistic based motivations for a conversational software agent

Panesar, Kulvinder 07 October 2020 (has links)
Yes / This chapter discusses a linguistically orientated conversational software agent (CSA) framework (Panesar 2017) that is sensitive to natural language processing (NLP) concepts and to the levels of adequacy of a functional linguistic theory (LT). We discuss the relationship between NLP and knowledge representation (KR), and connect this with the goals of a linguistic theory (Van Valin and LaPolla 1997), in particular Role and Reference Grammar (RRG) (Van Valin Jr 2005). We debate the advantages of RRG and consider its fitness and computational adequacy. We present the design of a computational model of the linking algorithm that utilises a speech act construction as a grammatical object (Nolan 2014a, Nolan 2014b) and the belief-desire-intention (BDI) sub-model (Rao and Georgeff 1995). This model has been successfully implemented in software using the Resource Description Framework (RDF), and we highlight some implementation issues that arose at the interface between language and knowledge representation (Panesar 2017).
189

Fusion d'images multimodales pour l'aide au diagnostic du cancer du sein / Multimodal image fusion for breast cancer aided diagnosis

Ben salem, Yosra 09 December 2017 (has links)
Le cancer du sein est le cancer le plus répandu chez les femmes de plus de 40 ans. En effet, des études ont montré qu'une détection précoce et un traitement approprié du cancer du sein augmentent de manière significative les chances de survie. La mammographie constitue le moyen d'investigation le plus utilisé dans le diagnostic des lésions mammaires. Cependant, cette technique peut être insuffisante pour montrer les structures du sein et faire apparaître les anomalies présentes et le médecin peut faire appel à d'autres modalités d'imagerie telle que l'imagerie IRM. Ces modalités sont généralement complémentaires. Par conséquent, le médecin procède à une fusion mentale des différentes informations sur les deux images dans le but d'effectuer le diagnostic adéquat. Pour assister le médecin et l'aider dans ce processus, nous proposons une solution permettant de fusionner les deux images. Bien que l'idée de la fusion paraisse simple, sa mise en oeuvre pose de nombreux problèmes liés non seulement au problème de fusion en général mais aussi à la nature des images médicales qui sont généralement des images mal contrastées et présentant des données hétérogènes, imprécises et ambigües. Notons que les images mammographiques et les images IRM présentent des représentations très différentes des informations, étant donnée qu'elles sont prises dans des conditions distinctes. Ce qui nous amène à poser la question suivante: Comment passer de la représentation hétérogène des informations dans l'espace image, à un autre espace de représentation uniforme. Afin de traiter cette problématique, nous optons pour une approche de traitement multi-niveaux : niveau pixel, niveau primitives, niveau objet et niveau scène. Nous modélisons les objets pathologiques extraits des différentes images par des ontologies locales. 
La fusion est ensuite effectuée sur ces ontologies locales et résulte en une ontologie globale contenant les différentes connaissances sur les objets pathologiques du cas étudié. Cette ontologie globale sert à instancier une ontologie de référence modélisant les connaissances du diagnostic médical des lésions mammaires. Un raisonnement à base de cas est exploité pour fournir les rapports diagnostic des cas les plus similaires pouvant aider le médecin à prendre la meilleure décision. Dans le but de modéliser l'imperfection des informations traitées, nous utilisons la théorie des possibilités avec les différentes ontologies. Le résultat fourni est présenté sous forme de rapports diagnostic comportant les cas les plus similaires au cas étudié avec des degrés de similarité exprimés en mesures de possibilité. Un modèle virtuel 3D complète le rapport diagnostic par un aperçu simplifié de la scène étudiée. / Breast cancer is the most prevalent cancer among women over 40 years old. Indeed, studies have shown that early detection and appropriate treatment of breast cancer significantly increase the chances of survival. Mammography is the most widely used investigation tool in the diagnosis of breast lesions. However, this technique may be insufficient to reveal the structures of the breast and the anomalies present, and the doctor may resort to additional imaging modalities such as MRI (Magnetic Resonance Imaging). These modalities are generally complementary. The doctor therefore performs a mental fusion of the different information in the two images in order to make an adequate diagnosis. To assist the doctor in this process, we propose a solution for merging the two images. Although the idea of fusion seems simple, its implementation poses many problems, related not only to the fusion problem in general but also to the nature of medical images, which are generally poorly contrasted and present heterogeneous, imprecise and ambiguous data. 
Mammography and MRI images present very different representations of the information, since they are acquired under different conditions. This leads us to pose the following question: how can we pass from the heterogeneous representation of information in the image space to a uniform representation space shared by the two modalities? To treat this problem, we opt for a multilevel processing approach: the pixel level, the primitive level, the object level and the scene level. We model the pathological objects extracted from the different images with local ontologies. The fusion is then performed on these local ontologies and results in a global ontology containing the different knowledge about the pathological objects of the studied case. This global ontology is used to instantiate a reference ontology modeling the knowledge of the medical diagnosis of breast lesions. Case-based reasoning (CBR) is used to provide the diagnostic reports of the most similar cases, which can help the doctor to make the best decision. To model the imperfection of the processed information, we combine possibility theory with the ontologies. The final result is a diagnostic report containing the cases most similar to the studied case, with similarity degrees expressed as possibility measures. A 3D virtual model completes the diagnostic report with a simplified overview of the studied scene.
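The case-retrieval step described above can be sketched in miniature (this is an invented illustration, not the thesis's actual model): score each stored case attribute by attribute, then aggregate with min, a conservative, possibility-style fusion in which one poorly matching attribute caps the overall similarity degree. The attribute names and case values are hypothetical:

```python
def attr_similarity(a, b):
    """Similarity of two attribute values in [0, 1] (invented measure):
    exact match -> 1.0; numeric values -> 1 - normalized difference."""
    if a == b:
        return 1.0
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return max(0.0, 1.0 - abs(a - b) / max(abs(a), abs(b), 1.0))
    return 0.0

def rank_cases(query, case_base):
    """Rank stored cases by min-aggregated similarity to the query:
    the weakest attribute match bounds the overall degree."""
    def sim(case):
        return min(attr_similarity(query[k], case[k]) for k in query)
    return sorted(case_base, key=sim, reverse=True)

# Hypothetical query and case base (attribute names invented).
query = {"lesion_size_mm": 12, "shape": "spiculated"}
case_base = [
    {"id": "case-1", "lesion_size_mm": 10, "shape": "spiculated"},
    {"id": "case-2", "lesion_size_mm": 12, "shape": "round"},
]
print(rank_cases(query, case_base)[0]["id"])  # case-1 (shape mismatch sinks case-2)
```

In the thesis the comparison operates on ontology instances rather than flat records, and the degrees are interpreted as possibility measures, but the min-aggregation pattern is the same conservative flavor.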
190

Deep Learning Based Crop Row Detection

Rashed Mohammad Doha (12468498) 12 July 2022 (has links)
<p>Detecting crop rows from video frames in real time is a fundamental challenge in the field of precision agriculture. U-Net, a deep learning based semantic segmentation method, although successful in many tasks related to precision agriculture, performs poorly on this task. The reasons include the paucity of large-scale labeled datasets in this domain, the diversity of crops, and the diversity in appearance of the same crop at various stages of its growth. In this work, we discuss the development of a practical, real-life crop row detection system in collaboration with an agricultural sprayer company. Our proposed method takes the output of semantic segmentation using U-Net and then applies a clustering-based probabilistic temporal calibration which can adapt to different fields and crops without the need for retraining the network. Experimental results validate that our method can be used both for refining the results of the U-Net to reduce errors and for frame interpolation of the input video stream. Upon the availability of more labeled data, we switched our approach from a semi-supervised model to a fully supervised end-to-end crop row detection model using a Feature Pyramid Network (FPN). Central to the FPN is a pyramid pooling module that extracts features from the input image at multiple resolutions, giving the network the ability to use both local and global features in classifying pixels as crop rows. After training the FPN on the labeled dataset, our method obtained a mean IoU (Jaccard index) score of over 70% on the test set. We trained our method on only a subset of the corn dataset and tested its performance on multiple variations of weed pressure and crop growth stages to verify that the performance translates across the variations and is consistent over the entire dataset.</p>
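The IoU (Jaccard index) metric cited in the abstract is intersection over union of predicted and ground-truth masks; a minimal sketch on flat binary masks (the toy masks are invented, and real evaluation would average per-class IoU over a test set):

```python
def iou(pred, truth):
    """Intersection-over-union (Jaccard index) of two binary masks,
    given as flat sequences of 0/1 pixel labels."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0  # both masks empty: perfect match

# Toy 1x8 "crop row" masks (invented): prediction overlaps truth on 3 pixels,
# with one false positive and one false negative, so union covers 5 pixels.
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(iou(pred, truth))  # 3 / 5 = 0.6
```

A reported mean IoU above 70% would then correspond to this score, averaged over all test images, exceeding 0.7.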
