431

Epistemic Structures of Interrogative Domains

Hughes, Cameron A. 24 November 2008
No description available.
432

Reinforcement Learning for Multi-Agent Strategy Synthesis Using Higher-Order Knowledge

Forsell, Gustav, Gergi, Shamoun January 2023
Imagine for a moment that we are living in a distant future where autonomous robots patrol the streets as police officers. Two such robots are chasing a robber through the city streets. Fearing that the thief might listen in on any transmission, both robots remain radio silent and are thus limited to a strictly visual pursuit. Since the robots cannot see the robber the entire time, they have to deduce the robber's potential location. What would be the best strategy for these robots to achieve their objective? This bachelor's thesis investigated the above example by creating strategies through reinforcement learning. The thesis also investigated the performance of the players when they have different abilities of deduction. This was tested by creating a suitable game and a corresponding reinforcement learning algorithm and running simulations for different degrees of knowledge. The study showed that reinforcement learning is a viable method for strategy construction, reaching a nearly guaranteed victory when the agent knows everything about the environment and a slightly lower win ratio when uncertainty is introduced. The implementation yielded only a small gain in win ratio when the agents could deduce even more about each other. / Bachelor's degree project in electrical engineering 2023, KTH, Stockholm
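The abstract does not include the game or the learning algorithm itself, so the sketch below is only a minimal illustration of the kind of setup it describes: tabular Q-learning for a pursuit game in which the pursuer sees the robber only intermittently. The grid size, reward values, and the `visibility` parameter are assumptions made for the example, not details taken from the thesis.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for a toy pursuit game on a 5x5 grid.
# The pursuer sees the robber's true cell only with probability `visibility`;
# otherwise it acts on the last position it observed (its "belief").
SIZE = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(pos, move):
    return (min(max(pos[0] + move[0], 0), SIZE - 1),
            min(max(pos[1] + move[1], 0), SIZE - 1))

def train(episodes=5000, visibility=1.0, alpha=0.2, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    wins = 0
    for _ in range(episodes):
        pursuer, robber = (0, 0), (SIZE - 1, SIZE - 1)
        belief = robber
        for _ in range(40):  # episode length cap
            if random.random() < visibility:
                belief = robber                      # robber is seen this step
            state = (pursuer, belief)
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))   # explore
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
            pursuer = step(pursuer, ACTIONS[a])
            robber = step(robber, random.choice(ACTIONS))  # robber moves randomly
            caught = pursuer == robber
            reward = 1.0 if caught else -0.01
            next_state = (pursuer, belief)
            best_next = max(Q[(next_state, i)] for i in range(len(ACTIONS)))
            Q[(state, a)] += alpha * (reward + gamma * best_next - Q[(state, a)])
            if caught:
                wins += 1
                break
    return wins / episodes

if __name__ == "__main__":
    print("win rate, full observability:", train(visibility=1.0))
    print("win rate, partial observability:", train(visibility=0.3))
```

Comparing the two runs mirrors the thesis's comparison between full knowledge of the environment and pursuit under uncertainty, though with a far simpler observation model.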
433

Trustworthy and Causal Artificial Intelligence in Environmental Decision Making

Suleyman Uslu (18403641) 03 June 2024
We present a framework for Trustworthy Artificial Intelligence (TAI) that dynamically assesses trust and scrutinizes past decision-making, aiming to identify both individual and community behavior. The modeling of behavior incorporates proposed concepts, namely trust pressure and trust sensitivity, laying the foundation for predicting future decision-making regarding community behavior, consensus level, and decision-making duration. Our framework involves the development and mathematical modeling of trust pressure and trust sensitivity, drawing on social validation theory within the context of environmental decision-making. To substantiate our approach, we conduct experiments encompassing (i) dynamic trust sensitivity to reveal the impact of learning actors between decision-making, (ii) multi-level trust measurements to capture disruptive ratings, and (iii) different distributions of trust sensitivity to emphasize the significance of individual progress as well as overall progress.

Additionally, we introduce TAI metrics, trustworthy acceptance and trustworthy fairness, designed to evaluate the acceptance of decisions proposed by AI or humans and the fairness of such proposed decisions. The dynamic trust management within the framework allows these TAI metrics to discern support for decisions among individuals with varying levels of trust. We propose both the metrics and their measurement methodology as contributions to the standardization of trustworthy AI.

Furthermore, our trustability metric incorporates reliability, resilience, and trust to evaluate systems with multiple components. We illustrate experiments showcasing the effects of different trust declines on the overall trustability of the system. Notably, we depict the trade-off between trustability and cost, resulting in net utility, which facilitates decision-making in systems and cloud security. This represents a pivotal step toward an artificial control model involving multiple agents engaged in negotiation.

Lastly, the dynamic management of trust and trustworthy acceptance, particularly in varying criteria, serves as a foundation for causal AI by providing inference methods. We outline a mechanism and present an experiment on human-driven causal inference, where participant discussions act as interventions, enabling counterfactual evaluations once actor and community behavior are modeled.
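The abstract names the trustworthy-acceptance metric and the trust-sensitivity concept but does not give their formulas, so the following is only a hedged sketch of how a trust-weighted acceptance score and a sensitivity-scaled trust update could look. The weighting scheme, the update rule, and all numbers are assumptions for illustration, not the dissertation's definitions.

```python
# Hedged sketch of a trust-weighted acceptance score in the spirit of the
# "trustworthy acceptance" metric described above. The exact formula is not
# given in the abstract; the weighting scheme below is an illustrative assumption.

def trustworthy_acceptance(votes, trust):
    """votes[i] is 1 if actor i accepts the proposed decision, 0 otherwise;
    trust[i] is the current trust score of actor i in [0, 1]."""
    total_trust = sum(trust)
    if total_trust == 0:
        return 0.0
    return sum(v * t for v, t in zip(votes, trust)) / total_trust

def update_trust(trust, rating, sensitivity=0.1):
    """Toy dynamic trust update: move each actor's trust toward the rating
    its latest proposal received, scaled by a trust-sensitivity parameter."""
    return [t + sensitivity * (r - t) for t, r in zip(trust, rating)]

if __name__ == "__main__":
    trust = [0.9, 0.6, 0.3]          # three actors with different trust levels
    votes = [1, 1, 0]                # the low-trust actor rejects the decision
    print(trustworthy_acceptance(votes, trust))   # acceptance weighted by trust
    trust = update_trust(trust, rating=[1.0, 0.5, 0.2])
    print(trust)
```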
434

ENABLING RIDE-SHARING IN ON-DEMAND AIR SERVICE OPERATIONS THROUGH REINFORCEMENT LEARNING

Apoorv Maheshwari (11564572) 22 November 2021
The convergence of various technological and operational advancements has reinstated the interest in On-Demand Air Service (ODAS) as a viable mode of transportation. ODAS enables an end-user to be transported in an aircraft between their desired origin and destination at their preferred time without advance notice. Industry, academia, and government organizations are collaborating to create technology solutions suited for large-scale implementation of this mode of transportation. Market studies suggest that reducing vehicle operating cost per passenger is one of the biggest enablers of this market. To enable ODAS, an ODAS operator controls a fleet of aircraft that are deployed across a set of nodes (e.g., airports, vertiports) to satisfy end-user transportation requests. There is a gap in the literature for a tractable and online methodology that can enable ride-sharing in on-demand operations while maintaining a publicly acceptable level of service (such as low waiting times). The need for an approach that not only supports a dynamic-stochastic formulation but can also handle uncertainty with unknowable properties drives me towards the field of Reinforcement Learning (RL). In this work, a novel two-layer hierarchical RL framework is proposed that can distribute a fleet of aircraft across a nodal network as well as perform real-time scheduling for an ODAS operator. The top layer of the framework - the Fleet Distributor - is modeled as a Partially Observable Markov Decision Process, whereas the lower layer - the Trip Request Manager - is modeled as a Semi-Markov Decision Process. This framework is successfully demonstrated and assessed through various studies for a hypothetical ODAS operator in the Chicago region. This approach provides a new way of solving fleet distribution and scheduling problems in aviation. It also bridges the gap between state-of-the-art RL advancements and node-based transportation network problems. Moreover, this work provides a non-proprietary approach to reasonably model ODAS operations that can be leveraged by researchers and policy makers.
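As a reading aid only, here is a minimal sketch of the two-layer structure the abstract describes: a top-layer Fleet Distributor that repositions aircraft across nodes and a lower-layer Trip Request Manager that handles incoming requests. The node names, the greedy repositioning rule, and the acceptance rule are placeholder assumptions; the thesis models these layers as a POMDP and an SMDP solved with reinforcement learning, which this sketch does not implement.

```python
# Illustrative sketch of the two-layer decision structure described above.
# The policies below are placeholders (greedy/first-match), not the learned
# POMDP/SMDP policies from the thesis.

NODES = ["MDW", "DPA", "UGN"]          # hypothetical Chicago-area nodes

class FleetDistributor:
    def reposition(self, fleet, demand_forecast):
        # Greedy placeholder: send aircraft to the node with the highest forecast demand.
        target = max(demand_forecast, key=demand_forecast.get)
        return {tail: target for tail, node in fleet.items() if node != target}

class TripRequestManager:
    def handle(self, request, fleet):
        # Accept the request if any aircraft is already at the origin node.
        for tail, node in fleet.items():
            if node == request["origin"]:
                return tail
        return None

if __name__ == "__main__":
    fleet = {"N101": "MDW", "N102": "DPA"}
    forecast = {"MDW": 5, "DPA": 2, "UGN": 1}
    fleet.update(FleetDistributor().reposition(fleet, forecast))
    request = {"origin": "MDW", "destination": "UGN"}
    print(TripRequestManager().handle(request, fleet))   # -> assigned tail number or None
```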
435

不同教學介入對幼兒知識表徵的影響-以幼兒科學問題解決歷程為例 / The effects of different instructional interventions on young children's knowledge representation: the case of young children's scientific problem-solving processes

丘嘉慧, Chiu, Chia Hui Unknown Date
The main purposes of this study were to investigate how young children of different ages solve scientific problems that require considering two factors simultaneously and transfer those solutions, and to examine the effects of different instructional interventions on children's problem-solving and transfer performance. The research comprised two studies using a young children's cognitive task: Study 1 examined 128 children and Study 2 examined 286 children aged 3 to 6. The results showed a causal relationship among children's information gathering, information analysis and organization, and problem-solving performance, with older children performing better at every stage of the process. When the task content was related to children's everyday experience, 4-year-olds could behaviorally solve problems requiring two factors to be considered at once. Children performed differently on problems involving different concepts, and 6-year-olds could not transfer solutions to new problems requiring two factors. When solving such problems, children's behavioral and verbal knowledge representations fell into six levels (level 0 to level 5). The instructional interventions had no effect on children at level 1. For children at level 3, intervention was effective: verbal explanation helped them raise their representation level, but these effects were not sufficient to bring children to level 5, nor did they carry over to children's transfer performance. / The purposes of this study were to investigate young children's ability to resolve problems with two dimensions and to transfer, and the influences of instructional interventions on resolving these problems and transferring. There were 128 and 286 3- to 6-year-old children in Study 1 and Study 2, respectively. A young children's cognitive task was used. Results revealed that the processes of searching for information and analyzing information were related to problem-solving. 4-year-old children could resolve problems with two dimensions when the problems were familiar. There was domain-specific knowledge in problem-solving. 6-year-old children could not transfer two dimensions to new and similar conditions. There were six levels of knowledge representation for resolving problems with two dimensions in this study. The instructional intervention of explaining improved the level of children at level 3 to level 4, but not to level 5, and there were no effects of instructional interventions on transferring.
436

Advanced Reasoning about Dynamical Systems

Gu, Yilan 17 February 2011
In this thesis, we study advanced reasoning about dynamical systems in a logical framework -- the situation calculus. In particular, we consider promoting the efficiency of reasoning about action in the situation calculus from three different aspects. First, we propose a modified situation calculus based on two-variable predicate logic with counting quantifiers. We show that solving the projection and executability problems via regression in such a language is decidable. We prove that, in general, these two problems are co-NExpTime-complete in the modified language. We also consider further restricting the format of regressable formulas and basic action theories (BATs) to gain better computational complexity for reasoning about action via regression. We mention possible applications to the formalization of Semantic Web services. Then, we propose a hierarchical representation of actions based on the situation calculus to facilitate development, maintenance and elaboration of very large taxonomies of actions. We show that our axioms can be more succinct, while still using an extended regression operator to solve the projection problem. Moreover, such a representation has significant computational advantages. For taxonomies of actions that can be represented as finitely branching trees, the regression operator can sometimes work exponentially faster with our theories than it does with the BATs of the current situation calculus. We also propose a general guideline on how a taxonomy of actions can be constructed from a given set of effect axioms. Finally, we extend the current situation calculus with order-sorted logic. In the new formalism, we add sort theories to the usual initial theories to describe taxonomies of objects. We then investigate what well-sortedness means for BATs in this framework. We consider extending the current regression operator with well-sortedness checking and unification techniques. With the modified regression, we gain computational efficiency by terminating the regression earlier when reasoning tasks are ill-sorted and by reducing the search spaces for well-sorted objects. We also study the connection between the order-sorted situation calculus and the current situation calculus.
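Regression, which the thesis repeatedly optimizes, rewrites a query about the situation reached after an action into a query about the preceding situation using successor state axioms. The toy Python sketch below illustrates that rewriting for an invented one-fluent domain (a light with switch_on/switch_off actions); it is only an intuition pump and does not reflect the thesis's two-variable logic, taxonomies of actions, or sort theories.

```python
# Minimal illustration of a regression step: a fluent queried after do(a, s)
# is rewritten into a condition on s using its successor state axiom.
# The toy domain (a single light and two actions) is an illustrative assumption.

def regress(query, action):
    """Regress the fluent `query` over `action`, using the successor state axiom
    on(do(a, s)) <-> a = switch_on  or  (on(s) and a != switch_off)."""
    if query != "on":
        raise ValueError("unknown fluent")
    if action == "switch_on":
        return "true"                     # the action makes the fluent hold
    if action == "switch_off":
        return "false"                    # the action destroys the fluent
    return "on"                           # otherwise the fluent persists

def regress_sequence(query, actions):
    """Regress a fluent through a whole action sequence, latest action first."""
    for a in reversed(actions):
        query = regress(query, a)
        if query in ("true", "false"):    # no longer depends on the initial situation
            return query
    return query

if __name__ == "__main__":
    # Does the light hold after [switch_on, wait]? Regression answers without
    # simulating forward: it reduces the query to a condition on the initial situation.
    print(regress_sequence("on", ["switch_on", "wait"]))   # -> true
    print(regress_sequence("on", ["wait"]))                # -> on (depends on the initial theory)
```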
437

Représentation des connaissances sémantiques lexicales de la Théorie Sens-Texte : conceptualisation, représentation, et opérationnalisation des définitions lexicographiques / Meaning-Text Theory lexical semantic knowledge representation : conceptualization, representation, and operationalization of lexicographic definitions

Lefrançois, Maxime 24 June 2014
We present our research in applying knowledge engineering to the linguistic predicates and lexicographic definitions of the Meaning-Text Theory (MTT). We adopt a three-step methodology. 1. We first show how the MTT conceptualization should be extended to ease its formalization. We justify the need to define a new graph-based deep semantic level. We define the notion of deep semantic unit types and their actantial structure, so that their hierarchical organization may correspond to a hierarchy of meanings, inside which actantial structures are inherited and specialized. We re-conceptualize lexicographic definitions at the deep semantic level and at the level of dictionaries. Finally, we present a definition editor prototype based on direct graph manipulation, which will allow us, in future work, to integrate our formal model into explanatory combinatorial lexicographic projects. 2. We then propose a knowledge representation (KR) formalism adapted to this conceptualization. We demonstrate that Description Logics and the Conceptual Graphs formalism do not fit our needs. This leads us to construct a new knowledge representation formalism: the Unit Graphs formalism. 3. Finally, we operationalize the Unit Graphs formalism. We assign it a formal semantics based on model theory and relational algebra, and show that the reasoning decidability conditions match the intuitions that lexicographers have. We also provide an implementation using semantic web standards, which enables us to use existing architectures for sharing, interoperability, and knowledge querying over the web of lexical linked data.
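The Unit Graphs formalism is defined in the thesis itself; as a rough intuition only, a unit graph can be pictured as typed unit nodes linked by named actant slots, with slots inherited down a hierarchy of unit types. The sketch below is a loose illustration under that reading, not the thesis's actual definition, and all type and slot names are invented.

```python
# Loose intuition sketch (not the thesis's Unit Graphs formalism): unit types
# are organized in a hierarchy along which actant slots are inherited and
# specialized, and a unit graph links unit instances through those slots.

TYPE_PARENT = {"give": "transfer", "transfer": "event"}        # hierarchy of meanings
OWN_SLOTS = {"event": {"agent"}, "transfer": {"recipient", "theme"}, "give": set()}

def actant_slots(unit_type):
    """Collect the actant slots a unit type inherits along the hierarchy."""
    slots = set()
    while unit_type:
        slots |= OWN_SLOTS.get(unit_type, set())
        unit_type = TYPE_PARENT.get(unit_type)
    return slots

# A tiny unit graph: nodes are id -> type, edges are (governor, slot, dependent).
nodes = {"u1": "give", "u2": "person", "u3": "person", "u4": "book"}
edges = [("u1", "agent", "u2"), ("u1", "recipient", "u3"), ("u1", "theme", "u4")]

# Check that every edge uses a slot licensed by the governor's inherited actantial structure.
print(all(slot in actant_slots(nodes[gov]) for gov, slot, dep in edges))   # True
```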
438

Residual Capsule Network

Sree Bala Shrut Bhamidi (6990443) 13 August 2019
Convolutional Neural Networks (CNNs) have shown a substantial improvement in the field of Machine Learning, but they come with their own set of drawbacks. Capsule Networks address the limitations of CNNs and have shown a great improvement by calculating the pose and transformation of the image. Deeper networks are more powerful than shallow networks but, at the same time, more difficult to train. Residual Networks ease the training and have shown evidence that they can give good accuracy with considerable depth. Putting the best of Capsule Networks and Residual Networks together, we present the Residual Capsule Network and the 3-Level Residual Capsule Network, frameworks that use the best of Residual Networks and Capsule Networks. The conventional convolutional layers in the Capsule Network are replaced by skip connections, as in Residual Networks, to decrease the complexity of the baseline Capsule Network and the seven-ensemble Capsule Network. We trained our models on the MNIST and CIFAR-10 datasets and have seen a significant decrease in the number of parameters when compared to the baseline models.
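As a hedged illustration of the building block the abstract refers to, the PyTorch sketch below shows a generic convolutional residual block whose skip connection adds the input back to the output of two convolutions. Channel counts, kernel sizes, and normalization choices are assumptions for the example; the thesis's exact Residual Capsule Network architecture, including its capsule layers, is not reproduced here.

```python
import torch
import torch.nn as nn

# Generic residual (skip-connection) block of the kind that can stand in front
# of a capsule layer in place of plain convolutions. Hyperparameters are illustrative.

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)        # skip connection: add the input back

if __name__ == "__main__":
    x = torch.randn(1, 64, 28, 28)        # e.g. a feature map from an MNIST-sized input
    print(ResidualBlock(64)(x).shape)     # torch.Size([1, 64, 28, 28])
```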
439

Systèmes à base de traces modélisées : modèles et langages pour l'exploitation des traces d'interactions / Modelled trace-based systems : models and languages for exploiting interactions traces

Settouti, Lotfi 14 January 2011
This thesis is funded by the Rhône-Alpes Region as part of the project "Personalisation of Technology-Enhanced Learning (TEL) Systems". Personalising TEL systems is, above all, dependent on the capacity to produce relevant and exploitable traces of individual or collaborative learning activities. In this field, exploiting interaction traces raises several problems, ranging from representing them in a normalised and intelligible manner to processing and interpreting them continuously during ongoing TEL activities. The proliferation of trace-based practices and uses raises the need for generic tools to support their representation and exploitation. The main objective of this thesis is to define the theoretical foundations of such generic tools. To do so, we define the notion of a Trace-Based System (TBS): a kind of knowledge-based system whose main source of knowledge is a set of traces of user-system interactions. This thesis investigates practical and theoretical issues related to TBSs, covering the spectrum from the concepts, services and architecture involved in such systems (conceptual framework) to language design over declarative semantics (formal framework). The central topic of our framework is the development of a high-level trace query and transformation language supporting deductive rules as an abstraction and reasoning mechanism for traces, allowing both one-off and continuous evaluations. The declarative semantics of this language is defined by a (Tarski-style) model theory with an accompanying fixpoint theory, two formalisms commonly used to describe the formal semantics of knowledge representation languages.
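To make the idea of rule-based trace transformation with a fixpoint semantics concrete, here is a small illustrative sketch: deductive rules read observed trace elements and add derived elements until nothing new can be derived. The event format and the double-click rule are invented for the example and are not part of the thesis's transformation language.

```python
# Illustrative sketch (not the thesis's language) of rule-based trace
# transformation: rules derive new trace elements until a fixpoint is reached.

def apply_rules(trace, rules):
    """Apply every rule to the trace repeatedly until no new element is derived."""
    derived = set(trace)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new in rule(derived):
                if new not in derived:
                    derived.add(new)
                    changed = True
    return derived

# Toy rule: two "click" events on the same target within one time unit derive a "double_click".
def double_click_rule(elements):
    clicks = sorted(e for e in elements if e[0] == "click")
    for (_, t1, tgt1), (_, t2, tgt2) in zip(clicks, clicks[1:]):
        if tgt1 == tgt2 and t2 - t1 <= 1:
            yield ("double_click", t2, tgt1)

if __name__ == "__main__":
    observed = {("click", 0, "submit"), ("click", 1, "submit"), ("click", 5, "help")}
    print(apply_rules(observed, [double_click_rule]))
```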
440

Recherche d’entités nommées complexes sur le web : propositions pour l’extraction et pour le calcul de similarité / Retrieval of Complex Named Entities on the web : proposals for extraction and similarity computation

Fotsoh Tawaofaing, Armel 27 February 2018
Recent developments in information technologies have made the web an important data source. However, web content is largely unstructured, which makes it difficult for a machine to process automatically in order to extract relevant information. This is why research related to Information Extraction (IE) on the web is growing quickly. Similarly, another heavily explored research area is the querying of information extracted from the web, generally structured and stored in indexes, to answer a precise information need; this corresponds to Information Retrieval (IR). Our work is at the crossroads of both areas. Its main goal is to design and implement strategies for crawling the web in order to extract complex Named Entities (NEs), i.e. NEs with several properties that may be text or other NEs, such as businesses or events. We then propose indexing and querying services to answer information needs. This work was carried out within the T2I team of the LIUPPA laboratory, in collaboration with Cogniteev, a company whose core business is focused on the analysis of web content. The issues we address are, on the one hand, the extraction of complex NEs on the web and, on the other hand, indexing and information retrieval over these complex NEs. Our first contribution is related to complex NE extraction from text content. For this contribution, we take several problems into consideration, in particular the noisy context characterizing some properties (the web page describing an event, for example, may contain more than one date: the date of the event and the date tickets go on sale). For this particular problem, we introduce a block detection module that focuses property extraction on relevant text blocks. Our experiments show a clear improvement in performance due to this approach. We also focused on address extraction, where the main difficulty arises from the fact that no standard has really established itself as a reference model for writing addresses. We therefore propose an extended address model and a pattern-based extraction approach that relies on freely available lexicons rather than proprietary resources. Our second contribution deals with similarity computation between complex NEs. In the state of the art, this computation is generally performed in two steps: (i) similarities between properties are calculated, and (ii) the obtained scores are aggregated to compute the overall similarity. For the first step, we propose a similarity function between spatial NEs, one represented by a point and the other by a polygon, which complements the state of the art. Our main proposals concern the second step: we propose three techniques for aggregating the intermediate property similarities. The first two are based on a weighted sum of these scores (a simple linear combination and logistic regression); the third uses decision trees for the aggregation. Finally, we propose a last approach, based on clustering and the Salton vector model, whose originality is that it evaluates similarity at the complex-NE level without computing intermediate property similarity scores.
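The second step described above, aggregating per-property similarity scores into a single similarity between two complex named entities, can be sketched for the simplest of the three techniques (the linear combination). The property names, weights, and decision threshold below are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of aggregating per-property similarities into one score for a
# pair of complex named entities, using a simple weighted linear combination.

def weighted_aggregation(property_scores, weights):
    """Linear combination of per-property similarities (first aggregation technique)."""
    total = sum(weights.values())
    return sum(weights[p] * property_scores.get(p, 0.0) for p in weights) / total

if __name__ == "__main__":
    # Per-property similarities between two 'event' entities (e.g. name, date, venue).
    scores = {"name": 0.92, "date": 1.0, "venue": 0.40}
    weights = {"name": 0.5, "date": 0.3, "venue": 0.2}
    overall = weighted_aggregation(scores, weights)
    print(overall, overall > 0.7)   # overall similarity and a possible same-entity decision
```

The logistic-regression and decision-tree variants mentioned in the abstract would replace this fixed weighting with parameters learned from labelled entity pairs.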
