  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
411

Konceptuální struktury jako nástroj reprezentace znalostí / Conceptual Structures As a Tool for Knowledge Representation

Ferbarová, Gabriela January 2016
(in English): Conceptual graphs are a formal knowledge representation language introduced at the end of the seventies by John F. Sowa, an American specialist in Artificial Intelligence. They are a synthesis of the heuristic and formalistic approaches to Artificial Intelligence and knowledge processing. They convey meaning and knowledge in a form that is logically precise, human-readable and computationally tractable, and applicable across the computing domain in general. Conceptual graphs can be expressed in first-order logic, which makes them a solid tool for automated reasoning. Their notation, CGIF, was standardised by ISO/IEC 24707:2007 as one of the three dialects of Common Logic, a framework for a family of logic-based languages. Conceptual graphs are also mappable to the knowledge representation languages standardised for the Semantic Web, OWL and RDF(S). This work introduces conceptual graph theory in the context of scientific fields such as linguistics, logic and artificial intelligence. It presents the formalism proposed by John F. Sowa together with some extensions that have emerged over the past decades out of the need to improve the representational properties of the graphs. Finally, the work provides an illustrative overview of the implementation and use of conceptual graphs in practice.
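The mapping from conceptual graphs to first-order logic mentioned above can be made concrete with a small sketch. The representation below (concept nodes with typed referents, relation edges, existential quantification of all referents) follows the standard textbook reading of a simple conceptual graph; the class and function names are illustrative, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    type_: str   # concept type, e.g. "Cat"
    var: str     # referent variable, e.g. "x"

@dataclass(frozen=True)
class Relation:
    name: str    # conceptual relation, e.g. "On"
    args: tuple  # tuple of Concept nodes it links

def to_fol(concepts, relations):
    """Translate a simple conceptual graph into a first-order formula string:
    each concept node becomes a typed atom, each relation an n-ary atom,
    and every referent variable is existentially quantified."""
    vars_ = sorted({c.var for c in concepts})
    atoms = [f"{c.type_}({c.var})" for c in concepts]
    atoms += [f"{r.name}({', '.join(c.var for c in r.args)})" for r in relations]
    return "exists " + " ".join(vars_) + ". " + " & ".join(atoms)

# "A cat is on a mat" -- the classic introductory example.
cat, mat = Concept("Cat", "x"), Concept("Mat", "y")
formula = to_fol([cat, mat], [Relation("On", (cat, mat))])
print(formula)  # exists x y. Cat(x) & Mat(y) & On(x, y)
```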
412

Relational Representation Learning Incorporating Textual Communication for Social Networks

Yi-Yu Lai (10157291) 01 March 2021
<div>Representation learning (RL) for social networks facilitates real-world tasks such as visualization, link prediction and friend recommendation. Many methods have been proposed in this area to learn continuous low-dimensional embedding of nodes, edges or relations in social and information networks. However, most previous network RL methods neglect social signals, such as textual communication between users (nodes). Unlike more typical binary features on edges, such as post likes and retweet actions, social signals are more varied and contain ambiguous information. This makes it more challenging to incorporate them into RL methods, but the ability to quantify social signals should allow RL methods to better capture the implicit relationships among real people in social networks. Second, most previous work in network RL has focused on learning from homogeneous networks (i.e., single type of node, edge, role, and direction) and thus, most existing RL methods cannot capture the heterogeneous nature of relationships in social networks. Based on these identified gaps, this thesis aims to study the feasibility of incorporating heterogeneous information, e.g., texts, attributes, multiple relations and edge types (directions), to learn more accurate, fine-grained network representations. </div><div> </div><div>In this dissertation, we discuss a preliminary study and outline three major works that aim to incorporate textual interactions to improve relational representation learning. The preliminary study learns a joint representation that captures the textual similarity in content between interacting nodes. The promising results motivate us to pursue broader research on using social signals for representation learning. The first major component aims to learn explicit node and relation embeddings in social networks. 
Traditional knowledge graph (KG) completion models learn latent representations of entities and relations by interpreting them as translations operating on the embeddings of the entities. However, existing approaches do not consider textual communications between users, which contain valuable information that provides meaning and context for social relationships. We propose a novel approach that incorporates textual interactions between each pair of users to improve representation learning of both users and relationships. The second major component focuses on analyzing how users interact with each other via natural language content. Although the data is interconnected and dependent, previous research has primarily modeled the social network behavior separately from the textual content. In this work, we model the data in a holistic way, taking into account the connections between users' social behavior and the content generated when they interact, by learning a joint embedding over user characteristics and user language. In the third major component, we consider the task of learning edge representations in social networks. Edge representations are especially beneficial when we need to describe or explain the relationships, activities, and interactions among users. However, previous work in this area lacks well-defined edge representations and ignores the relational signals over multiple views of social networks, which typically contain multi-view contexts (due to multiple edge types) that need to be considered when learning the representation. We propose a new methodology that captures asymmetry in multiple views by learning well-defined edge representations, and that incorporates textual communications to identify multiple sources of social signals that moderate the impact of different views between users.</div>
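The core idea of scoring a social relation as a translation in embedding space, augmented by a textual-communication signal, can be sketched minimally as below. The TransE-style distance score and the bag-of-words cosine stand-in for a learned text encoder are illustrative assumptions, not the thesis's actual model or weighting.

```python
import numpy as np

def translation_score(h, r, t):
    """Plausibility of (head, relation, tail): smaller distance = more plausible."""
    return -np.linalg.norm(h + r - t)

def text_similarity(msg_a, msg_b):
    """Toy bag-of-words cosine similarity standing in for a learned text encoder."""
    vocab = sorted(set(msg_a.split()) | set(msg_b.split()))
    va = np.array([msg_a.split().count(w) for w in vocab], float)
    vb = np.array([msg_b.split().count(w) for w in vocab], float)
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def joint_score(h, r, t, msg_a, msg_b, alpha=0.5):
    """Combine structural plausibility with a textual-interaction signal."""
    return translation_score(h, r, t) + alpha * text_similarity(msg_a, msg_b)

# Two users whose embeddings differ by exactly the "friend" relation vector:
user_a, user_b = np.array([1.0, 0.0]), np.array([1.0, 1.0])
friend = np.array([0.0, 1.0])
print(joint_score(user_a, friend, user_b, "great game today", "great game indeed"))
```

In a full model these scores would drive a margin-based training loop; here they only show how the text term modulates the structural score.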
413

Automated Theorem Proving for General Game Playing

Haufe, Sebastian 22 June 2012
While automated game-playing systems like Deep Blue perform excellently within their domain, handling a different game or even a slight change of rules is impossible without intervention by the programmer. Considered a great challenge for Artificial Intelligence, General Game Playing is concerned with the development of techniques that enable computer programs to play arbitrary, possibly unknown n-player games given nothing but the game rules in a tailor-made description language. A key to success in this endeavour is the ability to reliably extract hidden game-specific features from a given game description automatically. An informed general game player can efficiently play a game by exploiting structural game properties to choose the currently most appropriate algorithm, to construct a suitable heuristic, or to apply techniques that reduce the search space. In addition, an automated method for property extraction can provide valuable assistance in discovering specification bugs during game design by providing information about the mechanics of the currently specified game description. The recent extension of the description language to games with incomplete information and elements of chance further induces the need to detect game properties involving player knowledge at several stages of the game. In this thesis, we develop a formal proof method for the automatic acquisition of rich game-specific invariance properties. To this end, we first introduce a simple yet expressive property description language to address knowledge-free game properties which may involve arbitrary finite sequences of successive game states. We specify a semantics based on state transition systems over the Game Description Language, and develop a provably correct formal theory which allows the validity of game properties to be shown, with respect to this semantics, across all reachable game states. Our proof theory does not require visiting every single reachable state.
Instead, it applies an induction principle on the game rules based on the generation of answer set programs, allowing any off-the-shelf answer set solver to be applied to practically verify invariance properties even in complex games whose state space cannot be fully explored. To account for the recent extension of the description language to games with incomplete information and elements of chance, we extend our induction method to properties involving player knowledge and prove the extension correct. With an extensive evaluation we show its practical applicability even in complex games.
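The induction principle can be illustrated on a toy state-transition game: show the invariant holds in the initial state, then show that every legal move preserves it from any state satisfying it, without enumerating the reachable states. The counter game below is my own example (the thesis works over the Game Description Language and answer set programs, not hand-rolled Python).

```python
def check_invariant_by_induction(initial, moves, states, invariant):
    """Base case: the invariant holds in the initial state.
    Step case: from ANY state satisfying the invariant (reachable or not),
    every legal move leads to a state that still satisfies it."""
    if not invariant(initial):
        return False
    return all(invariant(move(s))
               for s in states if invariant(s)
               for move in moves)

# Game: a counter that each move changes by +2 or -2; invariant: value is even.
states = range(-10, 11)
moves = [lambda s: s + 2, lambda s: s - 2]
even = lambda s: s % 2 == 0

print(check_invariant_by_induction(0, moves, states, even))  # True
print(check_invariant_by_induction(1, moves, states, even))  # False (base case fails)
```

The point of the induction is visible here: the step case never asks which states are actually reachable, which is what lets the real method scale to games whose state space cannot be fully explored.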
414

Using n-layer graph models for representing and transforming knowledge on biological pathways

Hammoud, Zaynab 23 March 2021
No description available.
415

Hypothesis testing and community detection on networks with missingness and block structure

Guilherme Maia Rodrigues Gomes (8086652) 06 December 2019
Statistical analysis of networks has grown rapidly over the last few years, with an increasing number of applications. Graph-valued data carries additional information about dependencies, which opens the possibility of modeling highly complex objects in a vast number of fields such as biology (e.g. brain networks, fungal networks, gene co-expression), chemistry (e.g. molecular fingerprints), psychology (e.g. social networks) and many others (e.g. citation networks, word co-occurrences, financial systems, anomaly detection). While the inclusion of graph structure in the analysis can further help inference, even simple statistical tasks on a network are very complex. For instance, the assumption of exchangeability of the nodes or the edges is quite strong, and it brings issues such as sparsity, size bias and poor characterization of the generative process of the data. Solutions to these issues include adding specific constraints and assumptions on the data-generation process. In this work, we approach the problem by assuming graphs are globally sparse but locally dense, which allows the exchangeability assumption to hold in local regions of the graph. We consider problems with two types of locality structure: block structure (also framed as multiple graphs or a population of networks) and unstructured sparsity, which can be seen as missing data. For the former, we develop a hypothesis-testing framework for weighted aligned graphs and a spectral clustering method for community detection on populations of non-aligned networks. For the latter, we derive an efficient spectral clustering approach to learn the parameters of the zero-inflated stochastic blockmodel. Overall, we find that incorporating multiple local dense structures leads to more precise and powerful local and global inference. This result indicates that this general modeling scheme allows the exchangeability assumption on the edges to hold while generating more realistic graphs.
We give theoretical conditions for our proposed algorithms and evaluate them on synthetic and real-world datasets, showing that our models are able to outperform the baselines in a number of settings.
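A minimal sketch of the "globally sparse, locally dense" picture and of spectral community detection: two planted dense blocks joined by a single sparse edge, split by the sign pattern of the second eigenvector of the adjacency matrix. This illustrates the generic spectral technique only; the thesis's estimators (e.g. for the zero-inflated stochastic blockmodel) are more involved.

```python
import numpy as np

def spectral_two_blocks(A):
    """Split nodes into two communities using the sign pattern of the
    eigenvector for the second-largest eigenvalue of the adjacency matrix."""
    vals, vecs = np.linalg.eigh(A)      # eigh returns eigenvalues in ascending order
    v2 = vecs[:, -2]                    # eigenvector of the 2nd-largest eigenvalue
    return (v2 > 0).astype(int)

# Two dense 4-node cliques joined by one cross-block edge.
A = np.zeros((8, 8))
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[3, 4] = A[4, 3] = 1                   # the single sparse cross-block edge
labels = spectral_two_blocks(A)
print(labels)
```

The sign of the second eigenvector is only determined up to a global flip, so the recovered labels identify the partition rather than a fixed block order.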
416

Learning Multi-step Dual-arm Tasks From Demonstrations

Natalia S Sanchez Tamayo (9156518) 29 July 2020
Surgeon expertise can be difficult to capture through direct robot programming. Deep imitation learning (DIL) is a popular method for teaching robots to autonomously execute tasks by learning from demonstrations. DIL approaches have previously been applied to surgical automation. However, previous approaches do not consider the full range of dexterous robot motion required in general surgical tasks, leaving out tooltip rotation changes or modeling one robotic arm only. Hence, they are not directly applicable to tasks that require rotation and dual-arm collaboration, such as debridement. We propose to address this limitation by formulating a DIL approach for the execution of dual-arm surgical tasks that includes changes in tooltip orientation and position as well as gripper actions.<br><br>In this thesis, a framework for multi-step surgical task automation is designed and implemented by leveraging deep imitation learning. The framework optimizes Recurrent Neural Networks (RNNs) for the execution of whole surgical tasks while considering tooltip translations and rotations as well as gripper actions. The proposed network architecture implicitly optimizes for the interaction between the two robotic arms, as opposed to modeling each arm independently. The networks were trained directly from human demonstrations and require neither task-specific hand-crafted models nor manual segmentation of the demonstrations.<br><br>The proposed framework was implemented and evaluated in simulation on two relevant surgical tasks: the peg transfer task and surgical debridement. The tasks were tested under random initial conditions to challenge the networks' robustness in generalizing to variable settings. The performance of the framework was assessed using task and subtask success as well as a set of quantitative metrics. Experimental evaluation showed favorable results for automating surgical tasks under variable conditions: for surgical debridement, the framework obtained a task success rate comparable to the human task success rate, while for the peg transfer task it displayed moderate overall task success. Quantitative metrics indicate that the robot-generated trajectories possess similar or better motion economy than the human demonstrations.
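The joint dual-arm idea (one recurrent state reading both arms' observations and emitting both arms' actions, rather than one network per arm) can be sketched with a bare single-layer tanh cell. All sizes and weights below are random placeholders chosen for illustration; the thesis trains much richer RNNs on real demonstration data.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, hidden, act_dim = 14, 32, 16      # 7 per arm in, 8 per arm out (toy sizes)
Wx = rng.normal(0, 0.1, (hidden, obs_dim))
Wh = rng.normal(0, 0.1, (hidden, hidden))
Wy = rng.normal(0, 0.1, (act_dim, hidden))

def rollout(observations):
    """Run the recurrent policy over a demonstration, returning both arms'
    predicted actions (translation, rotation, gripper) at every timestep."""
    h = np.zeros(hidden)
    actions = []
    for x in observations:
        h = np.tanh(Wx @ x + Wh @ h)       # one joint hidden state for both arms
        actions.append(Wy @ h)             # [arm-1 actions | arm-2 actions]
    return np.stack(actions)

demo = rng.normal(size=(50, obs_dim))       # a 50-step synthetic demonstration
acts = rollout(demo)
print(acts.shape)  # (50, 16)
```

Because both arms share the hidden state, the interaction between them is modeled implicitly, which is the architectural point the abstract makes.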
417

Toward Supporting Fine-Grained, Structured, Meaningful and Engaging Feedback in Educational Applications

Bulgarov, Florin Adrian 12 1900
Recent advancements in machine learning have started to leave their mark on educational technology. Technology is evolving fast and, as people adopt it, schools and universities must keep up (nearly 70% of primary and secondary schools in the UK now use tablets for various purposes). As these numbers are likely to follow the same increasing trend, it is imperative for schools to adapt and benefit from the advantages offered by technology: real-time processing of data, availability of different resources through connectivity, efficiency, and many others. To this end, this work contributes to the growth of educational technology by developing several algorithms and models that are meant to ease several tasks for instructors, engage students in deep discussions and, ultimately, increase their learning gains. First, a novel, fine-grained knowledge representation is introduced that splits phrases into their constituent propositions, which are both meaningful and minimal, along with an algorithm for automatically extracting these propositions. Compared with other fine-grained representations, the extraction model requires no human labor after it is trained, while the results show considerable improvement over two meaningful baselines. Second, a proposition alignment model is created that relies on even finer-grained units of text while also outperforming several alternative systems. Third, a detailed machine-learning-based analysis of students' unrestricted natural language responses to questions asked in classrooms is conducted by leveraging the proposition extraction algorithm to make computational predictions for textual assessment. Two computational approaches are introduced that compare manually engineered machine learning features with word embeddings fed into a two-hidden-layer neural network.
Both methods achieve notable improvements over two alternative approaches: a recent short answer grading system and DiSAN, a recent, pre-trained, light-weight neural network that obtained state-of-the-art performance on multiple NLP tasks and corpora. Fourth, a clustering algorithm is introduced to bring structure to the feedback offered to instructors in classrooms. The algorithm organizes student responses based on three important aspects: propositional importance classifications, computational assessment of student understanding, and similarity metrics between student responses. Moreover, a dynamic cluster-selection algorithm is designed to decide which groups of responses resulting from the cluster hierarchy are best. The algorithm achieves 86.3% of the performance achieved by humans on the same task and dataset. Fifth, a deep neural network is built to predict, for each cluster, an engagement response meant to help generate insightful classroom discussion. This is the first computational model to predict how engaging student responses will be in classroom discussion. Its performance reaches 86.8% of the performance obtained by humans on the same task and dataset. Moreover, I also demonstrate the effectiveness of a dynamic algorithm that can self-improve with minimal help from teachers, reducing its relative error by up to 32%.
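The grouping step that structures instructor feedback can be sketched with a toy version: bag-of-words vectors, cosine similarity, and a greedy single-pass clustering of responses. The thesis's pipeline (proposition extraction, learned assessment, dynamic cluster selection over a hierarchy) is far richer; this only illustrates how similar answers end up in the same feedback group.

```python
def bow(text):
    """Bag-of-words term counts."""
    v = {}
    for w in text.lower().split():
        v[w] = v.get(w, 0) + 1
    return v

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = sum(x * x for x in a.values()) ** 0.5
    nb = sum(x * x for x in b.values()) ** 0.5
    return dot / (na * nb)

def greedy_cluster(responses, threshold=0.5):
    """Assign each response to the first cluster whose representative is
    similar enough; otherwise start a new cluster."""
    clusters = []   # list of (representative_vector, member_indices)
    for i, r in enumerate(responses):
        v = bow(r)
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((v, [i]))
    return [members for _, members in clusters]

answers = [
    "plants make food using sunlight",
    "plants use sunlight to make food",
    "the mitochondria is the powerhouse of the cell",
]
print(greedy_cluster(answers))  # [[0, 1], [2]]
```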
418

DEVELOPING A DECISION SUPPORT SYSTEM FOR CREATING POST DISASTER TEMPORARY HOUSING

Mahdi Afkhamiaghda (10647542) 07 May 2021
<p>Post-disaster temporary housing has been a significant challenge for emergency management groups and industries for many years. According to reports by the Department of Homeland Security (DHS), housing in states and territories ranks second to last in proficiency among the 32 core capabilities for preparedness. The number of temporary housing units required in a geographic area is influenced by a variety of factors, including social issues, financial concerns, labor workforce availability, and climate conditions. Acknowledging and balancing these interconnected needs is considered one of the main challenges that need to be addressed. Post-disaster temporary housing is a multi-objective process, so reaching an optimized model relies on how different elements and objectives interact, and sometimes conflict, with each other. This makes decision making in post-disaster construction more restricted and challenging, which has caused ineffective management in post-disaster housing reconstruction.</p> <p>A few studies have examined the use of Artificial Intelligence modeling to reduce the time and cost of post-disaster sheltering. However, there is a lack of research, and a knowledge gap, regarding the selection of the most suitable type of Temporary Housing Unit (THU) in a post-disaster event and the magnitude of effect of the different factors involved.</p> The proposed framework in this research uses supervised machine learning to maximize certain design aspects and minimize some of the difficulties, to better support creating temporary housing in post-disaster situations. The outcome of this study is the classification of the THU type, more particularly, classifying THUs based on whether they are built on-site or off-site.
In order to collect primary data for creating the model and evaluating the magnitude of effect of each factor in the process, a set of surveys was distributed among the key players and policymakers who play a role in providing temporary housing to people affected by natural disasters in the United States. The framework benefits from the tacit knowledge of experts in the field to surface the challenges and issues in the subject. The result of this study is a data-based multi-objective decision-making tool for selecting the THU type. Using this tool, policymakers in charge of selecting and allocating post-disaster accommodations can select the THU type most responsive to the local needs and characteristics of the affected people in each natural disaster.
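The supervised classification step (on-site vs. off-site THU) can be sketched with a minimal nearest-neighbor rule. The features, their values, and the training rows below are invented for illustration; the thesis derives its factors and their weights from expert surveys.

```python
def nearest_neighbor(train, query):
    """1-NN over (features, label) rows with plain Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(train, key=lambda row: dist(row[0], query))[1]

# Hypothetical feature vector: (labor availability 0-1, road access 0-1, severity 0-1)
train = [
    ((0.9, 0.9, 0.2), "on-site"),    # plenty of labor, good access, mild damage
    ((0.8, 0.7, 0.3), "on-site"),
    ((0.2, 0.3, 0.9), "off-site"),   # scarce labor, poor access, severe damage
    ((0.1, 0.4, 0.8), "off-site"),
]
print(nearest_neighbor(train, (0.85, 0.8, 0.25)))  # on-site
print(nearest_neighbor(train, (0.15, 0.35, 0.85))) # off-site
```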
419

Integrative Analysis of Multimodal Biomedical Data with Machine Learning

Zhi Huang (11170170) 23 July 2021
<div>With the rapid development of high-throughput technologies and next-generation sequencing (NGS) during the past decades, the bottleneck for advances in computational biology and bioinformatics research has shifted from data collection to data analysis. As one of the central goals in precision health, understanding and interpreting high-dimensional biomedical data is of major interest in the computational biology and bioinformatics domains. Because significant effort has been committed to harnessing biomedical data for multiple analyses, this thesis aims to develop new machine learning approaches that help discover and interpret the complex mechanisms and interactions behind the high-dimensional features in biomedical data. Moreover, this thesis also studies the prediction of post-treatment response from histopathologic images with machine learning.</div><div><br></div><div>Capturing the important features behind biomedical data can be achieved in many ways, such as network and correlation analyses, dimensionality reduction, image processing, etc. In this thesis, we accomplish this through co-expression analysis, survival analysis, and matrix decomposition, in both supervised and unsupervised learning settings. We use co-expression analysis as upfront feature engineering, and implement survival regression in deep learning to predict patient survival and discover associated factors. By integrating Cox proportional hazards regression into a non-negative matrix factorization algorithm, latent clusters of human genes are uncovered. Using a machine learning and automatic feature extraction workflow, we extract thirty-six image features from histopathologic images and use them to predict post-treatment response. In addition, a web portal written in R is built to bring convenience to future biomedical studies and analyses.</div><div><br></div><div>In conclusion, driven by machine learning algorithms, this thesis focuses on the integrative analysis of multimodal biomedical data, especially supervised cancer patient survival prognosis, the recognition of latent gene clusters, and the application of predicting post-treatment response from histopathologic images. The proposed computational algorithms compare favorably to other state-of-the-art models and provide new insights for future biomedical and cancer studies.</div>
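The unsupervised core of the survival-supervised factorization described above is plain non-negative matrix factorization. The sketch below uses the classic Lee-Seung multiplicative updates on a synthetic rank-2 "expression" matrix; the Cox regression term that the thesis integrates into the objective is omitted here for brevity.

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Factor a non-negative X (n x m) into W (n x k) @ H (k x m) by
    minimizing the Frobenius error with multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 1e-3
    H = rng.random((k, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)   # update H, keeps H non-negative
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)   # update W, keeps W non-negative
    return W, H

# A rank-2 matrix: two latent "gene programs" mixed across samples.
rng = np.random.default_rng(1)
X = rng.random((20, 2)) @ rng.random((2, 30))
W, H = nmf(X, k=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(f"relative reconstruction error: {err:.4f}")
```

Rows of H can then be read as latent clusters of features (genes), which is the role factorization plays in the pipeline above.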
420

Fusing DL Reasoning with HTN Planning as a Deliberative Layer in Mobile Robotics

Hartanto, Ronny 08 March 2010
Action planning has been used in the field of robotics for solving long-running tasks. In the field of robot architectures, it is also known as the deliberative layer. However, there is still a gap between the symbolic representation on the one hand and the low-level control and sensor representation on the other. In addition, defining a planning problem for a complex, real-world robot is not trivial: the planning process can become intractable as its search space grows large. Since the defined planning problem determines the complexity and the computational tractability of solving it, it should contain only relevant states. In this work, a novel approach is introduced which amalgamates Description Logic (DL) reasoning with Hierarchical Task Network (HTN) planning. The planning domain description as well as fundamental HTN planning concepts are represented in DL and can therefore be subject to DL reasoning; from these representations, concise planning problems are generated for HTN planning. The method is presented through an example in the robot navigation domain, and a case study in the RoboCup@Home domain is given. As a proof of concept, a well-known benchmark planning problem, the blocks world, is modeled and solved using this approach. An analysis of the performance of the approach shows that it yields significantly smaller planning problem descriptions than those generated by current representations in HTN planning.
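To make the HTN side of the approach concrete, here is a toy decomposition: compound tasks are rewritten by methods into ordered subtasks until only primitive actions remain. The fetch-a-cup domain and all task names are illustrative inventions; the thesis's contribution is generating such concise planning problems from DL reasoning rather than the decomposition itself.

```python
methods = {  # compound task -> ordered subtasks of its (single) method
    "fetch(cup)":        ["navigate(kitchen)", "pickup(cup)", "navigate(start)"],
    "navigate(kitchen)": ["move(hall)", "move(kitchen)"],
    "navigate(start)":   ["move(hall)", "move(start)"],
}

def plan(task):
    """Depth-first decomposition: expand compound tasks via their method,
    keep primitive tasks (those without a method) as plan steps."""
    if task not in methods:          # primitive action
        return [task]
    steps = []
    for sub in methods[task]:
        steps.extend(plan(sub))
    return steps

print(plan("fetch(cup)"))
# ['move(hall)', 'move(kitchen)', 'pickup(cup)', 'move(hall)', 'move(start)']
```

A real HTN planner also tracks state and chooses among multiple applicable methods with backtracking; pruning the methods and states fed into that search is exactly where the DL reasoning pays off.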
