  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

HealthCyberMap : mapping the health cyberspace using hypermedia GIS and clinical codes

Boulos, Maged Nabih Kamel January 2002 (has links)
No description available.
22

Understanding readers' navigation in WWW documents

Gonzáles de Cosío, María January 2002 (has links)
No description available.
23

An experimental study of semantics and affect in schizophrenic patients with delusions

Rossell, Susan Lee January 1999 (has links)
No description available.
24

Content based image retrieval using scale space object trees

Dupplaw, David Paul January 2002 (has links)
No description available.
25

Verbal overshadowing of face and car recognition

Brown, Charity January 2003 (has links)
No description available.
26

Enhancing workflow with a semantic description of scientific intent

Pignotti, Edoardo January 2010 (has links)
In recent years there has been a proliferation of scientific resources available through the Internet including, for example, datasets and computational modelling services.  Scientists are becoming increasingly dependent upon these resources, which are changing the way they conduct their research activities with increasing emphasis on conducting ‘in silico’ experiments as a way to test hypotheses.  Scientific workflow technologies provide researchers with a flexible problem-solving environment by facilitating the creation and execution of experiments from a pool of available services.  This thesis investigates the use of workflow tools enhanced with semantics to facilitate the design, execution, analysis and interpretation of workflow experiments and exploratory studies.  It is argued that in order to better characterise such experiments we need to go beyond low-level service composition and execution details by capturing higher-level descriptions of the scientific process.  Current workflow technologies do not incorporate any representation of such experimental constraints and goals, which is referred to in this thesis as scientist’s intent.  This thesis proposes an abstract model of scientific intent based on the concept of an Agent in the Open Provenance Model (OPM) specification.  To realise this model a framework based upon a number of Semantic Web technologies has been developed, including the OWL ontology language and the Semantic Web Rule Language (SWRL).  Through the use of social simulation case studies the thesis illustrates the benefits of using this framework in terms of workflow monitoring, workflow provenance and annotation of experimental results.
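The abstract's notion of scientist's intent as machine-checkable constraints over workflow provenance can be illustrated with a small sketch. This is my own simplification, not the thesis's OWL/SWRL framework: the record fields, rule names, and threshold values are all invented, and an OPM Agent is reduced to a plain string.

```python
# Hypothetical sketch (not from the thesis): scientist's intent modelled as
# machine-checkable constraints evaluated against workflow provenance records.
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:          # one step of a workflow run (OPM-like process)
    process: str
    agent: str                   # stand-in for the OPM Agent controlling the step
    outputs: dict = field(default_factory=dict)

@dataclass
class IntentRule:                # a constraint expressing the scientist's goal
    name: str
    check: callable              # record -> bool; True means satisfied

def monitor(run, rules):
    """Return, per process, the intent rules its provenance record violates."""
    violations = {}
    for record in run:
        failed = [r.name for r in rules if not r.check(record)]
        if failed:
            violations[record.process] = failed
    return violations

run = [
    ProvenanceRecord("simulate", "modeller", {"iterations": 500}),
    ProvenanceRecord("analyse", "statistician", {"p_value": 0.2}),
]
rules = [
    IntentRule("enough-iterations",
               lambda rec: rec.outputs.get("iterations", 1000) >= 1000),
    IntentRule("significant-result",
               lambda rec: rec.outputs.get("p_value", 0.0) <= 0.05),
]
print(monitor(run, rules))
# flags "simulate" (too few iterations) and "analyse" (not significant)
```

In the thesis such constraints would be expressed declaratively (e.g. as SWRL rules over an OWL ontology) rather than as Python lambdas; the point here is only the monitoring pattern: intent rules evaluated against each provenance record as the workflow runs.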
27

The Semantic Differential as a Measure of Sexual Differences

Lynd, Robert S. 05 1900 (has links)
The purpose of this research study was to determine whether the semantic differential could measure in the college population the variation in meaning of selected masculine and feminine concepts as a function of sex difference.
28

Automatic novice program comprehension for semantic bug detection

Ade-Ibijola, Abejide Olu January 2016 (has links)
A thesis submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, in fulfillment of the requirements for the Degree of Doctor of Philosophy in Computer Science, April 2016 / Automatically comprehending novice programs with the aim of giving useful feedback has been an Artificial Intelligence problem for over four decades. Solving this problem essentially entails manipulating the underlying program plans; i.e. extracting and comparing the novice's plan to the expert's plan and inferring from the comparison where the novice's bug lies. The bugs of interest in this domain are typically semantic bugs, since syntactic bugs are handled by the automatic debuggers built into most compilers. Hence, a program that debugs like a human expert must understand the problem and know the expected solution(s) in order to detect semantic bugs. This work proposes a new approach to comprehending novice programs using: regular expressions for the recognition of plans in program text, principles from formal language theory for defining the space of program plan variations, and automata-based algorithms for the detection of semantic bugs. The new approach was tested on a repository of novice programs with known semantic bugs, and the specific bugs were detected. As a proof of concept, the theories presented in this work are further implemented in software prototypes. If the new idea is implemented in a robust software tool, it will find applications in comprehending first-year students' programs, thereby supporting the human expert in teaching programming.
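The core idea of recognising plans in program text with regular expressions can be sketched in a few lines. The "summation plan", the capture-group names, and the bug checks below are invented for illustration; the thesis's actual plan language and automata-based detection are far more general.

```python
# Illustrative sketch (plan and names invented, not from the thesis):
# a novice "summation plan" captured as a regular expression, with
# semantic-bug checks applied to the captured parts.
import re

# Expected plan: initialise an accumulator to 0, then add each item to it.
SUM_PLAN = re.compile(
    r"(?P<acc>\w+)\s*=\s*(?P<init>\d+)\s*\n"        # accumulator initialisation
    r"for\s+(?P<var>\w+)\s+in\s+\w+\s*:\s*\n"       # loop over the input
    r"\s+(?P<acc2>\w+)\s*\+=\s*(?P<added>\w+)"      # accumulation statement
)

def detect_bugs(program_text):
    """Match the plan in the text and report semantic deviations from it."""
    m = SUM_PLAN.search(program_text)
    if m is None:
        return ["no summation plan recognised"]
    bugs = []
    if m.group("init") != "0":
        bugs.append("accumulator not initialised to 0")
    if m.group("acc2") != m.group("acc"):
        bugs.append("accumulates into a different variable")
    if m.group("added") != m.group("var"):
        bugs.append("adds something other than the loop variable")
    return bugs

novice = "total = 1\nfor x in numbers:\n    total += x"
print(detect_bugs(novice))   # ['accumulator not initialised to 0']
```

The program compiles (no syntactic bug), yet the plan comparison reveals the semantic bug: the accumulator starts at 1 instead of 0, which no compiler would flag.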
29

Inferring unobserved co-occurrence events in Anchored Packed Trees

Kober, Thomas Helmut January 2018 (has links)
Anchored Packed Trees (APTs) are a novel approach to distributional semantics that takes distributional composition to be a process of lexeme contextualisation. A lexeme's meaning, characterised as knowledge concerning co-occurrences involving that lexeme, is represented with a higher-order dependency-typed structure (the APT), where paths associated with higher-order dependencies connect vertices associated with weighted lexeme multisets. The central innovation in the compositional theory is that the APT's type structure enables the precise alignment of the semantic representation of each of the lexemes being composed. Like other count-based distributional spaces, however, Anchored Packed Trees are prone to considerable data sparsity, caused by not observing all plausible co-occurrences in the given data. This problem is amplified for models like APTs that take the grammatical type of a co-occurrence into account. The result is a very sparse distributional space, requiring a mechanism for inferring missing knowledge. Most methods face this challenge in ways that render the resulting word representations uninterpretable, with the consequence that distributional composition becomes difficult to model and reason about. In this thesis, I present a practical evaluation of the APT theory, including a large-scale hyperparameter sensitivity study and a characterisation of the distributional space that APTs give rise to. Based on the empirical analysis, the impact of the problem of data sparsity is investigated. In order to address the data sparsity challenge and retain the interpretability of the model, I explore an alternative algorithm, distributional inference, for improving elementary representations. The algorithm explicitly infers unobserved co-occurrence events by leveraging the distributional neighbourhood of the semantic space.
I then leverage the rich type structure in APTs and propose a generalisation of the distributional inference algorithm. I empirically show that distributional inference improves elementary word representations and is especially beneficial when combined with an intersective composition function, due to the complementary nature of inference and composition. Lastly, I qualitatively analyse the proposed algorithms in order to characterise the knowledge they are able to infer, as well as their impact on the distributional APT space.
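The neighbour-based inference step can be sketched minimally. This is my own toy simplification, not the thesis's APT algorithm: the co-occurrence space is a flat dict of dependency-typed counts, the feature labels are invented, and only the single nearest neighbour contributes by default.

```python
# Minimal sketch (a simplification, not the thesis's APT algorithm):
# infer unobserved co-occurrence events for a word from the distributional
# neighbours of its sparse count vector.
import math

def cosine(u, v):
    """Cosine similarity between two sparse feature->count dicts."""
    shared = set(u) & set(v)
    dot = sum(u[f] * v[f] for f in shared)
    norm = lambda w: math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

def infer(space, word, k=1):
    """Fill unseen co-occurrences of `word` from its k nearest neighbours."""
    others = [(cosine(space[word], vec), w)
              for w, vec in space.items() if w != word]
    enriched = dict(space[word])                  # copy; space stays intact
    for _, neighbour in sorted(others, reverse=True)[:k]:
        for feat, count in space[neighbour].items():
            if feat not in enriched:              # only fill unobserved events
                enriched[feat] = count
    return enriched

space = {
    "dog": {"amod:loyal": 3, "dobj:walk": 5},
    "cat": {"amod:loyal": 1, "dobj:feed": 2},
    "car": {"dobj:drive": 7},
}
print(infer(space, "cat"))
# "cat" inherits the unseen "dobj:walk" event from its neighbour "dog"
```

Because the inferred events are explicit counts rather than dense latent dimensions, the enriched representation stays interpretable, which is the property the abstract contrasts with most smoothing methods.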
30

Agent-based ontology management towards interoperability

Li, Li, llI@it.swin.edu.au January 2005 (has links)
Ontologies are widely used as data representations for knowledge bases and for marking up data on the emerging Semantic Web. Hence, techniques for managing ontologies are central to any practical and general solution for knowledge-based systems. Challenges arise when we look a step further in order to achieve flexibility and scalability of ontology management. Previous work in ontology management, primarily on ontology mapping, ontology integration and ontology evolution, has exploited only one form or another of ontology management in restrictive settings. However, a distributed and heterogeneous environment makes it necessary for researchers in this field to consider ontology interoperability in order to achieve the vision of the Semantic Web. Several challenges arise when we set our goal to achieve ontology interoperability on the Web. The first is to decide which software engineering paradigm to employ. The choice of paradigm is at the core of ontology management when dynamic properties are involved. It should make it easy to model complex systems and significantly improve current practice in software engineering. Moreover, it should allow the extension of the range of applications that can feasibly be tackled. The second challenge is to develop frameworks based on the proposed paradigm. Such a framework should enable flexibility, interactivity, reusability and reliability for the systems built on it. The third challenge is to investigate suitable mechanisms to cope with ontology mapping, integration and evolution based on the framework. It is known that predefined rules or hypotheses may not apply, given that the environment hosting an ontology changes over time. Fortunately, agents are being advocated as a next-generation model for engineering complex and distributed systems.
Some researchers in this field have also given a qualitative analysis justifying precisely why the agent-based approach is well suited to engineering complex software systems. From a multi-agent perspective, agent technology fits well in developing applications in uncontrolled and distributed environments which require substantial support for change. Agents in multi-agent systems (MAS) are autonomous and can engage in interactions which are essential for any ongoing agent actions. A MAS approach is thus regarded as an intuitive and suitable way of modelling dynamic systems. Following the above discussion, an agent-based framework for managing ontologies in a dynamic environment is developed. The framework has several key characteristics, such as flexibility and extensibility, that differentiate this research from others. Three important issues of ontology management are also investigated. It is believed that inter-ontology processes like ontology mapping with logical semantics are foundations of ontology-based applications. Hence, firstly, ontology mapping is discussed: several types of semantic relations are proposed, and mapping mechanisms are developed on top of them. Secondly, based on the mapping results, ontology integration is developed to provide abstract views for participating organisations in the presence of a variety of ontologies. Thirdly, as an ontology is subject to evolution over its life cycle, there must be some mechanism to reflect its changes in corresponding interrelated ontologies. Ontology refinement is investigated to take ontology evolution into consideration. Process algebra is employed to capture and model information exchanges between ontologies. An agent negotiation strategy is applied to guide corresponding ontologies to react properly. A prototype is built to demonstrate the above design and functionalities. It is applied to ontologies dealing with the subject of beer (type).
This prototype consists of four major types of agents: user agents, interface agents, ontology agents, and functionary agents. Evaluations such as querying and consistency checking are conducted on the prototype. These show that the framework is not only flexible but also fully workable. All agents derived from the framework exhibit their behaviours appropriately, as expected.
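The mapping step, proposing semantic relations between concepts of two ontologies, can be sketched with the thesis's own beer example. This is a hypothetical stand-in, not the thesis's mechanism: real ontology mapping uses logical semantics and agent negotiation, whereas here label equality and a parent lookup stand in for "equivalent" and "narrower-than" relations.

```python
# Hypothetical sketch (not the thesis's mechanism): propose simple semantic
# relations between two concept taxonomies given as child -> parent dicts.
def propose_mappings(tax_a, tax_b):
    """Return (concept_a, relation, concept_b) triples mapping tax_a into tax_b."""
    concepts_b = set(tax_b) | set(tax_b.values())
    triples = []
    for child, parent in tax_a.items():
        if child in concepts_b:
            # same label on both sides: treat as equivalent concepts
            triples.append((child, "equivalent", child))
        elif parent in concepts_b:
            # only the parent is known to tax_b: subsumption relation
            triples.append((child, "narrower-than", parent))
    return triples

beer_a = {"lager": "beer", "pilsner": "lager"}
beer_b = {"ale": "beer", "lager": "beer"}
print(propose_mappings(beer_a, beer_b))
# [('lager', 'equivalent', 'lager'), ('pilsner', 'narrower-than', 'lager')]
```

In the thesis such triples would feed the integration step (building abstract views over both ontologies) and be renegotiated by the agents as either ontology evolves.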
