  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
301

A Semantics-based User Interface Model for Content Annotation, Authoring and Exploration

Khalili, Ali 26 January 2015 (has links)
The Semantic Web and Linked Data movements, which aim to create, publish and interconnect machine-readable information, have gained traction in recent years. However, the majority of information is still contained in, and exchanged using, unstructured documents such as Web pages, text documents, images and videos. Nor can this be expected to change, since text, images and videos are the natural way in which humans interact with information. Semantic structuring of content, on the other hand, provides a wide range of advantages over unstructured information. Semantically enriched documents facilitate information search and retrieval, presentation, integration, reusability, interoperability and personalization. Looking at the life-cycle of semantic content on the Web of Data, we see considerable progress on the backend side in storing structured content and in linking data and schemata. Nevertheless, the least developed aspect of the semantic content life-cycle is, from our point of view, the user-friendly manual and semi-automatic creation of rich semantic content. In this thesis, we propose a semantics-based user interface model which aims to reduce, for Web users, the complexity of the technologies underlying semantic enrichment of content. By surveying existing tools and approaches for semantic content authoring, we extracted a set of guidelines for designing efficient and effective semantic authoring user interfaces. We applied these guidelines to devise a semantics-based user interface model called WYSIWYM (What You See Is What You Mean), which enables integrated authoring, visualization and exploration of unstructured and (semi-)structured content. To assess the applicability of the WYSIWYM model, we incorporated it into four real-world use cases comprising two general and two domain-specific applications.
These use cases address four aspects of the WYSIWYM implementation: 1) its integration into existing user interfaces, 2) its use for lightweight text analytics to incentivize users, 3) crowdsourcing of semi-structured e-learning content, and 4) authoring of semantic medical prescriptions.
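As an illustration of the kind of semantic enrichment such an authoring interface produces, the sketch below embeds RDFa annotations into plain HTML text. It is not taken from the thesis; the entity mention, URIs and helper function are hypothetical examples of typical Linked Data markup.

```python
# Illustrative sketch: embedding RDFa annotations into HTML text,
# the kind of semantic enrichment a WYSIWYM-style editor might emit.
# The annotation dictionary below is a hypothetical example.

def annotate_rdfa(text, annotations):
    """Wrap each annotated substring in an RDFa <span>.

    `annotations` maps a surface string to (resource URI, rdf:type URI).
    Longer mentions are applied first to avoid partial nested matches.
    """
    for mention, (about, rdf_type) in sorted(
            annotations.items(), key=lambda kv: -len(kv[0])):
        span = f'<span about="{about}" typeof="{rdf_type}">{mention}</span>'
        text = text.replace(mention, span)
    return text

html = annotate_rdfa(
    "Ali Khalili studied in Leipzig.",
    {"Leipzig": ("http://dbpedia.org/resource/Leipzig",
                 "http://dbpedia.org/ontology/City")},
)
print(html)
```

The same document thus stays readable for humans while exposing machine-readable structure to RDFa-aware consumers.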
302

Embodied Metarepresentations

Hinrich, Nicolás, Foradi, Maryam, Yousef, Tariq, Hartmann, Elisa, Triesch, Susanne, Kaßel, Jan, Pein, Johannes 06 June 2023 (has links)
Meaning has been established pervasively as a central concept throughout the disciplines involved in the cognitive revolution. Its metaphoric usage arises, first and foremost, through the interpreter’s constraint: representational relationships and contents are considered to be in the “eye” or mind of the observer, and properties shared among observers are knowable through interlinguistic phenomena such as translation. Despite the instability of meaning in relation to its underdetermination by reference, it can serve as a tertium comparationis or “third comparator” for extended human cognition if gauged through invariants that persist in transfer processes such as translation, as all languages and cultures are rooted in pan-human experience and thus share and express a species-specific ontology. Meaning, seen as a cognitive competence, does not stop at the boundary of the body but extends to, depends on, and partners with other agents and the environment. A novel approach is presented for exploring the transfer properties of some constituent items of the original natural semantic metalanguage in English, that is, semantic primitives: FrameNet semantic frames evoked by the primes SEE and FEEL were extracted from EuroParl, a parallel corpus that allows for the automatic word alignment of items with their synonyms, using Large Ontology Multilingual Extraction. Afterward, following the Semantic Mirrors Method, a procedure that consists of back-translating into the source language, a translatological examination of translated and original versions of the items was performed. A fully automated pipeline was designed and tested with the purpose of exploring associated frame shifts and thus beginning a research agenda on their alleged universality as linguistic features of translation, which will be complemented and contrasted with further large-scale feedback through a citizen science approach, as well as cognitive and neurophysiological examinations.
Additionally, an embodied account of frame semantics is proposed.
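The back-translation step of the Semantic Mirrors Method can be sketched as follows; the alignment dictionaries here are toy, hypothetical data, not the EuroParl alignments used in the study.

```python
def mirror_image(word, src_to_tgt, tgt_to_src):
    """First t-image and its back-translation (inverse t-image) of a word,
    following the first step of the Semantic Mirrors Method: translate the
    word into the target language, then translate each translation back."""
    t_image = set(src_to_tgt.get(word, ()))
    inverse = set()
    for t in t_image:
        inverse.update(tgt_to_src.get(t, ()))
    return t_image, inverse

# Toy English<->German word-alignment data (hypothetical, for illustration)
en_de = {"see": ["sehen", "einsehen"], "feel": ["fühlen", "spüren"]}
de_en = {"sehen": ["see", "view"], "einsehen": ["see", "realize"],
         "fühlen": ["feel"], "spüren": ["feel", "sense"]}

t_img, back = mirror_image("see", en_de, de_en)
print(sorted(back))  # back-translations hint at sense partitions of "see"
```

Clusters in the back-translation set (here "view" vs. "realize") are what the method uses to partition a word's senses.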
303

Automatically Acquiring A Semantic Network Of Related Concepts

Szumlanski, Sean 01 January 2013 (has links)
We describe the automatic acquisition of a semantic network in which over 7,500 of the most frequently occurring nouns in the English language are linked to their semantically related concepts in the WordNet noun ontology. Relatedness between nouns is discovered automatically from lexical co-occurrence in Wikipedia texts using a novel adaptation of an information-theoretically inspired measure. Our algorithm then capitalizes on salient sense clustering among these semantic associates to automatically disambiguate them to their corresponding WordNet noun senses (i.e., concepts). The resultant concept-to-concept associations, stemming from 7,593 target nouns with 17,104 distinct senses among them, constitute a large-scale semantic network with 208,832 undirected edges between related concepts. Our work can thus be conceived of as augmenting the WordNet noun ontology with RelatedTo links. The network, which we refer to as the Szumlanski-Gomez Network (SGN), has been subjected to a variety of evaluative measures, including manual inspection by human judges and quantitative comparison to gold-standard data for semantic relatedness measurements. We have also evaluated the network’s performance in an applied setting on a word sense disambiguation (WSD) task in which the network served as a knowledge source for established graph-based spreading activation algorithms, and have shown that: a) the network is competitive with WordNet when used as a stand-alone knowledge source for WSD; b) combining our network with WordNet achieves disambiguation results that exceed the performance of either resource individually; and c) our network outperforms a similar resource, WordNet++ (Ponzetto & Navigli, 2010), that has been automatically derived from annotations in the Wikipedia corpus. Finally, we present a study on human perceptions of relatedness.
In our study, we elicited quantitative evaluations of semantic relatedness from human subjects using a variation of the classical methodology that Rubenstein and Goodenough (1965) employed to investigate human perceptions of semantic similarity. Judgments from individual subjects in our study exhibit high average correlation to the elicited relatedness means using leave-one-out sampling (r = 0.77, σ = 0.09, N = 73), although not as high as average human correlation in previous studies of similarity judgments, for which Resnik (1995) established an upper bound of r = 0.90 (σ = 0.07, N = 10). These results suggest that human perceptions of relatedness are less strictly constrained than evaluations of similarity, and establish a clearer expectation for what constitutes human-like performance by a computational measure of semantic relatedness. We also contrast the performance of a variety of similarity and relatedness measures on our dataset to their performance on similarity norms and introduce our own dataset as a supplementary evaluative standard for relatedness measures.
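A minimal sketch of the kind of graph-based spreading activation such a network can serve as a knowledge source for is shown below; the edges, decay scheme, and sense labels are illustrative inventions, not the SGN's actual data or the cited algorithms.

```python
from collections import defaultdict

def spread_activation(edges, seeds, decay=0.5, steps=2):
    """Toy spreading activation over an undirected semantic network.
    `edges` are (concept, concept) pairs; each seed starts with
    activation 1.0, which propagates to neighbours with decay."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    activation = {s: 1.0 for s in seeds}
    for _ in range(steps):
        new = dict(activation)
        for node, act in activation.items():
            for nb in graph[node]:
                new[nb] = max(new.get(nb, 0.0), act * decay)
        activation = new
    return activation

# Hypothetical RelatedTo edges between concepts (sense-tagged nouns)
edges = [("bank#finance", "money"), ("money", "loan"),
         ("bank#river", "water")]
act = spread_activation(edges, seeds={"money"})
# context word "money" activates the finance sense, not the river sense
print(act.get("bank#finance", 0), act.get("bank#river", 0))
```

In a WSD setting, the sense whose node accumulates the most activation from the context words is chosen.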
304

A framework for semantic web implementation based on context-oriented controlled automatic annotation.

Hatem, Muna Salman January 2009 (has links)
The Semantic Web is the vision of the future Web. Its aim is to enable machines to process Web documents in a way that makes it possible for computer software to "understand" the meaning of the document contents. Each document on the Semantic Web is to be enriched with metadata that express the semantics of its contents. Many infrastructures, technologies and standards have been developed and have proven their theoretical use for the Semantic Web, yet very few applications have been created, and most of the current ones were developed for research purposes. This project investigates the major factors restricting the widespread adoption of Semantic Web applications. We identify the two most important requirements for a successful implementation as the automatic production of semantically annotated documents, and the creation and maintenance of a semantics-based knowledge base. This research proposes a framework for Semantic Web implementation based on context-oriented controlled automatic annotation; for short, we call the framework the Semantic Web Implementation Framework (SWIF), and the system that implements it the Semantic Web Implementation System (SWIS). The proposed architecture provides a Semantic Web implementation for stand-alone websites that automatically annotates Web pages before they are uploaded to the intranet or Internet, and maintains persistent storage of Resource Description Framework (RDF) data for both the domain memory, denoted Control Knowledge, and the metadata of the Web site's pages. We believe that the presented implementation of the major parts of SWIS constitutes a system competitive with current state-of-the-art annotation tools and knowledge management systems, because it handles input documents in the context in which they are created, in addition to automatically learning and verifying knowledge using only the available computerized corporate databases.
In this work, we introduce the concept of Control Knowledge (CK), which represents the application's domain memory, and use it to verify the extracted knowledge. Learning is based on the number of occurrences of the same piece of information in different documents. We introduce the concept of Verifiability in the context of annotation by comparing the extracted text's meaning with the information in the CK and by using the proposed database table Verifiability_Tab. We use the linguistic concept of Thematic Role in investigating and identifying the correct meaning of words in text documents, which supports correct relation extraction. The verb lexicon used contains the argument structure of each verb together with the thematic structure of its arguments. We also introduce a new method to chunk conjoined statements and identify the missing subject of the produced clauses. We use the semantic class of verbs, which relates a list of verbs to a single property in the ontology, to disambiguate the verb in the input text and enable better information extraction and annotation. Consequently, we propose the following definition for the annotated document, sometimes called the "Intelligent Document": "The Intelligent Document is the document that clearly expresses its syntax and semantics for human use and software automation." This work introduces a promising improvement to the quality of the automatically generated annotated document and of the automatically extracted information in the knowledge base. Our approach to using Semantic Web technology opens new opportunities for diverse areas of application; e-learning applications, for example, can be greatly improved and become more effective.
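A toy sketch of verb-class-driven relation extraction in the spirit described above; the verb lexicon, semantic classes and ontology property names are hypothetical illustrations, not those of SWIS.

```python
# Sketch of mapping a parsed clause to an ontology triple via the
# semantic class of its verb. Classes and properties are made up.

VERB_CLASSES = {
    # semantic class: (verbs sharing it, ontology property)
    "employment": ({"works", "serves"}, "worksFor"),
    "location":   ({"lives", "resides"}, "livesIn"),
}

def extract_relation(subject, verb, obj):
    """Map a (subject, verb, object) clause to an ontology triple by
    looking up the verb's semantic class. Here the Agent and Theme
    thematic roles are assumed to align with subject and object."""
    for verbs, prop in VERB_CLASSES.values():
        if verb in verbs:
            return (subject, prop, obj)
    return None  # verb not covered by the lexicon

print(extract_relation("Alice", "resides", "Leeds"))
```

Because "lives" and "resides" share one class, either verb in the input text yields the same `livesIn` property, which is the disambiguation benefit the abstract describes.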
305

Psychosemantic study of the will of students : master's thesis

Киселева, Д. О., Kiseleva, D. O. January 2018 (has links)
The object of the study was the volitional sphere of personality. The subject of the study was the semantic fields of the concept of "will". The master's thesis consists of an introduction, three chapters, a conclusion, a list of literature (70 sources) and appendices, including forms of the applied techniques, the classifier of associative connections, and the scheme of volitional action. The thesis runs to 103 pages, containing 8 figures and 13 tables. The introduction establishes the relevance of the research problem and the state of work on it, formulates the purpose, objectives and main hypothesis of the research, defines its object and subject, and specifies the methods and the empirical base. The first and second chapters review the foreign and domestic literature on the topic of the study: the first chapter describes approaches to the study of will and methods of its investigation, and the second describes the psychosemantic approach in psychology in general and its application to the study of will in particular. The conclusions to these chapters summarize the theoretical material. The third chapter is devoted to the empirical part of the study. It describes the organization and methods of the research and the results obtained with each of the methodologies used: Osgood's semantic differential and an associative experiment. The conclusions to chapter 3 present the main results of the empirical study. The conclusion summarizes the results of the theoretical and empirical parts of the work, presents conclusions on the hypotheses put forward, and substantiates the practical significance of the study.
306

Novel processes for smart grid information exchange and knowledge representation using the IEC common information model

Hargreaves, Nigel January 2013 (has links)
The IEC Common Information Model (CIM) is of central importance in enabling smart grid interoperability. Its continual development aims to meet the needs of the smart grid for semantic understanding and knowledge representation for a widening domain of resources and processes. With smart grid evolution the importance of information and data management has become an increasingly pressing issue not only because far more data is being generated using modern sensing, control and measuring devices but also because information is now becoming recognised as the ‘integral component’ that facilitates the optimal flexibility required of the smart grid. This thesis looks at the impacts of CIM implementation upon the landscape of smart grid issues and presents research from within National Grid contributing to three key areas in support of further CIM deployment. Taking the issue of Enterprise Information Management first, an information management framework is presented for CIM deployment at National Grid. Following this the development and demonstration of a novel secure cloud computing platform to handle such information is described. Power system application (PSA) models of the grid are partial knowledge representations of a shared reality. To develop the completeness of our understanding of this reality it is necessary to combine these representations. The second research contribution reports on a novel methodology for a CIM-based model repository to align PSA representations and provide a knowledge resource for building utility business intelligence of the grid. The third contribution addresses the need for greater integration of information relating to energy storage, an essential aspect of smart energy management. It presents the strategic rationale for integrated energy modeling and a novel extension to the existing CIM standards for modeling grid-scale energy storage. 
Significantly, this work has already contributed to a larger body of work on modeling Distributed Energy Resources currently under development at the Electric Power Research Institute (EPRI) in the USA.
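To give a flavour of what such a model extension involves, the sketch below emits CIM-style RDF triples for a grid-scale storage class. The class and attribute names are purely illustrative inventions, not the actual IEC CIM extension proposed in the thesis.

```python
# Hypothetical sketch of describing an energy-storage resource with
# CIM-style RDF triples. The EnergyStorageUnit class and its
# attributes below are made-up names for illustration only.

CIM = "http://iec.ch/TC57/CIM#"

def describe_storage(mrid, capacity_mwh, rated_mw):
    """Return simple (subject, predicate, object) triples for one
    storage unit, keyed by its master resource identifier (mRID)."""
    subject = f"urn:uuid:{mrid}"
    return [
        (subject, "rdf:type", CIM + "EnergyStorageUnit"),
        (subject, CIM + "EnergyStorageUnit.energyCapacity", str(capacity_mwh)),
        (subject, CIM + "EnergyStorageUnit.ratedPower", str(rated_mw)),
    ]

triples = describe_storage("b1", capacity_mwh=10, rated_mw=2.5)
for t in triples:
    print(t)
```

Exchanging such triples between utility systems is what makes a shared information model the 'integral component' of interoperability described above.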
307

Ontological approach for database integration

Alalwan, Nasser Alwan January 2011 (has links)
Database integration is a research area that has gained a great deal of attention from researchers. Its goal is to represent the data from different database sources in one unified form. Achieving database integration requires overcoming two obstacles: the distribution of data, and its heterogeneity. The Web addresses the distribution problem; for heterogeneity, several approaches can be used, such as data warehouses and federated databases. The problem with these two approaches is their lack of semantics. Our approach therefore exploits Semantic Web methodology: the hybrid ontology method can be applied to solve the database integration problem. In this method, two of the required elements are available, the source (database) and the domain ontology, but the local ontology is missing; to ensure the success of the method, the local ontologies must be produced. Our approach obtains semantics from the logical model of the database to generate a local ontology, and then acquires validation and enhancement from the semantics obtained from the conceptual model of the database. The approach thus operates in a generation phase and a validation-enrichment phase. In the generation phase, we utilise reverse engineering techniques to capture the semantics hidden in the SQL definitions, reproduce the logical model of the database, and finally apply our transformation system to generate an ontology. In our transformation system, all the concepts of classes, relationships and axioms are generated. First, the process of class creation comprises many rules working together to produce classes; our rules succeed in solving problems such as fragmentation and hierarchy.
Our rules also eliminate the superfluous classes arising from multi-valued attribute relations, and handle neglected cases such as relationships with additional attributes; the final class-creation rule covers generic relation cases. The rules for relationships between concepts are generated while eliminating relationships between integrated concepts. Finally, several rules consider the relationship and attribute constraints, which are transformed into axioms in the ontological model. The formal rules of our approach are domain-independent, and the approach produces a generic ontology that is not restricted to a specific ontology language. The rules take into account the gap between the database model and the ontological model, since some database constructs have no equivalent in the ontological model. The second phase consists of validation and enrichment. The best way to validate the transformation result is to use the semantics obtained from the conceptual model of the database. In the validation phase, the domain expert captures missing or superfluous concepts (classes or relationships). In the enrichment phase, the generalisation method is applied to classes that share common attributes, and complex or composite attributes can be represented as classes. We implement the transformation system in a tool called SQL2OWL to demonstrate the correctness and functionality of our approach. The evaluation of our system showed the success of the proposed approach, using several techniques: first, a comparative study between the results produced by our approach and those of similar approaches; second, a weighting score system that specifies the criteria affecting the transformation system; and finally, a score scheme.
We assess the quality of the transformation system by applying a compliance measure, showing the strength of our approach compared to existing approaches. Finally, the measures of success our approach considers are system scalability and completeness.
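A minimal sketch of one table-to-class transformation rule of the kind described above; the rule set is a drastic simplification and the schema is a made-up example, not SQL2OWL's actual rules.

```python
def table_to_class(table, columns, foreign_keys):
    """Toy transformation rule: a table maps to a class, plain columns
    to datatype properties, and foreign-key columns to object
    properties pointing at the referenced table's class."""
    fk_cols = {col for col, _ in foreign_keys}
    return {
        "class": table.capitalize(),
        "datatype_properties": [c for c in columns if c not in fk_cols],
        "object_properties": [
            (col, ref.capitalize()) for col, ref in foreign_keys
        ],
    }

# Hypothetical relational schema: employee(id, name, dept_id -> department)
onto = table_to_class(
    "employee",
    columns=["id", "name", "dept_id"],
    foreign_keys=[("dept_id", "department")],
)
print(onto)
```

The real system layers many further rules on top of this (fragmentation, hierarchy, multi-valued attributes, generic relations), but the basic table/column/foreign-key mapping is the starting point.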
308

Semantically-enhanced image tagging system

Rahuma, Awatef January 2013 (has links)
In multimedia databases, data are images, audio, video, texts, etc. Research interest in these types of databases has increased in the last decade or so, especially with the advent of the Internet and the Semantic Web. Fundamental research issues range from unified data modelling to the retrieval of data items and the dynamic nature of updates. The thesis builds on findings in Semantic Web and retrieval techniques and explores novel tagging methods for identifying data items. Tagging systems, which enable users to add tags to Internet resources such as images, video and audio to make them more manageable, have become popular. Collaborative tagging is concerned with the relationship between people and resources. Most of these resources have metadata in machine-processable format and enable users to search with free-text keywords (so-called tags). This research references several tagging systems, e.g. Flickr, Delicious and MyWeb 2.0. The limitations of such techniques include polysemy (one word, several meanings), synonymy (different words, one meaning), different lexical forms (singular, plural and conjugated words) and misspellings or alternate spellings. The work presented in this thesis introduces a semantic characterization of Web resources that describes the structure and organization of tagging, aiming to extend existing multimedia query techniques using similarity measures to cater for collaborative tagging. In addition, we discuss the semantic difficulties of tagging systems, suggesting improvements in their accuracy. The scope of our work is as follows: (i) increase the accuracy and confidence of multimedia tagging systems; (ii) increase the similarity measures of images by integrating a variety of measures. To address the first shortcoming, we use WordNet as a semantic lexical ontology resource within a tagging system for the social sharing and retrieval of images.
For the second shortcoming, we combine similarity measures in different ways within the multimedia tagging system. Fundamental to our work is the novel information model that we have constructed for our computation. This is based on the fact that an image is a rich object that can be characterised and formulated in n dimensions, each of which contains valuable information that helps increase the accuracy of the search. For example, an image of a tree in a forest contains more information than an image of the same tree in a different environment. In this thesis we characterise a data item (an image) by a primary description followed by n secondary descriptions; as n increases, the accuracy of the search improves. We give various techniques to analyse data and its associated queries. To increase the accuracy of the tagging system we have performed experiments on many images using similarity measures and various techniques from VoI (Value of Information). The findings show that integrating similarity measures with VoI improves searches and guides a tagger in choosing the most appropriate tags.
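The effect of collapsing lexical variants and synonyms before comparing tag sets can be sketched as follows; the synonym map stands in for a WordNet-style resource and is purely illustrative, as is the choice of Jaccard similarity.

```python
# Toy sketch of tag normalisation plus set similarity. The synonym
# map below is a stand-in for a WordNet-like resource.

SYNONYMS = {"auto": "car", "automobile": "car", "cars": "car",
            "woods": "forest", "forests": "forest"}

def normalise(tags):
    """Lowercase, trim, and collapse each tag to a canonical form,
    mitigating synonymy and simple lexical variation."""
    return {SYNONYMS.get(t.lower().strip(), t.lower().strip())
            for t in tags}

def tag_similarity(tags_a, tags_b):
    """Jaccard similarity between normalised tag sets."""
    a, b = normalise(tags_a), normalise(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

score = tag_similarity(["Auto", "forest"], ["car", "woods", "sky"])
print(score)
```

Without normalisation the two tag sets above would share nothing; with it, "Auto"/"car" and "forest"/"woods" match, so the score reflects the images' actual relatedness.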
309

Methods for measuring semantic similarity of texts

Gaona, Miguel Angel Rios January 2014 (has links)
Measuring semantic similarity is a task needed in many Natural Language Processing (NLP) applications. For example, in Machine Translation evaluation, semantic similarity is used to assess the quality of the machine translation output by measuring the degree of equivalence between a reference translation and the machine translation output. The problem of semantic similarity (Corley and Mihalcea, 2005) is defined as measuring and recognising semantic relations between two texts. Semantic similarity covers different types of semantic relations, mainly bidirectional and directional. This thesis proposes new methods to address the limitations of existing work on both types of semantic relations. Recognising Textual Entailment (RTE) is a directional relation where a text T entails the hypothesis H (entailment pair) if the meaning of H can be inferred from the meaning of T (Dagan and Glickman, 2005; Dagan et al., 2013). Most RTE methods rely on machine learning algorithms. de Marneffe et al. (2006) propose a multi-stage architecture where a first stage determines an alignment between the T-H pairs, followed by an entailment decision stage. A limitation of such approaches is that instead of recognising a non-entailment, an alignment that fits an optimisation criterion will be returned, but the alignment by itself is a poor predictor of non-entailment. We propose an RTE method following a multi-stage architecture, where both stages are based on semantic representations. Furthermore, instead of using simple similarity metrics to predict the entailment decision, we use a Markov Logic Network (MLN). The MLN is based on rich relational features extracted from the output of the predicate-argument alignment structures between T-H pairs. This MLN learns to reward pairs with similar predicates and similar arguments, and to penalise pairs otherwise. The proposed methods show promising results. A source of errors was found to be the alignment step, which has low coverage.
However, we show that when an alignment is found, the relational features improve the final entailment decision. The task of Semantic Textual Similarity (STS) (Agirre et al., 2012) is defined as measuring the degree of bidirectional semantic equivalence between a pair of texts. The STS evaluation campaigns use datasets consisting of pairs of texts from NLP tasks such as paraphrasing and Machine Translation evaluation. Methods for STS are commonly based on computing similarity metrics between the pair of sentences, where the similarity scores are used as features to train regression algorithms. Existing methods for STS achieve high performance on certain tasks but poor results on others, particularly on unknown (surprise) tasks. Our solution to alleviate this unbalanced performance is to model STS in the context of Multi-task Learning using Gaussian Processes (MTL-GP) (Álvarez et al., 2012) and state-of-the-art STS features (Šarić et al., 2012). We show that the MTL-GP outperforms previous work on the same datasets.
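A typical lexical feature fed to such a regression model is bag-of-words cosine similarity, sketched here as a self-contained example; it is one generic STS feature, not the thesis's actual feature set.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Bag-of-words cosine similarity between two sentences, one of
    the simple lexical features commonly used in STS systems."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

s = cosine_similarity("a man is playing a guitar",
                      "a man plays the guitar")
print(round(s, 3))
```

In practice many such scores (lexical, syntactic, knowledge-based) are stacked into a feature vector and regressed against the human similarity judgments.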
310

Exploring nature of the structured data in GP electronic patient records

Ranandeh Kalankesh, Leila January 2011 (has links)
No description available.
