241

An Ontology-based Retrieval System Using Semantic Indexing

Kara, Soner 01 July 2010 (has links) (PDF)
In this thesis, we present an ontology-based information extraction and retrieval system and its application to the soccer domain. In general, we deal with three issues in semantic search, namely usability, scalability and retrieval performance. We propose a keyword-based semantic retrieval approach. The performance of the system is improved considerably using domain-specific information extraction, inference and rules. Scalability is achieved by adapting a semantic indexing approach. The system is implemented using state-of-the-art Semantic Web technologies, and its performance is evaluated against traditional systems as well as query expansion methods. Furthermore, a detailed evaluation is provided to observe the performance gain due to domain-specific information extraction and inference. Finally, we show how we use semantic indexing to solve simple structural ambiguities.
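
As an illustration of the retrieval idea sketched in this abstract, a minimal Python example follows; the soccer ontology, keyword lexicon and index contents are invented for the example and are not taken from the thesis:

```python
# Toy ontology: concept -> set of subconcepts (one inference step: subsumption).
SUBCLASSES = {
    "Player": {"Goalkeeper", "Defender", "Striker"},
    "Event": {"Goal", "Foul"},
}

# Lexicon: surface keyword -> ontology concept.
LEXICON = {"keeper": "Goalkeeper", "goal": "Goal", "player": "Player"}

# Semantic index: concept -> documents annotated with that concept.
INDEX = {
    "Goalkeeper": {"doc1"},
    "Striker": {"doc2", "doc3"},
    "Goal": {"doc3"},
}

def expand(concept):
    """Return the concept plus everything it subsumes (simple inference)."""
    out = {concept}
    for sub in SUBCLASSES.get(concept, ()):
        out |= expand(sub)
    return out

def retrieve(query):
    """Map query keywords to concepts, expand them, and look up the index."""
    hits = set()
    for word in query.lower().split():
        concept = LEXICON.get(word)
        if concept:
            for c in expand(concept):
                hits |= INDEX.get(c, set())
    return sorted(hits)

print(retrieve("keeper goal"))  # -> ['doc1', 'doc3']
print(retrieve("player"))       # subsumption inference -> ['doc1', 'doc2', 'doc3']
```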
242

Exploiting Information Extraction Techniques For Automatic Semantic Annotation And Retrieval Of News Videos In Turkish

Kucuk, Dilek 01 February 2011 (has links) (PDF)
Information extraction (IE) is known to be an effective technique for the automatic semantic indexing of news texts. In this study, we propose a text-based, fully automated system for the semantic annotation and retrieval of news videos in Turkish which exploits several IE techniques on the video texts. The IE techniques employed by the system include named entity recognition, automatic hyperlinking, person entity extraction with coreference resolution, and event extraction. The system utilizes the outputs of the components implementing these IE techniques as the semantic annotations for the underlying news video archives. Apart from the IE components, the proposed system comprises a news video database together with components for news story segmentation, sliding text recognition, and semantic video retrieval. We also propose a semi-automatic counterpart of the system in which the only manual intervention takes place during text extraction. Both systems are run on genuine video data sets consisting of videos broadcast by the Turkish Radio and Television Corporation. The current study is significant as it proposes the first fully automated system to facilitate semantic annotation and retrieval of news videos in Turkish; moreover, the proposed system and its semi-automated counterpart are quite generic and could therefore be customized to build similar systems for video archives in other languages as well. IE research on Turkish texts is known to be rare, and in the course of this study we have proposed and implemented novel techniques for several IE tasks on Turkish texts. As an application example, we demonstrate the use of the implemented IE components to facilitate multilingual video retrieval.
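
To make the annotation pipeline concrete, here is a hedged Python sketch of how outputs from several IE components could be merged into segment-level annotations and queried; the component implementations are crude stand-ins, not the thesis's actual techniques:

```python
def named_entities(text):
    # Stand-in NER: treat capitalized tokens as entity candidates.
    return {tok for tok in text.split() if tok[:1].isupper()}

def events(text):
    # Stand-in event extraction via a tiny trigger-word list.
    triggers = {"met", "visited", "announced"}
    return {tok for tok in text.lower().split() if tok in triggers}

def annotate(segments):
    """Attach the union of IE outputs to each news story segment."""
    return {
        seg_id: named_entities(text) | events(text)
        for seg_id, text in segments.items()
    }

def search(annotations, term):
    """Semantic retrieval: return segments annotated with the term."""
    return [seg for seg, tags in annotations.items() if term in tags]

segments = {"news-01": "Ankara announced new plans", "news-02": "The team met fans"}
ann = annotate(segments)
print(search(ann, "announced"))  # -> ['news-01']
```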
243

Neuropsychology of Semantic Memory: Theories, Models, and Tests

Laurila, Linda January 2007 (has links)
Semantic memory is part of the long-term memory system, and there are several theories concerning this type of memory, some of which are described in this essay. There are also several types of neuropsychological semantic memory deficits. For example, test results have shown that patients tend to have more difficulty naming living than nonliving things, and one probable explanation is that living things depend more on sensory than on functional features. Description of concrete concepts is a new test of semantic memory in which cueing is used, both to capture the maximum performance of patients and to give insight into the access-versus-storage problem. The theoretical ideas and empirical results relating to this new test are described in detail. Furthermore, other commonly used tests of semantic memory are briefly described. In conclusion, semantic memory is a complex cognitive system that needs to be studied further.
244

Semantos : a semantically smart information query language

Crous, Theodorus. January 2008 (has links)
Thesis (M.Sc.(Computer Science))--University of Pretoria, 2008. / Includes bibliographical references (leaves 99-116).
245

An ontology based approach towards a universal description framework for home networks

Docherty, Liam S. January 2009 (has links)
Current home networks typically involve two or more machines sharing network resources. The vision for the home network has grown from a simple computer network to everyday appliances embedded with network capabilities. In this environment, devices and services within the home can interoperate regardless of protocol or platform. Network clients can discover required resources by performing network discovery over component descriptions. Common approaches to this discovery process involve simple matching of keywords or attribute/value pairings. Interest emerging from the Semantic Web community has led to ontology languages being applied to network domains, providing a logical and semantically rich approach to both describing and discovering network components. In much of the existing work within this domain, developers have focused on defining new description frameworks in isolation from existing protocol frameworks and vocabularies. This work proposes an ontology-based description framework which takes the ontology approach a step further: existing description frameworks are incorporated into the ontology-based framework, allowing discovery mechanisms to cover multiple existing domains. In this manner, existing protocols and networking approaches can participate in semantically rich discovery processes. The framework also includes a system architecture developed to reconcile existing home network solutions with the ontology-based discovery process. This work also describes an implementation of the approach, deployed within a home-network environment. The implementation involves existing home networking frameworks, protocols and components, allowing the claims of this work to be examined and evaluated from a 'real-world' perspective.
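
A minimal sketch of the kind of ontology-backed discovery this abstract describes, assuming an illustrative class hierarchy: it shows how a request for a general concept can match devices advertised under more specific, protocol-native types, which plain keyword or attribute/value matching would miss:

```python
# Illustrative hierarchy: specific type -> more general type.
SUPERCLASS = {
    "dlna:TV": "upnp:MediaRenderer",
    "upnp:MediaRenderer": "Renderer",
    "Renderer": "Device",
    "Printer": "Device",
}

# Devices advertise their protocol-native types.
DEVICES = {
    "living-room-tv": "dlna:TV",
    "office-printer": "Printer",
}

def ancestors(cls):
    """Walk the subclass chain up to the root."""
    seen = {cls}
    while cls in SUPERCLASS:
        cls = SUPERCLASS[cls]
        seen.add(cls)
    return seen

def discover(wanted):
    """Match devices whose advertised type is subsumed by the wanted concept."""
    return [name for name, cls in DEVICES.items() if wanted in ancestors(cls)]

print(discover("Renderer"))  # -> ['living-room-tv']
print(discover("dlna:TV"))   # exact protocol-level matching still works
```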
246

Using Semantic Web technologies for recommender systems

Κάββουρας, Δημήτριος 01 October 2014 (has links)
The aim of this thesis is the study and application of Semantic Web technologies for recommender systems, operating over content drawn from the Web. As part of this work, a web application was designed and implemented that recommends news articles, taking into account each user's profile/history. Because of the sheer volume of information flooding the Web, users often find it difficult to pick out the information that is genuinely relevant to their interests. Moreover, users have widely differing interests and preferences that can be taken into account to filter or rank the results of a query so that the outcome satisfies each user's individual needs. This class of personalization systems is called recommender systems. Recommender systems exploit the particular characteristics of users, using special algorithms, to help them pinpoint more accurately the information or services that interest them most or relate to their needs. The algorithms take as input the characteristics and preferences of users, the relations between users, or the attributes of the items to be recommended, and compute the user's estimated interest in each item; the items are then ranked or filtered according to that estimated interest. Despite the extensive research activity on recommender systems, significant problems remain that have not yet been fully solved and require further research. For example, typical approaches are domain-dependent: their models are built from information collected within a specific domain and cannot be extended or integrated into other systems. In addition, the need for greater flexibility, in the form of recommendations derived from queries or suggestions oriented towards groups of users, as well as the consideration of contextual features during recommendation generation, are requirements not met by most systems. In this thesis we present a recommender system that uses Semantic Web technologies to describe and relate news items and user preferences in order to produce improved recommendations. News item descriptions and user profiles are built from concepts defined in a set of domain ontologies. Based on the similarities between news item descriptions and user profiles, as well as the semantic relations between concepts, the system supports content-based recommendation models centered on the individual user and allows rule-based inference in support of personalized recommendations. In particular, we evaluate the model that personalizes the order in which news articles are presented to the user, taking into account the profile/history of short-term and long-term interests.
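
As a rough illustration of the content-based ranking idea, the following Python sketch represents the user profile and news items as weighted vectors over ontology concepts and orders items by cosine similarity; the concepts and weights are invented for the example, not taken from the thesis:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse concept-weight vectors."""
    dot = sum(w * b.get(c, 0.0) for c, w in a.items())
    na = sqrt(sum(w * w for w in a.values()))
    nb = sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# User profile built from short- and long-term interests (concept -> weight).
profile = {"Politics": 0.8, "Economy": 0.5, "Sports": 0.1}

# News items annotated with ontology concepts.
items = {
    "article-1": {"Politics": 1.0, "Economy": 0.3},
    "article-2": {"Sports": 1.0},
    "article-3": {"Economy": 0.9, "Technology": 0.4},
}

# Personalize the presentation order by estimated interest.
ranked = sorted(items, key=lambda i: cosine(profile, items[i]), reverse=True)
print(ranked)  # -> ['article-1', 'article-3', 'article-2']
```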
247

Lietuvių kalbos semantinių požymių lentelės valdymo programinė įranga / Software for managing Lithuanian-language semantic attribute tables

Boiko, Irena 11 June 2004 (has links)
This paper covers one stage of the computerization of semantic analysis: the development of software able to improve the quality of automated translation. The resulting software, "Lexes", is a browser and editor for Lithuanian words and the semantic attributes related to them.
248

Semantinis teksto transformavimas ir jo taikymas kompiuterinio vertimo sistemose / Semantic text transformation and its application in machine translation systems

Pavlovas, Andrijanas 04 June 2006 (has links)
Today Lithuania has a real need for an automatic translation system that can simplify the translation of English into Lithuanian. A prerequisite for this is a semantic text transformation system, and creating one is the main purpose of this work. Semantic transformation is a process that simplifies sentence structure while preserving the connections between the different parts of the sentence, so that the principal meaning of the sentence is not lost. In this project, several transformation directions (realized as functions) were selected, for example shortening sentence length or removing modal verbs from a sentence, since the described rules do not require this type of verb. The project implements these transformations.
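
One of the transformation directions mentioned above, removing modal verbs, could look like the following illustrative Python sketch (the modal list and example sentence are assumptions, not the thesis's rules):

```python
MODALS = {"can", "could", "may", "might", "must", "should", "would"}

def drop_modals(sentence):
    """Remove modal verbs from a sentence, preserving the other tokens."""
    return " ".join(tok for tok in sentence.split() if tok.lower() not in MODALS)

print(drop_modals("The system can translate text"))
# -> 'The system translate text'  (later rules would repair verb agreement)
```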
249

Modeling Clinical Pathways as Business Process Models Using Business Process Modeling Notation

Hashemian, Nima 05 March 2012 (has links)
We take a healthcare knowledge management approach to represent Clinical Pathways (CPs) as workflows. We have developed a semantic representation of CPs in terms of a CP ontology that outlines the different clinical processes, their properties, constraints and relationships, and is able to computerize a range of CPs. To model business workflows we use the graphical Business Process Modeling Notation (BPMN) modeling language, which generates a BPMN ontology. To represent a CP as a BPMN workflow, we have developed a semantic interoperability (mapping ontology) framework between the CP ontology and the BPMN ontology. The mapping ontology allows the alignment of relations between the two ontologies and ensures that a clinical process defined in the CP ontology is mapped to a standard BPMN workflow element. We execute our BPMN-based CP in the Lombardi workflow engine, whereby users can view the execution of the CP and make the necessary adjustments.
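
A minimal, illustrative sketch of the mapping-ontology idea: each clinical-process class from a CP ontology is aligned with a standard BPMN element so that a pathway expressed in CP terms can be emitted as a BPMN workflow. The class names and the example pathway are invented for the sketch, not taken from the thesis:

```python
# Alignment between CP-ontology classes and standard BPMN elements.
CP_TO_BPMN = {
    "PathwayStart": "StartEvent",
    "ClinicalAssessment": "UserTask",
    "LabOrder": "ServiceTask",
    "DecisionPoint": "ExclusiveGateway",
    "PathwayEnd": "EndEvent",
}

def to_bpmn(pathway):
    """Translate an ordered list of CP steps into (name, BPMN element) pairs."""
    return [(name, CP_TO_BPMN[cp_class]) for name, cp_class in pathway]

pathway = [
    ("admit", "PathwayStart"),
    ("triage", "ClinicalAssessment"),
    ("order bloods", "LabOrder"),
    ("risk?", "DecisionPoint"),
    ("discharge", "PathwayEnd"),
]
for name, element in to_bpmn(pathway):
    print(f"{element}: {name}")
```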
250

Linked Data Quality Assessment and its Application to Societal Progress Measurement

Zaveri, Amrapali 19 May 2015 (has links) (PDF)
In recent years, the Linked Data (LD) paradigm has emerged as a simple mechanism for employing the Web as a medium for data and knowledge integration where both documents and data are linked. Moreover, the semantics and structure of the underlying data are kept intact, making this the Semantic Web. LD essentially entails a set of best practices for publishing and connecting structured data on the Web, which allows publishing and exchanging information in an interoperable and reusable fashion. Many different communities on the Internet, such as geographic, media, life sciences and government, have already adopted these LD principles. This is confirmed by the dramatically growing Linked Data Web, where currently more than 50 billion facts are represented. With the emergence of the Web of Linked Data, several use cases become possible due to the rich and disparate data integrated into one global information space. Linked Data, in these cases, not only assists in building mashups by interlinking heterogeneous and dispersed data from multiple sources but also empowers the uncovering of meaningful and impactful relationships. These discoveries have paved the way for scientists to explore the existing data and uncover meaningful outcomes that they might not have been aware of previously. In all these use cases utilizing LD, one crippling problem is the underlying data quality. Incomplete, inconsistent or inaccurate data affects the end results gravely, making them unreliable. Data quality is commonly conceived as fitness for use, be it for a certain application or use case. Datasets that contain quality problems may still be useful for certain applications, depending on the use case at hand. Thus, LD consumption has to deal with the problem of getting the data into a state in which it can be exploited for real use cases. Insufficient data quality can be caused either by the LD publication process or can be intrinsic to the data source itself. A key challenge is to assess the quality of datasets published on the Web and to make this quality information explicit. Assessing data quality is a particular challenge in LD because the underlying data stems from a set of multiple, autonomous and evolving data sources. Moreover, the dynamic nature of LD makes assessing quality crucial for measuring how accurately the data represents the real world. On the document Web, data quality can only be indirectly or vaguely defined, but there is a requirement for more concrete and measurable data quality metrics for LD. Such data quality metrics include correctness of facts with respect to the real world, adequacy of semantic representation, quality of interlinks, interoperability, timeliness, and consistency with regard to implicit information. Even though data quality is an important concept in LD, few methodologies have been proposed to assess the quality of these datasets. Thus, in this thesis, we first unify 18 data quality dimensions and provide a total of 69 metrics for the assessment of LD. The first methodology involves the employment of LD experts for the assessment. This assessment is performed with the help of the TripleCheckMate tool, which was developed specifically to assist LD experts in assessing the quality of a dataset, in this case DBpedia. The second methodology is a semi-automatic process, in which the first phase involves the detection of common quality problems through the automatic creation of an extended schema for DBpedia, and the second phase involves the manual verification of the generated schema axioms. Thereafter, we employ the wisdom of the crowd, i.e., workers on online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk), to assess the quality of DBpedia. We then compare the two approaches (the previous assessment by LD experts and the assessment by MTurk workers in this study) in order to measure the feasibility of each type of user-driven data quality assessment methodology. Additionally, we evaluate another semi-automated methodology for LD quality assessment, which also involves human judgement. In this semi-automated methodology, selected metrics are formally defined and implemented as part of a tool, namely R2RLint. The user is provided not only with the results of the assessment but also with the specific entities that cause the errors, which helps users understand the quality issues so that they can fix them. Finally, we consider a domain-specific use case that consumes LD and depends on data quality. In particular, we identify four LD sources, assess their quality using the R2RLint tool and then utilize them in building the Health Economic Research (HER) Observatory. We show the advantages of this semi-automated assessment over the other types of quality assessment methodologies discussed earlier. The Observatory aims at evaluating the impact of research development on the economic and healthcare performance of each country per year. We illustrate the usefulness of LD in this use case and the importance of quality assessment for any data analysis.
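
As a hedged illustration of what a single LD quality metric might look like, the following Python sketch computes property completeness (the share of subjects carrying a property required by the use case) over a toy set of triples; the triples and the required property are invented, and the thesis itself defines 69 metrics across 18 dimensions:

```python
# Toy RDF-like triples: (subject, predicate, object).
triples = [
    ("ex:Germany", "ex:population", "83000000"),
    ("ex:Germany", "ex:capital", "ex:Berlin"),
    ("ex:France", "ex:population", "67000000"),
    ("ex:Italy", "ex:capital", "ex:Rome"),
]

def completeness(triples, required_prop):
    """Fraction of distinct subjects with at least one value for required_prop."""
    subjects = {s for s, _, _ in triples}
    covered = {s for s, p, _ in triples if p == required_prop}
    return len(covered) / len(subjects) if subjects else 1.0

score = completeness(triples, "ex:population")
print(f"population completeness: {score:.2f}")  # -> 0.67 (ex:Italy lacks it)
```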
