21

Interleaved Prefetching Ray Traversal on CPUs

Meyer, Viktor January 2021
Ray tracing is used in computer graphics to generate images. Rendering an image with ray tracing involves testing large numbers of rays for intersection against scene geometry. Testing for ray-geometry intersection is more formally known as the Ray Shooting Problem (RSP) and has broad applications across multiple communities. Hierarchical acceleration structures are frequently employed to index geometry and increase processing speed, but such structures make it almost impossible for central processing units to predict memory access and branching patterns. This project focuses on the Bounding Volume Hierarchy (BVH) structure and on improving its performance when querying large batches of first-hit ray-geometry intersections. The core contribution is an Interleaved Prefetching Ray Traversal (IPRT) algorithm that addresses both the memory and the branching issues. Five standardized test scenarios of varying geometric complexity provide the evaluation data. The experimental evaluation suggests that for incoherent rays, IPRT is 6.1-41.8% faster than Stackless Traversal; for fully coherent rays, however, it is 68.8-149.1% slower. These results suggest that for ray tracing workloads exhibiting low coherence, the IPRT algorithm is likely to outperform Stackless Traversal. A microarchitectural analysis confirms previous research: memory accesses and branching behavior are critical for performance. Surprisingly, addressing either component in isolation yields no significant performance improvement; it is essential to address the two simultaneously, as the IPRT algorithm does.
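The abstract only names the IPRT idea; the following is a minimal sketch of the general pattern (round-robin interleaving of several rays' traversals with software prefetching of the next node), not the thesis's implementation. The node layout, the per-ray stack, and all names below are our assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>
#include <xmmintrin.h>  // _mm_prefetch (x86; portable code would use __builtin_prefetch)

// Hypothetical, simplified data layout; the thesis's actual structures are not
// given in the abstract.
struct BVHNode {
    float lo[3], hi[3];   // axis-aligned bounding box
    int32_t left, right;  // child indices; left < 0 marks a leaf
};

struct RayState {
    float org[3], invDir[3];
    std::vector<int32_t> stack;  // per-ray traversal stack, seeded with the root
    bool done = false;
};

// Standard slab test of the ray against the node's bounding box.
static bool hitAABB(const BVHNode& n, const RayState& r) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (n.lo[a] - r.org[a]) * r.invDir[a];
        float t1 = (n.hi[a] - r.org[a]) * r.invDir[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Round-robin over a batch of rays: advance each ray by one node, prefetch the
// node that ray will pop on its next turn, then switch to the next ray so the
// prefetch latency overlaps with useful work on the other rays.
void traverseInterleaved(const std::vector<BVHNode>& nodes,
                         std::vector<RayState>& rays) {
    std::size_t active = rays.size();
    while (active > 0) {
        for (RayState& r : rays) {
            if (r.done) continue;
            if (r.stack.empty()) { r.done = true; --active; continue; }
            const int32_t idx = r.stack.back();
            r.stack.pop_back();
            const BVHNode& n = nodes[idx];
            if (!hitAABB(n, r)) continue;
            if (n.left < 0) continue;  // leaf: primitive intersection would go here
            r.stack.push_back(n.right);
            r.stack.push_back(n.left);
            _mm_prefetch(reinterpret_cast<const char*>(&nodes[r.stack.back()]),
                         _MM_HINT_T0);
        }
    }
}
```

The point of the round-robin is that the prefetch issued for one ray overlaps with the box tests and branching of the others, which is consistent with the reported results: the technique pays off for incoherent rays, whose node accesses share few cache lines, and adds overhead for coherent ones.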
22

From Horn-SRIQ to Datalog: A Data-Independent Transformation that Preserves Assertion Entailment: Extended Version

Carral, David, González, Larry, Koopmann, Patrick 20 June 2022
Ontology-based access to large data sets has recently gained a lot of attention. To access data efficiently, one approach is to rewrite the ontology into Datalog and then use powerful Datalog engines to compute implicit entailments. Existing rewriting techniques support Description Logics (DLs) from ELH to Horn-SHIQ. We go one step further and present such a data-independent rewriting technique for Horn-SRIQ⊓, the extension of Horn-SHIQ that supports role chain axioms, an expressive feature prominently used in many real-world ontologies. We evaluated our rewriting technique on a large, well-known corpus of ontologies. Our experiments show that the resulting rewritings are of moderate size, and that our approach is more efficient than state-of-the-art DL reasoners when reasoning with data-intensive ontologies. / This is an extended version of the article to appear in the proceedings of AAAI 2019.
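To illustrate the general shape of axiom-to-rule rewriting (a hedged example of ours, not taken from the paper; the transformation for full Horn-SRIQ⊓ is substantially more involved, since right-hand-side existentials cannot simply be turned into rules), a simple Horn concept inclusion and a role chain axiom correspond directly to Datalog rules:

∃hasParent.Person ⊑ Person   ⇝   Person(x) ← hasParent(x, y) ∧ Person(y)
hasParent ∘ hasParent ⊑ hasGrandparent   ⇝   hasGrandparent(x, z) ← hasParent(x, y) ∧ hasParent(y, z)

The second axiom is an instance of the role chain feature that distinguishes Horn-SRIQ⊓ from Horn-SHIQ.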
23

Temporal Query Answering in EL

Borgwardt, Stefan, Thost, Veronika 20 June 2022
Context-aware systems use data about their environment for adaptation at runtime, e.g., for optimization of power consumption or user experience. Ontology-based data access (OBDA) can be used to support the interpretation of the usually large amounts of data. OBDA augments query answering in databases by dropping the closed-world assumption (i.e., the data is no longer assumed to be complete) and by including domain knowledge provided by an ontology. We focus on a recently proposed temporalized query language that allows conjunctive queries to be combined with the operators of the well-known propositional temporal logic LTL. In particular, we investigate temporalized OBDA w.r.t. ontologies in the DL EL, which allows for efficient reasoning and has been successfully applied in practice. We study both the data and the combined complexity of the query entailment problem.
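As a hedged illustration of such a temporalized query (our example, not the paper's), LTL operators apply on top of conjunctive queries, e.g., asking for patients with a past hypertension finding who currently show a high pulse:

◇⁻(∃y. finding(x, y) ∧ Hypertension(y)) ∧ ∃z. finding(x, z) ∧ HighPulse(z)

Here ◇⁻ is the LTL "sometime in the past" operator, the conjuncts are ordinary conjunctive queries, and an EL ontology can contribute entailments, such as classifying a specific diagnosis under Hypertension.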
24

On Implementing Temporal Query Answering in DL-Lite

Thost, Veronika, Holste, Jan, Özçep, Özgür 20 June 2022
Ontology-based data access augments classical query answering over fact bases by adopting the open-world assumption and by including domain knowledge provided by an ontology. We implemented temporal query answering w.r.t. ontologies formulated in the Description Logic DL-Lite. Focusing on temporal conjunctive queries (TCQs), which combine conjunctive queries via the operators of propositional linear temporal logic, we consider three approaches for answering them: an iterative algorithm that considers all available data; a window-based algorithm; and a rewriting approach, which translates the TCQs to be answered into SQL queries. Since the relevant ontological knowledge is already encoded into the latter queries, they can be answered by a standard database system. Our evaluation shows in particular that implementations of both the iterative and the window-based algorithm answer TCQs within a few milliseconds, and that the former achieves constant performance even as data grows over time.
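The window-based idea can be sketched compactly (a hedged C++ illustration under our own assumptions, not the paper's implementation): for a past operator whose relevant scope is bounded by a window of k time points, it suffices to retain the last k answer sets instead of the full history.

```cpp
#include <cstddef>
#include <deque>
#include <set>
#include <string>

// Window-based evaluation sketch (assumed, not the paper's code): answers to a
// subquery are kept only for the last k time points, and "the subquery held
// sometime within the last k steps" is the union over the window.
class WindowedDiamondPast {
    std::deque<std::set<std::string>> window_;  // answer sets, oldest first
    std::size_t k_;
public:
    explicit WindowedDiamondPast(std::size_t k) : k_(k) {}

    // Feed the subquery's answers at the newest time point; return the answers
    // to "sometime in the last k time points".
    std::set<std::string> step(const std::set<std::string>& answersNow) {
        window_.push_back(answersNow);
        if (window_.size() > k_) window_.pop_front();  // forget old data
        std::set<std::string> result;
        for (const auto& s : window_) result.insert(s.begin(), s.end());
        return result;
    }
};
```

Keeping only the window is what makes memory use independent of how long the system has been running, in the spirit of the constant-performance result reported above.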
25

Temporal Query Answering w.r.t. DL-Lite-Ontologies

Borgwardt, Stefan, Lippmann, Marcel, Thost, Veronika 20 June 2022
Ontology-based data access (OBDA) generalizes query answering in relational databases. It allows a database to be queried in the language of an ontology, abstracting from the actual relations of the database. For ontologies formulated in Description Logics of the DL-Lite family, OBDA can be realized by rewriting the query into a classical first-order query, e.g., an SQL query, by compiling the information of the ontology into the query. The query is then answered using classical database techniques. In this report, we consider a temporal version of OBDA. We propose a temporal query language that combines a linear temporal logic with queries over DL-Lite_core ontologies. This language is well-suited for expressing temporal properties of dynamical systems and is useful in context-aware applications that need to detect specific situations. Using a first-order rewriting approach, we transform our temporal queries into queries over a temporal database. We then present three approaches to answering the resulting queries, each with different advantages and drawbacks. / This revised version proves that the presented algorithm achieves a bounded history encoding.
26

Reasoning with Temporal Properties over Axioms of DL-Lite

Borgwardt, Stefan, Lippmann, Marcel, Thost, Veronika 20 June 2022
Recently, much research has combined description logics (DLs) of the DL-Lite family with temporal formalisms. Such logics have been proposed for situation recognition and temporalized ontology-based data access. In this report, we consider DL-Lite-LTL, in which axioms formulated in a member of the DL-Lite family are combined using the operators of propositional linear-time temporal logic (LTL). We consider the satisfiability problem of this logic in the presence of so-called rigid symbols, whose interpretation does not change over time. In contrast to more expressive temporalized DLs, the computational complexity of this problem is the same as for LTL, even w.r.t. rigid symbols.
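As a hedged illustration (ours, not the report's), in DL-Lite-LTL whole axioms take the place of LTL's propositional variables:

□(Manager ⊑ Employee) ∧ ◇(Employee ⊑ ∃worksFor)

This formula is satisfiable iff there is a sequence of interpretations in which managers are employees at every time point and, at some time point, every employee works for something; declaring a symbol such as Employee rigid additionally forces its interpretation to be identical at all time points.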
27

On the Complexity of Temporal Query Answering

Baader, Franz, Borgwardt, Stefan, Lippmann, Marcel 20 June 2022
Ontology-based data access (OBDA) generalizes query answering in databases towards deduction since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as ontology language we use the prototypical expressive DL ALC. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem.
28

Towards more automation in building mediator systems in the semantic web context: a description logic application

Niang, Cheikh Ahmed Tidiane 05 July 2013
This thesis is set in a research effort that aims to bring more automation to the building of mediator-based data integration systems in the semantic web context. The mediator approach is a conceptual data integration architecture that involves combining data residing in different sources and providing users with a unified view of these data. The problem of designing effective data integration solutions has been addressed by several research efforts, and well-known data integration projects were developed during the 1990s. However, the building process of these systems relies so heavily on human intervention that it is difficult to deploy them in many situations. Moreover, faced with the diversity and growth of available information sources, the ease and speed of information access are new challenges. Our proposals are based on the models and technologies of the semantic web. The semantic web is recognized as a generalization of the current web that enables resources to be found, combined, and shared, not only between humans but also between machines; it provides a promising path toward automating the integration process. The possibilities it offers rest, on the one hand, on languages and an infrastructure aimed at enriching the web with "semantic" information and, on the other hand, on collaborative practices that allow the production of relevant and reusable ontological resources.
29

A system for the implementation and graphical representation of hotlink assignment algorithms on the World Wide Web

Τριανταφυλλίδης, Γρηγόριος 02 September 2008
The World Wide Web has become established as the most popular medium for information retrieval. As expected, the older it gets, the more information it contains, and the number of web sites that grow in an uncontrolled fashion, offering poor access to the very information they aim to provide, is constantly increasing. In recent years this problem has been addressed by the development of several hotlink assignment algorithms for web sites. The main idea behind these algorithms is to identify the most popular, or most likely to be accessed, pieces of information and to provide better access to them by assigning shortcut links (hotlinks) to the web pages containing them. These algorithms are usually applied not to actual representations of web sites but to their corresponding directed acyclic graphs (DAGs). However, a web site in its true form is not a DAG, since hundreds of links may point to a single page. Hence, there is a gap between the theoretical determination of a set of hotlinks and the application of this set to a real web site. In this work we first address the issue of recording and persisting the exact map of a web site with its full connectivity, which can be considered a first step towards assigning hotlinks to real web sites. We achieve this through the specification and implementation of a web crawler whose functionality is suited to our specific needs. We then propose an administrative tool, the 'Hotlink Visualizer', which persists as tabular data all the information needed to capture a web site's real map, visualizes the outcome, and implements hotlink additions by automatically inserting the generated hotlinks into the pages of the site. We thus gain the ability to maintain different forms and versions of the originally parsed web site, as formed by assigning different sets of hotlinks to it.
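The abstract does not spell out the assignment algorithms the tool works with; as a generic, hedged sketch of the greedy idea behind many of them (all names ours), one can pick the hotlink from the home page that maximizes the expected number of clicks saved, given page access probabilities:

```cpp
#include <utility>
#include <vector>

// Generic greedy hotlink assignment on a site tree (illustrative only; the
// thesis's tool visualizes existing algorithms whose details are not given in
// the abstract). Node 0 is the home page.
struct Node {
    std::vector<int> children;
    double weight = 0.0;  // access probability of this page
};

// Total access probability of the subtree rooted at v.
static double subtreeWeight(const std::vector<Node>& t, int v) {
    double w = t[v].weight;
    for (int c : t[v].children) w += subtreeWeight(t, c);
    return w;
}

// Expected-cost gain of a hotlink home -> v: every access into v's subtree
// saves (depth(v) - 1) link traversals, weighted by the subtree's probability.
int bestHotlinkFromHome(const std::vector<Node>& t) {
    int best = -1;
    double bestGain = 0.0;
    std::vector<std::pair<int, int>> stack{{0, 0}};  // (node, depth)
    while (!stack.empty()) {
        auto [v, d] = stack.back();
        stack.pop_back();
        if (d >= 2) {  // a hotlink to a direct child saves nothing
            double gain = subtreeWeight(t, v) * (d - 1);
            if (gain > bestGain) { bestGain = gain; best = v; }
        }
        for (int c : t[v].children) stack.push_back({c, d + 1});
    }
    return best;  // -1 if no hotlink helps
}
```

Recomputing subtree weights on every visit keeps the sketch short at the cost of O(n²) time; a practical version would precompute them in a single pass.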
30

Design of a Database Neutral OR Mapper in C++

Ježa, Pavel January 2007
This diploma thesis deals with the design and implementation of a database-neutral object-relational (OR) layer in C++ over an inherited database. The goal is to create a layer that encapsulates database access away from the application layer. The suggested layer stems from object-relational mapping technology, which is currently available for many object-oriented programming languages, such as C#, Java, or Visual Basic. The work consists of three main parts. The first part explains object-relational mapping technology and briefly surveys differences in the capabilities and implementation levels of various approaches. The next part describes the significant properties of the databases considered as back ends for data storage in the project; its aim is to present enough information to support a database-neutral design of the OR layer. The rest of the document deals with the design and implementation of the OR layer for the considered environment, followed by a summary of results and an overall evaluation.
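The abstract does not show the layer's interface; as a rough, hedged sketch of the core abstraction such a database-neutral layer might expose (all names hypothetical, not the thesis's design), the database-specific code is confined behind a driver interface while mapped entities and sessions stay back-end agnostic:

```cpp
#include <memory>
#include <string>
#include <vector>

// The only database-specific piece: each back end implements this interface.
struct DbDriver {
    virtual ~DbDriver() = default;
    virtual std::vector<std::vector<std::string>> query(const std::string& sql) = 0;
    virtual void execute(const std::string& sql) = 0;
};

// A mapped entity records how its fields correspond to table columns.
struct Person {
    int id = 0;
    std::string name;
    static constexpr const char* table = "person";
};

// Application code talks to the session, never to DbDriver directly, so
// swapping the back end requires no changes above this abstraction.
class Session {
    std::unique_ptr<DbDriver> db_;
public:
    explicit Session(std::unique_ptr<DbDriver> db) : db_(std::move(db)) {}

    std::vector<Person> loadAll() {
        std::vector<Person> out;
        for (auto& row : db_->query(std::string("SELECT id, name FROM ") + Person::table)) {
            Person p;
            p.id = std::stoi(row[0]);
            p.name = row[1];
            out.push_back(std::move(p));
        }
        return out;
    }

    void save(const Person& p) {
        db_->execute("INSERT INTO " + std::string(Person::table) +
                     " (id, name) VALUES (" + std::to_string(p.id) +
                     ", '" + p.name + "')");  // real code would use bound parameters
    }
};
```

Moving to a different database then means implementing DbDriver once for the new back end, with no changes to the application code.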
