41. Effiziente Ad-Hoc-Abfragen in Objektdatenbanken am Beispiel der ZODB / Efficient ad hoc queries in object databases, using the ZODB as an example. Wehrmann, Sebastian; Theune, Christian. January 2008.
Chemnitz University of Technology, diploma thesis, 2008.
42. Erweiterte Konzepte zur Funktionsintegration / Extended concepts for function integration. Wissmann, Klaus. January 2000.
Ulm University, diploma thesis, 2000.
43. Einsatz von XML in einem Liegenschaftsverwaltungssystem / Use of XML in a real-estate management system. Eickhoff, Luis Gustavo. January 2004.
Konstanz University of Applied Sciences, diploma thesis, 2004.
44. An automated XPATH to SQL transformation methodology for XML data. Jandhyala, Sandeep. January 2006.
Thesis (M.S.), Georgia State University, 2006. Rajshekhar Sunderraman, committee chair; Sushil Prasad, Alex Zelikovsky, committee members. Electronic text (58 p.): digital, PDF file. Description based on contents viewed Aug. 13, 2007. Includes bibliographical references (p. 58).
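The record above gives no implementation details, but the core idea behind XPath-to-SQL transformation is easy to illustrate. The following sketch is generic (not Jandhyala's method): it assumes a hypothetical edge-style node table node(id, parent_id, tag) and translates a child-axis-only path into a chain of self-joins.

```java
import java.util.List;

/**
 * Generic sketch of the XPath-to-SQL idea: a child-axis-only path becomes
 * a chain of self-joins over a hypothetical table node(id, parent_id, tag).
 */
public class XPathToSql {

    /** Turns e.g. ["bib", "book", "title"] into a SQL self-join chain. */
    static String translate(List<String> steps) {
        StringBuilder sql = new StringBuilder(
                "SELECT n" + (steps.size() - 1) + ".id FROM ");
        for (int i = 0; i < steps.size(); i++) {
            if (i > 0) sql.append(" JOIN ");
            sql.append("node n").append(i);
            if (i > 0) {
                sql.append(" ON n").append(i)
                   .append(".parent_id = n").append(i - 1).append(".id");
            }
        }
        sql.append(" WHERE n0.parent_id IS NULL"); // n0 is the root element
        for (int i = 0; i < steps.size(); i++) {
            sql.append(" AND n").append(i)
               .append(".tag = '").append(steps.get(i)).append("'");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(translate(List.of("bib", "book", "title"))); // /bib/book/title
    }
}
```

A real translator would use bind parameters instead of string concatenation and handle the remaining XPath axes and predicates.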
45. Experimental Database Export/Import for InPUT. Karlsson, Stefan. January 2013.
The Intelligent Parameter Utilization Tool (InPUT) is a format and API for the cross-language description of experiments, which makes it possible to define experiments and their contexts at an abstract level in the form of XML- and archive-based descriptors. By using experimental descriptors, programs can be reconfigured without having to be recoded and recompiled, and the experimental results of third parties can be reproduced independently of the programming language and algorithm implementation. Previously, InPUT has supported the export and import of experimental descriptors to/from XML documents, archive files and LaTeX tables. The overall aim of this project was to develop an SQL database design that allows for the export, import, querying, updating and deletion of experimental descriptors, to implement the design as an extension of the Java implementation of InPUT (InPUTj), and to verify the general applicability of the created implementation by modeling real-world use cases. The use cases covered everything from simple database transactions involving simple descriptors to complex database transactions involving complex descriptors. In addition, it was investigated whether queries and updates of descriptors are executed more rapidly if the descriptors are stored in databases in accordance with the created SQL schema and the queries and updates are handled by the DBMS PostgreSQL, or if the descriptors are stored directly in files and the queries and updates are handled by the default XML-processing engine of InPUTj (JDOM). The results of the test cases indicate that the former usually allows for faster execution of queries, while the latter usually allows for faster execution of updates. Using database-stored descriptors instead of file-based descriptors offers many advantages, such as making it significantly easier and less costly to manage, analyze and exchange large amounts of experimental data. However, database-stored descriptors complement file-based descriptors rather than replace them. The goals of the project were achieved, and the different types of database transactions involving descriptors can now be handled via a simple API provided by a Java facade class.
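The abstract leaves the facade API and the SQL schema unspecified; as a rough illustration of the export step only, here is a plain-JDBC sketch against PostgreSQL. The table descriptor(id, content) and all names are invented for illustration and are not InPUTj's actual schema.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

/**
 * Hypothetical sketch of exporting an XML descriptor to PostgreSQL via
 * plain JDBC. Table name, columns and credentials are invented; the real
 * InPUTj extension hides this behind a facade class.
 */
public class DescriptorExport {
    public static void main(String[] args) throws Exception {
        String xml = Files.readString(Path.of("experiment.xml")); // a descriptor file
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/input", "input", "secret");
             PreparedStatement ps = conn.prepareStatement(
                // the ?::xml cast lets PostgreSQL validate the document on insert
                "INSERT INTO descriptor (id, content) VALUES (?, ?::xml)")) {
            ps.setString(1, "experiment-1");
            ps.setString(2, xml);
            ps.executeUpdate();
        }
    }
}
```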
46. Implementierung der XPath-Anfragesprache für XML-Daten in RDBMS unter Ausnutzung des Nummerierungsschemas DLN / Implementation of the XPath query language for XML data in RDBMSs exploiting the DLN numbering scheme. Schmidt, Oliver. 16 November 2017.
In recent years, XML documents have become an important data format for the standardized exchange of a wide variety of information. Accordingly, there is a great demand for storage solutions for XML data. Besides native XML databases, relational database systems with various approaches to storing the documents are increasingly an option as well. The manner of storage, however, is only one aspect of XML data management: users also want to access the data through their accustomed XML interfaces. For this purpose, the XMLRDB project provides a numbering scheme for XML nodes that makes it possible to derive structural information from the data stored in relations. In this diploma thesis, this information is used for an XPath interface in XMLRDB, which is thereby able to convert XPath queries to SQL and to determine their result sets efficiently. For this interface, several methods for implementing the XPath constructs are presented. An implementation shows how the capabilities of different database systems can be profitably integrated into the scheme. Finally, a benchmark is used to analyze the XPath implementation with respect to efficiency and performance.
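The DLN encoding itself is binary and compact, but its effect on query translation can be shown with a simplified string-label stand-in: if every node's label is prefixed by the labels of all its ancestors, the descendant axis reduces to a prefix predicate that an RDBMS can evaluate directly. The table node(label, tag) below is a hypothetical simplification of the XMLRDB storage schema.

```java
/**
 * Simplified illustration of the numbering-scheme idea behind DLN: a node
 * labeled "1.2" is an ancestor of every node whose label starts with
 * "1.2.", so ancestor/descendant tests need no tree traversal and map to
 * SQL LIKE. The real DLN encoding is binary and more compact than this.
 */
public class DlnSketch {

    /** SQL for descendant::section of the node with the given label. */
    static String descendantsQuery(String contextLabel) {
        return "SELECT label FROM node WHERE tag = 'section' "
             + "AND label LIKE '" + contextLabel + ".%'";
    }

    static boolean isAncestor(String a, String b) {
        return b.startsWith(a + "."); // pure label comparison
    }

    public static void main(String[] args) {
        System.out.println(descendantsQuery("1.2"));       // all sections below node 1.2
        System.out.println(isAncestor("1.2", "1.2.3.1"));  // true
        System.out.println(isAncestor("1.2", "1.20.3"));   // false: the "." guards this
    }
}
```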
47. XSLT. Hübsch, Chris. 18 May 2004.
Workshop "Netz- und Service-Infrastrukturen" (Network and Service Infrastructures).
Overview talk on XSLT: mode of operation, control flow, elements and functions.
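The slides themselves are not reproduced here, but the mode of operation the talk covers (template-driven transformation) can be shown with the JDK's built-in XSLT 1.0 processor; the file names in this sketch are placeholders.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

/**
 * Minimal companion example: applying a stylesheet with the JDK's built-in
 * XSLT 1.0 processor. File names are placeholders.
 */
public class ApplyStylesheet {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new File("style.xsl")));
        // The stylesheet's templates drive control flow: the processor walks
        // the input tree and fires the best-matching template for each node.
        t.transform(new StreamSource(new File("input.xml")),
                    new StreamResult(new File("output.html")));
    }
}
```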
48. Comparison and Implementation of Query Containment Algorithms for XPath / Jämförelse och implementation av Query Containment-algoritmer för XPath. Wåreus, Linus; Wällstedt, Max. January 2016.
This thesis investigates the practical aspects of implementing query containment algorithms for the query language XPath. Query containment is the problem of deciding whether the results of one query are a subset of the results of another query for any database; query containment algorithms can be used to optimise the querying process in database systems. Two algorithms have been implemented and compared, the Canonical Model and the Homomorphism Technique, with respect to speed, ease of implementation, accuracy and usability in database systems. Benchmark tests were developed to measure the execution times of the algorithms on a specific set of queries, and a simple database system was developed to investigate the performance gain of using the algorithms. The Homomorphism Technique outperforms the Canonical Model in every test case with respect to speed; the Canonical Model, however, is more accurate. Both algorithms were easy to implement, the Homomorphism Technique more so. In the database system, there was performance to be gained by using query containment algorithms for a certain type of query, but in most cases there was a performance loss. A database system that utilises query containment algorithms for optimisation would therefore have to evaluate, for every issued query, whether such an algorithm should be used.
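To make the comparison concrete, here is a minimal sketch of the homomorphism idea, restricted to the child axis and label wildcards: p contains q when p's pattern tree can be embedded into q's (root to root, labels compatible, child edges preserved). On richer XPath fragments this test is sound but not complete, which matches the abstract's observation that the Canonical Model is more accurate.

```java
import java.util.List;

/**
 * Minimal sketch of the Homomorphism Technique for XPath tree patterns,
 * restricted to the child axis; "*" is a label wildcard. p contains q if
 * p's pattern embeds into q's pattern, root to root.
 */
public class Homomorphism {

    record PatternNode(String label, List<PatternNode> children) {}

    /** Can p's subtree be mapped onto q's subtree rooted here? */
    static boolean embeds(PatternNode p, PatternNode q) {
        if (!p.label().equals("*") && !p.label().equals(q.label())) return false;
        // every child of p must embed into some child of q
        return p.children().stream()
                .allMatch(pc -> q.children().stream().anyMatch(qc -> embeds(pc, qc)));
    }

    public static void main(String[] args) {
        // p = /a/*   q = /a/b[c]   (q's results are a subset of p's)
        PatternNode p = new PatternNode("a", List.of(new PatternNode("*", List.of())));
        PatternNode q = new PatternNode("a", List.of(
                new PatternNode("b", List.of(new PatternNode("c", List.of())))));
        System.out.println(embeds(p, q)); // true: p contains q
    }
}
```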
49. Contrôle d'accès efficace pour des données XML : problèmes d'interrogation et de mise-à-jour / Efficient Access Control to XML Data: Querying and Updating Problems. Mahfoud, Houari. 18 February 2014.
XML has become a standard for the representation and exchange of data across the web. Replication of data at different sites is used to increase availability by minimizing access time to the shared data; however, the safety of the shared data remains an important issue. The aim of this thesis is to propose XML access control models that take into account both read and update rights and that overcome the limitations of existing models. We consider the XPath language and the XQuery Update Facility to formalize user access queries and user update operations, respectively. We give formal descriptions of our read and update access control models and present efficient algorithms to enforce policies specified using these models, with detailed proofs of the correctness of our proposals. The last part of the thesis studies the practicality of our proposals. We present our system, called SVMAX, which implements our solutions, and we conduct an extensive experimental study, based on a real-life DTD, to show that it scales well. Many native XML database systems (NXD systems) have been proposed recently that are aware of the XML data structure and provide efficient manipulation of XML data through most of the W3C standards. We show that our system can be integrated easily and efficiently within a large set of NXD systems, namely BaseX, Sedna and eXist-db. To the best of our knowledge, SVMAX is the first system for securing XML data in the presence of arbitrary DTDs (recursive or not), a significant fragment of XPath and a rich class of XML update operations.
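SVMAX's actual enforcement algorithms are DTD-aware and considerably more involved, but the general query-rewriting idea behind XPath-based access control can be sketched naively: the user's query is conjoined with an accessibility predicate so that denied nodes can never be reached. The @confidential marker attribute below is invented for illustration and is not part of SVMAX.

```java
/**
 * Naive sketch of access control by query rewriting: a result node is kept
 * only if no node on its root path is marked as denied. The marker
 * attribute is hypothetical; real systems derive such predicates from the
 * security policy and the DTD.
 */
public class AccessRewrite {

    static String rewrite(String userQuery) {
        // keep only result nodes with no denied node on their root path
        return userQuery + "[not(ancestor-or-self::*[@confidential = 'yes'])]";
    }

    public static void main(String[] args) {
        System.out.println(rewrite("//patient/record"));
        // //patient/record[not(ancestor-or-self::*[@confidential = 'yes'])]
    }
}
```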
50. Evaluation of web scraping methods: Different automation approaches regarding web scraping using desktop tools / Utvärdering av webbskrapningsmetoder: Olika automatiserings metoder kring webbskrapning med hjälp av skrivbordsverktyg. Oucif, Kadday. January 2016.
A lot of information can be found and extracted from the semantic web in different forms through web scraping, with many techniques having emerged over time. This thesis is written with the objective of evaluating different web scraping methods in order to develop an automated, reliable, easily implemented and solid extraction process. A number of parameters are defined to evaluate and compare existing techniques. A matrix of desktop tools is examined, and two were chosen for evaluation. The evaluation also covers learning to set up the scraping process with so-called agents. A number of links are scraped using the presented techniques, with and without executing JavaScript from the web sources. Prototypes using the chosen techniques are presented, with Content Grabber as the final solution. The result is a better understanding of the subject, along with a cost-effective extraction process consisting of different techniques and methods, where a good understanding of the web sources' structure facilitates data collection. Finally, the result is discussed and presented with regard to the chosen parameters.
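The thesis evaluates desktop tools such as Content Grabber; for contrast, a minimal programmatic sketch using only the JDK (Java 11+) shows the other end of the automation spectrum. It fetches raw HTML without executing JavaScript, so client-side-rendered content, one of the distinctions the abstract measures, would be missed.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Minimal programmatic scraper using only the JDK (Java 11+). Fetches the
 * raw HTML of a page (no JavaScript execution) and lists its links.
 */
public class SimpleScraper {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com")).GET().build();
        String html = client.send(request, HttpResponse.BodyHandlers.ofString()).body();

        // Crude link extraction; a real scraper would use an HTML parser.
        Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(html);
        while (m.find()) {
            System.out.println(m.group(1));
        }
    }
}
```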