331

Towards guidelines for TEI encoding of text artefacts in Egyptology

Werning, Daniel A. 21 April 2017 (has links) (PDF)
This presentation outlines the state of the discussion, as of mid-2016, on guidelines for TEI XML encoding of Ancient Egyptian text artefacts in Egyptology. It introduces the Egyptological projects actively involved in the development of TEI encoding recommendations and online thesauri/ontologies. Special attention is paid to the TEI encoding of toponyms, personal names, relative and absolute dates, as well as language varieties and script varieties. Furthermore, the presentation introduces the current state of an EpiDoc Cheatsheet for Egyptology compiled by Daniel A. Werning, which gives recommendations for encoding traditional philological markup in Egyptology and largely conforms to the EpiDoc Guidelines (v8.21). A specific topic in this respect is the adaptation of the TEI ‘regularization’ tag <reg> to the needs of Egyptology.
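As a rough illustration of the markup discussed in this abstract, the sketch below builds a small, invented TEI fragment containing a regularization (<choice>/<orig>/<reg>), a personal name, a toponym and a date, and extracts the regularized readings with Python's standard library. The element names follow general TEI practice, but the sample text, the @ref pointers and the @when value are hypothetical and are not taken from the cheatsheet itself.

```python
# A minimal, hypothetical TEI fragment in the spirit of the guidelines above:
# <choice>/<orig>/<reg> for regularization, plus <persName>, <placeName>, <date>.
# All content and attribute values are invented for illustration.
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"

fragment = """
<ab xmlns="http://www.tei-c.org/ns/1.0">
  The official <persName ref="#official1">Senenmut</persName>
  dedicated this stela at <placeName ref="#thebes">Thebes</placeName>
  in <date when="-1470">regnal year 7</date> to
  <choice>
    <orig>jmn-ra</orig>
    <reg>Amun-Ra</reg>
  </choice>.
</ab>
"""

root = ET.fromstring(fragment)

# Collect every regularization pair (original spelling -> regularized form).
for choice in root.iter(f"{{{TEI_NS}}}choice"):
    orig = choice.findtext(f"{{{TEI_NS}}}orig")
    reg = choice.findtext(f"{{{TEI_NS}}}reg")
    print(f"{orig!r} regularized as {reg!r}")
```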
332

Modelo de sistema para gerenciamento de conhecimentos explícitos em abordagens de DFA (Design for Assembly) / Management model for explicit knowledge in DFA (Design For Assembly) approaches

Savi, Antonio Francisco 29 April 2009 (has links)
An important source of competitive advantage for many companies is the ability to design products made of a small number of easy-to-assemble parts while still meeting consumer expectations, an approach known as DFA (Design for Assembly). To redesign existing products, or to reduce the cost of designing new products with this focus, it is necessary to obtain information that is often stored in hard-to-reach places and in a wide variety of knowledge repositories. One way to obtain this information is to create a class of system, called peer-to-peer, that allows these documents to be synchronized and shared among locations spread across a network. This type of system decentralizes the information: it is distributed throughout the network (internal or external), with the advantage that each organization keeps the information under its own guard, never placing it on third-party servers, while the information can still reach users automatically by means of XML transactions. The objective of this work is therefore to develop a knowledge-management support tool that acts on the generation, codification and transfer of knowledge about DFA techniques. The evaluation of existing solutions was used as the methodology for proposing a theoretical model that specifies the development of the system. It can be concluded that this type of system gives participants a more effective means of gathering information, since information about DFA can be consulted.
333

Serviços semânticos: uma abordagem RESTful. / Semantic web services: a RESTful approach

Ferreira Filho, Otávio Freitas 06 April 2010 (has links)
This work addresses the development of semantic Web services according to the REST architectural style. More specifically, it considers a REST realization based on the HTTP protocol, resulting in RESTful semantic Web services. The development of semantic Web services has been the subject of many academic publications; however, most of this effort considers services designed according to the RPC architectural style, typically through the SOAP protocol. The RPC approach, strongly promoted by the software industry, is perfectly feasible in technological terms, but it adds unnecessary processing and definitions, resulting in services that are more complex, slower and less scalable than necessary. REST services, in fact, make up the majority of the services available in Web 2.0, the name widely adopted for the current phase of the Web, with its clear focus on collaboratively generated content. The proposal presented here uses a specific selection of existing languages and protocols, which reinforces its feasibility: OWL-S is used as the service ontology and WADL for the syntactic description of the services; the HTTP protocol is used to transfer messages, to define the action to be executed and to define the scope in which that action is executed; and URI identifiers define the service access interface. The final compilation gives rise to the RESTfulGrounding ontology, a specialization of OWL-S.
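To make the syntactic half of this selection concrete, the sketch below parses a WADL-style description of a single hypothetical resource and lists the HTTP methods and URIs it exposes. The base URI, resource path and parameter are invented; only the WADL vocabulary (application, resources, resource, param, method, representation) reflects the actual description language, and the OWL-S/RESTfulGrounding semantic layer is not shown.

```python
# Parse a small, invented WADL description and print the HTTP interface it
# declares. This only illustrates the syntactic (WADL) part of the proposal.
import xml.etree.ElementTree as ET

WADL_NS = "http://wadl.dev.java.net/2009/02"
ns = {"w": WADL_NS}

wadl = """
<application xmlns="http://wadl.dev.java.net/2009/02">
  <resources base="https://example.org/api/">
    <resource path="books/{id}">
      <param name="id" style="template" type="xsd:string"/>
      <method name="GET" id="getBook">
        <response>
          <representation mediaType="application/xml"/>
        </response>
      </method>
      <method name="DELETE" id="removeBook"/>
    </resource>
  </resources>
</application>
"""

root = ET.fromstring(wadl)
for resources in root.findall("w:resources", ns):
    base = resources.get("base")
    for resource in resources.findall("w:resource", ns):
        for method in resource.findall("w:method", ns):
            # Each method/URI pair is what the semantic layer would then ground.
            print(f"{method.get('name'):6} {base}{resource.get('path')}")
```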
334

Incremental Maintenance Of Materialized XQuery Views

El-Sayed, Maged F 23 August 2005 (has links)
"Keeping views fresh by maintaining the consistency between materialized views and their base data in the presence of base updates is a critical problem for many applications, including data warehousing and data integration. While heavily studied for traditional databases, the maintenance of XML views remains largely unexplored. Maintaining XML views is complex due to the richness of the XML data model and the powerful capabilities of XML query languages, such as XQuery. This dissertation proposes a comprehensive solution for the general problem of maintaining materialized XQuery views. Our solution is the first to enable the maintenance of a large class of XQuery views including XPath expressions, FLWOR expressions, and Element Constructors. These views may contain arbitrary result construction and arbitrary grouping and join operations. Our solution also supports the unique order requirements of XQuery including source document order and query order. The contributions of this dissertation include: (i) an efficient solution for supporting order in XML query processing and view maintenance, (ii) an identifier-based technique for enabling incremental construction of XML views, (iii) a mechanism for modeling and validating source XML updates, (iv) a counting algorithm for supporting view maintenance on delete and modify updates, (v) an algebraic solution for propagating bulk XML updates, and (vi) an efficient mechanism for refreshing materialized XML views on propagated updates. We provide proofs of correctness of our proposed techniques for materialized XQuery maintenance. We have implemented a prototype of our view maintenance solution on top of the Rainbow XML query engine, developed at WPI. Our experiments confirm that our solution provides a practical and efficient solution for maintaining materialized XQuery views even when handling heterogeneous batches of possibly large source updates. Our solution follows the widely adopted propagate-apply framework for view maintenance common to all mainstream query engines. That is, our solution produces incremental maintenance plans in the same algebraic language used to define the views. These plans can thus be optimized and executed by standard query processing techniques. Being compatible with standard frameworks paves the way for our XML view maintenance solution to be easily adopted by existing database engines."
335

Updating XML Views

Wang, Ling 24 August 2006 (has links)
"Update operations over XML views are essential for applications using XML views. In this dissertation work, we provide scalable solutions to support updating through XML views defined over relational databases. Especially we focus on the update-public semantic, where updates are always public (made to the public database), and the update-local semantic, where update effects are first kept local and then made public as and when required. Towards this, we propose the clean extended-source theory for determining whether a correct view update translation exists, which then serves as a theoretical foundation for us to design practical XML view updating algorithms. Under update-public semantic, state-of-the-art view updating work focus on identifying the correct update translation purely on the data. We instead take a schema-centric solution, which utilizes the schema of the underlying source to effectively prune updates that are guaranteed to be not translatable and pass updates that are guaranteed to be translatable directly to the SQL engine. Only those updates that could not be classified using schema knowledge are finally analyzed by examining the data. This required data-level check is further optimized under schema guidance to prune the search space for finding a correct translation. As the first work addressing the update-local semantic, we propose a practical framework, called LoGo. LoGo Localizes the view update translation, while preserves the properties of views being side-effect free and updates being always updatable. LoGo also supports on-demand merging of the local database of the subject viewinto the public database (also called global database), while still guaranteeing the subject view being free of side effects. A flexible synchronization service is provided in LoGo that enables all other views defined over the same public database to be refreshed, i.e., synchronized with the publically committed changes, if so desired. Further, given that XMLis an ordered datamodel,we propose an ordersensitive solution named O-HUX to support XML view updating with order. We have implemented the algorithms, along with respective optimization techniques. Experimental results confirm the effectiveness of the proposed services, and highlight its performance characteristics."
336

Automaton Meets Algebra: A Hybrid Paradigm for Efficiently Processing XQuery over XML Stream

Su, Hong 30 January 2006 (has links)
XML stream applications bring the challenge of efficiently processing queries on sequentially accessible, token-based data streams. The automaton paradigm is naturally suited for pattern retrieval on tokenized XML streams, but requires patches for implementing the filtering or restructuring functionalities common to XML query languages. In contrast, the algebraic paradigm is well established for processing self-contained tuples, but does not traditionally support token inputs. This dissertation proposes a framework called Raindrop, which accommodates both the automaton and the algebra paradigms to take advantage of both. First, we propose an architecture for Raindrop. Raindrop is an algebraic framework that models queries at different abstraction levels. We represent the token-based automaton computations as an algebraic subplan at the high level while exposing the automaton details at the low level. The algebraic subplan modeling the automaton computations can thus be integrated with the algebraic subplan modeling the non-automaton computations. Second, we explore a novel optimization opportunity. Other XML stream processing systems always retrieve all the patterns in a query in the automaton. In contrast, Raindrop allows a plan to perform some of the pattern retrieval in the automaton and some outside it. This opens up an automaton-in-or-out optimization opportunity. We study this optimization in two types of run-time environments, one with stable data characteristics and one with fluctuating data characteristics, and provide search strategies catering to each environment. We also describe how to migrate from a currently running plan to a new plan at run-time. Third, we optimize the automaton computations using schema knowledge. A set of criteria is established to decide which schema constraints are useful to a given query. Optimization rules utilizing different types of schema constraints are proposed based on these criteria. We design a rule application algorithm that ensures both completeness (i.e., no optimization is missed) and minimality (i.e., no redundant optimization is introduced). Experiments on both real and synthetic data illustrate that these techniques bring significant performance improvement with little overhead.
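The automaton half of this hybrid can be pictured with a toy stack-driven matcher that retrieves a single path pattern from a stream of start/end tokens without building the whole document tree. The pattern, element names and data are invented, and Raindrop's actual coupling of such automata with algebraic operators is of course far richer than this sketch.

```python
# Stream the document as start/end tokens and report every /bib/book/title,
# discarding tokens as soon as they are no longer needed.
import io
import xml.etree.ElementTree as ET

stream = io.BytesIO(b"""
<bib>
  <book><title>Streaming XQuery</title><year>2006</year></book>
  <book><title>Token Automata</title><year>2005</year></book>
</bib>
""")

pattern = ["bib", "book", "title"]   # path pattern retrieved "in the automaton"
stack = []

for event, elem in ET.iterparse(stream, events=("start", "end")):
    if event == "start":
        stack.append(elem.tag)
    else:  # "end": the element's text is now complete
        if stack == pattern:
            print("match:", elem.text)   # would be handed to the algebraic plan
        stack.pop()
        elem.clear()                     # free tokens we no longer need
```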
337

XEM: XML Evolution Management

Kramer, Diane S. 21 July 2001 (has links)
"As information on the World Wide Web continues to proliferate at an astounding rate, the Extensible Markup Language (XML) has been emerging as a standard format for data representation on the web. In many application domains, specific document type definitions (DTDs) are designed to enforce a semantically agreed-upon structure of the XML documents. In XML context, these structural definitions serve as schemata. However, both the data and the structure (schema) of XML documents tend to change over time for a multitude of reasons, including to correct design errors in the DTD, to allow expansion of the application scope over time, or to account for the merging of several businesses into one. Most of the current software tools that enable the use of XML do not provide explicit support for such data or schema changes. Using these tools in a changing environment entails making manual edits to DTDs and XML data and reloading them from scratch. In this vein, we put forth the first solution framework, called XML Evolution Manager (XEM), to manage the evolution of DTDs and XML documents. XEM provides a minimal yet complete taxonomy of basic change primitives. These primitives, classified as either data or schema changes, are consistency-preserving. For a data change, they ensure that the modified XML document conforms to its DTD both in structure and constraints. For a schema change, they ensure that the new DTD is well-formed, and all existing XML documents are transformed also to conform to the modified DTD. We prove both the completeness of our evolution taxonomy, as well as its consistency-preserving nature. To verify the feasibility of our XEM approach we have implemented a working prototype system in Java, using the XML4J parser from IBM and PSE Pro as our backend storage system. We present an experimental study run on this system where we compare the relative efficiencies of the primitive operations in terms of their execution times. We then contrast these execution times against the time to reload the data, which would be required in a manual system. Based on the results of these experiments we conclude that our approach improves upon the previous method of making manual changes and reloading data from scratch by providing automated evolution management facilities for DTDs and XML documents."
338

Semantic Query Optimization for Processing XML Streams with Minimized Memory Footprint

Li, Ming 25 August 2007 (has links)
"XML streams have become increasingly prevalent in modern applications, ranging from network traffic monitoring to real-time information publishing. XQuery evaluation over XML streams require the temporary buffering of XML elements, which not only utilizes system buffer and CPU resources but also causes un-necessary output latency. This thesis presents a semantic query optimization solution to minimize memory footprint during XQuery evaluation by exploiting XML schema knowledge. In many practical applications, XML streams are generated conforming to pre-defined schema constraints typically expressed via a DTD or an XML schema specification. Utilizing such constraints enables us to on-the-fly predict the non-occurrence of a given pattern within a bound context. This helps us to avoid data buffering and to release buffered data at an earlier moment, thus achieving a minimized memory footprint. In this work, we focus on one particular class of constraints, namely, the Pattern Non-Occurrence (PNO) constraint. We develop an automaton-based technique to detect PNO constraints at runtime. For a given query, optimization opportunities which can be triggered by runtime PNO detection are explored for memory footprint minimization. Optimization decisions are encoded using our proposed Condition-Action Graph (CAG). The optimization-embedded execution strategy is then proposed to execute an optimized plan by detecting PNO constraints at run-time and then triggering the corresponding encoded actions when certain predefined conditions are satisfied. To ensure the efficiency of such PNO-triggered optimization, we propose optimization strategy on shrinking the CAGs by utilizing constraint knowledge during the query plan compiling phase. We implement our optimization technique within the Raindrop XQuery engine. Our system implementation processes XQuery utilizing the Raindrop algebra. It is efficiently augmented by our optimization module, which uses Glushkov automaton technique to capture and monitor PNO constraints in parallel with the query-driven pattern retrieval. Finally, we conduct experimental studies using both real and synthetic data streams to illustrate that our techniques bring significant performance improvement in both memory and CPU usage as well as improved output latency over state-of-the-art solutions, with little overhead."
339

Consistently Updating XML Documents Using Incremental Checks With XQueries

Kane, Bintou 05 May 2003 (has links)
When updating valid XML data or a schema, an efficient yet lightweight mechanism is needed to determine whether the update would invalidate the document. Towards this goal, we have developed a framework called SAXE. First, we analyzed the constraints expressed in the XML Schema specification to establish constraint rules that must be observed when a schema, or XML data conforming to a given XML Schema, is altered. We then classify the rules based on their relevancy for a given update case; that is, we show the minimal set of rules that must be checked to guarantee safety for each update primitive. Next, we illustrate that this set of incremental constraint checks can be specified using generic XQuery expressions composed of three types of components. Safe updates of XML data have the following components: (1) XML Schema meta-queries to retrieve any constraint knowledge potentially relevant to the given update from the schema or XML data being altered, (2) retrieval of specific characteristics from the to-be-modified XML, and (3) an analysis of the information collected about the XML schema and the affected XML document to determine the validity of the update. For safe schema alteration, the components are: (1) XML Schema meta-queries to retrieve relevant information from the schema, (2) analysis and usage of the retrieved information to update the schema, and (3) propagation of the changes to the XML data when necessary. As a proof of concept, we have established a library of these generic XQuery constraint checks for the type-related XML constraints. The key idea of SAXE is to rewrite each XQuery update into a safe XML query by extending it with appropriate constraint-check subqueries. This enhanced XML update query can then safely be executed using any existing XQuery engine that supports updates - thus turning any update engine automatically into an incremental constraint-check engine. In order to verify the feasibility of our approach, we have implemented a prototype system, SAXE, that generates safe XQuery updates. Our experimental evaluation assesses the overhead of rewriting as well as the relative performance of our loosely coupled incremental constraint-check approach against the more traditional approach of first changing the document and then revalidating it.
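The rewriting idea can be sketched at the level of query text: the update is wrapped in a conditional whose test is the generated constraint check, so that any update-capable XQuery engine will refuse unsafe changes. The constraint, element names and the XQuery Update Facility-style syntax below are purely illustrative; SAXE's actual library derives its checks systematically from the XML Schema constraint rules.

```python
# Wrap an (invented) XQuery update in an (invented) constraint-check subquery,
# mirroring the rewrite-into-a-safe-update idea described above.
def make_safe(update_expr: str, constraint_check: str) -> str:
    """Rewrite an update into a guarded update that runs the check first."""
    return (
        f"if ({constraint_check})\n"
        f"then {update_expr}\n"
        f"else error((), 'update rejected: schema constraint would be violated')"
    )

update = "insert node <author>New Author</author> as last into doc('bib.xml')/bib/book[1]"
# Hypothetical check derived from a schema rule 'a book has at most three authors'.
check = "count(doc('bib.xml')/bib/book[1]/author) < 3"

print(make_safe(update, check))
```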
340

Updating XML Views of Relational Data

Mulchandani, Mukesh K 29 April 2003 (has links)
XML has emerged as the standard data format for Internet-based business applications. In many business settings, a relational database management system (RDBMS) serves as the storage manager for data from XML documents. In such a system, once the XML data is shredded and loaded into the storage system, XML queries posed against these (now virtual) XML documents are processed by translating them as much as possible into SQL queries against the underlying relational storage. Clearly, in order to support full database functionality over XML data, we must allow users not only to query but also to specify updates on XML documents. While the XML query language XQuery is being standardized by the W3C, the current language proposal does not yet include any syntax for updating XML documents. In this thesis, we have developed techniques for translating XML updates on XML views of relational data into SQL updates on the underlying relations. These techniques are based on techniques for translating updates on object-based views of relational data into SQL updates on the underlying relations [Keller91]. The system has been implemented as part of the XML management system Rainbow, which is being developed at the Worcester Polytechnic Institute (WPI). We used XQuery as the XML query language and Oracle as the backend relational store for the implementation of the system. Experimental studies show that the incremental XML updates supported by our system are a better choice than a complete reload of the XML documents under a variety of system settings.
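A drastically simplified version of such a translation step is sketched below: an update against a path in the XML view is mapped, through a hand-written view mapping, to a parameterized SQL UPDATE on the underlying relation. The table, columns and view paths are invented; in the system itself the mapping is derived from the view definition rather than hard-coded.

```python
# Translate an edit on a node of the XML view into SQL against the base table.
VIEW_MAPPING = {
    # XML view path      -> (table, column, key column)
    "/bib/book/title":      ("book", "title", "book_id"),
    "/bib/book/price":      ("book", "price", "book_id"),
}

def translate_update(view_path: str, key_value, new_value):
    """Map 'replace the value at view_path for this key' to a parameterized UPDATE."""
    table, column, key_col = VIEW_MAPPING[view_path]
    sql = f"UPDATE {table} SET {column} = %s WHERE {key_col} = %s"
    return sql, (new_value, key_value)

sql, params = translate_update("/bib/book/title", 42, "Updating XML Views")
print(sql)      # UPDATE book SET title = %s WHERE book_id = %s
print(params)   # ('Updating XML Views', 42)
```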
