11

Homem do Barranco : pesquisa e dramaturgia na cena ribeirinha / Homem do Barranco: research and dramaturgy on the riverside scene

Ferreira, Carlos Roberto 27 June 2014 (has links)
A process of rewriting the dramatic poem Homem do Barranco, by Carlos Roberto Ferreira, whose imagery and site of investigation found support in the community of São Gonçalo Beira Rio in Cuiabá, Mato Grosso. The rewriting is grounded in a present-day look at the neighborhood, with records of the changes it has undergone over the last two decades, and in studies of contemporary theater. The research and dramaturgy carried out for the first staging are imprinted in the outcome of the experience of living alongside the community, worked from clay, belonging, and collective memory. The rewritten text, Arrimo das Águas, proposes a performative staging, revealing it simultaneously, for the present time, as a ravaged mainstay and as the flow and counter-flow of the changes in the sociocultural and environmental image of the São Gonçalo Beira Rio neighborhood today.
12

La variabilité dans quatre versions de l’épopée mandingue / Variability in four versions of the Mandinka epic

Kouyaté, Mamadou 20 February 2015 (has links)
This thesis examines variability in the rewriting of four versions of the Mandinka epic. Drawing on the differential comparison initiated by Ute Heidmann (2005), which advocates a non-hierarchizing approach to texts, it sets out to highlight markers of variability in the diversity of their textual forms. These markers are generated by different enunciative sources, notably the figure of the griot, who represents the characters differently, describes certain historical facts and the natural world in his own way, and stages verbal jousts as he sees fit. Finally, the markers of variability concern the different editions of the same heritage text of Mandinka literature. On the basis of the corpus, this study explores the variables that constitute sites of textual fluidity, inducing shifts in meaning that are sometimes due to rivalry between the griots who author the performances.
13

A Rewriting-based, Parameterized Exploration Scheme for the Dynamic Analysis of Complex Software Systems

Frechina Navarro, Francisco 17 November 2014 (has links)
Today's software systems are complex artifacts whose behavior is often extremely difficult to understand, a fact that has driven the development of sophisticated formal methodologies for program analysis, comprehension, and debugging. Execution trace analysis consists of dynamically searching for specific content within the execution traces of a given program. The search can be carried out forwards or backwards: forward analysis amounts to a form of impact analysis that identifies the scope and potential consequences of changes to the program input, while backward analysis supports provenance tracking, i.e., it shows how (parts of) the program output depend on (parts of) its input and helps estimate which input data must be modified to bring about a change in the result. This thesis investigates a series of trace-analysis methodologies that are particularly well suited to long and complex execution traces in rewriting logic, a logical and semantic framework especially appropriate for formalizing highly concurrent systems. The first part of the thesis develops a backward trace-analysis technique that achieves huge reductions in trace size. The methodology is based on incremental slicing and favors better analysis and debugging, since most of the tedious and irrelevant inspections routinely performed during diagnosis and fault localization can be eliminated automatically. The technique is illustrated with several examples run in iJulienne, an interactive slicing tool we have developed that implements the backward trace-analysis technique. The second part of the thesis formalizes a parametric, flexible, and dynamic scheme for exploring computations in rewriting logic. The scheme implements a generic animation algorithm that supports the nondeterministic execution of a given conditional rewrite theory and can be followed using different modalities, including incremental step-by-step execution and automatic forward and/or backward slicing, which drastically reduces the size and complexity of the traces under inspection and allows users to evaluate in isolation the effects of a given statement or instruction, track the effects of changing the input, and gain insight into program behavior (or misbehavior). Moreover, slicing the execution trace can reveal new program-optimization opportunities. With this methodology, an analyst can browse, slice, filter, or search the trace during program execution. The generic trace-analysis framework has been implemented in the Anima system, and we describe a thorough experimental evaluation that demonstrates the usefulness of the proposed approach. / Frechina Navarro, F. (2014). A Rewriting-based, Parameterized Exploration Scheme for the Dynamic Analysis of Complex Software Systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/44234
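The backward trace slicing idea in this abstract can be illustrated with a small, self-contained sketch. The snippet below is purely illustrative, not the iJulienne or Anima implementation; the trace representation and all names are invented for the example. It propagates a slicing criterion backwards through a recorded trace, keeping at each earlier state only the positions the criterion depends on.

```python
# Minimal sketch of backward trace slicing in the spirit of the technique the
# abstract describes (not the iJulienne/Anima implementation; all names are
# illustrative). A trace is a list of steps; each step records, for every
# position of the resulting state, the set of positions of the previous state
# it was derived from.

def backward_slice(trace, criterion):
    """Propagate a slicing criterion (positions of interest in the final
    state) backwards through the trace, returning the relevant positions
    of each state, oldest first."""
    relevant = set(criterion)
    slices = [relevant]
    for step in reversed(trace):          # walk the trace backwards
        prev = set()
        for pos in relevant:
            # positions of the previous state this position depends on
            prev |= step["deps"].get(pos, set())
        relevant = prev
        slices.append(relevant)
    return list(reversed(slices))         # oldest state first


# Toy trace: s0 -> s1 -> s2, with explicit position-level dependencies.
trace = [
    {"rule": "r1", "deps": {"a": {"x"}, "b": {"x", "y"}}},   # s0 -> s1
    {"rule": "r2", "deps": {"out": {"b"}}},                   # s1 -> s2
]

# Asking "where does `out` come from?" keeps only {b} in s1 and {x, y} in s0,
# discarding everything irrelevant to the criterion.
print(backward_slice(trace, {"out"}))     # [{'x', 'y'}, {'b'}, {'out'}]
```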
14

srcDiff: Syntactic Differencing to Support Software Maintenance and Evolution

Decker, Michael John 24 July 2017 (has links)
No description available.
15

Práticas de linguagem na sala de aula: caminho para a formação da competência comunicativa / Language practices in the classroom: a path toward building communicative competence

Rodrigues Júnior, Hélio 01 November 2011 (has links)
Secretaria da Educação do Estado de São Paulo / The Brazilian National Curriculum Parameters (PCN) for Portuguese (BRASIL, 1998) for elementary education marked a significant advance in the teaching of written production, adopting textual genres as a way to redirect classroom language practices. However, they offer no methodological proposals for teaching the situations in which language is put to work; that is, they do not signal paths for the teacher's practice, generating countless doubts about how to conceive the teaching of genres and how to conduct it satisfactorily so as to build the student's communicative competence. Hence the main objective of this work: to contribute to the teaching of Portuguese through a strategy for the written production of a textual genre, grounded in epilinguistic activities, specifically rewriting, and guided by aspects of sociodiscursive interactionism, linguistic education, text linguistics, and textual discourse analysis. Two groups were formed: one wrote without any interference from the researcher, and the other was accompanied by the researcher while didactic sequences were applied. We then compared the results in order to test our hypothesis that students taught through language practices, with a focus on textual and discursive analysis in genre-oriented interaction activities, would show greater competence in the use of the language. We found that our hypothesis holds, since rewriting the text, combined with a control list (a didactic instrument used by the Geneva Group), allows students, in revising their texts, to appropriate textual genres and thus adapt their discourse to a given communication situation. We propose and apply a conscious intervention, shown to be an important tool for the teaching and learning of written production at school.
16

Scalable Preservation, Reconstruction, and Querying of Databases in terms of Semantic Web Representations

Stefanova, Silvia January 2013 (has links)
This thesis addresses how Semantic Web representations, in particular RDF, can enable flexible and scalable preservation, recreation, and querying of databases. An approach has been developed for selective scalable long-term archival of relational databases (RDBs) as RDF, implemented in the SAQ (Semantic Archive and Query) system. The parts of an RDB to archive are specified using A-SPARQL, an extension of SPARQL. SAQ automatically generates an RDF view of the RDB, the RD-view. The result of an archival query is a set of RDF triples stored in: i) a data archive file containing the preserved RDB content, and ii) a schema archive file containing sufficient meta-data to reconstruct the archived database. To achieve scalable data preservation and recreation, SAQ uses special query rewriting optimizations for the archival queries. It was experimentally shown that they improve query execution and archival time compared with naïve processing. The performance of SAQ was compared with that of other systems supporting SPARQL queries to views of existing RDBs. When an archived RDB is to be recreated, the reloader module of SAQ first reads the schema archive file and executes a schema reconstruction algorithm to automatically construct the RDB schema. The RDB thus created is populated by reading the data archive and converting the read data into relational attribute values. For scalable recreation of RDF-archived data, we have developed the Triple Bulk Load (TBL) approach, in which the relational data is reconstructed using the bulk load facility of the RDBMS. Our experiments show that the TBL approach is substantially faster than the naïve Insert Attribute Value (IAV) approach, despite the added sorting and post-processing. To view and query semi-structured Topic Maps data as RDF, the prototype system TM-Viewer was implemented. A declarative RDF view of Topic Maps, the TM-view, is automatically generated by TM-Viewer using a conceptual schema developed for the Topic Maps data model. To achieve efficient processing of SPARQL queries to the TM-view, query rewriting transformations were developed and evaluated; they were shown to significantly improve query execution time. / eSSENCE
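As a rough illustration of what an automatically generated RDF view of a relational database involves, the sketch below converts relational rows into RDF triples. It is an assumption-laden toy example, not SAQ's RD-view generation or A-SPARQL: the base namespace, table layout, and triple shapes are invented for the illustration.

```python
# Hedged sketch of the kind of relational-to-RDF mapping an "RD-view" implies:
# each row becomes a subject URI and each column value becomes a triple. This
# is an illustration only, not SAQ's view-generation algorithm; the base URI
# and table layout are made up for the example.

BASE = "http://example.org/archive/"     # hypothetical namespace

def rows_to_triples(table, key, rows):
    """Yield (subject, predicate, object) triples for every row of `table`,
    using column `key` as the row identifier."""
    for row in rows:
        subject = f"{BASE}{table}/{row[key]}"
        for column, value in row.items():
            if column == key or value is None:
                continue
            yield (subject, f"{BASE}{table}#{column}", value)

rows = [{"id": 1, "title": "Thesis A", "year": 2013},
        {"id": 2, "title": "Thesis B", "year": 2012}]

for triple in rows_to_triples("publication", "id", rows):
    print(triple)
# ('http://example.org/archive/publication/1',
#  'http://example.org/archive/publication#title', 'Thesis A') ...
```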
17

Removing DUST using multiple alignment of sequences

Rodrigues, Kaio Wagner Lima 21 September 2016 (has links)
FAPEAM - Fundação de Amparo à Pesquisa do Estado do Amazonas / A large number of URLs collected by web crawlers correspond to pages with duplicate or near-duplicate content. These duplicate URLs, generically known as DUST (Different URLs with Similar Text), adversely impact search engines, since crawling, storing, and using such data wastes resources, produces low-quality rankings, and degrades the user experience. To deal with this problem, several studies have proposed ways to detect and remove duplicate documents without fetching their contents. To accomplish this, the proposed methods learn normalization rules that transform all duplicate URLs into the same canonical form; crawlers can then use this information to avoid fetching DUST. A challenging aspect of this strategy is to efficiently derive the minimum set of rules that achieves the largest reduction with the smallest false-positive rate. As most methods are based on pairwise analysis, the quality of the rules is affected by the criterion used to select the examples and by the availability of representative examples in the training sets. To avoid processing large numbers of URLs, these methods employ techniques such as random sampling or look for DUST only within sites, preventing the generation of rules involving multiple DNS names. As a consequence, current methods are very susceptible to noise and, in many cases, derive rules that are overly specific. In this thesis, we present a new approach to deriving quality rules that takes advantage of a multi-sequence alignment strategy. We demonstrate that a full multi-sequence alignment of URLs with duplicated content, performed before rule generation, can lead to very effective rules. Experimental results show that our approach achieved larger reductions in the number of duplicate URLs than our best baseline on two different web collections, despite being much faster. We also present a distributed version of our method, using the MapReduce framework, and demonstrate its scalability by evaluating it on a set of 7.37 million URLs.
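The following toy sketch illustrates the general DUST idea of learning a normalization rule from a cluster of URLs known to share the same content; it is not the alignment-based method of the thesis. Here the "rule" simply drops query parameters whose values vary within the cluster, and the example URLs and parameter names are invented.

```python
# Minimal illustration of deriving a URL normalization rule from a cluster of
# URLs known to share the same content (the "DUST" setting). It only looks at
# query-string parameters: a parameter whose value varies inside the cluster
# is assumed to be content-irrelevant and is dropped. This is a toy sketch of
# the general idea, not the multi-sequence-alignment method of the thesis.

from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit

def learn_irrelevant_params(duplicate_urls):
    """Return query parameters whose values are not constant across the cluster."""
    seen = {}
    for url in duplicate_urls:
        for name, value in parse_qsl(urlsplit(url).query):
            seen.setdefault(name, set()).add(value)
    return {name for name, values in seen.items() if len(values) > 1}

def canonicalize(url, irrelevant):
    """Rewrite `url` to its canonical form by removing irrelevant parameters."""
    parts = urlsplit(url)
    kept = [(n, v) for n, v in parse_qsl(parts.query) if n not in irrelevant]
    return urlunsplit(parts._replace(query=urlencode(kept)))

cluster = [
    "http://example.org/album?id=42&session=aaa",
    "http://example.org/album?id=42&session=bbb",
]
rule = learn_irrelevant_params(cluster)            # {'session'}
print(canonicalize(cluster[0], rule))              # http://example.org/album?id=42
```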
18

Du sens et de l'utilité des réécritures dans la littérature comparée. Maryse Condé, Assia Djebar, Nédim Gürsel, Abdelwahab Meddeb / The meaning and utility of rewriting in comparative literature. Maryse Condé, Assia Djebar, Nedim Gürsel, Abdelwahab Meddeb

Bahsoun, Jihad 19 December 2017 (has links)
In this work we have tried to bring out the meaning and usefulness of rewriting in comparative literature. Why and how do writers seize a work, or a text in general (a sacred text, for example), and transform it? Francophone writers of the 20th and 21st centuries (or writers steeped in French culture) draw on models of rewriting found in classical European literature; yet they bring additional richness to the hypotexts on several levels: cultural, philosophical, and aesthetic. A parallel between the verbal arts and the visual arts is drawn, the latter being meant to shed light on the former and to help grasp how the passage from one creative work to another takes place, in continuity and in rupture. Several authors, including Maryse Condé, Assia Djebar, Nedim Gürsel, and Abdelwahab Meddeb, use rewriting, whether as interpretation or as aesthetic work on the writing itself (pastiche, parody, etc.), to express their thought and to convey their messages more fully. Our thesis sets out to uncover these thoughts and messages and to interpret them by bringing out the writers' motives. The works are at once signals (the meaning intended by the authors, what they consciously wish to express, their intentional project) and symptoms (what the works additionally reveal to the reader).
19

Un hybride du groupe de Thompson F et du groupe de tresses B∞ / A hybrid of Thompson’s group F and the braid group B∞

Tesson, Emilie 02 March 2018 (has links)
We study a certain monoid defined by a presentation, denoted P, which is a hybrid of the presentations of the infinite braid monoid and of Thompson's monoid. We use several approaches. First, we describe a convergent rewrite system for the presentation P, which in particular provides a solution to the word problem of P and brings the hybrid monoid close to Thompson's monoid. Next, following the model of the braid monoid, we use the factor-reversing method to analyze the left-divisibility relation and show in particular that the hybrid monoid is cancellative and admits conditional right lcms. We then study the Garside combinatorics of the hybrid: for every integer n, we introduce an element ∆(n) as the right lcm of the first (n−1) atoms and investigate the left divisors of the elements ∆(n), called simple elements. The main results are a count of the left divisors of ∆(n) and an effective determination of the normal forms of simple elements. We conclude by constructing representations of the hybrid monoid in various monoids, in particular a representation by matrices with Laurent-polynomial entries, which we conjecture to be faithful.
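The role of a convergent rewrite system in solving the word problem can be sketched in a few lines: reduce both words to their unique normal forms and compare them. The rules below are a deliberately tiny made-up system, not the presentation P of the hybrid monoid studied in the thesis; only the mechanism is being illustrated.

```python
# Sketch of how a convergent string rewrite system solves the word problem for
# a monoid presentation: repeatedly apply the rules until no rule applies, and
# compare normal forms. The single rule below is a toy system (two commuting
# generators), not the presentation P of the thesis.

def normal_form(word, rules):
    """Rewrite `word` with the (assumed convergent) string rewriting rules
    until no left-hand side occurs; the result is the unique normal form."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)   # rewrite leftmost occurrence
                changed = True
                break
    return word

def same_element(u, v, rules):
    """Two words represent the same monoid element iff their normal forms agree."""
    return normal_form(u, rules) == normal_form(v, rules)

# Toy convergent system: "ba" -> "ab" (generators a and b commute).
rules = [("ba", "ab")]
print(normal_form("bab", rules))          # 'abb'
print(same_element("ba", "ab", rules))    # True
```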
20

Order-sensitive XML Query Processing Over Relational Sources

Murphy, Brian R 05 May 2003 (has links)
XML is an emerging standard format for data on the Web as well as in business applications. To store and access this information efficiently, database technology must be utilized. A relational database system, the most established and mature technology for query processing and storage, creates a strong foundation for such an XML data management system. However, while relational databases are queried with SQL, the original user queries are written in XQuery, an XML query language that supports order-sensitive queries, since XML is an order-sensitive markup language. A major problem with loading XML into a relational database is the lack of native SQL support for order handling: while XQuery has order and positional support, SQL does not. For example, a user browsing XML data about music albums would have a hard time querying a relational backend for the first three songs of a track list. Mapping XML documents to relational backends also proves hard, as the data models (hierarchical elements versus flat tables) are so different. For these reasons, among others, the Rainbow System is being developed at WPI as a system that bridges XML data and relational data. This thesis in particular deals with the algebra operators that affect order, order-sensitive loading and mapping of XML documents, and the pushdown of order handling into SQL-capable query engines. The contributions of the thesis are the order-sensitive rewrite rules, new XML-to-relational mappings with different order styles, order-sensitive template-driven SQL generation, and a proposed metadata table for order-sensitive information. A system implementing these techniques, with XQuery as the XML query language and Oracle as the backend relational storage system, has been developed. Experiments were designed to measure execution time along several dimensions: scalability as the backend data set grows, scalability as the size of the results returned from the database grows, and query execution time under different loading types. The experimental results are encouraging: query execution with the relational backend proves to be much faster than native execution within the Rainbow system. These results confirm the practical utility of our proposed order-sensitive XQuery execution solution over relational data.
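A common way to preserve document order when shredding XML into relations, in the spirit of what this abstract describes, is to store an explicit sibling-position column and push ordering into SQL. The sketch below is a hypothetical illustration: the schema, data, and query are invented for the example and are not Rainbow's actual mapping or generated SQL.

```python
# Hedged sketch of one standard way to keep XML document order in a relational
# store: shred each element into a row with an explicit position column, then
# push order-sensitive queries into SQL (ORDER BY ... LIMIT ...). The schema
# and query below are made up; this is not Rainbow's mapping or SQL generation.

import sqlite3
import xml.etree.ElementTree as ET

doc = "<album><song>One</song><song>Two</song><song>Three</song><song>Four</song></album>"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE song (album_id INTEGER, pos INTEGER, title TEXT)")

# Loading: record each <song>'s sibling position so document order survives.
for pos, song in enumerate(ET.fromstring(doc).findall("song"), start=1):
    conn.execute("INSERT INTO song VALUES (?, ?, ?)", (1, pos, song.text))

# "First three songs of the track list" becomes an ordered, limited SQL query.
first_three = conn.execute(
    "SELECT title FROM song WHERE album_id = ? ORDER BY pos LIMIT 3", (1,)
).fetchall()
print([t for (t,) in first_three])    # ['One', 'Two', 'Three']
```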
