About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
711

Efficient XML Stream Processing with Automata and Query Algebra

Jian, Jinhui 27 August 2003 (has links)
"XML Stream Processing is an emerging technology designed to support declarative queries over continuous streams of data. The interest in this novel technology is growing due to the increasing number of real world applications such as monitoring systems for stock, email, and sensor data that need to analyze incoming data streams. There are however several open challenges. One, we must develop efficient techniques for pattern matching over the nested tag structure of XML as data streams in token by token. Two, we must develop techniques for query optimization to cope with complex user queries while given only incomplete knowledge of source data. When considering these challenges separately, then automata models have been shown by several recent works to be suited to tackle the first problem, while algebraic query models have been regarded as appropriate foundations to tackle the second problem. The question however remains how best to put these two models together to have an overall effective system. This thesis aims to exactly fill this gap. We propose a unified query framework to augment automata-style processing with algebra-based query optimization capabilities. We use the automata model to handle the token-oriented streaming XML data and use the algebraic model to support set-oriented optimization techniques. The framework has been designed in two layers such that the logical layer provides a uniform abstraction across the two models and any optimization techniques can be applied in either model uniformly using query rewritings. The physical layer, on the other hand, allows us to refine the implementation details after the logical layer optimization. We have successfully applied this framework in the Raindrop stream processing system. We have identified several trade-offs regarding which query functionality should be realized in which specific query model. We have developed novel optimization techniques to exploit these trade-offs. For example, a query rewrite rule can flexibly push down a pattern matching into the automata model when the optimizer decides that it is more efficient to do so. To deal with incomplete knowledge of source data, we have also developed novel techniques to monitor data statistics, based on which we can apply optimization techniques to choose the optimal query plan at runtime. Our experimental study confirms that considerable performance gains are being achieved when these optimization techniques are applied in our system."
712

Self Maintenance of Materialized XQuery Views via Query Containment and Re-Writing

Nilekar, Shirish K. 24 April 2006 (has links)
In recent years XML, the eXtensible Markup Language, has become the de facto standard for publishing and exchanging information on the web and in enterprise data integration systems. Materialized views are often used in information integration systems to present a unified schema for efficient querying of distributed and possibly heterogeneous data sources. Along similar lines, ACE-XQ, an XQuery-based semantic caching system, shows the significant performance gains achieved by caching query results (as materialized views) and using these materialized views, together with query containment techniques, to answer future queries over distributed XML data sources. To keep the data in these materialized views of ACE-XQ up to date, the views must be maintained, i.e., whenever the base data changes, the corresponding cached data in the materialized view must also be updated. This thesis builds on the query containment ideas of ACE-XQ and proposes an efficient approach for the self-maintenance of materialized views. Our experimental results illustrate the significant performance improvement achieved by this strategy over view re-computation in a variety of situations.
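No code accompanies the abstract; purely as an illustrative sketch of the self-maintenance idea (incrementally folding base-data changes into a cached selection/projection view instead of recomputing it — not ACE-XQ's actual algorithm, and with hypothetical field names):

```python
from dataclasses import dataclass, field

@dataclass
class MaterializedView:
    """Caches books below a price bound; base-data changes are folded into the
    cache directly, so the view never has to be recomputed from scratch."""
    max_price: float
    rows: dict = field(default_factory=dict)   # key -> projected row

    def on_insert(self, key, book):
        # maintenance step: only base changes relevant to the view touch it
        if book["price"] < self.max_price:
            self.rows[key] = {"title": book["title"], "price": book["price"]}

    def on_delete(self, key):
        self.rows.pop(key, None)

view = MaterializedView(max_price=50.0)
view.on_insert(1, {"title": "XQuery Basics", "price": 35.0, "publisher": "X"})
view.on_insert(2, {"title": "Rare Folio", "price": 900.0, "publisher": "Y"})
print(view.rows)   # only the book under the price bound is cached
```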
713

MASS: A Multi-Axis Storage Structure for Large XML Documents

Deschler, Kurt W 06 May 2002 (has links)
Due to the wide acceptance of the World Wide Web Consortium (W3C) XPath language specification, native indexing for XML is needed to support path expression queries efficiently. XPath describes the different document tree relationships that may be queried as a set of axes. Many recent proposals for XML indexing focus on accelerating only a small subset of the expressions possible using these axes. In particular, queries by ordinal position and updates that alter document structure are not well supported. A more general indexing solution is needed that not only offers efficient evaluation of all of the XPath axes, but also allows for efficient document update. We introduce MASS, a Multiple Axis Storage Structure, to meet the performance challenge posed by the XPath language. MASS is a storage and indexing solution for large XML documents that eliminates the need for external secondary storage. It is designed around the XPath language, providing efficient interfaces for evaluating all XPath axes. The clustered organization of MASS allows several different axes to be evaluated using the same index structure. This clustering, in conjunction with an internal compression mechanism exploiting specific XML characteristics, keeps the size of the structure small, which further aids efficiency. MASS introduces a versatile scheme for representing document node relationships that always allows for efficient updates. Finally, the integration of a ranked B+ tree allows MASS to efficiently evaluate XPath axes in large documents. We have implemented MASS in C++ and measured the performance of many different XPath expressions and document updates. Our experimental evaluation illustrates that MASS exhibits excellent performance characteristics for both queries and updates and scales well to large documents, making it a practical solution for XML storage. In conjunction with text indexing, MASS provides a complete solution for XML indexing.
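MASS's internals are not reproduced in this listing; as a hedged sketch of the general idea behind rank-aware access — an order-statistic structure that supports both ordinal-position lookup and structural updates in O(log n), illustrative only and not the actual ranked B+ tree — consider:

```python
class RankedIndex:
    """Fenwick-tree-based order-statistic index over document node slots."""
    def __init__(self, capacity):
        self.n = capacity
        self.tree = [0] * (capacity + 1)

    def _add(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def insert(self, slot):            # a node now occupies this slot
        self._add(slot, 1)

    def delete(self, slot):            # structural update: node removed
        self._add(slot, -1)

    def kth(self, k):                  # slot of the k-th occupied node, in document order
        pos = 0
        for b in range(self.n.bit_length(), -1, -1):
            step = 1 << b
            if pos + step <= self.n and self.tree[pos + step] < k:
                pos += step
                k -= self.tree[pos]
        return pos + 1

idx = RankedIndex(100)
for slot in (7, 10, 42):
    idx.insert(slot)
print(idx.kth(2))   # 10: ordinal access stays cheap even as the structure is updated
```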
714

Sistema de operação remota e supervisão de iluminação pública / System of remote operation and supervision of public street lighting

Fonseca, Cleber Costa da 04 March 2013 (has links)
Systems for the remote operation and supervision of public street lighting consist of devices attached to the light points and interconnected via a network, together with applications running on computers that report problems at the lighting points and compute the energy consumption. The objectives of this work are to study the technologies used in related work, to propose a system dedicated to public lighting, and to deploy the proposed system in a pilot test in order to evaluate its operation and supervision characteristics. The architecture of the proposed system is modular and expandable; its cell-based model allows new sets of devices to be added according to demand. In the development of the work, the C# language is adopted to implement operation and supervision through the CyberOPC (Cybernetic OPC) standard, and XML files are used to describe the devices and to define the network topology. The results obtained in simulation and in the pilot test validate the proposed methodology and architecture.
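The configuration files themselves are not reproduced here; purely as a hypothetical illustration of generating an XML description of the devices in one cell and their place in the network topology (element and attribute names are invented, not taken from the thesis):

```python
import xml.etree.ElementTree as ET

# Hypothetical device/topology description for one cell of light points.
cell = ET.Element("cell", id="cell-01", gateway="192.168.0.10")
for lp_id, address in [("lp-001", "31"), ("lp-002", "32")]:
    lp = ET.SubElement(cell, "lightPoint", id=lp_id, address=address, lampPowerW="150")
    ET.SubElement(lp, "status", lampOn="false", currentA="0.0")

ET.ElementTree(cell).write("topology.xml", encoding="utf-8", xml_declaration=True)
```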
715

Suitability of the NIST Shop Data Model as a Neutral File Format for Simulation

Harward, Gregory Brent 07 July 2005 (has links)
Due to its successful application in Internet-related fields, the Extensible Markup Language (XML) and its related technologies are being explored as a revolutionary software file format technology for providing increased interoperability in the discrete-event simulation (DES) arena. The National Institute of Standards and Technology (NIST) has developed an XML-based information model (XSD) called the Shop Data Model (SDM), which describes the contents of a neutral file format (NFF) promoted as a means to make manufacturing simulation technology more accessible to a larger group of potential customers. Using a two-step process, this thesis evaluates the NIST SDM information model in terms of its ability to encapsulate, both conceptually and syntactically, the informational requirements of one vendor's simulation models, in order to determine its suitability as an NFF for the DES industry. ProModel Corporation, a leading software vendor in the DES industry since 1988, serves as the test case for this evaluation. The first step in the evaluation is to map the contents of ProModel's information model to an XML schema file (XSD). Next, the contents of this new XSD file are categorized and compared to the SDM information model in order to evaluate compatibility. After performing this comparison, observations are made on the challenges that simulation vendors might encounter when implementing the proposed NIST SDM. Two groups of limitations cause the NIST SDM to support less than a third of the ProModel XSD elements: paradigm differences between the two information models, and limitations due to the incomplete status of the NIST SDM specification. Despite these limitations, this thesis shows by comparison that XML technology poses no limitation that would invalidate its ability to syntactically represent a common information model or an associated XML NFF. While only 28% of the ProModel elements are currently supported by the SDM, appropriate changes to the SDM would allow the information model to serve as a foundation upon which a common information model and neutral file format for the DES industry could be built using XML technologies.
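The schemas themselves are not part of this listing; as a rough sketch of the syntactic side of such a comparison (matching xsd:element names only, with hypothetical file names — the thesis also performs a conceptual mapping that a simple name match cannot capture):

```python
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

def element_names(xsd_path):
    """Collect the names of all xsd:element declarations in a schema file."""
    tree = ET.parse(xsd_path)
    return {el.get("name") for el in tree.iter(XSD_NS + "element") if el.get("name")}

# hypothetical file names; the actual schemas are not included in this listing
vendor = element_names("promodel_export.xsd")
sdm = element_names("nist_shop_data_model.xsd")

covered = vendor & sdm
print(f"{len(covered)} of {len(vendor)} vendor elements "
      f"({100 * len(covered) / len(vendor):.0f}%) map directly onto the SDM")
```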
716

Optimisation de la performance des entrepôts de données XML par fragmentation et répartition / Optimizing the performance of XML data warehouses through fragmentation and distribution

Mahboubi, Hadj 08 December 2008 (has links) (PDF)
XML data warehouses form an interesting basis for decision-support applications that exploit heterogeneous data coming from multiple sources. However, current native XML database management systems (DBMSs) show limitations in terms of both the volume of data they can manage and the performance of complex analytical queries. It therefore appears necessary to design methods to optimize this performance.

To reach this goal, this thesis proposes to address both limitations jointly, first through fragmentation and then through distribution over a data grid. We first focused on the fragmentation of XML data warehouses and proposed methods that are, to our knowledge, the first contributions in this area. These methods exploit an XQuery workload to derive a derived horizontal fragmentation schema.

We first proposed adapting the most effective techniques from the relational domain to XML data warehouses, and then an original fragmentation method based on k-means clustering, which allows us to control the number of fragments. We finally proposed an approach for distributing an XML data warehouse over a grid. These proposals led us to define a reference model for XML data warehouses that unifies and extends the models existing in the literature.

We finally chose to validate our methods experimentally. To this end, we designed and developed a benchmark for XML data warehouses, XWeB. The experimental results we obtained show that we reached our goal of mastering both the volume of XML data and the processing time of complex decision-support queries. They also show that our k-means-based fragmentation method outperforms classical derived horizontal fragmentation methods, both in terms of performance gain and of algorithm overhead.
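Purely as an illustrative, hand-rolled sketch of the k-means idea applied to a query workload (binary vectors recording which selection predicates each query uses; not the author's actual algorithm or feature encoding):

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means over binary predicate-usage vectors (illustrative only)."""
    random.seed(seed)
    centers = random.sample(vectors, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            best = min(range(k),
                       key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            groups[best].append(v)
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Each row records which selection predicates a workload query uses
# (hypothetical workload; the real input is an XQuery workload).
workload = [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 0, 1]]
fragments = kmeans(workload, k=2)
# Queries clustered together share predicates, so the documents they select
# end up in the same horizontal fragment; k fixes the number of fragments.
```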
717

Konstruktion av webbaserat SMS-system med Bluetooth teknik : Jämförelse mellan programspråk för Bluetooth applikation i Windows XP / Construction of a web-based SMS system using Bluetooth technology: a comparison of programming languages for a Bluetooth application in Windows XP

Holm, Johan January 2009 (has links)
As there is now wide access to the World Wide Web, there is a demand for messages written on the web to reach us not only via a computer but also via our mobile phones. The aim of this thesis project was, by means of a literature study and interviews, to construct a web-based SMS system that can be used to send SMS text messages to an arbitrary mobile phone device. Sent messages are also stored in a database and administered via the website. A comparison was also carried out to determine the most suitable programming language for the Bluetooth application; the method was based on a set of criteria. The comparison showed that Java, with the library from BlueCove, was the most suitable programming language. For the construction of the SMS system an object-based development model was applied, although not at full scale. The SMS system consists of a website, a database, a Bluetooth application, and Bluetooth and mobile phone units. The website is based on ASP.NET 2.0, the database on XML, and the Bluetooth application was written in Java. Checks were performed to verify that the system meets the requirement specification.
718

Sales Information Provider / Försäljningsdatahämtning

Karlsson, Mathias January 2005 (has links)
This report investigates the possibility of loading large amounts of data into a database and performing aggregations, in order to then deliver a set of data in a convenient way to a client that will process it. The work spans from the database to an API that can be implemented in any application wishing to retrieve the information, and involves intelligent retrieval of data for visualization. It is one of two degree projects that form the basis for a visualization of sales data for the sporting goods retail chain Stadium AB. Stadium AB currently has about 80 stores, which means a large volume of sales per week. The idea is that this project, together with the parallel degree project, will help Stadium AB when purchasing products for coming seasons. The degree project that ran in parallel with this one visualizes the quantity of products sold at a given point in time, which lets Stadium see at which times it has too few products in the store and when it has too many; this degree project supplies the visualization application with the information it requires. Sales Data Provider, as the application is called, is built on a data warehouse solution. It contains pre-computed sales data at different levels, so that one can easily drill down through the hierarchy and see how different products are selling. As the transport mechanism from the data warehouse to the client it uses Web Services with XML as the medium, allowing the data warehouse and the client to be separated. In addition, it contains a logical client that handles all calls to the Web Service and exposes an API that the visualization application can use. The client contains logic both for fetching data from the Web Service and for exposing the data through an object model.
719

Design and implementation of a database programming language for XML-based applications

Schuhart, Henrike. January 2006 (has links)
Thesis (doctoral)--Universität zu Lübeck, 2006. / Includes bibliographical references (p. 161-169) and index.
720

Utveckling av ett verktyg för produktkataloggenerering / Development of a Product Catalogue Generating Tool

Fritz, Jenny January 2013 (has links)
Today, product catalogues are published and distributed by many retail companies, both large and small. However, catalogue production can be both time-consuming and resource-intensive. The purpose of this thesis was to find a solution to that problem by investigating the needs and conditions involved and then developing a tool that can ease the work of creating product catalogues. The goal was for the resulting tool to automatically produce a product catalogue in PDF format from an existing article register. A pilot study showed that, despite differences in the layout of existing catalogues, there are still some common elements, such as product image and price. This was used as a basis during the implementation, in which it was assumed that an article register, regardless of the type of data source, always contains certain information elements that can be published. To allow for differences in the graphical design of a catalogue, a separate template handler was implemented. Its purpose is to give the user the opportunity to adjust, for instance, text field placement, image dimensions, and background images according to individual needs and preferences. To reach these goals the scope of the project was allowed to grow, and during spring 2013 the catalogue generation tool worked in accordance with the goals that had been set. Even so, there is still considerable room for further development, especially since the need for more efficient catalogue production appears to be large.
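The tool's source is not part of this listing; as a hedged sketch of the core idea (laying out rows from an article register onto PDF pages using positions read from a layout template), assuming the third-party reportlab library and entirely hypothetical field names:

```python
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

def generate_catalogue(articles, template, outfile="catalogue.pdf"):
    """Lay out one product per row, using positions taken from a template dict."""
    pdf = canvas.Canvas(outfile, pagesize=A4)
    width, height = A4
    y = height - template["top_margin"]
    for art in articles:
        pdf.setFont("Helvetica-Bold", 11)
        pdf.drawString(template["name_x"], y, art["name"])
        pdf.setFont("Helvetica", 10)
        pdf.drawString(template["price_x"], y, f'{art["price"]} kr')
        y -= template["row_height"]
        if y < template["bottom_margin"]:        # start a new catalogue page
            pdf.showPage()
            y = height - template["top_margin"]
    pdf.save()

# hypothetical article register and layout template
articles = [{"name": "Löparsko Modell A", "price": 899},
            {"name": "Träningströja", "price": 249}]
template = {"top_margin": 60, "bottom_margin": 50, "row_height": 24,
            "name_x": 50, "price_x": 400}
generate_catalogue(articles, template)
```

Keeping layout values in a separate template dictionary mirrors the separation between the article data and the graphical design described above.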
