121. XML Data Modeling for Network-Based Telemetry Systems

Price, Jeremy C.; Moore, Michael S.; Malatesta, Bill A. October 2008
ITC/USA 2008 Conference Proceedings / The Forty-Fourth Annual International Telemetering Conference and Technical Exhibition / October 27-30, 2008 / Town and Country Resort & Convention Center, San Diego, California

Network-based telemetry systems are often made up of many components from multiple vendors. The complexity involved in coordinating the design, integration, configuration, and operation of these systems has required instrumentation engineers to become experts in the tools and hardware of various vendors, while interoperation between the various tools and systems remains very limited. One approach toward a more efficient method of managing these systems is to define a common language for describing the goals of the test, the measurements to be acquired, and the equipment that is available to compose a system. Through an open working group process, the iNET program is defining an eXtensible Markup Language (XML)-based language for describing instrumentation and telemetry systems. The language is designed with multiple aspects that allow filtered views into the instrumentation system, making the creation of the various parts of the documents more straightforward and understandable to the type of user providing the information. This paper describes the iNET metadata project, the model-driven approach being pursued, and the current state of the iNET metadata language.
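
As a rough illustration of the kind of aspect-filtered description the paper discusses, the sketch below builds a tiny instrumentation document and extracts one filtered view from it. All element and attribute names here are invented for illustration; they are not taken from the actual iNET metadata language.

```python
import xml.etree.ElementTree as ET

# Hypothetical instrumentation description; the vocabulary is invented
# for illustration and does not reflect the real iNET metadata language.
DOC = """\
<TestConfiguration>
  <Measurement name="engineTemp" unit="degC" sampleRate="100"/>
  <Measurement name="vibration" unit="g" sampleRate="5000"/>
  <Equipment vendor="VendorA" model="DAU-1000" role="acquisition"/>
  <Equipment vendor="VendorB" model="TX-200" role="transmission"/>
</TestConfiguration>
"""

root = ET.fromstring(DOC)

# A "filtered view": each type of user sees only the aspect relevant to
# them, e.g. the measurement list for the instrumentation engineer.
for m in root.iter("Measurement"):
    print(m.get("name"), m.get("unit"), m.get("sampleRate"))
```
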
122. Översikt och tillämpning av XML (Overview and Application of XML)

Al-abuhalje, Afrah; Al-abuhalje, Sanaa January 2007
HTML is not powerful enough to handle the increasing demands of today's Internet applications, which is one of the reasons why a new markup language called XML has been introduced. In this report we describe the basics as well as the more advanced parts of XML, such as DTD, XML Schema and XSLT.

XML combines the simplicity of HTML with the possibilities of SGML. One of the premier strengths of XML is that it may be used to store any type of data without any consideration of how that data will later be presented: content and presentation are separated completely. Another important property is that XML documents are stored as ordinary text files, which means that XML is system and platform independent.

Our aim consists of two goals. One goal is to use XML to create a suitable format for storing configuration data for a network emulator. To emulate the conditions present in a real network, the emulator can emulate bit errors, packet losses, bandwidth limitation and delay; settings such as the desired frequency of packet losses and bit errors are examples of configuration data. The other goal is to describe XML in general, from its basics to its more advanced parts. As we describe XML, we continuously show how we apply this knowledge to our application, so that the reader gains a good insight into how XML works.
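
To make the application concrete, here is a minimal sketch of how emulator settings such as packet loss, bit errors, bandwidth limitation and delay could be stored in XML and read back. The tag and attribute names are assumptions for illustration, not the storage format developed in the thesis.

```python
import xml.etree.ElementTree as ET

# Hypothetical configuration format for a network emulator; the tag and
# attribute names are illustrative, not the thesis's actual schema.
CONFIG = """\
<emulator>
  <link name="uplink">
    <packetLoss rate="0.01"/>
    <bitError rate="1e-6"/>
    <bandwidth limit="512" unit="kbit/s"/>
    <delay mean="40" jitter="5" unit="ms"/>
  </link>
</emulator>
"""

# Read the settings back, as the emulator would at startup.
for link in ET.fromstring(CONFIG).iter("link"):
    loss = float(link.find("packetLoss").get("rate"))
    delay = float(link.find("delay").get("mean"))
    print(link.get("name"), loss, delay)
```
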
123. Supporting the Procedural Component of Query Languages over Time-Varying Data

Gao, Dengfeng January 2009
As everything in the real world changes over time, the ability to model this temporal dimension of the real world is essential to many computer applications. Almost every database application involves the management of temporal data. This applies not only to relational data but also to any data that models the real world, including XML data. Expressing queries on time-varying (relational or XML) data using a standard query language (SQL or XQuery) is more difficult than writing queries on nontemporal data. In this dissertation, we present minimal valid-time extensions to XQuery and SQL/PSM, focusing on the procedural aspect of the two query languages and efficient evaluation of sequenced queries.

For XQuery, we add valid-time support by minimally extending the syntax and semantics of XQuery. We adopt a stratum approach which maps a τXQuery query to a conventional XQuery query. The first part of the dissertation focuses on how to perform this mapping, in particular on mapping sequenced queries, which are by far the most challenging. The critical issue in supporting sequenced queries (in any query language) is time-slicing the input data while retaining period timestamping. Timestamps are distributed throughout an XML document, rather than uniformly in tuples, complicating the temporal slicing while also providing opportunities for optimization. We propose five optimizations of our initial maximally-fragmented time-slicing approach: selected node slicing, copy-based per-expression slicing, in-place per-expression slicing, and idiomatic slicing, each of which reduces the number of constant periods over which the query is evaluated. We also extend a conventional XML query benchmark to effect a temporal XML query benchmark. Experiments on this benchmark show that in-place slicing is the best.

We then apply the approaches used in τXQuery to temporal SQL/PSM. The stratum architecture and most of the time-slicing techniques carry over to temporal SQL/PSM. An empirical comparison is performed by running a variety of temporal queries.
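
The core time-slicing idea can be sketched briefly: partition the timeline into constant periods, the maximal intervals over which the set of valid data does not change, and evaluate the sequenced query once per period. The sketch below assumes simple period-stamped tuples rather than XML; it illustrates only the concept, not the τXQuery implementation.

```python
from itertools import chain

# Period-stamped facts: (value, valid_from, valid_to), half-open periods.
facts = [("A", 1, 5), ("B", 3, 8), ("C", 6, 9)]

# Constant periods: between consecutive period boundaries the set of
# valid facts cannot change, so a sequenced query can be evaluated once
# per constant period on a conventional (nontemporal) engine.
boundaries = sorted(set(chain.from_iterable((f, t) for _, f, t in facts)))
for start, end in zip(boundaries, boundaries[1:]):
    snapshot = [v for v, f, t in facts if f <= start and end <= t]
    print(f"[{start},{end}): {snapshot}")
```

Fewer constant periods means fewer per-slice evaluations, which is exactly what the optimizations listed in the abstract aim to achieve.
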
124. Zpracování nekorektních XML dat (Processing of Incorrect XML Data)

Svoboda, Martin January 2010
XML documents and related technologies represent a widely accepted standard for managing and exchanging semi-structured data. However, a surprisingly high number of XML documents is affected by well-formedness errors, structural invalidity, or data inconsistencies. The aim of this thesis is an analysis of existing approaches, resulting in the proposal of a new correction framework. The introduced model involves repairs of elements and attributes with respect to single type tree grammars. By inspecting the state space of an automaton recognising regular expressions, we are always able to find all minimal repairs. These repairs are compactly represented by recursively nested multigraphs, which can be translated into particular sequences of edit operations altering data trees. We propose four particular algorithms and provide a prototype implementation supplemented with experimental results. The most efficient algorithm heuristically follows only promising repair directions and avoids repeated computations using a caching mechanism.
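
The following toy sketch conveys the general flavour of automaton-based repair: find a minimal-cost sequence of insert, delete and relabel operations that makes an element sequence conform to a content model, by shortest-path search over (position, automaton state) pairs. The content model and the search here are simplifying assumptions; the thesis's actual algorithms work over nested multigraphs and single type tree grammars.

```python
import heapq

# Minimal-edit repair of a children sequence against a content model.
# DFA for the content model "a b* c"; illustrative toy only.
DELTA = {(0, "a"): 1, (1, "b"): 1, (1, "c"): 2}
ACCEPT = {2}
ALPHABET = ["a", "b", "c"]

def min_repair_cost(seq):
    # Dijkstra over states (i, q): i symbols consumed, DFA in state q.
    pq, seen = [(0, 0, 0)], set()
    while pq:
        cost, i, q = heapq.heappop(pq)
        if (i, q) in seen:
            continue
        seen.add((i, q))
        if i == len(seq) and q in ACCEPT:
            return cost
        if i < len(seq):  # delete seq[i]
            heapq.heappush(pq, (cost + 1, i + 1, q))
        for s in ALPHABET:
            if (q, s) in DELTA:
                q2 = DELTA[(q, s)]
                heapq.heappush(pq, (cost + 1, i, q2))  # insert s
                if i < len(seq):  # match (cost 0) or relabel seq[i] as s
                    heapq.heappush(pq, (cost + (seq[i] != s), i + 1, q2))
    return None

print(min_repair_cost(["a", "b", "b", "c"]))  # 0: already valid
print(min_repair_cost(["b", "c"]))            # 1 edit, e.g. insert "a"
```
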
125. An XML-based framework for electronic business document integration with relational databases

Shamsedin Tekieh, Razieh Sadat (Information Systems, Technology & Management, Australian School of Business, UNSW) January 2009
Small and medium enterprises (SMEs) are becoming increasingly engaged in B2B interactions. The ubiquity of the Internet and the quasi-reliance on electronic document exchanges with larger trading partners have fostered this move. The main technical challenge this brings to SMEs is business document integration: they need to exchange business documents in heterogeneous document formats and also integrate these documents with internal information systems. Often they cannot afford expensive, customized and proprietary solutions for document exchange and storage; rather, they need cost-effective approaches designed around open standards and backed by easy-to-use information systems. In this dissertation, we investigate the problem of business document integration for SMEs following a design science methodology. We propose a framework and conceptual architecture for a business document integration system (BDIS). By studying existing business document formats, we recommend using the GS1 XML standard as the intermediate format for business documents in BDIS; the GS1 standards are widely used in supply chains and logistics globally. We present an architecture for BDIS consisting of two layers: one for the design of an internal information system based on relational databases and capable of storing XML business documents, and the other enabling the exchange of heterogeneous business documents at runtime. For the design layer, we leverage and extend existing XML schema conversion approaches to propose a customized and novel approach for converting GS1 XML document schemas into relational schemas. For the runtime layer, we propose wrappers as architectural components for converting various electronic document formats into the GS1 XML format. We demonstrate our approach through a case study involving a GS1 XML business document. We have implemented a prototype BDIS and evaluated it against existing research and commercial tools for XML-to-relational schema conversion. The results show that it generates operational and simpler relational schemas for GS1 XML documents. In conclusion, the proposed framework enables SMEs to engage effectively in electronic business.
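
A minimal sketch of the design-layer idea, shredding an XML business document into relational tables, is shown below. The document vocabulary is a made-up, GS1-flavoured example, and the naive shredding does not reproduce the thesis's schema conversion approach.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical GS1-flavoured order document; the element names are
# illustrative, not an actual GS1 XML schema.
DOC = """\
<order id="PO-1001" buyer="5412345000013">
  <line gtin="05412345000020" qty="12"/>
  <line gtin="05412345000037" qty="3"/>
</order>
"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, buyer TEXT)")
db.execute("""CREATE TABLE order_line (
    order_id TEXT REFERENCES orders(id), gtin TEXT, qty INTEGER)""")

# Shred the document: one row per order, one row per repeating line item.
root = ET.fromstring(DOC)
db.execute("INSERT INTO orders VALUES (?, ?)",
           (root.get("id"), root.get("buyer")))
for line in root.iter("line"):
    db.execute("INSERT INTO order_line VALUES (?, ?, ?)",
               (root.get("id"), line.get("gtin"), int(line.get("qty"))))

print(db.execute("SELECT * FROM order_line").fetchall())
```
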
126. Evaluation of Effective XML Information Retrieval

Pehcevski, Jovan (jovanp@cs.rmit.edu.au) January 2007
XML is being adopted as a common storage format in scientific data repositories, digital libraries, and on the World Wide Web. Accordingly, there is a need for content-oriented XML retrieval systems that can efficiently and effectively store, search and retrieve information from XML document collections. Unlike traditional information retrieval systems where whole documents are usually indexed and retrieved as information units, XML retrieval systems typically index and retrieve document components of varying granularity. To evaluate the effectiveness of such systems, test collections where relevance assessments are provided according to an XML-specific definition of relevance are necessary. Such test collections have been built during four rounds of the INitiative for the Evaluation of XML Retrieval (INEX).

There are many different approaches to XML retrieval; most approaches either extend full-text information retrieval systems to handle XML retrieval, or use database technologies that incorporate existing XML standards to handle both XML presentation and retrieval. We present a hybrid approach to XML retrieval that combines text information retrieval features with XML-specific features found in a native XML database. Results from our experiments on the INEX 2003 and 2004 test collections demonstrate the usefulness of applying our hybrid approach to different XML retrieval tasks.

A realistic definition of relevance is necessary for meaningful comparison of alternative XML retrieval approaches. The three relevance definitions used by INEX since 2002 comprise two relevance dimensions, each based on topical relevance. We perform an extensive analysis of the two INEX 2004 and 2005 relevance definitions, and show that assessors and users find them difficult to understand. We propose a new definition of relevance for XML retrieval, and demonstrate that a relevance scale based on this definition is useful for XML retrieval experiments.

Finding the appropriate approach to evaluate XML retrieval effectiveness is the subject of ongoing debate within the XML information retrieval research community. We present an overview of the evaluation methodologies implemented in the current INEX metrics, which reveals that the metrics follow different assumptions and measure different XML retrieval behaviours. We propose a new evaluation metric for XML retrieval and conduct an extensive analysis of the retrieval performance of simulated runs to show what is measured. We compare the evaluation behaviour obtained with the new metric to the behaviours obtained with two of the official INEX 2005 metrics, and demonstrate that the new metric can be used to reliably evaluate XML retrieval effectiveness.

To analyse the effectiveness of XML retrieval in different application scenarios, we use evaluation measures in our new metric to investigate the behaviour of XML retrieval approaches under the following two scenarios: the ad-hoc retrieval scenario, exploring the activities carried out as part of the INEX 2005 Ad-hoc track; and the multimedia retrieval scenario, exploring the activities carried out as part of the INEX 2005 Multimedia track. For both application scenarios we show that, although different values for retrieval parameters are needed to achieve optimal performance, the desired textual or multimedia information can be effectively located using a combination of XML retrieval approaches.
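
For readers unfamiliar with retrieval evaluation, the sketch below computes a plain precision-at-k over ranked XML element paths. It is a generic baseline measure shown only for orientation; it is neither an INEX metric nor the new metric proposed in the thesis, and the run and judgments are invented.

```python
# Generic precision-at-k over ranked element results; illustration only,
# not the INEX metrics or the metric proposed in this thesis.
def precision_at_k(ranked, relevant, k):
    hits = sum(1 for elem in ranked[:k] if elem in relevant)
    return hits / k

run = ["/article[1]/sec[2]", "/article[1]", "/article[1]/sec[5]/p[1]"]
qrels = {"/article[1]/sec[2]", "/article[1]/sec[5]/p[1]"}
print(precision_at_k(run, qrels, 3))  # 2 of the top 3 elements relevant
```

An XML-specific metric must go further than this, e.g. by discounting overlapping elements and grading how exhaustively and specifically a component covers the topic, which is precisely what makes the evaluation question debated.
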
127. Confidentiality of XML documents by pool encryption

Geuer-Pollmann, Christian January 2003
Also published as: Doctoral dissertation, University of Siegen, 2003.
128. On detecting and repairing inconsistent schema mappings

Ho, Terence Cheung-Fai
A huge amount of data flows around the Internet every second, but for the data to be useful at its destination, it must be presented in a way that the target system can readily interpret. Current data exchange technologies may rearrange the structure of data to suit expectations at the target. However, there may be semantics behind the data (e.g., knowing the title of a book determines its number of pages) that are violated after data translation. These semantics are expressed as integrity constraints (ICs) in a database. Currently, there is no guarantee that exchanged data conforms to the target's ICs; as a result, existing applications (e.g., user queries) that assume such semantics will no longer function correctly. Current constraint repair techniques deal with data after it has been translated, and thus take no account of the integrity constraints at the source. Moreover, such constraint repair methods usually involve addition, deletion or modification of data, which may yield incomplete or false data. We consider the constraints of both source and target schemas; together with the mapping, we can efficiently detect which constraint is violated and suggest ways to correct the mappings.
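
The book example in the abstract corresponds to a functional dependency, and checking it on translated data is straightforward, as the sketch below shows with assumed data. A violation on the target side signals that the mapping, rather than the data, may need repair; the thesis's method additionally reasons over the source constraints and the mapping itself.

```python
from collections import defaultdict

# Rows as they arrive at the target after translation (invented data).
target_rows = [
    {"title": "Dune", "pages": 412},
    {"title": "Dune", "pages": 896},   # same title, different page count
    {"title": "Emma", "pages": 474},
]

def fd_violations(rows, lhs, rhs):
    # Detect violations of the functional dependency lhs -> rhs:
    # any lhs value associated with more than one rhs value.
    seen = defaultdict(set)
    for row in rows:
        seen[row[lhs]].add(row[rhs])
    return {k: v for k, v in seen.items() if len(v) > 1}

print(fd_violations(target_rows, "title", "pages"))  # {'Dune': {412, 896}}
```
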
129. XML Enabled Page-Grouping

Lee, Hor-Tzung 04 July 2000
As more and more services are provided via the WWW, reducing the perceived delay in WWW interaction becomes very important for service providers to keep their users. Pre-fetching is an important technique for reducing latency in distributed systems like the WWW. Page pre-fetching exploits the local machine's idle period, while the user is viewing the current page, to deliver pages that the user is likely to access in the near future. Motivated by pre-fetching ideas and their practical difficulties, we propose a server-initiated page pre-fetching method, XML enabled page-grouping, to reduce Web latency. In our page-grouping scheme, we predict the pages that the user is likely to access in the near future based on the hyperlink and referral access probabilities of each page. The predicted pages are grouped and converted into an XML file embedded in the page that the user currently requests. If the user clicks a predicted linked page, the corresponding HTML is regenerated directly from the embedded XML document. The proposed scheme allows either batch grouping or on-line grouping. To avoid extra server load, we suggest that the grouping of static pages be performed periodically during server off-peak times. Besides static pages, we also group dynamic pages generated by CGI, and we illustrate the feasibility with an example of a Web-based database query. Compared to previous page pre-fetching techniques, our page-grouping method is simple and practical. By using XML documents, add-on application modules are no longer needed, because an XML processor is supported in newer-generation browsers such as Microsoft IE 5.0. Furthermore, converting grouped pages into an embedded XML document makes predicted pages transparent to proxy servers, so the server-side speculative service works whether or not there are proxy servers between the server and the clients. Using trace simulations based on the logs of the HTTP server http://www.kcg.gov.tw, we show that 67.84% of URL requests are referral requests; that is, the probability is about 2/3 that users retrieve the next Web page by clicking hyperlinks on the currently viewed page. The logs are categorized according to the kind of official service, and the statistical results for every class of logs indicate that a page maintains persistent referral access probabilities over a period of a few days. This encourages us to obtain a high hit rate for a predicted page by selecting it according to its high referral access probability. Considering the bandwidth trade-off, we discuss hit rate, the traffic increase due to grouping, and traffic intensity based on an M/M/1 model. For on-line grouping of dynamic pages, we take as an example a database query page on our simulated HTTP server. The experimental results lead to the conclusion that grouping Web-based database query pages can reduce the server load of CGI processing, given that the hit rate of the next page is about 18.48%.
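
Two quantities the abstract relies on, referral access probability and M/M/1 traffic intensity, are easy to illustrate. The sketch below uses an invented toy log and assumed arrival and service rates.

```python
from collections import Counter

# Referral access probability: the fraction of requests for each page
# that arrive by following a hyperlink from another page (i.e. with a
# referrer). Toy log; fields are (requested_page, referrer_or_None).
log = [("/a", None), ("/b", "/a"), ("/c", "/a"), ("/b", "/a"), ("/a", None)]

total = Counter(page for page, _ in log)
referred = Counter(page for page, ref in log if ref is not None)
for page in sorted(total):
    print(page, referred[page] / total[page])

# M/M/1 traffic intensity for the bandwidth trade-off discussion:
# rho = arrival rate / service rate. Grouping inflates page size, which
# lowers the effective service rate, so rho must be watched.
lam, mu = 40.0, 60.0   # requests/s arriving vs. served (assumed numbers)
rho = lam / mu
print("traffic intensity:", rho)   # must stay below 1 for stability
```
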
130. An Agent and Profile Management System for Mobile Users and Service Providers

Wang, Szu-Hsuan 16 July 2002
With the development of mobile devices, people can execute many applications on their personal devices at any time and in any place. However, many limitations of mobile devices, e.g., CPU, memory and power supply, prevent them from being the complete equivalent of desktop PCs. In this paper we present an integrated management architecture for thin clients called the Agent and Profile Management System (APMS). Users of mobile devices can access the services of various service providers via this system, while the system also provides centralized management of the service agents supplied by the service providers. A user only needs to download a simple service agent to the mobile device and install it; the service agent then connects to the corresponding service provider and sends an XML-RPC request via HTTP "POST" to the back-end server. After receiving the XML-RPC request, the back-end server executes the appropriate processes and returns an XML-RPC response to the mobile device. Most of the procedures are accomplished on the server side, which has a powerful CPU and a large amount of memory. Therefore, the load on the mobile device is relatively low, and the cost of mobile devices can be effectively reduced.
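
The request/response flow described here maps directly onto Python's standard library XML-RPC modules, which transport calls as HTTP POST. The sketch below stands up a toy back-end and calls it once; the method name and returned data are invented for illustration.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Minimal back-end: registers one function and serves XML-RPC requests,
# which arrive as HTTP POST bodies. The method name is hypothetical.
server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(lambda name: f"profile for {name}", "get_profile")
threading.Thread(target=server.serve_forever, daemon=True).start()

# The service agent's side: one POST round-trip per call; the heavy
# lifting happens on the server, keeping the mobile device's load low.
proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.get_profile("alice"))  # -> "profile for alice"
```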
