151.
Self maintenance of materialized XQuery views via query containment and re-writing / Nilekar, Shirish K. January 2006
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: XML, Query Re-Writing, View Maintenance, Query Containment. Includes bibliographical references (p. 108-111).
152.
Efficient and parallel evaluation of XQuery / Li, Xiaogang. January 2006
Thesis (Ph. D.)--Ohio State University, 2006. / Title from first page of PDF file. Includes bibliographical references (p. 137-144).
153.
A survey and analysis of access control architectures for XML data / Estlund, Mark J. January 2006 (PDF)
Thesis (M.S. in Computer Science)--Naval Postgraduate School, March 2006. / Thesis Advisor(s): Cynthia E. Irvine, Timothy E. Levin. "March 2006." Includes bibliographical references (p. 43-45). Also available online.
154.
Text augmentation : inserting markup into natural language text with PPM models / Yeates, Stuart Andrew. January 2006
Thesis (Ph.D.)--University of Waikato, 2006. / Includes bibliographical references (p. 157-170).
155.
An automated XPath to SQL transformation methodology for XML data / Jandhyala, Sandeep. January 2006
Thesis (M.S.)--Georgia State University, 2006. / Rajshekhar Sunderraman, committee chair; Sushil Prasad, Alex Zelikovsky, committee members. Electronic text (58 p.) : digital, PDF file. Description based on contents viewed Aug. 13, 2007. Includes bibliographical references (p. 58).
156.
Trust on the semantic web / Cloran, Russell Andrew. January 2006
Thesis (M.Sc. (Computer Science))--Rhodes University, 2007.
157.
Trust on the semantic web / Cloran, Russell Andrew. 07 August 2006
The Semantic Web is a vision to create a “web of knowledge”: an extension of the Web as we know it which will create an information space usable by machines in very rich ways. The technologies which make up the Semantic Web allow machines to reason across information gathered from the Web, presenting only relevant results and inferences to the user. Users of the Web in its current form assess the credibility of the information they gather in a number of different ways. If processing happens without the user being able to check the source and credibility of each piece of information used, the user must be able to trust that the machine has used trustworthy information at each step of the processing. The machine should therefore be able to automatically assess the credibility of each piece of information it gathers from the Web.
A case study on advanced checks for website credibility is presented, and the site examined is found to be credible despite failing many of the checks presented. A website with a backend based on RDF technologies was constructed, yielding a better understanding of RDF technologies and good knowledge of the RAP and Redland RDF application frameworks. A second aim of constructing the website was to gather information for testing various trust metrics; however, the website did not gain widespread support, and not enough data was gathered for this purpose. Techniques for presenting RDF data to users were also developed during website development, and these are discussed.
Experiences in gathering RDF data are presented next. A scutter was successfully developed, and the gathered data smushed to create a database in which uniquely identifiable objects were linked, even where gathered from different sources. Finally, the use of digital signatures as a means of linking an author and the content produced by that author is presented.
RDF/XML canonicalisation is discussed as a means of providing cryptographic checking of RDF graphs themselves, rather than simply checking at the document level. The notion of canonicalisation at the semantic, structural and syntactic levels is proposed. A combination of an existing canonicalisation algorithm and a restricted RDF/XML dialect is presented as a solution to the RDF/XML canonicalisation problem. We conclude that a trusted Semantic Web is possible, with buy-in from publishing and consuming parties.
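The canonicalisation idea behind signing RDF graphs can be sketched simply: if a graph is serialised as a deterministic, sorted set of statements, two serialisations of the same graph hash identically regardless of statement order. The toy Python sketch below (hypothetical triple format; it ignores the blank-node labelling problem that makes full RDF canonicalisation genuinely hard, and is not the thesis's actual algorithm) illustrates the principle:

```python
import hashlib

def canonical_hash(triples):
    """Hash an RDF-like graph independently of statement order.

    Each triple is a (subject, predicate, object) tuple of strings.
    Serialising every statement and sorting the results gives a
    canonical form, so the same graph always yields the same digest.
    (Blank nodes are ignored in this sketch.)
    """
    lines = sorted(f"<{s}> <{p}> {o} ." for s, p, o in triples)
    return hashlib.sha256("\n".join(lines).encode("utf-8")).hexdigest()

# Two serialisations of the same graph, in different statement order:
g1 = [("ex:a", "ex:knows", "ex:b"), ("ex:b", "ex:name", '"Bob"')]
g2 = [("ex:b", "ex:name", '"Bob"'), ("ex:a", "ex:knows", "ex:b")]
```

With this canonical form, a signature computed over `canonical_hash(g1)` verifies against `g2` as well, which is exactly the graph-level (rather than document-level) checking the abstract argues for.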
158.
Development of a web-based drug intelligence database system / Liao, Jianghong. 01 October 2000
No description available.
159.
The development of a web-based designer for simulating dynamic systems by remotely accessing MATLAB using Java and XML / Chan, Wai Lun. 01 January 1999
No description available.
160.
XML-Based Agent Scripts and Inference Mechanisms / Sun, Guili. 08 1900
Natural language understanding has been a persistent challenge to researchers in various fields of computer science, with applications ranging from user support systems to entertainment and online teaching. A long-term goal of artificial intelligence is to implement mechanisms that enable computers to emulate human dialogue. ALICEbots, virtual agents developed by the A.L.I.C.E. Foundation, use AIML scripts (an XML dialect) as the underlying pattern database for question answering. Their goal is to enable pattern-based, stimulus-response knowledge content to be served, received and processed over the Web, or offline, in a manner similar to HTML and XML. In this thesis, we describe a system that converts AIML scripts to Prolog clauses and reuses them as part of a knowledge processor. The inference mechanism developed in this thesis successfully matches an input pattern against our clause database even when words are missing. We also emulate the pattern-deduction algorithm of the original logic deduction mechanism. Our rules, compatible with Semantic Web standards, bring structure to the meaningful content of Web pages and support interactive content retrieval using natural language.
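The AIML-to-Prolog conversion this abstract describes can be illustrated with a small sketch. The clause shape and helper name here are hypothetical (the thesis's actual encoding may differ): each AIML `<pattern>` becomes the list argument of a `pattern/2` clause, with the AIML wildcard `*` mapped to a Prolog variable so the clause matches any word at that position.

```python
import xml.etree.ElementTree as ET

def aiml_to_prolog(aiml_text):
    """Convert AIML <category> elements into Prolog-style clause strings.

    Each word of the <pattern> becomes a lowercase atom in a Prolog
    list; the AIML wildcard '*' becomes a fresh Prolog variable, so
    unification matches any input at that position. The <template>
    text becomes the second argument of the clause.
    """
    root = ET.fromstring(aiml_text)
    clauses = []
    for cat in root.iter("category"):
        words = cat.findtext("pattern").split()
        args = ", ".join("X" + str(i) if w == "*" else w.lower()
                         for i, w in enumerate(words))
        template = cat.findtext("template").strip()
        clauses.append(f'pattern([{args}], "{template}").')
    return clauses

aiml = """<aiml>
  <category>
    <pattern>HELLO *</pattern>
    <template>Hi there!</template>
  </category>
</aiml>"""
# aiml_to_prolog(aiml) -> ['pattern([hello, X1], "Hi there!").']
```

A Prolog engine loading such clauses can then answer `pattern([hello, bob], R)` by ordinary unification, which is one plausible way a clause database could tolerate variable words in the input, as the abstract claims.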