531

Semantic information stored in an extended denormalized database

Garrido, Piedad, Tramullas, Jesús January 2006 (has links)
This research project explains the birth and evolution of an information repository called XTMdb, whose basic principles are intended to integrate complementary tagging languages such as SKOS, MODS, Dublin Core and/or GILS with the topic maps paradigm. Once information processing was completed, the repository was tested by means of an efficient information retrieval process that allows the description of information resources to be extended in real time. This yielded greater expressivity, independence from the tagging language, and improved searches, since a search can be centred on the topic concept. Certain other aspects remain open: the topic maps paradigm can provide added-value visual information, the development of a decision support system becomes straightforward, and soft-computing techniques can solve a considerable number of problems related to the information retrieval process.
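The topic-centred search the abstract describes can be illustrated with a minimal sketch: an index that aggregates record occurrences under topics, regardless of which tagging vocabulary each record used. The record data, vocabulary names and function are invented for illustration, not taken from XTMdb.

```python
# Minimal sketch of a topic-map-style index: each topic aggregates
# occurrences from records tagged in different metadata vocabularies,
# so a search can be centred on the topic rather than on any one
# tagging language. Record data and topic names are illustrative only.
from collections import defaultdict

def build_topic_index(records):
    """Map topic -> list of (record_id, vocabulary) occurrences."""
    index = defaultdict(list)
    for rec in records:
        for vocab, topics in rec["tags"].items():  # e.g. "skos", "dc"
            for topic in topics:
                index[topic].append((rec["id"], vocab))
    return index

records = [
    {"id": "r1", "tags": {"skos": ["archiving"], "dc": ["xml"]}},
    {"id": "r2", "tags": {"mods": ["xml", "topic maps"]}},
]
index = build_topic_index(records)
# A topic-centred search is now a single lookup, independent of the
# tagging language each record used.
print(sorted(index["xml"]))
```

A lookup on the topic "xml" finds both records even though one was tagged with Dublin Core and the other with MODS, which is the language-independence the abstract claims.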
532

Examining the Role of Website Information in Facilitating Different Citizen-Government Relationships: A Case Study of State Chronic Wasting Disease Websites

Eschenfelder, Kristin R., Miller, Clark A. January 2006 (has links)
This is a preprint accepted for publication in Government Information Quarterly (2007) 24(1), pp. 64-88. This paper develops a framework to assess the text-based public information provided on program-level government agency Websites. The framework informs the larger e-government question of how, or whether, state administrative agencies are using Websites in a transformative capacity - to change relationships between citizens and government. It focuses on assessing the degree to which text information provided on government Websites could facilitate various relationships between government agencies and citizens. The framework incorporates four views of government information obligations stemming from different assumptions about citizen-government relationships in a democracy: the private citizen view, the attentive citizen view, the deliberative citizen view and the citizen-publisher view. Each view suggests the inclusion of different types of information. The framework is employed to assess state Websites containing information about Chronic Wasting Disease, a disease affecting deer and elk in numerous U.S. states and Canada.
533

Experimental Frame Structuring For Automated Model Construction: Application to Simulated Weather Generation

Cheon, Saehoon January 2007 (has links)
The source system is the real or virtual environment that we are interested in modeling. It is viewed as a source of observable data in the form of time-indexed trajectories of variables. The data gathered from observing or experimenting with a system is called the system behavior database. The time-indexed trajectories of variables provide an important clue for composing the DEVS (Discrete Event System Specification) model: once an event set is derived from the trajectories, the DEVS model formalism can be extracted from that event set. The process must be not simple model generation but meaningful model structuring of a request. The source data and queries designed with the SES are converted to XML metadata by an XML conversion process. The SES serves as a compact representation for organizing all possible hierarchical compositions of a system, so it plays an important role in designing the structural representation of queries and of the source data to be saved. For a real-data application, model structuring with the US Climate Normals is introduced. Moreover, complex systems can be developed at different levels of resolution. When the huge volume of source data in the US Climate Normals is used to build the DEVS model, model complexity is unavoidable. This issue is dealt with by creating an equivalent lumped model based on the concept of morphism. Two methods for defining the resolution level are discussed: fixed and dynamic definition. Aggregation is also discussed as one of the approaches to model abstraction. Finally, this paper introduces the process of integrating the DEVSML (DEVS Modeling Language) engine with the DEVS model creation engine for the Web Service Oriented Architecture.
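The first step the abstract describes, deriving an event set from a time-indexed trajectory, can be sketched as detecting the points where an observed variable changes value. The trajectory data below is invented for illustration (it is not US Climate Normals data), and this is a sketch of the idea, not the dissertation's implementation.

```python
# Sketch: derive an event set from a time-indexed variable trajectory.
# An event is recorded whenever the observed value changes; the gaps
# between successive events would then inform the DEVS time advances.
# The trajectory values below are illustrative only.

def derive_events(trajectory):
    """trajectory: list of (time, value) pairs, sorted by time.
    Returns the event set as a list of (time, new_value) changes."""
    events = []
    previous = object()  # sentinel distinct from any observed value
    for t, value in trajectory:
        if value != previous:
            events.append((t, value))
            previous = value
    return events

trajectory = [(0, "dry"), (1, "dry"), (2, "rain"), (3, "rain"), (5, "dry")]
events = derive_events(trajectory)
print(events)  # only the state changes survive
```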
534

Formulating Evaluation Measures for Structured Document Retrieval using Extended Structural Relevance

Ali, Mir Sadek 06 December 2012 (has links)
Structured document retrieval (SDR) systems minimize the effort users spend to locate relevant information by retrieving sub-documents (i.e., parts of, as opposed to entire, documents) to focus the user's attention on the relevant parts of a retrieved document. SDR search tasks are differentiated by the multiplicity of ways that users prefer to spend effort and gain relevant information in SDR. The sub-document retrieval paradigm has required researchers to undertake costly user studies to validate whether new IR measures, based on gain and effort, accurately capture IR performance. We propose the Extended Structural Relevance (ESR) framework as a way, akin to classical set-based measures, to formulate SDR measures that share the common basis of our proposed pillars of SDR evaluation: relevance, navigation and redundancy. Our experimental results show how ESR provides a flexible way to formulate measures, and addresses the challenge of testing measures across related search tasks by replacing costly user studies with low-cost simulation.
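The kind of set-based gain/effort measure the abstract alludes to can be sketched by crediting each relevant sub-document once (so redundant re-retrievals of the same part earn no extra gain) and dividing by the effort of inspecting the ranked list. This simplified stand-in only illustrates the relevance/redundancy/effort interplay; it is not the ESR framework's actual formulation.

```python
# Illustrative gain/effort measure in the spirit of set-based SDR
# evaluation: relevance gain is counted once per relevant sub-document
# (later duplicates are redundant), and effort is one unit per
# inspected part. A simplified stand-in, not ESR's definition.

def gain_per_effort(retrieved, relevant):
    """retrieved: ranked list of sub-document ids (parts may repeat);
    relevant: set of relevant sub-document ids."""
    seen = set()
    gain = 0
    for part in retrieved:
        if part in relevant and part not in seen:
            gain += 1        # relevance gain, credited once
        seen.add(part)       # later occurrences are redundant
    effort = len(retrieved)  # one unit of effort per inspected part
    return gain / effort if effort else 0.0

score = gain_per_effort(["s1", "s2", "s1", "s3"], {"s1", "s3"})
print(score)  # 2 relevant parts found over 4 inspected parts = 0.5
```

Note how the duplicate retrieval of "s1" costs effort without adding gain, which is exactly the redundancy penalty a structured-retrieval measure needs.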
535

Lock-based concurrency control for XML

Ahmed, Namiruddin. January 2006 (has links)
As XML gains popularity as the standard data representation model, there is a need to store, retrieve and update XML data efficiently. McXml is a native XML database system that has been developed at McGill University and represents XML data as trees. McXML supports both read-only queries and six different kinds of update operations. To support concurrent access to documents in the McXML database, we propose a concurrency control protocol called LockX which applies locking to the nodes in the XML tree. LockX maximizes concurrency by considering the semantics of McXML's read and write operations in its design. We evaluate the performance of LockX as we vary factors such as the structure of the XML document and the proportion of read operations in transactions. We also evaluate LockX's performance on the XMark benchmark [16] after extending it with suitable update operations [13]. Finally, we compare LockX's performance with two snapshot-based concurrency control protocols (SnaX, OptiX) that provide a committed snapshot of the data for client operations.
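Locking nodes of an XML tree, as LockX does, can be sketched with a lock manager that checks a requested lock against locks held on the node and all of its ancestors. This is a heavily simplified illustration (no blocking, no intention locks on descendants, no deadlock handling; a conflicting request is simply refused), not the LockX protocol itself.

```python
# Sketch of tree locking for an XML database: a transaction locking a
# node must be compatible with locks held on that node and on every
# ancestor, since a writer on the path would invalidate readers below.
# Simplified: conflicting requests are refused rather than queued.

class TreeLockManager:
    def __init__(self, parent):
        self.parent = parent          # node -> parent node (root: None)
        self.locks = {}               # node -> list of (txn, mode)

    def _conflicts(self, node, txn, mode):
        for holder, held_mode in self.locks.get(node, []):
            # reads share; any write by another transaction conflicts
            if holder != txn and "W" in (mode, held_mode):
                return True
        return False

    def try_lock(self, txn, node, mode):
        """mode is 'R' or 'W'; checks the node and every ancestor."""
        n = node
        while n is not None:
            if self._conflicts(n, txn, mode):
                return False
            n = self.parent.get(n)
        self.locks.setdefault(node, []).append((txn, mode))
        return True

parent = {"root": None, "a": "root", "b": "root"}
mgr = TreeLockManager(parent)
r1 = mgr.try_lock("T1", "a", "W")  # granted: nothing held yet
r2 = mgr.try_lock("T2", "a", "R")  # refused: T1 writes node a
r3 = mgr.try_lock("T2", "b", "R")  # granted: sibling subtree is free
print(r1, r2, r3)
```

The last request succeeding is the concurrency win: transactions touching disjoint subtrees proceed in parallel, which is what node-level (rather than document-level) locking buys.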
536

Objektų savybių modelio grafinis redaktorius / Graphical editor for the Object Property model

Menkevičius, Saulius 13 January 2006 (has links)
During the development of federated IS that make use of non-homogeneous databases and data sources, XML documents are often used for data exchange among the local subsystems, while their corresponding XML Schemas are generated using standard CASE tools for the local systems. The external data schemas of those systems must be specified in a unified common model. It is assumed that the Object Property (OP) model is used for the semantic integration of the local non-homogeneous subsystems. A graphical editor was developed that can be used to specify relation objects, their identifiers, and complex and multi-valued object attributes. As the OP model's semantic expression capabilities can map onto those available in XML, rules have additionally been defined and implemented that transform specific OP model structures into XML Schemas. An algorithm is also specified that can be used to extract tree-like structures from the model. Example transformations are performed that illustrate the process of generating XML Schema documents from sample OP models.
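One transformation rule of the kind the abstract mentions can be sketched: a relation object with simple and multi-valued attributes becomes an XML Schema complexType whose multi-valued attributes carry maxOccurs="unbounded". The input shape and this single rule are illustrative assumptions, not the editor's actual rule set.

```python
# Sketch of one OP-model-to-XML-Schema rule: a relation object maps to
# an xs:element with a complexType; each multi-valued attribute gets
# maxOccurs="unbounded". Input format and rule are illustrative only.

def op_object_to_xsd(name, attributes):
    """attributes: list of (attr_name, multi_valued) pairs."""
    lines = ['<xs:element name="%s">' % name,
             '  <xs:complexType><xs:sequence>']
    for attr, multi in attributes:
        occurs = ' maxOccurs="unbounded"' if multi else ''
        lines.append('    <xs:element name="%s" type="xs:string"%s/>'
                     % (attr, occurs))
    lines += ['  </xs:sequence></xs:complexType>', '</xs:element>']
    return "\n".join(lines)

xsd = op_object_to_xsd("Person", [("id", False), ("phone", True)])
print(xsd)
```

Because the OP model also allows complex attributes, a full implementation would recurse into nested complexTypes; the flat version above only shows the multi-valued case.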
537

Įmonės veiklos rinkos sąlygomis modelio sistema, skirta nuotoliniam mokymui / The model system of enterprise in market area for distant study

Malickas, Ričardas 31 May 2004 (has links)
As the Internet penetrates all spheres of life, the demand for Lithuanian software is growing rapidly. This penetration is reflected in the popularity of distance studies and in the stimulus provided by government means and various projects. Working with computers helps to acquire theoretical information more effectively and much faster. The created system helps to put into practice information about enterprise activities, competition and profit opportunities. The paper analyzes the nature of software changes and the usage of new systems. Systems of the market model are also introduced, and the results of realizations of the market model are presented. The real advantage found in experimental evaluation is also discussed.
538

Dokumentų analizė ir palyginimas naudojant ontologijas / Analysis and comparison of documents using ontologies

Kurklietytė, Gita 08 September 2009 (has links)
In today's world, amounts of information are growing at an enormous pace, so there is a need to process and systematize it by computer according to its meaning. Ontologies, a product still developing and evolving, are increasingly used to make various kinds of information systems intelligent. One example is an automatic e-mail analyzer operating on ontology principles: a system that recognizes an incoming message by its meaning and can generate a reply to it. This work describes ontologies designed for a specific internet-lottery information domain, together with rules for interpreting them that operate on a comparison algorithm based on word frequencies and class coefficients. A mechanism was also implemented, based on the ontologies and this algorithm, that classifies the text of an e-mail message according to its meaning, adapted to that internet-lottery information domain. The efficiency and classification error of the algorithm were analyzed and evaluated.
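The word-frequency and class-coefficient idea described above can be sketched as follows: each class carries weighted keywords (the class coefficients), a message is scored by summing coefficient times word frequency, and the best-scoring class wins. The classes, keywords and weights below are invented examples, not the thesis's actual lottery-domain ontology.

```python
# Sketch of word-frequency / class-coefficient classification: score
# each class as sum(coefficient * word frequency) over the message,
# then pick the highest-scoring class. Classes and weights invented.
from collections import Counter

def classify(text, class_keywords):
    """class_keywords: class -> {word: coefficient}."""
    freqs = Counter(text.lower().split())
    scores = {
        cls: sum(coef * freqs[word] for word, coef in kw.items())
        for cls, kw in class_keywords.items()
    }
    return max(scores, key=scores.get)

classes = {
    "winner_query": {"won": 2.0, "prize": 1.5},
    "complaint":    {"error": 2.0, "refund": 1.5},
}
label = classify("i won the main prize", classes)
label2 = classify("please refund this error", classes)
print(label, label2)
```

A real classifier in this vein would also normalize for message length and tokenize past simple whitespace splitting; the sketch keeps only the scoring core.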
539

Security Issues in Heterogeneous Data Federations

Leighton, Gregory Unknown Date
No description available.
540

XML theory and practice through an application feasibility study

Hall, Benjamin Fisher 08 1900 (has links)
No description available.
