About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Data management for interoperable systems /

Mühlberger, Ralf Maximilian. January 2001 (has links) (PDF)
Thesis (Ph. D.)--University of Queensland, 2002. / Includes bibliographical references.

A study on privacy-preserving clustering

Cui, Yingjie. January 2009 (has links)
Thesis (M. Phil.)--University of Hong Kong, 2010. / Includes bibliographical references (leaves 87-90). Also available in print.

The use of a blackboard system for story processing by computer

Ward, Mark Brendan January 1991 (has links)
One of the major objectives in story understanding is to discover the causal reasoning behind characters' actions and to link these into an overall picture of the characters' motivations and actions. Thus the main aim when processing a sentence is to discover the character's goal towards whose achievement the sentence can be considered a step. The above process uses abductive reasoning in drawing its inferences, and as a consequence any facts derived from a sentence might be invalid, causing a number of facts to be generated that are inconsistent with the knowledge base. A further complication to story understanding is that much of the information necessary for understanding can only be obtained using default reasoning. Any such default fact remains valid unless a further statement proves that this is not the case. As a consequence of the above, any new statement must be checked against the rest of the knowledge base to make sure there are no inconsistencies, and a list of supporting statements must be held so that any inconsistency found can be resolved and erased. An alternative to erasing these inconsistent statements within the knowledge base is to maintain a number of consistent environments, using an assumption-based truth maintenance system to enforce consistency. This has the advantage that more than one environment may be worked on at once and environments can be compared. The thesis discusses the maintenance of more than one environment and proposes a blackboard system, along with an assumption-based truth maintenance system, as an ideal architecture to support the requirements of a story understanding program. The thesis also describes the knowledge sources, such as syntax and semantics, that are necessary for story understanding, and how their operation should be controlled using a dynamic scheduling system.
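The environment-maintenance idea in the abstract above can be pictured with a minimal, illustrative ATMS kernel. All names here (`ATMS`, `add_nogood`, the Tweety facts) are invented for illustration, not taken from the thesis: an environment (set of assumptions) survives unless it contains a set of assumptions already proven inconsistent (a "nogood").

```python
# A minimal, illustrative ATMS kernel: environments (sets of assumptions)
# stay available unless they include a known-inconsistent ("nogood") set.

from itertools import combinations

class ATMS:
    def __init__(self):
        self.nogoods = []  # frozensets of assumptions proven inconsistent

    def add_nogood(self, assumptions):
        """Record that this set of assumptions cannot hold together."""
        self.nogoods.append(frozenset(assumptions))

    def consistent(self, env):
        env = frozenset(env)
        return not any(nogood <= env for nogood in self.nogoods)

    def environments(self, assumptions):
        """Enumerate every consistent environment over the given assumptions."""
        return [frozenset(combo)
                for r in range(len(assumptions) + 1)
                for combo in combinations(assumptions, r)
                if self.consistent(combo)]
```

A default fact such as "Tweety flies" lives in one environment; a later sentence revealing that Tweety is a penguin adds a nogood, and the joint environment is pruned rather than erased, so both worlds remain available for comparison — the advantage the abstract claims over destructive revision.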

URA : a universal data replication architecture

Zheng, Zheng, 1977- 10 September 2012 (has links)
Data replication is a key building block for large-scale distributed systems to improve availability, performance, and scalability. Because there is a fundamental trade-off between performance and consistency as well as between availability and consistency, systems must make trade-offs among these factors based on the demands and technologies of their target environments and workloads. Unfortunately, existing replication protocols and mechanisms are intrinsically entangled with specific policy assumptions. Therefore, to accommodate new trade-offs for new policy requirements, developers have to either build a new replication system from scratch or modify existing mechanisms. This dissertation presents a universal data replication architecture (URA) that cleanly separates mechanism and policy and supports Partial Replication (PR), Any Consistency (AC), and Topology Independence (TI) simultaneously. Our architecture yields two significant advantages. First, by providing a single set of mechanisms that capture the common underlying abstractions for data replication, URA can serve as a common substrate for building and deploying new replication systems. It therefore can significantly reduce the effort required to construct or modify a replication system. Second, by providing a set of general and flexible mechanisms independent of any specific policy, URA enables better trade-offs than any current system can provide. In particular, URA can simultaneously provide the three PRACTI properties while any existing system can provide at most two of them. Our experimental results and case-study systems confirm that the universal data replication architecture is a way to build better replication systems and a better way to build replication systems. / text
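As a rough illustration of the mechanism/policy separation described above (not URA's actual interfaces; every name below is invented), the sketch keeps the mechanism fixed — a pull-based log exchange that works between any pair of replicas, giving topology independence — while a per-replica interest set acts as a partial-replication policy:

```python
# Mechanism: a pull-based update-log exchange between arbitrary replica pairs
# (topology independence). Policy: a per-replica "interest" set filtering
# which keys are stored (partial replication); interest=None means full
# replication. Consistency policy is left to the caller, e.g. via when and
# from whom each replica syncs.

class Replica:
    def __init__(self, rid, interest=None):
        self.rid = rid
        self.log = []             # mechanism: (timestamp, key, value) updates
        self.interest = interest  # policy: keys this replica cares about

    def write(self, key, value, ts):
        self.log.append((ts, key, value))

    def sync_from(self, other):
        """Pull the peer's log; the interest policy decides what to keep."""
        for entry in other.log:
            ts, key, value = entry
            if self.interest is None or key in self.interest:
                if entry not in self.log:
                    self.log.append(entry)

    def read(self, key):
        """Return the value with the highest timestamp seen for this key."""
        versions = [(ts, v) for ts, k, v in self.log if k == key]
        return max(versions)[1] if versions else None
```

Because the log-exchange mechanism never assumes who syncs with whom or which keys a peer holds, different availability/consistency trade-offs can be layered on top without rewriting it — the kind of reuse the abstract argues for.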

Technical solutions for conducting investigations in digital age

Ho, Sze-lok., 何思樂. January 2012 (has links)
Confidentiality has always been a concern in secret operations. In this thesis, we consider the situation of legitimate data request and transfer between an investigator and a database owner who provides intelligence, where the identity of the investigation subject and the records in the database are both confidential. Current practice in secret investigation relies solely on the integrity and carefulness of the individuals involved to resist data leakage, but human means such as regulations, policies, and agreements cannot give a promising solution; a technical means is therefore needed. As an appropriate solution for this confidential data request and transfer problem cannot be found in related research, our goal is to offer a means that can help keep the investigation secret and protect irrelevant data at the same time. We present a technical solution for preserving two-way confidentiality between the investigator (legitimate data requester) and the database owner (legitimate data holder), which can accommodate the concerns of both sides during the specific information request and transfer. Two schemes, a Sender-Based Scheme and a Receiver-Based Scheme, are proposed to solve the problem under different conditions, and their execution is illustrated through an example situation, "Investigator and Private hospital", which is an ordinary scenario during investigation. Furthermore, a practical cost reduction methodology for the schemes and sensible proposals for extensions are suggested and discussed. The direction of future work is also considered. / published_or_final_version / Computer Science / Master / Master of Philosophy
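The thesis's two schemes are not reproduced here, but the flavour of two-way confidential matching can be shown with a standard commutative-blinding trick (Diffie–Hellman-style private matching). Everything below — the modulus, the identifiers, the variable names — is a toy illustration, not the Sender-Based or Receiver-Based Scheme:

```python
# Both sides blind their values with secret exponents; because modular
# exponentiation commutes, the investigator can test for a match without
# either side seeing the other's raw identifiers.

import hashlib
import random

P = 2**127 - 1  # a Mersenne prime; a toy modulus, not a production group

def digest(x):
    return int(hashlib.sha256(x.encode()).hexdigest(), 16) % P

# Investigator: blind the subject's identifier with secret exponent a.
a = random.randrange(2, P - 1)
query = pow(digest("subject-042"), a, P)

# Database owner: re-blind the query, and blind every record, with secret b.
b = random.randrange(2, P - 1)
query_ab = pow(query, b, P)
records = ["subject-007", "subject-042", "subject-100"]
records_b = [pow(digest(r), b, P) for r in records]

# Investigator: raise each owner value to a; equality reveals only the match.
matches = [pow(r_b, a, P) == query_ab for r_b in records_b]
```

Only the position of the matching record is learned by the investigator; a production scheme would need far more (authenticated channels, authorization checks, resistance to brute-forcing low-entropy identifiers), which is the kind of gap the thesis's purpose-built schemes target.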

Advanced analysis and join queries in multidimensional spaces

Ge, Shen., 葛屾. January 2012 (has links)
Multidimensional data are ubiquitous and their efficient management and analysis is a core database research problem. There is a large body of previous work on indexing, analyzing and querying multidimensional data. In this dissertation, three challenging advanced analysis and join problems in multidimensional spaces are proposed and studied, providing efficient solutions to their related applications. First, the problem of the generalized budget constrained optimization query (GenBOQ) is studied. In real life, it is often difficult for manufacturers to create new products dominating their competitors, due to various constraints. These constraints can be modeled by constraint functions, and the problem is then to decide the best possible regions in multidimensional spaces where the features of new products could be placed. Using the number of dominating and dominated objects, the profitability of these regions can be evaluated and the best areas are then returned. Although GenBOQ computation is challenging due to its high complexity, an efficient divide-and-conquer based framework is offered for this problem. In addition, an approximation method is proposed, making tradeoffs between result quality and query cost. Next, the efficient evaluation of all top-k queries (ATOPk) in multidimensional spaces is investigated, which computes the top-ranked objects for a group of preference functions simultaneously. As an application of such a query, consider an online store which needs to provide recommendations for a large number of users simultaneously. This problem has been somewhat overlooked by past research; in this thesis, batch algorithms are proposed instead of naïvely evaluating top-k queries individually. Similar preferences are grouped together, and two algorithms are proposed, using block indexed nested loops and a view-based thresholding strategy. The optimized view-based threshold algorithm is demonstrated to be consistently the best.
Moreover, an all top-k query helps to evaluate other queries relying on the results of multiple top-k queries, such as reverse top-k queries and top-m influential queries proposed in previous works. It is shown that applying the view-based approach to these queries can improve the performance of the current state-of-the-art by orders of magnitude. Finally, the problem of spatio-textual similarity joins (ST-SJOIN) on multidimensional data is considered. Given both spatial and textual information, ST-SJOIN retrieves pairs of objects which are both spatially close and textually similar. One possible application of this query is friendship recommendation, by matching people who not only live nearby but also share common interests. By combining the state-of-the-art strategies of spatial distance joins and set similarity joins, efficient query processing algorithms are proposed, taking both spatial and textual constraints into account. A batch processing strategy is also introduced to boost the performance, which is also effective for the original textual-only joins. Using synthetic and real datasets, it is shown that the proposed techniques outperform the baseline solutions. / published_or_final_version / Computer Science / Doctoral / Doctor of Philosophy
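The core of the all top-k setting above can be sketched as a single shared scan that maintains one heap per preference vector — a baseline only, without the grouping, block-indexed nested loops, or view-based thresholding the abstract describes; function and variable names are illustrative:

```python
# Answer many linear top-k preferences in ONE pass over the data, rather than
# one scan per preference. Each preference is a weight vector; an object's
# score under it is the dot product.

import heapq

def all_top_k(points, weight_vectors, k):
    """Return, for each weight vector, the indices of its k highest-scoring
    points (best first), sharing a single scan of `points`."""
    heaps = [[] for _ in weight_vectors]  # one size-k min-heap per preference
    for idx, p in enumerate(points):
        for qi, w in enumerate(weight_vectors):
            score = sum(wi * pi for wi, pi in zip(w, p))
            if len(heaps[qi]) < k:
                heapq.heappush(heaps[qi], (score, idx))
            elif score > heaps[qi][0][0]:
                heapq.heapreplace(heaps[qi], (score, idx))
    return [[i for _, i in sorted(h, reverse=True)] for h in heaps]
```

Sharing the scan already amortizes the dominant data-access cost across preferences; the thesis's contribution is to go further by exploiting similarity between preference vectors, which this baseline ignores.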

Multimodal transfer of literacy skills required to utilise electronic databases at the Tshwane University of Technology.

Esterhuizen, Elsa M. January 2008 (has links)
Thesis (MTech. degree in Educational Technology) / The current information literacy training programme (ILTP) of the Library and Information Services (LIS) of the Tshwane University of Technology (TUT) is set to be reviewed so that it can be presented through a multimodal transfer approach. A baseline evaluation serves to examine the existing programme's Module 5: Electronic Databases, after which an online version of the module is proposed. The components of the proposed multimodal transfer include a facilitator in an important role, as well as the application of a learning management system (LMS). A bounded case study approach is used, applying an action research strategy that comprises two phases and a hybrid methodology of qualitative and quantitative research methods. Evaluation of the proposed module makes it evident that a multimodal transfer approach is indeed suitable for transferring the literacy skills necessary to use electronic databases. The graphical user interface (GUI) appeals to users, who report on the module's ease of use as well as its usefulness for personal studies. In conclusion, the TUT programme is in line with similar programmes regarding the development of students' critical thinking and research skills. The proposed transfer mode is successful and should be expanded to other modules of the ILTP. In doing so, it can serve as an instrument for capacity building and empowerment of the library staff members participating in the training venture.

An expert system approach to chemical hazard assessment

Toole, Edward January 1996 (has links)
Hazard assessment involves the retrieval of appropriate chemical hazard data and the production of an assessment which complies with relevant regulations. This is a complex process requiring detailed knowledge of chemical hazards and hazard assessment regulations. The purpose of this project is to investigate the use of an expert system approach for hazard assessment of complex systems. The approach has focused on the design, development and evaluation of an expert hazard assessor after appropriate modelling of chemical database retrieval and a legislative knowledge base. An "intelligent form" was designed to link the legislation to the chemical hazard data. To facilitate ease of use, the program was extended to include an on-line help facility and a user-friendly interface to address local and remote databases. The feasibility and benefits of the Expert Hazard Assessor (EHA) have been demonstrated through system testing and evaluation by groups of inexperienced users, using sample regulations and available chemical data. Comparison with a previous standard expert system also shows the EHA to be more comprehensive in output, more efficient, easier to use for the non-expert assessor, and better in its help support. The results suggest that an EHA such as the one detailed in this work is of significant benefit in providing appropriate hazard assessment.
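The "intelligent form" linking legislation to chemical data can be pictured as a small rule base fired against retrieved records. Everything below — the chemicals, thresholds, and hazard labels — is invented for illustration and is not the EHA's actual rule set or any specific regulation:

```python
# A toy rule-based hazard assessor: retrieve a chemical's data, then fire
# every regulation-style rule whose condition the data satisfies.

CHEMICAL_DB = {
    "toluene":  {"flash_point_c": 4,   "oral_ld50_mg_kg": 636},
    "glycerol": {"flash_point_c": 160, "oral_ld50_mg_kg": 12600},
}

RULES = [  # (hazard label, condition over the retrieved data)
    ("Highly flammable",     lambda d: d["flash_point_c"] < 23),
    ("Flammable",            lambda d: 23 <= d["flash_point_c"] < 60),
    ("Toxic if swallowed",   lambda d: d["oral_ld50_mg_kg"] <= 300),
    ("Harmful if swallowed", lambda d: 300 < d["oral_ld50_mg_kg"] <= 2000),
]

def assess(chemical):
    """Return every hazard label whose rule matches the chemical's data."""
    data = CHEMICAL_DB[chemical]
    return [label for label, applies in RULES if applies(data)]
```

Keeping the rules as data, separate from the retrieval and the assessment loop, is what lets such a system swap in new regulations without reprogramming — the maintainability argument behind the expert-system approach.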

Techniques in data stream mining

Tong, Suk-man, Ivy., 湯淑敏. January 2005 (has links)
published_or_final_version / abstract / Computer Science / Master / Master of Philosophy

PubMed Basics

National Network of Libraries of Medicine (NN/LM), U.S. January 2003 (has links)
This tri-fold brochure can be freely reproduced.
