  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Query optimization using frequent itemset mining

Eom, Boyun. January 2005 (has links)
Thesis (M.S.)--University of Florida, 2005. / Title from title page of source document. Document formatted into pages; contains 99 pages. Includes vita. Includes bibliographical references.
72

Modelling recovery in database systems

Scheuerl, S. January 1998 (has links)
The execution of modern database applications requires the co-ordination of a number of components, such as the application itself, the DBMS, the operating system, the network and the platform. The interaction of these components makes understanding the overall behaviour of the application a complex task. As a result, the effectiveness of optimisations is often difficult to predict. Three techniques commonly available to analyse system behaviour are empirical measurement, simulation-based analysis and analytical modelling. The ideal technique is one that provides accurate results at low cost. This thesis investigates the hypothesis that analytical modelling can be used to study the behaviour of DBMSs with sufficient accuracy. In particular, the work focuses on a new model for costing recovery mechanisms, called MaStA, and determines whether the model can be used effectively to guide the selection of mechanisms. To verify the effectiveness of the model, a validation framework is developed. Database workloads are executed on the flexible Flask architecture on different platforms. Flask is designed to minimise the dependencies between DBMS components and is used in the framework to allow the same workloads to be executed on various recovery mechanisms. Empirical analysis of the executed workloads is used to validate the assumptions about CPU, I/O and workload that underlie MaStA. Once validated, the utility of the model is illustrated by using it to select the mechanisms that provide optimum performance for given database applications. By showing that analytical modelling can be used in the selection of recovery mechanisms, the work presented makes a contribution towards a database architecture in which the implementation of all components may be selected to provide optimum performance.
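The cost-model idea in the abstract above can be sketched with a toy analytical comparison. Everything here — the two mechanisms, the per-operation costs and the workload figures — is an illustrative assumption, not the actual MaStA model; it only shows how an analytical cost formula can rank recovery mechanisms for a given workload.

```python
def recovery_cost(log_writes, data_writes, io_cost, cpu_cost_per_op):
    """Estimated cost = I/O time for log and data writes plus CPU overhead.

    A deliberately simple cost formula; real models distinguish sequential
    from random I/O, checkpointing, and recovery-time costs.
    """
    ops = log_writes + data_writes
    return ops * io_cost + ops * cpu_cost_per_op

# Hypothetical workload: 10,000 updates. Write-ahead logging forces a log
# write per update but defers most data writes; a shadow-paging-style
# mechanism writes a new data page per update and keeps no log.
wal_cost = recovery_cost(log_writes=10_000, data_writes=1_000,
                         io_cost=0.005, cpu_cost_per_op=0.0001)
shadow_cost = recovery_cost(log_writes=0, data_writes=10_000,
                            io_cost=0.005, cpu_cost_per_op=0.0001)

best = "write-ahead logging" if wal_cost < shadow_cost else "shadow paging"
```

For this synthetic workload the formula favours the second mechanism; changing the update mix or I/O cost can reverse the ranking, which is exactly why an analytical model is useful for guiding mechanism selection.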
73

Parallel persistent object-oriented simulation with applications

Burdorf, Christopher January 1993 (has links)
No description available.
74

Visualising M-learning system usage data

Kamuhanda, Dany January 2015 (has links)
Data storage is an important practice for organisations that want to track their progress. The evolution of data storage technologies, from manual methods of recording data on paper or in spreadsheets to automated methods in which computers log data into databases or text files, has produced volumes of data beyond human interpretation and comprehension. One way of addressing the problem of interpreting such large amounts of data is data visualisation, which aims to convert abstract data into images that are easy to interpret. However, people often have difficulty selecting an appropriate visualisation tool and the visualisation techniques that can effectively visualise their data. This research proposes processes that can be followed to visualise data effectively. Data logged by a mobile learning system is visualised as a proof of concept to show how the proposed processes can be followed during data visualisation. These processes are summarised in a model consisting of three main components: the data, the visualisation techniques and the visualisation tool. The research makes two main contributions: the model for visualising mobile learning usage data, and the visualisation of the usage data logged by a mobile learning system. The usage data was visualised to demonstrate how students used the mobile learning system; visualising it converted the data into charts and graphs that were easy to interpret. The evaluation results indicated that the proposed process, together with the resulting visualisation techniques and tool, assisted users in effectively and efficiently interpreting large volumes of mobile learning system usage data.
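The aggregation step behind such a visualisation can be sketched as follows. The log-record format and action names are hypothetical; a real tool would render the counts as a bar chart, which is stood in for here by a plain-text rendering.

```python
from collections import Counter

# Hypothetical raw usage records as a logging component might emit them.
log_records = [
    {"student": "s1", "action": "view_lecture"},
    {"student": "s2", "action": "take_quiz"},
    {"student": "s1", "action": "take_quiz"},
    {"student": "s3", "action": "view_lecture"},
    {"student": "s1", "action": "view_lecture"},
]

# Reduce raw records to counts per activity -- the tabular form that a
# charting tool would plot as a bar chart.
counts = Counter(record["action"] for record in log_records)

def text_bar_chart(counts, width=20):
    """Render counts as a simple horizontal text bar chart."""
    peak = max(counts.values())
    return "\n".join(
        f"{action:<12} {'#' * (n * width // peak)} {n}"
        for action, n in counts.most_common()
    )
```

The same `counts` mapping could be handed directly to a plotting library; the aggregation, not the drawing, is the part the model's "data" component is concerned with.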
75

Data modelling techniques to improve student's admission criteria

Hutton, David January 2015 (has links)
Education is commonly seen as an escape from poverty and a critical path to securing a better standard of living. This is especially relevant in the South African context, where the need is so great that in one instance people were trampled to death at the gates of a higher educational institution whilst attempting to register for this opportunity. The root cause of this great need is a limited capacity and a demand which outstrips the supply. This problem is not specific to South Africa. It is, however, exacerbated in the South African context by the country's lack of infrastructure and the opening of facilities to all people. Tertiary educational institutions are faced with ever-increasing applications for a limited number of available positions. This study focuses on a dataset from the Nelson Mandela Metropolitan University's Faculty of Engineering, the Built Environment and Information Technology, with the aim of establishing guidelines for the use of data modelling techniques to improve student admissions criteria. The importance of data preprocessing was highlighted, and generalized linear regression, decision trees and neural networks were proposed and motivated for modelling. Experimentation was carried out, resulting in a number of recommended guidelines that focus on the tremendous value of feature engineering coupled with the use of generalized linear regression as a baseline. Adding multiple models was also highly recommended, since it allows greater opportunity for added insight.
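The feature-engineering-plus-baseline idea can be sketched with one engineered feature and an ordinary-least-squares fit. The subject weights and applicant records below are synthetic illustrations, not the study's data or its model.

```python
def engineered_score(maths, science, english, w=(0.5, 0.3, 0.2)):
    """Weighted school-subject average; the weights are a hypothetical choice."""
    return w[0] * maths + w[1] * science + w[2] * english

# Synthetic records: (maths, science, english, observed first-year average).
applicants = [
    (80, 75, 70, 72), (60, 65, 70, 58), (90, 85, 80, 85),
    (70, 60, 65, 60), (85, 80, 75, 78),
]

xs = [engineered_score(m, s, e) for m, s, e, _ in applicants]
ys = [y for *_, y in applicants]

# Ordinary least squares for y = a*x + b (closed form, single feature).
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(maths, science, english):
    """Predicted first-year average for an applicant's school marks."""
    return a * engineered_score(maths, science, english) + b
```

A baseline this simple is easy to interpret and gives the richer models (decision trees, neural networks) something concrete to beat, which is the role the abstract assigns to generalized linear regression.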
76

Design and analysis of a multi-backend database system for performance improvement and capacity growth.

Menon, M. Jaishankar January 1981 (has links)
No description available.
77

Design considerations for distributed data bases in computer networks /

Cheng, Tu-Ting January 1976 (has links)
No description available.
78

The design and performance of a database computer /

Kannan, Krishnamurthi January 1977 (has links)
No description available.
79

Performance analysis and design methodology for implementing database systems on new database machines /

Banerjee, Jayanta January 1979 (has links)
No description available.
80

A methodology for the performance evaluation of data base systems: an extension of the IPSS methodology /

Brownsmith, Joseph D. January 1979 (has links)
No description available.
