181

Development of an automated and integrated budgeting system

Bury, Sarah E. January 2006 (has links) (PDF)
Thesis (M.S.C.I.T.)--Regis University, Denver, Colo., 2006. / Title from PDF title page (viewed on May 25, 2006). Includes bibliographical references.
182

Modelling recovery in database systems

Scheuerl, S. January 1998 (has links)
The execution of modern database applications requires the co-ordination of a number of components such as the application itself, the DBMS, the operating system, the network and the platform. The interaction of these components makes understanding the overall behaviour of the application a complex task. As a result, the effectiveness of optimisations is often difficult to predict. Three techniques commonly available to analyse system behaviour are empirical measurement, simulation-based analysis and analytical modelling. The ideal technique is one that provides accurate results at low cost. This thesis investigates the hypothesis that analytical modelling can be used to study the behaviour of DBMSs with sufficient accuracy. In particular, the work focuses on a new model for costing recovery mechanisms called MaStA and determines if the model can be used effectively to guide the selection of mechanisms. To verify the effectiveness of the model, a validation framework is developed. Database workloads are executed on the flexible Flask architecture on different platforms. Flask is designed to minimise the dependencies between DBMS components and is used in the framework to allow the same workloads to be executed on various recovery mechanisms. Empirical analysis of executing the workloads is used to validate the assumptions about CPU, I/O and workload that underlie MaStA. Once validated, the utility of the model is illustrated by using it to select the mechanisms that provide optimum performance for given database applications. By showing that analytical modelling can be used in the selection of recovery mechanisms, the work presented makes a contribution towards a database architecture in which the implementation of all components may be selected to provide optimum performance.
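The kind of analytical costing the abstract describes can be sketched briefly. The example below is an invented illustration, not MaStA itself: the I/O categories, cost parameters, and mechanism figures are all assumptions, chosen only to show how an analytical model can rank recovery mechanisms without executing workloads.

```python
# Illustrative sketch of an analytical recovery cost model (all categories,
# parameters, and figures are invented; this is not MaStA's actual model).

from dataclasses import dataclass

@dataclass
class RecoveryMechanism:
    name: str
    # Expected I/O counts per transaction, split by access pattern.
    sequential_ios: float
    random_ios: float
    cpu_seconds: float

def estimated_cost(m: RecoveryMechanism,
                   seq_io_cost: float = 0.001,   # seconds per sequential I/O
                   rand_io_cost: float = 0.010,  # seconds per random I/O
                   ) -> float:
    """Predicted time per transaction: weighted I/O plus CPU."""
    return (m.sequential_ios * seq_io_cost
            + m.random_ios * rand_io_cost
            + m.cpu_seconds)

# Comparing two hypothetical mechanisms under the same workload:
logging = RecoveryMechanism("write-ahead logging",
                            sequential_ios=2.0, random_ios=0.5,
                            cpu_seconds=0.002)
shadowing = RecoveryMechanism("shadow paging",
                              sequential_ios=0.2, random_ios=3.0,
                              cpu_seconds=0.001)

for m in (logging, shadowing):
    print(f"{m.name}: ~{estimated_cost(m):.4f} s/transaction")
```

Under these made-up parameters the model predicts logging is cheaper because its I/O is mostly sequential; the point is that such a comparison needs only parameter estimates, not a full empirical run of each mechanism.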
183

Nominal record linkage: the development of computer strategies to achieve the family-based record linkage of nineteenth century demographic data

Welford, John Anthony January 1989 (has links)
This thesis was originally submitted for examination in March 1983. Following the result of the viva in October of that year, an appeal was lodged, and the subsequent proceedings lasted for almost four years. In October 1987 formal notification was made that the thesis could be revised and resubmitted. The prolonged length of the appeal proceedings has meant that the computing environment within which the research was set has developed significantly from the position in 1983. Indeed, in purely practical terms, the computing systems which were used at that time are no longer operational. The opportunity for making modifications and refinements to the record linkage system, and for incorporating additional primary source materials (even were sufficient human resources available), has therefore been removed. Under these circumstances, the record linkage strategies described in the revised thesis are precisely the same as those presented in the 1983 submission. For this reason, and because of the extensive delays in carrying out the appeal proceedings, it has not seemed appropriate to provide a full review of developments in the record linkage field beyond that date. Reference has, however, been made to the subsequent, crucial impact of the findings of the present research on the progress of the later phases of the 1851 Census National Sample Project, which I co-directed with Professor Michael Anderson at the University of Edinburgh. The entire conceptual and strategic approach to the organisation of family information in this project grew directly from perceptions which were central to the present research. Reference has also been made to the influence of the present research on the development of the SASPAC package, a computing system for handling the 1981 Population Census Small Area Statistics data for Great Britain. I was the chief systems designer of SASPAC, and the design and implementation methods which were adopted in this development drew heavily on the experience gained from the present research. Finally, the opportunity has been taken (in the new Section 10.2) to present the findings of some fresh analyses of 1851 household census data which serve to confirm the validity of the linkage strategies which have been developed.
184

Parallel persistent object-oriented simulation with applications

Burdorf, Christopher January 1993 (has links)
No description available.
185

Die afdwinging van sekerheid en integriteit in 'n relasionele databasisomgewing [The enforcement of security and integrity in a relational database environment]

Kennedy, Renita 30 September 2014 (has links)
M.Com. (Informatics) / Please refer to full text to view abstract
186

Rolprofiele vir die bestuur van inligtingsekerheid [Role profiles for the management of information security]

Van der Merwe, Isak Pieter 15 September 2014 (has links)
M.Com. (Informatics) / The aim of this study is to introduce a model that can be used to manage security profiles using a role-oriented approach. In chapter 1 the problem addressed and the aim of the study are introduced. In chapter 2 the different approaches used in the management of security profiles, and the security profiles in Computer Associates' TOP SECRET and IBM's RACF, are discussed. In chapter 3 the Model for Role Profiles (MoRP) is introduced and discussed. Chapter 4 considers the possible problems of MoRP and discusses an extension of it; the extended model is called ExMoRP. Chapter 5 analyses the Path Context Model (PCM) for security, and the principles of the PCM are added to ExMoRP to enhance security. In chapter 6 ExMoRP, with the principles of the PCM, is applied to a case study. In chapter 7 a methodology for the implementation of ExMoRP in an environment is introduced. In chapter 8 it is shown how the principles of ExMoRP can be applied in UNIX, in chapter 9 how they can be applied in Windows NT, and in chapter 10 how they can be applied in ORACLE. Chapter 11 reviews the management of security and present trends.
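As a rough illustration of the role-oriented approach the abstract describes, the sketch below models users, roles, and permissions in a few lines of Python. The role and permission names are invented, and the structure is a generic role-based model, not the MoRP/ExMoRP design from the thesis.

```python
# Generic role-oriented access sketch (names illustrative only; not the
# MoRP/ExMoRP structure from the thesis).

ROLE_PERMISSIONS = {
    "clerk":   {"read:accounts"},
    "auditor": {"read:accounts", "read:audit_log"},
    "dba":     {"read:accounts", "write:accounts", "read:audit_log"},
}

USER_ROLES = {
    "alice": {"clerk"},
    "bob":   {"auditor", "clerk"},
}

def permitted(user: str, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert permitted("bob", "read:audit_log")
assert not permitted("alice", "write:accounts")
```

The administrative appeal of the role-oriented view is visible even at this scale: granting or revoking a capability means editing one role profile rather than every individual user's profile.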
187

Visualising M-learning system usage data

Kamuhanda, Dany January 2015 (has links)
Data storage is an important practice for organisations that want to track their progress. The evolution of data storage technologies, from manual methods of recording data on paper or in spreadsheets to automated methods in which computers log data into databases or text files, has brought volumes of data beyond human interpretation and comprehension. One way of addressing this issue is data visualisation, which aims to convert abstract data into images that are easy to interpret. However, people often have difficulty in selecting an appropriate visualisation tool and the visualisation techniques that can effectively visualise their data. This research proposes processes that can be followed to visualise data effectively. Data logged from a mobile learning system is visualised as a proof of concept to show how the proposed processes can be followed during data visualisation. These processes are summarised into a model that consists of three main components: the data, the visualisation techniques and the visualisation tool. There are two main contributions in this research: the model to visualise mobile learning usage data, and the visualisation of the usage data logged from a mobile learning system. The mobile learning system usage data was visualised to demonstrate how students used the system. Visualisation of the usage data helped to convert the data into images (charts and graphs) that were easy to interpret. The evaluation results indicated that the proposed process, and the resulting visualisation techniques and tool, assisted users in effectively and efficiently interpreting large volumes of mobile learning system usage data.
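As a small illustration of the visualisation step the abstract describes, the sketch below aggregates usage-log records into a chart. The log format and values are invented; the actual m-learning system's data, and the tool selected in the thesis, are not assumed here.

```python
# Illustrative sketch: turning raw usage-log records into a simple chart
# (log format and values are invented, not from the m-learning system).

from collections import Counter
import matplotlib.pyplot as plt

# Hypothetical log entries: (student_id, feature_used)
log = [
    ("s1", "quiz"), ("s2", "quiz"), ("s1", "forum"),
    ("s3", "notes"), ("s2", "quiz"), ("s3", "forum"),
]

# Aggregate the abstract data before mapping it to a visual form.
counts = Counter(feature for _, feature in log)

plt.bar(list(counts.keys()), list(counts.values()))
plt.xlabel("Feature")
plt.ylabel("Number of uses")
plt.title("M-learning feature usage (sample data)")
plt.show()
```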
188

Využití technologie LINQ / Use of LINQ technology

Fexa, Marek January 2007 (has links)
This work focuses on LINQ (Language Integrated Query), Microsoft's technology for querying data. LINQ is built as a comprehensive technology that can work with various data sources, from collections of objects created for temporarily storing data in memory to data held in robust database systems. In the first part of this work I describe how data was queried before LINQ appeared and the reasons that led to its creation. I then analyse the new features of C# that were prerequisites for building language-integrated query into the base libraries. The next part focuses on the query operators and their interfaces, and provides examples of their use. I then describe the LINQ technology as a whole, setting out in detail the execution process of a LINQ query. The next chapter describes querying ADO.NET DataSets, with practical examples for both typed and untyped DataSets. In the following chapter I analyse LINQ to SQL: its implementation, chiefly the translation of LINQ queries into the SQL language, the integrated object-relational mapping tool, and change tracking in mapped objects, including concurrency problems. At the end of that chapter I sketch the future of data access with ADO.NET Entities. In the penultimate chapter a complete application is built to demonstrate the use of LINQ. The application is a thin client that uses LINQ both for reading and manipulating data from a relational database and for common work with the component that presents the data. The final part summarises the LINQ technology and describes its contributions and pitfalls.
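LINQ itself is a C# feature, but the composable, lazily evaluated query-operator style it introduced can be mimicked in other languages. The sketch below is a Python analogy, not LINQ's actual API; the data and the corresponding C# query shown in the comment are invented for illustration.

```python
# Analogy only: LINQ is a C# feature, but its composable, lazily evaluated
# query-operator style can be mimicked with Python generator expressions.

people = [
    {"name": "Ada", "age": 36},
    {"name": "Alan", "age": 41},
    {"name": "Grace", "age": 29},
]

# Roughly: people.Where(p => p.Age > 30).OrderBy(p => p.Name).Select(p => p.Name)
over_30 = (p for p in people if p["age"] > 30)     # deferred, like Where
by_name = sorted(over_30, key=lambda p: p["name"]) # forces evaluation, like OrderBy
result = [p["name"] for p in by_name]              # projection, like Select

print(result)  # ['Ada', 'Alan']
```

The deferred first step mirrors the execution model the thesis details: a LINQ query is a description of work that only runs when its results are enumerated.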
189

COMA: A system for flexible combination of schema matching approaches

Do, Hong-Hai, Rahm, Erhard 12 December 2018 (has links)
This chapter presents the generic schema match system, COmbination MAtch (COMA), which provides an extensible library of simple and hybrid match algorithms and supports a powerful framework for combining match results. The user can tailor match strategies by selecting the matchers and their combination for a given match problem. Hybrid matchers can also be configured easily by combining existing matchers using the provided combination strategies. Schema matching is the task of finding semantic correspondences between elements of two schemas. It is needed in many database applications, such as integration of web data sources, data warehouse loading, and XML message mapping. To reduce the amount of user effort as much as possible, automatic approaches combining several match techniques are required. While such match approaches have found considerable interest recently, the problem of how best to combine different match algorithms still requires further work.
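The combination idea at the heart of COMA can be illustrated with a toy example: run several simple matchers over two schemas, aggregate their scores, and keep correspondences above a threshold. The matchers, schemas, and averaging strategy below are simplified inventions, not COMA's actual matcher library or combination strategies.

```python
# Toy sketch of combining schema matchers (a simplified invention, not
# COMA's matcher library or aggregation framework).

from difflib import SequenceMatcher

def name_matcher(a: str, b: str) -> float:
    """String similarity of element names, in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def type_matcher(a: str, b: str) -> float:
    """Crude data-type compatibility score."""
    return 1.0 if a == b else 0.0

# Hypothetical schemas: element name -> data type.
SCHEMA1 = {"CustName": "string", "CustNo": "int"}
SCHEMA2 = {"CustomerName": "string", "CustomerId": "int"}

THRESHOLD = 0.5
for e1, t1 in SCHEMA1.items():
    for e2, t2 in SCHEMA2.items():
        # Combine individual matcher results by simple averaging.
        score = (name_matcher(e1, e2) + type_matcher(t1, t2)) / 2
        if score >= THRESHOLD:
            print(f"{e1} <-> {e2}: {score:.2f}")
```

Run on this sample, the averaged scores surface CustName/CustomerName and CustNo/CustomerId as correspondences; in a real system the aggregation strategy and matcher selection would themselves be configurable, which is the flexibility the chapter emphasises.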
190

Development of a Comprehensive Dental Trauma Database

Reynolds, Kyle T., DDS 29 August 2013 (has links)
No description available.
