About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
271

The effects of inheritance on the properties of physical storage models in object oriented databases

Willshire, Mary Jane
No description available.
272

Using Economic Models to Tune Resource Allocations in Database Management Systems

Zhang, Mingyi (17 November 2008)
Resource allocation in a database management system (DBMS) is a performance management process in which an autonomic DBMS makes allocation decisions based on properties such as workload business importance. We propose the use of economic models in a DBMS to guide these decisions. An economic model describes a system in terms of trade and market concepts, and such models have been applied successfully to resource allocation problems in other computer systems. In this thesis, we present approaches that use economic models to allocate single and multiple DBMS resources, such as main-memory buffer pool space and system CPU shares, to workloads running concurrently on a DBMS according to the workloads' business-importance policies. We first illustrate how an economic model can allocate a single DBMS resource, namely system CPU shares, among competing workloads. We then extend this approach to allocate multiple DBMS resources simultaneously, namely buffer pool memory and system CPU shares, so that the workloads achieve their service level agreements. Experiments conducted using IBM® DB2® databases verify the effectiveness of our approach.
Thesis (Master, Computing), Queen's University, 2008.
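The abstract stays at the level of ideas, so a rough illustration may help: in a proportional-share market, each workload's business-importance weight acts as its purchasing power, and each resource is split in proportion to that weight. The Python sketch below is a minimal, hypothetical rendering of that idea; the `Workload` class, function names, and numbers are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of economic-model-based resource allocation: a
# proportional-share "market" in which business importance acts as
# purchasing power. Illustrative only; not the thesis's implementation.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    importance: float  # business-importance weight (purchasing power)

def allocate_shares(workloads, total_units):
    """Split one resource (CPU shares, buffer-pool pages, ...) among
    workloads in proportion to their business-importance weights."""
    total = sum(w.importance for w in workloads)
    return {w.name: total_units * w.importance / total for w in workloads}

workloads = [Workload("OLTP", importance=3.0), Workload("reports", importance=1.0)]
print(allocate_shares(workloads, total_units=100))     # CPU shares -> {'OLTP': 75.0, 'reports': 25.0}
print(allocate_shares(workloads, total_units=50_000))  # buffer-pool pages, split the same way
```

Allocating a second resource with the same weights, as in the second call, mirrors the thesis's step from a single resource to multiple resources; the real system would additionally feed back performance measurements against the service level agreements.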
273

Data base design for integrated computer-aided engineering

Hatchell, Brian
No description available.
274

CAD/CAM data base management systems requirements for mechanical parts

Whelan, Peter Timothy
No description available.
275

Thermal/structural integration through relational database management

Benatar, Gil
No description available.
276

Towards Privacy Preserving of Forensic DNA Databases

Liu, Sanmin (December 2011)
Protecting the privacy of individuals is critical in forensic genetics. In kinship/identity testing, the DNA profiles in the database that are related to the user's query must be retrieved, while unrelated profiles must not be revealed to either party. The challenge is that today's DNA databases typically contain millions of profiles, far too many to query directly under current privacy-preserving cryptosystems. In this thesis, we propose a scalable system that supports privacy-preserving queries over a DNA database. We design a two-phase strategy: the first phase uses a Short Tandem Repeat (STR) index tree to quickly fetch candidate profiles from disk; it groups the loci of DNA profiles by matching probability, reducing the I/O cost of locating a particular profile. The second phase is a privacy-preserving matching engine based on an elliptic curve cryptosystem, which matches the candidates against the user's sample. In particular, we design a privacy-preserving DNA profile matching algorithm that achieves O(n) computation time and communication cost. Experimental results show that our system performs well in terms of query latency, query hit rate, and communication cost: for a database of one billion profiles, it returns results to the user in 80 seconds.
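For readers unfamiliar with the two-phase structure, a much-simplified, non-cryptographic Python sketch follows. It shows only the skeleton: phase 1 fetches candidates through an index keyed on a few highly discriminating STR loci, and phase 2 compares candidates allele by allele. The thesis performs phase 2 under an elliptic curve cryptosystem so that unrelated profiles are never revealed; that machinery is omitted here, and all names and thresholds are illustrative assumptions.

```python
# Non-cryptographic skeleton of the two-phase query. A profile maps
# locus name -> sorted allele pair, e.g. {"D3S1358": (15, 17), ...}.
from collections import defaultdict

def build_index(profiles, key_loci):
    """Phase-1 index: group profile IDs by alleles at discriminating loci."""
    index = defaultdict(list)
    for pid, profile in profiles.items():
        index[tuple(profile[locus] for locus in key_loci)].append(pid)
    return index

def query(profiles, index, sample, key_loci, min_shared):
    """Phase 1: cheap candidate fetch; phase 2: locus-by-locus comparison."""
    candidates = index.get(tuple(sample[locus] for locus in key_loci), [])
    return [pid for pid in candidates
            if sum(profiles[pid].get(l) == a for l, a in sample.items()) >= min_shared]

profiles = {
    "P1": {"D3S1358": (15, 17), "TH01": (6, 9), "FGA": (21, 24)},
    "P2": {"D3S1358": (15, 17), "TH01": (7, 9), "FGA": (20, 22)},
}
index = build_index(profiles, key_loci=["D3S1358"])
print(query(profiles, index, profiles["P1"], key_loci=["D3S1358"], min_shared=3))  # -> ['P1']
```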
277

Data Structures and Reduction Techniques for Fire Tests

Tobeck, Daniel (January 2007)
To perform fire engineering analysis, data on how an object or group of objects burns is almost always needed. This data should be collected and stored in a logical and complete fashion to allow meaningful analysis later. This thesis details the design of a new fire test Data Base Management System (DBMS), termed UCFIRE, which was built to overcome the limitations of existing fire test DBMSs and is based primarily on the FDMS 2.0 and FIREBASEXML specifications. The UCFIRE DBMS is currently the most comprehensive and extensible DBMS available in the fire engineering community and can store the following test types: Cone Calorimeter, Furniture Calorimeter, Room/Corner Test, LIFT and Ignitability Apparatus tests. Any reduction of this fire test data should be performed in an entirely mechanistic fashion rather than relying on human intuition, which is subjective. Currently no other DBMS allows semi-automation of the data reduction process. A number of pertinent data reduction algorithms were investigated and incorporated into the UCFIRE DBMS. An ASP.NET Web Service (WEBFIRE) was built to reduce the bandwidth required to exchange fire test information between the UCFIRE DBMS and a UCFIRE document stored on a web server. Several Mass Loss Rate (MLR) algorithms were investigated, and the Savitzky-Golay filtering algorithm was found to offer the best performance. This algorithm had to be further modified to autonomously filter other noisy events that occurred during the fire tests, and it was then evaluated on data from exemplar Furniture Calorimeter and Cone Calorimeter tests. The LIFT test standard (ASTM E 1321-97a) requires its ignition and flame spread data to be scrutinised but does not state how to do this. To meet these requirements, the fundamentals of linear regression were reviewed and an algorithm to mechanistically scrutinise ignition and flame spread data was developed. This algorithm seemed to produce reasonable results when used on exemplar ignition and flame spread test data.
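The Savitzky-Golay step lends itself to a concrete sketch. The Python fragment below uses SciPy's stock `savgol_filter` as a stand-in for the thesis's modified filter; the window length, polynomial order, and synthetic mass signal are illustrative assumptions. The filter's `deriv=1` option yields the smoothed derivative, and hence the mass loss rate, directly.

```python
# Sketch of MLR reduction via Savitzky-Golay filtering (illustrative
# parameters; the thesis modifies the basic filter to reject other
# noisy events in the test record).
import numpy as np
from scipy.signal import savgol_filter

t = np.arange(0.0, 600.0, 1.0)                                         # time [s], 1 Hz sampling
mass = 5.0 * np.exp(-t / 300.0) + np.random.normal(0.0, 0.02, t.size)  # noisy mass signal [kg]

# deriv=1 with delta = sample spacing returns the smoothed d(mass)/dt;
# mass is decreasing, so the mass loss rate is its negation.
mlr = -savgol_filter(mass, window_length=51, polyorder=3, deriv=1, delta=t[1] - t[0])
print(f"peak MLR ~ {1000.0 * mlr.max():.1f} g/s")
```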
278

The practice of relationship marketing in hotels

Osman, Hanaa (January 2001)
No description available.
279

The capture of meaning in database administration

Robinson, H. M. (January 1988)
No description available.
280

A multimedia information exchange of the industrial heritage of the Lower Lee Valley

Budd, Brian Douglas (January 1998)
The Lee Valley Industrial Heritage Electronic Archive (LVIHEA) is a model record of industrial buildings composed as a composite of multimedia data files relevant to the interpretation of the region's dynamic industrial environment. The design criteria concerning natural, human and artificial resources are applicable to education and heritage management strategies. The prototype model was evaluated in terms of its efficacy and effectiveness with designated user groups. The developed model will enable qualitative and quantitative analyses concerning the economic, social and industrial history of the region. It can be used as a pedagogic tool for instruction in the principles of structured data design, construction, storage and retrieval, and for techniques of data collection. Furthermore, the data sets can be closely analysed and manipulated for interpretative purposes.

Chapter one attempts to define the Lee Valley in terms of its geographic, historical, economic and societal context. The aims and resources of the project are outlined and the study is placed in the bibliographic context of similar studies. Thereafter it addresses the processes leading to, and a description of, the structure of the prototype model. A paper model is presented, and the data structures conforming to or compatible with established planning, archiving and management protocols and strategies are described and evaluated.

Chapter two is a detailed description and rationale of the archive's data files and teaching and learning package. It outlines procedures for multimedia data collection and digitisation and provides an evaluative analysis.

Chapter three looks at the completed prototype and reviews the soft systems methodology approach to problem analysis used throughout the project. Sections examining the LVIHEA in use and the practical issues of disseminating it follow. The chapter concludes by reviewing the significance of the research and indicates possible directions for further research.

The survey is artefact rather than document led and begins with the contemporary landscape before "excavating" to reveal first the recent and then the more distant past. However, many choices for inclusion are necessarily reactive rather than proactive, made in response to the regular "crises" in which conservation is just one consideration in a complex development. Progressive strategies are sometimes sacrificed for the immediate opportunity to record information about an artefact under imminent threat of destruction. It is acknowledged that the artefact (building) would usually disappear before its associated documentation, so it was imperative to obtain as much basic detail as possible about as many sites as possible. It is hoped that greater depth can be achieved by tracking down the documentation to its repositories when time permits. Amenity groups had already focussed their attention on many of the more "interesting" sites, and every opportunity was taken to incorporate their findings into the LVIHEA.

This study provides an insight into the cycle of development and decline of an internationally important industrial landscape. It does so in a structured environment incorporating modern digital technology while providing a framework for continuing study.
