  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
701

A Database System Concept to Support Flight Test Measurement System Design and Operation

Oosthoek, Peter B. October 1993 (has links)
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Information management is essential during the design and operation of flight test measurement systems used for aircraft airworthiness certification. The reliability of the data produced by the real-time and post-processing stages depends heavily on the reliability of the information available about the flight test measurement system in use. Databases are well suited to this information management task, but they require additional application software to store, manage and retrieve the measurement system configuration data in a way that supports all persons, aircraft systems and ground-based systems involved in the design and operation of flight test measurement systems. At the Dutch National Aerospace Laboratory (NLR), a "Measurementsystem Configuration DataBase" (MCDB) is being developed under contract with the Netherlands Agency for Aerospace Programs (NIVR) and in cooperation with Fokker to provide the required information management. This paper describes the functional and operational requirements for the MCDB, its data contents and computer configuration, and its intended mode of operation.
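The abstract does not include a data model, but a minimal sketch of the kind of configuration data such an MCDB manages might look as follows; all class names, identifiers and calibration values are hypothetical illustrations, not NLR's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str        # e.g. "ACC-014" (hypothetical identifier)
    kind: str             # "accelerometer", "pressure", ...
    location: str         # installation position on the aircraft

@dataclass
class MeasurementChannel:
    channel_id: str
    sensor: Sensor
    sample_rate_hz: float
    gain: float           # linear calibration: value = gain * raw + offset
    offset: float

    def to_engineering_units(self, raw_count: int) -> float:
        """Convert a raw telemetry count to an engineering value."""
        return self.gain * raw_count + self.offset

# A real-time or post-processing task would look up the channel
# configuration before converting incoming raw counts.
accel = Sensor("ACC-014", "accelerometer", "left wing tip")
chan = MeasurementChannel("CH-102", accel, sample_rate_hz=512.0, gain=0.01, offset=-1.2)
print(chan.to_engineering_units(2048))   # ~19.28
```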
702

Databasdesign: Nulägesanalys av normalisering [Database design: a current-state analysis of normalization]

Wesslén Weiler, Johannes, Öhrn, Emelie January 2016 (has links)
Normalization was introduced in 1970 with the purpose of organizing data in relational databases so as to avoid redundant data and reduce the risk of anomalies. Today there are indications that a more nuanced view of normalization is needed, as modern databases face new challenges and requirements. This work takes the form of a case study in which three databases from different organizations are analyzed. With the normal forms as a starting point, an explorative analysis is carried out to identify the aspects that affect how normalization is practiced in industry. The conclusion is that it is difficult for a party independent of the database to determine and interpret whether the normal forms are fulfilled. Factors affecting database normalization are the developer's intuition, the users' impact on data quality, and the technical debt caused by quick fixes.
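The thesis abstract gives no code; the following sketch only illustrates the redundancy and update anomaly that normalization removes, with invented table contents.

```python
# Unnormalized rows: customer data is repeated on every order, so a
# change of address must be applied in many places (update anomaly).
orders = [
    {"order_id": 1, "customer_id": "C1", "customer_city": "Uppsala", "amount": 120},
    {"order_id": 2, "customer_id": "C1", "customer_city": "Uppsala", "amount": 80},
    {"order_id": 3, "customer_id": "C2", "customer_city": "Lund",    "amount": 45},
]

# Decomposition guided by the functional dependency
# customer_id -> customer_city (toward third normal form):
customers = {r["customer_id"]: {"customer_city": r["customer_city"]} for r in orders}
orders_3nf = [{"order_id": r["order_id"],
               "customer_id": r["customer_id"],
               "amount": r["amount"]} for r in orders]

print(customers)    # each city stored exactly once
print(orders_3nf)   # orders reference customers by key only
```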
703

Main-Memory Query Processing Utilizing External Indexes

Truong, Thanh January 2016 (has links)
Many applications require storage and indexing of new kinds of data in main memory, e.g. color histograms, textures, shape features, gene sequences, sensor readings, or financial time series. Even though many domain-specific index structures have been developed, very few of them are implemented in any database management system (DBMS), usually only B-trees and hash indexes. A major reason is that the manual effort to include a new index implementation in a regular DBMS is very costly and time-consuming, because it requires integration with all components of the DBMS kernel. To alleviate this, some extensible indexing frameworks exist. However, they all require re-engineering the index implementation, which is a problem when the index has third-party ownership, when only binary code is available, or simply when the index implementation is complex to re-engineer. Therefore, the DBMS should allow new index implementations to be included without code changes and without performance degradation. Furthermore, for high performance the query processor needs knowledge of how to process queries so that a plugged-in index is utilized. Moreover, it is important that all functionality of a plugged-in index implementation is correct. The extensible main-memory database system (MMDB) Mexima (Main-memory External Index Manager) addresses these challenges. It enables transparent plugging-in of main-memory index implementations without code changes. Index-specific rewrite rules transform complex queries to utilize the indexes. Automatic test procedures validate their correctness based on user-provided index meta-data. Moreover, the same optimization framework can also optimize complex queries sent to a back-end DBMS by exposing hidden indexes to its query optimizer. Altogether, Mexima is a complete and extensible platform for transparent index integration, utilization, and evaluation.
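Mexima's actual plug-in and rewrite-rule interfaces are not described in the abstract; the sketch below only illustrates the general idea of an index-specific rewrite rule that redirects a range predicate to a plugged-in main-memory index. All names are hypothetical, not Mexima's API.

```python
class ExternalIndex:
    """Stand-in for a plugged-in main-memory index (e.g. third-party code)."""
    def __init__(self, rows, key):
        self.key = key
        self.sorted_rows = sorted(rows, key=lambda r: r[key])

    def range_scan(self, lo, hi):
        return [r for r in self.sorted_rows if lo <= r[self.key] <= hi]

def rewrite_range_query(plan, indexes):
    """If the filtered attribute is covered by a plugged-in index,
    replace the full scan with an index range scan."""
    attr, lo, hi = plan["filter"]
    if attr in indexes:
        return {"index_range_scan": (attr, lo, hi)}   # index utilised
    return plan                                        # fall back to scan + filter

rows = [{"price": p} for p in (3, 9, 14, 27)]
indexes = {"price": ExternalIndex(rows, "price")}
plan = {"scan": "products", "filter": ("price", 5, 20)}
print(rewrite_range_query(plan, indexes))
print(indexes["price"].range_scan(5, 20))   # rows with price 9 and 14
```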
704

Performance modelling of database designs using a queueing networks approach : an investigation in the performance modelling and evaluation of detailed database designs using queueing network models

Osman, Rasha Izzeldin Mohammed January 2010 (has links)
Databases form a common component of many software systems, including mission-critical transaction processing systems and multi-tier Internet applications. There is a large body of research on the performance of database management system components, while studies of overall database system performance have been limited. Moreover, performance models specifically targeted at the database design have not been extensively studied. This thesis attempts to address this concern by proposing a performance evaluation method for database designs based on queueing network models. The method is targeted at designs of large databases in which I/O is the dominant cost factor. The database design queueing network performance model is suitable for providing what-if comparisons of database designs before database system implementation. A formal specification that captures the essential database design features while keeping the performance model sufficiently simple is presented. Furthermore, the simplicity of the modelling algorithms permits a direct mapping between database design entities and queueing network models. This affords a more applicable performance model that provides relevant feedback to database designers and can be straightforwardly integrated into early database design phases. The accuracy of the modelling technique is validated by modelling an open-source implementation of the TPC-C benchmark. The contribution of this thesis is considered significant in that the majority of performance evaluation models for database systems target capacity planning or overall system properties, with limited work on detailed database transaction processing and behaviour. In addition, this work is deemed an improvement over previous methodologies in that the transaction is modelled at a finer granularity, and the database design queueing network model provides for the explicit representation of active database rules and referential integrity constraints.
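The thesis's own formulation is more detailed, but the textbook open queueing-network relations such a model builds on can be summarised as follows; the symbols are generic, not the thesis's notation.

```latex
% Generic open queueing-network relations: device k (disk or CPU) has
% visit count V_k and mean service time S_k; transactions arrive at rate \lambda.
\begin{align*}
  \rho_k &= \lambda\, V_k S_k
    && \text{utilisation of device } k \\
  R &= \sum_k \frac{V_k S_k}{1 - \rho_k}
    && \text{mean transaction response time, valid while } \rho_k < 1
\end{align*}
```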
705

Advanced query processing on spatial networks

Yiu, Man-lung., 姚文龍. January 2006 (has links)
published_or_final_version / abstract / Computer Science / Doctoral / Doctor of Philosophy
706

Techniques for managing and analyzing unconventional data

Ho, Wai-shing., 何偉成. January 2004 (has links)
published_or_final_version / abstract / Computer Science and Information Systems / Doctoral / Doctor of Philosophy
707

An empirical study of the use of conceptual models for mutation testing of database application programs

Wu, Yongjian, 吳勇堅 January 2006 (has links)
published_or_final_version / abstract / Computer Science / Master / Master of Philosophy
708

Tabu search-based techniques for clustering data sets

黃頌詩, Wong, Chung-sze. January 2001 (has links)
published_or_final_version / Mathematics / Master / Master of Philosophy
709

The influence of live customer service on consumers' likelihood of disclosing personal information

Li, Dan, active 21st century 08 August 2014 (has links)
Live customer service has been used by many e-commerce brands as a method to gain consumers' personal information. Previous research has found that live service agents have a positive influence on consumers' perceived service quality and trust. This research aims to examine whether certain types of live customer service generate better website and brand perceptions from the consumer and ultimately help in gaining consumers' personal information. Results of this experimental design show that avatar selection and exposure did not significantly differ in their effects on service quality, trust, attitudes, purchase intention, or likelihood of disclosing personal information. It was also found that customers were significantly more likely to select agents of the same gender. / text
710

Querying and extracting heterogeneous graphs from structured data and unstructured content

Soussi, Rania 22 June 2012 (has links) (PDF)
The present work introduces a set of solutions for extracting graphs from enterprise data and facilitating information search over these graphs. First of all, we define a new graph model called the SPIDER-Graph, which models complex objects and permits the definition of heterogeneous graphs. Furthermore, we have developed a set of algorithms to extract the content of an enterprise database and represent it in this new model. This representation allows us to discover relations that exist in the data but are hidden due to their poor compatibility with the classical relational model. Moreover, in order to unify the representation of all the data of the enterprise, we have developed a second approach which extracts from unstructured data an enterprise ontology containing the most important concepts and relations that can be found in a given enterprise. Having extracted the graphs from the relational databases and documents using the enterprise ontology, we propose an approach which allows users to extract an interaction graph between a set of chosen enterprise objects. This approach is based on a set of relation patterns extracted from the graph and on the concepts and relations of the enterprise ontology. Finally, information retrieval is facilitated using a new visual graph query language called GraphVQL, which allows users to query graphs by visually drawing a pattern for the query. This language covers different query types, from simple selection and aggregation queries to social network analysis queries.
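The SPIDER-Graph extraction algorithms themselves are not given in the abstract; the sketch below merely illustrates the general idea of turning relational rows and foreign keys into a heterogeneous graph. Tables, columns and names are invented for the example.

```python
# Rows become typed nodes; foreign-key references become typed edges.
employees = [{"id": 1, "name": "Ada",   "dept_id": 10},
             {"id": 2, "name": "Boris", "dept_id": 10}]
departments = [{"id": 10, "name": "R&D"}]

nodes, edges = [], []
for d in departments:
    nodes.append(("Department", d["id"], {"name": d["name"]}))
for e in employees:
    nodes.append(("Employee", e["id"], {"name": e["name"]}))
    edges.append(("WORKS_IN", ("Employee", e["id"]), ("Department", e["dept_id"])))

print(len(nodes), "nodes,", len(edges), "edges")
```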
