  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

A DATABASE SYSTEM CONCEPT TO SUPPORT FLIGHT TEST - MEASUREMENT SYSTEM DESIGN AND OPERATION

Oosthoek, Peter B. 10 1900 (has links)
International Telemetering Conference Proceedings / October 25-28, 1993 / Riviera Hotel and Convention Center, Las Vegas, Nevada / Information management is of essential importance during the design and operation of flight test measurement systems used for aircraft airworthiness certification. The reliability of the data generated by the real-time and post-processing processes depends heavily on the reliability of all provided information about the flight test measurement system in use. Databases are well suited to the task of information management. They do, however, need additional application software to store, manage, and retrieve the measurement system configuration data in a specified way, so as to support all persons and all aircraft- and ground-based systems involved in the design and operation of flight test measurement systems. At the Dutch National Aerospace Laboratory (NLR), a "Measurementsystem Configuration DataBase" (MCDB) is being developed under contract with the Netherlands Agency for Aerospace Programs (NIVR) and in cooperation with Fokker to provide the required information management. This paper addresses the functional and operational requirements for the MCDB, its data contents and computer configuration, and its intended way of operation.
12

A database system architecture supporting coexisting query languages and data models

Hepp, Pedro E. January 1983 (has links)
Database technology is already recognised and increasingly used in administering and organising large bodies of data and as an aid in developing software. This thesis considers the applicability of this technology in small but potentially expanding application environments with users of varying levels of competence. A database system architecture with the following main characteristics is proposed: 1. It is based on a set of software components that facilitates the implementation and evolution of a software development environment centered on a database. 2. It enables the implementation of different user interfaces to provide adequate perceptions of the information content of the database according to the user's competence, familiarity with the system, or the complexity of the processing requirements. 3. It is oriented toward databases that require moderate resources from the computer system to start an application. Personal or small-group databases are likely to benefit most from this approach.
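The architecture's central idea, several coexisting user interfaces over one underlying database, can be sketched roughly as follows. The record fields, interface names, and matching logic below are invented for illustration and are not taken from the thesis:

```python
# Illustrative sketch only: two user-level query interfaces (a simple
# field-match style for novices, a predicate style for experts) sharing one
# internal query mechanism over the same stored records.

RECORDS = [
    {"title": "Report A", "author": "smith", "year": 1982},
    {"title": "Report B", "author": "jones", "year": 1983},
]

def internal_query(records, predicate):
    """Single internal query mechanism shared by every user interface."""
    return [r for r in records if predicate(r)]

def simple_interface(field, value):
    """Novice-level interface: exact match on one field."""
    return internal_query(RECORDS, lambda r: r.get(field) == value)

def expert_interface(expression):
    """Expert-level interface: arbitrary predicate over a record dict."""
    return internal_query(RECORDS, expression)

print(simple_interface("author", "jones"))
print(expert_interface(lambda r: r["year"] >= 1983))
```

Both interfaces resolve to the same internal query form, which is the property that lets them coexist over one database.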
13

Workflow Management System Analysis and Construction Design Framework for Large-scale Enterprise

Zhao, Die January 2007 (has links)
Nowadays, with the rapid development of information technology, workflow management technology has become a basic component of enterprise information system construction; the research and application level of workflow management technology directly determines the level of an enterprise information system. Using a workflow management system within an enterprise can improve the operating efficiency of business processes and enhance the enterprise's competitiveness. This paper describes the origin and status of workflow management and builds its analysis on the workflow reference model of the Workflow Management Coalition. It discusses the overall requirements of a workflow management system, especially for large-scale enterprises, covering four essential components: workflow engines, process design tools, administration and monitoring tools, and client tools. It also studies workflow modelling techniques and adopts a method based on UML activity diagrams for modelling. Finally, it proposes a design and realization method for constructing a workflow management structure that utilizes a J2EE lightweight workflow engine to cope with changing business processes and diverse deployment environments.
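As a loose illustration of the engine component described above (not the thesis's J2EE implementation), a workflow engine's core job is to advance a process instance through a modelled sequence of activities. All activity names here are invented:

```python
# Toy sketch of a workflow engine driving one process instance through a
# linear activity sequence taken from a process model. Real engines also
# handle branching, parallel splits, and work-item assignment.

class WorkflowEngine:
    def __init__(self, activities):
        self.activities = activities  # ordered activity list from the model
        self.position = 0             # index of the current activity

    def current(self):
        """Return the activity the instance is currently waiting on."""
        return self.activities[self.position]

    def complete_current(self):
        """Mark the current activity done; return True while work remains."""
        self.position += 1
        return self.position < len(self.activities)

engine = WorkflowEngine(["submit", "approve", "archive"])
engine.complete_current()
print(engine.current())  # approve
```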
14

獨立專家系統與資料庫鬆散結合介面之研究 / The Study of An Interface for Loose Coupling of Independent Expert Systems and Databases

黃允暐 Unknown Date (has links)
智慧型資料庫的研究重點,在於發展一個能結合專家系統的知識表達、推論及問題解決能力與資料庫的大量資料儲存管理能力的資訊系統。目前有四種主要類型的結合方式:(1)增強式的資料庫系統、(2)增強式的專家系統、(3)獨立專家系統與資料庫的鬆散結合、(4)專家資料庫。   由於目前企業中已有許多資料庫系統在運作之中,上述的結合方式以獨立專家系統與資料庫的鬆散結合方式較為合適,其它三類則因需要對現有系統配合新的架構加以修改,在成本效益上較不經濟。   因此,本論文針對獨立專家系統與資料庫的鬆散結合架構進行研究。首先從文獻中瞭解其它相關研究對結合所提出架構的優缺之處,而從資料庫的獨立性、專家系統的獨立性、系統的複雜度、系統的擴充性等四點考慮,提出一個較為獨立、有效率,且建立容易的系統架構。並以財務診斷此一特定應用領域,成功地發展出一套結合財務診斷專家系統與會計資料庫系統的雛形系統,驗證了本架構的可行性。最後並提出進一步的研究建議,以供後續研究參考。 / The research objective of integrating expert systems and database systems is to develop an information system that combines the capabilities of expert systems (knowledge representation, inference, and problem solving) with the capabilities of databases (storing and managing huge amounts of data). There are four main ways to integrate them: (1) enhanced database systems, (2) enhanced expert systems, (3) loose coupling of independent expert systems and databases, (4) expert database systems. Since many database systems are already in operation in enterprises, the loose coupling of independent expert systems and databases is considered the most appropriate approach; the other three require major modification of current systems and are therefore less cost-effective. The focus of this research is on the loose coupling of independent expert systems and databases. The strengths and weaknesses of the relevant architectures are identified through a literature review. Four major considerations drive the proposed architecture: database independence, expert system independence, system complexity, and system extensibility. The proposed architecture is comparatively independent, efficient, and easy to build. This research has also built a prototype system coupling a financial diagnosis expert system with an accounting database; the feasibility of the prototype verified the usefulness of the proposed architecture. Suggestions for further research are also given.
15

Heterogeneity-Aware Placement Strategies for Query Optimization

Karnagel, Tomas 31 May 2017 (has links) (PDF)
Computing hardware is changing from systems with homogeneous CPUs to systems with heterogeneous computing units like GPUs, Many Integrated Cores, or FPGAs. This trend is caused by scaling problems of homogeneous systems, where heat dissipation and energy consumption are limiting further growth in compute performance. Heterogeneous systems provide differently optimized computing hardware, which allows different operations to be computed on the most appropriate computing unit, resulting in faster execution and lower energy consumption. For database systems, this is a new opportunity to accelerate query processing, allowing faster and more interactive querying of large amounts of data. However, the current hardware trend is also a challenge, as most database systems do not support heterogeneous computing resources, and it is not clear how best to support these systems. In the past, mainly single operators were ported to different computing units, showing great results but lacking a system-wide application. To efficiently support heterogeneous systems, a systems approach to query processing and query optimization is needed. In this thesis, we tackle the optimization challenge in detail. As a starting point, we evaluate three different approaches on isolated use cases to assess their advantages and limitations. First, we evaluate a fork-join approach to intra-operator parallelism, where the same operator is executed on multiple computing units at the same time, each execution on a different data partition. Second, we evaluate using one computing unit statically to accelerate one operator, which provides high code-optimization potential due to this static and pre-known usage of hardware and software. Third, we evaluate dynamically placing operators onto computing units depending on the operator, the available computing hardware, and the given data sizes. We argue that the first and second approaches suffer from multiple overheads or high implementation costs.
The third approach, dynamic placement, shows good performance while being highly extensible to different computing units and different operator implementations. To automate this dynamic approach, we first propose general placement optimization for query processing. This general approach includes runtime estimation of operators on different computing units as well as two approaches for defining the actual operator placement according to the estimated runtimes. The two placement approaches are local optimization, which decides the placement locally at run-time, and global optimization, where the placement is decided at compile-time while allowing a global view for enhanced data sharing. The main limitation of the latter is its high dependency on cardinality estimation of intermediate results, as estimation errors for the cardinalities propagate to the operator runtime estimation and placement optimization. Therefore, we propose adaptive placement optimization, which makes the placement optimization fully independent of cardinality estimation, effectively eliminating the main source of inaccuracy for runtime estimation and placement optimization. Finally, we define an adaptive placement sequence incorporating all our proposed techniques of placement optimization. We implement this sequence as a virtualization layer between the database system and the heterogeneous hardware. Our implementation builds on preexisting interfaces to the database system and the hardware, allowing non-intrusive integration into existing database systems. We evaluate our techniques using two different database systems and two different OLAP benchmarks, accelerating query processing through heterogeneous execution.
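The local placement optimization described above can be sketched as picking, for each operator, the computing unit with the lowest estimated runtime. The cost model, unit names, and numbers below are invented placeholders, not the thesis's estimator:

```python
# Illustrative sketch (not the thesis implementation): local placement
# optimization assigns each operator, independently and at run-time, to the
# computing unit with the lowest estimated runtime.

ESTIMATES = {  # assumed base cost in ms per (operator, unit) pair
    ("scan", "CPU"): 40.0, ("scan", "GPU"): 15.0,
    ("join", "CPU"): 120.0, ("join", "GPU"): 35.0,
    ("aggregate", "CPU"): 10.0, ("aggregate", "GPU"): 25.0,
}

def estimate_runtime(op, unit, rows):
    """Scale a per-unit base cost by input size (a crude linear model)."""
    return ESTIMATES[(op, unit)] * rows / 1000.0

def place_locally(plan, units, rows):
    """Local optimization: decide each operator's unit independently."""
    placement = {}
    for op in plan:
        placement[op] = min(units, key=lambda u: estimate_runtime(op, u, rows))
    return placement

print(place_locally(["scan", "join", "aggregate"], ["CPU", "GPU"], rows=10_000))
# aggregate stays on the CPU; scan and join move to the GPU
```

Global optimization would instead choose the whole placement at compile-time, which is where the cardinality-estimation dependency discussed above comes in.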
16

Využití databázových systémů v sw balících typu ERP/ERP II / Usage of database systems in robust ERP/ERP II sw packages

Vašek, Martin January 2010 (has links)
This thesis concerns the use of database systems in ERP / ERP II software packages. The goal is to define the position of ERP / ERP II systems in the information system market. Connected with this topic are the characteristics of database systems and the definition of their specific position with respect to ERP / ERP II solutions. Besides classical solutions, where the whole information system is situated "inside" a company, new approaches that rely on an external provider of ERP / ERP II and database services, particularly SaaS and Cloud Computing technology, are also analyzed. The thesis further evaluates the benefits and threats of these new business models with respect to different sizes of ERP / ERP II solutions. After the introductory theoretical chapters, respondents were chosen from groups of producers and distributors of ERP / ERP II products and a survey was carried out through questionnaire research. Its goal is to clarify the main reasons for the choice of specific database platforms used with different types of ERP / ERP II solutions. Afterwards, with the aid of defined hypotheses, the thesis explains the degree of independence of robust ERP / ERP II software packages from database platforms. In the closing parts of the thesis, individual database platforms are compared with each other with respect to their suitability for use in ERP / ERP II systems. Database systems are closely analyzed from several points of view. On the basis of the ascertained theoretical and empirical frequency of particular database solutions, the dominant market players are determined, and with the aid of a multicriteria comparison the reasons for their success among competitors are explained. Finally, the anticipated direction in which the market for database systems intended for ERP / ERP II products should grow is outlined.
17

Management of Time Series Data

Matus Castillejos, Abel, n/a January 2006 (has links)
Every day large volumes of data are collected in the form of time series. Time series are collections of events or observations, predominantly numeric in nature, sequentially recorded on a regular or irregular time basis. Time series are becoming increasingly important in nearly every organisation and industry, including banking, finance, telecommunication, and transportation. Banking institutions, for instance, rely on the analysis of time series for forecasting economic indices, elaborating financial market models, and registering international trade operations. More and more time series are being used in this type of investigation and are becoming a valuable resource in today's organisations. This thesis investigates and proposes solutions to some current and important issues in time series data management (TSDM), using Design Science Research Methodology. The thesis presents new models for mapping time series data to relational databases which optimise the use of disk space, can handle different time granularities and status attributes, and facilitate time series data manipulation in a commercial Relational Database Management System (RDBMS). These new models provide a good solution for current time series database applications with an RDBMS and are tested with a case study and a prototype using financial time series information. Also included is a temporal data model for illustrating time series data lifetime behaviour, based on a new set of time dimensions (confidentiality, definitiveness, validity, and maturity times) specially targeted at time series data, which are introduced to correctly represent the different statuses of time series data on a timeline. The proposed temporal data model gives a clear and accurate picture of the time series data lifecycle. Formal definitions of these time series dimensions are also presented. In addition, a time series grouping mechanism in an extensible commercial relational database system is defined, illustrated, and justified.
The extension consists of a new data type and a corresponding rich set of routines that support modelling and operating on time series information at a higher level of abstraction. It extends the capability of the database server to organise and manipulate time series in groups. Thus, this thesis presents a new data type, referred to as GroupTimeSeries, together with its corresponding architecture and support functions and operations. Implementation options for the GroupTimeSeries data type in relational-based technologies are also presented. Finally, a framework for TSDM with enough expressiveness for the main requirements of time series applications and the management of their data is defined. The framework aims at providing initial domain know-how and requirements for time series data management, avoiding the impracticability of designing a TSDM system on paper from scratch. Many aspects of time series applications, including the way time series data are organised at the conceptual level, are addressed. The central abstractions for the proposed domain-specific framework are the notions of business sections, groups of time series, and the time series itself. The framework integrates a comprehensive specification of structural and functional aspects of time series data management. A formal framework specification using conceptual graphs is also explored.
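A rough sketch of the general idea of mapping grouped time series onto relational tables (not the thesis's actual schema, nor its GroupTimeSeries implementation) might look like this, using SQLite for self-containment:

```python
import sqlite3

# Minimal illustration: regular time series stored relationally and operated
# on per group, in the spirit of a GroupTimeSeries-style routine. Table and
# column names are invented for this sketch.

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE series_group (group_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE series (series_id INTEGER PRIMARY KEY, group_id INTEGER,
                         label TEXT);
    CREATE TABLE observation (series_id INTEGER, ts TEXT, value REAL,
                              PRIMARY KEY (series_id, ts));
""")
con.execute("INSERT INTO series_group VALUES (1, 'exchange rates')")
con.execute("INSERT INTO series VALUES (10, 1, 'USD/EUR')")
con.executemany("INSERT INTO observation VALUES (10, ?, ?)",
                [("2006-01-01", 0.25), ("2006-01-02", 0.75)])

# Operate on the whole group at once rather than series by series
rows = con.execute("""
    SELECT s.label, COUNT(*), AVG(o.value)
    FROM series s JOIN observation o ON o.series_id = s.series_id
    WHERE s.group_id = 1 GROUP BY s.label
""").fetchall()
print(rows)  # [('USD/EUR', 2, 0.5)]
```

The composite primary key on (series_id, ts) is one simple way to enforce one observation per timestamp per series; the thesis's models additionally address disk-space optimisation and time granularities, which this sketch ignores.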
19

Návrh databázového systému pro správu akcí / Proposal of Database System for Event Management

Sekula, Radim January 2017 (has links)
The Master’s thesis deals with the design of a database system for event management according to the requirements of the client, ANeT-Advanced Network Technology, Ltd. The first part of the thesis summarizes the theoretical foundations of database systems and of object-oriented programming, focusing on the .NET Framework. In the second part the current situation and the specific needs of the company are analyzed, and based on this analysis a solution proposal is created in the third part.
20

Návrh a implementace rozhraní pro zpracování rámců xPON / Design and implementation of xPON frame processing interface

Vais, Zdeněk January 2020 (has links)
This thesis focuses on a system for persisting GPON communication. The theoretical part deals with GPON and NG-PON optical networks, NoSQL database systems, and the MongoDB database. The practical part contains the design of a database schema for the MongoDB database and source code in the Python and C# programming languages for working with this database. The thesis concludes with performance testing, proving that the database design and source code implementation are capable of handling real-world traffic.
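As a hedged illustration of the kind of document shape such a MongoDB schema might use (all field names here are assumptions, not the thesis's actual design), one captured frame could be persisted as a nested document; with pymongo, the same dict would be passed to `collection.insert_one`:

```python
import datetime

# Sketch of a nested document for one captured PON frame. Nested dicts map
# naturally to BSON documents, which is one reason a document store suits
# heterogeneous frame captures. Field names are illustrative assumptions.

def make_frame_doc(frame_id, onu_id, payload_hex, captured_at=None):
    """Build one frame document ready for insertion into a collection."""
    return {
        "_id": frame_id,          # natural key: capture sequence number
        "onu_id": onu_id,         # which optical network unit sent the frame
        "captured_at": captured_at
            or datetime.datetime.now(datetime.timezone.utc),
        "payload": {"hex": payload_hex, "length": len(payload_hex) // 2},
    }

doc = make_frame_doc(1, onu_id=7, payload_hex="deadbeef")
print(doc["payload"])  # {'hex': 'deadbeef', 'length': 4}
```

Storing the payload length alongside the hex string is a denormalisation choice: it trades a few bytes per document for index-friendly queries such as "all frames longer than N bytes".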
