61

Databasteknologier i svenska företag och organisationer och hinder för dess användning / Database technologies in Swedish companies and organizations and obstacles to their use

Andersson, Ulf January 2000 (has links)
Databases and database systems form the foundation of a large part of the activities that companies and organizations engage in today. Using modern database technologies can yield a more secure and flexible system, which in turn can give those who use them a competitive advantage. This work investigates the extent to which a number of modern database technologies are used within companies and organizations in Sweden, and whether cost is the largest obstacle to further development of existing systems. First, a number of modern database technologies are described, to give an idea of the possibilities available in the database field. Thereafter, the survey itself is reported, which used telephone interviews as the method for collecting material. The results show that certain technologies, such as databases connected to the Internet, are already in use and will to a large extent be used even more in the future. The survey revealed several different obstacles to the development of database systems, of which cost is only one.
62

Sur la dépendance des queues de distributions / On the tail dependence of distributions

Aleiyouka, Mohalilou 27 September 2018 (has links)
Modeling the dependence between several variables can rely either on the correlation between the variables or on other measures that characterize the tail dependence of their distributions. In this thesis, we are interested in the tail dependence of distributions and present several properties and results. First, we obtain the tail dependence coefficient for the generalized hyperbolic law according to the different parameter values of this law. Then, we present properties and results for the extremal dependence coefficient in the case where the random variables follow a unit Fréchet law. Finally, we turn to real-time database management systems (RTDBMS), with the goal of proposing probabilistic models to study the behavior of real-time transactions in order to optimize their performance.
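For context, the central quantity in the first part of this work, the (upper) tail dependence coefficient of a pair of random variables (X, Y) with marginal distribution functions F and G, is conventionally defined as a limiting conditional tail probability. The following is the standard definition, not a formula quoted from the thesis:

\[
  \lambda_U \;=\; \lim_{u \to 1^{-}} \Pr\bigl( F(X) > u \,\big|\, G(Y) > u \bigr)
\]

The pair is said to be (upper) tail dependent when \lambda_U > 0 and tail independent when \lambda_U = 0; the thesis derives this coefficient for the different parameter regimes of the generalized hyperbolic law.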
63

A Plan for OLAP

Jaecksch, Bernhard, Lehner, Wolfgang, Faerber, Franz 30 May 2022 (has links)
So far, data warehousing has often been discussed in the light of complex OLAP queries and as a reporting facility for operational data. We argue that business planning as a means to generate plan data is an equally important cornerstone of a data warehouse system, and we propose it to be a first-class citizen within an OLAP engine. We introduce an abstract model describing the relevant aspects of the planning process in general and the requirements it poses to a planning engine. Furthermore, we show that business planning lends itself well to parallelization and benefits from a column-store much like traditional OLAP does. We then develop a physical model specifically targeted at a highly parallel column-store, and with our implementation, we show nearly linear scaling behavior.
64

GignoMDA

Habich, Dirk, Richly, Sebastian, Lehner, Wolfgang 03 July 2023 (has links)
Database systems are often used as the persistence layer for applications, which implies that database schemas are generated from transient programming class descriptions. The basic idea of the MDA approach generalizes this principle by providing a framework to generate applications (and database schemas) for different programming platforms. Within our GignoMDA project [3]--which is the subject of this demo proposal--we have extended classic concepts for code generation: our approach provides a single point of truth describing all aspects of database applications (e.g. database schema, project documentation,...) with great potential for cross-layer optimization. These cross-layer optimization hints are a novel way to attack the challenging problem of globally optimizing multi-tier database applications. The demo at VLDB comprises an in-depth explanation of our concepts and of the prototypical implementation, directly demonstrating the modeling and automatic generation of database applications.
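To illustrate the generation principle this abstract builds on, deriving a database schema from a transient class description, here is a minimal Python sketch. It is not GignoMDA code: the Customer class, the TYPE_MAP, and the ddl_for helper are invented for illustration, and the real project is model-driven and covers far more aspects than DDL.

from dataclasses import dataclass, fields

# Hypothetical mapping from Python types to SQL column types.
TYPE_MAP = {int: "INTEGER", float: "DOUBLE PRECISION", str: "VARCHAR(255)"}

@dataclass
class Customer:          # a "transient programming class description"
    id: int
    name: str
    balance: float

def ddl_for(cls) -> str:
    """Derive a CREATE TABLE statement from a dataclass definition."""
    cols = ", ".join(f"{f.name} {TYPE_MAP[f.type]}" for f in fields(cls))
    return f"CREATE TABLE {cls.__name__.lower()} ({cols});"

print(ddl_for(Customer))
# CREATE TABLE customer (id INTEGER, name VARCHAR(255), balance DOUBLE PRECISION);

The single-point-of-truth idea is that a class description like the one above could equally drive project documentation or cross-layer optimization hints, not just the schema.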
65

Sistema de gerenciamento da informação: alterações neurológicas em chagásicos crônicos não-cardíacos / Information Management System: neurological disorders in non-cardiac chronic Chagasic patients.

Carmo, Samuel Sullivan 27 April 2010 (has links)
This work develops a computer-based information management system to support scientific studies of the nervous system of non-cardiac chronic Chagasic patients. The objective is to develop the required system, on the assumption that it makes the analyses arising from the investigation more practical. The method used to develop this system, dedicated to managing the research information on the neurological disorders of its subjects, was to: compose the archetype of goals and the requirements-elicitation matrix for the system's variants; list the attributes, domains, and qualifications of its variables; draw up the selection framework for the equipment and applications required for its physical and logical implementation; and deploy it through data modeling, with an adapted entity-relationship diagram, and logic programming of algorithms. As a result, the system was developed. The analysis argues that computerization can make the operations of registration, querying, and field validation more effective, along with the formatting and export of pre-processed tables for statistical analysis, thereby serving as a tool of the scientific method. The logical argument is that the reliability of computationally recorded information increases because human error is reduced in most processing steps. In closing, studies with a reasonably large number of variables and research subjects are better managed if they have a system dedicated to managing their information.
66

物件網際網路資料庫系統中介模式之研究 / A Language-based Gateway between OODBMS and Web

韋凱忠 Unknown Date (has links)
In recent years, the popularity of the World Wide Web has greatly increased the demand for data on the Internet, and database management systems must be combined with the Web to satisfy clients' information needs; many people now regard the network as a computer, or as a global database. As network heterogeneity grows, however, with different platforms, operating systems, and communication protocols continually being added, inconsistency problems follow. At the same time, the two emerging object database standards, SQL3 and ODMG, differ in many of their concepts: although both support object facilities, they are quite different in their object models and query languages, so a mapping between the two standards is necessary. This study analyzes their similarities and differences from an object-oriented perspective, in both the object model and the query language, proposes a mapping model, and implements an experimental database gateway that translates the syntactic differences between the two languages.
67

Energy-Efficient In-Memory Database Computing

Lehner, Wolfgang 27 June 2013 (has links) (PDF)
The efficient and flexible management of large datasets is one of the core requirements of modern business applications. Having access to consistent and up-to-date information is the foundation for operational, tactical, and strategic decision making. Within the last few years, the database community sparked a large number of extremely innovative research projects to push the envelope in the context of modern database system architectures. In this paper, we outline requirements and influencing factors to identify some of the hot research topics in database management systems. We argue that—even after 30 years of active database research—the time is right to rethink some of the core architectural principles and come up with novel approaches to meet the requirements of the next decades in data management. The sheer number of diverse and novel (e.g., scientific) application areas, the existence of modern hardware capabilities, and the need of large data centers to become more energy-efficient will be the drivers for database research in the years to come.
68

Elektros energijos apskaitos ir matavimo prietaisų maršrutizavimo kompiuterizuotos informacinės sistemos sukūrimas ir tyrimas / The Creation and Investigation of the Computerized Information System for the Routing of Electricity Energy Accounting and Measuring Instruments

Griškėnienė, Edita 24 September 2004 (has links)
Computerized information systems are now widely used in Lithuanian companies. Many of them are universal enough to solve a variety of administrative problems, but such systems are distinguished by their great complexity and high price, so a need arises for simpler and cheaper information systems. The aim of the project is to create a system that accumulates information about the received and issued flows of electricity-metering instruments, keeps name- and quantity-based accounts of those instruments, and generates analysis reports for a given period. The client of the project is the Electricity Energy Realization Division of the Alytus electricity network, a branch of Rytų skirstomieji tinklai AB. The benefits of the project to the client can be outlined as follows: to improve the quality of the accounting work and its results; to reduce the time spent on accounting tasks; to eliminate duplicated information; to ease the composition of analysis reports; to avoid mistakes; and to make the accounting work effective. The project is implemented as an MS Access database with integrated Microsoft Visual Basic for Applications, whose capabilities are entirely sufficient for the project and also support building a graphical user interface (GUI). The system provides functions that help users in their work: buttons, simplified entry of repetitive information, and help. The project was created to satisfy all the users' needs and to diminish the use... [to full text]
70

Balancing Money and Time for OLAP Queries on Cloud Databases

Sabih, Rafia January 2016 (has links) (PDF)
Enterprise Database Management Systems (DBMSs) have to contend with resource-intensive and time-varying workloads, making them well-suited candidates for migration to cloud platforms: specifically, they can dynamically leverage the resource elasticity while retaining affordability through the pay-as-you-go rental interface. The current design of database engine components lays emphasis on maximizing computing efficiency, but to fully capitalize on the cloud's benefits, the outlays of these computations also need to be factored into the planning exercise. In this thesis, we investigate this contemporary problem in the context of industrial-strength deployments of relational database systems on real-world cloud platforms. Specifically, we consider how the traditional metric used to compare query execution plans, namely response-time, can be augmented to incorporate monetary costs in the decision process. The challenge here is that execution-time and monetary costs are adversarial metrics, with a decrease in one entailing a rise in the other. For instance, a Virtual Machine (VM) with rich physical resources (RAM, cores, etc.) decreases the query response-time, but is expensive with regard to rental rates. In a nutshell, there is a tradeoff between money and time, and our goal therefore is to identify the VM that offers the best tradeoff between these two competing considerations. In our study, we profile the behavior of money versus time for a given query, and define the best tradeoff as the "knee", that is, the location on the profile with the minimum Euclidean distance from the origin. To study the performance of industrial-strength database engines on real-world cloud infrastructure, we have deployed a commercial DBMS on Google cloud services. On this platform, we have carried out extensive experimentation with the TPC-DS decision-support benchmark, an industry-wide standard for evaluating database system performance. Our experiments demonstrate that the choice of VM for hosting the database server is a crucial decision, because: (i) variation in time and money across VMs is significant for a given query, and (ii) no one VM offers the best money-time tradeoff across all queries. To efficiently identify the VM with the best tradeoff from a large suite of available configurations, we propose a technique to characterize the money-time profile for a given query. The core of this technique is a VM pruning mechanism that exploits the partial order induced on the VMs by their resources. It processes the minimal and maximal VMs of this poset for estimated query response-time. If the response-times on these extreme VMs are similar, then all the VMs sandwiched between them are pruned from further consideration. Otherwise, the already processed VMs are set aside, and the minimal and maximal VMs of the remaining unprocessed VMs are evaluated for their response-times. Finally, the knee VM is identified from the processed VMs as the one with the minimum Euclidean distance from the origin in the money-time space. We theoretically prove that this technique always identifies the knee VM; further, if it is acceptable to find a "near-optimal" knee by providing a relaxation-factor on the response-time distance from the optimal knee, then it is also capable of finding a satisfactory knee more efficiently under these relaxed conditions.

We propose two flavors of this approach: the first prunes the VMs using complete plan information obtained from the database engine API, and is named Plan-based Identification of Knee (PIK). To further increase the efficiency of identifying the knee VM, we propose a sub-plan-based pruning algorithm called Sub-Plan-based Identification of Knee (SPIK), which requires modifications in the query optimizer. We have evaluated PIK on a commercial system and found that it often requires processing only 20% of the total VMs; the efficiency of the algorithm increases significantly further when a 10-20% relaxation in response-time is used. For evaluating SPIK, we prototyped it on an open-source engine, PostgreSQL 9.3, and also implemented it as a Java wrapper program with the commercial engine. Experimentally, the processing done by SPIK is found to be only 40% of that of the PIK approach. Therefore, from an overall perspective, this thesis facilitates the desired migration of enterprise databases to cloud platforms, by identifying the VM(s) that offer competitive tradeoffs between money and time for a given query.
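To make the pruning mechanism concrete, here is a minimal Python sketch, assuming a resource-vector encoding of VMs and externally supplied response-time and price estimates; none of this is the thesis's code, and the tolerance parameter tol and the normalization of the money-time axes are illustrative choices:

import math
from itertools import product

def dominates(a, b):
    """Resource vector a dominates b: >= in every dimension, > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def knee_vm(vms, est_time, price, tol=0.10):
    """vms: name -> resource tuple; est_time: name -> estimated seconds;
    price: name -> rental rate. All inputs are hypothetical stand-ins."""
    remaining, processed = set(vms), set()
    while remaining:
        # minimal/maximal elements of the poset of unprocessed VMs
        minimal = {m for m in remaining
                   if not any(dominates(vms[m], vms[n]) for n in remaining if n != m)}
        maximal = {m for m in remaining
                   if not any(dominates(vms[n], vms[m]) for n in remaining if n != m)}
        processed |= minimal | maximal
        remaining -= minimal | maximal
        # if an extreme pair has similar response-times, prune everything between
        for lo, hi in product(minimal, maximal):
            if dominates(vms[hi], vms[lo]):
                gap = abs(est_time[lo] - est_time[hi]) / max(est_time[lo], 1e-9)
                if gap <= tol:
                    remaining -= {v for v in remaining
                                  if dominates(vms[v], vms[lo])
                                  and dominates(vms[hi], vms[v])}
    # knee = processed VM closest to the origin in normalized money-time space
    max_cost = max(price[v] * est_time[v] for v in processed)
    max_time = max(est_time[v] for v in processed)
    return min(processed,
               key=lambda v: math.hypot(price[v] * est_time[v] / max_cost,
                                        est_time[v] / max_time))

On a toy catalog such as vms = {"small": (2, 1), "medium": (8, 2), "large": (32, 8)}, with est_time filled from the engine's plan-cost estimates and price from the provider's rate card, the function returns the knee VM's name. The thesis's PIK and SPIK variants differ precisely in where those response-time estimates come from: complete plans via the engine API versus sub-plans inside the optimizer.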
