141

Metodika budování a údržby závislých datových tržišť / Methodology of development and maintenance of dependent data marts

Müllerová, Sandra January 2011 (has links)
The thesis focuses primarily on the integrated data warehouse, and in particular on one of its subsets: dependent data marts. Its main objectives are to design a methodology for the development and maintenance of dependent data marts and to verify the methodology's usefulness in a real organization. The first part deals with the theoretical definition of terms, focusing on the Business Intelligence area, especially data warehousing and data marts; each term is described in detail in a separate chapter. The Business Intelligence discussion emphasizes the individual components; the data warehousing discussion covers data warehouse concepts and the content of the layers in a data warehouse; and the data mart discussion describes dependent and independent data marts as well as "special" cases of data marts, such as the semantic layer and the sandbox. The second part focuses on the design of the methodology itself. It opens with an analysis of existing methodologies and an assessment of their usefulness for the methodology being designed, continues with a description of the organization's current approach to the development and maintenance of dependent data marts, and closes with the proposed methodology, which draws partly on the analysis of existing methodologies and partly on the analysis of the current situation. The third part evaluates the usability and usefulness of the methodology in the organization, based on critique from employees who are directly engaged in designing and maintaining dependent data marts. Finally, the fourth part describes an alternative solution that could be considered one path to sustainable development of the organization's data warehouse: an architecture based on a semantic layer is compared with Bill Inmon's three-layer data warehouse concept, which is implemented in the organization, and the alternative is evaluated against the current solution.
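The dependent-data-mart pattern described in the abstract is easy to illustrate: the integrated warehouse is the only source, and each mart is just a derivation over it, so maintenance amounts to re-running the derivation. Below is a minimal sketch in Python with the built-in sqlite3 module; all table and column names are invented for illustration and are not taken from the thesis.

```python
import sqlite3

# Integrated warehouse layer (hypothetical schema).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE wh_sales (sale_id INTEGER, region TEXT,
                           product TEXT, amount REAL, sale_date TEXT);
    INSERT INTO wh_sales VALUES
        (1, 'EU', 'widget', 120.0, '2011-01-10'),
        (2, 'EU', 'gadget',  80.0, '2011-01-11'),
        (3, 'US', 'widget', 200.0, '2011-01-11');
""")

# A *dependent* data mart is built from the warehouse layer, never
# from source systems directly; maintaining it means re-deriving it.
db.executescript("""
    DROP TABLE IF EXISTS mart_eu_sales;
    CREATE TABLE mart_eu_sales AS
        SELECT product, SUM(amount) AS revenue
        FROM wh_sales
        WHERE region = 'EU'
        GROUP BY product;
""")
print(db.execute("SELECT * FROM mart_eu_sales").fetchall())
```

Because each mart depends only on the shared integrated layer, consistency across marts follows by construction, which is the usual argument for dependent over independent marts.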
142

Design von Stichproben in analytischen Datenbanken

Rösch, Philipp 17 July 2009 (has links)
Recent studies document fast, multi-dimensional growth in analytical databases: over the last four years the data volume has risen by a factor of ten, the number of users has grown by an average of 25% per year, and the number of queries has doubled every year since 2004. These queries are increasingly complex join queries with aggregations; they are often explorative in nature and submitted to the system interactively. One way to meet the demand for interactivity under this strong, multi-dimensional growth is to use samples and approximate query processing built on top of them. Such a solution offers significantly shorter response times as well as estimates with probabilistic error bounds. Since joins, groupings, and aggregations are the main components of analytical queries, three requirements arise for the design of samples in analytical databases: 1) referential integrity must be preserved between the samples of foreign-key-related tables, 2) all existing groups must be represented appropriately, and 3) aggregation attributes must be checked for extreme values. For each of these sub-problems, this dissertation presents a sampling technique characterized by memory-bounded samples and low estimation errors. In the first approach, a correlated sampling process guarantees referential integrity while using only a minimum of additional memory. The second sampling technique takes the data distribution into account, so that arbitrary groupings are supported and all groups are appropriately represented. In the third approach, multi-column outlier handling yields low estimation errors for any number of aggregation attributes. For all three approaches, the quality of the resulting sample is discussed and taken into account when computing memory-bounded samples. To keep the computation effort, and thus the system load, low, heuristics are provided for each algorithm; they combine high efficiency with minimal effect on sample quality. The dissertation further examines all possible combinations of the presented sampling techniques; these combinations reduce estimation errors further while widening the range of queries the resulting samples can serve. Combining all three techniques yields a sampling scheme that meets all the requirements for the design of samples in analytical databases and unites the advantages of the individual solutions, making it possible to answer a broad spectrum of queries approximately and with high accuracy.
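Of the three requirements above, the referential-integrity requirement is the easiest to picture in code: sample the referenced table first, then keep exactly the referencing tuples whose foreign keys fall in the sample. The sketch below is a simplified illustration of that idea, not the dissertation's actual algorithm; all names and numbers are invented.

```python
import random

random.seed(42)

# Hypothetical foreign-key-related tables: items reference orders.
orders = [{"order_id": i, "region": i % 3} for i in range(1000)]
items = [{"item_id": j, "order_id": j % 1000, "price": float(j % 50)}
         for j in range(5000)]

# Correlated sampling sketch: sample the referenced relation uniformly,
# then restrict the referencing relation to the sampled keys, so
# referential integrity holds in the sample by construction.
sampled_orders = random.sample(orders, k=100)
sampled_keys = {o["order_id"] for o in sampled_orders}
sampled_items = [it for it in items if it["order_id"] in sampled_keys]

# Approximate query answering: scale a sample aggregate by the
# sampling fraction to estimate the true total.
fraction = len(sampled_orders) / len(orders)
estimate = sum(it["price"] for it in sampled_items) / fraction
exact = sum(it["price"] for it in items)
print(f"estimate: {estimate:.0f}, exact: {exact:.0f}")
```

The group-representation and outlier requirements would modify the sampling step itself (stratify by group, treat extreme values separately) rather than the key-matching step.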
143

In-Memory-Datenmanagement in betrieblichen Anwendungssystemen

Loos, Peter, Lechtenbörger, Jens, Vossen, Gottfried, Zeier, Alexander, Krüger, Jens, Müller, Jürgen, Lehner, Wolfgang, Kossmann, Donald, Fabian, Benjamin, Günther, Oliver, Winter, Robert 25 January 2023 (has links)
In-memory databases keep the entire data set permanently in main memory. Read accesses can therefore be served far faster than in traditional database systems, since no I/O accesses to disk are required. For write accesses, mechanisms have been developed that guarantee persistence and thus transactional safety. In-memory databases have been under development for some time and have proven themselves in specialized applications. With the increasing storage density of DRAM modules, hardware systems whose main memory can hold a complete operational data set have become economically affordable. This raises the question of whether in-memory databases can also be used in business information systems. Hasso Plattner, who developed the HANA in-memory database, is a protagonist of this approach. He sees considerable potential for new concepts in the development of business information systems; for example, a transactional and an analytical application could run on the same data set, meaning that the separation into operational databases on the one hand and data warehouse systems on the other would no longer be necessary in business information processing (Plattner and Zeier 2011). Not all database experts agree, however. Larry Ellison has called the idea of in-memory databases for business use "wacko", more for media effect than as a serious argument (Bube 2010). Stonebraker (2011) does see a future for in-memory databases in business applications, but still considers a separation of OLTP and OLAP applications sensible. [From: Introduction]
144

In-memory Databases in Business Information Systems

Loos, Peter, Lechtenbörger, Jens, Vossen, Gottfried, Zeier, Alexander, Krüger, Jens, Müller, Jürgen, Lehner, Wolfgang, Kossmann, Donald, Fabian, Benjamin, Günther, Oliver, Winter, Robert 26 January 2023 (has links)
In-memory databases are developed to keep the entire data set in main memory. Compared to traditional database systems, read access is therefore much faster, since no I/O access to a hard drive is required. For write access, mechanisms are available that provide data persistence and thus transactional safety. In-memory databases have been available for a while and have proven suitable for particular use cases. With the increasing storage density of DRAM modules, hardware systems capable of storing very large amounts of data have become affordable. In this context the question arises whether in-memory databases are suitable for business information system applications. Hasso Plattner, who developed the HANA in-memory database, is a trailblazer for this approach. He sees a lot of potential for novel concepts in the development of business information systems; one example is to conduct transactions and analytics in parallel on the same database, i.e. a division into operational database systems and data warehouse systems is no longer necessary (Plattner and Zeier 2011). However, there are also voices against this approach. Larry Ellison described the idea of business information systems based on an in-memory database as "wacko," without actually making a case for his statement (cf. Bube 2010). Stonebraker (2011) sees a future for in-memory databases in business information systems but considers the division of OLTP and OLAP applications reasonable. [From: Introduction]
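The core mechanism the abstract alludes to (reads served entirely from memory, writes made durable through a log) can be sketched in a few lines. This is a toy illustration of the general idea, not a model of HANA or of any other product; all names are invented.

```python
import json
import os

class MiniInMemoryStore:
    """Toy in-memory store: reads come from a dict in main memory;
    every write is appended to a log file first, so the state can be
    rebuilt after a restart (persistence despite volatile memory)."""

    def __init__(self, log_path="store.log"):
        self.log_path = log_path
        self.data = {}
        if os.path.exists(log_path):          # recovery: replay the log
            with open(log_path) as f:
                for line in f:
                    key, value = json.loads(line)
                    self.data[key] = value

    def put(self, key, value):
        with open(self.log_path, "a") as f:   # write-ahead: log first...
            f.write(json.dumps([key, value]) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.data[key] = value                # ...then apply in memory

    def get(self, key):                       # reads never touch the disk
        return self.data.get(key)

store = MiniInMemoryStore()
store.put("order:1", {"amount": 99.5})
print(store.get("order:1"))
```

In this toy model, an analytical scan would iterate over the same dict that transactions update, which is exactly the "no separate data warehouse" argument summarized above.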
145

A Decathlon in Multidimensional Modeling: Open Issues and Some Solutions

Hümmer, W., Lehner, W., Bauer, A., Schlesinger, L. 12 January 2023 (has links)
The concept of multidimensional modeling has proven extremely successful in the area of Online Analytical Processing (OLAP), one of the many applications running on top of a data warehouse installation. Although many different modeling techniques expressed in extended multidimensional data models have been proposed in the recent past, we find that many pressing issues are still not properly addressed. In this paper we discuss ten common problems, ranging from defects within dimensional structures, through problems in multidimensional structures, to new analytical requirements and beyond.
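To make the modeling vocabulary concrete: a multidimensional model places facts at coordinates in several dimensions, each with a hierarchy along which measures roll up. Below is a minimal sketch with invented data; the paper's ten problems concern exactly the cases where such clean hierarchies break down.

```python
from collections import defaultdict

# Facts carry coordinates in two dimensions (geography, time).
facts = [
    # (city, month, revenue)
    ("Dresden", "2002-01", 10.0),
    ("Dresden", "2002-02", 12.0),
    ("Munich",  "2002-01",  7.0),
]
geo = {"Dresden": "Germany", "Munich": "Germany"}    # city -> country
cal = {"2002-01": "2002", "2002-02": "2002"}         # month -> year

def roll_up(facts):
    """Aggregate from (city, month) up to (country, year)."""
    cube = defaultdict(float)
    for city, month, revenue in facts:
        cube[(geo[city], cal[month])] += revenue
    return dict(cube)

print(roll_up(facts))   # {('Germany', '2002'): 29.0}
```

Defects in dimensional structures, in this picture, are things like cities with no country or months belonging to two fiscal years: irregularities that make the roll-up mappings partial or ambiguous.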
146

A comparison of maintenance and support challenges within a data warehousing environment to that of a transactional application environment in a South African context / Shakeel Mitra Juggath

Juggath, Shakeel Mitra January 2014 (has links)
In the literature on transactional systems development, maintenance is reported as a phase in the software development life cycle. In practice, this phase is often neglected because it occurs post-deployment, and other ongoing projects take higher priority. In the literature on data warehouse (DW) systems development, maintenance is reported not as a phase but as an ongoing iteration of the DW development project, and it should therefore not be treated as a phase by DW systems professionals. Despite this fundamental difference in approach, transactional systems maintenance and DW maintenance share many of the same challenges. DW literature and methodologies inherently contain utilities and methods that help alleviate these challenges in a DW system; transactional systems do not address them inherently. Research aspects were extracted from the literature review, which demonstrates what the challenges in maintenance are, how the challenges of transactional systems compare to those of DW maintenance, and how the utilities and methods used in DW methodologies can inherently assist in managing these challenges from a DW perspective. These research aspects were used to formulate an interpretive questionnaire. The research portion of the study explores the use of DW systems development and maintenance methodologies among DW professionals in industry, by means of an interpretive study using the questionnaire developed from the literature review. The questionnaire focuses on maintenance and on dealing with its challenges. Several themes emerged from the analysis of the interpretive study, which used the content analysis method. The final conclusions of the study are drawn by comparing and combining the information gathered from the literature review with that gathered from the interpretive study. Gaps between practice and literature are identified, and recommendations are made based on these gaps. / MSc (Computer Science), North-West University, Potchefstroom Campus, 2015
147

Development of a data consolidation platform for a web-based energy information system / Ignatius Michael Prinsloo

Prinsloo, Ignatius Michael January 2015 (has links)
Global energy constraints and economic conditions have placed large energy consumers under pressure to conserve resources. Several governments have acknowledged this and have adopted policies to address energy shortages. In South Africa, inadequate electrical infrastructure caused severe electricity supply shortages in recent years. To alleviate the shortage, the government has revised numerous energy policies, and consumers stand to gain financially if they embrace the opportunities offered by the revised policies. Energy management systems provide a framework that ensures alignment with the specifications of the respective programs. Such a system requires a data consolidation platform to import and manage the relevant data: a stored combination of consumption, production, and financial data can be used to extract information for numerous reporting applications. This study discusses the development of such a data consolidation platform. The platform is used to collect and maintain energy-related data and is capable of consolidating a wide range of energy and production data into a single data set. Its generic architecture offers users the ability to manage a wide range of data from several sources. To generate reports, the platform was integrated with an existing software-based energy management system; the integrated system provides a web-based interface for generating and distributing various reports from the consolidated data set. The developed energy information tool is used by an ESCo (energy services company) to gather and consolidate data from multiple client systems into a single repository. Specific reports generated by the integrated system can be targeted at both consumers and governing bodies. The system complies with draft legislative guidelines and has been successfully implemented as an energy information tool in practice. / MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2015
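The consolidation step the abstract describes (mapping heterogeneous consumption, production, and financial inputs into one data set) can be sketched as normalization into a single generic record shape. File contents and field names below are invented; this is an illustration of the idea, not the platform's actual design.

```python
import csv
import io

# Hypothetical inputs from two client systems.
electricity_csv = "meter,timestamp,kwh\nM1,2015-06-01T00:00,420\n"
production_csv = "line,timestamp,units\nL7,2015-06-01T00:00,130\n"

def consolidate(source, text, key_field, value_field, unit):
    """Normalize one source into generic (source, key, time, value, unit) rows."""
    rows = []
    for rec in csv.DictReader(io.StringIO(text)):
        rows.append({"source": source, "key": rec[key_field],
                     "time": rec["timestamp"],
                     "value": float(rec[value_field]), "unit": unit})
    return rows

# One consolidated data set for all downstream reports.
dataset = (consolidate("electricity", electricity_csv, "meter", "kwh", "kWh")
           + consolidate("production", production_csv, "line", "units", "units"))
for row in dataset:
    print(row)
```

Reports aimed at consumers and governing bodies can then all query this one data set instead of each source system separately.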
150

Participatory approach to data warehousing in health care: Uganda's perspective

Otine, Charles January 2011 (has links)
This licentiate thesis presents a participatory approach to developing a data warehouse for data mining in health care. Uganda is one of the countries that bore the largest brunt of the HIV/AIDS epidemic at its inception in the early 1980s, with reports of close to a million deaths. Government and non-governmental interventions over the years brought massive reductions in HIV prevalence rates, which earned Uganda great praise from the international community and a call for other countries to model their approach to battling the epidemic on Uganda's. In the last decade, however, the reduction in HIV prevalence rates has stagnated and in some cases reversed, prompting calls to re-examine the fight against HIV/AIDS with an emphasis on the collective effort of all approaches. One of these collective efforts is antiretroviral therapy (ART) for those already infected with the virus. Antiretroviral therapy faces numerous challenges in Uganda, not least the cost of the therapy for a developing country with limited resources: it is estimated that of the close to one million people infected in Uganda, only 300,000 are on antiretroviral therapy (UNAIDS, 2009). A further challenge is following through on the prescribed treatment regimen. Given the cost of the therapy and the limited number of people able to access it, it is imperative that the effort be as effective as possible. This research applies data mining techniques to monitoring HIV patients' therapy, most specifically their adherence to ART medication. This is crucial because failure to adhere to therapy means treatment failure, virus mutation, and huge losses in terms of the costs incurred in administering the therapy. A system was developed to monitor patient adherence to therapy, using a participatory approach to gathering system specifications and to testing in order to ensure acceptance of the system by the stakeholders. Because of the cost implications of off-the-shelf software, the system was implemented using open-source software with limited license costs, so that it can be deployed in resource-constrained settings in Uganda and elsewhere to assist in monitoring patients on HIV therapy. An algorithm analyzes the patient data warehouse and quickly assists therapists in identifying potential risks such as non-adherence and treatment failure. The open-source dimensional modeling tools Power Architect and DBDesigner were used to model the data warehouse on an open-source MySQL database. The thesis is organized in three parts: the first presents the background, the problem, the justification and objectives of the research, and the rationale for the participatory methodology; the second presents the papers on which the research is based; and the final part contains the summary discussions, conclusions, and areas for future research. The research is sponsored by SIDA under the collaboration between Makerere University and Blekinge Institute of Technology (BTH) in Sweden.
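The kind of adherence check the abstract mentions can be illustrated with a medication possession ratio computed from refill records: the fraction of elapsed days for which the patient had medication on hand. The sketch below is illustrative only; the records are invented, it is not the algorithm developed in the thesis, and the 95% cutoff often cited for ART is used purely as an example threshold.

```python
from datetime import date

# Hypothetical pharmacy refill records: (pickup date, days dispensed).
refills = [
    (date(2011, 1, 1), 30),
    (date(2011, 2, 3), 30),
    (date(2011, 3, 20), 30),
]

def possession_ratio(refills, period_end):
    """Days of medication dispensed divided by days elapsed, capped at 1."""
    period_start = refills[0][0]
    covered = sum(days for _, days in refills)
    elapsed = (period_end - period_start).days
    return min(covered / elapsed, 1.0)

ratio = possession_ratio(refills, date(2011, 4, 15))
if ratio < 0.95:   # example cutoff, not a clinical recommendation
    print(f"flag for follow-up: adherence ratio {ratio:.2f}")
```

Run against a patient dimension in the warehouse, a check like this lets therapists triage large cohorts before treatment failure shows up clinically.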
