  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Development of a data consolidation platform for a web-based energy information system / Ignatius Michael Prinsloo

Prinsloo, Ignatius Michael January 2015 (has links)
Global energy constraints and economic conditions have placed large energy consumers under pressure to conserve resources. Several governments have acknowledged this and have employed policies to address energy shortages. In South Africa, inadequate electrical infrastructure caused severe electricity supply shortages in recent years. To alleviate the shortage, the government has revised numerous energy policies. Consumers stand to gain financially if they embrace the opportunities offered by the revised policies. Energy management systems provide a framework that ensures alignment with the specifications of the respective programs. Such a system requires a data consolidation platform to import and manage relevant data. A stored combination of consumption data, production data and financial data can be used to extract information for numerous reporting applications. This study discusses the development of a data consolidation platform. The platform is used to collect and maintain energy-related data, and is capable of consolidating a wide range of energy and production data into a single data set. The generic platform architecture offers users the ability to manage a wide range of data from several sources. To generate reports, the platform was integrated with an existing software-based energy management system. The integrated system provides a web-based interface that allows the generation and distribution of various reports from the consolidated data set. The developed energy information tool is used by an ESCo to gather and consolidate data from multiple client systems into a single repository. Specific reports are generated by the integrated system and can be targeted at both consumers and governing bodies. The system complies with draft legislative guidelines and has been successfully implemented as an energy information tool in practice.
/ MIng (Computer and Electronic Engineering), North-West University, Potchefstroom Campus, 2015
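The consolidation step the abstract describes, combining consumption, production and financial data into a single data set keyed per client and period, can be sketched as below. This is a minimal illustration under assumed field names (`kwh`, `units`, `cost`), not the platform's actual schema:

```python
from collections import defaultdict

def consolidate(consumption, production, financial):
    """Merge records from three sources into one data set keyed by
    (client, period); metrics missing from a source stay None so that
    downstream reports can flag data gaps."""
    merged = defaultdict(lambda: {"kwh": None, "units": None, "cost": None})
    for rec in consumption:
        merged[(rec["client"], rec["period"])]["kwh"] = rec["kwh"]
    for rec in production:
        merged[(rec["client"], rec["period"])]["units"] = rec["units"]
    for rec in financial:
        merged[(rec["client"], rec["period"])]["cost"] = rec["cost"]
    return dict(merged)

rows = consolidate(
    [{"client": "A", "period": "2015-01", "kwh": 1200.0}],
    [{"client": "A", "period": "2015-01", "units": 300}],
    [{"client": "A", "period": "2015-01", "cost": 950.0}],
)
print(rows[("A", "2015-01")])  # {'kwh': 1200.0, 'units': 300, 'cost': 950.0}
```

A single keyed record per client and period is what lets one repository serve several report types at once, which is the point the abstract makes.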
223

A longitudinal patient record for patients receiving antiretroviral treatment

Kotze, E., McDonald, T. January 2012 (has links)
Published Article / In response to the Human Immunodeficiency Virus (HIV) epidemic in the country, the South African Government began providing Antiretroviral Therapy (ART) in the public health sector. Monitoring and evaluating the effectiveness of the ART programme is of the utmost importance, but the existing patient information system could not supply the information required to manage the rollout of the programme. A data warehouse, consisting of several data marts, was therefore developed that integrated several disparate HIV/AIDS/ART-related systems into one. It was, however, not possible to trace a patient across all the data marts: no unique identifiers existed for the patient records in the different data marts, and the marts also had different structures. Record linkage, in conjunction with a mapping process, was used to link all the data marts and thereby identify the same patient in each of them. The result is a longitudinal patient record for an ART patient that displays all the treatments the patient received in all public health care facilities in the province.
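The record-linkage idea above, matching the same patient across data marts that lack a shared identifier, can be sketched with a simple similarity rule. The field names, the sample names, and the 0.85 threshold are illustrative assumptions; production linkage pipelines use blocking and more robust comparators:

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Crude string similarity in [0, 1] between two full names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def link_records(mart_a, mart_b, threshold=0.85):
    """Pair records that likely describe the same patient: identical
    birth date plus name similarity above the threshold."""
    return [(ra["id"], rb["id"])
            for ra in mart_a for rb in mart_b
            if ra["dob"] == rb["dob"]
            and name_similarity(ra["name"], rb["name"]) >= threshold]

mart_a = [{"id": "A1", "name": "Thabo Mokoena", "dob": "1980-04-02"}]
mart_b = [{"id": "B7", "name": "Thabo Mokena", "dob": "1980-04-02"},  # spelling variant
          {"id": "B8", "name": "T. Nkosi", "dob": "1975-01-15"}]
print(link_records(mart_a, mart_b))  # [('A1', 'B7')]
```

The matched pairs can then be assigned a shared surrogate key, which is what makes a longitudinal record across marts possible.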
224

Participatory approach to data warehousing in health care : UGANDA’S Perspective

Otine, Charles January 2011 (has links)
This licentiate thesis presents the use of a participatory approach to developing a data warehouse for data mining in health care. Uganda bore one of the largest brunts of the HIV/AIDS epidemic at its inception in the early 1980s, with reports of close to a million deaths. Government and non-governmental interventions over the years brought massive reductions in HIV prevalence rates, which earned great praise from the international community and a call for other countries to model Uganda's approach to battling the epidemic. In the last decade, however, the reduction in HIV prevalence rates has stagnated and in some cases reversed. This has led to a call for a re-examination of the HIV/AIDS fight, with an emphasis on the collective effort of all approaches. One of these efforts is antiretroviral therapy (ART) for those already infected with the virus. Antiretroviral therapy faces numerous challenges in Uganda, not least the cost of the therapy in a developing country with limited resources: it is estimated that of the close to 1 million people infected in Uganda, only 300,000 are on antiretroviral therapy (UNAIDS, 2009). Additional challenges include following through on the prescribed treatment regimen. Given the cost of the therapy and the limited number of people able to access it, it is imperative that the effort be as effective as possible. This research hinges on using data mining techniques to monitor HIV patients' therapy, most specifically their adherence to ART medication. This is crucial because failure to adhere to therapy means treatment failure, virus mutation, and huge losses in terms of the costs incurred in administering the therapy.
A system was developed to monitor patient adherence to therapy, using a participatory approach to gathering system specifications and testing in order to ensure acceptance of the system by the stakeholders. Because of the cost implications of off-the-shelf software, the system was implemented using open source software with limited license costs, so that it can be deployed in resource-constrained settings in Uganda and elsewhere to assist in monitoring patients on HIV therapy. An algorithm analyzes the patient data warehouse and quickly assists therapists in identifying potential risks such as non-adherence and treatment failure. The open source dimensional modeling tools Power*Architect and DBDesigner were used to model the data warehouse on an open source MySQL database. The thesis is organized in three parts: the first presents the background information, the problem, the objectives of the research and a justification for the use of a participatory methodology; the second presents the papers on which the research is based; and the final part contains the summary discussions, conclusions and areas for future research. The research is sponsored by SIDA under the collaboration between Makerere University and Blekinge Institute of Technology (BTH) in Sweden.
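An adherence-risk flag of the kind the abstract mentions can be sketched from pill-count data. The 95% cut-off is a common clinical rule of thumb for ART, used here only as an assumed threshold; the thesis's actual algorithm and schema are not reproduced:

```python
def adherence_rate(doses_taken, doses_prescribed):
    """Fraction of prescribed doses actually taken over a period."""
    return doses_taken / doses_prescribed if doses_prescribed else 0.0

def flag_risks(patients, threshold=0.95):
    """Return ids of patients whose adherence falls below the threshold,
    i.e. candidates for therapist follow-up before treatment failure."""
    return [p["id"] for p in patients
            if adherence_rate(p["taken"], p["prescribed"]) < threshold]

patients = [
    {"id": "P001", "taken": 58, "prescribed": 60},  # ~96.7%, adherent
    {"id": "P002", "taken": 49, "prescribed": 60},  # ~81.7%, at risk
]
print(flag_risks(patients))  # ['P002']
```

Running such a scan over the warehouse nightly is one cheap way to surface non-adherence early, consistent with the monitoring goal described above.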
225

Texas Principals’ Data Use: Its Relationship to Leadership Style and Student Achievement

Bostic, Robert E. 05 1900 (has links)
This study applies an empirical research method to determine whether Texas public school principals' leadership styles, coupled with their use of real-time data in a data warehouse, influenced their leadership ability as measured by student achievement. In today's data-rich environments, which require campuses and districts to make data-driven decisions, principals find themselves having to organize and categorize data to help their school boards, campuses, and citizenry make informed decisions. Most school principals in Texas have access to data in multiple forms, including national and state resources and a multitude of other data reports. A random sample of principals was selected to take the Multifactor Leadership Questionnaire (MLQ5x) and the Principals' Data Use Survey. The MLQ5x measured principals' leadership styles as transformational, transactional, or passive-avoidant. The Principals' Data Use Survey measured how principals use data to inform campus decisions on student achievement, shape the vision of the campus, and design professional development. Data obtained from the surveys were correlated to determine the relationship between principals' use of data warehouses, their leadership styles, and student achievement as measured by the Texas Assessment of Knowledge and Skills. The results yielded significant relationships between student achievement, principals' leadership styles, and principals' data use with a data warehouse. Student achievement scores were highly correlated across the participating campuses and showed limited differences between campuses with data warehouses and those without.
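The correlational analysis this kind of study rests on can be sketched with a Pearson coefficient. The numbers below are invented for illustration and are not the study's data; only the statistic itself is standard:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: MLQ5x transformational-leadership scores paired
# with campus mean TAKS scale scores (both invented for illustration).
mlq = [2.1, 2.8, 3.0, 3.4, 3.9]
taks = [2100, 2150, 2180, 2210, 2290]
print(round(pearson_r(mlq, taks), 3))  # 0.984
```

A coefficient near 1 would indicate the kind of strong positive relationship the abstract reports between leadership style, data use, and achievement.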
226

Efficient Querying and Analytics of Semantic Web Data / Interrogation et Analyse Efficiente des Données du Web Sémantique

Roatis, Alexandra 22 September 2014 (has links)
The utility and relevance of data lie in the information that can be extracted from it. The high rate of data publication and its increased complexity, for instance the heterogeneous, self-describing Semantic Web data, motivate the interest in efficient techniques for data manipulation. In this thesis we leverage mature relational data management technology for querying Semantic Web data. The first part focuses on query answering over data subject to RDFS constraints, stored in relational data management systems. The implicit information resulting from RDF reasoning is required to correctly answer such queries. We introduce the database fragment of RDF, going beyond the expressive power of previously studied fragments. We devise novel techniques for answering Basic Graph Pattern queries within this fragment, exploring the two established approaches for handling RDF semantics, namely graph saturation and query reformulation. In particular, we consider graph updates within each approach and propose a method for incrementally maintaining the saturation. We experimentally study the performance trade-offs of our techniques, which can be deployed on top of any relational data management engine. The second part of this thesis considers the new requirements for data analytics tools and methods emerging from the development of the Semantic Web. We fully redesign, from the bottom up, core data analytics concepts and tools in the context of RDF data. We propose the first complete formal framework for warehouse-style RDF analytics. Notably, we define analytical schemas tailored to heterogeneous, semantic-rich RDF graphs, analytical queries which (beyond relational cubes) allow flexible querying of the data and the schema, as well as powerful aggregation and OLAP-style operations. Experiments on a fully-implemented platform demonstrate the practical interest of our approach.
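Graph saturation, one of the two approaches the abstract names, can be illustrated as a fixpoint over entailment rules. The sketch below covers only two RDFS rules (subclass transitivity and type propagation) with toy URIs; a real engine implements the full RDF 1.1 Semantics rule set:

```python
def saturate(triples):
    """Fixpoint saturation of an RDF graph under two RDFS rules:
    (A sc B), (B sc C)   => (A sc C)      subclass transitivity
    (x type A), (A sc B) => (x type B)    type propagation"""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in facts:
            for s2, p2, o2 in facts:
                if p == "sc" and p2 == "sc" and o == s2:
                    new.add((s, "sc", o2))
                if p == "type" and p2 == "sc" and o == s2:
                    new.add((s, "type", o2))
        if not new <= facts:   # genuinely new triples were derived
            facts |= new
            changed = True
    return facts

g = {(":alice", "type", ":GradStudent"),
     (":GradStudent", "sc", ":Student"),
     (":Student", "sc", ":Person")}
sat = saturate(g)
print((":alice", "type", ":Person") in sat)  # True
```

After saturation, a query for all `:Person` instances finds `:alice` directly; query reformulation, the dual approach, would instead rewrite the query to also match the subclasses.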
227

Implementace Business Intelligence v poradenské společnosti / Business Intelligence Implementation

Filka, Zdeněk January 2010 (has links)
The main aim of this thesis is the proposal and implementation of decision-making support using Business Intelligence tools at Audit CI, a limited company providing economic and financial consultancy. Business Intelligence tools are applied in creating the reports the company provides to its clients as part of its services; on the basis of these reports, it then makes recommendations in the fields of finance and internal management. The thesis is divided into two parts. The theoretical part describes the fundamental principles of a BI solution: the main components from which a BI solution is built, its place in the architecture of a company's information system, and the essential groundwork for designing a BI solution. The practical part covers the proposal and implementation of the BI solution, from multidimensional analysis, through the design of the data pump and multidimensional cubes, to the output in the client application.
228

Business Intelligence v pojišťovnictví / Business Intelligence for Insurance

Havránek, Denis January 2010 (has links)
The thesis focuses on the application of Business Intelligence technologies in the insurance domain, aiming concretely at their use in insurance companies. The aim is to introduce and summarize the various processes and information needs of an insurance company and the ways that Business Intelligence tools can support and improve those processes. To achieve this goal, I describe the internal functioning of the main processes in insurance companies, with an emphasis on data examples and the concrete use of BI. Furthermore, I examine two specific Business Intelligence products built for the insurance industry: InsFocus Business Intelligence from InsFocus Insurance and SAS Business Intelligence from SAS. These two products are reviewed in terms of architecture, functionality, and implementation process. At the end of the thesis, I present a theoretical analysis of a Business Intelligence implementation in a fictitious insurance company. The benefit of this work is a comprehensive look at the advantages of Business Intelligence for insurance companies, at specific products that relate to BI and insurance, and at the way an analysis for a BI insurance solution is made.
229

Efficient Incremental View Maintenance for Data Warehousing

Chen, Songting 20 December 2005 (has links)
Data warehousing and on-line analytical processing (OLAP) are essential elements for decision support applications. Since most OLAP queries are complex and are often executed over huge volumes of data, the solution in practice is to employ materialized views to improve query performance. One important issue for utilizing materialized views is to maintain the view consistency upon source changes. However, most prior work focused on simple SQL views with distributive aggregate functions, such as SUM and COUNT. This dissertation proposes to consider broader types of views than previous work. First, we study views with complex aggregate functions such as variance and regression. Such statistical functions are of great importance in practice. We propose a workarea function model and design a generic framework to tackle incremental view maintenance and answering queries using views for such functions. We have implemented this approach in a prototype system of IBM DB2. An extensive performance study shows significant performance gains by our techniques. Second, we consider materialized views with PIVOT and UNPIVOT operators. Such operators are widely used for OLAP applications and for querying sparse datasets. We demonstrate that the efficient maintenance of views with PIVOT and UNPIVOT operators requires more generalized operators, called GPIVOT and GUNPIVOT. We formally define and prove the query rewriting rules and propagation rules for such operators. We also design a novel view maintenance framework for applying these rules to obtain an efficient maintenance plan. Extensive performance evaluations reveal the effectiveness of our techniques. Third, materialized views are often integrated from multiple data sources. Due to source autonomicity and dynamicity, concurrency may occur during view maintenance. We propose a generic concurrency control framework to solve such maintenance anomalies. This solution extends previous work in that it solves the anomalies under both source data and schema changes and thus achieves full source autonomicity. We have implemented this technique in a data warehouse prototype developed at WPI. The extensive performance study shows that our techniques put little extra overhead on existing concurrent data update processing techniques while allowing for this new functionality.
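For the simple distributive case the abstract contrasts itself with (SUM and COUNT), incremental maintenance can be sketched directly: the view is updated from source deltas alone, with no recomputation. The data layout here is an illustrative assumption, not the dissertation's framework:

```python
def apply_delta(view, deltas):
    """Incrementally maintain a per-group SUM/COUNT materialized view.

    `view` maps group -> [sum, count]; `deltas` is a list of
    (group, value, sign) source changes, sign +1 for an inserted row
    and -1 for a deleted one. Distributive aggregates need only the
    delta, never a rescan of the base table."""
    for group, value, sign in deltas:
        s, c = view.get(group, [0, 0])
        view[group] = [s + sign * value, c + sign]
        if view[group][1] == 0:   # the group vanished from the source
            del view[group]
    return view

view = {"east": [300, 3]}
apply_delta(view, [("east", 50, +1), ("west", 20, +1), ("east", 100, -1)])
print(view)  # {'east': [250, 3], 'west': [20, 1]}
```

Non-distributive functions such as variance cannot be maintained from (sum, count) alone, which is why the dissertation introduces its workarea function model for them.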
230

Processo de KDD para auxílio à reconfiguração de ambientes virtualizados / A KDD process to support the reconfiguration of virtualized environments

Winck, Ana Trindade 20 December 2007 (has links)
Xen is a paravirtualizer that allows the simultaneous execution of several virtual machines (VMs), each with its own operating system. These VMs consume resources at different levels. To improve Xen's performance, it is worth determining the best resource allocation for a given Xen machine when several VMs are running, and what the corresponding parameters are. To support the eventual reconfiguration of parameters, this work proposes a complete knowledge discovery in databases (KDD) process to capture VM performance data, organize it in an analytical model, and apply mining techniques to suggest new parameters. First, performance data is obtained for each VM by running benchmarks on each operating system. This data is stored in a data warehouse modeled specifically to hold records of captured benchmark metrics. The stored data is then suitably prepared for use by data mining algorithms. The generated predictive models can be enriched with high-level reconfiguration instructions. Given a current configuration, these models aim to suggest the best set of configuration parameters for modifying the environment and achieving an overall performance gain. The proposed process was implemented and tested with a significant set of benchmark executions, which demonstrated the quality and breadth of the solution.
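The final step of such a process, turning warehoused benchmark records into a parameter suggestion, can be sketched as below. The thesis mines predictive models; this sketch substitutes a plain best-score lookup, and the workload labels, parameter names, and scores are all invented for illustration:

```python
def suggest_parameters(warehouse, workload):
    """Given benchmark records (workload, params, score), return the
    parameter set with the best recorded score for that workload."""
    candidates = [r for r in warehouse if r["workload"] == workload]
    best = max(candidates, key=lambda r: r["score"])
    return best["params"]

# Hypothetical benchmark-metric records as they might sit in the warehouse.
warehouse = [
    {"workload": "io-bound", "params": {"vcpus": 1, "mem_mb": 512}, "score": 61.0},
    {"workload": "io-bound", "params": {"vcpus": 2, "mem_mb": 1024}, "score": 74.5},
    {"workload": "cpu-bound", "params": {"vcpus": 4, "mem_mb": 512}, "score": 88.2},
]
print(suggest_parameters(warehouse, "io-bound"))  # {'vcpus': 2, 'mem_mb': 1024}
```

In the thesis the suggestion comes from a mined model rather than a lookup, but the interface is the same: current configuration in, recommended parameter set out.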
