291

商業智慧應用之研究-以銷售分析為例 / A Study on Business Intelligence Applications: A Case of Sales Analysis

吳文宗 Unknown Date (has links)
According to a survey by CFO Magazine, 75% of senior executives cannot obtain timely and complete information when formulating strategy, so that in fiercely competitive industries they lose the chance to act first and to create wealth for their companies. Raising an enterprise's competitiveness depends not only on how well the business itself operates but also, to a high degree, on its overall information-support capability. The question is how the information held in the enterprise's various systems can be suitably integrated for the benefit of its users. Introducing a business intelligence (BI) system that integrates management, decision-making, and information technology can improve management performance internally and build competitive advantage externally, turning information into action and presenting performance evaluation as indicators, so that effective performance management improves service, lowers management costs, and raises competitiveness. As Kaplan and Norton put it in The Balanced Scorecard, "if you cannot measure it, you cannot manage it." In the traditional information environment, whenever users need information they ask IT staff to write programs that print the required reports or answer the required queries; but understanding a report in more depth calls for yet more supplementary reports, so users fall into the "vicious cycle of traditional reporting" and always feel one report short. "Get the right information, to the right people, at the right time" is the ideal of BI applications. As for the data sources, apart from external information the vast majority of the data comes from the enterprise's historical records or from its existing application systems. BI integrates the data of all the enterprise's application systems and makes it available to all users; the data those systems generate is the foundation on which BI works, and the two are complementary. An enterprise that fully exploits the respective strengths of BI and its application systems, and combines them, can raise its competitiveness. This thesis therefore presents a practical sales-analysis case: user requirements at each organizational level are defined in a top-down fashion, while the actual development proceeds bottom-up, with overall planning and phased execution to ensure the success of the BI project.
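The sales-analysis case centers on aggregating transactional data into indicators for users at each level. As a minimal sketch (the sales records below are hypothetical, not the thesis's data), the kind of roll-up a BI layer computes over a sales fact table can be written as:

```python
from collections import defaultdict

# Hypothetical sales records: (region, product, amount) -- illustrative data only.
sales = [
    ("North", "Widget", 120.0),
    ("North", "Gadget", 80.0),
    ("South", "Widget", 200.0),
    ("South", "Gadget", 50.0),
]

def roll_up(records, key_index):
    """Aggregate the sales amount along one dimension (region or product)."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec[key_index]] += rec[2]
    return dict(totals)

by_region = roll_up(sales, 0)   # {'North': 200.0, 'South': 250.0}
by_product = roll_up(sales, 1)  # {'Widget': 320.0, 'Gadget': 130.0}
```

Each managerial level then sees the same facts at its own grain, which is the essence of the top-down requirements / bottom-up construction the thesis advocates.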
292

Designing Conventional, Spatial, and Temporal Data Warehouses: Concepts and Methodological Framework

Malinowski Gajda, Elzbieta 02 October 2006 (has links)
Decision support systems are interactive, computer-based information systems that provide data and analysis tools to assist managers at different levels of an organization in the process of decision making. Data warehouses (DWs) have been developed and deployed as an integral part of decision support systems. A data warehouse is a database that stores the high volumes of historical data required for analytical purposes. This data is extracted from operational databases, transformed into a coherent whole, and loaded into a DW during the extraction-transformation-loading (ETL) process. DW data can be dynamically manipulated using on-line analytical processing (OLAP) systems. DW and OLAP systems rely on a multidimensional model that includes measures, dimensions, and hierarchies. Measures are usually numeric additive values used for the quantitative evaluation of different aspects of an organization. Dimensions provide different analysis perspectives, while hierarchies allow measures to be analyzed at different levels of detail. Nevertheless, designers as well as users currently find it difficult to specify the multidimensional elements required for analysis. One reason is the lack of conceptual models for DW and OLAP system design that would allow data requirements to be expressed at an abstract level without considering implementation details. Another problem is that many kinds of complex hierarchies arising in real-world situations are not addressed by current DW and OLAP systems. To help designers build conceptual models for decision-support systems and to help users better understand the data to be analyzed, in this thesis we propose the MultiDimER model: a conceptual model for representing multidimensional data in DW and OLAP applications.
Our model is mainly based on existing ER constructs (entity types, attributes, and relationship types with their usual semantics), allowing the common concepts of dimensions, hierarchies, and measures to be represented. It also includes a conceptual classification of the different kinds of hierarchies existing in real-world situations and proposes graphical notations for them. On the other hand, users of DW and OLAP systems now also demand the inclusion of spatial data; the advantage of using spatial data in the analysis process is widely recognized, since its visualization reveals patterns that are difficult to discover otherwise. However, although DWs typically include a spatial or location dimension, this dimension is usually represented in an alphanumeric format. Furthermore, there is still no systematic study analyzing the inclusion and management of hierarchies and measures represented using spatial data. To satisfy the growing requirements of decision-making users, we extend the MultiDimER model by allowing spatial data in the different elements composing the multidimensional model. The novelty of our contribution lies in the fact that a multidimensional model is seldom used for representing spatial data. To realize our proposal, we applied research achievements in the field of spatial databases to the specific features of a multidimensional model. The spatial extension of a multidimensional model raises several issues, addressed in this thesis, such as the influence of the different topological relationships between the spatial objects forming a hierarchy on the procedures required for measure aggregation, the aggregation of spatial measures, and the inclusion of spatial measures without the presence of spatial dimensions, among others.
Moreover, one of the important characteristics of multidimensional models is the presence of a time dimension for keeping track of changes in measures. However, this dimension cannot be used to model changes in other dimensions, so usual multidimensional models are not symmetric in the way changes to measures and dimensions are represented. Further, there is still no analysis indicating which concepts already developed for providing temporal support in conventional databases can be applied, and be useful, for the different elements composing a multidimensional model. To handle temporal changes to all elements of a multidimensional model in a uniform manner, we introduce a temporal extension of the MultiDimER model. This extension is based on research in the area of temporal databases, which have been successfully used for modeling time-varying information for several decades. We propose the inclusion of different temporal types, such as valid and transaction time, which are obtained from source systems, in addition to the DW loading time generated in DWs. We use this temporal support for a conceptual representation of time-varying dimensions, hierarchies, and measures. We also refer to specific constraints that should be imposed on time-varying hierarchies, and to the problem of handling multiple time granularities between source systems and DWs. Furthermore, the design of DWs is not an easy task: it requires considering all phases, from requirements specification to final implementation, including the ETL process. It should also take into account that the inclusion of different data items in a DW depends on both users' needs and data availability in source systems. Currently, however, designers must rely on their experience, due to the lack of a methodological framework that considers these aspects.
To assist developers during the DW design process, we propose a methodology for the design of conventional, spatial, and temporal DWs, covering the phases of requirements specification and conceptual, logical, and physical modeling. We include three different methods for requirements specification, depending on whether users, operational data sources, or both are the driving force in the requirements-gathering process, and we show how each method leads to the creation of a conceptual multidimensional model. We also present the logical and physical design phases, which address DW structures and the ETL process. To ensure the correctness of the proposed conceptual models with conventional, spatial, and time-varying data, we formally define their syntax and semantics. To assess the usability of our conceptual model, including its representation of different kinds of hierarchies as well as its spatial and temporal support, we present real-world examples. So that the proposed conceptual solutions can be implemented, we include their logical representations using relational and object-relational databases.
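Hierarchies let the same additive measure be viewed at several levels of detail. A minimal sketch of rolling a measure up one hierarchy level (the members and values below are hypothetical, not taken from any MultiDimER schema):

```python
from collections import defaultdict

# Hypothetical store-level sales and a City -> Country hierarchy (illustrative only).
sales_by_city = {"Brussels": 10.0, "Antwerp": 5.0, "Paris": 8.0}
city_to_country = {"Brussels": "Belgium", "Antwerp": "Belgium", "Paris": "France"}

def aggregate_to_parent(measures, parent_of):
    """Roll an additive measure up one hierarchy level (e.g. City -> Country)."""
    totals = defaultdict(float)
    for member, value in measures.items():
        totals[parent_of[member]] += value
    return dict(totals)

sales_by_country = aggregate_to_parent(sales_by_city, city_to_country)
# {'Belgium': 15.0, 'France': 8.0}
```

The complex hierarchies the thesis classifies (non-strict, unbalanced, and so on) are precisely the cases where this simple parent-lookup aggregation no longer suffices.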
293

Exploitation d'un entrepôt de données guidée par des ontologies : application au management hospitalier / An ontology-driven approach for a personalized data warehouse exploitation : case study, healthcare management.

El Sarraj, Lama 10 July 2014 (has links)
This research is situated in the domain of Data Warehouse (DW) personalization and concerns assistance with DW exploitation. Specifically, we are interested in assisting a user during online analysis in making use of existing exploitation resources. The application domain is hospital management, in the context of the new hospital governance, limited to the scope of the Program of Medicalization of Information Systems (PMSI). This research was supported by the Public Hospitals of Marseille (APHM). Our proposal is a semantic, ontology-driven approach. The assistance system implementing it, called the Ontology-based Personalization System (OPS), relies on a Knowledge Base (KB) exploited by a personalization engine. The KB is composed of three ontologies: a domain ontology, an ontology of the DW structure, and an ontology of resources. The personalization engine supports, first, a personalized search for DW exploitation resources based on the user's profile and, second, for a given resource, the recommendation of complementary resources according to three possible strategies. To validate our proposal, a prototype of the OPS system was developed: the personalization engine was implemented in Java and exploits a knowledge base made up of the three interconnected OWL ontologies. We illustrate the operation of the system on three experimental scenarios related to the PMSI, defined with APHM domain experts.
294

Feeding a data warehouse with data coming from web services. A mediation approach for the DaWeS prototype / Alimenter un entrepôt de données par des données issues de services web. Une approche médiation pour le prototype DaWeS

Samuel, John 06 October 2014 (has links)
This thesis concerns the development of a software platform, DaWeS, enabling the online deployment and management of data warehouses fed with data coming from web services and tailored to small and medium-sized enterprises. The role of the data warehouse in business analytics cannot be overstated for any enterprise, irrespective of its size, but the growing dependence on web services has resulted in a situation where enterprise data is managed by multiple autonomous and heterogeneous service providers. We present our approach and its associated prototype DaWeS [Samuel, 2014; Samuel and Rey, 2014; Samuel et al., 2014], a DAta warehouse fed with data coming from WEb Services, which extracts, transforms, and stores enterprise data from web services and builds performance indicators from the stored data while hiding the heterogeneity of the numerous underlying web services from end users. 
Its ETL process is grounded in a mediation approach, a virtual data-integration technique; to this end a classical query-rewriting algorithm, the inverse-rules algorithm, was adapted and tested. This enables DaWeS (i) to be fully configurable in a purely declarative manner (XML, XSLT, SQL, datalog) and (ii) to make part of the warehouse schema dynamic so that it can be easily updated. Points (i) and (ii) allow DaWeS managers to shift from development to administration when they want to connect new web services or update the APIs (application programming interfaces) of already connected ones. The aim is to make DaWeS scalable and adaptable, so that it can smoothly face the ever-changing and growing offer of web services. This also enables DaWeS to be used with the vast majority of actual web service interfaces, which are defined with basic technologies only (HTTP, REST, XML, and JSON) rather than with more advanced standards (WSDL, WADL, hRESTS, or SAWSDL), since those standards are not yet widely used to describe real web services. In terms of applications, the aim is to allow a DaWeS administrator to offer small and medium-sized companies a service for storing and querying the business data arising from their use of third-party services, without those companies having to manage their own warehouse. In particular, DaWeS enables the easy design, as SQL queries, of personalized performance indicators. We present this mediation approach to ETL and the architecture of DaWeS in detail. Beyond its industrial purpose, building DaWeS brought forth further scientific challenges, such as the need to optimize the number of web service API operation calls and to handle incomplete information. A theoretical study of the semantics of conjunctive and datalog queries over relations with access limitations (corresponding to web services) yields upper bounds on the number of web service calls required to evaluate such queries; this bound is a tool for comparing future optimization techniques. We also present a heuristic for handling incomplete information. Experiments were carried out on real web services in three domains (online marketing, project management, and user-support services), together with a first series of randomized scalability tests.
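To make the call-count concern concrete, here is an illustrative sketch (hypothetical services and data, not the DaWeS API): answering a conjunctive query over two access-limited relations backed by service calls, while counting how many calls the evaluation makes.

```python
# Hypothetical access-limited "services": each needs an input binding and
# returns data, as in a mediation-based ETL over web services.

call_count = 0

def projects_for_client(client):
    """Hypothetical service: client -> list of project ids."""
    global call_count
    call_count += 1
    data = {"acme": ["p1", "p2"], "globex": ["p3"]}
    return data.get(client, [])

def tasks_for_project(project):
    """Hypothetical service: project -> number of open tasks."""
    global call_count
    call_count += 1
    data = {"p1": 3, "p2": 0, "p3": 5}
    return data.get(project, 0)

def open_tasks(clients):
    """Answer q(c, p, t): project p of client c has t open tasks (t > 0)."""
    answers = []
    for c in clients:
        for p in projects_for_client(c):   # first access-limited relation
            t = tasks_for_project(p)       # second access-limited relation
            if t > 0:
                answers.append((c, p, t))
    return answers

result = open_tasks(["acme", "globex"])
# 2 client calls + 3 project calls = 5 service calls in total.
```

Bounding `call_count` as a function of the query and the access patterns is the kind of guarantee the thesis's theoretical study provides.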
295

Desenvolvimento de uma ferramenta computacional para monitoramento e coleta de dados, baseado em conceitos de e-Science e Data Warehouse para aplicação na pecuária / Development of a computer tool for monitoring and data collecting, based in concepts of e-Science and Data Warehouse for the application in cattle breeding

Tech, Adriano Rogério Bruno 14 February 2008 (has links)
This work studies the feasibility of a system for monitoring and collecting data over the web, through the construction of a zootechnical e-Science platform. The hardware systems, the communication and monitoring devices, and a displacement simulator were developed, the latter to optimize antenna placement and animal identification. The Random Walk and Pseudo-Brownian models were used to implement the simulator. The communication and management systems were developed and tested through experiments that allowed the performance of the complete system, named "e-LAFAC", to be analyzed and verified. The system provides full control, including communication between the fixed stations and the mobile stations through a wireless sensor network. 
The methodology used in the project, as well as the results obtained, allow us to conclude that the objective of following and monitoring an animal during an experiment, with remote telemetric data collection in real time, was achieved.
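A displacement simulator of the kind described can be sketched as a two-dimensional random walk; the step length, antenna position, and radius below are illustrative assumptions, not the parameters used in the experiments.

```python
import math
import random

def random_walk(steps, step_len=1.0, seed=42):
    """Simulate an animal's path as fixed-length steps in random directions."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += step_len * math.cos(theta)
        y += step_len * math.sin(theta)
        path.append((x, y))
    return path

def antenna_coverage(path, center, radius):
    """Fraction of positions readable by an antenna at `center` with `radius`."""
    cx, cy = center
    hits = sum(1 for (x, y) in path if math.hypot(x - cx, y - cy) <= radius)
    return hits / len(path)

path = random_walk(200)
coverage = antenna_coverage(path, (0.0, 0.0), 10.0)
```

Running many such walks over candidate antenna layouts and comparing the coverage fractions is one way a simulator can guide antenna placement.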
296

När tekniken behöver hinna ikapp affärsbehoven : En studie med fokus på hur data virtualization kan komplettera ett data warehouse / When technology needs to catch up with the business needs : A study that focuses on how data virtualization can complement a data warehouse

Birkås, Philip January 2019 (has links)
The data warehouse has long been, and still is, an important asset that organizations use to store data for analysis. In the last decade, an exponential increase in data has taken place, giving organizations ever greater opportunities to make data-driven decisions in completely new ways. In many cases it also puts pressure on organizations to use data as an asset in order to react and take advantage of opportunities in a hardening business climate, or simply to remain competitive. For organizations to be able to react in time, flexibility requirements are imposed on the techniques used to manage data for analysis. Long-standing data warehouse practice is predicted to soon be out of date, and organizations need to find new ways to quickly adjust and adapt to the needs of the market. Many organizations currently have a traditional data warehouse, and this study examines how data virtualization can help increase flexibility when added as a complement. This is examined through the research question: How can data virtualization complement and improve a traditional data warehouse? To answer it, a literature analysis and a case study followed by an implementation were carried out, in which the data collection consists primarily of tests performed on the implementation result, observations, and interviews. The result shows a divided picture, where the tests performed can potentially be seen as questioning parts of the interview results and the existing literature. Based on the result, some conclusions can be drawn about the use cases in which data virtualization does not fit, along with a picture of when it can be an alternative. Data virtualization should be seen as an extension of a data warehouse that enables the federation, transformation, and computation of data, where data structures can be changed at any time. 
However, an organization that embraces this more flexible solution must be prepared for substantially increased response times when extracting data.
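The idea of data virtualization as a complement can be sketched in a few lines (hypothetical tables and values, not the study's implementation): a virtual view resolves live data at query time instead of waiting for the next batch load, which is also why it trades response time for flexibility.

```python
# Historical rows, loaded into the warehouse by batch ETL (illustrative data).
warehouse_orders = [
    {"id": 1, "amount": 100.0, "source": "dw"},
    {"id": 2, "amount": 250.0, "source": "dw"},
]

def live_orders():
    """Stand-in for an on-demand call to an operational source system."""
    return [{"id": 3, "amount": 75.0, "source": "live"}]

def virtual_order_view():
    """Federate the warehouse with live data at query time -- no reload needed."""
    return warehouse_orders + live_orders()

rows = virtual_order_view()
total = sum(row["amount"] for row in rows)  # 425.0
```

Changing the shape of `live_orders` changes the view immediately, whereas the warehouse half only changes at the next ETL run; the live call on every query is where the increased response times come from.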
297

Uma arquitetura para integração de ambientes data warehouse, espacial e agricultura de precisão / An architecture for integrating data warehouse, spatial, and precision agriculture environments

Petroski, Luiz Pedro 06 March 2017 (has links)
Petroski, Luiz Pedro 06 March 2017 (has links)
The aim of this work is to present a proposal for the integration of precision agriculture, data warehouse/OLAP, and GIS. The integration should use open and extensible components, agricultural modeling for decision support, support for geographic data, communication interfaces between components, and the extension of existing GIS and data warehouse solutions. As a result of the integration, an open and extensible architecture was defined, with a spatial agricultural data warehouse model that supports decision making for planning and managing precision-agriculture practices. The technologies and tools used are open and allow their functionality to be implemented and extended to fit the agricultural decision scenario. To carry out the integration, data were obtained from a farm in Piraí do Sul/PR, which uses proprietary software for data management. The data were exported to the SHAPEFILE format and then extracted, transformed, and loaded into the analytical database by the ETL tool. Data from IBGE were also used as the source for the political boundaries of the rural regions of Brazil. The analytical database was modeled and implemented in the PostgreSQL DBMS with the PostGIS extension to support spatial data. The Geomondrian server was used to provide the OLAP query service. The application was extended from the Geonode project, in which the analytic functionality was implemented; the interface between the application and the OLAP server was realized through the Mandoline API and the OLAP4J library. Finally, the user interface was implemented with JavaScript libraries for creating charts, tables, and maps. As the main result, an architecture was obtained for the integration of the data warehouse, OLAP operations, and spatial and agricultural data, together with the definition of the ETL process and the user interface.
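The spatial side of such an architecture amounts to aggregating agronomic measures by the geometry that contains them. A minimal sketch, using hypothetical plot coordinates and rectangular zones in place of real PostGIS geometries:

```python
# Hypothetical field data: each plot is (x, y, yield_in_tons); each management
# zone is an axis-aligned rectangle (x0, y0, x1, y1). Illustrative only.
plots = [(1.0, 1.0, 4), (2.0, 1.5, 4), (8.0, 8.0, 5)]
zones = {"zone_a": (0.0, 0.0, 5.0, 5.0), "zone_b": (5.0, 5.0, 10.0, 10.0)}

def zone_of(x, y):
    """Point-in-rectangle containment: the toy analogue of ST_Contains."""
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def yield_by_zone():
    """Roll the yield measure up the spatial hierarchy (plot -> zone)."""
    totals = {}
    for x, y, tons in plots:
        z = zone_of(x, y)
        if z is not None:
            totals[z] = totals.get(z, 0) + tons
    return totals

result = yield_by_zone()  # {'zone_a': 8, 'zone_b': 5}
```

In the actual architecture this containment test and aggregation are delegated to PostGIS and Geomondrian rather than computed in application code.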
298

Ambiente para extração de informação epidemiológica a partir da mineração de dez anos de dados do Sistema Público de Saúde / Environment for epidemiological information extraction by data mining ten years of data from the health public system

Pires, Fábio Antero 22 September 2011 (has links)
The use of databases for epidemiological studies and for evaluating the quality and quantity of health services has attracted the attention of researchers in the context of public health. In Brazil, the databases of the Sistema Único de Saúde (SUS) are examples of important repositories that gather fundamental information about health. However, despite the advances in data collection and in public tools for querying these databases, such as TABWIN and TABNET, these resources still do not use more advanced techniques for producing management information, such as those available in OLAP (On-Line Analytical Processing) and data-mining tools. The situation is greatly aggravated by the fact that public-health data, produced by several isolated systems, are not integrated, making queries across different databases impossible. Consequently, producing management information becomes an extremely difficult task. On the other hand, the integration of these databases can be an indispensable and fundamental resource for handling the enormous volume of data available in these environments, and thus for producing relevant information and knowledge that contribute to better public-health management. Patient follow-up and the comparison of different populations are other important limitations of the current databases, since there is no unique patient identifier that makes such tasks possible. The objective of this thesis was the construction of a data warehouse from the analysis of ten years (2000 to 2009) of the principal SUS databases. The proposed methods for data collection, cleaning, standardization of database structures, linkage of records to patients, and integration of the SUS information systems allowed patient identification and follow-up with a sensitivity of 99.68% and a specificity of 97.94%.
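The sensitivity and specificity figures reported for the record-linkage step follow the usual confusion-matrix definitions; the counts in this sketch are illustrative only, not the thesis data.

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts: 996 true matches found, 4 missed, 95 true non-matches
# kept apart, 2 wrongly linked.
sens, spec = sensitivity_specificity(tp=996, fp=2, tn=95, fn=4)
# sens = 0.996; spec = 95/97, roughly 0.979
```

High sensitivity means almost every record belonging to a patient is linked to that patient; high specificity means records of different patients are almost never merged.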
299

Conceitos e aplicações de um sistema gerencial de apoio a decisão aplicados a sistemas de distribuição de energia elétrica via internet. / Executive information system: concepts and application for utilities.

Leal, Adriano Galindo 13 July 1999 (has links)
This dissertation discusses the advantages and difficulties of implementing an Executive Information System (EIS) in an Intranet/Internet environment, as well as the use of database applications on the web. A system of this kind, named SAG (Sistema de Apoio Gerencial), was implemented at Eletropaulo Metropolitana (São Paulo, Brazil); it was conceived to support the management, supervision, and control of the company's electric-power distribution network, addressing the needs of the maintenance, operations, and engineering departments. SAG makes it possible to establish systematic supervision which, starting from an analysis of the current conditions of the distribution network and of the available resources, tracks the evolution of the quality of the electric-energy supply. The information it provides guides actions to correct possible inadequacies and supports setting the policies and guidelines to be followed at the management level. As a result, a faster decision-making process is expected, along with easier access to the technical and loading data of distribution-network equipment.
300

Sistema de apoio à gestão de utilidades e energia: aplicação de conceitos de sistemas de informação e de apoio à tomada de decisão. / Support system for utility and energy management: utilization of information systems and decision support systems concepts.

Rosa, Luiz Henrique Leite 12 April 2007 (has links)
This work covers the specification, development, and use of the Support System for Utility and Energy Management (SAGUE), a system created to assist in the analysis of data collected from utility systems such as compressed air, steam, water pumping, and environmental conditioning systems, integrated with energy consumption and climate measurements. SAGUE was developed following concepts from decision support systems, such as Data Warehouse and OLAP (Online Analytical Processing), with the goal of transforming measurement data into information that directly guides actions for energy conservation and rational use. The main characteristics of Data Warehouse and OLAP tools that influenced the specification and development of SAGUE are described in this work. The text also addresses energy management and energy management systems, presenting the environment that motivated the development of SAGUE. Within this context, the Electrical Energy Management System (SISGEN) is presented: an information system supporting the management of electric energy and of supply contracts, whose collected data can be analyzed through SAGUE. The application of SAGUE is illustrated by a case study analyzing the correlation between the electric energy consumption of CUASO (Cidade Universitária Armando de Sales Oliveira), obtained through SISGEN, and the ambient temperature measurements supplied by IAG, the Institute of Astronomy, Geophysics and Atmospheric Sciences of USP.
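The case study above correlates energy consumption with ambient temperature. The thesis does not publish its code or data, but the core computation can be sketched as a Pearson correlation over paired daily series; the figures below are illustrative placeholders, not values from the study:

```python
# Minimal sketch (assumed, not from the thesis): Pearson correlation
# between daily mean temperature and daily energy consumption,
# the kind of analysis the SAGUE/SISGEN case study describes.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Illustrative data only: daily mean temperature (°C) and
# daily campus energy consumption (MWh).
temps_c = [18.0, 21.5, 25.0, 28.5, 31.0]
energy_mwh = [410.0, 445.0, 490.0, 540.0, 580.0]

r = pearson(temps_c, energy_mwh)
print(f"correlation: {r:.3f}")
```

A coefficient near 1.0 would indicate that consumption rises with temperature (e.g. air-conditioning load), which is the kind of relationship the case study sets out to quantify.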
