  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
231

Uma abordagem de Data Warehouse para análise de processos de desenvolvimento de software / A Data Warehouse approach for analyzing software development processes

Novello, Taisa Carla 02 March 2006 (has links)
Made available in DSpace on 2015-04-14T14:48:55Z (GMT). No. of bitstreams: 1 399218.pdf: 2830985 bytes, checksum: ffa3a6af739950b3c3732472c58fb2c7 (MD5) Previous issue date: 2006-03-02 / The pursuit of quality in software products is ever more present and necessary in software organizations. To this end, these organizations look for ways to measure and quantitatively analyze the quality of their development processes. However, organizations work with different projects which, in turn, use diverse processes and metrics. Given this, such organizations must find ways to provide a unified view by centralizing the data of the different projects, while also making quantitative analyses of their Software Development Processes (SDPs) available to their users through a Metrics Program (MP). To that end, software quality models suggest building an organizational metrics repository. However, building a repository that both supports data storage and makes analyses available to organizational users is not a trivial task. Against this background, this work briefly describes the architecture of a Data Warehousing environment that supports the adoption of an MP by storing the data produced by different SDPs in a unified, centralized database. This volume presents two components of this environment: the analytical model, the basis of the Data Warehouse (DW), and the presentation component, in which analytical resources are defined to facilitate the analyses performed by organizational users. The development of such a repository must consider the specifics of both the adopted MP and the SDP environment itself. Regarding the metrics that make up the MP, some represent non-additive data that can compromise the analyses to be performed.
As for the environment, SDP specifics make it difficult to define a single model that accommodates distinct characteristics. Beyond data storage, how the data will be made available must also be considered, since users have distinct characteristics and analysis needs. Consequently, the complexity of developing a model and providing analysis resources in this context is very high. The proposed analytical model therefore aims to store the metrics and data produced by SDPs, considering analysis needs and handling the specifics of both the adopted MP and the SDP environment. The definition of the proposed analytical resources considers users with different profiles, as well as their particularities. These resources aim to satisfy the analysis needs of those profiles by making information available at several levels of granularity and by providing mechanisms that add semantics to the data. This work thus provides an infrastructure that supports data from different SDPs and, grounded in an MP, quantitative analyses that consider different user profiles.
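The analytical model this abstract describes can be pictured as a small star schema: a fact table of metric measurements surrounded by project and metric dimensions, with a flag marking non-additive metrics. The sketch below, using SQLite, is purely illustrative; all table and column names are assumptions, and the thesis's actual model is far richer.

```python
import sqlite3

def build_metrics_dw(conn: sqlite3.Connection) -> None:
    """Create a tiny fact/dimension layout for SDP metric measurements."""
    conn.executescript("""
        CREATE TABLE dim_project (project_id INTEGER PRIMARY KEY, name TEXT, process TEXT);
        CREATE TABLE dim_metric  (metric_id  INTEGER PRIMARY KEY, name TEXT, additive INTEGER);
        CREATE TABLE fact_measurement (
            project_id INTEGER REFERENCES dim_project(project_id),
            metric_id  INTEGER REFERENCES dim_metric(metric_id),
            period     TEXT,   -- e.g. '2006-01'
            value      REAL
        );
    """)

def load_sample(conn):
    conn.executemany("INSERT INTO dim_project VALUES (?,?,?)",
                     [(1, "Billing", "RUP-like"), (2, "Portal", "iterative")])
    conn.executemany("INSERT INTO dim_metric VALUES (?,?,?)",
                     [(1, "effort_hours", 1), (2, "defect_density", 0)])
    conn.executemany("INSERT INTO fact_measurement VALUES (?,?,?,?)",
                     [(1, 1, "2006-01", 120.0), (2, 1, "2006-01", 80.0),
                      (1, 1, "2006-02", 95.0)])

def effort_by_period(conn):
    # Additive metrics can be summed across projects; non-additive ones
    # (additive = 0) would instead need averages, ratios, or other care.
    cur = conn.execute("""
        SELECT period, SUM(value) FROM fact_measurement f
        JOIN dim_metric m ON m.metric_id = f.metric_id
        WHERE m.name = 'effort_hours' GROUP BY period ORDER BY period
    """)
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
build_metrics_dw(conn)
load_sample(conn)
print(effort_by_period(conn))  # [('2006-01', 200.0), ('2006-02', 95.0)]
```

The `additive` flag is the point of interest here: the abstract notes that some metrics are non-additive, so a model must record which aggregations are safe at which granularity.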
232

Uma abordagem orientada a serviços para captura de métricas de processo de desenvolvimento de software / A service-oriented approach for capturing software development process metrics

Cunha, Virginia Silva da 26 January 2006 (has links)
Software organizations work with diverse software projects that differ both in the management tools they use and in how they store and control their tracking metrics. The absence of a centralized data repository therefore makes it difficult to monitor the Software Development Processes (SDPs) of these organizations. One of the most crucial stages of the Knowledge Discovery in Databases process is Extraction, Transformation and Loading (ETL), whose purpose is to turn raw data, extracted from diverse sources, into consistent, high-quality information. Considering that SDPs have their own specifics, a study was carried out in a real environment, and it was found that, in terms of tools, everything from spreadsheets (e.g. MS Excel) to tools for controlling the execution of project activities (e.g. MS Project Server, IBM Rational ClearQuest, Bugzilla) is in use. The use of different SDP models was also detected, with varied life cycles for distinct projects, which translate into entirely different ways of recording those projects, even within the same tool. A further problem is that each of these tools has its own data model, which does not follow established data representation standards, making data extraction difficult. Consequently, the complexity of the ETL process for this organization is very high.
The model proposed in this work addresses two aspects in an integrated way: 1) non-intrusive collection of project data, taking several types of heterogeneity into account; 2) transformation and integration of these data, providing a unified, quantitative organizational view of the projects. Both aspects are handled with a service-oriented architecture. The service-oriented approach deals with several types of heterogeneity, both organizational (e.g. specializations of the Organization's Standard Software Process (OSSP) that result in distinct ways of developing and recording facts about projects) and technical (e.g. different tools). This heterogeneity is handled through services that act as wrappers for the different types of extractors, which supports a distributed development environment. To evaluate the proposed approach, three examples were developed that consider all of these heterogeneity issues: different types of projects, different life cycles, different management models, and diverse supporting tools.
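The wrapper idea in this abstract, heterogeneous extractors hidden behind one service interface, can be sketched in a few lines. The source formats, field names, and canonical record shape below are invented for illustration; the thesis wraps real tools (MS Project Server, Bugzilla, spreadsheets) rather than these toys.

```python
from abc import ABC, abstractmethod
import csv, io

class ExtractorService(ABC):
    @abstractmethod
    def extract_tasks(self) -> list[dict]:
        """Return task records in a canonical {'task', 'hours'} shape."""

class SpreadsheetExtractor(ExtractorService):
    """Wraps a spreadsheet-style source (here, CSV text)."""
    def __init__(self, csv_text: str):
        self.csv_text = csv_text
    def extract_tasks(self):
        rows = csv.DictReader(io.StringIO(self.csv_text))
        return [{"task": r["Task"], "hours": float(r["Hours"])} for r in rows]

class TrackerExtractor(ExtractorService):
    """Wraps a bug-tracker-style source (here, a list standing in for an API payload)."""
    def __init__(self, issues: list[dict]):
        self.issues = issues
    def extract_tasks(self):
        # Tracker stores minutes; the wrapper normalizes to hours.
        return [{"task": i["summary"], "hours": i["time_spent"] / 60} for i in self.issues]

def collect(services: list[ExtractorService]) -> list[dict]:
    # The central loader only ever sees the canonical shape, never the tools.
    out = []
    for s in services:
        out.extend(s.extract_tasks())
    return out

tasks = collect([
    SpreadsheetExtractor("Task,Hours\ndesign,8\nreview,2\n"),
    TrackerExtractor([{"summary": "fix login", "time_spent": 90}]),
])
print(tasks)
```

Each wrapper absorbs one tool's quirks (format, units, naming), which is how the approach keeps data collection non-intrusive: no source tool has to change.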
233

SPDW-Miner: um método para a execução de processos de descoberta de conhecimento em bases de dados de projetos de software / SPDW-Miner: a method for executing knowledge discovery processes on software project databases

Figueira, Fernanda Vieira 31 March 2008 (has links)
Software organizations increasingly seek to improve their Software Development Process (SDP) in order to guarantee the quality of their processes and products. To do so, they adopt software maturity models. These models establish that quality measurement be carried out through a metrics program (MP). The defined metrics must be collected and stored, maintaining an organizational history of quality. Measuring alone, however, is not enough: the stored information must be useful for supporting the maintenance of SDP quality. To that end, the higher levels of the maturity models suggest using statistical and analytical techniques in order to establish a quantitative understanding of the metrics. Data mining techniques enter this context as an approach capable of increasing analytical and predictive capability over SDP estimates and quantitative performance. This work proposes a method for executing the KDD (Knowledge Discovery in Databases) process, named SPDW-Miner, oriented toward the prediction of software metrics. To this end, it proposes a KDD process that incorporates the SPDW+ data warehousing environment. The method consists of a series of steps that guide users through the entire KDD process. In particular, instead of treating the DW (data warehouse) as an intermediate step of this process, it takes the DW as the reference point for its execution. All the steps that make up the KDD process are specified, from establishing the mining goal, through data extraction, preparation, and mining, to the optimization of results. The contribution lies in establishing a KDD process at a very comfortable level of detail, allowing organizational users to adopt it as a reference manual for knowledge discovery.
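The staged structure a method like this prescribes can be sketched as a chain of small functions, from goal definition through to result optimization. The step names paraphrase the abstract; the toy "mining" step (a moving-average predictor) and the fixed history are purely illustrative and not the thesis's actual techniques.

```python
def define_goal():
    return {"target": "effort_hours", "task": "prediction"}

def extract_from_dw(goal):
    # In the real method this step reads the SPDW+ warehouse;
    # here it returns a fixed effort history.
    return [100.0, 120.0, 90.0, 110.0]

def prepare(series):
    # Data preparation: drop missing observations.
    return [x for x in series if x is not None]

def mine(series, window=2):
    # Trivial predictor: mean of the last `window` observations.
    return sum(series[-window:]) / window

def optimize(series):
    # "Optimization of results": pick the window with the lowest
    # error when predicting the most recent known observation.
    return min(range(1, len(series)),
               key=lambda w: abs(mine(series[:-1], w) - series[-1]))

goal = define_goal()
history = prepare(extract_from_dw(goal))
window = optimize(history)
prediction = mine(history, window)
print(window, prediction)
```

The point is the shape of the process, not the predictor: each stage consumes the previous stage's output, with the warehouse as the single source the chain starts from.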
234

Um plano de métricas para monitoramento de projetos scrum / A metrics plan for monitoring Scrum projects

Spies, Eduardo Henrique 15 March 2013 (has links)
Agile methods have earned their place in both industry and academia and are increasingly used. With their focus on frequent deliveries to customers, these methods have difficulty maintaining control and efficient communication, especially in larger projects with many collaborators. Software engineering techniques have proved of great value for increasing predictability and bringing more discipline to this kind of project. This work presents a metrics program for Scrum and an extension of a Data Warehousing environment for monitoring projects. It thus provides a consistent repository that can be used as a historical reference of projects and for exploring metrics across different dimensions, easing control over all aspects of a project's progress.
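Two metrics commonly tracked in such a Scrum metrics program are sprint velocity and a remaining-work burndown; a minimal sketch of both follows. The field names and sample figures are invented for illustration, and the thesis's actual metrics plan is not assumed to match them.

```python
def velocity(sprints: list[dict]) -> float:
    """Mean completed story points over past sprints."""
    done = [s["points_done"] for s in sprints]
    return sum(done) / len(done)

def burndown(total_points: int, done_per_day: list[int]) -> list[int]:
    """Remaining story points after each day of the sprint."""
    remaining, out = total_points, []
    for d in done_per_day:
        remaining -= d
        out.append(remaining)
    return out

past = [{"points_done": 30}, {"points_done": 26}, {"points_done": 34}]
print(velocity(past))               # 30.0
print(burndown(40, [5, 8, 0, 12]))  # [35, 27, 27, 15]
```

Stored as facts in the warehouse, figures like these can then be sliced by team, sprint, or release, which is what "exploring metrics across different dimensions" refers to.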
235

Contribution à la prévention des risques liés à l’anesthésie par la valorisation des informations hospitalières au sein d’un entrepôt de données / Contributing to preventing anesthesia adverse events through the reuse of hospital information in a data warehouse

Lamer, Antoine 25 September 2015 (has links)
Introduction: Every day, the Hospital Information System (HIS) records and processes millions of data items related to patient care: biological test results, physiological parameter measurements, drug administrations, care-unit pathways, and so on. These data are handled by operational applications whose purpose is to give medical staff remote access to, and a complete view of, patients' medical records. These data are now also used for other purposes, such as clinical research or public health, in particular by integrating them into a data warehouse. The main difficulty in this type of project is using data for a purpose other than the one for which they were recorded. Several studies have shown a statistical link between compliance with quality indicators for anesthesia care and patient outcome during the hospital stay. At the Lille University Hospital (CHRU de Lille), these quality indicators, as well as patient comorbidities in the post-operative period, could be computed from data collected by several HIS applications. The objective of this work is to integrate the data recorded by these operational applications in order to conduct clinical research studies.
Methods: First, the quality of the data recorded in the source systems is evaluated using methods presented in the literature or developed within this project. The quality problems identified are then handled during the integration phase into the data warehouse. New data are computed and aggregated in order to provide quality-of-care indicators. Finally, two case studies test the use of the developed system.
Results: The relevant data from the HIS applications were integrated into an anesthesia data warehouse. It catalogs the information related to hospital stays and to interventions performed since 2010 (drugs administered, intervention steps, measurements, care-unit pathways, ...) recorded by the source applications. Aggregated data were computed and enabled two clinical research studies. The first study showed a statistical link between hypotension related to anesthesia induction and patient outcome, and established predictive factors for this hypotension. The second study evaluated compliance with patient ventilation indicators and the impact on respiratory comorbidities.
Discussion: The data warehouse developed in this work, together with the data integration and cleaning methods put in place, makes it possible to conduct retrospective statistical analyses on more than 200,000 interventions. The system could be extended to other source systems within the CHRU de Lille, but also to the anesthesia records used by other care facilities.
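One kind of aggregate such a warehouse computes is an episode count derived from raw vital-sign samples, for example flagging hypotension after anesthesia induction from mean arterial pressure (MAP) readings. The sketch below is an illustrative assumption only: the 65 mmHg threshold, the two-sample minimum, and the data shape are not the thesis's actual indicator definitions.

```python
def hypotension_episodes(map_samples, threshold=65, min_consecutive=2):
    """Count runs of >= min_consecutive consecutive MAP samples below threshold."""
    episodes, run = 0, 0
    for value in map_samples:
        if value < threshold:
            run += 1
            if run == min_consecutive:
                episodes += 1  # count each qualifying run once
        else:
            run = 0
    return episodes

# One intervention's MAP samples; two sustained dips below 65 mmHg.
samples = [80, 72, 63, 60, 70, 64, 58, 55, 90]
print(hypotension_episodes(samples))  # 2
```

Computing such derived values once, during integration, is what lets later retrospective studies query episode counts directly instead of reprocessing raw monitor data.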
236

Evangelist Marketing of the CloverETL Software

Štýs, Miroslav January 2011 (has links)
The Evangelist Marketing of the CloverETL Software diploma thesis proposes a new marketing strategy for an ETL tool, CloverETL. The theoretical part comprises chapters two and three. Chapter two covers the term ETL, which, as a separate component of the Business Intelligence architecture, is given little space in the literature. Chapter three introduces evangelist marketing and explains its origins and best practices. The practical part introduces the company Javlin, a.s. and its CloverETL software product. After an assessment of the current marketing strategy, a new strategy is proposed, built on the pillars of evangelist marketing. Finally, the benefits of the new approach are discussed in light of stats and data, mostly Google Analytics outputs.
237

Datový sklad pro vzájemně nekompatibilní verze systému EPOS / Data Warehouse for Incompatible Versions of EPOS system

Kyšková, Lucia January 2016 (has links)
This bachelor's thesis draws on experience and knowledge gained in the field of database systems and business intelligence. Its result is a data warehouse, with supporting business intelligence components, for two incompatible versions of the EPOS system (an electronic cash desk checking system).
238

Large Scale ETL Design, Optimization and Implementation Based On Spark and AWS Platform

Zhu, Di January 2017 (has links)
Nowadays, the amount of data generated by users within an Internet product is increasing exponentially: for instance, the clickstream of a website with millions of users, geospatial information from GIS-based Android and iPhone apps, or sensor data from cars and other electronic equipment. Billions of such events may be produced every day, so it is unsurprising that insights can, and must, be extracted from them, for monitoring systems, fraud detection, user behavior analysis, feature verification, and so on. Nevertheless, technical issues emerge accordingly. Heterogeneity, massive volume, and miscellaneous requirements for using the data across different dimensions make the design of data pipelines, their transformations, and persistence in a data warehouse much harder. There are, undeniably, traditional ways to build ETLs, from mainframes [1] and RDBMSs to MapReduce and Hive. Yet with the emergence and popularization of the Spark framework and AWS, this procedure can evolve into a more robust, efficient, less costly, and easier-to-implement architecture for collecting massive data, building dimensional models, and carrying out analytics. Drawing on the advantage of working within a car transportation company, where billions of user behavior events come in every day, this thesis contributes an exploratory way of building and optimizing ETL pipelines based on AWS and Spark, and compares it with current mainstream data pipelines from several aspects.
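The core transform in such a pipeline, parse raw event lines, drop malformed ones, and aggregate into a dimensional view, can be shown in pure Python; in the thesis's setting this would be a Spark job over cloud storage, but the shape of the computation is the same. The event format and field names below are invented for illustration.

```python
from collections import defaultdict

raw_events = [
    "2017-05-01T10:00:03|u1|ride_requested",
    "2017-05-01T10:00:09|u2|app_opened",
    "2017-05-01T10:02:41|u1|ride_completed",
    "bad line",                                  # malformed input to drop
    "2017-05-01T11:15:00|u3|ride_requested",
]

def parse(line):
    """Turn one pipe-delimited line into a record, or None if malformed."""
    parts = line.split("|")
    if len(parts) != 3:
        return None
    ts, user, event = parts
    return {"hour": ts[:13], "user": user, "event": event}

def events_per_hour(lines):
    """Aggregate event counts by (hour, event type), skipping bad lines."""
    counts = defaultdict(int)
    for rec in filter(None, map(parse, lines)):
        counts[(rec["hour"], rec["event"])] += 1
    return dict(counts)

print(events_per_hour(raw_events))
```

The map/filter/aggregate structure translates almost line for line into Spark transformations, which is one reason such frameworks make this class of ETL easier to express at scale.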
239

External Data Incorporation into Data Warehouses

Strand, Mattias January 2005 (has links)
Most organizations are exposed to increasing competition and must be able to orient themselves in their environment. Therefore, they need comprehensive systems that are able to present a holistic view of the organization and its business. A data warehouse (DW) may support such tasks, due to its abilities to integrate and aggregate data from organizationally internal as well as external sources and to present the data in formats that support strategic and tactical decision-makers.
Traditionally, DW development projects have focused on data originating from internal systems, whereas the benefits of data acquired external to the organization, i.e. external data, have been neglected. However, as it has become increasingly important to keep track of the competitive forces influencing an organization, external data is gaining more attention. Still, organizations experience problems when incorporating external data; these hinder them from exploiting the potential of external data and prevent them from achieving a return on their investments. In addition, current literature fails to assist organizations in avoiding or solving common problems.
Therefore, in order to support organizations in their external data incorporation initiatives, a set of guidelines has been developed and contextualized. The guidelines are also complemented with a state-of-practice description, as a means of taking one step towards a cohesive body of knowledge regarding external data incorporation into DWs. The development of the guidelines, as well as the establishment of the state-of-practice description, was based on material from two literature reviews and four interview studies. The interview studies were conducted with the most important stakeholders in external data incorporation: the user organizations (2 studies), the DW consultants, and the suppliers of the external data. Additionally, in order to further ground the guidelines, interviews with a second set of DW consultants were conducted.
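One recurring concern when bringing external data into a DW is provenance: internal and supplier-provided facts must stay distinguishable so analysts can judge their freshness and trustworthiness. The sketch below illustrates that single idea; the record fields and supplier name are invented, and the thesis's guidelines cover far more than this.

```python
def incorporate(internal_rows, external_rows, supplier):
    """Merge internal and external records, tagging each with its origin."""
    merged = []
    for r in internal_rows:
        merged.append({**r, "source": "internal"})
    for r in external_rows:
        # External rows carry their supplier, so downstream analyses can
        # filter or weight by data origin.
        merged.append({**r, "source": f"external:{supplier}"})
    return merged

internal = [{"customer": "ACME", "revenue": 120}]
external = [{"customer": "ACME", "credit_rating": "AA"}]
rows = incorporate(internal, external, supplier="MarketDataCo")
print(rows)
```

In a real DW this tagging would live in the load process and the dimensional model rather than in application code, but the principle, never losing track of where a fact came from, is the same.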
