441

BIOMA : En modell för att bedöma en organisations BI-mognad ur ett multidimensionellt perspektiv / BIOMA: Business Intelligence Organizational Maturity Analysis : A model for measuring organizational BI-maturity through a multidimensional perspective

Widehammar, Per, Langell, Robin January 2010 (has links)
Den ökade globaliseringen och senaste finanskrisen ställer höga krav på uppföljning och medvetenhet av ett företags prestation. Business Intelligence (BI) är ett område vars syfte är att förbättra en organisations prestation genom analys av historisk data. BI är ett komplext område som inte bara handlar om tekniska lösningar, även om det är en förutsättning. För närvarande investeras det mycket i olika BI-lösningar och företagen behöver veta vad resurserna bör läggas på. I dagsläget finns det ingen modell som bedömer ett företags arbete med Business Intelligence utifrån ett flertal dimensioner. Syftet med den här studien var att utveckla en mognadsmodell för Business Intelligence och sedan jämföra de undersökta företagen Axfood, Scania och Systembolagets mognad. För att uppnå studiens syfte avsåg vi att besvara följande frågeställningar, ”Hur skulle en modell för att bedöma ett företags mognad inom Business Intelligence kunna se ut?” samt ”Vilka förutsättningar påverkar ett företags mognad inom Business Intelligence”. Mognadsmodellen (BIOMA) kom att bestå av fyra hörnstenar som i sin tur delades in i en eller flera underkategorier. Varje delkategori ger poäng som sedan infogas i ett koordinatsystem där axlarna motsvarar hörnstenarna och poängen utgår från origo. Att mäta ett företags mognad inom BI är komplext, då ett antal aspekter såsom organisationsstruktur, användarmedverkan samt klyftan mellan IT-avdelning och verksamhet kan påverka. Den teoretiska modellen är empiriskt testad. Respondenterna på respektive företag har bedömt hur långt de kommit inom varje hörnsten samt ge synpunkter på modellens utformning. Modellen har sedan förädlats utifrån det empiriska materialet. Vi anser att BIOMA har ett stort värde då det saknas en modell som visuellt och relativt enkelt beskriver ett företags mognad inom Business Intelligence. Modellen kan användas i olika syften, såsom benchmarking mellan processer och företag, säljstöd för konsulter samt vid förstudie för att klargöra ett företags nuläge. / The increased globalization and the recent financial crisis have put high demands on the monitoring and awareness of an organization's performance. Business Intelligence (BI) is an area which aims to improve this performance through analysis of historical data. BI is a complex matter for organizations because it involves more than just technical solutions for maximum performance. Organizations are currently investing in different BI solutions, and a list of priorities has to be made to ensure balanced resource allocation within a BI implementation. To this day no single business intelligence model exists that can adequately measure a company's work from several perspectives. The purpose of this study was to develop a maturity model for BI and use it in a case study of three well-known Swedish companies, Axfood, Scania and Systembolaget, to measure their BI maturity. To achieve the purpose of the study, three distinct research questions arose: "What would a model for measuring a company's Business Intelligence maturity look like?", "How would this model be constructed?" and, finally, "What conditions could potentially affect an organization's maturity in Business Intelligence?". The maturity model BIOMA (Business Intelligence Organizational Maturity Analysis) is made up of four categories, which in turn are divided into one or more sub-categories. A sub-category consists of several statements, and each statement carries a certain number of points. When the points are combined, the summarized amount is inserted into a coordinate system in which the axes correspond to the four pillars and the score is measured from the origin. Measuring a company's BI maturity is complex, since a number of aspects such as organizational structure, end-user involvement, and the gap between the IT department and the business can be of great importance. BIOMA was empirically tested in the case study. The respondents in each company judged their company based on the statements in each sub-category and then made suggestions on ways to change the model. By applying these suggestions to the original material, the model was redeveloped into a final version. The model can be used for various purposes, such as benchmarking between processes and companies, as sales support for consultants, and in pilot studies to clarify a company's present BI maturity. In the absence of a model that visually and relatively simply describes a company's BI maturity across multiple dimensions, we believe that BIOMA has substantial business potential.
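To make the scoring mechanism concrete, here is a minimal sketch of how points from sub-category statements could be summed per cornerstone and placed on the axes of a coordinate system. It is not the authors' implementation: the cornerstone names, statements, point values and threshold logic below are illustrative assumptions.

```python
# Minimal illustrative sketch of BIOMA-style maturity scoring, not the authors' implementation.
# Cornerstone names, statements and point values are hypothetical.
from dataclasses import dataclass

@dataclass
class Statement:
    text: str
    points: int       # points awarded if the respondent agrees the statement holds
    fulfilled: bool   # respondent's assessment

# Hypothetical assessment: each cornerstone has sub-categories, each with scored statements.
assessment = {
    "Technology": {
        "Data warehouse": [Statement("A central DW exists", 2, True),
                           Statement("Data is refreshed daily", 1, False)],
    },
    "Organization": {
        "Sponsorship": [Statement("BI has an executive sponsor", 2, True)],
    },
    "Processes": {
        "Follow-up": [Statement("KPIs are reviewed monthly", 1, True)],
    },
    "Users": {
        "Involvement": [Statement("End users take part in BI projects", 2, False)],
    },
}

def cornerstone_scores(data):
    """Sum the points of fulfilled statements for each cornerstone (axis)."""
    return {
        cornerstone: sum(s.points for sub in subcats.values() for s in sub if s.fulfilled)
        for cornerstone, subcats in data.items()
    }

scores = cornerstone_scores(assessment)
# Each score is plotted from the origin along its own axis of the coordinate system.
for axis, score in scores.items():
    print(f"{axis}: {score} points")
```

Plotting the four scores from the origin along their respective axes yields the kind of visual maturity profile the abstract describes.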
442

Uso de informações de saúde para suporte à decisão: uma metodologia focada no consumidor da informação / Use of health information for decision support: a methodology focused on the information consumer

Azevedo, Luiz Fernando de Aguiar January 2009 (has links)
O objetivo deste trabalho é apresentar os passos necessários para a construção de data marts / data warehouses como uma solução para um ambiente de suporte à decisão. Seu foco não é, entretanto, o aprofundamento de cada uma das etapas deste método (uma vez que existe uma ampla literatura sobre o assunto), mas realçar a importância do envolvimento do usuário aqui denominado consumidor da informação durante o processo de criação e manutenção destes data marts / data warehouses. O consumidor da informação é chamado a participar de um plano integrado com os membros das áreas detentoras do conhecimento necessário para a construção destas soluções de suporte à decisão, incluindo a área de tecnologia da informação. A disseminação das informações contidas nestes bancos de dados para os diversos tipos de consumidores da informação (com diferentes recursos de hardware, software e humanos disponíveis), e sua aplicação no controle social, também são discutidas aqui. / The purpose of this work is to present the required steps for the construction of data marts / data warehouses as a solution for a decision support environment. However, its focus is not to go deeper into each of the steps of this method (since there is a broad literature about the subject), but to highlight the importance of the engagement of the user (here called the information consumer) during the process of creating and maintaining these data marts / data warehouses. The information consumer is called to take part in an integrated plan together with the members of the areas that hold the necessary knowledge to build these decision support solutions, including the information technology (IT) area. The dissemination of the information contained in these databases to the different types of information consumers (with different hardware, software and human resources available), and its application in social control, are also discussed here.
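As an illustration of the kind of data mart such a methodology would produce, here is a minimal, hypothetical sketch of a health-care star schema and a decision-support query; the table and column names are assumptions and are not taken from the thesis.

```python
# Hypothetical sketch of a small health data mart (star schema) for decision support.
# Table and column names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_time   (time_id INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, region_name TEXT);
CREATE TABLE fact_admissions (
    time_id    INTEGER REFERENCES dim_time(time_id),
    region_id  INTEGER REFERENCES dim_region(region_id),
    admissions INTEGER
);
INSERT INTO dim_time   VALUES (1, 2009, 1), (2, 2009, 2);
INSERT INTO dim_region VALUES (1, 'North'), (2, 'South');
INSERT INTO fact_admissions VALUES (1, 1, 120), (1, 2, 80), (2, 1, 95), (2, 2, 110);
""")

# A typical decision-support query: admissions per region per month.
rows = conn.execute("""
    SELECT r.region_name, t.year, t.month, SUM(f.admissions)
    FROM fact_admissions f
    JOIN dim_time t   ON f.time_id = t.time_id
    JOIN dim_region r ON f.region_id = r.region_id
    GROUP BY r.region_name, t.year, t.month
    ORDER BY r.region_name, t.year, t.month
""").fetchall()
for row in rows:
    print(row)
```

Different information consumers could then reach the same mart through tools matched to their available hardware, software and skills, which is the point the abstract stresses.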
443

Istar : um esquema estrela otimizado para Image Data Warehouses baseado em similaridade / iStar: an optimized star schema for similarity-based image data warehouses

Anibal, Luana Peixoto 26 August 2011 (has links)
A data warehousing environment supports the decision-making process through the investigation and analysis of data in an organized and agile way. However, current data warehousing technologies do not allow the decision-making process to be carried out based on the pictorial (intrinsic) features of images. This analysis cannot be carried out in a conventional data warehousing environment because it requires the management of data related to the intrinsic features of the images to perform similarity comparisons. In this work, we propose a new data warehousing environment called iCube to enable the processing of OLAP perceptual similarity queries over images, based on their pictorial (intrinsic) features. Our approach deals with and extends the three main phases of the traditional data warehousing process to allow the use of images as data. For the data integration phase, or ETL phase, we propose a process to represent the image by its intrinsic content (such as color or texture numerical descriptors) and integrate this data with conventional data in the DW. For the dimensional modeling phase, we propose a star schema, called iStar, that stores both the intrinsic and the conventional image data. Moreover, at this stage, our approach models the schema to represent and support the use of different user-defined perceptual layers. For the data analysis phase, we propose an environment in which the OLAP engine uses image similarity as a query predicate. This environment employs a filter mechanism to speed up query execution. The iStar was validated through performance tests evaluating both the building cost and the cost of processing IOLAP queries. The results showed that our approach provided an impressive performance improvement in IOLAP query processing. The performance gain of the iCube over the best related work (i.e. SingleOnion) was up to 98.21%. / Um ambiente de data warehousing (DWing) auxilia seus usuários a tomarem decisões a partir de investigações e análises dos dados de maneira organizada e ágil. Entretanto, os atuais recursos de DWing não possibilitam que o processo de tomada de decisão seja realizado com base em comparações do conteúdo intrínseco de imagens. Esta análise não pode ser realizada por aplicações de DW convencionais porque essa utiliza, como base, imagens digitais e necessita realizar operações baseadas em similaridade, para as quais um DW convencional não oferece suporte. Neste trabalho, é proposto um ambiente de data warehouse chamado iCube que provê suporte ao processamento de consultas IOLAP (Image On-Line Analytical Processing) baseadas em diversas percepções de similaridade entre as imagens. O iCube realiza adaptações nas três principais fases de um ambiente de data warehousing convencional para permitir o uso de imagens como dados de um data warehouse (DW). Para a fase de integração, ou fase ETL (Extract, Transform and Load), nós propomos um processo para representar as imagens a partir de seu conteúdo intrínseco (i.e., por exemplo por meio de descritores numéricos que representam cor ou textura dessas imagens) e integrar esse conteúdo intrínseco a dados convencionais em um DW. Neste trabalho, nós também propomos um esquema estrela otimizado para o iCube, denominado iStar, que armazena tanto dados convencionais quanto dados de representação do conteúdo intrínseco das imagens.
Ademais, nesta fase, o iStar foi projetado para representar e prover suporte ao uso de diferentes camadas perceptuais definidas pelo usuário. Para a fase de análise de dados, o iCube permite que processos OLAP sejam executados com o uso de comparações de similaridade como predicado de consultas e com o uso de mecanismos de filtragem para acelerar o processamento de consultas OLAP. O iCube foi validado a partir de testes de desempenho para a construção da estrutura e para o processamento de consultas IOLAP. Os resultados demonstraram que o iCube melhora significativamente o desempenho no processamento de consultas IOLAP quando comparado aos atuais recursos de IDWing. Os ganhos de desempenho do iCube contra o melhor trabalho correlato (i.e. SingleOnion) foram de até 98,21%.
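The idea of using image similarity as an OLAP query predicate can be sketched as follows. This is an illustrative approximation, not the iCube/iStar implementation: the descriptor layout, the distance function, the radius and the fact rows are all assumptions.

```python
# Illustrative sketch of a similarity predicate over image feature descriptors,
# in the spirit of iCube/iStar but not the actual implementation.
import math

# Hypothetical fact rows: each image is represented by a color-histogram descriptor
# (its intrinsic content) plus conventional dimension keys and a measure.
fact_images = [
    {"image_id": 1, "exam_type": "MRI", "descriptor": [0.10, 0.40, 0.50], "count": 3},
    {"image_id": 2, "exam_type": "MRI", "descriptor": [0.12, 0.38, 0.50], "count": 5},
    {"image_id": 3, "exam_type": "CT",  "descriptor": [0.70, 0.20, 0.10], "count": 2},
]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity_rollup(facts, query_descriptor, radius, group_by):
    """Aggregate the measure over facts whose descriptor lies within `radius`
    of the query descriptor (the similarity predicate), grouped by a dimension."""
    groups = {}
    for row in facts:
        if euclidean(row["descriptor"], query_descriptor) <= radius:
            groups[row[group_by]] = groups.get(row[group_by], 0) + row["count"]
    return groups

# "Aggregate images perceptually similar to this query image, grouped by exam type."
print(similarity_rollup(fact_images, [0.11, 0.39, 0.50], radius=0.05, group_by="exam_type"))
```

A filter mechanism such as the one the abstract mentions would prune most rows before the exact distance computation; the brute-force scan above is only meant to show the predicate itself.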
444

Geovisualização analítica: desenvolvimento de um protótipo de um sistema analítico de informações para a gestão da coleta seletiva de resíduos urbanos recicláveis. / Analytical geovisualization: development of a prototype of an analytical information system for the management of the selective collection of recyclable urban solid waste.

Carlos Enrique Hernández Simões 16 April 2010 (has links)
Os resíduos urbanos descartados de forma irregular constituem um problema sério, principalmente nas grandes cidades, causando entupimento de bueiros e drenagem com conseqüentes inundações, sujeira e transmissão de doenças tais como leptospirose e dengue além de ser um estorvo para o trânsito e acarretarem gastos para a Prefeitura. Por outro lado, reciclar este material é uma fonte de receitas e um gerador de empregos. A Geovisualização Analítica pode ser de grande auxílio para a análise desse problema complexo e para a tomada de decisão. Com essa motivação, a presente dissertação procura fornecer uma visão geral sobre o estado da arte quanto a conceitos e pesquisas em Geovisualização (GVis) e Processamento Analítico (OLAP e SOLAP). Apresenta também processos, metodologias e tecnologias que foram utilizadas no desenvolvimento de um protótipo de um sistema analítico de informações aplicado à área de gestão da coleta seletiva de resíduos sólidos recicláveis em ambiente urbano. Este protótipo, efetivamente implantado, combinou navegação e consulta para recuperar informações através de seleções espaciais e alfanuméricas. Através de exemplos foi mostrado que este tipo de solução auxilia gestores nos processos de exploração, análise e tomada de decisão. / Urban waste that is disposed of irregularly constitutes a serious problem, especially in large cities, clogging drains and drainage systems with subsequent flooding, dirt and transmission of diseases such as leptospirosis and dengue, as well as being a hindrance to traffic and a source of costs for the city government. On the other hand, recycling this material is a source of revenue and a generator of jobs. Analytical geovisualization can be of great help in the analysis of this complex problem and in decision-making. With this motivation, this dissertation seeks to provide an overview of the state of the art in concepts and research in Geovisualization (GVis) and Analytical Processing (OLAP and SOLAP). It also presents the processes, methodologies and technologies that were used in developing a prototype of an analytical information system applied to the management of the selective collection of recyclable solid waste in urban environments. This prototype, effectively deployed, combined navigation and querying to retrieve information through spatial and alphanumeric selections. Through examples it was shown that this type of solution helps managers in exploration, analysis and decision-making processes.
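A minimal sketch of the kind of combined spatial and alphanumeric selection the prototype performs is given below; the collection-point data, field names and bounding box are invented for illustration and are not taken from the dissertation.

```python
# Illustrative sketch of combining a spatial selection (bounding box) with an
# alphanumeric selection (material type) and aggregating, as a SOLAP-style query.
# The data and field names are invented; this is not the dissertation's prototype.

collection_points = [
    {"id": 1, "lon": -46.63, "lat": -23.55, "material": "paper",   "tons": 1.2},
    {"id": 2, "lon": -46.60, "lat": -23.54, "material": "plastic", "tons": 0.8},
    {"id": 3, "lon": -46.70, "lat": -23.60, "material": "paper",   "tons": 2.1},
]

def within(point, bbox):
    """True if the point falls inside the (min_lon, min_lat, max_lon, max_lat) box."""
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= point["lon"] <= max_lon and min_lat <= point["lat"] <= max_lat

def collected_tons(points, bbox, material):
    """Sum tons for points inside the spatial selection that match the material filter."""
    return sum(p["tons"] for p in points if within(p, bbox) and p["material"] == material)

# "How much paper was collected in this district?"
district_bbox = (-46.65, -23.56, -46.59, -23.53)
print(collected_tons(collection_points, district_bbox, "paper"))
```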
445

Towards a business process model warehouse framework

Jacobs, Dina Elizabeth 31 March 2008 (has links)
This dissertation focuses on the re-use of business process reference models, available in a business process model warehouse, to enable the definition of more comprehensive business requirements. It proposes a business process model warehouse framework to promote the re-use of multiple business process reference models and the flexible visualisation of business process models. The critical success factor for such a framework is that it should contribute, at least to some extent, to minimising the causes of inadequate business requirements. The proposed framework is based on an analogy with a data warehouse framework and consists of the following components: the usage of multiple business process reference models as source models, the conceptual design of a process to extract, load and transform multiple business process reference models into a repository, a description of repository functionality for managing enterprise architecture artefacts, and the motivation of flexible visualisation of business process models to ensure more comprehensive business requirements. / Computer Science (School of Computing) / M.Sc. (Information Systems)
446

An XML-based Multidimensional Data Exchange Study / 以XML為基礎之多維度資料交換之研究

王容, Wang, Jung Unknown Date (has links)
在全球化趨勢與Internet帶動速度競爭的影響下，現今的企業經常採取將旗下部門分散佈署於各地，或者和位於不同地區的公司進行合併結盟的策略，藉以提昇其競爭力與市場反應能力。由於地理位置分散的結果，這類企業當中通常存在著許多不同的資料倉儲系統；為了充分支援管理決策的需求，這些不同的資料倉儲當中的資料必須能夠進行交換與整合，因此需要有一套開放且獨立的資料交換標準，俾能經由Internet在不同的資料倉儲間交換多維度資料。然而目前所知的跨資料倉儲之資料交換解決方案多侷限於逐列資料轉換或是以純文字檔案格式進行資料轉移的方式，這些方式除缺乏效率外亦不夠系統化。在本篇研究中，將探討多維度資料交換的議題，並發展一個以XML為基礎的多維度資料交換模式。本研究並提出一個基於學名結構的方法，以此方法發展一套單一的標準交換格式，並促成分散各地的資料倉儲間形成多對多的系統化映對模式。以本研究所發展之多維度資料模式與XML資料模式間的轉換模式為基礎，並輔以本研究所提出之多維度中介資料管理功能，可形成在網路上通用且以XML為基礎的多維度資料交換過程，並能兼顧效率與品質。本研究並開發一套雛型系統，以XML為基礎來實作多維度資料交換，藉資證明此多維度資料交換模式之可行性，並顯示經由中介資料之輔助可促使多維度資料交換過程更加系統化且更富效率。 / Motivated by the globalization trend and Internet-driven speed competition, enterprises nowadays often split into many departments or organizations, or even merge with companies located in different regions, in order to improve their competitiveness and market responsiveness. As a result, there are a number of data warehouse systems in a geographically distributed enterprise. To meet distributed decision-making requirements, the data in these different data warehouses must be exchanged and integrated. Therefore, an open, vendor-independent, and efficient data exchange standard for transferring data between data warehouses over the Internet is an important issue. However, current solutions for cross-warehouse data exchange rely either on record-based conversion or on transferring plain-text files, which is neither adequate nor efficient. In this research, issues in multidimensional data exchange are studied and an XML-based Multidimensional Data Exchange Model is developed. In addition, a generic-construct-based approach is proposed to enable many-to-many systematic mapping between distributed data warehouses, introducing a consistent and unique standard exchange format. Based on the transformation model we develop between the multidimensional data model and the XML data model, and enhanced by the multidimensional metadata management functions proposed in this research, a general-purpose XML-based multidimensional data exchange process over the web is made both efficient and of high quality. Moreover, we develop an XML-based prototype system to exchange multidimensional data, which shows that the proposed multidimensional data exchange model is feasible, and that the multidimensional data exchange process becomes more systematic and efficient with the aid of metadata.
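To illustrate what an XML-based exchange of a multidimensional data slice might look like, here is a minimal sketch that serializes cube cells to XML and reads them back; the element and attribute names are invented for illustration and do not reproduce the standard exchange format defined in the thesis.

```python
# Minimal sketch of serializing a multidimensional data slice to XML and reading it back.
# The element and attribute names are invented; they are not the thesis's exchange format.
import xml.etree.ElementTree as ET

cells = [
    {"product": "P1", "region": "North", "month": "2002-01", "sales": 100.0},
    {"product": "P1", "region": "South", "month": "2002-01", "sales": 80.0},
]

def to_xml(rows):
    cube = ET.Element("cube", name="Sales")
    for row in rows:
        cell = ET.SubElement(cube, "cell")
        for dim in ("product", "region", "month"):
            ET.SubElement(cell, "member", dimension=dim).text = row[dim]
        ET.SubElement(cell, "measure", name="sales").text = str(row["sales"])
    return ET.tostring(cube, encoding="unicode")

def from_xml(doc):
    rows = []
    for cell in ET.fromstring(doc).findall("cell"):
        row = {m.get("dimension"): m.text for m in cell.findall("member")}
        row["sales"] = float(cell.find("measure").text)
        rows.append(row)
    return rows

doc = to_xml(cells)            # the sending warehouse serializes its slice
print(from_xml(doc) == cells)  # the receiving warehouse reconstructs the same cells
```

A shared, generic schema of this kind is what allows many-to-many mapping between warehouses instead of pairwise, ad hoc converters.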
447

An analysis of semantic data quality deficiencies in a national data warehouse: a data mining approach

Barth, Kirstin 07 1900 (has links)
This research determines whether data quality mining can be used to describe, monitor and evaluate the scope and impact of semantic data quality problems in the learner enrolment data on the National Learners’ Records Database. Previous data quality mining work has focused on anomaly detection and has assumed that the data quality aspect being measured exists as a data value in the data set being mined. The method for this research is quantitative in that the data mining techniques and model that are best suited for semantic data quality deficiencies are identified and then applied to the data. The research determines that unsupervised data mining techniques that allow for weighted analysis of the data would be most suitable for the data mining of semantic data deficiencies. Further, the academic Knowledge Discovery in Databases model needs to be amended when applied to data mining semantic data quality deficiencies. / School of Computing / M. Tech. (Information Technology)
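As a rough illustration of weighted, unsupervised scoring of semantic data quality problems, the sketch below flags learner records whose attribute values are rare relative to the rest of the data set; the attributes, weights and threshold are assumptions, and this is not the model developed in the dissertation.

```python
# Rough sketch of unsupervised, weighted anomaly scoring for semantic data quality,
# not the dissertation's model. Attributes, weights and threshold are hypothetical.
from collections import Counter

records = [
    {"qualification": "NQF4", "age_band": "20-25", "province": "GP"},
    {"qualification": "NQF4", "age_band": "20-25", "province": "GP"},
    {"qualification": "NQF4", "age_band": "20-25", "province": "WC"},
    {"qualification": "NQF9", "age_band": "10-15", "province": "GP"},  # semantically suspect
]

# Hypothetical weights: some attributes matter more for semantic plausibility.
weights = {"qualification": 2.0, "age_band": 2.0, "province": 1.0}

def rarity_scores(rows, attr_weights):
    """Score each record by the weighted rarity of its attribute values:
    values that seldom occur in the data set contribute more to the score."""
    counts = {a: Counter(r[a] for r in rows) for a in attr_weights}
    n = len(rows)
    return [
        sum(w * (1 - counts[a][r[a]] / n) for a, w in attr_weights.items())
        for r in rows
    ]

for record, score in zip(records, rarity_scores(records, weights)):
    flag = "suspect" if score > 2.5 else "ok"
    print(f"{score:.2f}  {flag}  {record}")
```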
448

Dynamic cubing for hierarchical multidimensional data space / Cube de données dynamique pour un espace de données hiérarchique multidimensionnel

Ahmed, Usman 18 February 2013 (has links)
De nombreuses applications décisionnelles reposent sur des entrepôts de données. Ces entrepôts permettent le stockage de données multidimensionnelles historisées qui sont ensuite analysées grâce à des outils OLAP. Traditionnellement, les nouvelles données dans ces entrepôts sont chargées grâce à des processus d’alimentation réalisant des insertions en bloc, déclenchés périodiquement lorsque l’entrepôt est hors-ligne. Une telle stratégie implique que d’une part les données de l’entrepôt ne sont pas toujours à jour, et que d’autre part le système de décisionnel n’est pas continuellement disponible. Or cette latence n’est pas acceptable dans certaines applications modernes, tels que la surveillance de bâtiments instrumentés dits "intelligents", la gestion des risques environnementaux etc., qui exigent des données les plus récentes possible pour la prise de décision. Ces applications temps réel requièrent l’intégration rapide et atomique des nouveaux faits dans l’entrepôt de données. De plus, ce type d’applications opérant dans des environnements fortement évolutifs, les données définissant les dimensions d’analyse elles-mêmes doivent fréquemment être mises à jour. Dans cette thèse, de tels entrepôts de données sont qualifiés d’entrepôts de données dynamiques. Nous proposons un modèle de données pour ces entrepôts dynamiques et définissons un espace hiérarchique de données appelé Hierarchical Hybrid Multidimensional Data Space (HHMDS). Un HHMDS est constitué indifféremment de dimensions ordonnées et/ou non ordonnées. Les axes de l’espace de données sont non-ordonnés afin de favoriser leur évolution dynamique. Nous définissons une structure de regroupement de données, appelé Minimum Bounding Space (MBS), qui réalise le partitionnement efficace des données dans l’espace. Des opérateurs, relations et métriques sont définis pour permettre l’optimisation de ces partitions. Nous proposons des algorithmes pour stocker efficacement des données agrégées ou détaillées, sous forme de MBS, dans une structure d’arbre appelée le DyTree. Les algorithmes pour requêter le DyTree sont également fournis. Les nœuds du DyTree, contenant les MBS associés à leurs mesures agrégées, représentent des sections matérialisées de cuboïdes, et l’arbre lui-même est un hypercube partiellement matérialisé maintenu en ligne à l’aide des mises à jour incrémentielles. Nous proposons une méthodologie pour évaluer expérimentalement cette technique de matérialisation partielle ainsi qu’un prototype. Le prototype nous permet d’évaluer la structure et la performance du DyTree par rapport aux autres solutions existantes. L’étude expérimentale montre que le DyTree est une solution efficace pour la matérialisation partielle d’un cube de données dans un environnement dynamique. / Data warehouses are being used in many applications since quite a long time. Traditionally, new data in these warehouses is loaded through offline bulk updates which implies that latest data is not always available for analysis. This, however, is not acceptable in many modern applications (such as intelligent building, smart grid etc.) that require the latest data for decision making. These modern applications necessitate real-time fast atomic integration of incoming facts in data warehouse. Moreover, the data defining the analysis dimensions, stored in dimension tables of these warehouses, also needs to be updated in real-time, in case of any change. In this thesis, such real-time data warehouses are defined as dynamic data warehouses. 
We propose a data model for these dynamic data warehouses and present the concept of the Hierarchical Hybrid Multidimensional Data Space (HHMDS), which consists of both ordered and non-ordered hierarchical dimensions. The axes of the data space are non-ordered, which helps them evolve dynamically without any need for reordering. We define a data grouping structure, called the Minimum Bounding Space (MBS), that enables efficient partitioning of the data in the space. Various operators, relations and metrics used to optimize these data partitions are defined, and analogies between classical OLAP concepts and the HHMDS are drawn. We propose efficient algorithms to store summarized or detailed data, in the form of MBS, in a tree structure called the DyTree. Algorithms for OLAP queries over the DyTree are also detailed. The nodes of the DyTree, holding MBS with associated aggregated measure values, represent materialized sections of cuboids, and the tree as a whole is a partially materialized and indexed data cube which is maintained using online atomic incremental updates. We propose a methodology to experimentally evaluate partial data cubing techniques, and a prototype implementing this methodology is developed. The prototype lets us experimentally evaluate and simulate the structure and performance of the DyTree against other solutions. An extensive study conducted using this prototype shows that the DyTree is an efficient and effective partial data cubing solution for a dynamic data warehousing environment.
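A highly simplified sketch of the MBS idea (grouping nearby facts into a bounding region that carries an aggregated measure, updated incrementally as facts arrive) is shown below; the growth threshold and the flat list of regions stand in for the actual DyTree structure, which is considerably more elaborate.

```python
# Highly simplified sketch of Minimum Bounding Space (MBS) style grouping with
# incremental aggregation. This illustrates the idea only; it is not the DyTree.

class MBS:
    """A bounding region over ordered dimensions plus an aggregated measure."""
    def __init__(self, point, measure):
        self.lower = list(point)
        self.upper = list(point)
        self.total = measure          # aggregated measure of the facts it covers
        self.count = 1

    def expansion(self, point):
        """How much the region would have to grow (per-axis sum) to cover the point."""
        return sum(max(0, l - x) + max(0, x - u)
                   for x, l, u in zip(point, self.lower, self.upper))

    def absorb(self, point, measure):
        """Incrementally extend the region and update its aggregate (online insert)."""
        self.lower = [min(l, x) for l, x in zip(self.lower, point)]
        self.upper = [max(u, x) for u, x in zip(self.upper, point)]
        self.total += measure
        self.count += 1

def insert(regions, point, measure, max_growth=5.0):
    """Insert a new fact: absorb it into the closest region, or open a new one."""
    best = min(regions, key=lambda r: r.expansion(point), default=None)
    if best is not None and best.expansion(point) <= max_growth:
        best.absorb(point, measure)
    else:
        regions.append(MBS(point, measure))

regions = []
for point, measure in [((1, 2), 10), ((2, 3), 5), ((40, 41), 7)]:
    insert(regions, point, measure)

for r in regions:
    print(r.lower, r.upper, "sum =", r.total, "facts =", r.count)
```

In the DyTree such regions would be organized hierarchically and indexed, so each incoming fact updates only the path of nodes that covers it.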
449

Věrnostní program platebních karet, řešení nad DWH / Loyalty program of payment cards, solution based on DWH

Jersák, Pavel January 2010 (has links)
This diploma thesis covers the topics of customer loyalty and loyalty programs in banking; its main topic is payment-card loyalty programs in the banking industry. The first, theoretical part guides us through the concept of customer loyalty in the banking industry and describes the specifics of this loyalty compared to loyalty to retail businesses. Further, the thesis tries to show how credit and debit cards can be used to support customer loyalty in a bank. In this sense, the payment cards are what enable a client's entry into the loyalty program and access to the advantages the program offers. The thesis then describes an interesting loyalty program concept that is closely coupled with business partners, which makes the program highly variable and attractive for customers. The practical part of the thesis describes the technological basis for deploying the loyalty program and discusses two main technical concepts for integrating the loyalty system with the bank's current information systems. One of these is discussed further and described from the point of view of technical realization in common data warehouse systems. The main contribution of this work is an overview of approaches to payment-card loyalty programs and of the ways in which a loyalty program can be integrated with the data warehouse currently deployed in the bank.
450

Acquisition du rythme cardiaque fœtal et analyse de données pour la recherche de facteurs prédictifs de l’acidose fœtale / Fetal heart rate acquisition and data analysis to screen fetal acidosis predictive factors

Houzé de l'Aulnoit, Agathe 30 April 2019 (has links)
L’analyse visuelle du rythme cardiaque fœtal (RCF) est une excellente méthode de dépistage de l’hypoxie fœtale. Cette analyse visuelle est d’autre part sujette à une variabilité inter- et intra-individuelle importante. L’hypoxie fœtale au cours du travail s’exprime par des anomalies du RCF. La sous-évaluation de la gravité d’un RCF entraine une prise de risque indue pour le fœtus avec une augmentation de sa morbi-mortalité et sa surévaluation entraine un interventionnisme obstétrical inutile avec une augmentation du taux de césariennes. Ce dernier point pose par ailleurs en France un problème de santé publique.L’analyse automatisée du signal RCF permet de diminuer la variabilité inter- et intra-individuelle et d’accéder à d’autres paramètres calculés visant à augmenter la valeur diagnostique. Les critères d’analyse morphologiques du RCF (ligne de base, nombre d’accélérations, nombre et typage des ralentissements, variabilité à long terme (VLT)) ont été décrits ainsi que d’autres tels que les surfaces des ralentissements, les indices de variabilité à court terme (VCT) et les analyses fréquentielles. Il n’en demeure pas moins que la définition de la ligne de base, à partir de laquelle sont repérés les accélérations et les ralentissements reste, dans certains cas, difficile à établir.L’objectif principal de la thèse est d’établir un modèle prédictif de l’acidose fœtale à partir d’une analyse automatisée du RCF. L’objectif secondaire est de déterminer la pertinence des différents paramètres élémentaires classiques (CNGOF 2007) (fréquence de base, variabilité, accélérations, ralentissements) et celle d’autres paramètres inaccessible à l’œil (indices de variabilité à court terme, surfaces des ralentissements, analyse fréquentielle…). Par la suite, nous voulons identifier des critères de décision qui aideront à la prise en charge obstétricale.Nous proposons d’aborder l’analyse automatisée du RCF pendant le travail par l’intermédiaire d’une étude cas-témoins ; les cas étant des tracés RCF de nouveau-nés en acidose néonatale (pH artériel au cordon inférieur ou égal à 7,15) et les témoins, des tracés RCF de nouveau-nés sans acidose (pH artériel au cordon supérieur ou égal à 7,25). Il s’agit d’une étude monocentrique à la maternité de l’hôpital Saint Vincent de Paul, GHICL – Lille, sur notre base de données « Bien Naitre » (archivage numérique des tracés RCF depuis 2011), comptant un un nombre suffisant de cas sur ce seul centre. La maternité Saint Vincent de Paul (GHICL) présente depuis 2011 environ 70 cas par an d’acidose néonatale (pHa ≤ 7,10) (3,41%). Le logiciel R sera utilisé pour l’analyse statistique / Visual analysis of the fetal heart rate FHR is a good method for screening for fetal hypoxia but is not sufficiently specific. The visual morphological analysis of the FHR during labor is subject to inter- and intra-observer variability – particularly when the FHR is abnormal. Underestimating the severity of an FHR leads to undue risk-taking for the fetus with an increase in morbidity and mortality and overvaluation leads to unnecessary obstetric intervention with an increased rate of caesarean section. This last point also induces a French public health problem.FHR automated analysis reduces inter and intra-individual variability and accesses other calculated parameters aimed at increasing the diagnostic value. 
The FHR morphological analysis parameters (baseline, number of accelerations, number and type of decelerations, long-term variability (LTV)) are described, as well as others such as deceleration surfaces, short-term variability (STV) and frequency analyses. Nevertheless, when attempting to analyze the FHR automatically, the main problem is the computation of the baseline against which all the other parameters are determined. Automatic analysis provides information on parameters that cannot be derived from a visual analysis and that are likely to improve screening for fetal acidosis during labor. The main objective of the thesis is to establish a predictive model of fetal acidosis from an automated FHR analysis. The secondary objective is to determine the relevance of the classical basic parameters (CNGOF 2007) (baseline, variability, accelerations, decelerations) and that of other parameters inaccessible to the eye (short-term variability indices, deceleration surfaces, frequency analysis, etc.). Subsequently, we want to identify decision criteria that will help in obstetric care management. We propose to validate automated FHR analysis during labor through a case-control study; cases are FHR recordings of newborns with neonatal acidosis (arterial cord pH less than or equal to 7.15) and controls are FHR recordings of newborns without acidosis (arterial cord pH greater than or equal to 7.25). This is a monocentric study at the maternity unit of Saint Vincent de Paul Hospital, GHICL - Lille, based on our « Well Born » database (digital archiving of FHR recordings since 2011), which contains a sufficient number of cases from this single center. Since 2011, the Saint Vincent de Paul hospital (GHICL) has had about 70 cases per year of neonatal acidosis (pHa less than or equal to 7.10) (3.41%). The R software will be used for the statistical analysis.
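As an illustration of the kind of parameters an automated analysis computes, the sketch below derives a crude baseline and a simple short-term variability index from a beat-to-beat FHR series; the sample values, the smoothing window and the STV formula are simplifying assumptions and do not reproduce the algorithms studied in the thesis.

```python
# Illustrative sketch: a crude FHR baseline (moving median) and a simple short-term
# variability (STV) index from a beat-to-beat series. Window size and formulas are
# simplifying assumptions, not the algorithms evaluated in the thesis.
from statistics import median

# Hypothetical FHR samples in beats per minute (one value per second).
fhr = [140, 142, 141, 139, 138, 150, 155, 148, 141, 140, 139, 138, 137, 139, 141]

def moving_median_baseline(series, window=5):
    """Approximate the baseline as a centered moving median, which damps
    accelerations and decelerations more than a moving mean would."""
    half = window // 2
    return [
        median(series[max(0, i - half): i + half + 1])
        for i in range(len(series))
    ]

def short_term_variability(series):
    """Mean absolute beat-to-beat difference, a simple STV-style index."""
    diffs = [abs(b - a) for a, b in zip(series, series[1:])]
    return sum(diffs) / len(diffs)

baseline = moving_median_baseline(fhr)
print("baseline:", baseline)
print("STV index: %.2f bpm" % short_term_variability(fhr))
```

Parameters of this kind, together with deceleration surfaces and frequency-domain indices, would then feed the case-control predictive model described above.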
