81

以企業流程模型導向實施資料庫重構之研究-以S公司為例 / The study of database reverse engineering based on business process module-with S company as an example

林于新, Lin, Yu-hsin Unknown Date (has links)
1960年代起資訊科技應用興起以協助組織運行,多數企業因缺乏資訊知識背景,紛紛購入套裝軟體協助業務營運。但套裝軟體無法切合企業的流程,且隨環境變遷和科技演進,不敷使用的問題日益嚴重。從資料庫設計的角度出發,套裝軟體複雜的資料架構、長期修改和存取資料而欠缺管理、無關連式資料庫的概念,導致組織的資料品質低落。當今組織如何將資料庫重新設計以符合所需、新舊系統資料該如何轉換以提升品質,是企業面臨的一大挑戰。   有鑑於此,本研究設計一套資料庫重構流程,以企業流程為基礎為企業設計客製化的資料庫,並將資料從套裝軟體移轉至該理想的資料庫。流程分三階段,階段1是運用資料庫反向工程(Database Reverse Engineering)的方法,還原企業現行資料庫的資料語意和模型架構;階段2則結合流程模型(Process Model)和資料模型(Data Model)的概念,建立以企業流程為基礎的理想資料庫;階段3利用ETL(Extract、Transform、Load)和資料整合的技術,將企業資料從現行資料庫中萃取、轉換和載入至理想資料庫,便完成資料庫重構的作業。   本研究亦將資料庫重構流程實做於個案公司,探討企業早期導入之套裝軟體和以流程為基礎的理想資料模型間的設計落差。實做分析結果,二者在資料庫架構設計、資料語意建立和正規化設計等三部分存有落差設計,因此在執行資料庫重構之資料移轉解決落差時,需釐清來源端資料的含糊語意、考量目的端資料的一致性和參考完整性、以及清潔錯誤的來源資料。   最後,總結目前企業老舊資料庫普遍面臨資料庫架構複雜、無法吻合作業流程所需、未制訂完善資料庫管理機制等問題,而本研究之資料庫重構流程的設計概念,能為企業建立以流程為導向的理想資料庫。 / Information technology has been used to support organizational operations since the 1960s, but because many enterprises lacked an information-systems background, they simply purchased software packages to run their business. These packages rarely fit the organization's own processes, and the mismatch has grown worse as the environment and technology have evolved. From a database-design perspective, the complex data structures of packaged software, years of modification and data access without proper management, and the absence of relational-database concepts all lead to poor data quality. How to redesign the database to fit actual needs, and how to migrate data from the old system to a new one while improving its quality, are therefore major challenges for enterprises. In response, this research designs a database restructuring process that builds a customized database based on business processes and migrates the data from the packaged software into that target database. The process has three phases. In phase 1, the company recovers the data semantics and the data model of its current database using database reverse engineering. In phase 2, combining the concepts of the process model and the data model, the company builds an ideal database based on its business processes. In phase 3, it extracts, transforms, and loads the data from the current packaged-software database into the ideal database using ETL and data-integration techniques, which completes the database restructuring. The restructuring process is applied to a case company to analyze the design gap between the data model of the packaged software introduced early on and the ideal, process-based data model. The analysis identifies gaps in three areas: database structure design, the definition of data semantics, and normalization design. When resolving these gaps during data migration, a company should clarify the ambiguous semantics of the source data, ensure the consistency and referential integrity of the destination data, and clean erroneous source data. Finally, the legacy databases of today's enterprises commonly suffer from complex structures, a poor fit with operating processes, and the lack of a sound database-management mechanism; the database restructuring process designed in this research enables an enterprise to build an ideal, process-oriented database.
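The thesis itself publishes no code, but the phase-3 idea described above (extract from the legacy packaged-software database, clean and transform, load into a process-based target schema while preserving referential integrity) can be sketched. The following is a minimal, hypothetical Python/sqlite3 illustration; all table and column names are invented for the example and are not taken from the case company.

```python
import sqlite3

# Hypothetical legacy schema (packaged software) and process-based target schema.
legacy = sqlite3.connect(":memory:")
legacy.execute("CREATE TABLE pkg_orders (ord_no TEXT, cust TEXT, amount TEXT)")
legacy.executemany("INSERT INTO pkg_orders VALUES (?, ?, ?)",
                   [("A001", "ACME ", "1,200"), ("A002", "Beta", None)])

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
target.execute("""CREATE TABLE sales_order (
    order_no TEXT PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(id),
    amount REAL NOT NULL)""")

def transform(row):
    """Clean one legacy row: trim names, parse amounts, reject incomplete data."""
    ord_no, cust, amount = row
    if not (ord_no and cust and amount):
        return None                      # dirty source data is filtered out
    return ord_no, cust.strip(), float(amount.replace(",", ""))

# Extract -> Transform -> Load, preserving referential integrity in the target.
for row in legacy.execute("SELECT ord_no, cust, amount FROM pkg_orders"):
    cleaned = transform(row)
    if cleaned is None:
        continue
    ord_no, cust, amount = cleaned
    target.execute("INSERT OR IGNORE INTO customer(name) VALUES (?)", (cust,))
    cust_id = target.execute("SELECT id FROM customer WHERE name = ?", (cust,)).fetchone()[0]
    target.execute("INSERT INTO sales_order VALUES (?, ?, ?)", (ord_no, cust_id, amount))

print(target.execute("SELECT * FROM sales_order").fetchall())
```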
82

Theoretical, numerical and experimental study of DC and AC electric arcs / Étude théorique, numérique et expérimentale d’arcs électrique continu et alternatif

Lisnyak, Marina 20 April 2018 (has links)
L’apparition accidentelle d’un arc électrique dans le système de distribution électrique d’un aéronef peut compromettre la sécurité du vol. Il existe peu de travaux liés à cette problématique. Le but de ce travail est donc d’étudier le comportement d’un arc électrique, en conditions aéronautiques, par des approches théorique, numérique, et expérimentale. Dans ce travail, un modèle MHD de la colonne d’arc à l’ETL a été utilisé, et résolu à l’aide du logiciel commercial COMSOL Multiphysics. Afin de décrire l’interaction plasma-électrodes, le modèle a dû être étendu pour inclure les écarts à l’équilibre près des électrodes. Ces zones ont été prises en compte en considérant la conservation du courant et de l’énergie dans la zone hors-équilibre. L’approche choisie et le développement du modèle ont été détaillés. La validation du modèle dans le cas d’un arc libre a montré un excellent accord avec les résultats numériques et expérimentaux de la littérature. Ce modèle d’arc libre a été étendu au cas de l’arc se propageant entre des électrodes en configuration rails et en géométrie 3D. Une description auto-cohérente du déplacement de l’arc entre les électrodes a été réalisée. La simulation numérique a été faite pour des arcs en régimes DC, pulsé et AC à des pressions atmosphériques et inférieures. Les principales caractéristiques de l’arc ont été analysées et discutées. Les résultats obtenus ont été comparés avec les résultats expérimentaux et ont montré un bon accord. Ce modèle d’arc électrique est capable de prédire le comportement d’un arc de défaut dans des conditions aéronautiques. Des améliorations du modèle sont discutées comme perspectives de ce travail. / The accidental ignition of an electric arc in the electrical distribution system of an aircraft can be a serious problem for flight safety, and the amount of published work on this topic is limited. The aim of this work is therefore to investigate electric arc behavior by means of experiments and numerical simulations. An MHD model of the LTE arc column was used and solved numerically with the commercial software COMSOL Multiphysics. In order to describe the plasma-electrode interaction, the model had to be extended to include non-equilibrium effects near the electrodes. These zones were taken into account by means of current and energy conservation in the non-equilibrium layer. The corresponding matching conditions were developed and are described in the work. Validation of the model for a free-burning arc showed excellent agreement with comprehensive models and experiments from the literature. This model was then extended to the case of an electric arc between rail electrodes in a 3D geometry. Driven by electromagnetic forces, the arc moves along the electrodes, and a self-consistent description of this phenomenon was established. The calculation was performed for DC, pulsed and AC current conditions at atmospheric and lower pressures. The main characteristics of the arc were analyzed and discussed. The results obtained were compared with experimental measurements and showed good agreement. The model of electric arcs between busbar electrodes is able to predict the behavior of a fault arc in aeronautical conditions. Further improvements of the model are discussed as an outlook of the research.
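For readers unfamiliar with this modelling approach, an LTE arc column of the kind described above is usually governed by the standard magnetohydrodynamic conservation equations coupled to current continuity. The sketch below gives a common textbook form of these equations (not necessarily the exact formulation used in the thesis), with $\rho$ the mass density, $\vec{v}$ the velocity, $p$ the pressure, $h$ the enthalpy, $T$ the temperature, $\sigma$ the electrical conductivity, $\kappa$ the thermal conductivity, $\vec{j}$ the current density, $\varphi$ the electric potential and $S_{\mathrm{rad}}$ the radiative losses:

```latex
\begin{align}
  % mass conservation
  \frac{\partial \rho}{\partial t} + \nabla\!\cdot(\rho\,\vec{v}) &= 0 \\
  % momentum conservation with viscous stresses and the Lorentz force
  \rho\frac{\partial \vec{v}}{\partial t} + \rho(\vec{v}\cdot\nabla)\vec{v}
      &= -\nabla p + \nabla\cdot\bar{\bar{\tau}} + \vec{j}\times\vec{B} \\
  % energy conservation with Joule heating and radiative losses
  \rho\frac{\partial h}{\partial t} + \rho\,\vec{v}\cdot\nabla h
      &= \frac{\partial p}{\partial t} + \vec{v}\cdot\nabla p
         + \nabla\cdot(\kappa\nabla T)
         + \frac{\vec{j}\cdot\vec{j}}{\sigma} - S_{\mathrm{rad}} \\
  % current continuity and the simplified Ohm's law
  \nabla\cdot\vec{j} &= 0, \qquad \vec{j} = -\sigma\nabla\varphi
\end{align}
```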
83

Entrepôt de textes : de l'intégration à la modélisation multidimensionnelle de données textuelles / Text Warehouses : from the integration to the multidimensional modeling of textual data

Aknouche, Rachid 26 April 2014 (has links)
Le travail présenté dans ce mémoire vise à proposer des solutions aux problèmes d'entreposage des données textuelles. L'intérêt porté à ce type de données est motivé par le fait qu'elles ne peuvent être intégrées et entreposées par l'application de simples techniques employées dans les systèmes décisionnels actuels. Pour aborder cette problématique, nous avons proposé une démarche pour la construction d'entrepôts de textes. Elle couvre les principales phases d'un processus classique d'entreposage des données et utilise de nouvelles méthodes adaptées aux données textuelles. Dans ces travaux de thèse, nous nous sommes focalisés sur les deux premières phases qui sont l'intégration des données textuelles et leur modélisation multidimensionnelle. Pour mettre en place une solution d'intégration de ce type de données, nous avons eu recours aux techniques de recherche d'information (RI) et du traitement automatique du langage naturel (TALN). Pour cela, nous avons conçu un processus d'ETL (Extract-Transform-Load) adapté aux données textuelles. Il s'agit d'un framework d'intégration, nommé ETL-Text, qui permet de déployer différentes tâches d'extraction, de filtrage et de transformation des données textuelles originelles sous une forme leur permettant d'être entreposées. Certaines de ces tâches sont réalisées dans une approche, baptisée RICSH (Recherche d'information contextuelle par segmentation thématique de documents), de prétraitement et de recherche de données textuelles. D'autre part, l'organisation des données textuelles à des fins d'analyse est effectuée selon TWM (Text Warehouse Modelling), un nouveau modèle multidimensionnel adapté à ce type de données. Celui-ci étend le modèle en constellation classique pour prendre en charge la représentation des textes dans un environnement multidimensionnel. Dans TWM, il est défini une dimension sémantique conçue pour structurer les thèmes des documents et pour hiérarchiser les concepts sémantiques. Pour cela, TWM est adossé à une source sémantique externe, Wikipédia, en l'occurrence, pour traiter la partie sémantique du modèle. De plus, nous avons développé WikiCat, un outil pour alimenter la dimension sémantique de TWM avec des descripteurs sémantiques issus de Wikipédia. Ces deux dernières contributions complètent le framework ETL-Text pour constituer le dispositif d'entreposage des données textuelles. Pour valider nos différentes contributions, nous avons réalisé, en plus des travaux d'implémentation, une étude expérimentale pour chacune de nos propositions. Face au phénomène des données massives, nous avons développé dans le cadre d'une étude de cas des algorithmes de parallélisation des traitements en utilisant le paradigme MapReduce que nous avons testés dans l'environnement Hadoop. / The work presented in this thesis aims to propose solutions to the problems of textual data warehousing. The interest in textual data is motivated by the fact that it cannot be integrated and warehoused using the traditional applications and techniques of current decision-support systems. To overcome this problem, we proposed a text warehouse approach that covers the main phases of a data warehousing process adapted to textual data. We focused specifically on the integration of textual data and their multidimensional modeling. For the textual data integration, we used information retrieval (IR) and natural language processing (NLP) techniques. 
Thus, we proposed an integration framework, called ETL-Text, which is an ETL (Extract-Transform-Load) process suited to textual data. ETL-Text performs the extraction, filtering and transformation tasks that bring the original textual data into a form that allows it to be warehoused. Some of these tasks are carried out by our RICSH approach (contextual information retrieval by thematic segmentation of documents) for preprocessing and searching textual data. The organization of textual data for analysis is handled by our proposed TWM (Text Warehouse Modelling), a new multidimensional model suited to this type of data. It extends the classical constellation model to support the representation of texts in a multidimensional environment. TWM includes a semantic dimension designed to structure document topics and to organize semantic concepts into a hierarchy; the model relies on Wikipedia as an external semantic source for this semantic part. Furthermore, we developed WikiCat, a tool that feeds the TWM semantic dimension with semantic descriptors extracted from Wikipedia. These last two contributions complement the ETL-Text framework to form the complete text-warehousing solution. To validate the different contributions, we carried out, in addition to the implementation work, an experimental study for each proposal. To cope with massive data, we also developed, as part of a case study, parallel processing algorithms based on the MapReduce paradigm and tested them in the Apache Hadoop environment.
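The thesis does not reproduce its MapReduce code; purely as an illustration of the paradigm mentioned above, the following is a minimal pure-Python sketch of the map/shuffle/reduce steps for counting term occurrences per document, the kind of pre-aggregation an ETL-Text-style pipeline might parallelize on Hadoop. The function names and the toy corpus are invented for the example.

```python
from collections import defaultdict
from itertools import chain

# Toy corpus standing in for the textual documents to be warehoused.
documents = {
    "doc1": "data warehouse for textual data",
    "doc2": "topic segmentation of textual documents",
}

def map_terms(doc_id, text):
    """Map step: emit ((doc_id, term), 1) pairs, one per token."""
    for term in text.lower().split():
        yield (doc_id, term), 1

def shuffle(pairs):
    """Shuffle step: group all values by key, as Hadoop does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_counts(key, values):
    """Reduce step: sum the partial counts for one (doc_id, term) key."""
    return key, sum(values)

mapped = chain.from_iterable(map_terms(d, t) for d, t in documents.items())
term_counts = dict(reduce_counts(k, v) for k, v in shuffle(mapped).items())
print(term_counts[("doc1", "data")])   # -> 2
```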
84

Implementace Business Intelligence ve stavebnictví / Implementation of Business Intelligence in building industry

Melichar, Jan January 2008 (has links)
This diploma thesis focuses on strategic performance management and the Business Intelligence domain. Its main objectives are to define the strategic goals of a construction enterprise with the help of the Balanced Scorecard (BSC) concept and to assign specific metrics to those goals. A further objective is to design a Business Intelligence (BI) implementation, that is, to build a data warehouse over the company's data together with multidimensional cubes and user-defined reports. The first part of the thesis lays out the theoretical foundations, covering the main issues of strategic performance management, the BSC concept and the BI domain. In the practical part, the strategic goals and metrics of the construction enterprise are defined; the output of this chapter is an overall strategy map containing the strategic goals with their assigned metrics, along with comments describing the mutual relationships between these goals. The next chapter deals with building the data warehouse over company data, the multidimensional cubes and the user-defined reports, including the interpretation of the measured values. The contribution of the thesis lies in an enterprise management model based on the BSC concept, which helps specify strategic goals, and in the design of a BI implementation that should simplify the monitoring of these goals through the specified metrics. A further contribution to the management of enterprises in the building industry is an overview of the main BI technologies and the ways and means of applying them in practice.
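The thesis describes its strategy map only in prose; as a purely hypothetical illustration of how BSC perspectives, strategic goals and their metrics can be linked in a machine-readable form (for example, to drive later BI reporting), one might represent them as simple records. All goal and metric names below are invented.

```python
from dataclasses import dataclass, field

@dataclass
class StrategicGoal:
    perspective: str            # one of the four BSC perspectives
    name: str
    metrics: list[str] = field(default_factory=list)   # KPIs assigned to the goal
    supports: list[str] = field(default_factory=list)  # goals this one contributes to

strategy_map = [
    StrategicGoal("Financial", "Increase project margin", ["gross margin %"]),
    StrategicGoal("Customer", "Deliver projects on time", ["share of on-time handovers"],
                  supports=["Increase project margin"]),
    StrategicGoal("Internal processes", "Reduce rework", ["rework cost per project"],
                  supports=["Deliver projects on time"]),
    StrategicGoal("Learning and growth", "Train site managers", ["training hours per manager"],
                  supports=["Reduce rework"]),
]

# A BI report can then group metrics by perspective, mirroring the strategy map.
for goal in strategy_map:
    print(f"{goal.perspective}: {goal.name} -> {', '.join(goal.metrics)}")
```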
85

Měření výkonnosti podniku / Corporate Performance Measurement

Pavlová, Petra January 2012 (has links)
This thesis deals with the application of Business Intelligence (BI) to support corporate performance management in ISS Europe, spol. s r. o. The company provides licences for and implements its own original software products as well as third-party software products. First, an analysis is conducted in the company, which then serves as the basis for the implementation of a BI solution interconnected with the company's strategies. The main goal is the implementation of a pilot BI solution to aid the monitoring and optimisation of corporate performance. Secondary goals include the analysis of related concepts, the analysis of the business strategy, the identification of strategic goals and systems, and the proposal and implementation of the pilot BI solution. In its theoretical part, the thesis analyses concepts related to corporate performance and BI implementations and briefly describes the company together with its business strategy. The practical part builds on these theoretical findings. An analysis of the company is carried out using the Balanced Scorecard (BSC) methodology, the result of which is depicted in a strategy map. This methodology is then supplemented by the Activity Based Costing (ABC) analytical method, which divides expenses according to activities. The result is information about which expenses are linked to handling the individual development, implementation and operation demands of particular contracts. This is followed by an original proposal and implementation of a BI solution, which includes the creation of a Data Warehouse (DWH), the design of Extract, Transform and Load (ETL) and Online Analytical Processing (OLAP) systems, and the generation of sample reports. The main contribution of this thesis is in providing company management with an analysis of company data from a multidimensional perspective, which can serve as a basis for prompt and correct decision-making, realistic planning, and performance and product optimisation.
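The thesis does not publish its warehouse definition; the following hypothetical Python/sqlite3 sketch only illustrates the kind of star-schema fact table and roll-up query that such a DWH/OLAP layer is built on. The contract, activity and expense values are invented and are not taken from ISS Europe.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Minimal star schema: one fact table plus two dimensions.
    CREATE TABLE dim_contract (contract_id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE dim_activity (activity_id INTEGER PRIMARY KEY, phase TEXT);
    CREATE TABLE fact_expense (
        contract_id INTEGER REFERENCES dim_contract(contract_id),
        activity_id INTEGER REFERENCES dim_activity(activity_id),
        amount REAL);
    INSERT INTO dim_contract VALUES (1, 'Customer A'), (2, 'Customer B');
    INSERT INTO dim_activity VALUES (1, 'development'), (2, 'implementation'), (3, 'operation');
    INSERT INTO fact_expense VALUES (1, 1, 500), (1, 2, 300), (2, 3, 120), (2, 1, 80);
""")

# A simple OLAP-style roll-up: expenses per contract and phase.
cube = con.execute("""
    SELECT c.customer, a.phase, SUM(f.amount) AS total
    FROM fact_expense f
    JOIN dim_contract c USING (contract_id)
    JOIN dim_activity a USING (activity_id)
    GROUP BY c.customer, a.phase
    ORDER BY c.customer, a.phase
""").fetchall()

for customer, phase, total in cube:
    print(f"{customer:10s} {phase:15s} {total:8.2f}")
```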
86

Řízení podnikové výkonnosti a její implementace v rámci personálních informačních systémů / Corporate Performance Management and Its Implementation in Human Resources Information Systems

Scholz, Martin January 2014 (has links)
The thesis addresses the development of indicators for measuring human capital, which will serve as reporting outputs from the data warehouse. The goal is to propose a set of indicators that covers the overall picture of corporate human resources. I focused mainly on building sets of indicators for measuring human resources and human capital.
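The abstract names no concrete indicators; purely as a hypothetical example of how one such HR indicator (here, a simple annual turnover rate) could be computed from a data-warehouse extract and reported, consider the pandas sketch below. The column names and figures are assumptions made for illustration.

```python
import pandas as pd

# Hypothetical extract from an HR data mart: one row per employee and year.
headcount = pd.DataFrame({
    "year":         [2013, 2013, 2013, 2014, 2014, 2014, 2014],
    "employee_id":  [1, 2, 3, 1, 2, 4, 5],
    "left_company": [False, False, True, False, True, False, False],
})

# Simple turnover proxy: leavers in the year divided by distinct employees that year.
summary = headcount.groupby("year").agg(
    employees=("employee_id", "nunique"),
    leavers=("left_company", "sum"),
)
summary["turnover_rate"] = summary["leavers"] / summary["employees"]
print(summary)
```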
87

Cardinality estimation in ETL processes

Lehner, Wolfgang, Thiele, Maik, Kiefer, Tim 22 April 2022 (has links)
Cardinality estimation in ETL processes is particularly difficult. Aside from the well-known SQL operators, which are also used in ETL processes, there are a variety of operators without exact counterparts in the relational world. In addition to those, we find operators that support very specific data integration aspects. For such operators, there are no well-examined statistical approaches to cardinality estimation. Therefore, we propose a black-box approach and estimate the cardinality using a set of statistical models for each operator. We discuss different model granularities and develop an adaptive cardinality estimation framework for ETL processes. We map the abstract model operators to specific statistical learning approaches (regression, decision trees, support vector machines, etc.) and evaluate our cardinality estimates in an extensive experimental study.
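As a hypothetical illustration of the black-box idea described above (not the authors' actual framework), one can train a per-operator regression model that predicts output cardinality from features observed at runtime. The sketch below uses scikit-learn with synthetic data; the feature names and the data-generating formula are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic training data for one ETL operator: runtime features
# (input cardinality, number of distinct join keys) and measured output cardinality.
n = 500
input_card = rng.integers(1_000, 100_000, size=n)
distinct_keys = rng.integers(10, 5_000, size=n)
X = np.column_stack([input_card, distinct_keys])
y = input_card * (distinct_keys / distinct_keys.max()) + rng.normal(0, 500, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One black-box model per operator; a decision tree is one of the learners named above.
model = DecisionTreeRegressor(max_depth=6).fit(X_train, y_train)
print("R^2 on held-out runs:", round(model.score(X_test, y_test), 3))
print("Predicted output cardinality:", int(model.predict([[50_000, 2_500]])[0]))
```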
88

Automatisk kvalitetssäkring av information för järnvägsanläggningar : Automatic quality assurance of information for railway infrastructure / Automatiserad kvalitetssäkring av BIM-data från databas : Automated quality assurance of BIM-data from databases

Abraham, Johannes, Romano, Robin January 2019 (has links)
Järnvägsbranschen står i dagsläget inför stora utmaningar med planerade infrastrukturprojekt och underhåll av befintlig järnväg. Med ökade förväntningar på utbyggnaden av den framtida järnvägen, medför det en ökad risk för belastning på det nuvarande nätet. Baksidan av utbyggnaden kan bli fler inställda resor och förseningar. Genom att dra nytta av tekniska innovationer såsom digitalisering och automatisering kan det befintliga system och arbetsprocesser utvecklas för en effektivare hantering. Trafikverket ställer krav på Byggnadsinformationsmodeller (BIM) i upphandlingar. Projektering för signalanläggningar sker hos Sweco med CAD-programmet Promis.e. Från programmet kan Baninformationslistor (BIS-listor) innehållande information om objekts attribut hämtas. Trafikverket ställer krav på att attributen ska bestå av ett visst format eller ha specifika värden. I detta examensarbete undersöks metoder för att automatisk verifiera ifall objekt har tillåtna värden från projekteringsverktyget samt implementering av en metod. Undersökta metoder innefattar kalkyleringsprogrammet Excel, frågespråket Structured Query Language (SQL) och processen Extract, Transform and Load (ETL). Efter analys av metoder valdes processen ETL. Resultatet blev att ett program skapades för att automatiskt välja vilken typ av BIS-lista som skulle granskas och för att verifiera om attributen innehöll tillåtna värden. För att undersöka om kostnaden för programmen skulle gynna företaget utöver kvalitetssäkringen utfördes en ekonomisk analys. Enligt beräkningarna kunde valet av att automatisera granskningen även motiveras ur ett ekonomiskt perspektiv. / Increased expectations for the expansion of the future railway entail an increased load on the current railway network, and the downside of the expansion can be a growing number of cancellations and delays. By taking advantage of technological innovations such as digitalization and automation, existing systems and work processes can be developed for more efficient management. The Swedish Transport Administration sets requirements for Building Information Modeling (BIM) in procurements. At Sweco, the design of signalling installations is carried out in the CAD program Promis.e, from which lists containing the attribute information of the objects (BIS lists) can be retrieved. The Swedish Transport Administration requires the attributes to follow a certain format or to have specific values. This thesis project examines methods for automatically verifying whether objects from the design tool have allowed values, and implements one such method. The investigated methods include the spreadsheet program Excel, the query language SQL and the ETL (Extract, Transform and Load) process. After the methods were analyzed, the ETL process was chosen. The result is a program that automatically selects which type of BIS list is to be reviewed and verifies that the examined attributes contain allowed values. To investigate whether the cost of the program would benefit the company beyond the quality assurance itself, an economic analysis was carried out; according to the calculations, automating the review could also be justified from an economic perspective.
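The thesis implements its check with an ETL process; as a purely illustrative alternative sketch (not the program described above), the Python snippet below shows the same idea of validating BIS-list attributes against allowed values and formats. The attribute names, the allowed values and the regular expression are invented for the example.

```python
import re

# Hypothetical validation rules for one type of BIS list:
# an attribute must either belong to a set of allowed values or match a format (regex).
RULES = {
    "Signaltyp": {"allowed": {"Huvudsignal", "Dvärgsignal", "Försignal"}},
    "Kilometertal": {"format": re.compile(r"^\d{1,3}\+\d{3}$")},   # e.g. "12+345"
}

def validate_row(row: dict) -> list[str]:
    """Return a list of human-readable errors for one BIS-list row."""
    errors = []
    for attribute, rule in RULES.items():
        value = row.get(attribute, "")
        if "allowed" in rule and value not in rule["allowed"]:
            errors.append(f"{attribute}: '{value}' is not an allowed value")
        if "format" in rule and not rule["format"].match(value):
            errors.append(f"{attribute}: '{value}' does not match the required format")
    return errors

rows = [
    {"Signaltyp": "Huvudsignal", "Kilometertal": "12+345"},
    {"Signaltyp": "Okänd", "Kilometertal": "12-345"},
]
for i, row in enumerate(rows, start=1):
    for error in validate_row(row):
        print(f"row {i}: {error}")
```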
89

Le rôle des collisions avec l'hydrogène dans la determination hors-ETL de l'abondance du fer dans les étoiles froides / Non-LTE iron abundance determination in cool stars : the role of hydrogen collisions

Ezzeddine, Rana 07 December 2015 (has links)
La détermination d'abondances stellaires très précises a toujours été et reste un point clé de toute analyse spectroscopique. Cependant, de nombreuses études ont montré que l'hypothèse de l'équilibre thermodynamique local (ETL), largement utilisée dans les analyses spectroscopiques, est inadéquate pour déterminer les abondances et les paramètres stellaires des étoiles géantes et pauvres en métaux où les effets hors-ETL dominent. C'est pourquoi une modélisation hors-ETL des spectres stellaires est cruciale afin de reproduire les observations et ainsi déterminer avec précision les paramètres stellaires. Cette modélisation hors-ETL nécessite l'utilisation d'un grand jeu de données atomiques, qui ne sont pas toujours connues avec certitude. Dans les étoiles froides, les taux de collisions de l'atome d'hydrogène sont une des principales sources d'incertitudes. Ces taux sont souvent calculés en considérant une approche classique (l'approximation de Drawin) pour les transitions permises lié-lié et les transitions d'ionisation. Cette approche classique tend à surestimer les taux de collisions et ne reproduit pas correctement le comportement avec les énergies. Dans cette thèse, nous démontrons que l'approximation de Drawin ne peut pas décrire les taux de collisions dans le cas de l'atome d'hydrogène. Nous présentons une nouvelle méthode pour estimer ces taux, par le biais d'ajustement sur des taux quantiques existant pour d'autres éléments. Nous montrons que cette méthode d'ajustement quantique (MAQ) est satisfaisante pour les modélisations hors-ETL lorsque les taux quantiques dédiés ne sont pas effectivement disponibles. Nous testons cette nouvelle méthode, avec le modèle d'atome de fer que nous avons développé, sur des étoiles de référence issues du « Gaia-ESO survey ». En partant de paramètres photosphériques non-spectroscopiques connus, nous déterminons les abondances (1D) en fer de ces étoiles de référence dans les cas ETL et hors-ETL. Nos résultats dans le cas hors-ETL conduisent à un excellent accord entre les abondances de FeI et FeII avec de faibles écarts types de raies à raies, particulièrement dans le cas des étoiles pauvres en métaux. Notre méthode est validée par comparaison avec de nouveaux calculs quantiques préliminaires sur l'atome de Fe I et d'hydrogène, dont les ajustements sont en excellent accord avec les nôtres. / The determination of high-precision abundances has always been, and remains, an important goal of all spectroscopic studies. The use of the LTE assumption in spectroscopic analyses has been extensively shown in the literature to badly affect the determined abundances and stellar parameters, especially in metal-poor and giant stars, which can be subject to large non-LTE effects. Non-LTE modeling of stellar spectra is therefore essential to accurately reproduce the observations and derive stellar abundances. Non-LTE calculations require the input of a large set of atomic data, which may be subject to uncertainties. In cool stars, collisional rates with hydrogen atoms are a major source of uncertainty; they are often approximated using a classical recipe (the Drawin approximation) for allowed bound-bound and ionization transitions only. This approximation has been shown to overestimate the collisional rates and does not reproduce the correct behavior with energy. We demonstrate in this dissertation the inability of the Drawin approximation to describe the hydrogen collisional rates. We introduce a new method to estimate these rates based on fitting the existing quantum rates of other elements. 
We show that this quantum fitting method (QFM) performs well in non-LTE calculations when detailed quantum rates are not available. We test the newly proposed method, with a complete iron model atom that we developed, on a reference set of stars from the Gaia-ESO survey. Starting from well-determined non-spectroscopic atmospheric parameters, we determine 1D non-LTE and LTE iron abundances for this set of stars. Our non-LTE results show excellent agreement between Fe I and Fe II abundances and small line-by-line dispersions, especially for the metal-poor stars. Our method is validated by comparison with new preliminary Fe I+H quantum calculations, whose fits show excellent agreement with ours.
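The abstract describes the quantum fitting method only in general terms; purely as a hypothetical sketch of the kind of fit involved (the actual functional form and data of the QFM are not reproduced here), one could fit published quantum rate coefficients of another element against transition energy and use the fit to estimate a missing hydrogen-collision rate. All numbers and the log-linear form below are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented example data: rate coefficients (cm^3 s^-1) of some element with detailed
# quantum calculations, tabulated against the transition energy Delta E (eV).
delta_E = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
rate_coeff = np.array([3e-8, 8e-9, 2e-9, 6e-10, 5e-11, 4e-12])

def log_linear(dE, a, b):
    """Assumed fitting form: log10(rate) decreases linearly with Delta E."""
    return a + b * dE

params, _ = curve_fit(log_linear, delta_E, np.log10(rate_coeff))

# Estimate a missing rate for a transition with Delta E = 2.5 eV in the target atom.
estimated = 10 ** log_linear(2.5, *params)
print(f"fit: log10(rate) = {params[0]:.2f} + {params[1]:.2f} * dE")
print(f"estimated rate coefficient at 2.5 eV: {estimated:.2e} cm^3 s^-1")
```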
90

Integrating Heterogeneous Data

Nieva, Gabriel January 2016 (has links)
Technological advances, particularly in the areas of processing and storage, have made it possible to gather an unprecedentedly vast and heterogeneous amount of data. The evolution of the internet, particularly social media, the Internet of Things and mobile technology, together with new business trends, has precipitated us into the age of Big Data and adds complexity to the integration task. The objective of this study has been to explore the question of data heterogeneity through a systematic literature review. The study surveys the drivers of this data heterogeneity and its inner workings, and it explores the interrelated fields and technologies that deal with the capture, organization and mining of this data, as well as their limitations. Developments such as Hadoop and its suite of components, together with new computing paradigms such as cloud computing and virtualization, help cope with the unprecedented amount of rapidly changing, heterogeneous data we see today. Despite these dramatic developments, the study shows that there are gaps which need to be filled in order to tackle the challenges of Web 3.0.
