  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Integrering av befintliga operationella system för beslutsstöd / Systems Integration for Decision Support

Johansson, Peter, Stiernström, Peter January 2003 (has links)
This thesis takes as its starting point the integrated operational systems of Tekniska Verken and Östkraft. These systems were developed to support decision processes for, among other things, physical and financial electricity trading. The integration was achieved by adopting an IRM-based solution, referred to by the companies as a "data warehouse".

The deregulation of the electricity market placed greater demands on electricity suppliers in terms of flexibility and functionality once customers could choose their supplier themselves. What contributes most to the complexity of electricity trading is the wide variety of electricity contracts that can be signed and the constantly varying purchase price on the Nordic power exchange.

For the companies in the case study, the data warehouse solution suffers from unusually poor performance. The purpose of the thesis is to identify, through a qualitative study, the primary factors behind these performance problems. We also want to shed light on how existing operational systems should be integrated in order to achieve good performance.

The thesis concludes that the performance problems can be traced both to the architectural and structural levels and to the choice to develop in-house the logic that processes data by retrieving, transforming, and updating the data warehouse. A further factor is the high level of detail that characterizes the data in the warehouse.
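The in-house extract-transform-load logic that the abstract identifies as a performance factor can be sketched minimally. All names, the data shapes, and the aggregation scheme below are illustrative assumptions, not the thesis's actual implementation.

```python
# Illustrative ETL sketch: extract rows from an operational source,
# transform them (here: aggregate fine-grained readings to a coarser
# daily level, addressing the "high level of detail" factor), and load
# the result into a warehouse table. All names are hypothetical.

def extract(source_rows):
    """Pull raw rows from an operational source."""
    return list(source_rows)

def transform(rows):
    """Aggregate per-reading amounts into daily totals."""
    daily = {}
    for day, amount in rows:
        daily[day] = daily.get(day, 0.0) + amount
    return daily

def load(warehouse, daily):
    """Update the warehouse with the aggregated facts."""
    warehouse.update(daily)
    return warehouse

raw = [("2003-01-01", 1.5), ("2003-01-01", 2.0), ("2003-01-02", 3.0)]
wh = load({}, transform(extract(raw)))
print(wh)  # {'2003-01-01': 3.5, '2003-01-02': 3.0}
```

Aggregating during the transform step, rather than loading every raw reading, is one standard way to trade detail for query performance.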
3

Donner une autre vie à vos besoins fonctionnels : une approche dirigée par l'entreposage et l'analyse en ligne / Give Another Life to Your Functional Requirements: An Approach Driven by Warehousing and Online Analysis

Djilani, Zouhir 12 July 2017 (has links)
Functional and non-functional requirements represent the first step in the design of any application, software, or system. All the issues associated with requirements are analyzed in the field of Requirements Engineering (RE). The RE process consists of several steps: discovering, analyzing, validating, and evolving the requirements related to the functionalities of the system. The maturity of the RE community has allowed it to establish a well-defined life cycle for the requirements process, comprising the following phases: elicitation, modeling, specification, validation, and management. Once the requirements are validated, they are archived or stored in repositories within companies.

With the continuous storage of requirements, companies accumulate a significant amount of requirements information that needs to be analyzed in order to reproduce previous experiences and the acquired know-how, by reusing and exploiting these requirements for new projects. Offering these companies a warehouse in which all requirements are stored represents an excellent opportunity to analyze them for decision-making purposes and to mine them to reproduce past experiences. Recently, the Business Process Management (BPM) community expressed the same need for processes. In this thesis, we want to exploit the success of data warehouses and replicate it for functional requirements. The issues encountered in the design of data warehouses are found almost identically in the case of functional requirements. Requirements are often heterogeneous, especially in large companies such as Airbus, where each partner is free to use its own vocabulary and formalism to describe its requirements. To reduce this heterogeneity, the use of ontologies is necessary. In order to ensure the autonomy of each source, we assume that each source has its own ontology. This requires matching efforts between the ontologies to ensure the integration of functional requirements. An important feature of requirements warehousing is that requirements are often expressed using semi-formal formalisms, such as UML use cases, with a substantial textual part. In order to stay as close as possible to our contributions in data warehousing, we propose a pivot model that factorizes three widespread semi-formalisms used by the requirements sources, together with a precise description of the requirements. This pivot model is used to define the multidimensional model of the requirements warehouse, which is then fed with the sources' requirements using an ETL (Extract, Transform, Load) algorithm. Using the reasoning mechanisms offered by ontologies and matching metrics, we cleaned our requirements warehouse. Once deployed, the warehouse is exploited with OLAP analysis tools.

Our methodology is supported by a tool covering all design and exploitation phases of the requirements warehouse.
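The pivot model described above, which normalizes requirements from heterogeneous source formalisms before the ETL step, can be sketched as a small data structure. The field names, the `use_case` mapping, and the source record shape are assumptions for illustration, not the thesis's actual model.

```python
# Minimal sketch of a pivot model for requirements: records expressed in
# different source formalisms (e.g. UML use cases) are mapped onto one
# common representation, which the ETL step then loads into the warehouse.
# All field and function names are hypothetical.
from dataclasses import dataclass

@dataclass
class PivotRequirement:
    rid: str        # requirement identifier
    actor: str      # who needs the functionality
    action: str     # what the system must do
    formalism: str  # originating semi-formalism, e.g. "use_case"

def from_use_case(uc: dict) -> PivotRequirement:
    """Map a UML-use-case-like source record onto the pivot model."""
    return PivotRequirement(uc["id"], uc["actor"], uc["description"], "use_case")

def etl_load(warehouse: list, use_cases: list) -> list:
    """Extract source records, transform to pivot form, load the warehouse."""
    warehouse.extend(from_use_case(uc) for uc in use_cases)
    return warehouse

reqs = etl_load([], [{"id": "R1", "actor": "pilot", "description": "display altitude"}])
print(reqs[0].actor)  # pilot
```

Factoring the source formalisms into one pivot schema is what lets a single multidimensional warehouse model serve all sources.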
4

Návrh datového skladu / Design of Data Warehouse

Szkuta, David January 2018 (has links)
This diploma thesis deals with the design of a data warehouse that stores events created in a mobile app. The goal was to design an alternative to the current solution. The thesis explains the concepts, mainly data warehouse terminology, that are used in subsequent chapters. An analysis of the current solution is conducted, as well as research into available data warehouse and ETL services. Based on the results of the analysis, a suitable new solution is chosen, implemented, and tested.
5

Automating User-Centered Design of Data-Intensive Processes

Theodorou, Vasileios 20 January 2017 (has links)
Business Intelligence (BI) enables organizations to collect and analyze internal and external business data to generate knowledge and business value, and to provide decision support at the strategic, tactical, and operational levels. The consolidation of data coming from many sources as a result of managerial and operational business processes, usually referred to as Extract-Transform-Load (ETL), is itself a statically defined process, and knowledge workers have little to no control over the characteristics of the presentable data to which they have access. Two main reasons dictate the reassessment of this rigid approach in the context of modern business environments. The first is that the service-oriented nature of today's business, combined with the increasing volume of available data, makes it impossible for an organization to proactively design efficient data management processes. The second is that enterprises can benefit significantly from analyzing the behavior of their business processes, fostering their optimization. Hence, we took a first step towards quality-aware ETL process design automation by defining, through a systematic literature review, a set of ETL process quality characteristics and the relationships between them, and by providing quantitative measures for each characteristic. Subsequently, we produced a model that represents ETL process quality characteristics and the dependencies among them, and we showcased, through the application of a Goal Model with quantitative components (i.e., indicators), how our model can provide the basis for subsequent analysis to reason about and make informed ETL design decisions. In addition, we introduced our holistic view of quality-aware ETL process design by presenting a framework for user-centered declarative ETL.

This included the definition of an architecture and methodology for the rapid, incremental, qualitative improvement of ETL process models, promoting automation and reducing complexity, as well as a clear separation of business-user and IT roles in which each user is presented with appropriate views and assigned fitting tasks. In this direction, we built a tool, POIESIS, which facilitates incremental, quantitative improvement of ETL process models with users as the key participants, through well-defined collaborative interfaces. For evaluating different quality characteristics of an ETL process design, we proposed an automated data generation framework for evaluating ETL processes (i.e., Bijoux). To this end, we classified the operations based on the part of the input data they access for processing, which helps Bijoux during data generation both to identify the constraints that specific operation semantics imply over the input data and to decide at which level the data should be generated (e.g., single field, single tuple, complete dataset). Bijoux offers data generation capabilities in a modular and configurable manner, which can be used to evaluate the quality of different parts of an ETL process. Moreover, we introduced a methodology that can be applied in concrete contexts, building a repository of patterns and rules. This generated knowledge base can be used during the design and maintenance phases of ETL processes, automatically exposing understandable conceptual representations of the processes and providing useful insight for design decisions. Collectively, these contributions have raised the level of abstraction of ETL process components, revealing their quality characteristics at a granular level and allowing for evaluation and automated (re-)design, taking into consideration business users' quality goals.
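The classification of ETL operations by the part of the input they access, which the abstract says drives the level at which Bijoux generates test data, can be illustrated with a small sketch. The operation names and the classification table are assumptions for illustration, not Bijoux's actual catalog.

```python
# Sketch: classify ETL operations by the scope of input data they access,
# then derive the level at which test data must be generated. An operation
# touching one attribute needs only field-level data; a row filter needs
# whole tuples; an aggregation needs the complete dataset. Names assumed.
ACCESS_LEVEL = {
    "uppercase_field": "field",    # reads/writes a single attribute value
    "filter_row":      "tuple",    # decides per row, needs whole tuples
    "aggregate_sum":   "dataset",  # needs the full input to compute
}

def generation_level(operations):
    """Data must be generated at the coarsest level any operation needs."""
    order = ["field", "tuple", "dataset"]
    return max((ACCESS_LEVEL[op] for op in operations), key=order.index)

print(generation_level(["uppercase_field", "filter_row"]))     # tuple
print(generation_level(["uppercase_field", "aggregate_sum"]))  # dataset
```

Resolving the coarsest required scope up front keeps the generator modular: each operation only declares what it touches, and the framework decides how much data to synthesize.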
