
Srovnání produktů z oblasti Product Information Management / Comparison of Product Information Management software tools

Vytiska, Tomáš January 2008 (has links)
This diploma thesis deals with Product Information Management (PIM) and compares PIM software tools. Its main goal is to introduce the area of PIM systems to Czech readers. A further subgoal is to define a system of evaluation criteria, which is also needed for the last goal: to analyze and compare PIM software. The methods used are the exploration of information sources, the gathering of information through e-mail communication, and the use of empirical knowledge to define the system of criteria. The contribution of the work corresponds to its goals. The work is divided into two parts. The first, theoretical part deals with PIM definitions, context, functionality, architectures and the development of the PIM market. The second, practical part involves selecting particular PIM software tools, defining the system of criteria and comparing the PIM software tools.
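A criteria-based comparison of the kind the thesis describes can be sketched as a weighted scoring model. The criteria, weights, tool names, and scores below are invented for illustration; none of them come from the thesis.

```python
# Hypothetical criteria weights and per-tool scores (scale 1-5); these values
# only illustrate the comparison method, not the thesis's actual findings.
weights = {"functionality": 0.4, "architecture": 0.3, "price": 0.2, "support": 0.1}

tools = {
    "PIM Tool A": {"functionality": 4, "architecture": 3, "price": 2, "support": 5},
    "PIM Tool B": {"functionality": 3, "architecture": 4, "price": 4, "support": 3},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of a tool's criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(tools, key=lambda t: weighted_score(tools[t]), reverse=True)
for t in ranking:
    print(f"{t}: {weighted_score(tools[t]):.2f}")
```

Changing the weights re-ranks the tools, which is why defining the system of criteria is treated as a goal in its own right.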

Customer Data Management

Sehat, Mahdis, PAVEZ FLORES, RENÉ January 2012 (has links)
As business complexity and the number of customers continue to grow, and customers evolve into multinational organisations that operate across borders, many companies face great challenges in the way they manage their customer data. In today's business, a single customer may have a relationship with several entities of an organisation, which means that customer data is collected through different channels. One customer may be described differently by each entity, which makes it difficult to obtain a unified view of the customer. In companies where there are several sources of data and the data is distributed across several systems, the data environment becomes heterogeneous. In this state, customer data is often incomplete, inaccurate and inconsistent throughout the company. This thesis studies how organisations with heterogeneous customer data sources implement the Master Data Management (MDM) concept to achieve and maintain high customer data quality. The purpose is to provide recommendations for achieving successful customer data management using MDM, based on existing literature on the topic and an interview-based empirical study. Successful customer data management is more an organisational issue than a technological one and requires a top-down approach in order to develop a common strategy for an organisation's customer data management. Proper central assessment and maintenance processes that can be adjusted to the entities' needs must be in place, and responsibility for the maintenance of customer data should be delegated to several levels of the organisation in order to manage it better.
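Obtaining a unified view of a customer who is described differently by each entity is at heart a record-matching problem. The sketch below uses fuzzy string similarity from the Python standard library; the source systems, field names, and the 0.8 threshold are invented for illustration and are not taken from the thesis.

```python
from difflib import SequenceMatcher

# Hypothetical customer records from two source systems; the names and fields
# are invented to illustrate matching differently-described customers.
crm = {"name": "ACME Corporation", "city": "Stockholm"}
billing = {"name": "Acme Corp.", "city": "Stockholm"}

def similarity(a: dict, b: dict) -> float:
    """Average fuzzy similarity over the fields both records share (0.0-1.0)."""
    fields = set(a) & set(b)
    ratios = [SequenceMatcher(None, a[f].lower(), b[f].lower()).ratio()
              for f in fields]
    return sum(ratios) / len(ratios)

score = similarity(crm, billing)
print(f"match score: {score:.2f}")
is_same_customer = score > 0.8  # illustrative threshold, tuned in practice
```

In a real MDM setup, matched records would then be merged into a single golden record rather than merely flagged.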

Evaluation of Machine Learning techniques for Master Data Management

Toçi, Fatime January 2023 (has links)
In organisations, duplicate customer master data present a recurring problem. Duplicate records can result in errors, complications, and inefficiency, since they frequently arise from disparate systems or inadequate data integration. Because customer information changes over time and complicates the problem further, prompt detection and correction are essential. In addition to improving data quality, eliminating duplicate information also improves business processes, boosts customer confidence, and supports well-informed decisions. This master's thesis explores the application of machine learning to Master Data Management. The main objective of the project is to assess how machine learning may improve the accuracy and consistency of master data records, and thereby support the improvement of data quality within enterprises by managing issues such as duplicate customer data. One research question is whether machine learning can be used to improve the accuracy of customer data; another is whether it can be used to investigate scientific models for customer analysis when cleaning data with machine learning. The study's process consists of four steps: dimension identification, selection of an appropriate algorithm, selection of appropriate parameter values, and analysis of the output. As ground truth for the project, 22,000 was established as the correct number of clusters for the clustering algorithms, representing the number of unique customers. Given this, the best-performing algorithm by number of clusters and silhouette score turned out to be KMeans, with 22,000 clusters and a silhouette score of 0.596, followed by BIRCH with 22,000 clusters and a silhouette score of 0.591.
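The comparison the abstract reports, KMeans versus BIRCH at a fixed cluster count scored by the silhouette metric, can be sketched with scikit-learn. The synthetic data and the cluster count of 3 are stand-ins so the example runs quickly; the thesis used roughly 22,000 clusters on real customer data.

```python
import numpy as np
from sklearn.cluster import KMeans, Birch
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Toy stand-in for vectorised customer records: three well-separated
# Gaussian blobs of 50 points each in 4 dimensions.
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 4))
               for c in (0.0, 2.0, 4.0)])

results = {}
for name, model in [("KMeans", KMeans(n_clusters=3, n_init=10, random_state=0)),
                    ("BIRCH", Birch(n_clusters=3))]:
    labels = model.fit_predict(X)
    results[name] = silhouette_score(X, labels)

for name, score in results.items():
    print(f"{name}: silhouette = {score:.3f}")
```

The silhouette score ranges from -1 to 1; values closer to 1 indicate tighter, better-separated clusters, which is how the thesis ranks the two algorithms.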

Kvalita kmenových dat a datová synchronizace v segmentu FMCG / Master Data Quality and Data Synchronization in FMCG

Tlučhoř, Tomáš January 2013 (has links)
This master thesis deals with master data quality at retailers and suppliers of fast-moving consumer goods (FMCG). The objective is to map the flow of product master data in the FMCG supply chain and identify the causes of poor data quality, with emphasis on analyzing the process of listing new items at retailers. Global data synchronization is one of the tools for increasing the efficiency of the listing process and improving master data quality; another objective is therefore to clarify the causes of the low adoption of global data synchronization in the Czech market. The thesis also suggests measures leading to better master data quality in FMCG and to the expansion of global data synchronization in the Czech Republic. The thesis consists of a theoretical and a practical part. The theoretical part defines several terms and explores supply chain operation and communication; it also covers the theory of data quality and its governance. The practical part focuses on the objectives of the thesis, and their accomplishment is based on the results of a survey among FMCG suppliers and retailers in the Czech Republic. The thesis enriches the academic literature, which currently pays little attention to master data quality in FMCG and global data synchronization. Retailers and suppliers of FMCG can use the results as inspiration for improving the quality of their master data; several methods of achieving better data quality are introduced. The thesis was assigned by the non-profit organization GS1 Czech Republic, which can use the results as supporting material for the development of its next global data synchronization strategy.

Masterdatahantering i större företag : En kvalitativ studie om utvecklingsmöjligheter i masterdatahantering / Master data management in larger enterprises : A qualitative study on opportunities in the development of master data management

Gustavsson, Tea, Nordlander, Emil January 2023 (has links)
Master data is important for companies to keep under control, which master data management facilitates. Master data is used throughout the company, which makes it complex to manage and requires structure and a shared view. The technical possibilities available today can help maintain good master data quality, but for that to happen these technologies must also be integrated into the systems. The purpose of the study is therefore to contribute to the development of master data management. This is done by applying an existing framework to a case company in order to examine whether technical development opportunities can be identified. In applying an existing framework, the study also examines which factors affect how a larger company's master data management is described. Various frameworks are available for enabling master data management. The Seven Building Blocks of MDM (Radcliffe, 2007) is one of several, and in this study it is applied to the case company to compile empirical material based on the framework. The study finds that The Seven Building Blocks of MDM is comprehensive and covers the parts that other frameworks in the literature address. Applying the framework revealed a need for additional technological infrastructure at the case company. With the help of literature beyond the existing framework, it was found that a master data platform could contribute to the development of master data management. The study showed that determining which concrete technical opportunities exist from the framework alone was difficult. The study's conclusion is that it is hard to identify concrete technical development opportunities for a larger company with only The Seven Building Blocks of MDM as a basis.

Krav på hållbar produktinformation – idag och i framtiden : Ökad vetskap och förbättrad hantering av produktinformation inom hälsokostföretag / Requirements for sustainable product information – today and in the future : Increased awareness and improved management of product information in health food companies

Selerud, Moa, Jernek, Julia January 2023 (has links)
Purpose: The purpose is to identify which requirements customers, consumers and authorities place on Company X's product information with respect to environmental and social sustainability, today and in the future, and how the management of product-related master data and information at Company X can be improved. Method: Common to all research questions are the qualitative research method, the inductive approach, semi-structured interviews with 11 respondents, and ethical considerations. A non-probability sample was applied, in the form of snowball sampling through a contact at the case company, Company X, in order to interview suitable respondents. For research questions 1 and 2, a representative and information-rich/revelatory case study based on the case company Company X was adopted, and pattern matching was applied in the analysis. For research question 3, multiple case studies were used through the creation of four different scenarios (cases), where the analysis consisted of a comparative analysis and pattern matching. Conclusion: The study has identified several desirable customer requirements on product information within environmental and social sustainability, while the authorities impose indirect requirements that create demand for product information and thus support stakeholders in making well-founded decisions. Today's requirements are mainly characterised by transparency and traceability within both environmental and social sustainability. By 2045, the requirements on product information will consist of evidence and certification. Customers, consumers and authorities place increased demands on biodiversity, loyalty to the population, transparency, and social responsibility through price guarantees and prices that suit every wallet. Furthermore, the study demonstrates several improvement measures Company X can take to improve its management of product information, where clearer areas of responsibility, continuous data maintenance and control, and clearer routines and policies are some of the possible measures.
Contribution of the study: Deeper insight into the requirements placed on product information within environmental and social sustainability enables increased understanding and simplified prioritisation for Company X and similar health food companies. Likewise, the identified improvement measures for the management of product-related master data and information can contribute to improved management and a more supportive organisation, which can be generalised to similar health food companies. The study contributes to a deeper understanding of the requirements placed on product information from a sustainability perspective, an understanding that can enable more cooperation between health food companies and their stakeholders and creates a picture of society's role in relation to the purpose of the study. The future requirements on product information within environmental and social sustainability can contribute to organisational learning and enable proactive action by health food companies. Keywords: environmental sustainability, social sustainability, product information, legal requirements, desirable requirements, health food companies, master data, master data management, information management

Discovering data quality rules in a master data management context / Fouille de règles de qualité de données dans un contexte de gestion de données de référence

Diallo, Thierno Mahamoudou 17 July 2013 (has links)
Dirty data continues to be an important issue for companies. The Data Warehousing Institute [Eckerson, 2002], [Rockwell, 2012] stated that poor data costs US businesses $611 billion annually and that erroneously priced data in retail databases costs US customers $2.5 billion each year. Data quality is becoming more and more critical, and the database community pays particular attention to the subject: a variety of integrity constraints, such as Conditional Functional Dependencies (CFDs), have been studied for data cleaning. Repair techniques based on these constraints are precise in catching inconsistencies but limited in how to correct the data exactly. Master data brings a new alternative for data cleaning owing to its high quality. Thanks to the growing importance of Master Data Management (MDM), a new class of data quality rule known as Editing Rules (ERs) tells how to fix errors, pointing out which attributes are wrong and what values they should take. The intuition is to correct dirty data using high-quality data from the master. However, finding data quality rules is an expensive process that involves intensive manual effort, and it remains unrealistic to rely on human designers. In this thesis, we develop pattern mining techniques for discovering ERs from existing source relations with respect to master relations. In this setting, we propose a new semantics of ERs that takes advantage of both source and master data. Thanks to the proposed semantics, defined in terms of satisfaction, the discovery problem for ERs turns out to be strongly related to the discovery of both CFDs and one-to-one correspondences between source and target attributes. We first attack the problem of discovering CFDs, concentrating on the particular class of constant CFDs, which are very expressive for detecting inconsistencies, and we extend some well-known concepts introduced for traditional functional dependencies to solve the discovery problem. Secondly, we propose a method based on inclusion dependencies to extract one-to-one correspondences from source to master attributes before automatically building ERs. Finally, we propose some heuristics for applying ERs to clean data. We have implemented and evaluated our techniques on both real-life and synthetic databases; experiments show the feasibility, scalability and robustness of our proposal.
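An editing rule of the kind mined in the thesis identifies which attributes of a dirty tuple are wrong and takes the correct values from a matching master tuple. The sketch below hard-codes one such rule; the attributes, the matching key, and the records are invented for illustration, whereas the thesis discovers such rules automatically.

```python
# A toy editing rule: if a source record matches a master record on "phone",
# then "city" and "zip" in the source should be corrected from the master.
# All attribute names and values here are invented for illustration.
master = {
    "0471-123456": {"city": "Lyon", "zip": "69001"},
}

def apply_editing_rule(record: dict) -> dict:
    """Correct city/zip from master data when the phone number matches."""
    ref = master.get(record.get("phone"))
    if ref is None:
        return record  # no master match: leave the record untouched
    fixed = dict(record)
    fixed.update(ref)  # overwrite the wrong attributes with master values
    return fixed

dirty = {"name": "Bob", "phone": "0471-123456", "city": "Lyn", "zip": "69001"}
clean = apply_editing_rule(dirty)
print(clean["city"])
```

Unlike a CFD-based repair, which only flags the violation, the rule states outright which values the corrected tuple should take.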

Processägares syn på relationen mellan masterdata och processer : Affärsprocesser och masterdata: Hur kunskap påverkar processägarens syn på de egna processerna / The process owner's view on the relationship between master data and processes : Business processes and master data: how knowledge affects the process owner's view of the processes

Hunt, Marcus, Strömberg, Robert January 2015 (has links)
Information flows within business processes in different formats: verbal, written, previously documented, or in the form of databases. Regardless of format, the information must be stored, but it must also be readily available to everyone who can make use of it. In a process-oriented organisation, the place where information is stored must also allow for its flow. This thesis is based on open interviews conducted at Tekniska verken in Linköping and at Alstom Power in Växjö, Sweden, and investigates how the process owners' knowledge affects the information flow and the sharing of master data in an organisation. The interviews were not aimed solely at gathering empirical data but also at giving an insight into the everyday routine of the companies, which grounds the study in both practice and theory. The study shows that the way a process owner views the internal flow of information depends entirely on how well the process owner understands the term "master data": some had a thorough understanding of master data, while others had never heard the word. This leads to information asymmetry, an advantage for the person with a greater or better understanding of the concept of master data.

Kvalita dat a efektivní využití rejstříků státní správy / Data Quality and Effective Use of Registers of State Administration

Rut, Lukáš January 2009 (has links)
This diploma thesis deals with registers of state administration in terms of data quality. The main objective is to analyze ways of evaluating data quality and to apply an appropriate method to the data in the business register. Another objective is to analyze the possibilities of data cleansing and data quality improvement and to propose a solution for the inaccuracies found in the business register. The last goal is to analyze approaches to assigning identifiers to persons and to choose a suitable key for identifying persons in registers of state administration. The thesis is divided into several parts. The first provides an introduction to the sphere of registers of state administration and closely analyzes several selected registers, especially in terms of which data they contain and how they are updated. A major contribution of this part is its description of the legislative changes coming into force in mid-2010, with special attention to their impact from a data quality point of view. The next part deals with the problem of identifiers of legal and natural persons and proposes possible solutions for identifying entities in data from the registers. The third part analyzes ways of determining data quality: the method called data profiling is described in detail and applied in an extensive data quality analysis of the business register, whose outputs are corrected metadata and information about incorrect data. The last chapter deals with ways of solving data quality problems; three variants of a solution are proposed and compared. As a whole, the paper represents a compact guide to solving problems with the effective use of data contained in registers of state administration, and the proposed solutions and described approaches can also be used in many other projects dealing with data quality.
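Data profiling of the kind applied to the business register typically reports per-column null counts, distinct values, and value-pattern frequencies. The column names and records below are invented; they only illustrate the mechanics of the method, not the actual register data.

```python
from collections import Counter
import re

# Invented register-like records for illustration ("ico" mimics a Czech
# company identification number, which should be eight digits).
rows = [
    {"ico": "12345678", "name": "Alfa s.r.o.", "city": "Praha"},
    {"ico": "1234567",  "name": "Beta a.s.",   "city": None},
    {"ico": "87654321", "name": None,          "city": "Brno"},
]

def profile(rows, column):
    """Null count, distinct count, and value-pattern frequencies for a column."""
    values = [r[column] for r in rows]
    non_null = [v for v in values if v is not None]
    # Generalise each value into a crude pattern: digits -> 9, letters -> A.
    patterns = Counter(
        re.sub(r"[A-Za-z]", "A", re.sub(r"\d", "9", v)) for v in non_null
    )
    return {
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
        "patterns": patterns,
    }

p = profile(rows, "ico")
print(p["nulls"], p["distinct"], p["patterns"].most_common(1))
```

The pattern histogram is what flags the seven-digit outlier: profiling reveals candidates for correction without saying what the correct value is.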

Master Data Management, Integrace zákaznických dat a hodnota pro business / Master Data Management, Customer Data Integration and value for business

Rais, Filip January 2009 (has links)
This thesis focuses on Master Data Management (MDM), Customer Data Integration (CDI) and their main domains. It also serves as a reference to the various theoretical directions found in this area of expertise, summarizing the main aspects and domains and presenting different perspectives on the referenced principles. It is an exhaustive background study of Master Data Management with an emphasis on practical use, drawing on the author's experience and opinions. A secondary focus is the business value of Master Data Management initiatives. The thesis presents a thought framework for initiating an MDM project. The reason for such a framework is the current trend in which companies struggle to determine the actual benefits of MDM initiatives: there is overall agreement on the necessity of such initiatives, but their actual measurable impact on a company's revenue or profit is hard to determine. Since an MDM initiative is an enabling function rather than a direct revenue function, its benefit is less straightforward and therefore harder to quantify. This work describes different layers, and the mapping of business requirements through those layers, to establish a transparent linkage between enabling functions and revenue-generating ones. Emphasis is given to financial benefit calculation, measurability, and the responsibilities of business and IT departments. To underline certain conclusions, the thesis also presents real-world interviews with possible stakeholders of an MDM initiative within a company. These representatives were selected as the key drivers of such an initiative; the interviews map their recognition of MDM and related terms, as well as their reasons for, and expectations of, MDM. The representatives were selected to represent business and IT departments equally, which reveals an interesting clash of views and expectations.
