11 | Mining Projects from Structured and Unstructured Data. Bala, Saimir. January 2017.
Companies working on safety-critical projects must adhere to strict rules imposed by the domain, especially when human safety is involved. These projects need to be compliant with standard norms and regulations. Thus, all process steps must be clearly documented so that an auditor can verify compliance at a later stage. Nevertheless, documentation often comes in the form of manually written textual documents in different formats. Moreover, project members use diverse proprietary tools. This makes it difficult for auditors to understand how the actual project was conducted. My research addresses the project mining problem by exploiting logs from project-generated artifacts, which come from the software repositories used by the project team.
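To make the idea concrete, the following minimal sketch (not taken from the thesis) shows how commit records exported from a version-control system might be turned into an event log for project mining; the field names, the activity-classification heuristic and the choice of the branch as case identifier are illustrative assumptions.

```python
from datetime import datetime

# Hypothetical commit records exported from a version-control system.
# Field names and values are illustrative assumptions, not the thesis' data model.
commits = [
    {"branch": "feature/safety-check", "author": "alice", "message": "add requirement doc",
     "timestamp": "2017-03-01T09:15:00"},
    {"branch": "feature/safety-check", "author": "bob", "message": "implement check",
     "timestamp": "2017-03-02T14:30:00"},
    {"branch": "feature/safety-check", "author": "alice", "message": "review and merge",
     "timestamp": "2017-03-03T11:00:00"},
]

def classify(message):
    """Map a commit message to a coarse project activity (simplified heuristic)."""
    if "doc" in message:
        return "Documentation"
    if "review" in message or "merge" in message:
        return "Review"
    return "Implementation"

# Build an event log: one case per branch, one event per commit.
event_log = {}
for c in commits:
    event = {
        "activity": classify(c["message"]),
        "resource": c["author"],
        "timestamp": datetime.fromisoformat(c["timestamp"]),
    }
    event_log.setdefault(c["branch"], []).append(event)

for case, events in event_log.items():
    trace = " -> ".join(e["activity"] for e in sorted(events, key=lambda e: e["timestamp"]))
    print(f"{case}: {trace}")
```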
12 | Continual process improvement based on reference models and process mining. Gerke, Kerstin. 29 July 2011.
Business processes are a major asset of any organization. What is decisive for business success is not a process that was optimally designed once, but the ability to react quickly to new developments and to adapt the affected processes flexibly. However, many organizations lack an up-to-date description of their processes, even though such a description is an essential prerequisite for process improvement. Very often the process model created during an information system's implementation is either not used in the first place or not maintained, so that it soon diverges from operational reality. Process mining techniques prevent this lack of correspondence: they automatically extract the process knowledge implicitly contained in information systems and represent it in the form of process models. Reference models such as ITIL and CobiT are a further important element for the efficient design and control of processes. Process improvement typically runs through a number of analysis, design, implementation, execution, monitoring, and evaluation activities. This dissertation proposes a methodology that supports and facilitates the identification and solution of the tasks arising in these activities. An empirical study reveals both the challenges and the potential benefits of successfully applying process mining techniques. Based on its results, specific aspects of data preparation for process mining algorithms are examined in detail, with a focus on the provision of enterprise data and RFID events. The dissertation also highlights the importance of analyzing the execution of reference processes in order to ensure compliance with new or modified business processes. The methodology was tested in a series of practical cases; the results underline its general, cross-organizational applicability for efficient continual process improvement.
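As a simplified illustration of checking whether executed behaviour stays within a reference process (not the dissertation's actual method), the sketch below compares the directly-follows relations observed in an event log against those permitted by a toy reference model; the incident-management activities are invented for the example.

```python
# A simplified sketch of reference-model conformance checking based on
# directly-follows relations. The reference relations mimic a toy
# incident-management flow and are assumptions, not ITIL itself.
REFERENCE_DF = {
    ("Record incident", "Classify incident"),
    ("Classify incident", "Resolve incident"),
    ("Resolve incident", "Close incident"),
}

observed_traces = [
    ["Record incident", "Classify incident", "Resolve incident", "Close incident"],
    ["Record incident", "Resolve incident", "Close incident"],  # skips classification
]

def directly_follows(trace):
    """All ordered pairs of consecutive activities in a trace."""
    return set(zip(trace, trace[1:]))

violations = []
for i, trace in enumerate(observed_traces):
    extra = directly_follows(trace) - REFERENCE_DF
    if extra:
        violations.append((i, extra))

conformance = 1 - len(violations) / len(observed_traces)
print(f"Trace-level conformance: {conformance:.0%}")
for i, extra in violations:
    print(f"Trace {i} deviates with unexpected steps: {sorted(extra)}")
```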
13 | A method for measuring Internal Fraud Risk (IFR) of business organisations with ERP systems. Dayan, Imran. January 2017.
ERP systems have shaped the way modern organisations design, control, and execute business processes. They have not only paved the way for efficient use of organisational resources but also offer the opportunity to utilise the data logged in the system for ensuring internal control. The key contribution of this research is a method that internal auditors can practically employ to measure the internal fraud risk of business organisations with ERP systems, by utilising process mining and evidential reasoning in the form of Bayes' theorem, in a much more effective way than the conventional frequentist method. The other significant contribution is that it paves the way for combining process mining and evidential reasoning in addressing problems prevalent in organisational contexts. The research contributes to developing IS theories for design and action, especially in the area of soft systems methodology, as it relies on business process modelling to address the issue of internal fraud risk; the chosen method also facilitates the incorporation of the design science method in problem solving. Researchers have focused on applying data mining techniques within organisational contexts for extracting valuable information. Process mining is a comparatively new technique that allows business processes to be analysed on the basis of event logs. Analysing business processes can be useful for organisations not only for attaining greater efficiency but also for ensuring internal control. Large organisations have various measures in place for ensuring internal control, and measuring the risk of fraud within a business process is an important practice for preventing fraud, since an accurate measurement of fraud risk allows business experts to comprehend the extent of the problem. Business experts such as internal auditors, however, still rely heavily on conventional methods for measuring internal fraud risk by way of random checks of process compliance. Organisations with ERP systems in place could use event logs to extend the scope of assessing process conformance, but this has not been put into practice because well-researched methods that allow event logs to be utilised for enhancing internal control are lacking. This research can prove useful for practitioners as it develops a method for measuring internal fraud risk within organisations. The research aimed to utilise process mining to give business experts greater control over business process execution by allowing internal fraud risk to be measured effectively. A method has been developed for measuring the internal fraud risk of business organisations with ERP systems using process mining and Bayes' theorem. In this method, the rate of process deviation is calculated by applying process mining to the relevant event logs; this mined deviation rate is then combined, via Bayes' theorem, with the historical internal fraud risk rate and a manually calculated process deviation rate to arrive at a revised internal fraud risk rate. Bayes' theorem was chosen because it allows evidential reasoning to be incorporated. The method was developed as a Design Science Research Method (DSRM) artefact through three case studies.
Data on the procurement process was collected from three case companies, operating in the ready-made garments, pharmaceuticals, and aviation industries, for conducting process mining. The revised internal fraud risk rates were then evaluated against the feedback received from the business experts of each case company. The proposed method is beneficial as it paves the way for practitioners to utilise process mining within a soft systems methodology. The developed method is significant as it contributes to the fields of business intelligence and analytics (BI&A) and big data analytics, which have become increasingly important to both academics and practitioners over the past couple of decades.
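The Bayesian update at the heart of the method can be sketched as follows; the formulation is a plausible reading of the abstract rather than the thesis' exact procedure, and all numbers are invented for illustration.

```python
def revised_fraud_risk(prior_fraud_rate, p_deviation_given_fraud, mined_deviation_rate):
    """
    Sketch of a Bayes' theorem update of internal fraud risk:
    P(fraud | deviation) = P(deviation | fraud) * P(fraud) / P(deviation).
    The deviation rate mined from the event log is used as P(deviation).
    """
    if mined_deviation_rate == 0:
        return 0.0
    return p_deviation_given_fraud * prior_fraud_rate / mined_deviation_rate

# Illustrative numbers only: 2% historical fraud rate, 90% of fraud cases deviate
# from the modelled process, 12% of mined cases deviate overall.
posterior = revised_fraud_risk(0.02, 0.90, 0.12)
print(f"Revised internal fraud risk: {posterior:.1%}")  # 15.0%
```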
14 | Repairing event logs using stochastic process models. Rogge-Solti, Andreas; Mans, Ronny S.; van der Aalst, Wil M. P.; Weske, Mathias. January 2013.
Companies strive to improve their business processes in order to remain competitive. Process mining aims to infer meaningful insights from process-related data and has attracted the attention of practitioners, tool vendors, and researchers in recent years. Traditionally, event logs are assumed to describe the as-is situation. But this is not necessarily the case in environments where logging may be compromised by manual recording. For example, hospital staff may need to manually enter information regarding the patient's treatment. As a result, events or timestamps may be missing or incorrect.
In this paper, we make use of process knowledge captured in process models and provide a method to repair missing events in the logs. In this way, we facilitate the analysis of incomplete logs. We realize the repair by combining stochastic Petri nets, alignments, and Bayesian networks. We evaluate the results using both synthetic data and real event data from a Dutch hospital.
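A heavily simplified sketch of the repair idea is shown below: the paper combines stochastic Petri nets, alignments, and Bayesian networks, whereas this example only inserts an event that the model expects but the trace lacks and interpolates its timestamp; the hospital-style activities and times are made up.

```python
from datetime import datetime

# Heavily simplified illustration of log repair: insert a missing interior event
# and place it halfway between its logged neighbours. Assumes both neighbours of
# a missing activity were logged; all data is invented.
MODEL_SEQUENCE = ["Register", "Triage", "Treat", "Discharge"]

trace = [
    ("Register", datetime(2013, 5, 1, 8, 0)),
    ("Treat", datetime(2013, 5, 1, 10, 0)),      # "Triage" was not logged
    ("Discharge", datetime(2013, 5, 1, 12, 0)),
]

def repair(trace, model_sequence):
    logged = {activity for activity, _ in trace}
    repaired = list(trace)
    for i, activity in enumerate(model_sequence):
        if activity not in logged and 0 < i < len(model_sequence) - 1:
            before = next(ts for a, ts in trace if a == model_sequence[i - 1])
            after = next(ts for a, ts in trace if a == model_sequence[i + 1])
            repaired.append((activity, before + (after - before) / 2))
    return sorted(repaired, key=lambda e: e[1])

for activity, ts in repair(trace, MODEL_SEQUENCE):
    print(f"{ts:%H:%M}  {activity}")
```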
15 | Discovery, Conformance and Enhancement of Educational Processes via Typical Plans. Gottin, Vinicius Michel. 19 June 2020.
In this thesis we propose the application of an automated planning paradigm, based on a conceptual modeling discipline, to Process Mining tasks. We posit that the presented approach enables the process discovery, conformance checking and model enhancement tasks for educational domains, which exhibit characteristics of unstructured processes: intertask dependencies, multiple dependencies, concurrent events, failing activities, repeated activities, partial traces and knock-out structures. We relate the concepts of both areas of research and demonstrate the approach on an academic domain example, implementing the algorithms as part of a Library of Typical Plans for Process Mining that leverages the extensive prior art in the literature.
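A toy sketch of the underlying idea (not the thesis' planning formalism) is to check a possibly partial student trace against a typical plan expressed as intertask dependencies; the course names and prerequisites below are invented.

```python
# Toy sketch, not the thesis' planning formalism: check a (possibly partial) student
# trace against a "typical plan" expressed as prerequisite dependencies between tasks.
# Course names and dependencies are invented for illustration.
PREREQUISITES = {
    "Calculus II": {"Calculus I"},
    "Data Structures": {"Programming I"},
    "Thesis": {"Calculus II", "Data Structures"},
}

def conformance_violations(trace):
    """Return tasks whose prerequisites were not completed earlier in the trace."""
    completed, violations = set(), []
    for task in trace:
        missing = PREREQUISITES.get(task, set()) - completed
        if missing:
            violations.append((task, missing))
        completed.add(task)
    return violations

partial_trace = ["Programming I", "Data Structures", "Calculus II"]  # partial trace
print(conformance_violations(partial_trace))
# [('Calculus II', {'Calculus I'})]
```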
16 | Continuous Event Log Extraction for Process Mining. Selig, Henny. January 2017.
Process mining is the application of data science technologies to transactional business data in order to identify or monitor processes within an organization. The analyzed data often originates from process-unaware enterprise software, e.g. Enterprise Resource Planning (ERP) systems, which are centered around business documents. The differences in data management between ERP and process mining systems result in a large fraction of ambiguous cases, affected by convergence and divergence. The consequence is a chasm between the process as interpreted by process mining and the process as executed in the ERP system. In this thesis, a purchasing process of an SAP ERP system is used to demonstrate how ERP data can be extracted and transformed into a process mining event log that expresses ambiguous cases as accurately as possible. As the content and structure of the event log already define the scope (i.e. which process) and granularity (i.e. the activity types), the process mining results depend on the quality of the event log. The results of this thesis show how the consideration of case attributes, the notion of a case and the granularity of events can be used to manage event log quality. The proposed solution supports continuous event extraction from the ERP system.
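The convergence/divergence issue and the role of the case notion can be illustrated with a small sketch; the table and column names are invented and merely mimic a purchase-to-pay setting, and choosing the purchase-order item as case identifier is one possible design decision, not necessarily the one taken in the thesis.

```python
# Illustrative sketch of flattening ERP document tables into an event log.
# Table and column names are invented. Using the purchase-order item as the
# case notion avoids some convergence/divergence issues that arise when, for
# example, one invoice covers several orders.
purchase_order_items = [
    {"po_item": "PO1-10", "created": "2017-02-01"},
    {"po_item": "PO1-20", "created": "2017-02-01"},
]
goods_receipts = [
    {"po_item": "PO1-10", "received": "2017-02-05"},
]
invoices = [
    {"po_item": "PO1-10", "invoiced": "2017-02-07"},
    {"po_item": "PO1-20", "invoiced": "2017-02-07"},  # invoiced before goods receipt
]

def build_event_log():
    log = {}
    for row in purchase_order_items:
        log.setdefault(row["po_item"], []).append(("Create PO item", row["created"]))
    for row in goods_receipts:
        log.setdefault(row["po_item"], []).append(("Goods receipt", row["received"]))
    for row in invoices:
        log.setdefault(row["po_item"], []).append(("Invoice receipt", row["invoiced"]))
    return {case: sorted(events, key=lambda e: e[1]) for case, events in log.items()}

for case, events in build_event_log().items():
    print(case, [activity for activity, _ in events])
```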
17 | Evaluating and Automating a Scaled Agile Framework Maturity Model. Reitz, Fabienne. January 2021.
While agile development is becoming ever more popular, studies have shown that few organisations successfully transition from traditional to agile practices. One such study showed that large organisations can benefit greatly from agile methods, but that evaluating agile maturity and tailoring the method to the organisation's needs is crucial. An agile maturity model is a tool with which an organisation's practices and their conformance to agile development are evaluated. The purpose of this study is to identify the agile maturity model best suited for large organisations and to minimise the costs, resources and subjectivity of the model's evaluation. The study takes a closer look at four agile maturity models: the Scaled Agile Framework Maturity Model (SAFeMM) by Turetken, Stojanov and Trienekens (2017), the Scaled Agile Maturity Model (SAMM) by Chandrasekaran (2016), the Agile Adoption Framework (AAF) by Sidky, Arthur and Bohner (2007) and the Scaled Agile Framework Business Agility Assessment (SAFeBAA) by the Scaled Agile Incorporation. The best model is chosen by evaluating each one on its scalability, completeness, generality, precision, simplicity, usability and meaningfulness, consistency, minimum overlapping, balance, and proportion of automatable measurements. Based on these criteria, the SAFeMM is deemed the most suitable model: it proves to be a comprehensive, well-rounded tool with consistently high scores on all criteria. In order to improve the model's objectivity and reduce its resource needs, it is also applied in a case study at the IT department of the Swedish Tax Agency, where the possibilities to automate the model are investigated. The results show that the SAFeMM can be automated to roughly 50% with the use of process mining and software system querying. Process mining uses event logs to extract and analyse information, while software querying extracts information directly from the software systems used in an organisation. The study suggests primary sources for querying as well as process mining techniques and perspectives, to enable and encourage future research on process mining within agile development.
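As a hypothetical example of an automatable measurement, the sketch below computes one maturity indicator ("every sprint ends with a review") directly from event data; the events, sprint names and threshold are assumptions and are not taken from the SAFeMM.

```python
from datetime import date

# Hypothetical example of automating one maturity measurement from event data:
# "every sprint ends with a review". The event log, sprint names and the
# acceptance threshold are invented for illustration.
events = [
    {"sprint": "S1", "activity": "Sprint planning", "date": date(2021, 1, 4)},
    {"sprint": "S1", "activity": "Sprint review",   "date": date(2021, 1, 15)},
    {"sprint": "S2", "activity": "Sprint planning", "date": date(2021, 1, 18)},
    # S2 has no review event logged
]

sprints = {e["sprint"] for e in events}
reviewed = {e["sprint"] for e in events if e["activity"] == "Sprint review"}
score = len(reviewed) / len(sprints)

print(f"Sprints with a review: {score:.0%}")
print("Measurement satisfied" if score >= 0.8 else "Measurement not satisfied")
```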
18 | Anonymization Techniques for Privacy-preserving Process Mining. Fahrenkrog-Petersen, Stephan A. 30 August 2023.
Process mining analyzes business processes using event logs. Each activity execution is recorded as an event in a trace, which represents the behavior of one process instance. Traces often hold sensitive information, such as patient data. This thesis addresses the privacy concerns arising from trace data and process mining. An empirical study of the re-identification risk in public event logs reveals a high risk, but other threats exist as well. Anonymization is vital to address these issues, yet challenging because the behavioral aspects of the log must be preserved for analysis, leading to a privacy-utility trade-off. New algorithms, SaCoFa and SaPa, are introduced for trace anonymization; they use noise to guarantee differential privacy while maintaining utility. PRIPEL supplements anonymized control flows with contextual information from the traces, enabling the publication of complete, protected logs. For k-anonymity, the PRETSA algorithm family merges privacy-violating traces based on a prefix representation of the event log while maintaining syntactic similarity. Empirical evaluations demonstrate utility improvements over existing techniques.
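A generic differential-privacy sketch (not the SaCoFa or SaPa algorithms from the thesis) illustrates the basic mechanism: trace-variant counts are released with Laplace noise so that the presence or absence of any single trace is masked; the variants and the privacy budget are examples.

```python
import math
import random

# Generic differential-privacy sketch, not the SaCoFa/SaPa algorithms:
# release trace-variant counts with Laplace noise. Epsilon and the example
# variants are illustrative only.
variant_counts = {
    ("Register", "Triage", "Treat", "Discharge"): 120,
    ("Register", "Treat", "Discharge"): 35,
}

EPSILON = 1.0      # privacy budget
SENSITIVITY = 1    # adding/removing one trace changes one variant count by at most 1

def laplace_noise(scale):
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

noisy_counts = {
    variant: max(0, round(count + laplace_noise(SENSITIVITY / EPSILON)))
    for variant, count in variant_counts.items()
}
print(noisy_counts)
```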
19 | Data-driven business process improvement: An illustrative case study about the impacts and success factors of business process mining. Decker, Sebastian. January 2019.
The current business environment is changing rapidly and fundamentally. The main drivers are digital technologies. Companies face pressure to exploit those technologies to improve their business processes in order to achieve competitive advantage. In the light of the increased complexity of business processes and the existence of corporate Big Data stored in information systems, the discipline of process mining has emerged. The objective of this study is to investigate how process mining can support the optimization of business processes. In this qualitative study, an illustrative case study design is used, involving eight research participants. Data is primarily collected through semi-structured interviews and analyzed using content analysis. In addition, the illustrative case serves to demonstrate the application of process mining. The research revealed that process mining has important impacts on current business process improvement, not all of which were positive. The derived success factors should support vendors as well as current and potential users in applying process mining safely and successfully.
20 | Matching events and activities by integrating behavioral aspects and label analysis. Baier, Thomas; Di Ciccio, Claudio; Mendling, Jan; Weske, Mathias. 05 1900.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during the execution of a process. These event data can be used to analyze the process with process mining techniques: to discover the real process, measure conformance to a given process model, or enhance existing models with performance information. Mapping the produced events to the activities of a given process model is essential for conformance checking, annotation, and the understanding of process mining results. In order to accomplish this mapping with low manual effort, we developed a semi-automatic approach that maps events to activities using insights from behavioral analysis and label analysis. The approach extracts Declare constraints from both the log and the model to build matching constraints that efficiently reduce the number of possible mappings. These mappings are further reduced using techniques from natural language processing, which allow for a matching based on labels and external knowledge sources. The evaluation with synthetic and real-life data demonstrates the effectiveness of the approach and its robustness to non-conforming execution logs.
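The label-analysis half of the approach can be illustrated with a simple similarity ranking (the paper additionally uses Declare constraints and external knowledge sources, which are not sketched here); the event and activity names below are invented.

```python
from difflib import SequenceMatcher

# Sketch of the label-analysis idea only: rank candidate event-to-activity
# mappings by string similarity between labels. Event and activity names
# are invented examples.
events = ["create purch. order", "post goods receipt", "enter invoice"]
activities = ["Create Purchase Order", "Receive Goods", "Record Invoice"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for event in events:
    best = max(activities, key=lambda act: similarity(event, act))
    print(f"{event!r:25} -> {best!r} (score {similarity(event, best):.2f})")
```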