About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

A framework for efficiently mining the organisational perspective of business processes

Schönig, Stefan; Cabanillas Macias, Cristina; Jablonski, Stefan; Mendling, Jan. 23 June 2016
Process mining aims at discovering processes by extracting knowledge from event logs. Such knowledge may refer to different business process perspectives. The organisational perspective deals, among other things, with the assignment of human resources to process activities. Information about the resources that are involved in process activities can be mined from event logs in order to discover resource assignment conditions, which is valuable for process analysis and redesign. Prior process mining approaches in this context present one of the following issues: (i) they are limited to discovering a restricted set of resource assignment conditions; (ii) they do not aim at providing efficient solutions; or (iii) the discovered process models are difficult to read due to the number of assignment conditions included. In this paper we address these problems and develop an efficient and effective process mining framework that provides extensive support for the discovery of patterns related to resource assignment. The framework is validated in terms of performance and applicability.
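To make the idea concrete, here is a minimal sketch of discovering resource assignment conditions from an event log. It illustrates the general technique, not the authors' framework; the column names and the single `role` attribute are assumptions invented for the example.

```python
# A minimal illustration of organisational mining: given an event log with
# resource attributes, derive simple resource assignment conditions per
# activity. The schema below is hypothetical.
import pandas as pd

log = pd.DataFrame({
    "case_id":  [1, 1, 2, 2, 3, 3],
    "activity": ["review", "approve", "review", "approve", "review", "approve"],
    "resource": ["alice", "bob", "carol", "bob", "alice", "dave"],
    "role":     ["clerk", "manager", "clerk", "manager", "clerk", "manager"],
})

def assignment_conditions(log: pd.DataFrame, attribute: str) -> dict:
    """For each activity, return the attribute values observed for its
    executing resources; a single value suggests a candidate rule such as
    'approve is always performed by a manager'."""
    conditions = {}
    for activity, events in log.groupby("activity"):
        conditions[activity] = set(events[attribute])
    return conditions

print(assignment_conditions(log, "role"))
# {'approve': {'manager'}, 'review': {'clerk'}}
```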
2

Exploring Event Log Analysis with Minimum Apriori Information

Makanju, Adetokunbo. 02 April 2012
The continued increase in the size and complexity of modern computer systems has led to a commensurate increase in the size of their logs. System logs are an invaluable resource to systems administrators during fault resolution. Fault resolution is a time-consuming and knowledge-intensive process, much of it spent sifting through large volumes of information, including event logs, to find the root cause of the problem. The ability to analyze log files automatically and accurately would therefore lead to significant savings in the time and cost of downtime events for any organization. The automatic analysis and search of system logs for fault symptoms, otherwise called alerts, is the primary motivation for the work carried out in this thesis. The proposed log alert detection scheme is a hybrid framework that incorporates anomaly detection and signature generation to accomplish its goal. Unlike previous work, minimal a priori knowledge of the system being analyzed is assumed, which enhances the platform portability of the framework. The anomaly detection component works in a bottom-up manner on the contents of historical system log data to detect regions of the log that contain anomalous (alert) behaviour. The identified anomalous regions are then passed to the signature generation component, which mines them for patterns. Future occurrences of the underlying alert can consequently be detected on a production system using the discovered pattern. The combination of anomaly detection and signature generation, which is novel compared to previous work, yields a framework that is accurate while still being able to detect new and unknown alerts. The framework was evaluated on log data from High Performance Cluster (HPC), distributed, and cloud systems, which together cover a good range of the types of computer systems in real-world use today. The results indicate that the framework can generate alert-detection signatures that achieve, on average, a recall of approximately 83% and a false-positive rate of approximately 0%.
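A toy sketch of the two-stage idea follows: flag an anomalous log region by the rarity of its message types, then mine a token-level signature from the flagged lines. The sample log lines, the rarity threshold, and the wildcarding scheme are illustrative assumptions, not the thesis's algorithm.

```python
# Stage 1 (anomaly detection) flags lines whose message type is rare relative
# to the dominant traffic; stage 2 (signature generation) keeps tokens shared
# by every flagged line and wildcards the rest, yielding a reusable pattern.
from collections import Counter

log_lines = [
    "node-17 kernel: eth0 link up",
    "node-17 kernel: eth0 link up",
    "node-03 mmcs: machine check interrupt",
    "node-17 kernel: eth0 link up",
    "node-09 mmcs: machine check interrupt",
]

def message_type(line: str) -> str:
    # Crude normalisation: drop the leading host token, keep the message body.
    return line.split(" ", 1)[1]

counts = Counter(message_type(line) for line in log_lines)
anomalous = [line for line in log_lines
             if counts[message_type(line)] < max(counts.values())]

def signature(lines):
    token_lists = [line.split() for line in lines]
    width = min(len(tokens) for tokens in token_lists)
    sig = []
    for i in range(width):
        column = {tokens[i] for tokens in token_lists}
        sig.append(column.pop() if len(column) == 1 else "*")
    return sig

print(signature(anomalous))   # ['*', 'mmcs:', 'machine', 'check', 'interrupt']
```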
3

Mining team compositions for collaborative work in business processes

Schönig, Stefan; Cabanillas Macias, Cristina; Di Ciccio, Claudio; Jablonski, Stefan; Mendling, Jan. 22 October 2016
Process mining aims at discovering processes by extracting knowledge about their different perspectives from event logs. The resource perspective (or organisational perspective) deals, among other things, with the assignment of resources to process activities. Mining in relation to this perspective aims to extract rules on resource assignments for the process activities. Prior research in this area is limited by the assumption that only one resource is responsible for each process activity, and hence collaborative activities are disregarded. In this paper, we relax this assumption by developing a process mining approach that is able to discover team compositions for collaborative process activities from event logs. We evaluate our novel mining approach in terms of computational performance and practical applicability.
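As a hedged illustration of the core idea, the sketch below collects the set of resources observed per activity instance and counts recurring team compositions. The event log schema is invented for the example; the mining approach in the paper itself is richer.

```python
# Instead of assuming one resource per activity, collect the *set* of
# resources observed per activity instance and count recurring teams.
from collections import Counter
import pandas as pd

log = pd.DataFrame({
    "case_id":  [1, 1, 1, 2, 2, 2],
    "activity": ["design", "design", "review", "design", "design", "review"],
    "resource": ["alice", "bob", "carol", "alice", "bob", "dave"],
})

teams = (
    log.groupby(["case_id", "activity"])["resource"]
       .apply(frozenset)                 # one team per activity instance
       .reset_index(name="team")
)
composition_counts = Counter(zip(teams["activity"], teams["team"]))
for (activity, team), n in composition_counts.items():
    print(activity, sorted(team), n)
# design ['alice', 'bob'] 2   <- recurring team composition
# review ['carol'] 1
# review ['dave'] 1
```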
4

Security vs. Usability: designing a secure and usable access control event log

Zeba, Vedrana; Levin, Lykke. January 2019
Security and usability are often thought of as contradictory. In this thesis, we explore the possibility of incorporating both security and usability in an access control GUI. The research concentrates on the part of the access control system referred to as the event log, whose purpose is to store and present information about events that occur at monitored entry points. The intention of the research is to investigate to what extent it is possible to implement user requirements while still complying with security and usability heuristics. A traditional interaction design process is conducted. Semi-structured interviews are held with respondents from two different target groups, to see whether their needs differ. One of the groups consists of users who primarily do security-related work; the other consists of users who have security as a secondary job assignment. The answers undergo a thematic analysis, which yields four themes comprising a total of 26 user requirements. The user requirements and the heuristics are taken into consideration when creating a prototype, which is then subjected to a heuristic evaluation by experts. The results indicate that gathering user requirements does aid compliance with the heuristics. Moreover, the needs of the two groups differ on several counts: the requirements that originate from the first group can be thought of as more dynamic and instantaneous, while the second group's requirements are more static and occasional.
5

Dolování procesů jako služba / Process Mining as a Service

Dobias, Ondrej. January 2017
Software and hardware applications record large amounts of information in event logs. Every two years, the amount of recorded data more than doubles. Process mining is a relatively young discipline that sits at the intersection of machine learning and data mining on the one hand, and process modelling and analysis on the other. The goal of process mining is to describe and analyse real processes by extracting knowledge from the event logs that are commonly available in today's applications. This thesis aims to connect business opportunities (data-rich organisations; demand for BPM services; limitations in the traditional delivery of BPM services) with the technical capabilities of process mining. The goal of the thesis is to design a product that addresses the needs of customers and service providers in the area of process mining better than the current solution of a selected company.
6

Neural Network on Virtualization System, as a Way to Manage Failure Events Occurrence on Cloud Computing

Pham, Khoi Minh. 01 June 2018
Cloud computing is one of the most important directions in current technology trends and dominates the industry in many respects. It has become an intense battlefield for many large technology companies; whoever wins this war stands a strong chance of shaping the next generation of technologies. From a technical point of view, cloud computing is classified into three categories, each providing different crucial services to users: Infrastructure (Hardware) as a Service (IaaS), Software as a Service (SaaS), and Platform as a Service (PaaS). The standard measurements of cloud computing reliability are based on two approaches: Service Level Agreements (SLAs) and Quality of Service (QoS). This thesis focuses on the error event logs of IaaS cloud systems as an aspect of QoS in IaaS cloud reliability. Broadly, IaaS derives from the traditional virtualization system, in which multiple virtual machines (VMs) with different operating system (OS) platforms run on one physical machine (PM) with sufficient computational power. The PM plays the role of the host machine in cloud computing, and the VMs play the role of the guest machines. Lacking full access to a complete real cloud system, this thesis investigates the technical reliability of IaaS clouds through a simulated virtualization system. By collecting and analyzing the event logs generated by the virtualization system, we obtain a general overview of the system's technical reliability based on the number of error events that occur. These events are then fed to a neural network time-series model to detect patterns in system failure events and to predict the next error event that will occur in the virtualization system.
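As a loose analogy to the approach described, the sketch below turns a sequence of daily error-event counts into sliding windows and fits a small neural network to predict the next count. The data, window size, and the scikit-learn model are stand-in assumptions; the thesis's actual time-series model and features are not reproduced here.

```python
# Sliding-window framing of error-event counts for next-step prediction.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical daily error-event counts extracted from virtualization logs.
counts = np.array([2, 3, 2, 8, 1, 2, 9, 2, 3, 10, 2, 2, 8, 3, 2, 9, 1], float)

window = 3
X = np.array([counts[i:i + window] for i in range(len(counts) - window)])
y = counts[window:]                      # next-day count for each window

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

recent = counts[-window:].reshape(1, -1)
print("predicted next error count:", model.predict(recent)[0])
```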
7

Continuous Event Log Extraction for Process Mining

Selig, Henny. January 2017
Process mining is the application of data science technologies to transactional business data to identify or monitor processes within an organization. The analyzed data often originates from process-unaware enterprise software, e.g. Enterprise Resource Planning (ERP) systems. The differences in data management between ERP and process mining systems result in a large fraction of ambiguous cases, affected by convergence and divergence. The consequence is a chasm between the process as interpreted by process mining and the process as executed in the ERP system. In this thesis, a purchasing process of an SAP ERP system is used to demonstrate how ERP data can be extracted and transformed into a process mining event log that expresses ambiguous cases as accurately as possible. As the content and structure of the event log already define the scope (i.e. which process) and granularity (i.e. activity types), the process mining results depend on the event log quality. The results of this thesis show how the consideration of case attributes, the notion of a case, and the granularity of events can be used to manage event log quality. The proposed solution supports continuous event extraction from the ERP system.
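The following simplified sketch shows the flavour of such an extraction: two ERP-style document tables are flattened into one event log with the purchase order as the chosen case notion, and the convergence problem surfaces as an invoice event duplicated across cases. All table and column names are invented for illustration, not SAP's schema.

```python
# Flatten two document tables into an event log; note that invoice 900
# covers two orders, so its event appears once per case (convergence).
import pandas as pd

orders = pd.DataFrame({
    "po_id": [4711, 4712],
    "created_at": ["2017-01-02", "2017-01-03"],
})
invoice_lines = pd.DataFrame({
    "invoice_id": [900, 900],
    "po_id": [4711, 4712],                  # one invoice, two orders
    "received_at": ["2017-01-10", "2017-01-10"],
})

po_events = orders.rename(columns={"created_at": "timestamp"})
po_events["activity"] = "Create Purchase Order"

inv_events = invoice_lines.rename(columns={"received_at": "timestamp"})
inv_events["activity"] = "Receive Invoice"

event_log = (
    pd.concat([po_events, inv_events])[["po_id", "activity", "timestamp"]]
      .sort_values(["po_id", "timestamp"])
)
print(event_log)   # invoice 900 appears under both cases 4711 and 4712
```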
8

Predictive Maintenance of Construction Equipment Using Log Data: A Data-centric Approach

Kotriwala, Bazil Muzaffar. January 2021
Construction equipment manufacturers want to reduce the downtime of their equipment by moving from typical reactive maintenance to a predictive maintenance approach. They would like a method that predicts the failure of construction equipment ahead of time by leveraging the real-world data logged by their vehicles. This data is logged as general event data and as specific sensor data belonging to different components of the vehicle. The scope of this study is articulated haulers, with the engine as the specific component under observation. Extensive time and resources are spent on preparing both real-world data sources and on devising methods so that both sources are ready for predictive maintenance and can be merged. The prepared data is used to build remaining-useful-life machine learning models that classify whether a failure will occur in the next x days. These models are built using data prepared with two different approaches: lead data shift and resampling. Three experiments are carried out for each approach using three combinations of data: event log only, engine sensor log only, and event and sensor logs combined. All experiments use an increasing look-ahead window, i.e. how far into the future the failure should be predicted. The results are evaluated to determine the best approach, data combination, and window size for foreseeing engine failures. Model performance is primarily assessed by the F-score and the area under the precision-recall curve.
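As a hedged sketch of the labelling step implied by this setup, the code below marks each day of per-machine log features as positive if a failure occurs within the next x days (a concrete two-day window is used purely for illustration). Column names and data are assumptions, not the thesis's schema.

```python
# Label each day positive if a known failure falls within the look-ahead
# window; a classifier would then be trained on these labels.
import pandas as pd

daily = pd.DataFrame({
    "machine": ["A"] * 6,
    "date": pd.to_datetime(
        ["2021-01-01", "2021-01-02", "2021-01-03",
         "2021-01-04", "2021-01-05", "2021-01-06"]),
    "error_events": [1, 0, 4, 7, 9, 2],   # e.g. daily counts from the log
})
failures = pd.DataFrame({
    "machine": ["A"],
    "failure_date": pd.to_datetime(["2021-01-05"]),
})

window = pd.Timedelta(days=2)             # "failure within the next 2 days"

def label(row) -> int:
    f = failures[failures["machine"] == row["machine"]]["failure_date"]
    return int(((f >= row["date"]) & (f <= row["date"] + window)).any())

daily["failure_soon"] = daily.apply(label, axis=1)
print(daily[["date", "error_events", "failure_soon"]])
# 2021-01-03 through 2021-01-05 are labelled 1; the rest are 0
```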
