41

An empirical investigation into wicked operational problems

Godsiff, Philip January 2012
This thesis begins by considering the nature of research in Operations Management, the methods it employs and the types of problems it addresses. We contend that as the discipline matures and extends its boundaries, the research challenges become more complex and the reductionist techniques of Operations Research become less appropriate. To explore this issue we use the concept of wicked problems, developed by Rittel and Webber in the 1970s. They suggested the existence of a class of problems which could not be solved using the techniques of Operations Research, and described wicked problems using ten properties or characteristics, which, after a thorough review of their descriptions, we have condensed to six themes. We consider the current state of the “Wicked Problem” literature and identify a paucity of work relating to Operations Management. Thus we develop our research question: “what are the characteristics of wicked operational problems?” We investigate this question using a single extended case study of an operation experiencing significant unresolved performance issues. We analyse the case using the tenets of systems thinking, structure and behaviour, and extend the empirical literature on wicked problems to identify the characteristics of wicked operational problems. The research indicates that elements of wicked problems exist at an operational level. The significance of this finding is that reductionist approaches to problem solving, e.g. lean and Six Sigma, may not be applicable to the challenges facing operational managers when confronted with the characteristics of a wicked operational problem.
42

Identifying delay factors in electrical distribution projects at Eskom Northern Cape Operating Unit

Ntshangase, Bonga January 2017
Delays on electrical engineering projects at Eskom Distribution are a recurring phenomenon with a wide range of causes. These delays cause Eskom to contravene the Electricity Regulation Act 4 of 2006, which requires the efficient, effective and sustainable operation of electricity supply infrastructure, the promotion of renewable energy sources and energy efficiency, and universal access to electricity for South African consumers (Gazette, 2006). Eskom strives to comply with the Act by initiating and implementing strengthening projects, refurbishment (reliability) projects, direct customer projects, infill projects and electrification projects (Eskom, 2014). The severe delays experienced in the delivery of electrical distribution projects have a negative impact on South African economic growth and the population. This study adopted the interactive management methodology to identify project delay factors in Eskom distribution projects, using the idea-writing technique, the nominal group technique, and interpretive structural modelling. Interactive management allows a group of people to collaboratively develop a structure that defines the relationships among the elements of a system. Using this approach, a total of one hundred and twelve project delay factors were reduced to twenty-six significant factors, which formed the input to interpretive structural modelling, as sketched in the example below. The study produced a hierarchical model illustrating the interrelationships between the twenty-six identified delay factors and identified three root causes of delays in electrical distribution projects at Eskom Northern Cape Operating Unit, namely "poor communication", "poor planning", and "project scheduling not properly done". These three root causes can be used as critical points for eradicating delays in electrical distribution projects at Eskom Northern Cape Operating Unit. The study also found that ten of the twenty-six delay factors were unique to electrical distribution projects at Eskom Northern Cape Operating Unit.
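The level-partitioning step at the heart of interpretive structural modelling (ISM) can be illustrated in a few lines of code. The sketch below is a minimal, generic ISM implementation, not the study's: the five delay factors and their influence matrix are invented for illustration, with "poor communication" placed at the driving end to mirror the study's finding.

```python
# Minimal ISM sketch: transitive closure + level partitioning.
# Factors and adjacency matrix are hypothetical, not the study's data.
import numpy as np

def transitive_closure(adj: np.ndarray) -> np.ndarray:
    """Warshall's algorithm: reachability matrix including self-reachability."""
    n = adj.shape[0]
    reach = adj.astype(bool) | np.eye(n, dtype=bool)
    for k in range(n):
        reach |= np.outer(reach[:, k], reach[k, :])
    return reach

def ism_levels(adj: np.ndarray) -> list[list[int]]:
    """Partition factors into ISM levels (level 1 = most dependent factors)."""
    reach = transitive_closure(adj)
    remaining = set(range(adj.shape[0]))
    levels = []
    while remaining:
        # A factor tops the current level if its reachability set (within the
        # remaining factors) is contained in its antecedent set.
        level = [i for i in remaining
                 if {j for j in remaining if reach[i, j]}
                 <= {j for j in remaining if reach[j, i]}]
        levels.append(level)
        remaining -= set(level)
    return levels

# Hypothetical delay factors: 0=poor communication, 1=poor planning,
# 2=scheduling not properly done, 3=late material delivery, 4=rework.
adj = np.array([[0, 1, 1, 0, 0],
                [0, 0, 1, 1, 0],
                [0, 0, 0, 1, 1],
                [0, 0, 0, 0, 1],
                [0, 0, 0, 0, 0]])
print(ism_levels(adj))  # dependent factors first; root causes in the last level
```

Each pass extracts the factors whose reachability set is contained in their antecedent set; these form the top (most dependent) level of the hierarchy, and the factors left in the final level are the driving root causes.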
43

Exploring the Potential of the Metaverse in Operations Management: Towards Sustainable Practices

Ukhanov, Yaroslav, Berggren, Andreas January 2023
Problem statement - A review of prior research on how the Metaverse can be adopted into organizations' operations revealed a knowledge gap: despite the hype around the Metaverse, there is no empirical research on how organizations can adopt it into their operations and how this can influence sustainable development. Purpose - This study aims to explore what values the Metaverse can bring to organizations in the context of sustainable operations management. The goal is to develop a framework that can guide organizations in developing their use of the Metaverse within operations management in a sustainable way. Method - To answer this purpose, qualitative research with inductive reasoning was conducted. Empirical data was collected by interviewing eight people at five different companies and was then coded using grounded-theory coding. Result - Eight categories were identified from the empirical data. The interviewees expressed their views of the Metaverse today and in the future, stating that the Metaverse can be applied to organizational processes, the development of products and/or services, marketing and sales, human encounters, and competence development. Limitations - This study includes eight participants from five different companies in the technology sector, which limits generalizability. Two of the participants, the experts, can be viewed as biased because they have an interest in spreading a positive picture of the Metaverse; furthermore, one of the experts recruited five of the other participants, who can also be seen as biased. Implications - This study has both theoretical and practical implications. Theoretically, by exploring the potential of the Metaverse within operations management and its consequent impact on sustainable development, it fills a knowledge gap in the traditional operations management literature and extends it with new aspects. Practically, its findings can guide companies in reducing their negative environmental impact, contributing to social sustainability, and enhancing economic growth. Conclusion - The Metaverse can be seen as a universal tool that can be applied almost anywhere in an organization's operations. Organizations can use it for experimentation, testing and construction in a virtual environment; scenario simulations can help forecast breakdowns, measure energy consumption, and track emissions; and organizations can establish showrooms, apartment viewings, test groups, meetings, and social hangouts. The Metaverse can change the way people interact and acquire knowledge, and therefore the way we think of and perform operations management.
44

Data-driven Operations Management: Combining Machine Learning and Optimization for Improved Decision-making

Meller, Jan Maximilian January 2020
This dissertation consists of three independent, self-contained research papers that investigate how state-of-the-art machine learning algorithms can be combined with operations management models to exploit high-dimensional data for improved planning decisions. More specifically, the thesis focuses on how the underlying decision support models change structurally and how those changes affect the resulting decision quality. Over the past years, the volume of globally stored data has grown tremendously. The rising market penetration of sensor-equipped production machinery, advanced ways of tracking user behavior, and the ongoing use of social media generate large amounts of data on production processes, user behavior and interactions, and the condition of technical gear, all of which can provide valuable information to companies in planning their operations. Two generic concepts have emerged for putting such data to use. The first, separated estimation and optimization (SEO), uses data to forecast the central inputs (i.e., the demand) of a decision support model; the forecast and a distribution of forecast errors are then used in a subsequent stochastic optimization model to determine optimal decisions. In contrast to this sequential approach, the second concept, joint estimation-optimization (JEO), combines the forecasting and optimization steps into a single optimization problem. Following this approach, powerful machine learning techniques are employed to approximate highly complex functional relationships and hence relate feature data directly to optimal decisions (the sketch following this abstract illustrates both concepts on a toy example). The first article, "Machine learning for inventory management: Analyzing two concepts to get from data to decisions" (chapter 2), examines performance differences between implementations of these concepts in a single-period newsvendor setting. The paper first proposes a novel JEO implementation based on the random forest algorithm that learns optimal decision rules directly from a data set containing historical sales and auxiliary data, and then analyzes the structural properties that drive the performance differences between the two concepts. Our results show that the JEO implementation achieves significant cost improvements over the SEO approach; these differences are strongly driven by the decision problem's cost structure and by the amount and structure of the remaining forecast uncertainty. The second article, "Prescriptive call center staffing" (chapter 3), applies the logic of integrating data analysis and optimization to a more complex problem class, an employee staffing problem in a call center. We introduce a novel application of the JEO concept that augments historical call volume data with features such as the day of the week, the beginning of the month, and national holiday periods. We employ a regression tree to learn the ex-post optimal staffing levels based on similarity structures in the data and then generalize these insights to determine future staffing levels. This approach, which relies on only a few modeling assumptions, significantly outperforms a state-of-the-art benchmark that uses considerably more model structure and assumptions. The third article, "Data-driven sales force scheduling" (chapter 4), is motivated by the problem of how a company should allocate limited sales resources. We propose a novel approach based on the SEO concept that uses a machine learning model to predict the probability of winning a specific project.
We develop a methodology that uses this prediction model to estimate the "uplift", that is, the incremental value of an additional visit to a particular customer location. To account for the remaining uncertainty at the subsequent optimization stage, we adapt the decision support model so that it can control for the level of trust in the predicted uplifts. This novel policy dominates both a benchmark that relies completely on the uplift information and a robust benchmark that optimizes the sum of potential profits while neglecting any uplift information. The results of this thesis show that decision support models in operations management can be transformed fundamentally by considering additional data, benefiting from better decision quality and lower mismatch costs. How machine learning algorithms can best be integrated into these decision support models depends on the complexity and context of the underlying decision problem. In summary, this dissertation provides an analysis based on three different, specific application scenarios that serves as a foundation for further analyses of employing machine learning for decision support in operations management.
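The contrast between the two generic concepts can be made concrete on a toy newsvendor instance. The sketch below is an assumption-laden illustration, not the dissertation's implementation: gradient boosting with a pinball (quantile) loss stands in for the paper's random-forest-based JEO method, and the demand model, costs, and features are all invented.

```python
# Toy comparison of SEO vs. JEO on a newsvendor problem (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 4000
X = rng.uniform(0, 1, (n, 3))                     # auxiliary feature data
noise = rng.normal(0, 5 + 25 * X[:, 1])           # heteroscedastic forecast errors
demand = 100 + 80 * X[:, 0] + noise               # feature-driven demand

cu, co = 4.0, 1.0                                 # underage / overage cost per unit
q = cu / (cu + co)                                # critical ratio = target quantile

X_tr, X_te = X[:3000], X[3000:]
d_tr, d_te = demand[:3000], demand[3000:]

# SEO: forecast the mean demand, then add one unconditional error quantile.
mean_model = GradientBoostingRegressor().fit(X_tr, d_tr)
errors = d_tr - mean_model.predict(X_tr)
q_seo = mean_model.predict(X_te) + np.quantile(errors, q)

# JEO: learn the cost-optimal decision (the q-th conditional quantile) directly.
jeo_model = GradientBoostingRegressor(loss="quantile", alpha=q).fit(X_tr, d_tr)
q_jeo = jeo_model.predict(X_te)

def newsvendor_cost(order, d):
    return np.mean(cu * np.maximum(d - order, 0) + co * np.maximum(order - d, 0))

print("SEO cost:", newsvendor_cost(q_seo, d_te))
print("JEO cost:", newsvendor_cost(q_jeo, d_te))
```

SEO pools all forecast errors into one unconditional quantile, while JEO learns the critical-ratio quantile conditional on the features; with feature-dependent error variance, as in this toy, that is where JEO's cost advantage comes from.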
45

Prescriptive Analytics for Data-driven Capacity Management

Notz, Pascal Markus January 2021
Digitization and artificial intelligence are radically changing virtually all areas of business and society. These developments are mainly driven by the technology of machine learning (ML), enabled by the coming together of large amounts of training data, statistical learning theory, and sufficient computational power. This technology forms the basis for new approaches to solving classical planning problems of Operations Research (OR): prescriptive analytics approaches integrate ML prediction and OR optimization into a single prescription step, learning from historical observations of demand and a set of features (covariates) to provide a model that directly prescribes future decisions. These novel approaches provide enormous potential to improve planning decisions, as first case reports have shown, and consequently constitute a new field of research in Operations Management (OM). First works in this field have studied approaches to solving comparatively simple planning problems in the area of inventory management. However, common OM planning problems often have a more complex structure, and many of these complex planning problems fall within the domain of capacity planning. This dissertation therefore focuses on developing new prescriptive analytics approaches for complex capacity management problems. It consists of three independent articles that develop new prescriptive approaches and use them to solve realistic capacity planning problems. The first article, "Prescriptive Analytics for Flexible Capacity Management", develops two prescriptive analytics approaches, weighted sample average approximation (wSAA) and kernelized empirical risk minimization (kERM), to solve a complex two-stage capacity planning problem that has been studied extensively in the literature: a logistics service provider sorts daily incoming mail items on three service lines that must be staffed on a weekly basis (a toy sketch of the wSAA mechanism follows this abstract). This article is the first to develop a kERM approach for a complex two-stage stochastic capacity planning problem with matrix-valued observations of demand and vector-valued decisions; it derives out-of-sample performance guarantees for kERM under various kernels and shows the universal approximation property when a universal kernel is used. The results of the numerical study suggest that prescriptive analytics approaches may lead to significant performance improvements over traditional two-step approaches or SAA and that their performance is more robust to variations in the exogenous cost parameters. The second article, "Prescriptive Analytics for a Multi-Shift Staffing Problem", uses prescriptive analytics approaches to solve the (queuing-type) multi-shift staffing problem (MSSP) of an aviation maintenance provider that receives an uncertain number of customer requests at uncertain arrival times throughout each day and plans staff capacity for two shifts. This planning problem is particularly complex because order inflow and processing are modelled as a queuing system and the demand in each day is non-stationary. The article addresses this complexity by deriving an approximation of the MSSP that enables the planning problem to be solved using wSAA, kERM, and a novel Optimization Prediction approach. A numerical evaluation shows that wSAA leads to the best performance in this particular case.
The solution method developed in this article builds a foundation for solving queuing-type planning problems using prescriptive analytics approaches, bridging the "worlds" of queuing theory and prescriptive analytics. The third article, "Explainable Subgradient Tree Boosting for Prescriptive Analytics in Operations Management", proposes a novel prescriptive analytics approach, Subgradient Tree Boosting (STB), that solves the two capacity planning problems studied in the first and second articles while allowing decision-makers to derive explanations for prescribed decisions. STB combines the machine learning method gradient boosting with SAA and relies on subgradients because the cost functions of OR planning problems often cannot be differentiated. A comprehensive numerical analysis suggests that STB can achieve prescription performance comparable to that of wSAA and kERM. The explainability of STB prescriptions is demonstrated by breaking exemplary decisions down into the impacts of individual features. The novel STB approach is thus attractive not only for its prescription performance but also for an explainability that helps decision-makers understand the causality behind the prescriptions. The results presented in these three articles demonstrate that using prescriptive analytics approaches such as wSAA, kERM, and STB to solve complex planning problems can lead to significantly better decisions than traditional approaches that neglect feature data or rely on a parametric distribution estimation.
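For readers unfamiliar with wSAA, the sketch below illustrates its core mechanism on a deliberately simplified, single-resource capacity problem with invented data and costs: kNN weights select similar historical days, and the capacity decision minimizes the weighted empirical cost over those samples. The dissertation's problems are two-stage with matrix-valued demand, so this is only the skeleton of the idea.

```python
# Toy wSAA sketch with equal-weight kNN as the weight function (illustrative).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
n = 1000
X = rng.uniform(0, 1, (n, 2))                    # features (e.g., weekday, season)
demand = 50 + 40 * X[:, 0] + rng.gamma(2, 5, n)  # historical demand sample

cu, co = 6.0, 2.0                                # under- / over-capacity cost

knn = NearestNeighbors(n_neighbors=50).fit(X)

def wsaa_capacity(x_new: np.ndarray) -> float:
    """Minimize the weighted empirical cost over candidate capacities."""
    _, idx = knn.kneighbors(x_new.reshape(1, -1))
    neighbors = demand[idx[0]]                   # samples receiving equal weight
    candidates = np.sort(neighbors)              # an optimum lies on a sample point
    costs = [np.mean(cu * np.maximum(neighbors - c, 0) +
                     co * np.maximum(c - neighbors, 0)) for c in candidates]
    return candidates[int(np.argmin(costs))]

print(wsaa_capacity(np.array([0.8, 0.3])))
```

Because the underage/overage cost is piecewise linear and convex, an optimal capacity always coincides with one of the weighted sample points, which keeps the search finite.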
46

Design and Evaluation of Data-Driven Enterprise Process Monitoring Systems

Oberdorf, Felix January 2022
Increasing global competition forces organizations to improve their processes to gain a competitive advantage. In the manufacturing sector, this is facilitated by tremendous digital transformation. Fundamental components of such digitalized environments are process-aware information systems that record the execution of business processes, assist in process automation, and unlock the potential to analyze processes. However, most enterprise information systems focus on informational aspects, process automation, or data collection and do not tap into predictive or prescriptive analytics to foster data-driven decision-making. This dissertation therefore investigates the design of analytics-enabled information systems in five independent parts, which introduce analytics capabilities step by step and assess potential opportunities for process improvement in real-world scenarios. An essential prerequisite for setting up and extending analytics-enabled information systems is identifying success factors, which we study in the context of process mining as a descriptive analytics technique. We combine an established process mining framework with a success model to provide a structured approach for assessing success factors and identifying the challenges, motivations, and perceived business value of process mining, drawing on employees across organizations as well as process mining experts and consultants. Based on the derived findings, we extend the existing success model and provide lessons for generating business value through process mining. To assist the realization of this business value, we design an artifact for context-aware process mining that combines standard process logs with additional context information to support the automated identification of process realization paths associated with specific context events. Realizing business value remains challenging, however, because transforming processes based on informational insights is time-consuming. To overcome this, we showcase the development of a predictive process monitoring system for disruption handling in a production environment. The system leverages state-of-the-art machine learning algorithms for disruption type classification and duration prediction, and combines them with additional organizational data sources and a simple assignment procedure to assist the disruption handling process. Designing such a system and its analytics models is a challenging task, which we address by engineering a five-phase method for predictive end-to-end enterprise process network monitoring that leverages multi-headed deep neural networks. The method facilitates the integration of heterogeneous data sources through dedicated neural network input heads, which are concatenated for a prediction; a sketch of this architecture follows below. An evaluation based on a real-world use case highlights the superior performance of the resulting multi-headed network. Even the improved model is not perfect, however, so decisions about assigning agents to resolve disruptions must be made under uncertainty. Mathematical models can assist here, but under complex real-world conditions the number of potential scenarios increases massively and limits the solvability of assignment models. To overcome this and tap into the potential of prescriptive process monitoring systems, we develop a data-driven approximate dynamic stochastic programming approach that incorporates multiple uncertainties into the assignment decision.
The resulting model delivers a significant performance improvement and ultimately highlights the particular importance of analytics-enabled information systems for organizational process improvement.
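A minimal PyTorch sketch of the multi-headed idea described above: each data source gets a dedicated input head, and the head outputs are concatenated before joint prediction layers. The layer sizes, the two sources, and the disruption-duration target are illustrative assumptions, not the dissertation's architecture.

```python
# Toy multi-headed network: one input head per heterogeneous data source.
import torch
import torch.nn as nn

class MultiHeadedMonitor(nn.Module):
    def __init__(self, n_process_feats: int, n_context_feats: int):
        super().__init__()
        # Dedicated head for process-log features.
        self.process_head = nn.Sequential(
            nn.Linear(n_process_feats, 32), nn.ReLU())
        # Dedicated head for organizational context features.
        self.context_head = nn.Sequential(
            nn.Linear(n_context_feats, 16), nn.ReLU())
        # Joint layers operate on the concatenated head outputs.
        self.joint = nn.Sequential(
            nn.Linear(32 + 16, 32), nn.ReLU(),
            nn.Linear(32, 1))                    # e.g., predicted disruption duration

    def forward(self, x_process, x_context):
        h = torch.cat([self.process_head(x_process),
                       self.context_head(x_context)], dim=1)
        return self.joint(h)

model = MultiHeadedMonitor(n_process_feats=20, n_context_feats=8)
pred = model(torch.randn(4, 20), torch.randn(4, 8))  # batch of 4 disruptions
print(pred.shape)  # torch.Size([4, 1])
```

The design choice is that each source is embedded by its own head before fusion, so sources with different dimensionality and semantics do not have to be flattened into a single feature vector up front.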
47

Towards Causal Reinforcement Learning

Zhang, Junzhe January 2023
Causal inference provides a set of principles and tools that allows one to combine data and knowledge about an environment to reason about questions of a counterfactual nature - i.e., what would have happened if reality had been different - even when no data from this unrealized reality is available. Reinforcement learning provides a collection of methods that allows an agent to reason about optimal decision-making under uncertainty by trial and error - i.e., what would the consequences (e.g., subsequent rewards, states) have been had the action been different? While these two disciplines have evolved independently and with virtually no interaction, they operate over different aspects of the same building block, counterfactual reasoning, making them umbilically connected. This dissertation provides a unified framework of causal reinforcement learning (CRL) that formalizes the connection between causal inference and reinforcement learning and studies them side by side. The environment in which the agent will be deployed is parsimoniously modeled as a structural causal model (Pearl, 2000), consisting of a collection of data-generating mechanisms that lead to different causal invariances. This formalization, in turn, allows a unifying treatment of learning strategies that appear distinct in the literature, including online learning, off-policy learning, and causal identification. Moreover, novel learning opportunities naturally arise that are not addressed by existing strategies but entail new dimensions of analysis. Specifically, this work advances our understanding of several dimensions of optimal decision-making under uncertainty, covering the following capabilities and research questions. Confounding-Robust Policy Evaluation: how can candidate policies be evaluated from observations when unobserved confounders exist and the effects of actions on rewards appear more effective than they are? More importantly, how can a bound on the effect of a policy (i.e., a partial evaluation) be derived when it cannot be uniquely determined from the observational data? (A toy illustration of such bounds follows this abstract.) Offline-to-Online Learning: online learning can be applied to fine-tune the partial evaluation of candidate policies, but how can such partial knowledge be leveraged in a future online learning process without hurting the agent's performance, and under what conditions can the learning process instead be accelerated? Imitation Learning from Confounded Demonstrations: how can a proper performance measure (e.g., reward or utility) be designed from the behavioral trajectories of an expert demonstrating the task? Specifically, under which conditions can an imitator achieve the expert's performance by optimizing the learned reward, even when the imitator's and the expert's sensory capabilities differ and unobserved confounding bias is present in the demonstration data? By building on the modern theory of causation, approximation, and statistical learning, this dissertation provides algebraic and graphical criteria and algorithmic procedures to support the inference required in the corresponding learning tasks. This characterization generalizes the computational framework of reinforcement learning by leveraging underlying causal relationships, making it robust to the distribution shift present in data collected from the agent's different regimes of interaction with the environment, from passive observation to active intervention.
The theory provided in this work is general in that it takes any arbitrary set of structural causal assumptions as input and decides whether this specific instance admits an optimal solution. The problems and methods discussed have several applications in the empirical sciences, statistics, operations management, machine learning, and artificial intelligence.
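The first research question, deriving bounds over a policy's effect under unobserved confounding, can be illustrated with the classic no-assumptions bounds for outcomes in [0, 1]. The sketch below is a hypothetical toy example, not the dissertation's (tighter) machinery: under arbitrary confounding, only the rewards logged where the behavior action agrees with the candidate policy are informative, so the policy value is identified only up to an interval.

```python
# Toy Manski-style bounds for off-policy evaluation under confounding.
# The data-generating process and the candidate policy are invented.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
u = rng.binomial(1, 0.5, n)                   # unobserved confounder
x = rng.binomial(1, 0.5, n)                   # observed context
a = rng.binomial(1, 0.2 + 0.6 * u, n)         # behavior policy depends on u
y = rng.binomial(1, 0.3 + 0.4 * (a == u), n)  # reward in {0, 1} also depends on u

def policy(x):
    """Candidate policy to evaluate: act exactly when x == 1."""
    return x

match = (a == policy(x))
# Samples where the logged action agrees with the policy contribute their
# observed reward; the counterfactual rewards elsewhere can be anything in [0, 1].
observed = np.mean(y * match)
lower, upper = observed, observed + np.mean(~match)
print(f"V(pi) in [{lower:.3f}, {upper:.3f}]")
```

The width of the interval equals the probability of disagreement between the logged actions and the policy, which is exactly the mass of counterfactuals the data never reveal.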
48

Operations management research: contemporary themes, trends and potential future directions

Taylor, Andrew, Taylor, Margaret January 2009
Purpose - The purpose of this paper is to identify the contemporary research themes published in IJOPM in order to contribute to current debates about the future directions of operations management (OM) research. Design/methodology/approach - All 310 articles published in IJOPM from volume 24, issue 9 (2004) through volume 29, issue 12 (2009) are analysed using content analysis methods. This period is chosen because it covers all the articles over which the authors had full control during their tenure as Editors of the journal. The analysis is supplemented by data on all 1,853 manuscripts submitted to the journal during the same period and, further, by analysis of the reviews and feedback sent to all authors after review. Findings - The paper reports the main research themes and research methods in the 310 published papers. Statistics are provided on the countries represented, the size and international composition of author teams, the publication success rates of the countries submitting in the highest volumes, and the success rates associated with author team size; data on the reasons for rejection of manuscripts are also presented. Research limitations/implications - There is some residual inaccuracy in content analysis methods: in extracting research themes, a paper often covers more than one topic, and in categorising the causes of rejection there is frequently more than one reason, so a weighted scoring system might have been more insightful. In determining the country of origin, the country of the corresponding author is used, but some studies originate from international collaborations, which may give a slightly distorted picture. Finally, in computing publication success rates by comparing submissions and published papers, there is a time delay between the two data sets within any defined period of analysis. Practical implications - The analysis adds to debates about contemporary research themes; in particular it extends the work of Pilkington and Fitzgerald, which analysed all IJOPM articles between 1994 and 2003. The findings also suggest a need for more frequent use of multiple research methods, greater rigour in the planning and execution of fieldwork, greater engagement with the world of OM practice and, finally, consideration of how OM research can address wider social and political issues. Originality/value - This paper represents an inside view of the publication process at a leading OM journal; this kind of insight is rarely available in the public domain.
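As a trivial illustration of the success-rate statistic reported in the Findings, the snippet below computes publication success rates per country as published over submitted. All numbers are invented, not the paper's data, which, as the authors note, is also subject to a submission-to-publication time lag.

```python
# Hypothetical per-country publication success rates (invented numbers).
import pandas as pd

data = pd.DataFrame({
    "country":   ["UK", "USA", "China", "Sweden"],
    "submitted": [310, 250, 190, 60],   # hypothetical submission counts
    "published": [70, 45, 20, 15],      # hypothetical acceptances
})
data["success_rate"] = (data["published"] / data["submitted"]).round(3)
print(data.sort_values("success_rate", ascending=False))
```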
49

The Challenges of performance measurement in third sector organisations: the case of UK advocacy services

Taylor, Margaret, Taylor, Andrew January 2015
50

Exploring the role of supplier relationship management for sustainable operations: an OR perspective

Sharif, Amir M., Alshawi, S., Kamal, M.M., Eldabi, T., Mazhar, A. 13 November 2013
This paper provides a systems-based approach to exploring the relationship and integration between Supplier Relationship Management (SRM) factors as part of a Sustainable Operations Management (SOM) agenda, taking electronic procurement (e-Procurement) as a suitable context. Through a review of the extant literature, a Systems Archetype (SA) model was developed, based on the 'Accidental Adversaries' archetype, and findings from a quantitative pilot study exploring key factors pertinent to e-Procurement SRM were gathered and evaluated against SOM factors. The objective of this research was to describe and visualise the causal interrelationships involved in SRM-SOM through the application of an SA as an Operations Research tool (a toy simulation of the underlying archetype follows this abstract). The authors believe that this research also provides a unique approach to developing and harnessing the useful properties of Systems Thinking (ST) by attempting to reduce and organise the generally ad hoc and wide-ranging sequence of subjective perspectives commonly experienced in causal mapping experiments. The paper builds upon the extant literature and provides a further basis for continuing research in the areas of ST, SAs and the application of operational research to the planning of sustainable operations.
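To make the 'Accidental Adversaries' dynamic concrete, the toy simulation below encodes its two interlocking loops: each partner's local fix improves its own performance but erodes the partner's and the shared cooperative loop. All parameters and update rules are invented for illustration; they are not the paper's SA model.

```python
# Toy discrete-time simulation of the 'Accidental Adversaries' archetype.
import numpy as np

steps = 50
coop = 1.0                                  # strength of the shared reinforcing loop
local_fix_a, local_fix_b = 0.3, 0.3         # each partner's unilateral workaround
perf_a = np.zeros(steps)
perf_b = np.zeros(steps)
perf_a[0] = perf_b[0] = 1.0

for t in range(1, steps):
    # Mutual success reinforces both partners through the cooperative loop...
    gain = 0.05 * coop * (perf_a[t - 1] + perf_b[t - 1])
    # ...but each local fix helps its owner less than it harms the partner.
    perf_a[t] = perf_a[t - 1] + gain + local_fix_a - 1.5 * local_fix_b
    perf_b[t] = perf_b[t - 1] + gain + local_fix_b - 1.5 * local_fix_a
    # Perceived harm erodes the cooperative relationship itself.
    coop = max(0.0, coop - 0.02 * (local_fix_a + local_fix_b))

print(f"joint performance: {perf_a[-1] + perf_b[-1]:.2f}, cooperation left: {coop:.2f}")
```

Setting both local fixes to zero lets the reinforcing cooperation loop dominate and joint performance grow, which is the archetype's intended lesson: locally rational fixes jointly defeat the partnership.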
