211

Machine learning assisted real‑time deformability cytometry of CD34+ cells allows to identify patients with myelodysplastic syndromes

Herbig, Maik, Jacobi, Angela, Wobus, Manja, Weidner, Heike, Mies, Anna, Kräter, Martin, Otto, Oliver, Thiede, Christian, Weickert, Marie‑Theresa, Götze, Katharina S., Rauner, Martina, Hofbauer, Lorenz C., Bornhäuser, Martin, Guck, Jochen, Ader, Marius, Platzbecker, Uwe, Balaian, Ekaterina 16 May 2024 (has links)
Diagnosis of myelodysplastic syndrome (MDS) mainly relies on manual assessment of peripheral blood and bone marrow cell morphology. The WHO guidelines suggest a visual screening of 200 to 500 cells, which inevitably blinds the assessor to rare cell populations and leads to low reproducibility. Moreover, the human eye is not suited to detecting shifts in the properties of entire cell populations. Hence, quantitative image analysis could improve the accuracy and reproducibility of MDS diagnosis. We used real-time deformability cytometry (RT-DC) to measure bone marrow biopsy samples from MDS patients and age-matched healthy individuals. RT-DC is a high-throughput (1000 cells/s) imaging flow cytometer capable of recording morphological and mechanical properties of single cells. These single-cell properties were quantified using automated image analysis, and machine learning was employed to discover morpho-mechanical patterns across thousands of individual cells that distinguish healthy from MDS samples. We found that the distribution of cell sizes differs between healthy and MDS samples, with MDS showing a narrower spread of cell sizes. Furthermore, we found a strong correlation between the mechanical properties of cells and the number of disease-determining mutations, a relationship inaccessible to current diagnostic approaches. Hence, machine-learning-assisted RT-DC could be a promising tool for automating sample analysis, assisting experts during diagnosis, or providing a scalable solution for MDS diagnosis in regions lacking sufficient medical experts.
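As an illustrative sketch of the sample-level classification this abstract describes, the following assumes per-cell features (here just size and deformability) have already been extracted; the aggregation into distribution statistics, the synthetic data, and the random-forest classifier are our own assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def sample_features(cells):
    # Aggregate per-cell measurements into per-sample distribution
    # statistics (mean, std, interquartile range), standing in for the
    # morpho-mechanical patterns mined with machine learning.
    return np.concatenate([
        cells.mean(axis=0),
        cells.std(axis=0),
        np.percentile(cells, 75, axis=0) - np.percentile(cells, 25, axis=0),
    ])

# Synthetic stand-in data: each sample is ~1000 cells with 2 features
# (cell size, deformability); MDS samples get a narrower size spread,
# mirroring the finding reported in the abstract.
healthy = [rng.normal([50, 1.0], [8, 0.2], size=(1000, 2)) for _ in range(20)]
mds = [rng.normal([50, 1.1], [4, 0.2], size=(1000, 2)) for _ in range(20)]

X = np.array([sample_features(s) for s in healthy + mds])
y = np.array([0] * len(healthy) + [1] * len(mds))

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```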
212

On the Efficient Utilization of Dense Nonlocal Adjacency Information In Graph Neural Networks

Bünger, Dominik 14 December 2021 (has links)
In den letzten Jahren hat das Teilgebiet des Maschinellen Lernens, das sich mit Graphdaten beschäftigt, durch die Entwicklung von spezialisierten Graph-Neuronalen Netzen (GNNs) mit mathematischer Begründung in der spektralen Graphtheorie große Sprünge nach vorn gemacht. Zusätzlich zu natürlichen Graphdaten können diese Methoden auch auf Datensätze ohne Graphen angewendet werden, indem man einen Graphen künstlich mithilfe eines definierten Adjazenzbegriffs zwischen den Samplen konstruiert. Nach dem neuesten Stand der Technik wird jedes Sample mit einer geringen Anzahl an Nachbarn verknüpft, um gleichzeitig das dünnbesetzte Verhalten natürlicher Graphen nachzuahmen, die Stärken bestehender GNN-Methoden auszunutzen und quadratische Abhängigkeit von der Knotenanzahl zu verhindern, welche diesen Ansatz für große Datensätze unbrauchbar machen würde. Die vorliegende Arbeit beleuchtet die alternative Konstruktion von vollbesetzten Graphen basierend auf Kernel-Funktionen. Dabei quantifizieren die Verknüpfungen eines jeden Samples explizit die Ähnlichkeit zu allen anderen Samplen. Deshalb enthält der Graph eine quadratische Anzahl an Kanten, die die lokalen und nicht-lokalen Nachbarschaftsinformationen beschreiben. Obwohl dieser Ansatz in anderen Kontexten wie der Lösung partieller Differentialgleichungen ausgiebig untersucht wurde, wird er im Maschinellen Lernen heutzutage meist wegen der dichtbesetzten Adjazenzmatrizen als unbrauchbar empfunden. Aus diesem Grund behandelt ein großer Teil dieser Arbeit numerische Techniken für schnelle Auswertungen, insbesondere Eigenwertberechnungen, in wichtigen Spezialfällen, bei denen die Samples durch niedrigdimensionale Vektoren (wie z.B. in dreidimensionalen Punktwolken) oder durch kategoriale Attribute beschrieben werden. Weiterhin wird untersucht, wie diese dichtbesetzten Adjazenzinformationen in Lernsituationen auf Graphen benutzt werden können. Es wird eine eigene transduktive Lernmethode präsentiert: eine Version eines Graph Convolutional Networks (GCN), das auf die spektralen und räumlichen Eigenschaften von dichtbesetzten Graphen abgestimmt ist. Schließlich wird die Anwendung von Kernel-basierten Adjazenzmatrizen in der Beschleunigung der erfolgreichen Architektur “PointNet++” umrissen. Im Verlauf der Arbeit werden die Methoden in ausführlichen numerischen Experimenten evaluiert. Zusätzlich zu der empirischen Genauigkeit der Neuronalen Netze liegt der Fokus auf wettbewerbsfähigen Laufzeiten, um die Berechnungs- und Energiekosten der Methoden zu reduzieren. / Over the past few years, graph learning - the subdomain of machine learning on graph data - has taken big leaps forward through the development of specialized Graph Neural Networks (GNNs) that have mathematical foundations in spectral graph theory. In addition to natural graph data, these methods can be applied to non-graph data sets by constructing a graph artificially using a predefined notion of adjacency between samples. The state of the art is to connect each sample to only a small number of neighbors in order to simultaneously mimic the sparse behavior of natural graphs, play to the strengths of existing GNN methods, and avoid the quadratic scaling in the number of nodes that would make the approach infeasible for large problem sizes. In this thesis, we shed light on the alternative construction of kernel-based fully-connected graphs. Here, each sample's connections explicitly quantify its similarity to all other samples.
Hence the graph contains a quadratic number of edges, which encode both local and non-local neighborhood information. Though this approach is well studied in other settings, including the solution of partial differential equations, it is typically dismissed in machine learning nowadays because of its dense adjacency matrices. We thus dedicate a large portion of this work to showcasing numerical techniques for fast evaluations, especially eigenvalue computations, in important special cases where samples are described by low-dimensional feature vectors (e.g., three-dimensional point clouds) or by a small set of categorical attributes. We then investigate how this dense adjacency information can be utilized in graph learning settings. In particular, we present our own transductive learning method, a version of a Graph Convolutional Network (GCN) tailored to the spectral and spatial properties of dense graphs. We furthermore outline the application of kernel-based adjacency matrices in speeding up the successful PointNet++ architecture. Throughout this work, we evaluate our methods in extensive numerical experiments. In addition to the empirical accuracy of our neural networks, we focus on competitive runtimes in order to decrease the computational and energy cost of our methods.
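The following sketch illustrates the dense-graph idea under discussion: a Gaussian-kernel adjacency over low-dimensional samples applied matrix-free, so that eigenvalue computations never store the quadratic number of edges explicitly. The kernel choice and the plain block-wise matvec are assumptions for illustration; the thesis is concerned with much faster evaluation schemes for exactly this kind of product.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 3))  # low-dimensional samples, e.g. a point cloud
sigma = 1.0

def kernel_matvec(v):
    # y_i = sum_j exp(-||x_i - x_j||^2 / (2 sigma^2)) v_j, computed
    # block-wise so the full n-by-n adjacency is never materialized.
    y = np.empty(len(X))
    for start in range(0, len(X), 256):
        block = X[start:start + 256]
        d2 = ((block[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        y[start:start + 256] = np.exp(-d2 / (2 * sigma**2)) @ v
    return y

A = LinearOperator((len(X), len(X)), matvec=kernel_matvec)
eigenvalues, _ = eigsh(A, k=5, which="LM")  # leading spectrum of the dense graph
print(eigenvalues)
```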
213

Automating Geospatial RDF Dataset Integration and Enrichment

Sherif, Mohamed Ahmed Mohamed 12 May 2016 (has links)
Over the last years, the Linked Open Data (LOD) cloud has evolved from a mere 12 to more than 10,000 knowledge bases. These knowledge bases come from diverse domains including (but not limited to) publications, life sciences, social networking, government, media and linguistics. Moreover, the LOD cloud also contains a large number of cross-domain knowledge bases such as DBpedia and Yago2. These knowledge bases are commonly managed in a decentralized fashion and contain partly overlapping information. This architectural choice has led to knowledge pertaining to the same domain being published by independent entities in the LOD cloud. For example, information on drugs can be found in Diseasome as well as DBpedia and Drugbank. Furthermore, certain knowledge bases such as DBLP have been published by several bodies, which in turn has led to duplicated content in the LOD cloud. In addition, large amounts of geo-spatial information have been made available with the growth of the heterogeneous Web of Data. The concurrent publication of knowledge bases containing related information promises to become a phenomenon of increasing importance with the growing number of independent data providers. Enabling the joint use of the knowledge bases published by these providers for tasks such as federated queries, cross-ontology question answering and data integration is most commonly tackled by creating links between the resources described within these knowledge bases. Within this thesis, we spur the transition from isolated knowledge bases to enriched Linked Data sets where information can be easily integrated and processed. To achieve this goal, we provide concepts, approaches and use cases that facilitate the integration and enrichment of information with other data types that are already present on the Linked Data Web, with a focus on geo-spatial data. The first challenge that motivates our work is the lack of measures that use geographic data for linking geo-spatial knowledge bases. This is partly due to geo-spatial resources being described by means of vector geometry. In particular, discrepancies in granularity and error measurements across knowledge bases render the selection of appropriate distance measures for geo-spatial resources difficult. We address this challenge by evaluating the existing literature on point-set measures that can be used to quantify the similarity of vector geometries. We then present ten measures derived from this literature and evaluate them on samples of three real knowledge bases. The second challenge we address in this thesis is the lack of automatic Link Discovery (LD) approaches capable of dealing with geo-spatial knowledge bases with missing and erroneous data. To this end, we present Colibri, an unsupervised approach that discovers links between knowledge bases while improving the quality of the instance data in these knowledge bases. A Colibri iteration begins by generating links between knowledge bases. The approach then uses these links to detect resources with probably erroneous or missing information, which is finally corrected or added. The third challenge we address is the lack of scalable LD approaches for tackling big geo-spatial knowledge bases. Thus, we present Deterministic Particle-Swarm Optimization (DPSO), a novel load-balancing technique for LD on parallel hardware based on particle-swarm optimization.
We combine this approach with the Orchid algorithm for geo-spatial linking and evaluate it on real and artificial data sets. The lack of approaches for automatically updating the links of an evolving knowledge base is our fourth challenge. This challenge is addressed in this thesis by the Wombat algorithm. Wombat is a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. Wombat is based on generalisation via an upward refinement operator to traverse the space of Link Specifications (LS). We study the theoretical characteristics of Wombat and evaluate it on different benchmark data sets. The last challenge addressed herein is the lack of automatic approaches for geo-spatial knowledge base enrichment. Thus, we propose Deer, a supervised learning approach based on a refinement operator for enriching Resource Description Framework (RDF) data sets. We show how exemplary descriptions of enriched resources can be used to generate accurate enrichment pipelines. We evaluate our approach against manually defined enrichment pipelines and show that it can learn accurate pipelines even when provided with a small number of training examples. Each of the proposed approaches is implemented and evaluated against state-of-the-art approaches on real and/or artificial data sets. Moreover, all approaches have been peer-reviewed and published in a conference or journal paper. Throughout this thesis, we detail the ideas, implementation and evaluation of each of the approaches. Moreover, we discuss each approach and present lessons learned. Finally, we conclude this thesis by presenting a set of possible future extensions and use cases for each of the proposed approaches.
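As a small, hedged illustration of the point-set measures for vector geometries discussed above, here is the classic symmetric Hausdorff distance between two geometries given as point sets; it is one well-known measure of this family, not necessarily one of the ten measures evaluated in the thesis.

```python
import numpy as np

def hausdorff(P, Q):
    # Pairwise distances between two geometries' boundary points;
    # the max of the two directed minima gives the symmetric
    # Hausdorff distance, a point-set measure for comparing vector
    # geometries across knowledge bases.
    d = np.sqrt(((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two slightly shifted versions of the same geometry, as might be
# published by two independent LOD data providers.
P = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
Q = P + np.array([0.05, -0.02])
print(hausdorff(P, Q))  # small value -> candidate link
```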
214

Learning Continuous Human-Robot Interactions from Human-Human Demonstrations

Vogt, David 02 March 2018 (has links)
This dissertation develops a data-driven method for machine learning of human-robot interactions from human-human demonstrations. During a training phase, the movements of two human interaction partners are captured via motion capture and learned in a two-person interaction model. At runtime, the model is used both to recognize the movements of the human interaction partner and to generate adapted robot movements. The capability of the approach is evaluated in three complex applications, each of which requires continuous motion coordination between human and robot. The result of this dissertation is a learning method that enables intuitive, goal-directed and safe collaboration with robots.
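A drastically simplified, hypothetical sketch of the recognize-then-generate loop described above: nearest-neighbor retrieval over paired demonstration trajectories stands in for the actual two-person interaction model, which the abstract does not specify in detail.

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data: paired trajectories from two interacting persons
# (time steps x joint dimensions), captured via motion capture.
demos_human = [rng.standard_normal((100, 12)) for _ in range(50)]
demos_partner = [rng.standard_normal((100, 12)) for _ in range(50)]

def respond(observed):
    # Recognize: find the demonstrated human motion closest to the
    # observed segment; generate: return the partner motion recorded
    # together with it, to be retargeted to the robot.
    dists = [np.linalg.norm(observed - d[:len(observed)]) for d in demos_human]
    return demos_partner[int(np.argmin(dists))]

observed = demos_human[7][:40] + 0.1 * rng.standard_normal((40, 12))
robot_motion = respond(observed)
print(robot_motion.shape)
```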
215

Machine-Vision-Based Activity, Mobility and Motion Analysis for Assistance Systems in Human Health Care

Richter, Julia 18 April 2019 (has links)
Due to the continuous ageing of our society, both the care and the health sector will encounter challenges in maintaining the quality of human care and health standards. While the number of people with conditions such as dementia and physical illness is rising, we simultaneously face a shortage of medical personnel such as caregivers and therapists. One possible approach to this problem is the employment of technical assistance systems that support both medical personnel and elderly people living alone at home. This thesis presents approaches to provide assistance for these target groups. In this work, algorithms for vision-based analysis of daily activity, mobility and motion were developed and integrated into prototypical assistance systems. The developed algorithms process 3-D point clouds as well as skeleton joint positions to generate meta information concerning the activities and mobility of elderly persons living alone at home. Such information was previously inaccessible and is now available for monitoring. By generating this meta information, a basis for the detection of long-term and short-term health changes has been created. Besides monitoring meta information, mobilisation for maintaining physical capabilities, either ambulatory or at home, is a further focus of this thesis. Algorithms for the qualitative assessment of physical exercises were therefore investigated, considering motion sequences in the form of skeleton joint trajectories as well as heat development in active muscles. These algorithms enable autonomous physical training under the supervision of a virtual therapist, even at home. / Aufgrund der voranschreitenden Überalterung unserer Gesellschaft werden sowohl der Pflege- als auch der Gesundheitssektor vor enorme Herausforderungen gestellt. Während die Zahl an vorrangig altersbedingten Erkrankungen, wie Demenz oder physische Erkrankungen des Bewegungsapparates, weiterhin zunehmen wird, stagniert die Zahl an medizinischem Fachpersonal, wie Therapeuten und Pflegekräften. An dieser Stelle besteht das Ziel, die Qualität medizinischer Leistungen auf hohem Niveau zu halten und dabei die Einhaltung von Pflege- und Gesundheitsstandards sicherzustellen. Ein möglicher Ansatz hierfür ist der Einsatz technischer Assistenzsysteme, welche sowohl das medizinische Personal und Angehörige entlasten als auch ältere, insbesondere allein lebende Menschen zu Hause unterstützen können. Die vorliegende Arbeit stellt Ansätze zur Unterstützung der genannten Zielgruppen vor, die prototypisch in Assistenzsystemen zur visuellen, kamerabasierten Analyse von täglichen Aktivitäten, von Mobilität und von Bewegungen bei Trainingsübungen integriert sind. Die entwickelten Algorithmen verarbeiten dreidimensionale Punktwolken und Gelenkpositionen des menschlichen Skeletts, um sogenannte Meta-Daten über tägliche Aktivitäten und die Mobilität einer allein lebenden Person zu erhalten. Diese Informationen waren bis jetzt nicht verfügbar, können allerdings für den Patienten selbst, für medizinisches Personal und Angehörige aufschlussreich sein, denn diese Meta-Daten liefern die Grundlage für die Detektion kurz- und langfristiger Veränderungen im Verhalten oder in der Mobilität, die ansonsten wahrscheinlich unbemerkt geblieben wären. Neben der Erfassung solcher Meta-Informationen liegt ein weiterer Fokus der Arbeit in der Mobilisierung von Patienten durch angeleitetes Training, um ihre Mobilität und körperliche Verfassung zu stärken.
Dabei wurden Algorithmen zur qualitativen Bewertung und Vermittlung von Korrekturhinweisen bei physischen Trainingsübungen entwickelt, die auf Trajektorien von Gelenkpositionen und der Wärmeentwicklung in Muskeln beruhen. Diese Algorithmen ermöglichen aufgrund der Nachahmung eines durch den Therapeuten gegebenen Feedbacks ein autonomes Training.
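A minimal sketch of the kind of skeleton-based exercise assessment described in this abstract, assuming joint positions are available from a depth sensor; the knee-angle feature, the reference execution, and the tolerance threshold are illustrative assumptions, not the thesis's exact criteria.

```python
import numpy as np

def joint_angle(a, b, c):
    # Angle at joint b formed by segments b->a and b->c, from 3-D
    # skeleton joint positions (e.g., hip-knee-ankle for a squat).
    u, v = a - b, c - b
    cos = (u * v).sum(-1) / (np.linalg.norm(u, axis=-1) * np.linalg.norm(v, axis=-1))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def assess(trajectory, reference, tol_deg=15.0):
    # Qualitative check: fraction of frames whose joint angle deviates
    # from the reference execution by more than a tolerance.
    return (np.abs(trajectory - reference) > tol_deg).mean()

rng = np.random.default_rng(3)
hip, knee, ankle = rng.standard_normal((3, 200, 3))  # toy joint trajectories
angles = joint_angle(hip, knee, ankle)
reference = angles + rng.normal(0, 5, size=angles.shape)
print("share of out-of-band frames:", assess(angles, reference))
```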
216

Learning Sampling-Based 6D Object Pose Estimation

Krull, Alexander 31 August 2018 (has links)
The task of 6D object pose estimation, i.e., estimating an object's position (three degrees of freedom) and orientation (three degrees of freedom) from images, is an essential building block of many modern applications, such as robotic grasping, autonomous driving, or augmented reality. Automatic pose estimation systems have to overcome a variety of visual ambiguities, including texture-less objects, clutter, and occlusion. Since many applications demand real-time performance, the efficient use of computational resources is an additional challenge. In this thesis, we take a probabilistic stance on overcoming these issues. We build on a highly successful automatic pose estimation framework based on predicting pixel-wise correspondences between the camera coordinate system and the local coordinate system of the object. These dense correspondences are used to generate a pool of hypotheses, which in turn serve as a starting point for a final search procedure. We present three systems that each use probabilistic modeling and sampling to improve upon different aspects of the framework. The goal of the first system, System I, is to enable pose tracking, i.e., estimating the pose of an object in a sequence of frames instead of a single image. By including information from previous frames, tracking systems can resolve many visual ambiguities and reduce computation time. System I is a particle filter (PF) approach. The PF represents its belief about the pose in each frame by propagating a set of samples through time. Our system uses the hypothesis generation process from the original framework as part of a proposal distribution that efficiently concentrates samples in the appropriate areas. In System II, we focus on the problem of evaluating the quality of pose hypotheses. This task plays an essential role in the final search procedure of the original framework. We use a convolutional neural network (CNN) to assess the quality of a hypothesis by comparing rendered and observed images. To train the CNN, we view it as part of an energy-based probability distribution in pose space. This probabilistic perspective allows us to train the system under the maximum likelihood paradigm. We use a sampling approach to approximate the required gradients. The resulting pose estimation system yields superior results, in particular for highly occluded objects. In System III, we take the idea of machine learning a step further. Instead of learning to predict a hypothesis quality measure to be used in a search procedure, we present a way of learning the search procedure itself. We train a reinforcement learning (RL) agent, termed PoseAgent, to steer the search process and make optimal use of a given computational budget. PoseAgent dynamically decides which hypothesis should be refined next and which one should ultimately be output as the final estimate. Since the search procedure includes discrete, non-differentiable choices, training the system via gradient descent is not directly possible. To solve this problem, we model the behavior of PoseAgent as a stochastic policy governed by a CNN. This allows us to use a sampling-based stochastic policy gradient training procedure. We believe that some of the ideas developed in this thesis, such as the sampling-driven, probabilistically motivated training of a CNN for the comparison of images or the search procedure implemented by PoseAgent, have the potential to be applied in fields beyond pose estimation as well.
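For orientation, here is a schematic particle filter of the kind System I builds on. The diffusion motion model and the Gaussian weighting below are placeholders for the framework's proposal distribution and image-based hypothesis scoring, which the abstract does not spell out.

```python
import numpy as np

rng = np.random.default_rng(4)

def weight(poses, observation):
    # Stand-in for the image-based hypothesis scoring the real system
    # performs; here simply agreement with a noisy pose "measurement".
    return np.exp(-0.5 * ((poses - observation) ** 2).sum(axis=1))

n, dim = 500, 6  # 6D pose: 3 translation + 3 rotation parameters
particles = rng.standard_normal((n, dim))
for observation in rng.standard_normal((30, dim)):     # one observation per frame
    particles += 0.05 * rng.standard_normal((n, dim))  # diffusion motion model
    w = weight(particles, observation)
    w /= w.sum()
    particles = particles[rng.choice(n, size=n, p=w)]  # importance resampling

print("final pose estimate:", particles.mean(axis=0))
```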
217

Fenchel duality-based algorithms for convex optimization problems with applications in machine learning and image restoration

Heinrich, André 21 March 2013 (has links)
The main contribution of this thesis is the concept of Fenchel duality with a focus on its application to machine learning problems and image restoration tasks. We formulate a general optimization problem for modeling support vector machine tasks and assign a Fenchel dual problem to it, proving weak and strong duality statements as well as necessary and sufficient optimality conditions for this primal-dual pair. In addition, several special instances of the general optimization problem are derived for different choices of loss functions for both the regression and the classification task. The practicality of these approaches is demonstrated by numerically solving several problems. We formulate a general nonsmooth optimization problem and assign a Fenchel dual problem to it. It is shown that the optimal objective values of the primal and the dual problem coincide and that the primal problem has an optimal solution under certain assumptions. The dual problem turns out to be nonsmooth in general, and therefore a regularization is performed twice to obtain an approximate dual problem that can be solved efficiently via a fast gradient algorithm. We show how an approximate optimal and feasible primal solution can be constructed by means of sequences of proximal points closely related to the dual iterates. Furthermore, we show that this solution indeed converges to the optimal solution of the primal problem for arbitrarily small accuracy levels. Finally, the support vector regression task is shown to arise as a particular case of the general optimization problem, and the theory is specialized to this problem. We calculate several proximal points occurring when using different loss functions as well as for some regularization problems applied in image restoration tasks. Numerical experiments illustrate the applicability of our approach for these types of problems.
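For reference, the standard Fenchel primal-dual pair underlying results of this kind can be stated as follows; the notation here is ours, not necessarily the thesis's.

```latex
% Standard Fenchel duality: f, g proper, convex, lower semicontinuous;
% A a linear map; f^* the convex conjugate of f.
\begin{align*}
(P)\qquad & \inf_{x \in \mathbb{R}^n} \bigl\{ f(x) + g(Ax) \bigr\}, \\
(D)\qquad & \sup_{y \in \mathbb{R}^m} \bigl\{ -f^{*}(-A^{\top}y) - g^{*}(y) \bigr\},
\qquad f^{*}(p) := \sup_{x} \bigl\{ \langle p, x \rangle - f(x) \bigr\}.
\end{align*}
% Weak duality v(P) >= v(D) always holds; under a suitable constraint
% qualification strong duality holds, and (x^*, y^*) is optimal iff
% -A^{\top} y^* \in \partial f(x^*) and y^* \in \partial g(A x^*).
```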
218

Machine Learning im CAE

Thieme, Cornelia 24 May 2023 (has links)
Many companies have a large collection of existing calculations for different model variants. Hexagon's Odyssee software (Hexagon, formerly MSC Software) helps to find out what information is contained in this data. New calculations can sometimes be avoided because the results for new parameter combinations can be predicted from the existing calculations. This is particularly interesting for non-linear or large models with long run times. The software also helps when setting up new DOEs and offers a variety of options for statistical displays. In the lecture, the number-based and image-based methods are compared. / Viele Firmen können auf eine große Sammlung vorhandener Rechnungen für verschiedene Modellvarianten zurückgreifen. Die Software Odyssee von Hexagon (früher MSC Software) hilft herauszufinden, welche Informationen in diesen Daten stecken. Neue Rechnungen kann man sich teilweise ersparen, weil die Ergebnisse für neue Parameterkombinationen aus den vorhandenen Rechnungen vorhergesagt werden können. Dies ist besonders interessant für nichtlineare oder große Modelle mit langer Rechenzeit. Die Software hilft auch beim Aufsetzen neuer DOEs und bietet vielfältige Möglichkeiten für statistische Darstellungen. In dem Vortrag werden die zahlenbasierte und bildbasierte Methode gegenübergestellt.
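A generic surrogate-modeling sketch of the idea described above: predicting the result of a new parameter combination from existing simulation runs. The RBF interpolator and the toy response function are illustrative stand-ins, not Odyssee's actual methods.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)

# Existing simulation runs: DOE parameter combinations -> scalar result
# (e.g., hypothetically, sheet thickness and load -> max displacement).
params = rng.uniform([1.0, 10.0], [3.0, 50.0], size=(40, 2))
results = np.sin(params[:, 0]) + 0.02 * params[:, 1] + 0.01 * rng.standard_normal(40)

surrogate = RBFInterpolator(params, results, smoothing=1e-6)

# Predict the outcome of a new parameter combination without running
# the (potentially long) nonlinear simulation again.
new_point = np.array([[2.2, 35.0]])
print(surrogate(new_point))
```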
219

Improving drill-core hyperspectral mineral mapping using machine learning

Contreras Acosta, Isabel Cecilia 21 July 2022 (has links)
Considering the ever-growing global demand for raw materials and the complexity of the geological deposits that are still to be found, high-quality, extensive mineralogical information is required. Mineral exploration remains a risk-prone process, with empirical approaches prevailing over data-driven strategies. Amongst the many ways to innovate, hyperspectral imaging sensors for drill-core mineral mapping are one of the disruptive technologies. This potential could be multiplied by implementing machine learning. This dissertation introduces a workflow that allows the use of supervised learning to map minerals by means of ancillary data commonly acquired during exploration campaigns (i.e., mineralogy, geochemistry and core photography). The fusion of hyperspectral data with such ancillary data allows not only locally acquired information to be upscaled to complete boreholes, but also the spatial resolution of the mineral maps to be enhanced. Thus, the proposed approaches provide digitally archived, objective maps that serve as vectors for exploration and support geologists in their decision making.
Contents: 1 Introduction; 2 Hyperspectral mineral mapping using supervised learning and mineralogical data; 3 Geochemical and hyperspectral data integration; 4 Improved spatial resolution for mineral mapping; 5 Bibliography.
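A compact sketch of the supervised workflow the dissertation describes: label hyperspectral pixels from a co-registered, re-sampled ancillary mineral map, train a pixel classifier, and upscale to a full mineral map of the core. The band count, class layout, and random-forest classifier are assumed for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)

# Drill-core hyperspectral cube (rows x cols x bands) and a co-registered
# mineralogical map providing sparse training labels
# (0 = unlabeled; 1..3 = mineral classes from ancillary mineralogy).
cube = rng.random((60, 80, 50))
labels = np.zeros((60, 80), dtype=int)
labels[5:15, 5:15], labels[30:40, 20:30], labels[45:55, 60:70] = 1, 2, 3

train = labels > 0
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(cube[train], labels[train])

# Upscale: classify every pixel to obtain a full mineral map of the core.
mineral_map = clf.predict(cube.reshape(-1, 50)).reshape(60, 80)
print(np.bincount(mineral_map.ravel()))
```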
220

Vorhersage des in-game Status im Fußball mit Maschinellem Lernen basierend auf zeitkontinuierlichen Spielerpositionsdaten

Lang, Steffen, Wild, Raphael, Isenko, Alexander, Link, Daniel 14 October 2022 (has links)
Diese Studie beschäftigt sich mit der Vorhersage, ausschließlich auf Basis von Spielerpositionsdaten, ob ein Fußballspiel in einem Moment unterbrochen ist oder nicht. Hierfür wurden vier Machine-Learning-Modelle mit Daten von 102 Spielen der Fußball-Bundesliga trainiert und ihre Genauigkeit evaluiert. Dabei zeigte sich eine Genauigkeit von bis zu 92% für einen einzelnen Moment und eine Präzision von 81% für ganze Unterbrechungen. / This study deals with predicting, based solely on player position data, whether a soccer match is interrupted at a given moment or not. For this purpose, four machine-learning models were trained with data from 102 matches of the German Bundesliga and their accuracy was evaluated. The results showed an accuracy of up to 92% for a single moment and a precision of 81% for whole interruptions.
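An illustrative sketch of frame-wise prediction from player positions, of the kind the study performs; the hand-crafted features, the logistic-regression model, and the synthetic data below are our assumptions, not the study's four models or its Bundesliga data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

def frame_features(positions):
    # positions: (22 players, 2) pitch coordinates at one moment.
    # Simple cues such as team spread and centroid location tend to
    # differ between active play and interruptions.
    centroid = positions.mean(axis=0)
    spread = np.linalg.norm(positions - centroid, axis=1)
    return [spread.mean(), spread.std(), centroid[0], centroid[1]]

# Synthetic stand-in for labeled position data:
# y = 1 means the match is interrupted at that moment.
frames = rng.uniform(-52.5, 52.5, size=(5000, 22, 2))
y = rng.integers(0, 2, size=5000)
X = np.array([frame_features(f) for f in frames])

clf = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
print("held-out accuracy:", clf.score(X[4000:], y[4000:]))
```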
