141

Non-Rigid Liver Registration for Laparoscopy using Data-Driven Biomechanical Models

Pfeiffer, Micha (02 June 2022)
During laparoscopic liver resection, the limited access to the organ, the small field of view and the lack of palpation can obstruct a surgeon’s workflow. Automatic navigation systems could use the images from preoperative volumetric organ scans to help the surgeons find their targets (tumors) and risk structures (vessels) more efficiently. This requires the preoperative data to be fused (or registered) with the intraoperative scene in order to display information at the correct intraoperative position. One key challenge in this setting is the automatic estimation of the organ’s current intraoperative deformation, which is required in order to predict the position of internal structures. Parameterizing the many patient-specific unknowns (tissue properties, boundary conditions, interactions with other tissues, direction of gravity) is very difficult. Instead, this work explores how to employ deep neural networks to solve the registration problem in a data-driven manner. To this end, convolutional neural networks are trained on synthetic data to estimate an organ’s intraoperative displacement field and thus its current deformation. To drive this estimation, visible surface cues from the intraoperative camera view must be supplied to the networks. Since reliable surface features are very difficult to find, the networks are adapted to also find correspondences between the pre- and intraoperative liver geometry automatically. This combines the search for correspondences with the biomechanical behavior estimation and allows the networks to tackle the full non-rigid registration problem in a single step. The result is a model which can quickly predict the volume deformation of a liver, given only sparse surface information. The model combines the advantages of a physically accurate biomechanical simulation with the speed and powerful feature extraction capabilities of deep neural networks. To test the method intraoperatively, a registration pipeline is developed which constructs a map of the liver and its surroundings from the laparoscopic video and then uses the neural networks to fuse the preoperative volume data into this map. The deformed organ volume can then be rendered as an overlay directly onto the laparoscopic video stream. The focus of this pipeline is to be applicable to real surgery, where everything should be quick and non-intrusive. To meet these requirements, a SLAM system is used to localize the laparoscopic camera (avoiding setup of an external tracking system), various neural networks are used to quickly interpret the scene, and semi-automatic tools let the surgeons guide the system. Beyond the concrete advantages of the data-driven approach for intraoperative registration, this work also demonstrates general benefits of training a registration system preoperatively on synthetic data.
The method lets the engineer decide which values need to be known explicitly and which should be estimated implicitly by the networks, which opens the door to many new possibilities.

Contents:
1 Introduction 1.1 Motivation 1.1.1 Navigated Liver Surgery 1.1.2 Laparoscopic Liver Registration 1.2 Challenges in Laparoscopic Liver Registration 1.2.1 Preoperative Model 1.2.2 Intraoperative Data 1.2.3 Fusion/Registration 1.2.4 Data 1.3 Scope and Goals of this Work 1.3.1 Data-Driven, Biomechanical Model 1.3.2 Data-Driven Non-Rigid Registration 1.3.3 Building a Working Prototype
2 State of the Art 2.1 Rigid Registration 2.2 Non-Rigid Liver Registration 2.3 Neural Networks for Simulation and Registration
3 Theoretical Background 3.1 Liver 3.2 Laparoscopic Liver Resection 3.2.1 Staging Procedure 3.3 Biomechanical Simulation 3.3.1 Physical Balance Principles 3.3.2 Material Models 3.3.3 Numerical Solver: The Finite Element Method (FEM) 3.3.4 The Lagrangian Specification 3.4 Variables and Data in Liver Registration 3.4.1 Observable 3.4.2 Unknowns
4 Generating Simulations of Deforming Organs 4.1 Organ Volume 4.2 Forces and Boundary Conditions 4.2.1 Surface Forces 4.2.2 Zero-Displacement Boundary Conditions 4.2.3 Surrounding Tissues and Ligaments 4.2.4 Gravity 4.2.5 Pressure 4.3 Simulation 4.3.1 Static Simulation 4.3.2 Dynamic Simulation 4.4 Surface Extraction 4.4.1 Partial Surface Extraction 4.4.2 Surface Noise 4.4.3 Partial Surface Displacement 4.5 Voxelization 4.5.1 Voxelizing the Liver Geometry 4.5.2 Voxelizing the Displacement Field 4.5.3 Voxelizing Boundary Conditions 4.6 Pruning Dataset: Removing Unwanted Results 4.7 Data Augmentation
5 Deep Neural Networks for Biomechanical Simulation 5.1 Training Data 5.2 Network Architecture 5.3 Loss Functions and Training
6 Deep Neural Networks for Non-Rigid Registration 6.1 Training Data 6.2 Architecture 6.3 Loss 6.4 Training 6.5 Mesh Deformation 6.6 Example Application
7 Intraoperative Prototype 7.1 Image Acquisition 7.2 Stereo Calibration 7.3 Image Rectification, Disparity and Depth Estimation 7.4 Liver Segmentation 7.4.1 Synthetic Image Generation 7.4.2 Automatic Segmentation 7.4.3 Manual Segmentation Modifier 7.5 SLAM 7.6 Dense Reconstruction 7.7 Rigid Registration 7.8 Non-Rigid Registration 7.9 Rendering 7.10 Robot Operating System
8 Evaluation 8.1 Evaluation Datasets 8.1.1 In-Silico 8.1.2 Phantom Torso and Liver 8.1.3 In-Vivo, Human, Breathing Motion 8.1.4 In-Vivo, Human, Laparoscopy 8.2 Metrics 8.2.1 Mean Displacement Error 8.2.2 Target Registration Error (TRE) 8.2.3 Chamfer Distance 8.2.4 Volumetric Change 8.3 Evaluation of the Synthetic Training Data 8.4 Data-Driven Biomechanical Model (DDBM) 8.4.1 Amount of Intraoperative Surface 8.4.2 Dynamic Simulation 8.5 Volume to Surface Registration Network (V2S-Net) 8.5.1 Amount of Intraoperative Surface 8.5.2 Dependency on Initial Rigid Alignment 8.5.3 Registration Accuracy in Comparison to Surface Noise 8.5.4 Registration Accuracy in Comparison to Material Stiffness 8.5.5 Chamfer Distance vs. Mean Displacement Error 8.5.6 In-Vivo, Human Breathing Motion 8.6 Full Intraoperative Pipeline 8.6.1 Intraoperative Reconstruction: SLAM and Intraoperative Map 8.6.2 Full Pipeline on Laparoscopic Human Data 8.7 Timing
9 Discussion 9.1 Intraoperative Model 9.2 Physical Accuracy 9.3 Limitations in Training Data 9.4 Limitations Caused by Difference in Pre- and Intraoperative Modalities 9.5 Ambiguity 9.6 Intraoperative Prototype
10 Conclusion
11 List of Publications
List of Figures
Bibliography
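A minimal sketch may help make the data-driven idea concrete: a small 3D encoder-decoder that maps a two-channel voxel grid (preoperative liver volume plus partial intraoperative surface) to a per-voxel displacement field. This is an illustrative PyTorch sketch under assumed layer sizes and input encoding, not the architecture from the thesis:

```python
# Illustrative sketch of a voxel-to-displacement network in the spirit of
# the thesis. All layer sizes, the two-channel input encoding, and the name
# DisplacementNet are assumptions, not the author's exact design.
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # Input: channel 0 = voxelized preoperative liver volume,
        #        channel 1 = voxelized partial intraoperative surface.
        self.encoder = nn.Sequential(
            nn.Conv3d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 2 * channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(2 * channels, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 3, 3, padding=1),  # (dx, dy, dz) per voxel
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(voxels))

# Usage: a batch of one 64^3 grid yields a (1, 3, 64, 64, 64) displacement field.
net = DisplacementNet()
grid = torch.zeros(1, 2, 64, 64, 64)
displacement = net(grid)
```

In the setting described above, such a network would be trained purely on synthetic deformations, so the voxelization of the simulation outputs (Chapter 4 in the outline) is what couples the biomechanical model to the learning step.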
142

Digitalisierung in der Bauteilreinigung: Chancen für die Qualitätssicherung

Windisch, Markus (31 May 2019)
Quality control of cleaning processes requires the systematic acquisition of input, process, and output variables, and sensors for automatic measurement are only available for some of them. Since the input variables (the contamination state) cannot be fully measured inline, and since the effect of residual contamination on the subsequent process, which is the basis for setting limit values, is not fully known, upstream and downstream processes must be included in the data acquisition. In this talk, Dipl.-Ing. Markus Windisch (head of the component cleaning team at the Fraunhofer IVV Dresden) explains the development of a system solution for process data acquisition, highlights industry-specific challenges and the practical benefits of its use, and gives an outlook on the future integration of self-learning assistance systems.
143

Learning to Predict Dense Correspondences for 6D Pose Estimation

Brachmann, Eric (17 January 2018)
Object pose estimation is an important problem in computer vision with applications in robotics, augmented reality and many other areas. An established strategy for object pose estimation consists of, firstly, finding correspondences between the image and the object’s reference frame, and, secondly, estimating the pose from outlier-free correspondences using Random Sample Consensus (RANSAC). The first step, namely finding correspondences, is difficult because object appearance varies depending on perspective, lighting and many other factors. Traditionally, correspondences have been established using handcrafted methods like sparse feature pipelines. In this thesis, we introduce a dense correspondence representation for objects, called object coordinates, which can be learned. By learning object coordinates, our pose estimation pipeline adapts to various aspects of the task at hand. It works well for diverse object types, from small objects to entire rooms, varying object attributes, like textured or texture-less objects, and different input modalities, like RGB-D or RGB images. The concept of object coordinates allows us to easily model and exploit uncertainty as part of the pipeline such that even repeating structures or areas with little texture can contribute to a good solution. Although we can train object coordinate predictors independent of the full pipeline and achieve good results, training the pipeline in an end-to-end fashion is desirable. It enables the object coordinate predictor to adapt its output to the specificities of following steps in the pose estimation pipeline. Unfortunately, the RANSAC component of the pipeline is non-differentiable which prohibits end-to-end training. Adopting techniques from reinforcement learning, we introduce Differentiable Sample Consensus (DSAC), a formulation of RANSAC which allows us to train the pose estimation pipeline in an end-to-end fashion by minimizing the expectation of the final pose error.
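The central trick, soft hypothesis selection, fits in a few lines. The sketch below is a generic illustration of the expected-loss formulation, with hypothesis scores and per-hypothesis pose errors supplied as placeholder tensors (in the full pipeline they would come from a learned scoring function and a ground-truth comparison):

```python
# Generic sketch of the DSAC idea: replace RANSAC's hard argmax over pose
# hypotheses with a probabilistic selection, so that the expected pose error
# becomes differentiable w.r.t. the scores. Placeholder inputs, not the
# thesis implementation.
import torch

def dsac_expected_loss(scores: torch.Tensor, pose_errors: torch.Tensor,
                       alpha: float = 1.0) -> torch.Tensor:
    """scores: (H,) learned hypothesis scores; pose_errors: (H,) error of
    each hypothesis w.r.t. ground truth. Returns the expected error under
    the softmax selection distribution."""
    probs = torch.softmax(alpha * scores, dim=0)   # selection distribution
    return (probs * pose_errors).sum()             # differentiable expectation

# Usage with dummy values: gradients flow into the scores (and, end-to-end,
# back into the network that produced the correspondences).
scores = torch.randn(64, requires_grad=True)
errors = torch.rand(64)
loss = dsac_expected_loss(scores, errors)
loss.backward()
```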
144

Hypothesis Generation for Object Pose Estimation: From Local Sampling to Global Reasoning

Michel, Frank (14 February 2019)
Pose estimation has been studied since the early days of computer vision. The task of object pose estimation is to determine the transformation that maps an object from its inherent coordinate system into the camera-centric coordinate system. This transformation describes the translation of the object relative to the camera and the orientation of the object in three-dimensional space. Knowledge of an object's pose is a key ingredient in many application scenarios, such as robotic grasping, augmented reality, autonomous navigation and surveillance. A general estimation pipeline consists of four steps: extraction of distinctive points, creation of a hypothesis pool, hypothesis verification and, finally, hypothesis refinement. In this work, we focus on the hypothesis generation process. We show that it is beneficial to utilize geometric knowledge in this process. We address the problem of hypothesis generation for articulated objects. Instead of considering each object part individually, we model the object as a kinematic chain. This enables us to use the inner-part relationships when sampling pose hypotheses, so that we need only K correspondences for objects consisting of K parts. We show that applying geometric knowledge about part relationships improves estimation accuracy under severe self-occlusion and low-quality correspondence predictions. In an extension, we employ global reasoning within the hypothesis generation process instead of sampling 6D pose hypotheses locally. To this end, we formulate a Conditional Random Field (CRF) operating on the image as a whole, inferring those pixels that are consistent with the 6D pose. Within the CRF we use a strong geometric check that is able to assess the quality of correspondence pairs. We show that our global geometric check improves the accuracy of pose estimation under heavy occlusion.
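The rigid building block underneath any such pose hypothesis is a closed-form solve from a minimal set of 3D-3D correspondences; for an articulated object the work above chains such solves over the kinematic structure so that K correspondences suffice for K parts. Below is a sketch of that building block, the Kabsch algorithm; it is textbook material shown for illustration, not code from the thesis:

```python
# Sketch of one step in hypothesis generation: a rigid pose from a minimal
# set of 3D-3D correspondences via the Kabsch algorithm.
import numpy as np

def kabsch_pose(obj_pts: np.ndarray, cam_pts: np.ndarray):
    """obj_pts, cam_pts: (N, 3) matched points in object and camera frames.
    Returns rotation R and translation t with cam ~= R @ obj + t."""
    mu_o, mu_c = obj_pts.mean(0), cam_pts.mean(0)
    H = (obj_pts - mu_o).T @ (cam_pts - mu_c)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_o
    return R, t

# Usage: a minimal set of 3 correspondences yields one pose hypothesis.
rng = np.random.default_rng(0)
obj = rng.random((3, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, 0.2, 0.3])
cam = obj @ R_true.T + t_true
R, t = kabsch_pose(obj, cam)                       # recovers R_true, t_true
```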
145

Binary Geometric Transformer Descriptor Based Machine Learning for Pattern Recognition in Design Layout

Treska, Fergo (13 September 2023)
This thesis proposes a novel algorithm for pixel-based pattern recognition in design layouts that offers simplicity, speed, and accuracy in recognizing arbitrary patterns. The recognized patterns can later be used to detect problematic patterns in the lithography process, so that they can be removed or improved earlier in the design stage.

Contents:
Abstract, List of Figures, List of Tables, List of Abbreviations
Chapter 1: Introduction 1.1 Motivation 1.2 Related Work 1.3 Purpose and Research Question 1.4 Approach and Methodology 1.5 Scope and Limitation 1.6 Target Group 1.7 Outline
Chapter 2: Theoretical Background 2.1 Problematic Pattern in Computational Lithography 2.2 Optical Proximity Effect 2.3 Taxonomy of Pattern Recognition 2.3.1 Feature Generation 2.3.2 Classifier Model 2.3.3 System Evaluation 2.4 Feature Selection Technique 2.4.1 Wrapper-Based Methods 2.4.2 Average-Based Methods 2.4.3 Binary Geometrical Transformation 2.4.3.1 Image Interpolation 2.4.3.2 Geometric Transformation 2.4.3.2.1 Forward Mapping 2.4.3.2.2 Inverse Mapping 2.4.3.3 Thresholding 2.5 Machine Learning Algorithm 2.5.1 Linear Classifier 2.5.2 Linear Discriminant Analysis (LDA) 2.5.3 Maximum Likelihood 2.6 Scoring (Metrics to Measure Classifier Model Quality) 2.6.1 Accuracy 2.6.2 Sensitivity 2.6.3 Specificity 2.6.4 Precision
Chapter 3: Method 3.1 Problem Formulation 3.1.1 T2T Pattern 3.1.2 Iso-Dense Pattern 3.1.3 Hypothetical Hotspot Pattern 3.2 Classification System 3.2.1 Wrapper and Average-Based 3.2.2 Binary Geometric Transformation Based 3.3 Window-Based Raster Scan 3.3.1 Scanning Algorithm 3.4 Classifier Design 3.4.1 Training Phase 3.4.2 Discriminant Coefficient Function 3.4.3 SigmaDi 3.4.4 Maximum Posterior Probability 3.4.5 Classifier Model Block 3.5 Weka 3.8 3.6 Average-Based Influence 3.7 BGT-Based Model
Chapter 4: Results 4.1 Wrapper and Average-Based LDA Classifier 4.2 BGT-Based LDA with SigmaDi Classifier 4.3 Estimation Output 4.4 Probability Function
Chapter 5: Conclusion 5.1 Conclusions 5.2 Future Research
Bibliography
Selbstständigkeitserklärung
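As a rough illustration of the window-based raster scan with an LDA classifier named in the outline, the following sketch slides a fixed-size window over a binary layout and flags windows the classifier labels as hotspots. The window size, the raw-pixel features, and the synthetic training data are all assumptions:

```python
# Illustrative window-based raster scan with an LDA classifier. Synthetic
# data stands in for labeled layout clips; not the thesis' feature pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def raster_scan_classify(layout: np.ndarray, clf, win: int = 16):
    """Slide a win x win window over a binary layout image and return the
    top-left corners of windows the classifier flags as problematic."""
    h, w = layout.shape
    coords, patches = [], []
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            coords.append((y, x))
            patches.append(layout[y:y + win, x:x + win].ravel())
    pred = clf.predict(np.asarray(patches))        # classify all windows at once
    return [c for c, p in zip(coords, pred) if p == 1]

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 16 * 16)).astype(float)  # toy training clips
y = rng.integers(0, 2, size=200)                           # 1 = hotspot class
clf = LinearDiscriminantAnalysis().fit(X, y)
layout = rng.integers(0, 2, size=(64, 64)).astype(float)
hotspots = raster_scan_classify(layout, clf)
```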
146

Machine Learning in Detecting Auditory Sequences in Magnetoencephalography Data: Research Project in Computational Modelling and Simulation

Shaikh, Mohd Faraz (17 November 2022)
Does your brain replay your recent life experiences while you are resting? An open question in neuroscience is which events our brain replays and whether there is any correlation between the replay and the duration of the event. In this study, I investigated this question using magnetoencephalography data from an active-listening experiment. Magnetoencephalography (MEG) is a non-invasive neuroimaging technique used to study brain activity and to understand brain dynamics in perception and cognitive tasks, particularly in the fields of speech and hearing. It records the magnetic fields generated in the brain to detect brain activity. I built a machine learning pipeline which uses part of the experiment data to learn the sound patterns and then predicts the presence of sound in the later part of the recordings, during which the participants sat idle and no sound was played. The aim was to test for replay of the learned sound sequences in the post-listening period. I used a classification scheme to identify patterns in the MEG responses to different sound sequences in the post-task period. The study concluded that the sound sequences can be identified and distinguished above the theoretical chance level, which establishes the validity of the classifier. Furthermore, the classifier predicted the sound sequences in the post-listening period with very high probability, but more evidence is needed to validate the model's results for that period.
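A hypothetical version of such a decoding pipeline can be sketched with scikit-learn: train a classifier on labeled sound-evoked epochs, verify it beats chance with cross-validation, then scan the resting recording for high-probability predictions. The epoch dimensions, the classifier, and the 0.9 probability threshold are illustrative assumptions:

```python
# Sketch of the decoding idea: learn sound-evoked MEG patterns, then look
# for above-chance reactivation in the post-listening period. Random arrays
# stand in for real MEG epochs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_epochs, n_channels, n_times = 120, 64, 50
rng = np.random.default_rng(0)
X = rng.standard_normal((n_epochs, n_channels, n_times))  # stand-in MEG epochs
y = rng.integers(0, 4, n_epochs)                          # 4 sound sequences

clf = LogisticRegression(max_iter=1000)
X_flat = X.reshape(n_epochs, -1)
# On real epochs, cross-validated accuracy above chance (0.25 here) is what
# validates the classifier, as the abstract describes.
print("CV accuracy:", cross_val_score(clf, X_flat, y, cv=5).mean())

# Scan resting-state windows and keep only confident predictions.
clf.fit(X_flat, y)
rest = rng.standard_normal((300, n_channels, n_times)).reshape(300, -1)
proba = clf.predict_proba(rest)
replay_events = np.where(proba.max(axis=1) > 0.9)[0]      # high-probability windows
```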
147

Local Learning Strategies for Data Management Components

Woltmann, Lucas (18 December 2023)
In a world with an ever-increasing amount of data processed, providing tools for high-quality and fast data processing is imperative. Database Management Systems (DBMSs) are complex adaptive systems supplying reliable and fast data analysis and storage capabilities. To boost the usability of DBMSs even further, a core research area of databases is performance optimization, especially for query processing. With the successful application of Artificial Intelligence (AI) and Machine Learning (ML) in other research areas, the question arises in the database community whether ML can also be beneficial for better data processing in DBMSs. This question has spawned various works successfully replacing DBMS components with ML models. However, these global models have four common drawbacks due to their large, complex, and inflexible one-size-fits-all structures: the high complexity of model architectures, lower prediction quality, slow training, and slow forward passes. All these drawbacks stem from the core expectation of solving a certain problem with one large model at once. The full potential of ML models as DBMS components cannot be reached with a global model because the model's complexity is outmatched by the problem's complexity. Therefore, we present a novel general strategy for using ML models to solve data management problems and to replace DBMS components. The novel strategy is based on four advantages derived from the four disadvantages of global learning strategies. In essence, our local learning strategy uses divide-and-conquer to place less complex but more expressive models that specialize in sub-problems of a data management problem. It splits the problem space into less complex parts that can be solved with lightweight models, circumventing the one-size-fits-all characteristics and drawbacks of global models. We show that this approach and the lower complexity of the specialized local models lead to better problem-solving quality and DBMS performance. The local learning strategy is applied and evaluated in three crucial use cases in which DBMS components are replaced with ML models: cardinality estimation, query optimizer hinting, and integer algorithm selection. In all three applications, the benefits of the local learning strategy are demonstrated and compared to related work. We also generalize the strategy's usability for broader application and formulate best practices with instructions for others.
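The divide-and-conquer idea can be illustrated with a toy regressor that partitions the input space and fits one lightweight model per partition. The binning rule and the per-partition linear models below are assumptions for illustration, not the thesis' actual estimators:

```python
# Toy illustration of the local-learning strategy: split the problem space
# and train one lightweight model per partition instead of a global model.
import numpy as np
from sklearn.linear_model import LinearRegression

class LocalLearner:
    def __init__(self, n_bins: int = 4):
        self.n_bins = n_bins
        self.models = {}

    def _bin(self, X):
        # Divide-and-conquer: route each query by its first feature.
        return np.clip((X[:, 0] * self.n_bins).astype(int), 0, self.n_bins - 1)

    def fit(self, X, y):
        bins = self._bin(X)
        for b in range(self.n_bins):
            mask = bins == b
            self.models[b] = LinearRegression().fit(X[mask], y[mask])
        return self

    def predict(self, X):
        bins, out = self._bin(X), np.empty(len(X))
        for b, model in self.models.items():
            mask = bins == b
            if mask.any():
                out[mask] = model.predict(X[mask])
        return out

# E.g. cardinality estimation: query features in, estimated row count out.
rng = np.random.default_rng(1)
X = rng.random((1000, 3))
y = np.sin(4 * X[:, 0]) + X[:, 1]          # nonlinear target a global linear model misses
print(LocalLearner().fit(X, y).predict(X[:5]))
```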
148

Communication-based UAV Swarm Missions

Yang, Huan (30 October 2023)
Unmanned aerial vehicles (UAVs) have developed rapidly in recent years due to technological advances. UAV technology can be applied to a wide range of applications in surveillance, rescue, agriculture and transport. Problems in these areas can be mitigated by combining swarms of drones with several complementary technologies. For example, when a swarm of drones is under attack, it may not be able to obtain the position feedback provided by the Global Positioning System (GPS). This poses a new challenge for a UAV swarm fulfilling a specific mission. This thesis aims to use as few sensors as possible on the UAVs and to design the smallest possible information transfer between them, so that the swarm can maintain the shape of its formation in flight and follow a predetermined trajectory. The thesis presents Extended Kalman Filter methods for navigating autonomously in a GPS-denied environment. UAV formation control and distributed communication methods are also discussed and described in detail.
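A generic sketch of one Extended Kalman Filter predict/update cycle is shown below; the motion model f, measurement model h, and their Jacobians F and H are left abstract, since the thesis' concrete models (for instance, which onboard measurements replace GPS) are not reproduced here:

```python
# Generic EKF predict/update cycle for GPS-denied state estimation.
# The models are placeholders; in the swarm setting, z could for example be
# a relative range or bearing to a neighbouring UAV.
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One EKF cycle. f, h are the (nonlinear) motion/measurement models;
    F, H return their Jacobians at the current estimate."""
    # Predict
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    y = z - h(x_pred)                          # innovation
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R               # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Tiny usage example: 1D constant-position model with direct observation.
f = lambda x, u: x
F = lambda x, u: np.eye(1)
h = lambda x: x
H = lambda x: np.eye(1)
x, P = np.array([0.0]), np.eye(1)
x, P = ekf_step(x, P, None, np.array([1.0]), f, F, h, H,
                Q=0.01 * np.eye(1), R=0.1 * np.eye(1))
```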
149

Reconstructing Dynamical Systems: From Stochastic Differential Equations to Machine Learning

Hassanibesheli, Forough (28 March 2023)
Modeling complex systems with large numbers of degrees of freedom has become a grand challenge over the past decades. Typically, only a few variables of complex systems are observed in terms of measured time series, while the majority of them, which potentially interact with the observed ones, remain hidden. Throughout this thesis, we tackle the problem of reconstructing and predicting the underlying dynamics of complex systems using different data-driven approaches. In the first part, we address the inverse problem of inferring an unknown network structure of complex systems, reflecting spreading phenomena, from observed event series. We study the pairwise statistical similarity between the sequences of event timings at all nodes through event synchronization (ES) and event coincidence analysis (ECA), relying on the idea that functional connectivity can serve as a proxy for structural connectivity. In the second part, we focus on reconstructing the underlying dynamics of complex systems from their dominant macroscopic variables using different stochastic differential equations (SDEs). We investigate the performance of three approaches: the Langevin equation (LE), the generalized Langevin equation (GLE), and empirical model reduction (EMR). Our results reveal that the LE performs better for systems with weak memory, while it fails to reconstruct the underlying dynamics of systems with memory effects and colored-noise forcing. In these situations, the GLE and EMR are more suitable candidates, since they account for the interactions between observed and unobserved variables in terms of memory effects. In the last part of this thesis, we develop a model based on the echo state network (ESN), combined with the past noise forecasting (PNF) method, to predict real-world complex systems. Our results show that the proposed model captures the crucial features of the underlying dynamics of climate variability.
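For intuition, the forward problem of the Langevin picture, dx = a(x)dt + b(x)dW, can be integrated with the Euler-Maruyama scheme. The drift and diffusion below are toy choices (a double-well system with constant white-noise forcing), not those fitted in the thesis:

```python
# Euler-Maruyama integration of a Langevin equation dx = a(x)dt + b(x)dW.
# Toy drift/diffusion for illustration only.
import numpy as np

def euler_maruyama(a, b, x0, dt, n_steps, rng):
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.standard_normal() * np.sqrt(dt)   # Wiener increment
        x[k + 1] = x[k] + a(x[k]) * dt + b(x[k]) * dW
    return x

rng = np.random.default_rng(42)
traj = euler_maruyama(a=lambda x: x - x**3,        # drift: double-well potential
                      b=lambda x: 0.5,             # constant (white) noise amplitude
                      x0=0.0, dt=0.01, n_steps=10_000, rng=rng)
```

Reconstructing such an equation from data runs this logic in reverse: the drift and diffusion are estimated from the observed time series, with the GLE and EMR variants adding memory terms for the unobserved degrees of freedom.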
150

Exploration maschineller Verfahren zur Entwicklung eines methodischen Frameworks zur Evaluierung wissenschaftlicher Texte im Forschungsmanagement

Baumgart, Matthias (26 February 2024)
The complexity of research management at universities and universities of applied sciences has increased in recent years, both for researchers and at the administrative level. In particular, writing and processing texts for research proposals, publications, and other scientific documents requires considerable effort. At the same time, methods and technologies exist in the fields of information retrieval, machine learning, and semantic technologies that are well suited to analyzing and evaluating such texts. This work aims to reduce effort across the life cycle of publicly funded research projects. It surveys current developments and technologies in order to derive criteria for an overall architecture that qualitatively annotates, trains on, and evaluates scientific texts. The resulting framework, named FELIX, serves as a prototypical system for computer-assisted evaluation of scientific texts. Data corpora from research proposals and publications were used for exploratory experiments based, among other methods, on machine learning. FELIX enables the analysis of texts and metadata, classification according to defined criteria, and prediction of whether research proposals will be approved. The design and evaluation of FELIX yielded scientific and practical implications for optimizing research management.

Contents:
1. Motivation
2. Theoretical Foundation of Digital Research Management
3. Technological Methods and Strategies
4. Design of a System Architecture
5. Exploratory Study on Computer-Assisted Evaluation of Scientific Texts
6. Summary and Outlook
Appendix
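A deliberately simplified sketch of the proposal-approval prediction step might look as follows; the toy corpus, the labels, and the TF-IDF-plus-logistic-regression pipeline are assumptions for illustration and say nothing about FELIX's actual feature engineering:

```python
# Toy sketch of predicting funding decisions from proposal text. The corpus
# and labels are invented; FELIX's real pipeline is not reproduced here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

proposals = [
    "We develop a novel sensor platform for cleaning process monitoring.",
    "This project explores machine learning for text evaluation.",
    "A vague idea about doing research on various interesting topics.",
    "We propose a validated method with clear milestones and partners.",
]
approved = np.array([1, 1, 0, 1])  # hypothetical funding decisions

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(proposals, approved)
print(model.predict_proba(["A structured proposal with defined work packages."]))
```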
