  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
261

New Physics Probes at Present/Future Hadron Colliders via Vh Production

Englert, Philipp 26 April 2023 (has links)
In this thesis, we utilise the framework of Effective Field Theories, more specifically the Standard Model Effective Field Theory (SMEFT), to parameterise New-Physics (BSM) effects in a model-independent way. We demonstrate the relevance of precision measurements at both current and future hadron colliders by studying Vh-diboson-production processes. These processes allow us to probe a set of dimension-6 operators that generate BSM effects growing with the centre-of-mass energy. More specifically, we consider the leptonic decay channels of the vector bosons and two different decay modes of the Higgs boson: the diphoton channel and the hadronic h->bb channel. The diphoton channel is characterised by a clean signature that can be separated very well from the relevant backgrounds with relatively simple methods. However, due to the small rate of this Higgs-decay channel, these processes will only become viable probes of New-Physics effects at the FCC-hh. Thanks to the large h->bb branching ratio, the Vh(->bb) channel already provides competitive sensitivity to BSM effects at the LHC, but it suffers from large QCD-induced backgrounds that require more sophisticated analysis techniques to reach this level of BSM sensitivity. We derive the expected bounds on the aforementioned dimension-6 operators from the Vh(->gamma gamma) channel at the FCC-hh and from the Vh(->bb) channel at LHC Run 3, the HL-LHC and the FCC-hh. Our study of the Vh(->bb) channel demonstrates that extracting bounds on BSM operators at hadron colliders can be a highly non-trivial task. Machine-learning algorithms can potentially be useful for analysing such complex event structures. We derive bounds using Boosted Decision Trees for the signal-background classification and compare them with the bounds from the previously discussed cut-and-count analysis. We find a mild improvement of O(few %) across the different operators.
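The BDT-based signal-background classification mentioned above can be sketched in a few lines. The following is a minimal illustration using scikit-learn's gradient-boosted trees on synthetic two-feature toy data; the features, distributions and numbers are invented for illustration and do not reproduce the thesis analysis.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 4000
# Toy kinematic features: signal events sit at higher values and near a mass peak
signal = np.column_stack([rng.normal(1.0, 0.5, n), rng.normal(125.0, 5.0, n)])
background = np.column_stack([rng.normal(0.0, 0.5, n), rng.normal(110.0, 15.0, n)])
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, random_state=0)
bdt.fit(X_train, y_train)
auc = roc_auc_score(y_test, bdt.predict_proba(X_test)[:, 1])
print(f"toy signal/background AUC: {auc:.2f}")
```

In a real analysis, the input features would be reconstructed event kinematics, and the BDT output would feed a binned likelihood from which the operator bounds are derived.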
262

Explainable deep learning classifiers for disease detection based on structural brain MRI data

Eitel, Fabian 14 November 2022 (has links)
Deep learning, and especially convolutional neural networks (CNNs), has a high potential for integration into clinical decision-support software for tasks such as diagnosis and prediction of disease courses. In five experimental studies, this thesis examines the application of CNNs to structural MRI data for diagnosing neurological diseases. Specifically, multiple sclerosis and Alzheimer's disease were used as classification targets due to their high prevalence, data availability and apparent biomarkers in structural MRI data. The classification task is challenging, since pathology can be highly individual and difficult for human experts to detect, and since sample sizes are small owing to the high acquisition cost and sensitivity of medical imaging data. A roadblock to adopting CNNs in clinical practice is their lack of interpretability. Therefore, after optimising the machine-learning models for predictive performance (e.g. balanced accuracy), we employed explainability methods to study the reliability and validity of the trained models, producing heatmaps that show the relevance of individual image regions to the model. The deep learning models achieved good predictive performance of over 87% balanced accuracy on all tasks, and the explainability heatmaps were coherent with known clinical biomarkers for both disorders. Explainability methods were compared quantitatively using brain atlases, revealing shortcomings in their robustness. Further investigations showed clear benefits of transfer learning and image registration for model performance. Lastly, a new CNN layer type was introduced that incorporates a prior on the spatial homogeneity of neuro-MRI data. CNNs excel on natural images, which possess spatial heterogeneity; even though MRI data and natural images share computational similarities, the composition and orientation of neuro-MRI is very distinct. The introduced patch-individual filter (PIF) layer breaks the spatial-invariance assumption of CNNs and reduces convergence time on different data sets without reducing predictive performance. The presented work highlights many challenges that CNNs face for disease diagnosis on MRI data and defines as well as tests strategies to overcome them.
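The relevance heatmaps described above can be illustrated with a simple occlusion-sensitivity sketch: mask each image patch and record how much the class probability drops. The example below uses a small scikit-learn classifier on 8x8 digit images purely as a stand-in for a CNN on MRI volumes; the model, data and patch size are illustrative assumptions, not the thesis setup (which used dedicated explainable-AI methods).

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

# Train a small classifier on 8x8 digit images (stand-in for a CNN on MRI data)
digits = load_digits()
X, y = digits.data / 16.0, digits.target
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(X, y)

def occlusion_heatmap(image_flat, label, patch=2):
    """Relevance of each patch = drop in class probability when it is masked."""
    img = image_flat.reshape(8, 8)
    base = clf.predict_proba([image_flat])[0][label]
    heat = np.zeros((8, 8))
    for i in range(0, 8, patch):
        for j in range(0, 8, patch):
            masked = img.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            drop = base - clf.predict_proba([masked.ravel()])[0][label]
            heat[i:i + patch, j:j + patch] = drop
    return heat

heat = occlusion_heatmap(X[0], y[0])
print("most relevant patch (row, col):", np.unravel_index(heat.argmax(), heat.shape))
```

Comparing such heatmaps against anatomical atlases, as done in the thesis, is what allows a quantitative check of whether the model attends to clinically plausible regions.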
263

Modelling land use and land cover change on the Mongolian Plateau

Batunacun 08 December 2020 (has links)
The aims of this thesis are to gain an integrated and systematic understanding of the processes and determinants of land degradation on the Mongolian Plateau. Xilingol was chosen as a suitable example, mainly because it is covered by vast grassland and has experienced almost all ecological policies implemented in China. Two distinct phases were identified in this region: 1975-2000 and 2000-2015. During the first phase (up to 2000), land degradation was the dominant land-use change process, accounting for 11.4% of the total area. In this phase, human disturbance was the major driver in eight counties, whereas water conditions were the dominant driver in six counties. During the second phase (post-2000), land restoration increased (12.0% of the total area), whereas degradation continued, resulting in a further 9.5% of degraded land. In this phase, urbanisation became the dominant driver of land degradation in seven counties, while the effects of human disturbance and water availability decreased. After identifying the major drivers of degradation, the complex relationships between the drivers and grassland degradation were captured. The results indicated that the distances to dense, moderately dense and sparse grassland, together with sheep density, were responsible for the grassland degradation dynamics. Methodologically, a clustering method, partial-order theory and Hasse diagram techniques were first used to identify the major drivers of land degradation at the county level. Subsequently, a machine-learning approach, XGBoost (eXtreme Gradient Boosting), was used to predict the dynamics of grassland degradation. Moreover, SHAP (SHapley Additive exPlanations) values were used to open up the black-box model, and the primary driver was extracted for each pixel showing degradation.
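The driver-attribution workflow described above, fit a boosted-tree model on per-pixel drivers and then ask which driver matters most, can be sketched as follows. This is a minimal illustration on synthetic data using scikit-learn's gradient boosting and permutation importance as stand-ins for the thesis's XGBoost/SHAP pipeline; all driver names, ranges and the labelling rule are invented for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
# Hypothetical per-pixel drivers (illustrative, not the thesis variables)
sheep_density = rng.uniform(0, 10, n)        # animals per km^2
dist_dense_grass = rng.uniform(0, 50, n)     # km to dense grassland
precipitation = rng.uniform(100, 400, n)     # mm per year
X = np.column_stack([sheep_density, dist_dense_grass, precipitation])
# Toy rule: degradation likelier under high grazing pressure, far from dense cover
p = 1 / (1 + np.exp(-(0.8 * sheep_density + 0.1 * dist_dense_grass
                      - 0.02 * precipitation - 2)))
y = rng.random(n) < p

model = GradientBoostingClassifier(random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
names = ["sheep_density", "dist_dense_grass", "precipitation"]
for name, score in zip(names, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

SHAP goes one step further than this global ranking: it assigns a per-pixel contribution to each driver, which is what lets the thesis map the primary driver for every degraded pixel.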
264

Physics-based Machine Learning Approaches to Complex Systems and Climate Analysis

Gelbrecht, Maximilian 20 July 2021 (has links)
Complex systems such as the Earth's climate comprise many constituents that are interlinked through an intricate coupling structure. For the analysis of such systems, it therefore seems natural to bring together methods from network theory, dynamical systems theory and machine learning. By combining concepts from these fields, three novel approaches for the study of complex systems are considered throughout this thesis. In the first part, a novel complex-network construction method is introduced that is able to identify the most important wind paths of the South American Monsoon system. Aside from the importance of cross-equatorial flows, this analysis points to the impact Rossby wave trains have on both the precipitation and the low-level circulation. This connection is then explored further by showing that the precipitation is phase-coherent to the Rossby waves. As such, the first part of this thesis demonstrates how complex networks can be used to identify spatiotemporal variability patterns within large amounts of data, which are then analysed further with methods from nonlinear dynamics. Most complex systems exhibit a large number of possible asymptotic states. To investigate and track such states, Monte Carlo Basin Bifurcation analysis (MCBB), a novel numerical method, is introduced in the second part. Situated between the classical analysis with macroscopic order parameters and a more thorough, detailed bifurcation analysis, MCBB combines random sampling with clustering methods to identify and characterise the different asymptotic states and their basins of attraction. Forecasting complex systems is the next logical step, but it is not always straightforward how prior knowledge can be incorporated into data-driven methods. One possibility is to use Neural Partial Differential Equations. In the last part of the thesis, it is demonstrated how high-dimensional spatiotemporally chaotic systems can be modelled and predicted with such an approach.
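The core idea of MCBB, sample random initial conditions, integrate to the asymptotic state, and cluster the results to identify states and estimate basin volumes, can be shown on the simplest bistable system. The toy model, integrator and parameters below are illustrative assumptions, not the method's full machinery (which varies system parameters and uses tailored distance measures).

```python
import numpy as np
from sklearn.cluster import KMeans

# Bistable toy system dx/dt = x - x^3 with stable fixed points at -1 and +1
def integrate(x0, dt=0.05, steps=400):
    """Forward-Euler integration to the asymptotic state."""
    x = x0
    for _ in range(steps):
        x = x + dt * (x - x**3)
    return x

rng = np.random.default_rng(0)
x0s = rng.uniform(-3, 3, 500)                      # Monte Carlo initial conditions
finals = np.array([integrate(x0) for x0 in x0s]).reshape(-1, 1)

# Cluster the asymptotic states; cluster sizes estimate the basin volumes
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(finals)
centers = sorted(km.cluster_centers_.ravel())
sizes = np.bincount(km.labels_) / len(x0s)
print("asymptotic states ~", np.round(centers, 2),
      "basin fractions ~", np.round(sizes, 2))
```

For this symmetric double-well system the two clusters recover the fixed points near -1 and +1 with roughly equal basin fractions, which is exactly the kind of summary MCBB produces for far more complicated networks of oscillators.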
265

Prediction of designer-recombinases for DNA editing with generative deep learning

Schmitt, Lukas Theo 17 January 2024 (has links)
Site-specific tyrosine-type recombinases are effective tools for genome engineering, with the first engineered variants having demonstrated therapeutic potential. So far, adapting designer-recombinases to new DNA target-site selectivity has been achieved mostly through iterative cycles of directed molecular evolution. While effective, directed molecular evolution is laborious and time-consuming. To accelerate the development of designer-recombinases, I evaluated two sequencing approaches and gathered the sequence information of over two million Cre-like recombinase sequences evolved for 89 different target sites. With this information, I first investigated the sequence compositions and residue changes of the recombinases to further our understanding of their target-site selectivity. The complexity of the data led me to a generative deep-learning approach. Using the sequence data, I trained a conditional variational autoencoder called RecGen (Recombinase Generator) that is capable of generating novel recombinases for a given target site. Computational evaluation of the sequences revealed that known recombinases functional on a desired target site are generally more similar to the RecGen-predicted recombinases than to other recombinase libraries. Additionally, I showed experimentally that predicted recombinases for known target sites are at least as active as the evolved recombinases. Finally, I also showed experimentally that 4 out of 10 recombinases predicted for novel target sites are capable of excising their respective target sites. In addition to RecGen, I developed a new method capable of accurately sequencing recombinases with nanopore sequencing while simultaneously counting DNA editing events. The data from this method should enable the next development iteration of RecGen.
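A conditional generative model like the one described needs the protein sequence and its condition (the DNA target site) encoded as one numeric input. The sketch below shows a plain one-hot featurisation; the protein fragment and target-site sequence are hypothetical placeholders, and the real RecGen encoding may differ.

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"   # 20 amino acids
DNA = "ACGT"

def one_hot(seq, alphabet):
    """One-hot encode a sequence over the given alphabet."""
    idx = {c: i for i, c in enumerate(alphabet)}
    out = np.zeros((len(seq), len(alphabet)))
    for pos, c in enumerate(seq):
        out[pos, idx[c]] = 1.0
    return out

# Hypothetical toy inputs: a short protein fragment and a DNA target half-site
protein = "MSNLLTVHQN"          # illustrative 10-residue fragment
target_site = "ATAACTTCGTATA"   # illustrative 13-bp sequence

# A conditional VAE sees the recombinase encoding together with the
# condition (the target site) as one flattened input vector.
x = np.concatenate([one_hot(protein, AA).ravel(),
                    one_hot(target_site, DNA).ravel()])
print("conditional input length:", x.size)
```

At generation time the decoder is fed a latent sample plus the target-site encoding alone, which is what makes the model "conditional": the same latent space yields different recombinases for different target sites.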
266

Mapping rill soil erosion in agricultural fields with UAV-borne remote sensing data

Malinowski, Radek, Heckrath, Goswin, Rybicki, Marcin, Eltner, Anette 27 February 2024 (has links)
Soil erosion by water is a main form of land degradation worldwide and is addressed, among others, in the United Nations Sustainable Development Goals. For the mitigation of erosion consequences and adequate management of affected areas, however, reliable information on the magnitude and spatial patterns of erosion is needed. Although this need is often addressed by erosion modelling, precise erosion monitoring is necessary for the calibration and validation of erosion models and for studying erosion patterns in landscapes. Conventional methods for quantifying rill erosion are based on labour-intensive field measurements. In contrast, remote sensing techniques promise fast, non-invasive, systematic and larger-scale surveying. Thus, the main objective of this study was to develop and evaluate automated and transferable methodologies for mapping the spatial extent of erosion rills from a single acquisition of remote sensing data. Data collected by an uncrewed aerial vehicle (UAV) were used to derive a highly detailed digital elevation model (DEM) of the analysed area. Rills were classified by two methods with different settings. One approach was based on a series of decision rules applied to DEM-derived geomorphological terrain attributes. The second approach utilised the random forest machine-learning algorithm. The methods were tested on three agricultural fields representing different erosion patterns and vegetation covers. Our study showed that the proposed methods can recognise rills with accuracies between 80 and 90%, depending on rill characteristics. In some cases, however, the methods were sensitive to very small rill incisions and to rills whose geometry resembles that of other landscape features, and their performance was influenced by vegetation structure and cover. Despite these challenges, the introduced approach was capable of mapping rills fully automatically at the field scale and can therefore support a fast and flexible assessment of erosion magnitudes.
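The second approach, a random forest classifying pixels from DEM-derived terrain attributes, can be sketched as below. The attribute set, value ranges and the rule generating the toy labels are illustrative assumptions; the study's actual feature set and training data differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 3000
# Hypothetical DEM-derived attributes per pixel (illustrative set)
slope = rng.uniform(0, 15, n)               # degrees
plan_curvature = rng.normal(0, 1, n)        # concave < 0 < convex
flow_accumulation = rng.lognormal(2, 1, n)  # upslope contributing area
X = np.column_stack([slope, plan_curvature, flow_accumulation])

# Toy labels: rill pixels tend to be concave, high-flow and steeper
score = 0.3 * slope - 1.5 * plan_curvature + 0.05 * flow_accumulation
y = score > np.quantile(score, 0.85)        # ~15% of pixels are rills

rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(rf, X, y, cv=3).mean()
print(f"toy rill-pixel classification accuracy: {acc:.2f}")
```

In practice such a per-pixel classification is followed by morphological post-processing to merge pixels into connected rill segments, which is where sensitivity to very small incisions shows up.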
267

OMOP CDM Can Facilitate Data-Driven Studies for Cancer Prediction: A Systematic Review

Ahmadi, Najia, Peng, Yuan, Wolfien, Markus, Zoch, Michéle, Sedlmayr, Martin 22 January 2024 (has links)
The current generation of sequencing technologies has led to significant advances in identifying novel disease-associated mutations and has generated large amounts of data in a high-throughput manner. Such data, in conjunction with clinical routine data, have proven highly useful for deriving population-level and patient-level predictions, especially in the field of cancer precision medicine. However, data harmonization across multiple national and international clinical sites, an essential step for assessing the events and outcomes associated with patients, is currently not adequately addressed. The Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) is an internationally established research data model introduced by the Observational Health Data Sciences and Informatics (OHDSI) community to overcome this issue. To address the needs of cancer research, the genomic vocabulary extension was introduced in 2020 to support the standardization of subsequent data analysis. In this review, we evaluate the current potential of the OMOP CDM for cancer prediction and how comprehensively the genomic vocabulary extension of the OMOP can serve the current needs of AI-based predictions. To this end, we systematically screened the literature for articles that use the OMOP CDM in predictive analyses of cancer and investigated the underlying predictive models and tools. Interestingly, we found 248 articles, most of which use the OMOP for harmonizing their data, but only 5 apply predictive algorithms to OMOP-based data and fulfill our criteria. These studies present multicentric investigations in which the OMOP played an essential role in discovering and optimizing machine learning (ML)-based models. Ultimately, the use of the OMOP CDM leads to standardized data-driven studies across multiple clinical sites and provides a more solid basis for utilizing, e.g., ML models that can be reused and combined in early prediction, diagnosis, and improvement of personalized cancer care and biomarker discovery.
268

Leben mit Python

Piko Koch, Dorothea 28 May 2024 (has links)
This is a brief overview of Python projects outside of professional applications. Contents: 1. Introduction 2. Teaching Python 3. Doing a PhD with Python 4. Chatting with Python 4.1. Implementation 4.2. Literary-studies background 5. Living with Python 6. Tinkering with Python References
269

State-of-health estimation by virtual experiments using recurrent decoder-encoder based lithium-ion digital battery twins trained on unstructured battery data

Schmitt, Jakob, Horstkötter, Ivo, Bäker, Bernard 15 March 2024 (has links)
Because the lithium-ion traction battery accounts for a large share of an electric vehicle's (EV) production costs, its lifespan should be as long as possible. Optimising the EV's operating strategy with regard to battery life requires a regular evaluation of the battery's state of health (SOH). Yet the SOH, the remaining battery capacity, cannot be measured directly through sensors but requires the elaborate conduct of special characterisation tests. Considering the limited number of test facilities as well as the rapidly growing number of EVs, time-efficient and scalable SOH estimation methods are urgently needed; they are the object of investigation in this work. The developed virtual SOH experiment originates from the incremental capacity measurement and relies solely on the commonly logged battery management system (BMS) signals to train the digital battery twins. The first examined dataset, with identical load profiles for the new and aged battery states, serves as a proof of concept. The successful SOH estimation based on the second dataset, which consists of varying load profiles with increased complexity, constitutes a step towards application to real driving cycles. Assuming that the load cycles contain pauses and start from the fully charged battery state, the SOH estimation succeeds either through a steady shift of the load sequences (variant one), with an average deviation of 0.36%, or by random alignment of the dataset's subsequences (variant two), with 1.04%. In contrast to continuous capacity tests, the presented framework does not impose restrictions such as small currents. It is entirely independent of the prevailing and unknown ageing condition thanks to battery models based on the novel encoder-decoder architecture, and it thus provides the cornerstone for a scalable and robust estimation of battery capacity on a purely data-driven basis.
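The incremental capacity measurement that the virtual SOH experiment builds on is, at its core, a dQ/dV curve computed from a charge curve: as the cell ages, its capacity and the dQ/dV peaks shrink. The sketch below computes this on purely synthetic charge curves; the curve shapes and capacities are invented for illustration, whereas in the thesis the inputs come from logged BMS signals.

```python
import numpy as np

# Toy constant-current charge curves (voltage vs. charge) for a new and an
# aged cell. Synthetic shapes for illustration only.
def charge_curve(capacity_ah, n=500):
    q = np.linspace(0, capacity_ah, n)  # charge throughput [Ah]
    v = 3.0 + 1.2 * (q / capacity_ah) + 0.05 * np.sin(6 * np.pi * q / capacity_ah)
    return q, v

def incremental_capacity(q, v):
    """dQ/dV curve; its peaks flatten and shift as the cell ages."""
    dq = np.gradient(q)
    dv = np.gradient(v)
    return dq / dv

for label, cap in [("new", 60.0), ("aged", 51.0)]:
    q, v = charge_curve(cap)
    ic = incremental_capacity(q, v)
    print(f"{label}: capacity {cap:.0f} Ah, peak dQ/dV {ic.max():.1f} Ah/V")
```

The virtual experiment in the thesis replaces the controlled lab charge with digital-twin simulations of such characterisation profiles, so the same dQ/dV-style evaluation can be run without taking the vehicle to a test facility.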
270

Abilities and Disabilities—Applying Machine Learning to Disentangle the Role of Intelligence in Diagnosing Autism Spectrum Disorders

Wolff, Nicole, Eberlein, Matthias, Stroth, Sanna, Poustka, Luise, Roepke, Stefan, Kamp-Becker, Inge, Roessner, Veit 22 April 2024 (has links)
Objective: Although autism spectrum disorder (ASD) is a relatively common and well-known but heterogeneous neuropsychiatric disorder, specific knowledge about the characteristics of this heterogeneity is scarce. There is consensus that IQ contributes to this heterogeneity and complicates diagnostics and treatment planning. In this study, we assessed the accuracy of the Autism Diagnostic Observation Schedule (ADOS/2) in the whole sample and in IQ-defined subsamples, and analyzed whether ADOS/2 accuracy can be increased by applying machine learning (ML) algorithms that process additional information, including the IQ level. Methods: The study included 1,084 individuals: 440 individuals with ASD (mean IQ level 3.3 ± 1.5) and 644 individuals without ASD (mean IQ level 3.2 ± 1.2). We applied Random Forest (RF) and Decision Tree (DT) analyses to the ADOS/2 data, compared their accuracy to the ADOS/2 cutoff algorithms, and examined the items most relevant for distinguishing between ASD and non-ASD. In total, we included 49 individual features, independent of the applied ADOS module. Results: In the DT analyses, we observed that, for the ASD/non-ASD decision, only one to four items are sufficient to differentiate between the groups with high accuracy. In addition, in sub-cohorts of individuals with (a) below-average intelligence (IQ level ≥ 4) or intellectual disability (ID) and (b) above-average intelligence (IQ level ≤ 2), the ADOS/2 cutoff showed reduced accuracy. This reduced accuracy results in (a) a three times higher risk of false-positive diagnoses or (b) a 1.7 times higher risk of false-negative diagnoses; both errors could be significantly decreased by applying the alternative ML algorithms. Conclusions: Using ML algorithms showed that a small set of ADOS/2 items could help clinicians detect ASD more accurately in clinical practice across all IQ levels and increase diagnostic accuracy, especially in individuals with below- and above-average IQ levels.
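The finding that one to four items suffice can be illustrated with a depth-limited decision tree: restricting the depth forces the model to use only a handful of features. The data below are synthetic stand-ins, 49 random item scores with a toy ground truth driven by three of them, not real ADOS/2 data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, n_items = 1000, 49
# Hypothetical item scores 0-3 (illustrative, not real ADOS/2 data)
X = rng.integers(0, 4, size=(n, n_items)).astype(float)
# Toy ground truth driven by only three "items", mimicking the finding
# that a small item subset separates the groups
y = (X[:, 0] + X[:, 5] + 0.5 * X[:, 12] > 4.5).astype(int)

# A depth-limited tree can only consult a handful of items
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
acc = cross_val_score(tree, X, y, cv=5).mean()
tree.fit(X, y)
used = np.flatnonzero(tree.feature_importances_ > 0)
print(f"cross-validated accuracy: {acc:.2f}, items used: {used}")
```

A depth-3 tree has at most seven internal splits, so the printed item list stays short even though 49 features are available, which mirrors how the study's DT analyses surfaced a compact, clinically interpretable item set.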
