741

Optical Medieval Music Recognition

Wick, Christoph January 2020 (has links) (PDF)
In recent years, great progress has been made in the area of Artificial Intelligence (AI) due to the possibilities of Deep Learning, which has steadily yielded new state-of-the-art results, especially in many image recognition tasks. In some areas, human performance is now matched or even exceeded. This development has already had an impact on the area of Optical Music Recognition (OMR), as several novel methods relying on Deep Learning have succeeded in specific tasks. Musicologists are interested in large-scale musical analysis and in publishing digital transcriptions in collections, enabling the development of tools for searching and data retrieval. The application of OMR promises to simplify and thus speed up the transcription process by providing fully automatic or semi-automatic approaches. This thesis focuses on the automatic transcription of Medieval music with a focus on square notation, which poses a challenging task due to complex layouts, highly varying handwritten notations, and degradation. However, since handwritten music notations are quite complex to read, even for an experienced musicologist, it is to be expected that even with new OMR techniques manual corrections will be required to obtain complete transcriptions. This thesis presents several new approaches and open-source software solutions for layout analysis and Automatic Text Recognition (ATR) of early documents and for OMR of Medieval manuscripts, providing state-of-the-art technology. Fully Convolutional Networks (FCNs) are applied to segment historical manuscripts and early printed books, to detect staff lines, and to recognize neume notations. The ATR engine Calamari is presented, which allows for ATR of early prints as well as the recognition of lyrics. Configurable CNN/LSTM network architectures trained with the segmentation-free CTC loss are applied to the sequential recognition of text and also of monophonic music. Finally, a syllable-to-neume assignment algorithm is presented, which represents the final step towards obtaining a complete transcription of the music. The evaluations show that the performance of any algorithm depends highly on the material at hand and the number of training instances. The presented staff line detection correctly identifies staff lines and staves with an F1-score above 99.5%. The symbol recognition yields a diplomatic Symbol Accuracy Rate (dSAR), computed by counting the number of correct predictions in the symbol sequence and normalizing by its length, of above 90%. The ATR of lyrics achieved a Character Accuracy Rate (CAR) (equivalently, the number of correct predictions normalized by the sentence length) of above 93% when trained on 771 lyric lines of Medieval manuscripts, and of 99.89% when trained on around 3.5 million lines of contemporary printed fonts. The assignment of syllables to their corresponding neumes reached F1-scores of up to 99.2%. A direct comparison to previously published performances is difficult due to differing materials and metrics; however, estimations show that the values reported in this thesis exceed the state of the art in the area of square notation. A further goal of this thesis is to enable musicologists without a technical background to apply the developed algorithms in a complete workflow by providing a user-friendly Graphical User Interface (GUI) that encapsulates the technical details. For this purpose, this thesis presents the web application OMMR4all.
Its fully functional workflow includes the proposed state-of-the-art machine-learning algorithms and optionally allows for manual intervention at any stage to correct the output, preventing error propagation. To simplify manual (post-)correction, OMMR4all provides an overlay editor that superimposes the annotations on a scan of the original manuscript so that errors can easily be spotted. The workflow is designed to be iteratively improvable by training better models as soon as new Ground Truth (GT) is available.
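Both dSAR and CAR, as defined above, are accuracy rates derived from the edit distance between a predicted and a reference sequence. A minimal Python sketch of such a metric (an illustration of the stated definition, not the thesis code):

```python
def accuracy_rate(predicted, reference):
    """Accuracy rate as described above: 1 - edit_distance / length.

    Applies to symbol sequences (dSAR) as well as character strings (CAR).
    Illustrative only; the thesis implementation may differ in details.
    """
    m, n = len(predicted), len(reference)
    # Classic dynamic-programming Levenshtein distance.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if predicted[i - 1] == reference[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 1.0 - d[m][n] / max(n, 1)

print(accuracy_rate("clef.c3 note.f3", "clef.c3 note.g3"))  # ~0.93
```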
742

DEEP LEARNING FOR DETECTING AND CLASSIFYING THE GROWTH STAGES OF WEEDS ON FIELDS

Almalky, Abeer Matar 01 May 2023 (has links) (PDF)
Due to the current and anticipated massive increase of the world population, expanding the agriculture cycle is necessary to accommodate the expected human demand. However, weed invasion, a detrimental factor for agricultural production and quality, is a challenge for such agricultural expansion. Therefore, controlling weeds in fields with an accurate, automatic, low-cost, environment-friendly, real-time weed detection technique is required. Additionally, automating the detection, classification, and counting of weeds by growth stage is vital for choosing appropriate weed control techniques. The literature review shows a gap in the research efforts that handle the automation of weed growth stage classification using DL models. Accordingly, in this thesis, a dataset of four growth stages of the weed Consolida regalis was collected using an unmanned aerial vehicle. In addition, we developed and trained one-stage and two-stage deep learning models: YOLOv5, RetinaNet (with ResNet-101-FPN and ResNet-50-FPN backbones), and Faster R-CNN (with ResNet-101-DC5, ResNet-101-FPN, and ResNet-50-FPN backbones), respectively. Comparing the results of all trained models, we concluded that, on the one hand, the YOLOv5-small model succeeds in detecting weeds and classifying their growth stages in real time with the shortest inference time and the highest recall of 0.794, and succeeds in counting weed instances across the four growth stages in real time with a counting time of 0.033 milliseconds per frame. On the other hand, RetinaNet with the ResNet-101-FPN backbone shows accurate and precise results in the testing phase (average precision of 87.457). Even though the YOLOv5-large model showed the highest precision value in classifying almost all growth stages in the training phase, it could not detect all objects in the tested images. As a whole, RetinaNet with the ResNet-101-FPN backbone is accurate and highly precise, while YOLOv5-small has the shortest real inference time for detection and growth stage classification. Farmers can use the resulting deep learning model to detect, classify, and count weeds by growth stage automatically, decreasing not only the needed time and labor cost but also the use of chemicals to control weeds in fields.
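For reference, a minimal sketch of running such a trained detector through the ultralytics YOLOv5 hub interface (the weights file and image name are hypothetical):

```python
import torch

# Load custom-trained YOLOv5 weights; "weeds_growth_stages.pt" is a
# hypothetical file containing the four Consolida regalis stage classes.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="weeds_growth_stages.pt")

results = model("uav_field_frame.jpg")      # one UAV frame
detections = results.pandas().xyxy[0]       # boxes, scores, class names

# Counting instances per growth stage reduces to a group-by on the class.
print(detections["name"].value_counts())
```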
743

PSF Sampling in Fluorescence Image Deconvolution

Inman, Eric A 01 March 2023 (has links) (PDF)
All microscope imaging is largely affected by inherent resolution limitations caused by out-of-focus light and diffraction effects. The traditional approach to restoring image resolution is to use a deconvolution algorithm to "invert" the effect of convolving the volume with the point spread function. However, these algorithms fall short in several areas, such as noise amplification and stopping criteria. In this paper, we try to reconstruct an explicit volumetric representation of the fluorescence density in the sample and fit a neural network to the target z-stack to properly minimize a reconstruction cost function for an optimal result. Additionally, we perform a weighted sampling of the point spread function to avoid unnecessary computations and prioritize non-zero signals. In a baseline comparison against the Richardson-Lucy method, our algorithm outperforms RL for images affected by high levels of noise.
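The Richardson-Lucy baseline referenced above is available off the shelf in scikit-image; a minimal sketch with a synthetic Gaussian PSF (not the paper's data):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

# Blur a synthetic image with a small Gaussian PSF, then deconvolve.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
x = np.linspace(-2, 2, 9)
psf = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2))
psf /= psf.sum()  # PSF must integrate to 1

blurred = convolve2d(image, psf, mode="same")
restored = richardson_lucy(blurred, psf, num_iter=30)
```

With noisy inputs, more iterations amplify noise rather than recover detail, which is exactly the stopping-criterion problem the abstract points at.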
744

Influences of Curing Conditions and Organic Matter on Characteristics of Cement-treated Soil for the Wet Method of Deep Mixing

Ju, Hwanik 14 July 2023 (has links)
The wet method of deep mixing constructs binder-treated soil columns by mixing a binder-water slurry with soft soil in situ to improve the engineering properties of the soil. The strength of binder-treated soil is affected by the characteristics of the in-situ soil and binder, mixing conditions, and curing conditions. The study presented herein aims to investigate the influences of curing time, curing temperature, mix design proportion, organic matter in the soil, and curing stress on the strength of cement-treated soil. Fabricated and natural soft soils were mixed with a cement-water slurry to mimic soil improved by the wet method of deep mixing. Laboratory-size samples were cured under various curing conditions and tested for unconfined compressive strength (UCS). The experimental test results showed that (1) a higher curing temperature and longer curing time generally increase the strength; (2) organic matter in cement-treated soil decreases and/or delays the strength development; and (3) curing stress affects the strength, but its effect is influenced by drainage conditions. Based on the test results, strength-predicting correlations for cement-treated soil that account for various curing conditions and organic contents were proposed and validated. This research contributes to advancing the knowledge about the strength-controlling factors of soil improved with cement and to improving the reliability of strength predictions with the proposed correlations. Therefore, the number of sample batches that need to be prepared and tested in a deep mixing project can be reduced, saving time and costs while achieving the target strength of the improved soil. / Doctor of Philosophy / The deep mixing method has gained popularity in the U.S. as a ground improvement technique since the late 1990s. It involves blending the native soil that needs to be improved with a binder such as cement and/or lime. Two types of deep mixing methods are available, depending on how the binder is added to the soil: the wet method injects a binder-water slurry, while the dry method uses binder in powder form. The binder reacts with water and soil, thereby enhancing the engineering properties of the soil. The strength of binder-treated soil is influenced by many factors: (1) characteristics of the native soil and binder; (2) mixing conditions (e.g., the amount of binder added and the mixing energy); and (3) curing conditions (e.g., curing time, temperature, and stress). In this dissertation, the effects of curing conditions and organic matter in the soil on the strength of cement-treated soil were investigated. Fabricated and natural soils were mixed with a cement-water slurry to simulate the wet method, and the prepared samples were cured under various conditions. The strength results of the cured samples showed that the characteristics of cement-treated soil are significantly affected by the amount of cement in the mixture, curing time, curing temperature, organic matter in the soil, and curing stress. The test results were also used to derive correlations that account for the influences of curing conditions and organic matter. The findings and strength-predicting correlations presented in this research are expected to improve knowledge about the deep mixing method and the reliability of strength prediction in deep mixing projects. Ultimately, this research contributes to reducing the time and cost of such projects.
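The abstract does not state the proposed correlations; purely as an illustration of how such a strength-prediction correlation is used, here is a common log-time form from the deep-mixing literature with placeholder coefficients:

```python
import math

def ucs_at_time(ucs_28, t_days, a=0.187, b=0.375):
    """Illustrative strength-time correlation: q_u(t)/q_u(28d) = a*ln(t) + b.

    The log-time form is common in the literature; the coefficients here
    are placeholders chosen so the ratio is ~1.0 at t = 28 days, and are
    NOT the correlations proposed in this dissertation.
    """
    return ucs_28 * (a * math.log(t_days) + b)

print(ucs_at_time(1.2, 90))  # 90-day UCS estimated from a 28-day UCS of 1.2 MPa
```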
745

Efficient CNN-based Object ID Association Model for Multiple Object Tracking

Danesh, Parisasadat January 2023 (has links)
No description available.
746

Deep Time Series Modeling: From Distribution Regularity to Distribution Shift

Fan, Wei 01 January 2023 (has links) (PDF)
Time series data, as a pervasive data format, play a key role in numerous real-world scenarios. Effective time series modeling can help with accurate forecasting, resource optimization, risk management, etc. Considering its great importance, how can we model the nature of pervasive time series data? Existing works have adopted statistical analysis, state space models, Bayesian models, or other machine learning models for time series modeling. However, these methods usually follow certain assumptions and do not reveal the core, underlying rules of time series. Moreover, the recent advancement of deep learning has made neural networks a powerful tool for pattern recognition. This dissertation will target the problem of time series modeling using deep learning techniques to achieve accurate forecasting of time series. I will propose a principled approach for deep time series modeling from a novel distribution perspective. After in-depth exploration, I categorize and study two essential characteristics of time series, i.e., distribution regularity and distribution shift. I will investigate how time series data involving the two characteristics can be analyzed by distribution extraction, distribution scaling, and distribution transformation. By applying recent deep learning techniques to distribution learning for time series, this dissertation aims to achieve more effective and efficient forecasting and decision-making. I will carefully illustrate the proposed methods across three themes and summarize the key findings and improvements achieved through experiments. Finally, I will present my future research plan and discuss how to broaden my research on deep time series modeling into a more general Data-Centric AI system for more generalized, reliable, fair, effective, and efficient decision-making.
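One widespread way to handle the distribution shift described above is per-instance distribution scaling: normalize each input window by its own statistics, forecast in the normalized space, and de-normalize the output. A minimal PyTorch-style sketch (illustrative of the general idea, not the dissertation's specific method):

```python
import torch

def forecast_with_distribution_scaling(model, window, eps=1e-5):
    """Normalize a window by its own mean/std, forecast, de-normalize.

    `model` is any forecasting network mapping normalized inputs to
    normalized predictions; sketch only, not the dissertation's method.
    """
    mean = window.mean(dim=-1, keepdim=True)
    std = window.std(dim=-1, keepdim=True) + eps
    prediction = model((window - mean) / std)  # forecast in scaled space
    return prediction * std + mean             # restore original scale
```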
747

Optimizing Deep Neural Networks Performance: Efficient Techniques For Training and Inference

Sharma, Ankit 01 January 2023 (has links) (PDF)
Recent advances in computer vision tasks are mainly due to the success of large deep neural networks. The current state-of-the-art models have high computational costs during inference and suffer from a high memory footprint. Therefore, deploying these large networks on edge devices remains a serious concern. Furthermore, training these over-parameterized networks is computationally expensive and requires a long training time. Thus, there is a demand for techniques that can efficiently reduce training costs and enable the deployment of neural networks on mobile and embedded devices. This dissertation presents practices such as designing lightweight network architectures and increasing network resource utilization. These solutions improve the efficiency of large networks during training and inference. We first propose an efficient micro-architecture (slim modules) to construct a lightweight Slim-CNN for predicting face attributes. Slim modules use depthwise separable convolutions with pointwise convolutions, making them computationally efficient for embedded applications. Next, we investigate the problem of obtaining a compact pruned model from an untrained original network in a single-stage process. We introduce our RAPID framework, which distills knowledge from a teacher model to a pruned student model in an online setting. Next, we analyze the phenomenon of inactive channels in a trained neural network. We take a deep dive into the gradient updates of these channels and discover that they receive no weight updates after a few early epochs. Thus, we present our channel regeneration technique, which reinitializes the batch normalization gamma values of all inactive channels. The gradient updates of these channels improve after the regeneration step, resulting in an increased contribution of these channels to the network performance. Finally, we introduce a method to improve computational efficiency in pre-trained vision transformers by reducing redundancy in visual data. Our method selects image windows or regions with high objectness measures, as these regions may contain an object of any class. Across all works in this dissertation, we extensively evaluate our proposed methods and demonstrate that our techniques improve the computational efficiency of deep neural networks during training and inference.
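As a rough sketch of the channel regeneration step described above (the inactivity criterion and reinitialization value here are assumptions; the dissertation's exact scheme may differ):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def regenerate_inactive_channels(model, threshold=1e-2, init_value=1.0):
    """Reinitialize BatchNorm gamma for channels that have gone inactive.

    Sketch only: a channel is treated as inactive when |gamma| has
    collapsed below a small threshold.
    """
    for module in model.modules():
        if isinstance(module, nn.BatchNorm2d):
            inactive = module.weight.abs() < threshold
            module.weight[inactive] = init_value
            # A fresh scale lets gradient updates for these channels
            # resume, so they can contribute to the network again.
```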
748

Quantifying the Effects of Permafrost Degradation in Arctic Coastal Environments via Satellite Earth Observation

Philipp, Marius Balthasar January 2023 (has links) (PDF)
Permafrost degradation is observed all over the world as a consequence of climate change and the associated Arctic amplification, with severe implications for the environment. Landslides, increased rates of surface deformation, a rising likelihood of infrastructure damage, amplified coastal erosion, and the potential turnover of permafrost from a carbon sink to a carbon source are exemplary implications linked to the thawing of frozen ground. In this context, satellite earth observation is a potent tool for identifying and continuously monitoring relevant processes and features on a cheap, long-term, spatially explicit, and operational basis, up to a circumpolar scale. In an extensive review of past achievements, current trends, and future potentials and challenges of satellite earth observation for permafrost-related analyses, a total of 325 articles published in 30 different international journals during the past two decades were investigated with respect to the environmental foci studied, remote sensing platforms, sensor combinations, applied spatio-temporal resolutions, and study locations. The development over time of the analysed environmental subjects, the utilized sensors and platforms, and the number of annually published articles is addressed in detail. Studies linked to atmospheric features and processes, such as the release of greenhouse gas emissions, appear to be strongly under-represented. Investigations of the spatial distribution of study locations revealed distinct study clusters across the Arctic. At the same time, large sections of the continuous permafrost domain are only poorly covered and remain to be investigated in detail. A general trend towards increasing attention to satellite earth observation of permafrost and related processes and features was observed; the overall number of published articles has more than doubled since 2015. New sources of satellite data, such as the Sentinel satellites and the Methane Remote Sensing LiDAR Mission (Merlin), as well as novel methodological approaches, such as data fusion and deep learning, will likely improve our understanding of the thermal state and distribution of permafrost and of the effects of its degradation. Furthermore, cloud-based big data processing platforms (e.g., Google Earth Engine (GEE)) will further enable sophisticated and long-term analyses on increasingly large scales and at high spatial resolutions. In this thesis, a specific focus was put on Arctic permafrost coasts, which are increasingly vulnerable to environmental change, such as the thawing of frozen ground, and are therefore associated with amplified erosion rates. In particular, a novel monitoring framework for quantifying Arctic coastal erosion within the permafrost domain at high spatial resolution and on a circum-Arctic scale is presented within this thesis. Challenging illumination conditions and frequent cloud cover restrict the applicability of optical satellite imagery in Arctic regions. To overcome these limitations, Synthetic Aperture RADAR (SAR) data from Sentinel-1 (S1), which is largely independent of sun illumination and weather conditions, was utilized.
Annual SAR composites covering the months June–September were combined with a Deep Learning (DL) framework and a Change Vector Analysis (CVA) approach to generate both a high-quality circum-Arctic coastline product and a coastal change product that highlights areas of erosion and build-up. Annual composites of the standard deviation (sd) and median backscatter were computed and used as inputs for both the DL framework and the CVA coastal change quantification. The final DL-based coastline product covers a total of 161,600 km of Arctic coastline and features a median deviation of ±6.3 m from the manually digitized reference data. Annual coastal change quantification between 2017 and 2021 indicated erosion rates of up to 67 m per year for some areas, based on 400 m coastal segments. In total, 12.24% of the investigated coastline featured an average erosion rate of 3.8 m per year, which corresponds to 17.83 km2 of annually eroded land. Multiple quality layers associated with both products, the generated DL coastline and the coastal change rates, are provided on a pixel basis to further assess the accuracy and applicability of the proposed data, methods, and products. Lastly, the extracted circum-Arctic erosion rates were utilized as the basis of an experimental framework for estimating the amount of permafrost and carbon lost through eroding permafrost coastlines. Information on permafrost fraction, Active Layer Thickness (ALT), soil carbon content, and surface elevation was combined with the aforementioned erosion rates. While the proposed experimental framework provides a valuable outline for quantifying the volume loss of frozen ground and the associated carbon release, extensive validation of the utilized environmental products and of the resulting volume loss numbers, based on 200 m segments, is necessary. Furthermore, data of higher spatial resolution and information on carbon content at greater soil depths are required for more accurate estimates.
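At its core, Change Vector Analysis on two-band annual composites reduces to the per-pixel magnitude of the difference vector; a minimal numpy sketch (synthetic arrays and a placeholder threshold, not the thesis's values):

```python
import numpy as np

# Synthetic stand-ins for two years of annual Sentinel-1 composites,
# each with a standard-deviation and a median backscatter band.
rng = np.random.default_rng(42)
sd_2017, med_2017, sd_2021, med_2021 = rng.random((4, 256, 256))

def change_vector_magnitude(sd_a, med_a, sd_b, med_b):
    """Per-pixel CVA magnitude between two annual composites."""
    diff = np.stack([sd_b - sd_a, med_b - med_a])
    return np.sqrt((diff ** 2).sum(axis=0))

magnitude = change_vector_magnitude(sd_2017, med_2017, sd_2021, med_2021)
changed = magnitude > 0.3  # placeholder threshold; the direction of the
                           # change vector separates erosion from build-up
```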
749

Parallel Radiofrequency Transmission for Safe Magnetic Resonance Imaging of Deep Brain Stimulation Patients at 3 Tesla

Yang, Benson January 2023 (has links)
Deep brain stimulation (DBS) improves the quality of life for patients suffering from neurological disorders such as Parkinson's disease and, more recently, psychiatric/cognitive disorders such as depression and addiction. This treatment involves the implantation of a pulse generator (or neurostimulator) and leads (or electrodes) placed deep within the human brain. Magnetic resonance imaging (MRI) is a powerful diagnostic tool that offers superior soft tissue contrast and is routinely used in clinics for neuroimaging applications. MRI is advantageous in DBS pre-surgical planning, as precise lead placement within the brain is essential for optimal treatment outcomes. DBS patients can also benefit from post-surgery MRI, and studies have shown that DBS patients are likely to require MRI within 5-10 years post-surgery. However, imaging DBS patients is restricted by substantial safety concerns that arise from localized electric charge accumulation along the implanted device during resonant radiofrequency (RF) excitation, which can potentially lead to tissue heating and bodily damage. With the technological advancement of ultra-high field (UHF) MRI systems and a growing DBS patient population, DBS MRI safety will become increasingly problematic in the future and needs to be addressed. Parallel RF transmission (pTx) is a promising technology that utilizes multiple transmit channels to generate a desired electromagnetic profile during MRI RF excitation. Several proof-of-concept studies have successfully demonstrated its efficacy in creating a "safe mode" of imaging that minimizes localized RF heating effects. However, pTx MRI systems are not easily accessible and are often custom-built and integrated onto existing MRI systems. Consequently, this adds system characterization and verification complexity to the DBS MRI safety problem. System channel count is also an important consideration, as implementation costs can be very high, and the impact of transmit channel count remains unexplored. Furthermore, in practice, patient motion in DBS patients with motor-related disorders will impact the pTx MRI system's ability to precisely generate these safe-mode electromagnetic profiles. Commercial DBS devices (i.e., the neurostimulator and leads) are manufactured with fixed dimensions, and the caring surgeon typically determines the surgical orientation of the implanted device and leads; therefore, lead trajectories can vary from hospital to hospital. As a result, standard phantoms, such as the ASTM International standard phantom, used in safety verification experiments may not be suitable for DBS MRI applications. To advance DBS patient safety in MRI, this thesis studied the implant heating effects of pTx system uncertainty, system channel count, and patient motion on a novel pTx MRI research platform and its associated safe mode of imaging, and developed a new anthropomorphic heterogeneous phantom to improve safety verification experiments. / Dissertation / Doctor of Philosophy (PhD)
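The "safe mode" idea, choosing per-channel transmit weights that null the field coupled to the implant while retaining excitation efficiency, can be illustrated with a small linear-algebra sketch (all couplings synthetic; real systems measure or simulate them):

```python
import numpy as np

# Synthetic couplings for an 8-channel pTx coil: e[c] is the complex
# E-field channel c induces at the implant, b1[c] the mean B1+ it
# produces in the imaging region. Random stand-ins, not measured data.
rng = np.random.default_rng(1)
e = rng.standard_normal(8) + 1j * rng.standard_normal(8)
b1 = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# Project the B1+-efficient shim out of the one direction that couples
# to the implant, so the combined E-field there is zero by construction.
u = np.conj(e) / np.linalg.norm(e)
w = b1 - u * np.vdot(u, b1)

print("E-field at implant:", abs(np.dot(e, w)))   # ~ 0
print("retained B1+:", abs(np.dot(b1, w)))        # excitation kept
```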
750

Deep learning role in scoliosis detection and treatment

Guanche, Luis 29 January 2024 (has links)
Scoliosis is a common skeletal condition in which a curvature forms along the coronal plane of the spine. Although scoliosis has long been recognized, its pathophysiology and the best mode of treatment are still debated. Currently, definitive diagnosis of scoliosis and its progression is performed on anterior-posterior (AP) radiographs by measuring the angle of coronal curvature, referred to as the Cobb angle. Cobb angle measurements can be performed by Deep Learning algorithms, which are currently being investigated as a possible diagnostic tool for clinicians. This thesis focuses on the role of Deep Learning in the diagnosis and treatment of scoliosis and proposes a study design using these algorithms to continue to better understand and classify the disease.
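Once a model has located the vertebral endplates, the Cobb angle itself is straightforward: it is the largest angle between endplate slopes. A small sketch (the landmark tilts below are hypothetical):

```python
import numpy as np

def cobb_angle(endplate_tilts_deg):
    """Cobb angle: angle between the two most tilted endplates (degrees).

    In a Deep Learning pipeline the tilts come from predicted endplate
    landmarks; the example values here are hypothetical.
    """
    tilts = np.asarray(endplate_tilts_deg)
    return float(tilts.max() - tilts.min())

print(cobb_angle([-12.0, -5.5, 3.0, 18.5]))  # 30.5 degrees
```

A coronal curve of roughly 10 degrees or more is conventionally read as scoliosis, so a reliable automated Cobb measurement directly supports the diagnostic use case described above.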
