371

Parkinson's Disease Automated Hand Tremor Analysis from Spiral Images

DeSipio, Rebecca E. 05 1900 (has links)
Parkinson’s Disease is a neurodegenerative disease affecting more than six million people worldwide. It is a progressive disease, impacting a person’s movements and thought processes. In recent years, computer vision and machine learning researchers have been developing techniques to aid in its diagnosis. This thesis is motivated by the exploration of hand tremor symptoms in Parkinson’s patients from the Archimedean Spiral test, a paper-and-pencil test used to evaluate hand tremors. This work presents a novel Fourier Domain analysis technique that transforms the pencil content of hand-drawn spiral images into frequency features. Our technique is applied to an image dataset consisting of spirals drawn by healthy individuals and people with Parkinson’s Disease. The Fourier Domain analysis technique achieves 81.5% accuracy predicting images drawn by someone with Parkinson’s, a result 6% higher than previous methods. We compared this method against the results using features extracted from the ResNet-50 and VGG16 pre-trained deep network models. The VGG16 extracted features achieve 95.4% accuracy classifying images drawn by people with Parkinson’s Disease. The extracted features of both methods were also used to develop a tremor severity rating system which scores the spiral images on a scale from 0 (no tremor) to 1 (severe tremor). The results show correlation with the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) developed by the International Parkinson and Movement Disorder Society. These results can be useful for aiding in early detection of tremors, the medical treatment process, and symptom tracking to monitor the progression of Parkinson’s Disease. / M.S. / Parkinson’s Disease is a neurodegenerative disease affecting more than six million people worldwide. It is a progressive disease, impacting a person’s movements and thought processes.
In recent years, computer vision and machine learning researchers have been developing techniques to aid in its diagnosis. This thesis is motivated by the exploration of hand tremor symptoms in Parkinson’s patients from the Archimedean Spiral test, a paper-and-pencil test used to evaluate hand tremors. This work presents a novel spiral analysis technique that converts the pencil content of hand-drawn spirals into numeric values, called features. The features measure spiral smoothness. Our technique is applied to an image dataset consisting of spirals drawn by healthy individuals and people with Parkinson’s Disease. The spiral analysis technique achieves 81.5% accuracy predicting images drawn by someone with Parkinson’s. We compared this method against the results using features extracted from pre-trained deep network models. The VGG16 pre-trained model's extracted features achieve 95.4% accuracy classifying images drawn by people with Parkinson’s Disease. The extracted features of both methods were also used to develop a tremor severity rating system which scores the spiral images on a scale from 0 (no tremor) to 1 (severe tremor). The results show a similar trend to the tremor evaluations rated on the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS) developed by the International Parkinson and Movement Disorder Society. These results can be useful for aiding in early detection of tremors, the medical treatment process, and symptom tracking to monitor the progression of Parkinson’s Disease.
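The frequency-feature idea above can be sketched in a few lines of NumPy. Everything below (the radius unrolling, quadratic detrending, four-band split, and the synthetic spirals) is an illustrative assumption, not the thesis's actual pipeline:

```python
import numpy as np

def spiral_frequency_features(points, n_bands=4):
    """Toy sketch of the Fourier-domain idea: turn the (x, y) pencil points
    of a drawn spiral into frequency-band energy features. The unrolling,
    detrending, and band edges are illustrative choices, not the thesis's
    exact method."""
    x, y = points[:, 0], points[:, 1]
    r = np.hypot(x - x.mean(), y - y.mean())       # unroll: radius per sample
    n = np.arange(len(r))
    r = r - np.polyval(np.polyfit(n, r, 2), n)     # remove the outward growth
    spectrum = np.abs(np.fft.rfft(r))
    bands = np.array_split(spectrum[1:], n_bands)  # coarse frequency bands
    feats = np.array([b.sum() for b in bands])
    return feats / (feats.sum() + 1e-12)

# A smooth spiral versus one with a fast wobble (a crude tremor proxy).
t = np.linspace(0, 6 * np.pi, 600)
smooth = np.column_stack([t * np.cos(t), t * np.sin(t)])
wobbly_r = t + 0.8 * np.sin(40 * t)
tremor = np.column_stack([wobbly_r * np.cos(t), wobbly_r * np.sin(t)])
f_smooth = spiral_frequency_features(smooth)
f_tremor = spiral_frequency_features(tremor)
# The tremor spiral concentrates its energy in a higher-frequency band.
```

A classifier (or the 0-to-1 severity score) would then be trained on such feature vectors rather than on raw pixels.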
372

Accelerating Conceptual Design Analysis of Marine Vehicles through Deep Learning

Jones, Matthew Cecil 02 May 2019 (has links)
Evaluation of the flow field imparted by a marine vehicle reveals the underlying efficiency and performance. However, the relationship between precise design features and their impact on the flow field is not well characterized. The goal of this work is first, to investigate the thermally-stratified near field of a self-propelled marine vehicle to identify the significance of propulsion and hull-form design decisions, and second, to develop a functional mapping between an arbitrary vehicle design and its associated flow field to accelerate the design analysis process. The unsteady Reynolds-Averaged Navier-Stokes equations are solved to compute near-field wake profiles, showing good agreement with experimental data and providing a balance between simulation fidelity and numerical cost, given the database of cases considered. Machine learning through convolutional networks is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct deep-learning networks. The first network directly maps explicitly-specified geometric design parameters to their corresponding flow fields. The second network considers the vehicle geometries themselves as tensors of geometric volume fractions to implicitly learn the underlying parameter space. Once trained, both networks effectively generate realistic flow fields, accelerating the design analysis from a process that takes days to one that takes a fraction of a second. The implicit-parameter network successfully learns the underlying parameter space for geometries within the scope of the training data, showing comparable performance to the explicit-parameter network. With additions to the size and variability of the training database, this network has the potential to abstractly generalize the design space for arbitrary geometric inputs, even those beyond the scope of the training data.
/ Doctor of Philosophy / Evaluation of the flow field of a marine vehicle reveals the underlying performance; however, the exact relationship between design features and their impact on the flow field is not well established. The goal of this work is first, to investigate the flow surrounding a self-propelled marine vehicle to identify the significance of various design decisions, and second, to develop a functional relationship between an arbitrary vehicle design and its flow field, thereby accelerating the design analysis process. Near-field wake profiles are computed through simulation, showing good agreement with experimental data. Machine learning is employed to discover the relationship between vehicle geometries and their associated flow fields with two distinct approaches. The first approach directly maps explicitly-specified geometric design parameters to their corresponding flow fields. The second approach considers the vehicle geometries themselves to implicitly learn the underlying relationships. Once trained, both approaches generate a realistic flow field corresponding to a user-provided vehicle geometry, accelerating the design analysis from a multi-day process to one that takes a fraction of a second. The implicit-parameter approach successfully learns from the underlying geometric features, showing comparable performance to the explicit-parameter approach. With a larger and more diverse training database, this network has the potential to abstractly learn the design space relationships for arbitrary marine vehicle geometries, even those beyond the scope of the training database.
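The contrast between the two input encodings can be illustrated with a toy rasterizer. The elliptical hull shape, parameter names, and grid size below are hypothetical, chosen only to show the idea of a volume-fraction tensor versus an explicit parameter vector:

```python
import numpy as np

def hull_volume_fractions(length, radius, grid=(16, 32)):
    """Toy sketch of the implicit-parameter input encoding: rasterize an
    idealized hull into a tensor of per-cell occupancy (a crude stand-in for
    volume fractions). The elliptical hull, parameter names, and grid size
    are illustrative assumptions, not the thesis's geometry."""
    ny, nx = grid
    y, x = np.mgrid[0:ny, 0:nx]
    cx, cy = nx / 2, ny / 2                      # center the hull in the grid
    inside = ((x - cx) / (length / 2)) ** 2 + ((y - cy) / radius) ** 2 <= 1.0
    return inside.astype(float)

explicit_params = np.array([20.0, 4.0])          # e.g. [length, radius]
implicit_input = hull_volume_fractions(*explicit_params)
# The explicit-parameter network consumes the 2-vector; the implicit-parameter
# network consumes the (16, 32) tensor and must rediscover the geometry itself.
```

The same design thus feeds both networks, which is what allows their learned representations to be compared head-to-head.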
373

End-To-End Text Detection Using Deep Learning

Ibrahim, Ahmed Sobhy Elnady 19 December 2017 (has links)
Text detection in the wild is the problem of locating text in images of everyday scenes. It is a challenging problem due to the complexity of everyday scenes. This problem is of great importance for many trending applications, such as self-driving cars. Previous research in text detection has been dominated by multi-stage sequential approaches which suffer from many limitations, including error propagation from one stage to the next. Another line of work is the use of deep learning techniques. Some of the deep methods used for text detection are box detection models and fully convolutional models. Box detection models suffer from the nature of the annotations, which may be too coarse to provide detailed supervision. Fully convolutional models learn to generate pixel-wise maps that represent the location of text instances in the input image. These models suffer from the inability to create accurate word-level annotations without heavy post-processing. To overcome the aforementioned problems, we propose a novel end-to-end system based on a mix of novel deep learning techniques. The proposed system consists of an attention model, based on a new deep architecture proposed in this dissertation, followed by a deep network based on Faster-RCNN. The attention model produces a high-resolution map that indicates likely locations of text instances. A novel aspect of the system is an early fusion step that merges the attention map directly with the input image prior to word-box prediction. This approach suppresses but does not eliminate contextual information from consideration. Progressively larger models were trained in three separate phases. The resulting system has demonstrated an ability to detect text under difficult conditions related to illumination, resolution, and legibility. The system has exceeded the state of the art on the ICDAR 2013 and COCO-Text benchmarks with F-measure values of 0.875 and 0.533, respectively. / Ph. D.
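The early-fusion step might be sketched as follows. The blend rule and the floor value are assumptions chosen to mimic the "suppresses but does not eliminate" behavior described above, not the dissertation's exact merge:

```python
import numpy as np

def early_fusion(image, attention_map, floor=0.3):
    """Sketch of the early-fusion idea: reweight the input image by the text
    attention map before word-box prediction, with a floor so that context
    is suppressed but not eliminated. The blend rule and floor value are
    illustrative assumptions, not the dissertation's exact merge."""
    att = np.clip(attention_map, 0.0, 1.0)[..., None]
    weight = floor + (1.0 - floor) * att     # per-pixel weight in [floor, 1]
    return image * weight

image = np.ones((4, 4, 3))                   # toy RGB input
attention = np.zeros((4, 4))
attention[1:3, 1:3] = 1.0                    # likely-text region
fused = early_fusion(image, attention)
# Text pixels keep full intensity; background is attenuated, not removed.
```

The fused image, rather than the raw one, would then be passed to the Faster-RCNN-style word-box predictor.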
374

CloudCV: Deep Learning and Computer Vision on the Cloud

Agrawal, Harsh 20 June 2016 (has links)
We are witnessing a proliferation of massive visual data. Visual content is arguably the fastest growing data on the web. Photo-sharing websites like Flickr and Facebook now host more than 6 and 90 billion photos, respectively. Unfortunately, scaling existing computer vision algorithms to large datasets leaves researchers repeatedly solving the same algorithmic and infrastructural problems. Designing and implementing efficient and provably correct computer vision algorithms is extremely challenging. Researchers must repeatedly solve the same low-level problems: building and maintaining a cluster of machines, formulating each component of the computer vision pipeline, designing new deep learning layers, writing custom hardware wrappers, etc. This thesis introduces CloudCV, an ambitious system that contains algorithms for end-to-end processing of visual content. The goal of the project is to democratize computer vision; one should not have to be a computer vision, big data, and deep learning expert to have access to state-of-the-art distributed computer vision algorithms. We provide researchers, students, and developers access to state-of-the-art distributed computer vision and deep learning algorithms as a cloud service through a web interface and APIs. / Master of Science
375

Deep Learning Neural Network-based Sinogram Interpolation for Sparse-View CT Reconstruction

Vekhande, Swapnil Sudhir 14 June 2019 (has links)
Computed Tomography (CT) finds applications across domains like medical diagnosis, security screening, and scientific research. In medical imaging, CT allows physicians to diagnose injuries and disease more quickly and accurately than other imaging techniques. However, CT is one of the most significant contributors of radiation dose to the general population, and the radiation dose required for scanning could lead to cancer. On the other hand, too low a radiation dose could sacrifice image quality, causing misdiagnosis. To reduce the radiation dose, sparse-view CT, which involves capturing a smaller number of projections, becomes a promising alternative. However, the image reconstructed from linearly interpolated views possesses severe artifacts. Recently, Deep Learning-based methods are increasingly being used to interpolate the missing data by learning the nature of the image formation process. The current methods are promising but operate mostly in the image domain, presumably due to a lack of projection data. Another limitation is the use of simulated data with less sparsity (up to 75%). This research aims to interpolate the missing sparse-view CT data in the sinogram domain using deep learning. To this end, a residual U-Net architecture has been trained with patch-wise projection data to minimize the Euclidean distance between the ground truth and the interpolated sinogram. The model can generate the missing projection data even at high sparsity. The results show improvement in SSIM and RMSE by 14% and 52% respectively with respect to the linear interpolation-based methods. Thus, experimental sparse-view CT data with 90% sparsity has been successfully interpolated while improving CT image quality. / Master of Science / Computed Tomography is a commonly used imaging technique due to its remarkable ability to visualize internal organs, bones, soft tissues, and blood vessels. It involves exposing the subject to X-ray radiation, which could lead to cancer.
On the other hand, the radiation dose is critical for the image quality and subsequent diagnosis. Thus, image reconstruction using only a small number of projections is an open research problem. Deep learning techniques have already revolutionized various computer vision applications. Here, we have used a method that fills in highly sparse missing CT data. The results show that the deep learning-based method outperforms standard linear interpolation-based methods while improving the image quality.
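For reference, the linear-interpolation baseline that the residual U-Net is compared against can be sketched directly; the toy sinogram below is an illustrative assumption:

```python
import numpy as np

def fill_missing_views(sino, keep_every):
    """Baseline sketch: a sparse-view sinogram keeps only every k-th
    projection angle (rows); missing rows are filled by linear interpolation
    along the angle axis. This linear baseline is what the thesis's residual
    U-Net improves on (14% SSIM, 52% RMSE); the data here are toy."""
    n_angles, n_bins = sino.shape
    kept = np.arange(0, n_angles, keep_every)   # 90% sparsity -> keep_every=10
    angles = np.arange(n_angles)
    full = np.empty_like(sino, dtype=float)
    for d in range(n_bins):                     # interpolate each detector bin
        full[:, d] = np.interp(angles, kept, sino[kept, d])
    return full

# A toy sinogram that varies linearly with angle is recovered exactly;
# real sinograms are not linear in angle, which is where artifacts arise.
sino = np.outer(np.arange(9.0), np.ones(3))
recovered = fill_missing_views(sino, keep_every=4)
```

The deep model replaces this per-bin linear fill with a learned mapping from sparse to dense sinograms.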
376

Revealing the Determinants of Acoustic Aesthetic Judgment Through Algorithmic

Jenkins, Spencer Daniel 03 July 2019 (has links)
This project represents an important first step in determining the fundamental aesthetically relevant features of sound. Though there has been much effort in revealing the features learned by a deep neural network (DNN) trained on visual data, little of that effort has been applied to networks trained on audio data. Importantly, these efforts in the audio domain often impose strong biases about relevant features (e.g., musical structure). In this project, a DNN is trained to mimic the acoustic aesthetic judgment of a professional composer. A unique corpus of sounds and corresponding professional aesthetic judgments is leveraged for this purpose. By applying a variation of Google's "DeepDream" algorithm to this trained DNN, and limiting the assumptions introduced, we can begin to listen to and examine the features of sound fundamental for aesthetic judgment. / Master of Science / The question of what makes a sound aesthetically “interesting” is of great importance to many, including biologists, philosophers of aesthetics, and musicians. This project serves as an important first step in determining the fundamental aesthetically relevant features of sound. First, a computer is trained to mimic the aesthetic judgments of a professional composer; if the composer would deem a sound “interesting,” then so would the computer. During this training, the computer learns for itself what features of sound are important for this classification. Then, a variation of Google’s “DeepDream” algorithm is applied to allow these learned features to be heard. By carefully considering the manner in which the computer is trained, this algorithmic “dreaming” allows us to begin to hear aesthetically salient features of sound.
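The "DeepDream" optimization skeleton, stripped down to a toy linear unit, looks like the following. The single-unit "network" and all values are illustrative assumptions; the project applies the same input-ascent idea to a trained DNN:

```python
import numpy as np

def dream(x, w, steps=100, lr=0.1):
    """DeepDream-style sketch: gradient-ascend the *input* so that one unit's
    activation grows, leaving the model fixed. The 'network' here is a single
    linear unit a = w @ x, so the gradient of a with respect to x is just w;
    the real project applies the same idea to a DNN trained to mimic a
    composer's aesthetic judgments."""
    for _ in range(steps):
        x = x + lr * w                          # ascend the activation gradient
        x = x / max(np.linalg.norm(x), 1e-12)   # keep the "sound" bounded
    return x

w = np.array([1.0, 0.0, 0.0])    # weights of the unit being maximized
x0 = np.array([0.0, 1.0, 0.0])   # initial toy "sound" feature vector
x_dreamed = dream(x0, w)
# x_dreamed now excites the unit far more strongly than x0 did.
```

Listening to `x_dreamed` (in the real, audio-domain version) is what reveals the features the network considers aesthetically salient.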
377

Fernerkundung und maschinelles Lernen zur Erfassung von urbanem Grün - Eine Analyse am Beispiel der Verteilungsgerechtigkeit in Deutschland / Remote Sensing and Machine Learning to Capture Urban Green – An Analysis Using the Example of Distributive Justice in Germany

Weigand, Matthias Johann January 2024 (has links) (PDF)
Green spaces are among the most important environmental influences in people's residential environment. On the one hand, they benefit people's physical and mental health; on the other, they can mitigate the negative effects of other factors, such as the heat events that are becoming more frequent in the course of climate change. Nevertheless, green spaces are not equally accessible to the entire population. Existing research in the context of environmental justice (EJ) has shown that different socio-economic and demographic groups of the German population have differing access to green spaces. Existing analyses of environmental exposures in the EJ context have been criticized because geographic data are often evaluated at too aggregated a level, so that locally specific exposures are no longer accurately captured. This applies in particular to large-scale studies. Important spatial information is thereby lost. Yet modern Earth observation and geodata are more detailed than ever, and machine learning methods enable their efficient processing into higher-value information. The overarching goal of this work is to demonstrate and carry out, using the example of green spaces in Germany, the methodological steps for systematically converting comprehensive geodata into relevant geoinformation for the large-scale, high-resolution analysis of environmental characteristics. At the interface of the disciplines of remote sensing, geoinformatics, social geography, and environmental justice research, the potential of modern methods for improving the spatial and semantic resolution of geoinformation is explored. To this end, machine learning methods are used to map land cover and land use at the national level.
These developments are intended to help close existing data gaps and shed light on the distributional equity of green spaces. This dissertation is structured in three conceptual parts. In the first part, Earth observation data from the Sentinel-2 satellites are used for a Germany-wide classification of land cover. A machine learning method is trained in combination with point reference data from the Europe-wide Land Use and Coverage Area Frame Survey (LUCAS). In this context, various preprocessing steps of the LUCAS data and their influence on classification accuracy are examined. The classification method is able to derive land cover information with high accuracy even in complex urban areas. One result of this part is a Germany-wide land cover classification with an overall accuracy of 93.07%, which is used in the remainder of the work to spatially quantify green land cover (GLC). The second conceptual part focuses on a differentiated view of green spaces using the example of public green spaces (PGS), which are frequently the subject of EJ research. However, a frequently used source of spatial data on public green spaces, the European Urban Atlas (EUA), has so far not been compiled for all of Germany. This part of the study pursues a data-driven approach to determining the availability of public green space at the spatial level of neighborhoods for all of Germany. Areas already covered by the EUA serve as a reference. Using a combination of Earth observation data and information from the OpenStreetMap project, a deep learning-based fusion network is created that quantifies the available area of public green space.
The result of this step is a model that is used to estimate the amount of public green space in the neighborhood (R² = 0.952). The third part takes up the results of the first two parts and examines the distribution of green spaces in Germany with the addition of georeferenced population data. This exemplary analysis distinguishes two types of green space: GLC and PGS. First, descriptive statistics are used to examine the general distribution of green space across the German population. The distributional equity is then determined using common equity metrics. Finally, the relationships between the demographic composition of a neighborhood and the available amount of green space are examined for three exemplary sociodemographic groups. The analysis shows strong differences in the availability of PGS between urban and rural areas. A higher percentage of the urban population has access to the minimum amount of PGS recommended by the World Health Organization. The results also show a clear difference in distributional equity between GLC and PGS, underlining the relevance of distinguishing green space types for such studies. The concluding examination of different population groups works out differences at the sociodemographic level. Taken together, this work demonstrates how modern geodata and machine learning methods can be used to overcome previous limitations of spatial datasets. Using the example of green spaces in the residential environment of the German population, it is shown that nationwide environmental justice analyses can be enriched by high-resolution, locally fine-grained geographic information.
This work illustrates how the methods of Earth observation and geoinformatics can make an important contribution to identifying inequality in people's residential environment and, ultimately, to supporting and monitoring sustainable settlement development in the form of objective information. / Green spaces are one of the most important environmental factors for humans in the living environment. On the one hand they provide benefits to people’s physical and mental health, on the other hand they allow for the mitigation of negative impacts of environmental stressors like heat waves which are increasing as a result of climate change. Yet, green spaces are not equally accessible to all people. Existing literature in the context of Environmental Justice (EJ) research has shown that the access to green space varies among different socio-economic and demographic groups in Germany. However, previous studies in the context of EJ were criticized for using strongly spatially aggregated data for their analyses, resulting in a loss of spatial detail on local environmental exposure metrics. This is especially true for large-scale studies, where important spatial information often gets lost. In this context, modern earth observation and geospatial data are more detailed than ever, and machine learning methods enable efficient processing to derive higher-value information for diverse applications. The overall objective of this work is to demonstrate and implement methodological steps that allow for the transformation of vast geodata into relevant geoinformation for the large-scale and high-resolution analysis of environmental characteristics, using the example of green spaces in Germany. By bridging the disciplines of remote sensing, geoinformatics, social geography, and environmental justice research, potentials of modern methods for the improvement of spatial and semantic resolution of geoinformation are explored.
For this purpose, machine learning methods are used to map land cover and land use on a national scale. These developments will help to close existing data gaps and provide information on the distributional equity of green spaces. This dissertation comprises three conceptual steps. In the first part of the study, earth observation data from the Sentinel-2 satellites are used to derive land cover information across Germany. In combination with point reference data on land cover and land use from the pan-European Land Use and Coverage Area Frame Survey (LUCAS), a machine learning model is trained. Therein, different preprocessing steps of the LUCAS data and their influence on the classification accuracy are highlighted. The classification model derives land cover information with high accuracy even in complex urban areas. One result of the study is a Germany-wide land cover classification with an overall accuracy of 93.07%, which is used in the further course of the dissertation to spatially quantify green land cover (GLC). The second conceptual part of this study focuses on the semantic differentiation of green spaces using the example of public green spaces (PGS), which is often the subject of EJ research. A frequently used source of spatial data on public green spaces, the European Urban Atlas (EUA), however, is not available for all of Germany. This part of the study takes a data-driven approach to determine the availability of public green space at the spatial level of neighborhoods for all of Germany. For this purpose, areas already covered by the EUA serve as a reference. Using a combination of earth observation data and information from the OpenStreetMap project, a deep learning-based fusion network is created that quantifies the available area of public green space. The result of this step is a model that is utilized to estimate the amount of public green space in the neighborhood (R² = 0.952).
The third part of this dissertation builds upon the results of the first two parts and integrates georeferenced population data to study the socio-spatial distribution of green spaces in Germany. This exemplary analysis distinguishes green spaces according to two types: GLC and PGS. First, descriptive statistics are used to examine the overall distribution of green spaces available to the German population. Then, the distributional equity is determined using established equity metrics. Finally, the relationships between the demographic composition of the neighborhood and the available amount of green space are examined using three exemplary sociodemographic groups. The analysis reveals strong differences in PGS availability between urban and rural areas. Compared to the rural population, a higher percentage of the urban population has access to the minimum level of PGS defined as a target by the World Health Organization (WHO). The results also show a clear deviation in terms of distributive equity between GLC and PGS, highlighting the relevance of distinguishing green space types for such studies. The final analysis of certain population groups addresses differences at the sociodemographic level. In summary, this dissertation demonstrates how previous limitations of spatial datasets can be overcome through a combination of modern geospatial data and machine learning methods. Using the example of green spaces in the residential environment of the population in Germany, it is shown that nationwide analyses of environmental justice can be enriched by high-resolution and locally fine-grained geographic information. This study illustrates how earth observation and methods of geoinformatics can make an important contribution to identifying inequalities in people's living environment. Such objective information can ultimately be deployed to support and monitor sustainable urban development.
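The first, supervised land-cover step can be caricatured with a toy classifier. A nearest-centroid model and made-up four-band spectra stand in for the dissertation's actual learner and Sentinel-2/LUCAS data:

```python
import numpy as np

def fit_centroids(X, y):
    """Sketch of the supervised land-cover step: sparse LUCAS-style labeled
    points (toy 4-band spectra: blue, green, red, NIR) train a classifier
    that is then applied to every pixel. A nearest-centroid model stands in
    for the dissertation's actual machine learning method; values are toy."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[dists.argmin(axis=1)]

# Toy reference samples: vegetation reflects strongly in NIR, built-up does not.
X_train = np.array([[0.05, 0.08, 0.06, 0.55], [0.06, 0.10, 0.07, 0.50],
                    [0.20, 0.22, 0.24, 0.25], [0.22, 0.24, 0.26, 0.28]])
y_train = np.array([0, 0, 1, 1])   # 0 = vegetation, 1 = built-up
classes, cents = fit_centroids(X_train, y_train)
labels = predict(np.array([[0.05, 0.09, 0.06, 0.52]]), classes, cents)
```

Applied pixel-by-pixel over a full scene, such a classifier yields the wall-to-wall land-cover map from which green land cover is then quantified.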
378

Synthetic Electronic Medical Record Generation using Generative Adversarial Networks

Beyki, Mohammad Reza 13 August 2021 (has links)
Computers replaced our record books some time ago, and medical records are no exception. Electronic Health Records (EHR) are the digital version of a patient's medical records. EHRs are available to authorized users, and they contain the medical records of the patient, which should help doctors understand a patient's condition quickly. In recent years, Deep Learning models have proved their value and have become state-of-the-art in computer vision, natural language processing, speech, and other areas. The private nature of EHR data has prevented public access to EHR datasets. There are many obstacles to creating a deep learning model with EHR data. Because EHR data consist primarily of huge sparse matrices, these challenges are mostly unique to this field. As a result, research in this area is limited, and existing research can be improved substantially. In this study, we focus on high-performance synthetic data generation for EHR datasets. Artificial data generation can help reduce privacy leakage for dataset owners, as it is proven that de-identification methods are prone to re-identification attacks. We propose a novel approach we call Improved Correlation Capturing Wasserstein Generative Adversarial Network (SCorGAN) to create EHR data. This work leverages Deep Convolutional Neural Networks to extract and understand spatial dependencies in EHR data. To improve our model's performance, we focus on our Deep Convolutional AutoEncoder to better map our real EHR data to the latent space where we train the Generator. To assess our model's performance, we demonstrate that our generative model can create excellent data that are statistically close to the input dataset. Additionally, we evaluate our synthetic dataset against the original data using our previous work that focused on GAN Performance Evaluation.
This work is publicly available at https://github.com/mohibeyki/SCorGAN / Master of Science / Artificial Intelligence (AI) systems have improved greatly in recent years. They are being used to understand all kinds of data. A practical use case for AI systems is to leverage their power to identify illnesses and find correlations between different conditions. To train AI and Machine Learning systems, we need to feed them huge datasets, and in the training process, we need to guide them so that they learn different features in our data. The more data an intelligent system has seen, the better it performs. However, health records are private, and we cannot share real people's health records with the public, whether they are researchers or not. This study provides a novel approach to synthetic data generation that others can use with intelligent systems. These systems can then work with actual health records and give us accurate feedback on people's health conditions. We then show that our synthetic dataset is a good substitute for real datasets to train intelligent systems. Lastly, we present an intelligent system that we have trained using synthetic datasets to identify illnesses in a real dataset with high accuracy and precision.
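The Wasserstein objective underlying this family of models can be sketched in isolation. Only the standard losses are shown; the convolutional critic, generator, and autoencoder mapping of SCorGAN itself are omitted:

```python
import numpy as np

def wgan_losses(critic_real, critic_fake):
    """Sketch of the Wasserstein-GAN objective that SCorGAN builds on: the
    critic is trained to widen the mean score gap between real and generated
    EHR vectors, and the generator to close it. These are the standard WGAN
    losses, not SCorGAN's full training procedure."""
    critic_loss = np.mean(critic_fake) - np.mean(critic_real)  # minimized
    gen_loss = -np.mean(critic_fake)                           # minimized
    return critic_loss, gen_loss

# A critic that already separates real from fake yields a negative critic loss.
c_loss, g_loss = wgan_losses(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
```

Training alternates between minimizing `critic_loss` over the critic's weights and `gen_loss` over the generator's, with the autoencoder supplying the latent space in which the generator operates.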
379

Color Invariant Skin Segmentation

Xu, Han 25 March 2022 (has links)
This work addresses the problem of automatically detecting human skin in images without reliance on color information. Unlike previous methods, we present a new approach that performs well in the absence of such information. A key aspect of the work is that color-space augmentation is applied strategically during training, with the goal of reducing the influence of features that are based entirely on color and increasing semantic understanding. The resulting system exhibits a dramatic improvement in performance for images in which color details are diminished. We have demonstrated the concept using the U-Net architecture, and experimental results show improvements in evaluations for all Fitzpatrick skin tones in the ECU dataset. We further tested the system with the RFW dataset to show that the proposed method is consistent across different ethnicities and reduces bias toward any skin tone. Therefore, this work has strong potential to aid in mitigating bias in automated systems and can be applied to many applications, including surveillance and biometrics. / Master of Science / Skin segmentation deals with the classification of skin and non-skin pixels and regions in an image. Although most previous skin-detection methods have used color cues almost exclusively, they are vulnerable to external factors (e.g., poor or unnatural illumination and skin tones). In this work, we present a new approach based on U-Net that performs well in the absence of color information. Specifically, we apply a new color-space augmentation in the training stage to improve the performance of the skin segmentation system across diverse illumination conditions and skin tones. The system was trained and tested with both the original and color-modified ECU dataset. We also tested our system with the RFW dataset, a larger dataset covering four ethnic groups with different skin tones. The experimental results show improvements in evaluations across skin tones and complex illuminations.
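The color-space augmentation idea can be sketched as follows. The two transforms shown (grayscale collapse and channel shuffling) are illustrative stand-ins, since the thesis's exact transform set is not given here:

```python
import numpy as np

def color_space_augment(image, rng):
    """Sketch of the training-time color-space augmentation idea: randomly
    collapse the image to grayscale or shuffle its channels so the
    segmentation network cannot rely on skin color alone. (The thesis's
    exact set of transforms is not reproduced; these two are stand-ins.)"""
    if rng.random() < 0.5:
        gray = image.mean(axis=-1, keepdims=True)
        return np.repeat(gray, 3, axis=-1)   # color removed entirely
    return image[..., rng.permutation(3)]    # color scrambled

rng = np.random.default_rng(0)
augmented = color_space_augment(np.ones((64, 64, 3)), rng)
```

Applied on the fly during U-Net training, such transforms force the network toward shape and texture cues that survive when color is unreliable.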
380

Novel methods for information extraction and geological product generation from radar sounder data

Hoyo Garcia, Miguel 25 March 2024 (has links)
This Ph.D. thesis presents advancements in the analysis of radar sounder data. Radar sounders (RSs) are remote sensors that transmit an electromagnetic (EM) wave in the nadir direction that penetrates the subsurface. The backscattered echoes captured by the RS antenna are coherently summed to generate an image of the subsurface profile known as a radargram. The first focus of this work is to automate the segmentation of radargrams using deep learning methodologies while minimizing the need for labeled training data. The surge in radar sounding data volume necessitates efficient automated methods; however, labeled training data in this field are strongly limited. This first work introduces a transfer learning framework based on deep learning, tailored for radar sounder data, that minimizes the training data requirements. The method automatically identifies and segments geological units within radargrams acquired in the cryosphere. With the cryosphere being a critical indicator of climate change, understanding its dynamics is paramount. Geological details within radargrams, such as the basal interface or the inland and floating ice, are key to this understanding. Our work shifts the focus to uncharted territory: the coastal areas of Antarctica. Novel targets such as floating ice and crevasses add complexity to the data, but the transfer learning framework minimizes the need for extensive labeled training data. The results, based on data from Antarctica, confirm the effectiveness of the approach, promising adaptability to other targets and to radar data from existing and future planetary missions such as RIME and SRS. The second focus of this thesis explores the generation of novel and improved geological data products by harnessing the unique characteristics of radar sounder data, including subsurface information and so-called “unwanted” clutter. The thesis introduces two methods that use RS data to generate geological products.
The second contribution proposes a global high-frequency radar image of Mars. This product delivers a novel, comprehensive global radar image capturing both surface and shallow subsurface structures. The method unlocks the potential to explore concealed Martian geology and to further understand Martian geological features such as dust, revealing candidate large dust deposits that were previously unknown. Furthermore, this method can potentially offer insights into celestial bodies beyond Mars, such as the detection of new lunar facets and Venusian geological formations. The third contribution aims to generate Digital Elevation Models (DEMs) from single-swath radargrams. This work addresses the challenge of precise bed DEM estimation in Antarctica. Bed topography is critical in ice modeling and mass balance calculations, yet existing methods face limitations. To overcome these, we employ a generative adversarial network (GAN) approach that utilizes clutter information from single radargrams. This innovative technique promises to refine bed DEMs and enhance our understanding of glacier erosion and ice dynamics. The proposed methodologies were validated with data acquired on both Earth and Mars, showing promising results and confirming their effectiveness.
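The coherent summation that forms a radargram, as described in the abstract above, can be illustrated with a toy sketch. The functions below are hypothetical illustrations (names and signatures are assumptions, and real RS focusing also involves range compression and synthetic-aperture processing not shown here); they contrast coherent summation, which keeps phase so echoes reinforce or cancel, with an incoherent sum of magnitudes.

```python
import cmath
import math

def coherent_sum(echoes):
    """Coherently sum complex echoes: amplitudes AND phases add, so
    aligned returns reinforce and opposite-phase returns cancel."""
    return abs(sum(echoes))

def incoherent_sum(echoes):
    """Sum only echo magnitudes; phase information is discarded."""
    return sum(abs(e) for e in echoes)

# Two in-phase unit echoes reinforce under coherent summation...
aligned = [cmath.rect(1.0, 0.0), cmath.rect(1.0, 0.0)]
# ...while opposite-phase echoes cancel coherently but not incoherently.
opposed = [cmath.rect(1.0, 0.0), cmath.rect(1.0, math.pi)]
print(coherent_sum(aligned))    # 2.0
print(coherent_sum(opposed))    # ~0.0 (floating-point residue)
print(incoherent_sum(opposed))  # 2.0
```

The cancellation in the opposite-phase case is also the mechanism behind the “unwanted” clutter the abstract mentions: off-nadir returns combine with their own phase relationships, and the thesis's third contribution turns that clutter into useful topographic information.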
