  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

High performance Deep Learning based Digital Pre-distorters for RF Power Amplifiers

Kudupudi, Rajesh 25 January 2022 (has links)
In this work, we present different deep learning-based digital pre-distorters and compare their performance in improving the linearity of highly non-linear power amplifiers. The simulation results show that BiLSTM-based DPDs perform best in terms of linearity improvement. We also compare two methodologies, direct learning and indirect learning, for developing deep learning-based digital pre-distorter (DL-DPD) models and evaluate their improvement of the linearity of power amplifiers (PAs). We carry out a theoretical analysis of the differences between these training methodologies and verify their performance with simulation results on class-AB and class-F⁻¹ PAs. The simulation results show that both learning methods improve the linearity of class-AB and class-F⁻¹ PAs by more than 12 dB and 11 dB respectively, with the indirect-learning DL-DPD offering marginally better performance. Moreover, we compare the DL-DPD with memory polynomial models and show that the former gives a significant improvement over memory polynomials. Furthermore, we discuss the advantages of exploiting a BiLSTM-based neural network architecture for designing direct/indirect DPDs. We demonstrate that the BiLSTM DPD can pre-distort signals of any length without a drop in linearity. Finally, based on these insights, we develop a frequency-domain loss function whose use further increases the linearity of the PA. / Master of Science / Wireless communication devices have fundamentally changed the way we interact with people. This has increased users' reliance on communication devices and significantly grown the need for higher data rates and faster internet speeds. One major obstacle in the transmitter chain to increasing data rates, however, is the power amplifier, which distorts signals at these higher powers. 
This distortion reduces the efficiency and reliability of communication systems, greatly decreasing the quality of communication. To combat this issue, we developed a high-performance DPD using deep learning. In this thesis, we compare different deep learning-based DPDs and analyze which offers better performance. We also contrast two training methodologies for learning these DL-DPDs, both theoretically and through simulation, to determine which method yields better-performing DPDs. We run these experiments on two different types of power amplifiers and on signals of arbitrary length. Finally, we design a new loss function such that optimizing it leads to better DL-DPDs.
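As a concrete point of reference, the memory polynomial baseline that the DL-DPDs are compared against, together with indirect learning (fit a postdistorter from PA output back to PA input, then copy it in front of the PA), can be sketched as follows. This is a generic illustration under assumed conventions, not the thesis's implementation; the function names, polynomial order `K`, and memory depth `M` are all placeholders.

```python
import numpy as np

def memory_polynomial(x, coeffs, K, M):
    """Apply a memory-polynomial model to a complex baseband signal x:
    y[n] = sum_{k=1..K, m=0..M-1} coeffs[k-1, m] * x[n-m] * |x[n-m]|^(k-1).
    K: polynomial order, M: memory depth."""
    N = len(x)
    y = np.zeros(N, dtype=complex)
    for m in range(M):
        # delayed copy of x by m samples, zero-padded at the start
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(1, K + 1):
            y += coeffs[k - 1, m] * xm * np.abs(xm) ** (k - 1)
    return y

def fit_dpd_indirect(pa_in, pa_out, K, M):
    """Indirect learning: fit a postdistorter mapping PA output -> PA input
    by least squares, then reuse it as the predistorter."""
    N = len(pa_out)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, dtype=complex), pa_out[:N - m]])
        for k in range(1, K + 1):
            cols.append(xm * np.abs(xm) ** (k - 1))
    Phi = np.stack(cols, axis=1)           # regression matrix
    c, *_ = np.linalg.lstsq(Phi, pa_in, rcond=None)
    return c.reshape(M, K).T               # shape (K, M), matching memory_polynomial
```

The BiLSTM DPDs in the thesis replace this fixed polynomial basis with a learned recurrent mapping, but the indirect-learning wiring is the same.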
2

Synthesizing Realistic Data for Vision Based Drone-to-Drone Detection

Yellapantula, Sudha Ravali 15 July 2019 (has links)
In this thesis, we aimed to build a robust UAV (drone) detection algorithm through which one drone could detect another drone in flight. Though this is a straightforward object detection problem, the biggest challenge we faced was the limited amount of drone imagery available for training. To address this issue, we used Generative Adversarial Networks, CycleGAN to be precise, to generate realistic-looking fake images that were indistinguishable from real data. CycleGAN is a classic example of an image-to-image translation technique, and we applied it to our situation, where synthetic images from one domain were transformed into another domain containing real data. The model, once trained, was capable of generating realistic-looking images from synthetic data without the presence of real images. Following this, we employed a state-of-the-art object detection model, YOLO (You Only Look Once), to build a drone detection model trained on the generated images. Finally, this model was evaluated against different datasets to assess its performance. / Master of Science / In recent years, technologies like Deep Learning and Machine Learning have seen rapid development. Among their many applications, object detection is one of the most widely used and well-established problems. In our thesis, we deal with a scenario in which we have a swarm of drones, and our aim is for one drone to recognize another drone in its field of vision. As no drone image dataset was readily available, we explored different ways of generating realistic data to address this issue. Finally, we proposed a solution to generate realistic images using Deep Learning techniques, trained an object detection model on them, and evaluated how well it performed against other models.
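The key constraint CycleGAN adds on top of the adversarial losses is cycle consistency: translating an image to the other domain and back should reconstruct the original. A minimal sketch of that loss follows; the mapping functions `G` (synthetic to real) and `F` (real to synthetic) are placeholder callables standing in for the actual generator networks.

```python
import numpy as np

def cycle_consistency_loss(x_real, x_synth, G, F):
    """L1 cycle-consistency loss as in CycleGAN:
    F(G(x_synth)) should reconstruct x_synth, and
    G(F(x_real)) should reconstruct x_real.
    G and F are any callables mapping arrays to arrays."""
    forward = np.mean(np.abs(F(G(x_synth)) - x_synth))
    backward = np.mean(np.abs(G(F(x_real)) - x_real))
    return forward + backward
```

During training this term is weighted against the adversarial losses so the generators learn a translation rather than an arbitrary mapping into the target domain.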
3

Artificial intelligence system for continuous affect estimation from naturalistic human expressions

Abd Gaus, Yona Falinie January 2018 (has links)
The analysis and automatic estimation of affect from human expressions has been acknowledged as an active research topic in the computer vision community. Most reported affect recognition systems, however, only consider subjects performing well-defined acted expressions in very controlled conditions, so they are not robust enough for real-life recognition tasks with subject variation, acoustic surroundings and illumination changes. In this thesis, an artificial intelligence system is proposed to continuously (represented along a continuum, e.g., from -1 to +1) estimate affective behaviour in terms of latent dimensions (e.g., arousal and valence) from naturalistic human expressions. To tackle these issues, feature representation and machine learning strategies are addressed. In feature representation, human expression is represented by modalities such as audio, video, physiological signals and text. Hand-crafted features are extracted from each modality per frame, in order to match the consecutive affect labels. However, the extracted features may be missing information due to factors such as background noise or lighting conditions. The Haar Wavelet Transform is employed to determine whether a noise-cancellation mechanism in the feature space should be considered in the design of the affect estimation system. Beyond hand-crafted features, deep learning features are also analysed layer-wise, across convolutional and fully connected layers. Convolutional Neural Networks such as AlexNet, VGGFace and ResNet have been selected as deep learning architectures for feature extraction on facial expression images. Then, a multimodal fusion scheme is applied, fusing deep learning and hand-crafted features together to improve performance. In the machine learning strategies, a two-stage regression approach is introduced. In the first stage, baseline regression methods such as Support Vector Regression are applied to estimate each affect dimension per frame. 
Then, in the second stage, a subsequent model such as a Time Delay Neural Network, a Long Short-Term Memory network or a Kalman Filter is proposed to model the temporal relationships between consecutive estimates of each affect dimension. In doing so, the temporal information employed by the subsequent model is not biased by the high variability present in consecutive frames, and at the same time the network can exploit the slowly changing emotional dynamics more efficiently. Following the two-stage regression approach for unimodal affect analysis, the fusion of information from different modalities is elaborated. Continuous emotion recognition in-the-wild is leveraged by investigating mathematical modelling for each emotion dimension. Linear Regression, Exponent Weighted Decision Fusion and Multi-Gene Genetic Programming are implemented to quantify the relationships between the modalities. In summary, the research work presented in this thesis reveals a fundamental approach to automatically and continuously estimate affect values from naturalistic human expressions. The proposed system, which consists of feature smoothing, deep learning features, a two-stage regression framework and fusion between modalities using mathematical equations, is demonstrated. It offers a strong basis towards the development of an artificial intelligence system for continuous affect estimation, and more broadly towards building a real-time emotion recognition system for human-computer interaction.
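The second-stage temporal model can be illustrated with the simplest of the candidates listed, a scalar Kalman filter that smooths the noisy per-frame estimates produced by a first-stage regressor such as SVR. This is an illustrative sketch with assumed noise parameters, not the thesis's tuned implementation.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """Smooth noisy per-frame affect estimates z (e.g. valence in [-1, 1])
    with a scalar Kalman filter under a random-walk state model.
    q: process noise (how fast affect may drift), r: measurement noise."""
    x, p = z[0], 1.0
    out = np.empty_like(z, dtype=float)
    out[0] = x
    for t in range(1, len(z)):
        p = p + q                 # predict: state may have drifted
        k = p / (p + r)           # Kalman gain
        x = x + k * (z[t] - x)    # update with the new frame estimate
        p = (1 - k) * p
        out[t] = x
    return out
```

With `q` much smaller than `r`, the filter trusts the slow emotional dynamics over the frame-to-frame variability, which is exactly the bias-reduction effect the two-stage approach aims for.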
4

Applying Natural Language Processing and Deep Learning Techniques for Raga Recognition in Indian Classical Music

Peri, Deepthi 27 August 2020 (has links)
In Indian Classical Music (ICM), the Raga is a musical piece's melodic framework. It encompasses the characteristics of a scale, a mode, and a tune, with none of them fully describing it, rendering the Raga a unique concept in ICM. The Raga provides musicians with a melodic fabric, within which all compositions and improvisations must take place. Identifying and categorizing the Raga is challenging due to its dynamism and complex structure as well as the polyphonic nature of ICM. Hence, Raga recognition, identifying the constituent Raga in an audio file, has become an important problem in music informatics with several known prior approaches. Advancing the state of the art in Raga recognition paves the way to improving other Music Information Retrieval tasks in ICM, including transcribing notes automatically, recommending music, and organizing large databases. This thesis presents a novel melodic pattern-based approach to recognizing Ragas by representing this task as a document classification problem, solved by applying a deep learning technique. A digital audio excerpt is hierarchically processed and split into subsequences and gamaka sequences to mimic a textual document structure, so our model can learn the resulting tonal and temporal sequence patterns using a Recurrent Neural Network. Although trained and tested on these smaller sequences, the model predicts the Raga for the entire audio excerpt, with an accuracy of 90.3% on the Carnatic Music Dataset and 95.6% on the Hindustani Music Dataset, thus outperforming prior approaches in Raga recognition. / Master of Science / In Indian Classical Music (ICM), the Raga is a musical piece's melodic framework. The Raga is a unique concept in ICM, not fully described by any of the fundamental concepts of Western classical music. The Raga provides musicians with a melodic fabric, within which all compositions and improvisations must take place. 
Raga recognition refers to identifying the constituent Raga in an audio file, a challenging and important problem with several known prior approaches and applications in Music Information Retrieval. This thesis presents a novel approach to recognizing Ragas by representing this task as a document classification problem, solved by applying a deep learning technique. A digital audio excerpt is processed into a textual document structure, from which the constituent Raga is learned. Based on the evaluation with third-party datasets, our recognition approach achieves high accuracy, thus outperforming prior approaches.
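The aggregation step, predicting one Raga for a whole excerpt from many subsequence-level classifications, can be sketched generically. The `classify` callable and the token representation below are placeholders; the thesis classifies tonal/gamaka subsequences with a Recurrent Neural Network.

```python
import numpy as np

def predict_raga(tokens, subseq_len, classify):
    """Split a long tonal-token sequence into fixed-length subsequences,
    classify each one, and predict the Raga for the entire excerpt by
    majority vote over the subsequence predictions."""
    votes = []
    for i in range(0, len(tokens) - subseq_len + 1, subseq_len):
        votes.append(classify(tokens[i:i + subseq_len]))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]
```

Training and testing on short subsequences keeps the sequence model tractable, while the vote recovers an excerpt-level prediction.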
5

Multiscale Modeling with Meshfree Methods

Xu, Wentao January 2023 (has links)
Multiscale modeling has become an important tool in material mechanics because material behavior can exhibit varied properties across different length scales. The use of multiscale modeling is essential for accurately capturing these characteristics and predicting material behavior. Mesh-free methods have also been gaining attention in recent years due to their innate ability to handle complex geometries and large deformations. These methods provide greater flexibility and efficiency in modeling complex material behavior, especially for problems involving discontinuities such as fractures and cracks. Moreover, mesh-free methods can be easily extended to multiple length and time scales, making them particularly suitable for multiscale modeling. The thesis focuses on two specific problems of multiscale modeling with mesh-free methods. The first problem is an atomistically informed constitutive model for the study of high-pressure-induced densification of silica glass. Molecular Dynamics (MD) simulations are carried out to study the atomistic-level responses of fused silica under different pressure and strain-rate levels. Based on the data obtained from the MD simulations, a novel continuum-based multiplicative hyper-elasto-plasticity model that accounts for the anomalous densification behavior is developed and then parameterized using polynomial regression and deep learning techniques. To incorporate dynamic damage evolution, a plasticity-damage variable that controls the shrinkage of the yield surface is introduced and integrated into the elasto-plasticity model. The resulting coupled elasto-plasticity-damage model is reformulated into a non-ordinary state-based peridynamics (NOSB-PD) model for the computational efficiency of impact simulations. The developed peridynamics (PD) model reproduces coarse-scale quantities of interest found in MD simulations and enables simulation at the component level. 
Finally, the proposed atomistically informed multiplicative hyper-elasto-plasticity-damage model has been validated against the limited available experimental results for the simulation of hyper-velocity impacts of projectiles on silica glass targets. The second problem addressed in the thesis involves an upscaling approach for multi-porosity media, analyzed using the so-called MultiSPH method, a sequential SPH (Smoothed Particle Hydrodynamics) solver across multiple scales. Multi-porosity media are commonly found in natural and industrial materials, and their behavior is not easily captured with traditional numerical methods. The upscaling approach presented in the thesis is demonstrated on a porous medium consisting of three scales: SPH methods are used to characterize the behavior of individual pores at the microscopic scale, and a homogenization technique is then used to upscale to the meso- and macroscopic levels. The accuracy of the MultiSPH approach is confirmed by comparing the results with analytical solutions for simple microstructures, as well as with detailed single-scale SPH simulations and experimental data for more complex microstructures.
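At the heart of any SPH solver is the smoothing kernel that weights contributions from neighboring particles. As a generic illustration (not taken from the thesis), here is the standard 1D cubic B-spline kernel, normalized so it integrates to one over the real line:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic B-spline SPH smoothing kernel.
    r: particle separation, h: smoothing length.
    Compact support: the kernel vanishes for |r| >= 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)  # 1D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w
```

Field quantities at a point are then approximated as kernel-weighted sums over neighboring particles, which is what lets SPH handle the complex pore geometries mentioned above without a mesh.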
6

Deep Self-Modeling for Robotic Systems

Kwiatkowski, Robert January 2022 (has links)
Just as self-awareness is important to higher-level human cognition, so too is the ability to self-model important for performing complex behaviors. I demonstrate that the power of these self-models grows with the complexity of the problems being solved, and thus that they provide a framework for higher-level cognition. I demonstrate that self-models can be used to effectively control, and improve on, existing control algorithms, allowing agents to perform complex tasks. I further investigate new ways in which these self-models can be learned and applied to increase their efficacy and improve their ability to generalize across tasks and bodies. Finally, I demonstrate the overall power of these self-models to allow complex tasks to be completed with little data, across a variety of bodies, and using a number of algorithms.
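The central idea, a robot learning a predictive model of its own dynamics from its own data, can be sketched in its simplest linear form. This is a stand-in for the deep networks the thesis actually uses; all names and the linear model structure are illustrative assumptions.

```python
import numpy as np

def fit_forward_model(states, actions, next_states):
    """A minimal linear self-model: predict the next state from the current
    state and action, next ~ [s, a, 1] @ W, fit by least squares."""
    X = np.hstack([states, actions, np.ones((len(states), 1))])
    W, *_ = np.linalg.lstsq(X, next_states, rcond=None)
    return W

def predict_next(state, action, W):
    """Query the learned self-model for the predicted next state."""
    x = np.concatenate([state, action, [1.0]])
    return x @ W
```

Once such a model exists, a controller can evaluate candidate actions against the self-model instead of the physical robot, which is what makes data-efficient control possible.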
7

Figure Extraction from Scanned Electronic Theses and Dissertations

Kahu, Sampanna Yashwant 29 September 2020 (has links)
The ability to extract figures and tables from scientific documents enables key use-cases such as semantic parsing, summarization, and indexing. Although a few methods have been developed to extract figures and tables from scientific documents, their performance on scanned counterparts is considerably lower than on born-digital ones. To address this gap, we propose methods to effectively extract figures and tables from Electronic Theses and Dissertations (ETDs) that outperform existing methods by a considerable margin. Our contribution towards this goal is three-fold. (a) We propose a system/model for improving the performance of existing figure and table extraction methods on scanned scientific documents. (b) We release a new dataset containing 10,182 labelled page-images spanning 70 scanned ETDs, with 3.3k manually annotated bounding boxes for figures and tables. (c) Lastly, we release our entire code and the trained model weights to enable further research (https://github.com/SampannaKahu/deepfigures-open). / Master of Science / Portable Document Format (PDF) is one of the most popular document formats. However, parsing PDF files is not a trivial task. One use-case for parsing PDF files is the search functionality on websites hosting scholarly documents (e.g., IEEE Xplore). Having the ability to extract figures and tables from a scholarly document helps this use-case, among others. Methods using deep learning exist that extract figures from scholarly documents. However, a large number of scholarly documents, especially those published before the advent of computers, have been scanned from hard paper copies into PDF. In particular, we focus on scanned PDF versions of long documents, such as Electronic Theses and Dissertations (ETDs). No experiments have yet been done to evaluate the efficacy of the above-mentioned methods on this scanned corpus. 
This work explores and attempts to improve the performance of these existing methods on scanned ETDs. A new gold standard dataset is created and released as a part of this work for figure extraction from scanned ETDs. Finally, the entire source code and trained model weights are made open-source to aid further research in this field.
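Scoring predicted figure/table boxes against the manually annotated gold-standard boxes typically comes down to intersection-over-union (IoU). A minimal generic sketch of that metric, not copied from the released code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2), the standard metric for matching predicted
    bounding boxes against ground-truth annotations."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A prediction is usually counted as correct when its IoU with some ground-truth box exceeds a threshold such as 0.5, from which precision and recall follow.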
8

Deep Learning for Enhancing Precision Medicine

Oh, Min 07 June 2021 (has links)
Most medical treatments have been developed aiming at the best-on-average efficacy for large populations, resulting in treatments successful for some patients but not for others. This necessitates precision medicine, which tailors medical treatment to individual patients. Omics data holds comprehensive genetic information on individual variability at the molecular level and hence has the potential to be translated into personalized therapy. However, attempts to transform omics data-driven insights into clinically actionable models for individual patients have been limited. Meanwhile, advances in deep learning, one of the most promising branches of artificial intelligence, have produced unprecedented performance in various fields. Although several deep learning-based methods have been proposed to predict individual phenotypes, they have not established the state of the practice, due to the instability of selected or learned features derived from extremely high-dimensional data with low sample sizes, which often results in overfitted models with high variance. To overcome the limitations of omics data, recent advances in deep learning models, including representation learning models, generative models, and interpretable models, can be considered. The goal of the proposed work is to develop deep learning models that can overcome the limitations of omics data to enhance the prediction of personalized medical decisions. To achieve this, three key challenges should be addressed: 1) effectively reducing dimensions of omics data, 2) systematically augmenting omics data, and 3) improving the interpretability of omics data. / Doctor of Philosophy / Most medical treatments have been developed aiming at the best-on-average efficacy for large populations, resulting in treatments successful for some patients but not for others. This necessitates precision medicine, which tailors medical treatment to individual patients. 
Biological data such as DNA sequences and snapshots of genetic activities hold comprehensive information on individual variability and hence the potential to accelerate personalized therapy. However, attempts to transform data-driven insights into clinical models for individual patients have been limited. Meanwhile, advances in deep learning, one of the most promising branches of artificial intelligence, have produced unprecedented performance in various fields. Although several deep learning-based methods have been proposed to predict individual treatments or outcomes, they have not established the state of the practice, due to the complexity of biological data and its limited availability, which often results in overfitted models that may work on training data but not on test or unseen data. To overcome the limitations of biological data, recent advances in deep learning models, including representation learning models, generative models, and interpretable models, can be considered. The goal of the proposed work is to develop deep learning models that can overcome the limitations of omics data to enhance the prediction of personalized medical decisions. To achieve this, three key challenges should be addressed: 1) effectively reducing the complexity of biological data, 2) generating realistic biological data, and 3) improving the interpretability of biological data.
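Challenge (1), reducing the dimensionality of omics data, can be illustrated with the classical linear baseline: PCA computed via SVD. The proposed work targets learned (deep) representations; this sketch only shows the underlying idea of projecting high-dimensional, low-sample-size data onto a few informative components.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples-by-features data X onto its top principal
    components. Columns of the result are ordered by explained variance,
    so a few components capture most of the structure."""
    Xc = X - X.mean(axis=0)                      # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # (samples, n_components)
```

Representation-learning models such as autoencoders generalize this to nonlinear projections, but the goal, a compact embedding that stabilizes downstream predictors, is the same.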
9

Synthetic Electronic Medical Record Generation using Generative Adversarial Networks

Beyki, Mohammad Reza 13 August 2021 (has links)
Computers replaced our record books some time ago, and medical records are no exception. Electronic Health Records (EHRs) are the digital version of a patient's medical records. EHRs are available to authorized users, and they contain the medical records of the patient, which should help doctors understand a patient's condition quickly. In recent years, Deep Learning models have proved their value and have become state-of-the-art in computer vision, natural language processing, speech, and other areas. The private nature of EHR data has prevented public access to EHR datasets. There are many obstacles to creating a deep learning model with EHR data. Because EHR data consist primarily of huge sparse matrices, these challenges are mostly unique to this field. As a result, research in this area is limited, and existing work can be improved substantially. In this study, we focus on high-performance synthetic data generation for EHR datasets. Artificial data generation can help reduce privacy leakage for dataset owners, as it has been shown that de-identification methods are prone to re-identification attacks. We propose a novel approach, which we call Improved Correlation Capturing Wasserstein Generative Adversarial Network (SCorGAN), to create EHR data. This work leverages Deep Convolutional Neural Networks to extract and understand spatial dependencies in EHR data. To improve our model's performance, we focus on our Deep Convolutional AutoEncoder to better map our real EHR data to the latent space in which we train the Generator. To assess our model's performance, we demonstrate that our generative model can create excellent data that are statistically close to the input dataset. Additionally, we evaluate our synthetic dataset against the original data using our previous work on GAN performance evaluation. 
This work is publicly available at https://github.com/mohibeyki/SCorGAN / Master of Science / Artificial Intelligence (AI) systems have improved greatly in recent years. They are being used to understand all kinds of data. A practical use case for AI systems is to leverage their power to identify illnesses and find correlations between different conditions. To train AI and Machine Learning systems, we need to feed them huge datasets, and in the training process we need to guide them so that they learn the different features in our data. The more data an intelligent system has seen, the better it performs. However, health records are private, and we cannot share real people's health records with the public, whether they are researchers or not. This study provides a novel approach to synthetic data generation that others can use with intelligent systems. These systems can then work with actual health records and give us accurate feedback on people's health conditions. We show that our synthetic dataset is a good substitute for real datasets when training intelligent systems. Lastly, we present an intelligent system, trained using synthetic datasets, that identifies illnesses in a real dataset with high accuracy and precision.
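A common first check that synthetic EHR data are "statistically close" to the real data is to compare per-code prevalences, often called dimension-wise probability. A generic sketch for binary patient-by-code matrices follows; it is an illustrative metric, not the evaluation code from the study.

```python
import numpy as np

def dimension_wise_probability(real, synth):
    """Compare a binary EHR matrix (patients x medical codes) against a
    synthetic one by the prevalence of each code, returning both marginal
    vectors and the largest absolute gap between them."""
    p_real = real.mean(axis=0)    # fraction of patients with each code
    p_synth = synth.mean(axis=0)
    return p_real, p_synth, np.max(np.abs(p_real - p_synth))
```

A small maximum gap indicates the generator has at least captured the marginal statistics; correlation-level checks (as in SCorGAN's name) go further and compare code co-occurrences.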
10

Color Invariant Skin Segmentation

Xu, Han 25 March 2022 (has links)
This work addresses the problem of automatically detecting human skin in images without reliance on color information. Unlike previous methods, we present a new approach that performs well in the absence of such information. A key aspect of the work is that color-space augmentation is applied strategically during training, with the goal of reducing the influence of features based entirely on color and increasing semantic understanding. The resulting system exhibits a dramatic improvement in performance for images in which color details are diminished. We have demonstrated the concept using the U-Net architecture, and experimental results show improvements in evaluations for all Fitzpatrick skin tones in the ECU dataset. We further tested the system on the RFW dataset to show that the proposed method is consistent across different ethnicities and reduces bias toward any skin tone. Therefore, this work has strong potential to aid in mitigating bias in automated systems, with many applications including surveillance and biometrics. / Master of Science / Skin segmentation deals with the classification of skin and non-skin pixels and regions in an image containing this information. Although most previous skin-detection methods have used color cues almost exclusively, they are vulnerable to external factors (e.g., poor or unnatural illumination and varied skin tones). In this work, we present a new approach based on U-Net that performs well in the absence of color information. Specifically, we apply a new color-space augmentation during the training stage to improve the performance of the skin segmentation system under diverse illumination conditions and skin tones. The system was trained and tested with both the original and color-modified ECU dataset. We also tested our system on the RFW dataset, a larger dataset covering four ethnic groups with different skin tones. The experimental results show improvements in evaluations across skin tones and complex illumination.
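Color-suppressing augmentation of the kind described can be sketched generically. The specific operations below (random grayscale replacement and channel shuffling) are illustrative assumptions, not the exact augmentations used in the thesis; the point is that the segmenter sees color-unreliable inputs and must fall back on shape and texture.

```python
import numpy as np

def color_space_augment(img, rng):
    """Randomly perturb the color of an RGB image (H, W, 3) in [0, 1]:
    either replace it with its grayscale copy, shuffle its channels,
    or leave it unchanged, so training cannot rely on absolute color."""
    choice = rng.integers(3)
    if choice == 0:
        # luminance-weighted grayscale, broadcast back to 3 channels
        gray = img @ np.array([0.299, 0.587, 0.114])
        return np.repeat(gray[..., None], 3, axis=-1)
    if choice == 1:
        return img[..., rng.permutation(3)]
    return img
```

Applied on the fly during training, each epoch sees a differently colored version of the same labeled pixels, pushing the network toward color-invariant features.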
