About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
701

A New Era for Wireless Communications Physical Layer: A Data-Driven Learning-Based Approach.

Al-Baidhani, Amer 23 August 2022 (has links)
No description available.
702

Spatio-Temporal Analysis of EEG using Deep Learning

Sudalairaj, Shivchander 22 August 2022 (has links)
No description available.
703

More is Better than One: The Effect of Ensembling on Deep Learning Performance in Biochemical Prediction Problems

Stern, Jacob A. 07 August 2023 (has links) (PDF)
This thesis presents two papers addressing important biochemical prediction challenges. The first paper focuses on accurate protein distance predictions and introduces updates to the ProSPr network. We evaluate its performance in the Critical Assessment of techniques for Protein Structure Prediction (CASP14) competition, investigating its accuracy dependence on sequence length and multiple sequence alignment depth. The ProSPr network, an ensemble of three convolutional neural networks (CNNs), demonstrates superior performance compared to individual networks. The second paper addresses the issue of accurate ligand ranking in virtual screening for drug discovery. We propose MILCDock, a machine learning consensus docking tool that leverages predictions from five traditional molecular docking tools. MILCDock, an ensemble of eight neural networks, outperforms single-network approaches and other consensus docking methods on the DUD-E dataset. However, we find that LIT-PCBA targets remain challenging for all methods tested. Furthermore, we explore the effectiveness of training machine learning tools on the biased DUD-E dataset, emphasizing the importance of mitigating its biases during training. Collectively, this work emphasizes the power of ensembling in deep learning-based biochemical prediction problems, highlighting improved performance through the combination of multiple models. Our findings contribute to the development of robust protein distance prediction tools and more accurate virtual screening methods for drug discovery.
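To make the ensembling idea concrete, the following is a minimal Python sketch of the simplest form of the technique the abstract describes: averaging the class-probability predictions of several independently trained models. The `models` objects and their `predict_proba` interface are hypothetical stand-ins, not code from ProSPr or MILCDock.

```python
import numpy as np

def ensemble_average(models, x):
    """Average the per-class probability predictions of several models.

    `models` is an iterable of objects exposing predict_proba(x), e.g.
    independently trained networks; the ensemble prediction is the
    element-wise mean of their outputs.
    """
    probs = np.stack([m.predict_proba(x) for m in models])  # (n_models, n_classes)
    return probs.mean(axis=0)  # averaged class probabilities
```

Averaging reduces the variance of individual models' errors, which is one common explanation for the gains ensembling shows in both papers.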
704

Towards Explainable Event Detection and Extraction

Mehta, Sneha 22 July 2021 (has links)
Event extraction refers to extracting specific knowledge of incidents from natural language text and consolidating it into a structured form. Important applications of event extraction include search, retrieval, question answering, and event forecasting. However, before events can be extracted it is imperative to detect them, i.e., to identify which documents in a large collection contain events of interest and, from those, to extract the sentences that might contain event-related information. This task is challenging because it is easier to obtain labels at the document level than fine-grained annotations at the sentence level. Current approaches for this task are suboptimal because they directly aggregate sentence probabilities estimated by a classifier to obtain document probabilities, resulting in error propagation. To alleviate this problem, we propose to leverage recent advances in representation learning by using attention mechanisms. Specifically, for event detection we propose a method to compute document embeddings from sentence embeddings by leveraging attention, and we train a document classifier on those embeddings to mitigate the error propagation problem. However, we find that existing attention mechanisms are ill-suited for this task, because they are either suboptimal or use a large number of parameters. To address this problem, we propose a lean attention mechanism that is effective for event detection. Current approaches for event extraction rely on fine-grained labels in specific domains. Extending extraction to new domains is challenging because of the difficulty of collecting fine-grained data. Machine reading comprehension (MRC) based approaches, which enable zero-shot extraction, struggle with syntactically complex sentences and long-range dependencies. To mitigate this problem, we propose a syntactic sentence simplification approach that is guided by the MRC model to improve its performance on event extraction. / Doctor of Philosophy / Event extraction is the task of extracting events of societal importance from natural language texts. The task has a wide range of applications, from search, retrieval, and question answering to forecasting population-level events such as civil unrest and disease occurrences with reasonable accuracy. Before events can be extracted, it is imperative to identify the documents that are likely to contain the events of interest and to extract the sentences that mention those events. This is termed event detection. Current approaches for event detection are suboptimal. They assume that events are neatly partitioned into sentences and obtain document-level event probabilities directly from predicted sentence-level probabilities. In this dissertation, under the same assumption, we leverage representation learning to mitigate some of the shortcomings of previous event detection methods. Current approaches to event extraction are limited to restricted domains and require fine-grained labeled corpora for their training. One way to extend event extraction to new domains is by enabling zero-shot extraction. Machine reading comprehension (MRC) based approaches provide a promising way forward for zero-shot extraction. However, this approach suffers from the long-range dependency problem and faces difficulty in handling syntactically complex sentences with multiple clauses. To mitigate this problem, we propose a syntactic sentence simplification algorithm that is guided by the MRC system to improve its performance.
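As background for the attention-based document embeddings this abstract describes, here is a minimal sketch of attention pooling: sentence embeddings are scored, softmax-normalized, and combined into a single document embedding. The single scoring vector `w` is an illustrative assumption chosen to keep the parameter count low, in the spirit of a lean mechanism; it is not the dissertation's exact parameterization.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def attention_pool(sentence_embs, w):
    """Collapse sentence embeddings (n_sentences, d) into one document
    embedding using a single learned scoring vector w of shape (d,) --
    a deliberately parameter-lean form of attention."""
    scores = sentence_embs @ w     # one relevance score per sentence
    alpha = softmax(scores)        # attention weights summing to 1
    return alpha @ sentence_embs   # (d,) attention-weighted sum
```

A document classifier trained on such pooled embeddings sees a soft combination of sentences rather than hard per-sentence probabilities, which is how this style of model sidesteps the error propagation mentioned above.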
705

Finite Element Analysis of Deep Excavations

Bentler, David J. 08 October 1998 (has links)
This dissertation describes enhancements made to the finite element program SAGE and research on the performance of deep excavations. SAGE was developed at Virginia Tech for the analysis of soil-structure interaction problems (Morrison, 1995). The purpose of the work with SAGE described in this text was to increase the capabilities of the program for soil-structure analysis. The purpose of the research on deep excavations was to develop a deeper understanding of the behavior of excavation support systems. The significant changes made to SAGE during this study include the implementation of Biot consolidation, the implementation of axisymmetric analysis, and the creation of a steady-state seepage module. These changes, as well as several others, are described. A new manual for the program is also included. A review of published studies of deep excavation performance and recent case histories is presented. Factors affecting the performance of excavation support systems are examined, and performance data from recently published case histories are compared with data from Goldberg et al.'s 1976 report to the Federal Highway Administration. The design, construction, and performance of the deep excavation for the Dam Number 2 Hydroelectric Project are described. Finite element analyses of the excavation that were performed with SAGE are presented and discussed. / Ph. D.
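As an aside for readers unfamiliar with what a steady-state seepage module computes: for homogeneous, isotropic soil, steady seepage reduces to Laplace's equation for the hydraulic head. The sketch below solves it with a finite-difference Jacobi iteration purely for brevity; SAGE itself is a finite element program, and none of this code is taken from it.

```python
import numpy as np

def steady_seepage_head(h, fixed, tol=1e-6, max_iter=20000):
    """Jacobi iteration for Laplace's equation on a uniform 2D grid.

    h     : initial hydraulic head array, with boundary values set
    fixed : boolean mask marking prescribed-head (Dirichlet) cells
    Returns the converged head field; flow follows its gradient.
    """
    for _ in range(max_iter):
        h_new = h.copy()
        # each interior head becomes the average of its four neighbors
        h_new[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                    h[1:-1, :-2] + h[1:-1, 2:])
        h_new[fixed] = h[fixed]  # re-impose prescribed boundary heads
        if np.abs(h_new - h).max() < tol:
            return h_new
        h = h_new
    return h
```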
706

Visual Analytics for High Dimensional Simulation Ensembles

Dahshan, Mai Mansour Soliman Ismail 10 June 2021 (has links)
Recent advancements in data acquisition, storage, and computing power have enabled scientists from various scientific and engineering domains to simulate more complex and longer phenomena. Scientists are usually interested in understanding the behavior of a phenomenon under different conditions. To do so, they run multiple simulations with different configurations (i.e., parameter settings, boundary/initial conditions, or computational models), resulting in an ensemble dataset. An ensemble empowers scientists to quantify the uncertainty in the simulated phenomenon in terms of the variability between ensemble members, the parameter sensitivity and optimization, and the characteristics and outliers within the ensemble members, which can lead to valuable insights about the simulated model. The size, complexity, and high dimensionality (e.g., of simulation input and output parameters) of simulation ensembles pose a great challenge to their analysis and exploration. Ensemble visualization provides a convenient way to convey the main characteristics of the ensemble for an enhanced understanding of the simulated model. The majority of current ensemble visualization techniques focus on analyzing either the ensemble space or the parameter space. Most parameter space visualizations are not designed for high-dimensional datasets or do not show the intrinsic structures in the ensemble. Conversely, the ensemble space has been visualized either as a comparative visualization of a limited number of ensemble members or as an aggregation of multiple ensemble members that omits potential details of the original ensemble. Thus, to unfold the full potential of simulation ensembles, we designed and developed an approach to the visual analysis of high-dimensional simulation ensembles that merges sensemaking, human expertise, and intuition with machine learning and statistics. In this work, we explore how semantic interaction and sensemaking can be used to build interactive and intelligent visual analysis tools for simulation ensembles. Specifically, we focus on the complex processes that derive meaningful insights from exploring and iteratively refining the analysis of high-dimensional simulation ensembles when prior knowledge about ensemble features and correlations is limited or unavailable. We first developed GLEE (Graphically-Linked Ensemble Explorer), an exploratory visualization tool that enables scientists to analyze and explore correlations and relationships between non-spatial ensembles and their parameters. Then, we developed Spatial GLEE, an extension to GLEE that explores spatial data while simultaneously considering spatial characteristics (i.e., autocorrelation and spatial variability) and the dimensionality of the ensemble. Finally, we developed Image-based GLEE to explore exascale simulation ensembles produced by in-situ visualization. We collaborated with domain experts to evaluate the effectiveness of GLEE using real-world case studies and experiments from different domains. The core contributions of this work are a visual approach that enables the simultaneous exploration of parameter and ensemble spaces for 2D/3D high-dimensional ensembles, three interactive visualization tools that support exploring, searching, filtering, and making sense of non-spatial, spatial, and image-based ensembles, and the use of real-world cases from different domains to demonstrate the effectiveness of the proposed approach.
The aim of the proposed approach is to help scientists gain insights by answering questions or testing hypotheses about different aspects of the simulated phenomenon and to facilitate knowledge discovery in complex datasets. / Doctor of Philosophy / Scientists run simulations to understand complex phenomena and processes that are expensive, difficult, or even impossible to reproduce in the real world. Current advancements in high-performance computing have enabled scientists from various domains, such as climate, computational fluid dynamics, and aerodynamics, to run more complex simulations than before. However, a single simulation run is often not enough to capture all features of a simulated phenomenon. Therefore, scientists run multiple simulations using perturbed input parameters, initial and boundary conditions, or different models, resulting in what is known as an ensemble. An ensemble empowers scientists to understand a model's behavior by studying relationships between and among ensemble members, the optimal parameter settings, and the influence of input parameters on the simulation output, which can lead to useful knowledge and insights about the simulated phenomenon. To effectively analyze and explore simulation ensembles, visualization techniques play a significant role in facilitating knowledge discovery through graphical representations. Ensemble visualization offers scientists a better way to understand the simulated model. Most current ensemble visualization techniques are designed to analyze and/or explore either the ensemble space or the parameter space. Therefore, we designed and developed a visual analysis approach for exploring and analyzing high-dimensional parameter and ensemble spaces simultaneously by integrating machine learning and statistics with sensemaking and human expertise. The contribution of this work is an exploration of how semantic interaction and sensemaking can be used to explore and analyze high-dimensional simulation ensembles. To do so, we designed and developed a visual analysis approach that manifests in an exploratory visualization tool, GLEE (Graphically-Linked Ensemble Explorer), that allows scientists to explore, search, filter, and make sense of high-dimensional 2D/3D simulation ensembles. GLEE's visualization pipeline and interaction techniques use deep learning, feature extraction, spatial regression, and Semantic Interaction (SI) techniques to support the exploration of non-spatial, spatial, and image-based simulation ensembles. GLEE's different visualization tools were evaluated with domain experts from different fields using real-world case studies and experiments.
707

Neural Enhancement Strategies for Robust Speech Processing

Nawar, Mohamed Nabih Ali Mohamed 10 March 2023 (has links)
In real-world scenarios, speech signals are often contaminated with environmental noise and reverberation, which degrade speech quality and intelligibility. Lately, the development of deep learning algorithms has marked milestones in speech-based research fields, e.g., speech recognition and spoken language understanding. As one of the crucial topics in the speech processing research area, speech enhancement aims to restore clean speech signals from noisy signals. In recent decades, many conventional statistics-based speech enhancement algorithms have been proposed. However, the performance of these approaches is limited in non-stationary noisy conditions. The rise of deep learning-based approaches for speech enhancement has led to revolutionary advances in performance. In this context, speech enhancement is formulated as a supervised learning problem, which tackles the open challenges faced by conventional speech enhancement approaches. In general, deep learning speech enhancement approaches are categorized into frequency-domain and time-domain approaches. In particular, we experiment with the Wave-U-Net model, a solid time-domain approach to speech enhancement. First, we attempt to improve the performance of back-end speech-based classification tasks in noisy conditions. In detail, we propose a pipeline that integrates the Wave-U-Net (later modified into the Dilated Encoder Wave-U-Net) as a pre-processing stage for noise elimination, followed by a temporal convolutional network (TCN) for the intent classification task. Both models are trained independently of each other. Experimental results showed that the modified Wave-U-Net model not only improves speech quality and intelligibility, measured in terms of the PESQ and STOI metrics, but also improves back-end classification accuracy. Later, it was observed that the disjoint training approach often introduces signal distortion in the output of the speech enhancement module and can thus deteriorate back-end performance. Motivated by this, we introduce a set of fully time-domain joint training (JT) pipelines that combine the Wave-U-Net model with the TCN intent classifier. These architectures differ in the interconnections between the front-end and the back-end. All architectures are trained with a loss function that combines the MSE loss as the front-end loss with the cross-entropy loss for the classification task. Based on our observations, we find that the JT architecture that balances both components' contributions equally yields better classification accuracy. Recently, the release of large-scale pre-trained feature extraction models has considerably simplified the development of speech classification and recognition algorithms. However, environmental noise and reverberation still negatively affect performance, making robustness in noisy conditions mandatory in real-world applications. One way to mitigate the noise effect is to integrate a speech enhancement front-end that removes artifacts from the desired speech signals. Unlike state-of-the-art enhancement approaches that operate either on speech spectrograms or directly on time-domain signals, we study how enhancement can be applied directly to the speech embeddings extracted using the Wav2Vec and WavLM models.
We investigate a variety of training approaches, considering different flavors of joint and disjoint training of the speech enhancement front-end and of the classification/recognition back-end. We perform exhaustive experiments on the Fluent Speech Commands and Google Speech Commands datasets, contaminated with noises from the Microsoft Scalable Noisy Speech Dataset, as well as on LibriSpeech, contaminated with noises from the MUSAN dataset, considering intent classification, keyword spotting, and speech recognition tasks, respectively. Results show that enhancing the speech embeddings is a viable and computationally effective approach, and they provide insights into the most promising training approaches.
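A hedged sketch of the joint-training objective described above: a weighted sum of an MSE enhancement loss and a cross-entropy classification loss, back-propagated through both components. The `enhancer` and `classifier` modules and the single `alpha` weight are illustrative assumptions; the thesis's exact architectures and loss weighting are not reproduced here.

```python
import torch.nn.functional as F

def joint_loss(enhancer, classifier, noisy, clean, labels, alpha=0.5):
    """Joint front-end/back-end training objective.

    enhancer   : e.g. a Wave-U-Net-style enhancement network
    classifier : e.g. a TCN intent classifier fed the enhanced signal
    alpha=0.5 corresponds to weighting both contributions equally.
    """
    enhanced = enhancer(noisy)                    # denoised waveform
    logits = classifier(enhanced)                 # class scores
    loss_enh = F.mse_loss(enhanced, clean)        # front-end loss
    loss_clf = F.cross_entropy(logits, labels)    # back-end loss
    return alpha * loss_enh + (1.0 - alpha) * loss_clf
```

Because the classification gradients flow through the enhancer, the front-end learns to remove noise in a way that also preserves the features the classifier needs, which is the motivation for joint over disjoint training given above.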
708

On robustness and explainability of deep learning

Le, Hieu 06 February 2024 (has links)
There has been tremendous progress in machine learning, and specifically deep learning, in the last few decades. However, due to the inherent nature of deep neural networks, many questions regarding explainability and robustness remain open. More specifically, as deep learning models are shown to be brittle against malicious changes, understanding when models fail and how to construct models that are more robust against these types of attacks is of high interest. This work tries to answer some of the questions regarding the explainability and robustness of deep learning by tackling the problem across four topics. First, real-world datasets often contain noise, which can badly impact classification model performance. Furthermore, adversarial noise can be crafted to alter classification results. Geometric multi-resolution analysis (GMRA) is capable of capturing and recovering manifolds while preserving geometric features. We showed that GMRA can be applied to retrieve low-dimensional representations, which are more robust to noise and simplify classification models. Secondly, I showed that adversarial defense in the image domain can be partially achieved, without knowing the specific attack method, by employing a preprocessing model trained on the task of denoising. Next, I tackle the problem of adversarial generation in the text domain within the context of real-world applications. I devised a new method of crafting adversarial text by using filtered unlabeled data, which is usually more abundant than labeled data. Experimental results showed that the new method creates more natural and relevant adversarial texts than current state-of-the-art methods. Lastly, I presented my work on referring expression generation, aiming at creating a more explainable natural language model. The proposed method decomposes the referring expression generation task into two subtasks, and experimental results showed that the generated expressions are more comprehensible to human readers. I hope that the approaches proposed here can help further our understanding of the explainability and robustness of deep learning models.
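A minimal sketch of the attack-agnostic preprocessing defense from the second topic: every input is passed through a denoiser, trained offline on noisy/clean pairs, before classification. The `denoiser` and `classifier` models are assumed to exist; this shows the general pattern, not the thesis's implementation.

```python
import torch

@torch.no_grad()
def defended_predict(denoiser, classifier, x):
    """Classify x after a denoising pre-processing step.

    Small adversarial perturbations are (partially) removed by the
    denoiser without any knowledge of the attack that crafted them.
    """
    x_clean = denoiser(x)  # remove (adversarial) noise
    return classifier(x_clean).argmax(dim=-1)
```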
709

Foundations of Radio Frequency Transfer Learning

Wong, Lauren Joy 06 February 2024 (has links)
The introduction of Machine Learning (ML) and Deep Learning (DL) techniques into modern radio communications systems, a field known as Radio Frequency Machine Learning (RFML), has the potential to provide increased performance and flexibility compared to traditional signal processing techniques, and has broad utility in both the commercial and defense sectors. Existing RFML systems predominantly utilize supervised learning solutions in which the training process is performed offline, before deployment, and the learned model remains fixed once deployed. The inflexibility of these systems means that, while they are appropriate for the conditions assumed during offline training, they show limited adaptability to changes in the propagation environment and transmitter/receiver hardware, leading to significant performance degradation. Given the fluidity of modern communication environments, this rigidness has limited the widespread adoption of RFML solutions to date. Transfer Learning (TL) is a means to mitigate such performance degradation by re-using prior knowledge learned from a source domain and task to improve performance on a "similar" target domain and task. However, the benefits of TL have yet to be fully demonstrated and integrated into RFML systems. This dissertation begins by clearly defining the problem space of RF TL through a domain-specific TL taxonomy for RFML that provides common language and terminology with concrete, Radio Frequency (RF)-specific example use cases. Then, the impacts on performance of the RF domain, characterized by the hardware and channel environment(s), and the task, characterized by the application(s) being addressed, are studied, and methods and metrics for predicting and quantifying RF TL performance are examined. In total, this work provides the foundational knowledge to more reliably use TL approaches in RF contexts and opens directions for future work that will improve the robustness and increase the deployability of RFML. / Doctor of Philosophy / The field of Radio Frequency Machine Learning (RFML) introduces Machine Learning (ML) and Deep Learning (DL) techniques into modern radio communications systems and is expected to be a core component of 6G technologies and beyond. While RFML provides a myriad of benefits over traditional radio communications systems, existing approaches are generally incapable of adapting to changes that will inevitably occur over time, which causes severe performance degradation. Transfer Learning (TL) offers a solution to the inflexibility of current RFML systems through techniques for re-using and adapting existing models for new, but similar, problems. TL is an approach often used in image- and language-based ML/DL systems, but it has yet to be commonly used by RFML researchers. This dissertation aims to provide the foundational knowledge necessary to reliably use TL in RFML systems, from the definition and categorization of RF TL techniques to practical guidelines for when to use RF TL in real-world systems. The unique elements of RF TL not present in other modalities are exhaustively studied, and methods and metrics for measuring and predicting RF TL performance are examined.
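As a concrete illustration of the model-reuse flavor of TL the dissertation taxonomizes, here is a PyTorch-style sketch of fine-tuning: freeze a feature extractor trained on the source RF domain/task and retrain only a fresh head on the target task. The `backbone` and `head` attribute names, and the assumption that the head is a linear layer, are illustrative, not part of the dissertation.

```python
import torch.nn as nn

def prepare_for_transfer(model, n_target_classes):
    """Freeze source-domain features; attach a new target-task head.

    Only the new head's parameters receive gradients during
    target-domain training, so prior knowledge is re-used as-is.
    """
    for p in model.backbone.parameters():
        p.requires_grad = False                      # keep source features fixed
    model.head = nn.Linear(model.head.in_features,   # assumes a linear head
                           n_target_classes)
    return model
```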
710

Influence of section depth on the structural behaviour of reinforced concrete continuous deep beams

Yang, Keun-Hyeok, Ashour, Ashraf January 2007 (has links)
Yes / Although the depth of reinforced concrete deep beams is much greater than that of slender beams, most existing tests on deep beams have focused on simply supported beams with a scaled depth below 600 mm. In the present paper, test results for 12 two-span reinforced concrete deep beams are reported. The main parameters investigated were the beam depth, which was varied from 400 mm to 720 mm, the concrete compressive strength, and the shear span-to-overall depth ratio. All beams had the same longitudinal top and bottom reinforcement and no web reinforcement, to assess the effect of changing the beam depth on the shear strength of such beams. All beams tested failed owing to a significant diagonal crack connecting the edges of the load and intermediate support plates. The influence of beam depth on shear strength was more pronounced in continuous deep beams than in simply supported ones, and in beams having higher concrete compressive strength. A numerical technique based on the upper-bound analysis of plasticity theory was developed to assess the load capacity of continuous deep beams. The influence of the beam depth was captured through the effectiveness factor of concrete in compression, to cater for the size effect. Comparisons between the total capacity from the proposed technique and that measured experimentally in the current investigation and elsewhere show good agreement, even though the section depth of the beams was varied.
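For orientation, a classical upper-bound (plasticity) expression of the family used in such analyses is shown below: the Nielsen-type solution for the shear capacity of a concrete beam without web reinforcement, where the effectiveness factor of concrete in compression appears explicitly. This is a textbook form given for illustration; the paper's own formulation, in particular its size-dependent effectiveness factor, is not reproduced here.

```latex
% V_u : ultimate shear capacity; b, h : beam width and overall depth;
% a : shear span; f_c : concrete compressive strength;
% nu : effectiveness factor, taken to decrease with depth h (size effect).
\[
  V_u = \frac{\nu f_c\, b\, h}{2}
        \left( \sqrt{1 + \left(\frac{a}{h}\right)^{2}} - \frac{a}{h} \right),
  \qquad \nu = \nu(f_c, h).
\]
```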
