11

Predicting Gene Functions and Phenotypes by combining Deep Learning and Ontologies

Kulmanov, Maxat 08 April 2020 (has links)
The amount of available protein sequences is rapidly increasing, mainly as a consequence of the development and application of high-throughput sequencing technologies in the life sciences. Identifying the functions of proteins, and furthermore the phenotypes that may be associated with a loss (or gain) of function in these proteins, is a key question in the life sciences. Protein functions are generally determined experimentally, and it is clear that experimental determination will not scale to the current, and rapidly increasing, amount of available protein sequences (over 300 million). Identifying phenotypes resulting from loss of function is even more challenging, as the phenotype is modified by whole-organism interactions and environmental variables. Accurate computational prediction of protein functions and loss-of-function phenotypes would therefore be of significant value both to academic research and to the biotechnology industry. We developed and expanded novel methods for representation learning and for predicting protein functions and their loss-of-function phenotypes. We use deep neural network algorithms and combine them with symbolic inference into neural-symbolic algorithms. Our work significantly improves previously developed methods for predicting protein functions through methodological advances in machine learning, incorporation of broader data types that may be predictive of functions, and improved systems for neural-symbolic integration. The methods we developed are generic and can be applied to other domains in which similar types of structured and unstructured information exist. In the future, our methods can be applied to the prediction of protein function for metagenomic samples in order to evaluate the potential for discovery of novel proteins of industrial value. They can also be applied to the prediction of loss-of-function phenotypes in human genetics, with the results incorporated into a variant prioritization tool for diagnosing patients with Mendelian disorders.
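A minimal sketch of one neural-symbolic ingredient referenced above: making per-term predictions consistent with the ontology. Assuming a neural model has produced raw scores over Gene Ontology terms, the true-path rule (annotation with a term implies annotation with all of its ancestors) can be enforced by propagating scores up the is-a hierarchy. The toy ontology and scores below are invented for illustration, not taken from the thesis.

```python
# A minimal sketch of ontology-consistent prediction: raw per-term scores
# from any neural model are made to respect the true-path rule (a protein
# annotated with a term is implicitly annotated with all its ancestors).
# The tiny is-a hierarchy and the scores are invented for illustration.

# child -> list of parents (toy fragment of an is-a DAG)
PARENTS = {
    "GO:B": ["GO:A"],          # GO:B is-a GO:A
    "GO:C": ["GO:A"],
    "GO:D": ["GO:B", "GO:C"],  # multiple parents are allowed
}

def propagate_scores(scores):
    """Raise each ancestor's score to at least the max over its
    descendants, enforcing monotonicity along is-a edges."""
    out = dict(scores)
    changed = True
    while changed:  # iterate to a fixed point (fine for small DAGs)
        changed = False
        for child, parents in PARENTS.items():
            for p in parents:
                if out.get(child, 0.0) > out.get(p, 0.0):
                    out[p] = out[child]
                    changed = True
    return out

raw = {"GO:A": 0.2, "GO:B": 0.4, "GO:C": 0.1, "GO:D": 0.9}
print(propagate_scores(raw))
# GO:D's high score propagates up to GO:B, GO:C, and GO:A.
```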
12

Assessing the Impact of Restored Wetlands on Bat Foraging Activity Over Nearby Farmland

Allagas, Philip 01 August 2020 (has links)
Up to 87% of the world’s wetlands have been destroyed, considerably reducing the ecosystem services these wetlands once provided. More recently, many wetlands are being restored in an attempt to regain these ecosystem services. This study seeks to determine the effects of restored wetlands on local bat habitat use. Bat activity was found to be significantly higher around the wetlands when compared to distant grassy fields; however, no significant difference was found between the restored wetlands and a remote cattle farm containing multiple water features. Geospatial models of bat distribution and bat foraging, produced using machine learning, showed higher habitat suitability and foraging activity around restored wetlands than around distant grassy fields, suggesting that wetlands provide vital habitat for insectivorous bats. This study demonstrates that restored wetlands promote bat activity and bat foraging, and that restoring wetlands may be a useful means of increasing natural pest control over nearby farmland.
13

Efficient deep networks for real-world interaction

Abhishek Chaurasia (6864272) 16 December 2020 (has links)
Deep neural networks are essential in applications such as image categorization, natural language processing, autonomous driving, home automation, and robotics. Most of these applications require instantaneous processing of data and decision making. In general, existing neural networks are computationally expensive, and hence they fail to perform in real time. Models performing semantic segmentation are being extensively used in self-driving vehicles. Autonomous vehicles not only need segmented output, but also a control system capable of processing the segmented output and deciding actuator outputs such as speed and direction.

In this thesis we propose efficient neural network architectures with fewer operations and parameters compared to current state-of-the-art algorithms. Our work mainly focuses on designing deep neural network architectures for semantic segmentation. First, we introduce a few network modules and concepts which help in reducing model complexity. We then show that, in terms of accuracy, our proposed networks perform better than, or at least on par with, state-of-the-art neural networks. We also compare our networks' performance on edge devices such as the Nvidia TX1. Lastly, we present a control system capable of predicting the steering angle and speed of a vehicle based on the neural network output.
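As an illustration of the kind of module that trims decoder cost in such architectures, here is a sketch of a bottlenecked upsampling block in the spirit of LinkNet-style decoders: 1x1 convolutions squeeze the channel count around the expensive transposed convolution. The channel counts and layer choices are illustrative assumptions, not the thesis's exact design.

```python
# A minimal PyTorch sketch of one way to cut decoder cost in a
# segmentation network: squeeze channels with 1x1 convolutions around the
# expensive transposed convolution (LinkNet-style). Channel counts are
# illustrative; this is not claimed to be the thesis's exact module.

import torch
import torch.nn as nn

class CheapDecoderBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        mid = in_ch // 4  # bottleneck: most work happens on fewer channels
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, kernel_size=1, bias=False),   # squeeze
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(mid, mid, kernel_size=3, stride=2,
                               padding=1, output_padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, out_ch, kernel_size=1, bias=False),  # expand
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 256, 32, 32)
print(CheapDecoderBlock(256, 128)(x).shape)  # torch.Size([1, 128, 64, 64])
```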
14

Traffic Signs Detection and Classification

Kanagaraj, Kanimozhi 01 May 2022 (has links)
Traffic sign recognition systems have been introduced to address road-safety concerns. These systems are widely adopted by the automotive industry, where safety-critical systems are developed for car manufacturers. Developing an automatic traffic sign detection and recognition (TSDR) system is a tedious job given the continuous changes in the environment and lighting conditions. Other issues that also need to be addressed include partial occlusion, multiple traffic signs appearing at the same time, and blurring and fading of traffic signs, all of which can create problems for detection. Applying a TSDR system in a real-time environment requires a fast algorithm. As well as dealing with these issues, a recognition system should also avoid erroneously detecting signs where none exist. The TSDR system detects and classifies a collection of 43 individual traffic-sign classes captured in a real-world environment. In this project, classification of individual traffic signs is done using a deep convolutional neural network with a VGG-style architecture to develop an efficient classifier with improved prediction accuracy (using the GTSRB dataset).
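A minimal sketch of what a VGG-style classifier for the 43 GTSRB classes can look like, assuming 32x32 RGB crops as in common GTSRB preprocessing; the layer widths are illustrative, not the project's exact model.

```python
# A minimal PyTorch sketch of a VGG-style classifier for 43 traffic-sign
# classes, assuming 32x32 RGB inputs. Layer sizes are illustrative.

import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch):
    # Two 3x3 convolutions followed by 2x2 max pooling, as in VGG.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

model = nn.Sequential(
    vgg_block(3, 32),    # 32x32 -> 16x16
    vgg_block(32, 64),   # 16x16 -> 8x8
    vgg_block(64, 128),  # 8x8  -> 4x4
    nn.Flatten(),
    nn.Linear(128 * 4 * 4, 256), nn.ReLU(inplace=True),
    nn.Dropout(0.5),
    nn.Linear(256, 43),  # one logit per GTSRB class
)

logits = model(torch.randn(8, 3, 32, 32))
print(logits.shape)  # torch.Size([8, 43])
```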
15

Learning with constraints on processing and supervision

Acar, Durmuş Alp Emre 30 August 2023 (has links)
Collecting a sufficient amount of data and centralizing it are costly operations that also raise privacy concerns. These practical concerns arise from the communication costs between data-collecting devices and from the personal nature of the data, such as an end user's text messages. The goal is to train generalizable machine learning models under constraints on the data, without sharing or transferring it. In this thesis, we present solutions to several aspects of learning with data constraints, such as processing and supervision. We focus on federated learning, online learning, and learning generalizable representations, and provide setting-specific training recipes. In the first scenario, we tackle a federated learning problem where data is decentralized across different users and should not be centralized. Traditional approaches either ignore the heterogeneity problem or increase communication costs to handle it. Our solution carefully addresses the heterogeneity of user data by imposing a dynamic regularizer that adapts to the heterogeneity of each user without extra transmission costs. Theoretically, we establish convergence guarantees. We extend our ideas to personalized federated learning, where the model is customized to each end user, and to heterogeneous federated learning, where users support different model architectures. In the next scenario, we consider online meta-learning, where there is only one user and the data distribution of that user changes over time. The goal is to adapt to new data distributions with very little labeled data from each distribution. A naive approach is to store data from the different distributions and train a model from scratch once sufficient data is available. Our solution efficiently summarizes the information from each task's data so that the memory footprint does not scale with the number of tasks. Lastly, we aim to train generalizable representations given a dataset. We consider a setting where we have access to a powerful (more complex) teacher model. Traditional methods do not distinguish between data points and force the model to learn all the information from the powerful model. Our proposed method focuses on the learnable input space and carefully distills attainable information from the teacher model by discarding over-capacity information. We compare our methods with state-of-the-art methods in each setup and show significant performance improvements. Finally, we discuss potential directions for future work.
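To make the dynamic-regularizer idea concrete, here is a hedged sketch of a federated client update: the local loss is augmented with a linear correction term and a proximal pull toward the server model, and the correction state is refreshed after training, without transmitting anything beyond the model itself. The function names and the constant `alpha` are illustrative assumptions, not the thesis's exact formulation.

```python
# A minimal PyTorch sketch of a federated client update with a dynamic
# regularizer: local training minimizes
#   task loss - <correction, w> + (alpha/2) * ||w - w_server||^2,
# and the per-client correction state is updated after local training.

import torch

def client_update(model, server_params, correction, data_loader,
                  loss_fn, alpha=0.01, lr=0.1, epochs=1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            # Dynamic regularizer: linear correction plus a proximal
            # pull toward the current server model.
            for w, w_srv, c in zip(model.parameters(), server_params,
                                   correction):
                loss = loss - (c * w).sum() \
                       + (alpha / 2) * ((w - w_srv) ** 2).sum()
            loss.backward()
            opt.step()
    # Refresh the client's correction state for the next round.
    with torch.no_grad():
        for w, w_srv, c in zip(model.parameters(), server_params,
                               correction):
            c -= alpha * (w - w_srv)
    return model, correction
```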
16

Optimizing Deep Neural Networks Performance: Efficient Techniques For Training and Inference

Sharma, Ankit 01 January 2023 (has links) (PDF)
Recent advances in computer vision tasks are mainly due to the success of large deep neural networks. Current state-of-the-art models have high computational costs during inference and suffer from a high memory footprint. Deploying these large networks on edge devices therefore remains a serious concern. Furthermore, training these over-parameterized networks is computationally expensive and requires a long training time. Thus, there is a demand for techniques that can efficiently reduce training costs and also allow neural networks to be deployed on mobile and embedded devices. This dissertation presents practices such as designing lightweight network architectures and increasing network resource utilization. These solutions improve the efficiency of large networks during training and inference. We first propose an efficient micro-architecture (slim modules) to construct a lightweight Slim-CNN for predicting face attributes. Slim modules use depthwise separable convolutions with pointwise convolutions, making them computationally efficient for embedded applications. Next, we investigate the problem of obtaining a compact pruned model from an untrained original network in a single-stage process. We introduce our RAPID framework, which distills knowledge to a pruned student model from a teacher model in an online setting. Next, we analyze the phenomenon of inactive channels in a trained neural network. We take a deep dive into the gradient updates of these channels and discover that they receive no weight updates after a few early epochs. We therefore present a channel regeneration technique that reinitializes the batch normalization gamma values of all inactive channels. The gradient updates of these channels improve after the regeneration step, increasing their contribution to network performance. Finally, we introduce a method to improve computational efficiency in pre-trained vision transformers by reducing redundancy in visual data. Our method selects image windows or regions with high objectness measures, as these regions may contain an object of any class. Across all works in this dissertation, we extensively evaluate our proposed methods and demonstrate that our techniques improve the computational efficiency of deep neural networks during training and inference.
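A short sketch of the channel-regeneration step as described above: batch-normalization channels whose scale parameter (gamma) has collapsed toward zero are treated as inactive and reinitialized so that their gradient updates can resume. The threshold and replacement gamma value are illustrative assumptions.

```python
# A minimal PyTorch sketch of channel regeneration: find batch-norm
# channels whose gamma has collapsed toward zero (inactive channels) and
# reinitialize them so their gradient updates resume.

import torch
import torch.nn as nn

@torch.no_grad()
def regenerate_inactive_channels(model, threshold=1e-2, new_gamma=1.0):
    regenerated = 0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            inactive = m.weight.abs() < threshold  # gamma ~ 0 => inactive
            m.weight[inactive] = new_gamma
            m.bias[inactive] = 0.0
            regenerated += int(inactive.sum())
    return regenerated

# Usage: call once after the early epochs in which channels go inactive.
# n = regenerate_inactive_channels(model)
# print(f"regenerated {n} channels")
```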
17

Synthetic Data Generation and Sampling for Online Training of DNN in Manufacturing Supervised Learning Problems

Thiyagarajan, Prithivrajan 29 May 2024 (has links)
The deployment of the Industrial Internet offers abundant passive data from manufacturing systems and networks, which enables data-driven modeling with high-data-demand, advanced statistical models such as Deep Neural Networks (DNNs). DNNs have proven remarkably effective in supervised learning for critical manufacturing applications, such as AI-enabled automatic inspection and quality modeling. However, DNN models lack performance guarantees, primarily due to class imbalance, shifting distributions, and multi-modality variables (e.g., time series and images) in the training and testing datasets collected in manufacturing. Moreover, implementing these models on the manufacturing shop floor is difficult due to limitations in human-machine interaction. Inspired by active data generation through Design of Experiments (DoE) and passive observational data collection for manufacturing data analytics, we propose a SynthetIc Data gEneration and Sampling (SIDES) framework with a Graphical User Interface named SIDESync. This framework is designed to streamline SIDES execution within manufacturing environments, providing adequate DNN model performance through improved training data preparation and enhanced human-machine interaction. In the SIDES framework, a bi-level hierarchical contextual bandit is proposed to provide a principled way to integrate DoE and observational data sampling, optimizing DNNs' online learning performance. A multimodality-aligned variational autoencoder transforms the multimodal predictors from manufacturing into a shared low-dimensional latent space for controlled data generation from DoE and effective sampling from observational data. The SIDESync Graphical User Interface (GUI), developed using the Streamlit library in Python, simplifies the configuration, monitoring, and analysis of SIDES experiments. This streamlined approach facilitates access to the SIDES framework and enhances human-machine interaction capabilities. The merits of SIDES are evaluated in a real case study of printed electronics with a binary multimodal data classification problem. Results show the advantages of the cost-effective integration of DoE in improving DNNs' online learning performance. / Master of Science / The Industrial Internet's growth has brought in a massive amount of data from manufacturing systems, leading to advanced data analysis with techniques like Deep Neural Networks (DNNs). These powerful models have shown great promise in critical manufacturing tasks, such as AI-driven quality control. However, challenges remain in ensuring these models perform well; for example, a lack of good data results in models with poor performance. Furthermore, deploying these models on the manufacturing shop floor poses challenges due to limited human-machine interaction capabilities. To tackle these challenges, we introduce the SynthetIc Data gEneration and Sampling (SIDES) framework with a user-friendly interface called SIDESync to enhance human-machine interaction. This framework improves how training data is prepared, ultimately boosting the performance of DNN models. Within this framework, we propose a method called bi-level Hierarchical Contextual Bandits that combines real-world data sampling with a technique called Design of Experiments (DoE) to help DNNs learn more effectively as they operate. We also use a tool called a multimodality-aligned variational autoencoder, which helps convert various types of manufacturing data (like sensor readings and images) into a standard format. This conversion makes it easier to generate new data from experiments and to use real-world data samples efficiently. The SIDESync Graphical User Interface (GUI) is created using Python's Streamlit library. It makes setting up, monitoring, and analyzing SIDES experiments much easier. This user-friendly system improves access to the SIDES framework and boosts interaction between humans and machines. To demonstrate how effective SIDES is, we conducted a real case study with data collected from printed electronics manufacturing. We focused on a problem where we needed to classify final product quality using in-situ data with DNN model prediction. Our results clearly showed that integrating DoE improved how DNNs learned online, all while keeping costs in check. This work opens up exciting possibilities for making data-driven decisions in manufacturing smarter and more efficient.
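A heavily simplified sketch of the upper-level decision in such a framework: a (context-free) epsilon-greedy bandit chooses, each round, between generating a labeled sample via DoE and drawing an observational sample, and is rewarded by model improvement net of acquisition cost. The reward definition and class names are illustrative assumptions; the thesis's bi-level hierarchical contextual bandit is more elaborate.

```python
# A toy epsilon-greedy bandit over data-acquisition sources, standing in
# for the upper level of a DoE-vs-observational sampling decision. The
# reward signal (validation gain minus acquisition cost) is an assumption.

import random

class SamplingBandit:
    def __init__(self, arms=("doe", "observational"), epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}  # running mean reward

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))  # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n

# Per round: pick a source, acquire a batch, retrain the DNN online, and
# reward the chosen arm with (validation gain - acquisition cost), e.g.:
# bandit = SamplingBandit()
# arm = bandit.choose()
# bandit.update(arm, val_gain - cost[arm])
```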
18

Active Learning Under Limited Interaction with Data Labeler

Chen, Si January 2021 (has links)
Active learning (AL) aims at reducing labeling effort by identifying the most valuable unlabeled data points from a large pool. Traditional AL frameworks have two limitations: First, they perform data selection in a multi-round manner, which is time-consuming and impractical. Second, they usually assume that a small amount of labeled data is available in the same domain as the data in the unlabeled pool. In this thesis, we initiate the study of one-round active learning to address the first issue. We propose DULO, a general framework for the one-round setting based on the notion of data utility functions, which map a set of data points to some performance measure of the model trained on that set. We formulate the one-round active learning problem as data utility function maximization. We then propose D²ULO, built on DULO, as a solution that addresses both issues. Specifically, D²ULO leverages the idea of domain adaptation (DA) to train a data utility model on labeled source data. The trained utility model can then be used to select high-utility data in the target domain and, at the same time, provide an estimate of the utility of the selected data. Our experiments show that the proposed frameworks achieve better performance compared with state-of-the-art baselines in the same setting. In particular, D²ULO is applicable to the scenario where the source and target labels have mismatches, which is not supported by existing works. / M.S. / Machine learning (ML) has achieved huge success in recent years. Machine learning technologies such as recommendation systems, speech recognition, and image recognition play an important role in daily life. This success mainly builds upon the use of large amounts of labeled data: compared with traditional programming, an ML algorithm does not rely on explicit instructions from a human; instead, it takes data along with labels as input, and aims to learn a function that correctly maps data to the label space. However, data labeling requires human effort and can be time-consuming and expensive, especially for datasets that involve domain-specific knowledge (e.g., disease prediction). Active learning (AL) is one solution for reducing data labeling effort. Specifically, the learning algorithm actively selects data points that provide more information for the model, so a better model can be achieved with less labeled data. While traditional AL strategies do achieve good performance, they require a small amount of labeled data as initialization and perform data selection over multiple rounds, which poses great challenges for their application, as there is no platform that provides timely online interaction with the data labeler, and the interaction is often time-inefficient. To deal with these limitations, we first propose DULO, in which a new setting of AL is studied: data selection is only allowed to be performed once. To further broaden the applicability of our method, we propose D²ULO, which builds upon DULO and domain adaptation techniques to avoid the use of initial labeled data. Our experiments show that both proposed frameworks achieve better performance compared with state-of-the-art baselines.
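A minimal sketch of one-round selection as data utility maximization: given a trained utility model that scores subsets, a greedy loop assembles the entire batch to be labeled in a single round. The `utility` function here is a toy stand-in for the learned data utility model.

```python
# One-round active learning as greedy data utility maximization: the
# whole labeling batch is chosen in a single pass, with no intermediate
# labeling rounds. The utility function below is a toy stand-in.

def greedy_one_round_selection(pool, budget, utility):
    """pool: list of unlabeled point ids; utility: set -> float."""
    selected = set()
    for _ in range(budget):
        best, best_gain = None, float("-inf")
        for x in pool:
            if x in selected:
                continue
            # Marginal gain of adding x to the current selection.
            gain = utility(selected | {x}) - utility(selected)
            if gain > best_gain:
                best, best_gain = x, gain
        selected.add(best)
    return selected

# Example with a toy utility exhibiting diminishing returns:
scores = {0: 0.9, 1: 0.5, 2: 0.8, 3: 0.1}
util = lambda s: sum(scores[i] for i in s) ** 0.5
print(greedy_one_round_selection(list(scores), budget=2, utility=util))
# -> {0, 2}
```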
19

Towards Interpretable and Reliable Deep Neural Networks for Visual Intelligence

Xie, Ning 06 August 2020 (has links)
No description available.
20

Probabilistic Graphical Models: an Application in Synchronization and Localization

Goodarzi, Meysam 16 June 2023 (has links)
Mobile User (MU) localization in ultra-dense networks often requires, on the one hand, the Access Points (APs) to be synchronized among each other and, on the other hand, MU-AP synchronization. In this work, we first address the former, which eventually provides a basis for the latter, i.e., for joint MU synchronization and localization (sync&loc). In particular, this work first focuses on tackling the time synchronization problem in 5G networks by adopting a hybrid Bayesian approach for clock offset and skew estimation. Specifically, we investigate and demonstrate the substantial benefit of Belief Propagation (BP) running on Factor Graphs (FGs) in achieving precise network-wide synchronization. Moreover, we take advantage of Bayesian Recursive Filtering (BRF) to mitigate the time-stamping error in pairwise synchronization. Finally, we reveal the merit of hybrid synchronization by dividing a large-scale network into common and local synchronization domains, thereby being able to apply the most suitable synchronization algorithm (BP- or BRF-based) to each domain. Secondly, we propose a Deep Neural Network (DNN)-assisted Particle Filter-based (DePF) approach to address the joint MU sync&loc problem. In particular, DePF deploys an asymmetric time-stamp exchange mechanism between the MUs and the APs, which provides information about the MUs' clock offset and skew, and the AP-MU distance.
In addition, to estimate the Angle of Arrival (AoA) of the received synchronization packet, DePF draws on the Multiple Signal Classification (MUSIC) algorithm, which is fed by the Channel Impulse Response (CIR) experienced by the sync packets. The CIR is also leveraged to determine the link condition, i.e., Line-of-Sight (LoS) or Non-LoS (NLoS). Finally, DePF capitalizes on particle Gaussian mixtures, which allow for a hybrid particle-based and parametric BRF fusion of the aforementioned pieces of information, to jointly estimate the position and clock parameters of the MUs.
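For context on the pairwise building block, here is a sketch of clock-offset and delay estimation from a classic two-way time-stamp exchange under a symmetric-delay assumption; the thesis's asymmetric exchange and Bayesian filtering refine this basic estimate.

```python
# Clock-offset and delay estimation from a classic two-way time-stamp
# exchange (the basic building block behind pairwise synchronization).
# t1..t4 follow the usual request/response convention; a symmetric link
# delay is assumed.

def offset_and_delay(t1, t2, t3, t4):
    """t1: request sent (node A clock), t2: request received (node B),
    t3: response sent (node B), t4: response received (node A).
    Returns B's offset relative to A and the one-way delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: B's clock is 5 units ahead of A's, one-way delay is 2 units.
print(offset_and_delay(t1=100, t2=107, t3=108, t4=105))  # (5.0, 2.0)
```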
