About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Multi-Task Learning and Its Applications to Biomedical Informatics

January 2014 (has links)
abstract: In many fields one needs to build predictive models for a set of related machine learning tasks, as in information retrieval, computer vision, and biomedical informatics. Traditionally these tasks are treated independently and inference is done separately for each task, which ignores important connections among the tasks. Multi-task learning aims at simultaneously building models for all tasks in order to improve generalization performance, leveraging the inherent relatedness of the tasks. In this thesis, I first propose a clustered multi-task learning (CMTL) formulation, which simultaneously learns task models and performs task clustering. I provide a theoretical analysis establishing the equivalence between the CMTL formulation and alternating structure optimization, which learns a shared low-dimensional hypothesis space for different tasks. I then present two real-world biomedical informatics applications that benefit from multi-task learning. In the first application, I study the disease progression problem and present multi-task learning formulations for it. In these formulations, the prediction at each time point is a regression task, and the tasks at different time points are learned simultaneously, leveraging the temporal smoothness among them. The proposed formulations have been tested extensively on predicting the progression of Alzheimer's disease, and experimental results demonstrate the effectiveness of the proposed models. In the second application, I present a novel data-driven framework for densifying electronic medical records (EMR) to overcome the sparsity problem in predictive modeling with EMR. The densification of each patient is a learning task, and the proposed algorithm densifies all patients simultaneously, so that the densification of one patient leverages useful information from other patients. / Dissertation/Thesis / Ph.D. Computer Science 2014
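The temporal-smoothness formulation lends itself to a compact sketch: treat the prediction at each time point as one regression task and penalize differences between the weight vectors of adjacent tasks. The following minimal example assumes a squared fused penalty and a plain gradient-descent solver; both are illustrative choices, not necessarily the thesis's exact formulation.

```python
import numpy as np

def mtl_temporal_smoothness(X, Y, lam1=0.1, lam2=1.0, lr=1e-4, iters=2000):
    """Multi-task least squares with a temporal-smoothness penalty.

    X: (n, d) features shared by all time points; Y: (n, T) targets,
    one column per time point (task). Minimizes
        ||XW - Y||_F^2 + lam1*||W||_F^2 + lam2*sum_t ||w_t - w_{t+1}||^2
    by gradient descent -- a sketch of the idea, not the thesis's solver.
    """
    d, T = X.shape[1], Y.shape[1]
    W = np.zeros((d, T))
    # Columns of R are e_t - e_{t+1}, so W @ R holds the differences
    # between the weight vectors of adjacent time points.
    R = np.eye(T, T - 1) - np.eye(T, T - 1, k=-1)
    for _ in range(iters):
        grad = 2 * X.T @ (X @ W - Y) + 2 * lam1 * W + 2 * lam2 * (W @ R @ R.T)
        W -= lr * grad
    return W
```

Coupling adjacent tasks this way is what lets time points with few observations borrow strength from their neighbors.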
2

Scalable Multi-Task Learning R-CNN for Classification and Localization in Autonomous Vehicle Technology

Rinchen, Sonam 28 April 2023 (has links)
Multi-task learning (MTL) is a rapidly growing field in the world of autonomous vehicles, particularly in the area of computer vision. Autonomous vehicles rely heavily on computer vision for tasks such as object detection, object segmentation, and object tracking. The complexity of sensor data and the multiple tasks involved in autonomous driving can make it challenging to design effective systems. MTL addresses these challenges by training a single model to perform multiple tasks simultaneously, using shared representations to learn common concepts across a group of related tasks and improving data efficiency. In this thesis, we propose a scalable MTL system for object detection that can be used to construct MTL networks of different scales and shapes. The proposed system extends the state-of-the-art Mask R-CNN algorithm and is designed to overcome the limitations of learning multiple objects in multi-label learning. To demonstrate its effectiveness, we built three different networks with it and evaluated their performance on the state-of-the-art BDD100k dataset. Our experimental results demonstrate that the proposed MTL networks outperform a single-task baseline, Mask R-CNN, in terms of mean average precision at 50 (mAP50): the proposed MTL networks achieved a mAP50 of 66%, while the baseline achieved only 53%. Furthermore, we compared the proposed MTL networks against each other to determine the most efficient way to group tasks together in order to create an optimal MTL network for object detection on BDD100k.
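The shared-backbone, multi-head pattern underlying such systems can be pictured with a toy PyTorch sketch. The real networks here extend Mask R-CNN with RoI-based branches; the backbone choice, layer sizes, and two stand-in heads below are illustrative assumptions only.

```python
import torch
import torch.nn as nn
import torchvision

class MultiTaskDetector(nn.Module):
    """Shared trunk with one branch per task -- the generic MTL layout."""
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared representation: a ResNet-50 trunk without its FC layer.
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-1])
        self.cls_head = nn.Linear(2048, num_classes)  # task 1: classification
        self.box_head = nn.Linear(2048, 4)            # task 2: localization

    def forward(self, x):
        feats = self.backbone(x).flatten(1)           # one shared feature vector
        return self.cls_head(feats), self.box_head(feats)

# Joint training simply sums the per-task losses, so gradients from
# every task shape the shared backbone.
model = MultiTaskDetector()
logits, boxes = model(torch.randn(2, 3, 224, 224))
```

Which tasks share a trunk and which get separate branches is exactly the grouping question the thesis's network comparisons address.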
3

Multi-Task Learning via Structured Regularization: Formulations, Algorithms, and Applications

January 2011 (has links)
abstract: Multi-task learning (MTL) aims to improve the generalization performance (of the resulting classifiers) by learning multiple related tasks simultaneously. Specifically, MTL exploits the intrinsic task relatedness, based on which the informative domain knowledge from each task can be shared across multiple tasks and thus facilitate learning of the individual tasks. Sharing domain knowledge among the tasks is particularly desirable when there are a number of related tasks but only limited training data is available for each one. Modeling the relationship of multiple tasks is critical to the generalization performance of MTL algorithms. In this dissertation, I propose a series of MTL approaches which assume that multiple tasks are intrinsically related via a shared low-dimensional feature space. The proposed MTL approaches are developed for different scenarios and settings; they are respectively formulated as mathematical optimization problems that minimize the empirical loss regularized by different structures. For all proposed MTL formulations, I develop the associated optimization algorithms to find their globally optimal solutions efficiently. I also conduct theoretical analysis for certain MTL approaches, deriving the globally optimal solution recovery condition and the performance bound. To demonstrate the practical performance, I apply the proposed MTL approaches to different real-world applications: (1) automated annotation of Drosophila gene expression pattern images; (2) categorization of Yahoo web pages. Our experimental results demonstrate the efficiency and effectiveness of the proposed algorithms. / Dissertation/Thesis / Ph.D. Computer Science 2011
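One common instantiation of such structured regularization, consistent with the shared low-dimensional feature-space assumption (though not necessarily the dissertation's exact formulation), is the trace-norm-regularized objective

```latex
\min_{W = [w_1, \dots, w_m]} \;
  \sum_{t=1}^{m} \sum_{i=1}^{n_t}
    \ell\!\left( w_t^{\top} x_{t,i},\, y_{t,i} \right)
  \;+\; \lambda \, \| W \|_{*}
```

where W stacks the m task weight vectors, the loss is the empirical loss summed over each task's training samples, and the regularizer is the trace (nuclear) norm of W. A small trace norm forces the columns of W into a low-rank, shared subspace, which is what couples the tasks.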
4

AI-augmented analysis onto the impact of the containment strategies and climate change to pandemic

Dong, Shihao January 2023 (has links)
This thesis uses a multi-task long short-term memory (LSTM) model to investigate the correlation between containment strategies, climate change, and the number of COVID-19 transmissions and deaths. The study examines how accurately different factors predict the number of daily confirmed cases and deaths, in order to further explore the correlation between those factors and the case counts. The initial assessment results suggest that containment strategies, specifically vaccination policies, have a more significant impact on the accuracy of predicting daily confirmed cases and deaths from COVID-19 than climate factors such as the daily average 2-meter surface temperature. Additionally, the study reveals that interactions among certain impact factors have unpredictable effects on predictive accuracy. However, the lack of interpretability of deep learning models poses a significant challenge for real-world applications. This study provides valuable insights into the correlation between daily confirmed cases, daily deaths, containment strategies, and climate change, and highlights areas for further research. It is important to note that while the study reveals correlation, it does not imply causation, and further research is needed to understand the trends of the pandemic.
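A minimal sketch of such a multi-task LSTM, with one shared recurrent trunk and separate regression heads for cases and deaths; the feature count, window length, and layer sizes below are placeholder assumptions, not the thesis's configuration.

```python
import torch
import torch.nn as nn

class MultiTaskLSTM(nn.Module):
    """One LSTM trunk, two heads: daily confirmed cases and daily deaths."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.cases_head = nn.Linear(hidden, 1)
        self.deaths_head = nn.Linear(hidden, 1)

    def forward(self, x):               # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        last = out[:, -1, :]            # summary of the input window
        return self.cases_head(last), self.deaths_head(last)

# E.g., 14-day windows of 8 inputs (policy indices, temperature, ...).
model = MultiTaskLSTM(n_features=8)
pred_cases, pred_deaths = model(torch.randn(4, 14, 8))
```

Adding or dropping input features in such a model and watching the prediction error move is one way to probe factor importance, which is the kind of analysis the thesis performs.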
5

Data Filtering and Modeling for Smart Manufacturing Network

Li, Yifu 13 August 2020 (has links)
A smart manufacturing network connects machines via sensing, communication, and actuation networks. The data generated from these networks are used in data-driven modeling and decision-making to improve quality, productivity, and flexibility while reducing cost. This dissertation focuses on improving the data-driven modeling of the quality-process relationship in smart manufacturing networks. Understanding the quality-process variable relationships is important for guiding quality improvement through optimization of the process variables. However, several challenges emerge. First, the big data sets generated from the manufacturing network may be information-poor for modeling, which can lead to high data-transmission and computational loads and redundant data storage. Second, the data generated from connected machines often contain inexplicit similarities due to similar product designs and manufacturing processes; modeling such inexplicit similarities remains challenging. Third, it is unclear how to select representative data sets for modeling in a manufacturing network setting while accounting for these inexplicit similarities. In this dissertation, a data filtering method is proposed to select a relatively small and informative data subset. Multi-task learning is combined with latent variable decomposition to model multiple connected manufacturing processes that are similar-but-non-identical. A data filtering and modeling framework is also proposed to adaptively filter the manufacturing data for manufacturing network modeling. The proposed methodologies have been validated through simulation and through applications to real manufacturing case studies. / Doctor of Philosophy / The advancement of the Internet of Things (IoT) integrates manufacturing processes and equipment into a network. Practitioners analyze and apply the data generated from this network to model it and improve product quality. Data quality directly affects modeling performance and decision effectiveness, yet it is not well controlled in a manufacturing network setting. In this dissertation, we propose a data quality assurance method, referred to as data filtering, which selects a data subset from the raw data collected from the manufacturing network and thereby reduces modeling complexity while supporting decision effectiveness. To model the data from multiple similar-but-non-identical manufacturing processes, we propose a latent variable decomposition-based multi-task learning model to study the relationships between process variables and the product quality variable. Lastly, to adaptively determine the appropriate data subset for modeling each process in the manufacturing network, we propose an integrated data filtering and modeling framework. The proposed integrated framework improved modeling performance on data from babycare manufacturing and semiconductor manufacturing.
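As one generic illustration of data filtering (the dissertation's own criterion may differ), statistical leverage scores rank samples by how much they shape a least-squares fit, so keeping the top-scoring rows yields a small but informative subset:

```python
import numpy as np

def filter_by_leverage(X, k):
    """Return indices of the k most informative rows of X.

    Leverage scores are the squared row norms of the left singular
    vectors; high-leverage samples most influence a least-squares
    model. Shown only to illustrate informative-subset selection.
    """
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    scores = np.sum(U**2, axis=1)
    return np.argsort(scores)[-k:]

X = np.random.randn(1000, 12)     # 1000 samples, 12 process variables
subset_idx = filter_by_leverage(X, k=100)
```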
6

Predicting Performance Run-time Metrics in Fog Manufacturing using Multi-task Learning

Nallendran, Vignesh Raja 26 February 2021 (has links)
The integration of Fog-Cloud computing in manufacturing has given rise to a new paradigm called Fog manufacturing. Fog manufacturing is a distributed computing platform that integrates a Fog-Cloud collaborative computing strategy to facilitate responsive, scalable, and reliable data analysis in manufacturing networks. The computation services provided by Fog-Cloud computing can effectively support quality prediction, process monitoring, and diagnosis efforts in a timely manner for manufacturing processes. However, the communication and computation resources for Fog-Cloud computing are limited in Fog manufacturing. Therefore, it is important to utilize the computation services effectively, based on optimal computation-task offloading, scheduling, and hardware autoscaling strategies, to finish the computation tasks on time without compromising the quality of the computation service. A prerequisite for adopting such optimal strategies is to accurately predict the run-time metrics (e.g., time latency) of the Fog nodes by capturing their inherent stochastic nature in real time, because these run-time metrics are directly related to the performance of the computation service in Fog manufacturing. Specifically, since the computation flow and data-querying activities vary between Fog nodes in practice, the run-time metrics that reflect performance in the Fog nodes are heterogeneous in nature, and this performance cannot be effectively modeled through traditional predictive analysis. In this thesis, a multi-task learning methodology is adopted to predict the run-time metrics that reflect performance in Fog manufacturing by addressing the heterogeneities among the Fog nodes. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models. The proposed model can be further extended to computation-task offloading and architecture optimization in Fog manufacturing to minimize time latency and improve the robustness of the system. / Master of Science / Smart manufacturing aims at utilizing the Internet of Things (IoT), data analytics, cloud computing, etc. to handle varying market demand without compromising productivity or quality in a manufacturing plant. To support these efforts, Fog manufacturing has been identified as a suitable computing architecture to handle the surge of data generated from IoT devices. In Fog manufacturing, computational tasks are completed locally through interconnected computing devices called Fog nodes. However, the communication and computation resources in Fog manufacturing are limited, so effective utilization requires optimal strategies for scheduling the computational tasks and assigning them to the Fog nodes. A prerequisite for adopting such strategies is to accurately predict the performance of the Fog nodes. In this thesis, a multi-task learning methodology is adopted to predict performance in Fog manufacturing. Specifically, since the computation flow and data-querying activities vary between Fog nodes in practice, the metrics that reflect performance in the Fog nodes are heterogeneous in nature and cannot be effectively modeled through conventional predictive analysis. A Fog manufacturing testbed is employed to evaluate the prediction accuracies of the proposed model and benchmark models.
The results show that the multi-task learning model has better prediction accuracy than the benchmarks and that it can model the heterogeneities among the Fog nodes. The proposed model can further be incorporated into scheduling and assignment strategies to effectively utilize Fog manufacturing's computational services.
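One standard way to encode heterogeneity with sharing, shown purely as a sketch rather than the thesis's formulation, is to model each Fog node's regression weights as a shared component plus a node-specific deviation that is shrunk toward zero:

```python
import numpy as np

def shared_plus_specific_mtl(Xs, ys, lam=1.0, lr=1e-4, iters=3000):
    """Per-node regression with weights w_t = w0 + v_t.

    Xs[t], ys[t] hold node t's data. The shared w0 captures what all
    nodes have in common; the penalized deviations v_t absorb each
    node's heterogeneity. A generic MTL sketch, not the thesis's model.
    """
    d, T = Xs[0].shape[1], len(Xs)
    w0, V = np.zeros(d), np.zeros((T, d))
    for _ in range(iters):
        g0 = np.zeros(d)
        for t in range(T):
            r = Xs[t] @ (w0 + V[t]) - ys[t]     # node-t residual
            g = 2 * Xs[t].T @ r
            V[t] -= lr * (g + 2 * lam * V[t])   # node-specific update
            g0 += g
        w0 -= lr * g0                           # shared update
    return w0, V

Xs = [np.random.randn(50, 5) for _ in range(3)]   # 3 Fog nodes
ys = [X @ np.ones(5) + 0.1 * np.random.randn(50) for X in Xs]
w0, V = shared_plus_specific_mtl(Xs, ys)
```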
7

Non-parametric Bayesian Learning with Incomplete Data

Wang, Chunping January 2010 (has links)
In most machine learning approaches, it is usually assumed that data are complete. When data are partially missing for various reasons (for example, the failure of a subset of sensors, image corruption, or inadequate medical measurements), many learning methods designed for complete data cannot be directly applied. In this dissertation we treat two kinds of problems with incomplete data using non-parametric Bayesian approaches: classification with incomplete features, and analysis of low-rank matrices with missing entries.

Incomplete data in classification problems are handled by assuming input features to be generated from a mixture-of-experts model, with each individual expert (classifier) defined by a local Gaussian in feature space. With a linear classifier associated with each Gaussian component, nonlinear classification boundaries are achievable without the introduction of kernels. Within the proposed model, the number of components is theoretically "infinite", as defined by a Dirichlet process construction, with the actual number of mixture components (experts) needed inferred from the data under test. With a higher-level Dirichlet process we further extend the classifier to the analysis of multiple related tasks (multi-task learning), where model components may be shared across tasks. Available data can be augmented through this form of information transfer even when tasks are similar only in some local regions of feature space, which is particularly important when the incomplete training samples from each task are scarce. The proposed algorithms are implemented using efficient variational Bayesian inference, and robust performance is demonstrated on synthetic data, benchmark data sets, and real data with natural missing values.

Another scenario of interest is completing a data matrix with missing entries. The recovery of missing matrix entries is not possible without additional assumptions on the matrix under test, and here we employ the common assumption that the matrix is low-rank. Unlike methods with a preset fixed rank, we propose a non-parametric Bayesian alternative based on the singular value decomposition (SVD), where missing entries are handled naturally and the number of underlying factors is constrained to be small and inferred in the light of the observed entries. Although we assume entries are missing at random, the proposed model is generalized to incorporate auxiliary information, including missingness features. We also make a first attempt in the matrix-completion community to acquire new entries actively. By introducing a probit link function, we are able to handle count matrices, with the decomposed low-rank matrices treated as latent. The basic model and its extensions are validated on synthetic data, a movie-rating benchmark, and a new data set presented for the first time. / Dissertation
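To see why the low-rank assumption makes recovery of missing entries possible, here is a fixed-rank alternating-least-squares sketch; note the dissertation instead infers the number of factors with a non-parametric Bayesian SVD, which this toy version does not attempt.

```python
import numpy as np

def complete_lowrank(M, mask, rank=5, lam=0.1, iters=50):
    """Fill in M (mask == 1 marks observed entries) as U @ V.T.

    Alternating ridge regressions on the observed entries: each row
    factor is fit to its observed columns and vice versa. A standard
    fixed-rank baseline, not the dissertation's Bayesian model.
    """
    m, n = M.shape
    U = 0.1 * np.random.randn(m, rank)
    V = 0.1 * np.random.randn(n, rank)
    I = lam * np.eye(rank)
    for _ in range(iters):
        for i in range(m):                    # update each row factor
            o = mask[i] == 1
            U[i] = np.linalg.solve(V[o].T @ V[o] + I, V[o].T @ M[i, o])
        for j in range(n):                    # update each column factor
            o = mask[:, j] == 1
            V[j] = np.linalg.solve(U[o].T @ U[o] + I, U[o].T @ M[o, j])
    return U @ V.T
```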
8

Function And Appearance-based Emergence Of Object Concepts Through Affordances

Atil, Ilkay 01 November 2010 (has links) (PDF)
One view of cognition is that the symbol-manipulating brain interprets the symbols of language based on the sensori-motor experiences of the agent. Such symbols, for example what we refer to as nouns and verbs, are generalizations that the agent discovers through interactions with the environment. Given that an important subset of nouns corresponds to objects (and object concepts), this thesis studies how function- and appearance-based object concepts can be created through affordances. For this, a computational system is proposed that is able to create object concepts through simple interactions with the objects in the environment. Namely, the robot applies a set of built-in behaviors (such as pushing, lifting, and grasping) to a set of objects to learn their affordances, through which objects affording similar functions are grouped into object concepts. Moreover, the thesis demonstrates that the discovered object concepts are beneficial for learning new tasks, by comparing the performance of learning a new task with and without object concepts.
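The grouping step can be pictured as clustering objects by their effect profiles under the built-in behaviors; the feature columns and numbers below are invented purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical effect features per object: displacement when pushed,
# success of lifting, success of grasping. Objects with similar
# effect profiles afford similar functions, so clustering them
# yields function-based object concepts.
effects = np.array([
    [0.9, 0.0, 0.1],   # rolls far when pushed, cannot be lifted
    [0.8, 0.1, 0.2],
    [0.1, 0.9, 0.9],   # barely moves, but liftable and graspable
    [0.2, 0.8, 0.8],
])
concepts = KMeans(n_clusters=2, n_init=10).fit_predict(effects)
print(concepts)        # e.g. [0 0 1 1]: two emergent object concepts
```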
9

Multi-task learning with Gaussian processes

Chai, Kian Ming January 2010 (has links)
Multi-task learning refers to learning multiple tasks simultaneously, in order to avoid tabula rasa learning and to share information between similar tasks during learning. We consider a multi-task Gaussian process regression model that learns related functions by inducing correlations between tasks directly. Using this model as a reference for three other multi-task models, we provide a broad unifying view of multi-task learning. This is possible because, unlike the other models, the multi-task Gaussian process model encodes task relatedness explicitly. Each multi-task learning model generally assumes that learning multiple tasks together is beneficial. We analyze how and the extent to which multi-task learning helps improve the generalization of supervised learning. Our analysis is conducted for the average-case on the multi-task Gaussian process model, and we concentrate mainly on the case of two tasks, called the primary task and the secondary task. The main parameters are the degree of relatedness ρ between the two tasks, and π_S, the fraction of the total training observations from the secondary task. Among other results, we show that asymmetric multi-task learning, where the secondary task is to help the learning of the primary task, can decrease a lower bound on the average generalization error by a factor of up to ρ²π_S. When there are no observations for the primary task, there is also an intrinsic limit to which observations for the secondary task can help the primary task. For symmetric multi-task learning, where the two tasks are to help each other to learn, we find the learning to be characterized by the term π_S(1 − π_S)(1 − ρ²). As far as we are aware, our analysis contributes to an understanding of multi-task learning that is orthogonal to the existing PAC-based results on multi-task learning. For more than two tasks, we provide an understanding of the multi-task Gaussian process model through structures in the predictive means and variances given certain configurations of training observations. These results generalize existing ones in the geostatistics literature, and may have practical applications in that domain. We evaluate the multi-task Gaussian process model on the inverse dynamics problem for a robot manipulator. The inverse dynamics problem is to compute the torques needed at the joints to drive the manipulator along a given trajectory, and there are advantages to learning this function for adaptive control. A robot manipulator will often need to be controlled while holding different loads in its end effector, giving rise to a multi-context or multi-load learning problem, and we treat predicting the inverse dynamics for a context/load as a task. We view the learning of the inverse dynamics as a function approximation problem and place Gaussian process priors over the space of functions. We first show that this is effective for learning the inverse dynamics for a single context. Then, by placing independent Gaussian process priors over the latent functions of the inverse dynamics, we obtain a multi-task Gaussian process prior for handling multiple loads, where the inter-context similarity depends on the underlying inertial parameters of the manipulator. Experiments demonstrate that this multi-task formulation is effective in sharing information among the various loads, and generally improves performance over either learning only on single contexts or pooling the data over all contexts.
In addition to the experimental results, one of the contributions of this study is showing that the multi-task Gaussian process model follows naturally from the physics of the inverse dynamics.
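For the two-task case analyzed above, a standard way to write a multi-task GP prior that induces correlations between tasks directly (consistent with, though not quoted from, the thesis) is

```latex
\operatorname{Cov}\!\left( f_t(x),\, f_{t'}(x') \right)
  \;=\; K^{f}_{t t'} \, k^{x}(x, x'),
\qquad
K^{f} \;=\; \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}
```

where k^x is an input kernel shared by both tasks and K^f is the task covariance. Setting ρ = 0 recovers independent single-task learning, and the quantities ρ²π_S and π_S(1 − π_S)(1 − ρ²) quoted above are expressed in exactly these parameters.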
10

Transfer learning with Gaussian processes

Skolidis, Grigorios January 2012 (has links)
Transfer learning is an emerging framework for learning from data that aims at intelligently transferring information between tasks. This is achieved by developing algorithms that can perform multiple tasks simultaneously, as well as by translating previously acquired knowledge to novel learning problems. In this thesis, we investigate the application of Gaussian processes to various forms of transfer learning, with a focus on classification problems. The thesis begins with a thorough introduction to the framework of transfer learning, providing a clear taxonomy of the areas of research. Following that, we review the recent advances in multi-task learning for regression with Gaussian processes, and compare the performance of some of these methods on a real data set. This review gives insights into the strengths and weaknesses of each method, which act as a point of reference for applying these methods to other forms of transfer learning. The main contributions of this thesis are reported in the three following chapters. The third chapter investigates the application of multi-task Gaussian processes to classification problems. We extend a previously proposed model to the classification scenario, providing three inference methods to handle the non-Gaussian likelihood that the classification paradigm imposes. The fourth chapter extends the multi-task scenario to the semi-supervised case. Using labeled and unlabeled data, we construct a novel covariance function that is able to capture the geometry of the distribution of each task. This setup allows unlabeled data to be utilised to infer the level of correlation between the tasks. Moreover, we also discuss the potential use of this model in situations where no labeled data are available for certain tasks. The fifth chapter investigates a novel form of transfer learning called meta-generalising. The question at hand is whether, after training on a sufficient number of tasks, it is possible to make predictions on a novel task. In this situation, the predictor is embedded in an environment of multiple tasks but has no information about the origins of the test task. This elevates the concept of generalising from the level of data to the level of tasks. We employ a model based on a hierarchy of Gaussian processes, in a mixture-of-experts sense, to make predictions based on the relation between the distributions of the novel and the training tasks. Each chapter is accompanied by a thorough experimental part giving insights into the potential and the limits of the proposed methods.
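The multi-task GP machinery reviewed here rests on a coregionalization-style kernel: a task covariance matrix modulates a shared input kernel. A minimal sketch, assuming an RBF input kernel and a free-form 2x2 task covariance (the thesis's classification extensions replace the Gaussian likelihood and require approximate inference on top of this):

```python
import numpy as np

def mtgp_kernel(X1, t1, X2, t2, Kf, lengthscale=1.0):
    """Entry (i, j) is Kf[t1[i], t2[j]] * k_rbf(X1[i], X2[j])."""
    sq = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    kx = np.exp(-0.5 * sq / lengthscale**2)   # shared RBF input kernel
    return Kf[np.ix_(t1, t2)] * kx            # modulated by task covariance

# Two correlated tasks (rho = 0.8), six 1-D inputs split between them.
rho = 0.8
Kf = np.array([[1.0, rho], [rho, 1.0]])
X = np.random.randn(6, 1)
t = np.array([0, 0, 0, 1, 1, 1])
K = mtgp_kernel(X, t, X, t, Kf)               # 6x6 joint prior covariance
```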
