31

Learning under differing training and test distributions

Bickel, Steffen January 2008 (has links)
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data is governed by the exact same distribution to which the model will later be exposed. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population. For example, parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications training data from the test distribution are scarce because they are difficult to obtain or very expensive. Data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers around learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating training and test distributions separately. The integral part of these models is rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied where only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models, almost any supervised learning algorithm can be adapted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications, the performance of the new models is compared to state-of-the-art reference methods. / One of the most important problems in machine learning is training predictive models from training data and deriving predictions for test data. Predictive models are usually based on the assumption that training data are drawn from the same distribution as test data. In practice this assumption is often violated, for example when training data are collected by questionnaires. Here, usually only a biased sample of the target population is available, since parts of the population may be underrepresented, unreachable, or may ignore the request to fill out the questionnaire. In many applications only very little training data from the test distribution are available, because such data are expensive or laborious to collect. Data from alternative sources, drawn from similar distributions, are often much easier and cheaper to obtain. This thesis deals with learning predictive models from training data whose distribution differs from the test distribution. Several problem settings are covered that rest on different assumptions about the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the difference between training and test distributions, without requiring separate estimates of the two distributions. Central components of the models are weighting factors by which the training distribution is mapped onto the test distribution through reweighting. Integrated models for learning under differing training and test distributions are studied, whose estimation requires solving only a single optimization problem. The integrated models can be approximated with two optimization steps, so that almost any common predictive model can be extended to correct for biased training distributions. In case studies on e-mail spam filtering, HIV therapy recommendation, targeted advertising, and other applications, the new models are compared with reference methods.
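The rescaling-weight idea above admits a simple two-step realization: train a probabilistic classifier to discriminate training from test examples, convert its output into importance weights, and hand those weights to any learner that accepts sample weights. The sketch below illustrates this under the assumption that unlabeled samples from the test distribution are available; the logistic-regression ratio estimate is a generic stand-in, not the thesis's integrated model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def covariate_shift_weights(X_train, X_test):
    """Estimate importance weights p_test(x)/p_train(x) with a
    discriminative density-ratio model (a sketch, not the thesis's
    exact integrated formulation)."""
    X = np.vstack([X_train, X_test])
    # s = 1 marks samples drawn from the test distribution.
    s = np.concatenate([np.zeros(len(X_train)), np.ones(len(X_test))])
    clf = LogisticRegression(max_iter=1000).fit(X, s)
    p_test = clf.predict_proba(X_train)[:, 1]
    # Bayes' rule: p_test(x)/p_train(x) is proportional to
    # P(s=1|x)/P(s=0|x) * n_train/n_test.
    w = (p_test / (1.0 - p_test)) * (len(X_train) / len(X_test))
    return w / w.mean()          # normalize to mean 1 for stability

# Step two: feed the weights to any learner that accepts sample weights, e.g.
# model = LogisticRegression().fit(
#     X_train, y_train, sample_weight=covariate_shift_weights(X_train, X_test))
```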
32

Nonparametric Learning in High Dimensions

Liu, Han 01 December 2010 (has links)
This thesis develops flexible and principled nonparametric learning algorithms to explore, understand, and predict high-dimensional and complex datasets. Such data appear frequently in modern scientific domains and lead to numerous important applications. For example, exploring high-dimensional functional magnetic resonance imaging data helps us to better understand brain functionalities; inferring large-scale gene regulatory networks is crucial for new drug design and development; detecting anomalies in high-dimensional transaction databases is vital for corporate and government security. Our main results include a rigorous theoretical framework and efficient nonparametric learning algorithms that exploit hidden structures to overcome the curse of dimensionality when analyzing massive high-dimensional datasets. These algorithms have strong theoretical guarantees and provide high-dimensional nonparametric recipes for many important learning tasks, ranging from unsupervised exploratory data analysis to supervised predictive modeling. In this thesis, we address three aspects: (1) understanding the statistical theory of high-dimensional nonparametric inference, including risk, estimation, and model selection consistency; (2) designing new methods for different data-analysis tasks, including regression, classification, density estimation, graphical model learning, multi-task learning, and spatial-temporal adaptive learning; (3) demonstrating the usefulness of these methods in scientific applications, including functional genomics, cognitive neuroscience, and meteorology. In the last part of this thesis, we also present a future vision of high-dimensional and large-scale nonparametric inference.
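As one concrete illustration of exploiting hidden structure in high-dimensional nonparametric inference, the sketch below estimates a sparse conditional-independence graph after a rank-based Gaussianizing transform of each variable. It is a generic recipe in this spirit, not the thesis's exact estimator, and the regularization value is an arbitrary placeholder.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLasso

def gaussianize(X):
    """Map each column to normal scores via its empirical ranks
    (a rank-based transform in the spirit of semiparametric Gaussian models)."""
    n, p = X.shape
    Z = np.empty_like(X, dtype=float)
    for j in range(p):
        u = rankdata(X[:, j]) / (n + 1.0)   # ranks scaled into (0, 1)
        Z[:, j] = norm.ppf(u)               # normal scores
    return Z

def sparse_graph(X, alpha=0.1):
    """Estimate a sparse conditional-independence graph on transformed data."""
    Z = gaussianize(X)
    model = GraphicalLasso(alpha=alpha).fit(Z)
    precision = model.precision_
    # Nonzero off-diagonal entries of the precision matrix correspond to edges.
    return np.abs(precision) > 1e-8
```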
33

Exploring the task of universal semantic tagging with neural networks, by solving other tasks, and with multilingual learning

Abdou, Mostafa January 2018 (has links)
In this thesis we present an investigation of multi-task and transfer learning using the recently introduced task of semantic tagging. First, we employ a number of natural language processing tasks as auxiliaries for semantic tagging. Secondly, going in the other direction, we employ semantic tagging as an auxiliary task for three different NLP tasks: Part-of-Speech Tagging, Universal Dependency parsing, and Natural Language Inference. We compare full neural network sharing, partial neural network sharing, and what we term the learning-what-to-share setting, where negative transfer between tasks is less likely. Finally, we investigate multi-lingual learning framed as a special case of multi-task learning. Our findings show considerable improvements for most experiments, demonstrating a variety of cases where multi-task and transfer learning methods are beneficial.
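The full and partial network-sharing settings compared here can be illustrated with a hard-parameter-sharing tagger: one shared encoder feeds separate classification heads for semantic tagging and an auxiliary task. The following PyTorch sketch uses assumed layer sizes and names (`SharedTagger`, a BiLSTM encoder); it is illustrative rather than the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class SharedTagger(nn.Module):
    """Hard parameter sharing: one BiLSTM encoder, one head per task."""
    def __init__(self, vocab_size, n_sem_tags, n_aux_tags,
                 emb_dim=100, hidden=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.sem_head = nn.Linear(2 * hidden, n_sem_tags)   # semantic tagging
        self.aux_head = nn.Linear(2 * hidden, n_aux_tags)   # auxiliary task, e.g. POS

    def forward(self, token_ids, task):
        h, _ = self.encoder(self.embed(token_ids))
        return self.sem_head(h) if task == "sem" else self.aux_head(h)

# Training alternates batches between tasks so both losses update the shared encoder:
# logits = model(batch_ids, task="sem")
# loss = nn.functional.cross_entropy(logits.flatten(0, 1), tags.flatten())
```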
34

Multi-Task Convolutional Learning for Flame Characterization

Ur Rehman, Obaid January 2020 (has links)
This thesis explores multi-task learning for combustion flame characterization, i.e., learning different characteristics of the combustion flame. We propose a multi-task convolutional neural network for two tasks, PFR (pilot fuel ratio) classification and fuel type classification, based on images of stable combustion. We utilize transfer learning and adopt VGG16 to develop a multi-task convolutional neural network that jointly learns the aforementioned tasks. We also compare the performance of individual CNN models for the two tasks with the multi-task CNN, which learns the two tasks jointly by sharing visual knowledge among them. We demonstrate the effectiveness of our proposed approach on a private company's dataset. To the best of our knowledge, this is the first work on jointly learning different characteristics of the combustion flame. / This work was done with Siemens, and we have applied for a patent which is still pending.
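A network of the kind described, a pre-trained VGG16 backbone shared by two classification heads, might be sketched as follows. The class counts, the frozen backbone, and the head sizes are assumptions for illustration; the thesis's actual architecture and training schedule may differ.

```python
import torch.nn as nn
from torchvision import models

class MultiTaskFlameNet(nn.Module):
    """Shared VGG16 features with separate heads for PFR and fuel type."""
    def __init__(self, n_pfr_classes=4, n_fuel_classes=2):   # class counts assumed
        super().__init__()
        backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = backbone.features          # shared convolutional layers
        for p in self.features.parameters():
            p.requires_grad = False                # transfer learning: freeze backbone
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.pfr_head = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, 256),
                                      nn.ReLU(), nn.Linear(256, n_pfr_classes))
        self.fuel_head = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, 256),
                                       nn.ReLU(), nn.Linear(256, n_fuel_classes))

    def forward(self, images):
        shared = self.pool(self.features(images))
        return self.pfr_head(shared), self.fuel_head(shared)

# Joint training sums the two task losses, so visual knowledge is shared:
# pfr_logits, fuel_logits = model(images)
# loss = ce(pfr_logits, pfr_labels) + ce(fuel_logits, fuel_labels)
```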
35

Human-Inspired Robot Task Teaching and Learning

Wu, Xianghai 28 October 2009 (has links)
Current methods of robot task teaching and learning have several limitations: highly-trained personnel are usually required to teach robots specific tasks; service-robot systems are limited in learning different types of tasks utilizing the same system; and the teacher’s expertise in the task is not well exploited. A human-inspired robot-task teaching and learning method is developed in this research with the aim of allowing general users to teach different object-manipulation tasks to a service robot, which will be able to adapt its learned tasks to new task setups. The proposed method was developed to be interactive and intuitive to the user. In a closed loop with the robot, the user can intuitively teach the tasks, track the learning states of the robot, direct the robot’s attention to perceive task-related key state changes, and give timely feedback when the robot is practicing the task, while the robot can reveal its learning progress and refine its knowledge based on the user’s feedback. The human-inspired method consists of six teaching and learning stages: 1) checking and teaching the needed background knowledge of the robot; 2) introduction of the overall task to be taught to the robot: the hierarchical task structure, and the involved objects and robot hand actions; 3) teaching the task step by step, and directing the robot to perceive important state changes; 4) demonstration of the task as a whole, and offering vocal subtask-segmentation cues at subtask transitions; 5) robot learning of the taught task using a flexible vote-based algorithm to segment the demonstrated task trajectories, a probabilistic optimization process to assign obtained task trajectory episodes (segments) to the introduced subtasks, and generalization of the taught task trajectories in different reference frames; and 6) robot practicing of the learned task and refinement of its task knowledge according to the teacher’s timely feedback, where the adaptation of the learned task to new task setups is achieved by blending the task trajectories generated from pertinent frames. An agent-based architecture was designed and developed to implement this robot-task teaching and learning method. This system has an interactive human-robot teaching interface subsystem, which is composed of: a) a three-camera stereo vision system to track user hand motion; b) a stereo-camera vision system mounted on the robot end-effector to allow the robot to explore its workspace and identify objects of interest; and c) a speech recognition and text-to-speech system, utilized for the main human-robot interaction. A user study involving ten human subjects was performed using two tasks to evaluate the system based on time spent by the subjects on each teaching stage, efficiency measures of the robot’s understanding of users’ vocal requests, responses, and feedback, and their subjective evaluations. Another set of experiments was conducted to analyze the ability of the robot to adapt its previously learned tasks to new task setups using measures such as object, target and robot starting-point poses; alignments of objects on targets; and actual robot grasp and release poses relative to the related objects and targets. The results indicate that the system enabled the subjects to naturally and effectively teach the tasks to the robot and give timely feedback on the robot’s practice performance. The robot was able to learn the tasks as expected and adapt its learned tasks to new task setups. The robot properly refined its task knowledge based on the teacher’s feedback and successfully applied the refined task knowledge in subsequent task practices. The robot was able to adapt its learned tasks to new task setups that were considerably different from those in the demonstration. The alignments of objects on the target were quite close to those taught, and the executed grasping and releasing poses of the robot relative to objects and targets were almost identical to the taught poses. The robot-task learning ability was affected by limitations of the vision-based human-robot teleoperation interface used in hand-to-hand teaching and the robot’s capacity to sense its workspace. Future work will investigate robot learning of a variety of different tasks and the use of more of the robot’s built-in primitive skills.
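The adaptation step, blending task trajectories generated from pertinent reference frames, could be sketched roughly as below. This is only one plausible reading of that step: poses are reduced to 2D translations and the blending weights ramp linearly along the trajectory, neither of which is claimed to match the thesis's actual algorithm.

```python
import numpy as np

def blend_trajectories(traj_obj_frame, traj_target_frame, obj_pose, target_pose):
    """Blend two generalized trajectories (one per reference frame) for a new
    task setup. Simplified sketch: poses are 2D translations and the blending
    weight shifts linearly from the object frame toward the target frame."""
    n = len(traj_obj_frame)
    blended = np.empty_like(traj_obj_frame)
    for i in range(n):
        w = i / max(n - 1, 1)                  # 0 near the object, 1 near the target
        p_obj = traj_obj_frame[i] + obj_pose   # express object-frame point in world frame
        p_tgt = traj_target_frame[i] + target_pose
        blended[i] = (1 - w) * p_obj + w * p_tgt
    return blended
```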
36

Uncovering Structure in High-Dimensions: Networks and Multi-task Learning Problems

Kolar, Mladen 01 July 2013 (has links)
Extracting knowledge and providing insights into the complex mechanisms underlying noisy high-dimensional data sets is of utmost importance in many scientific domains. Statistical modeling has become ubiquitous in the analysis of high-dimensional functional data in search of a better understanding of cognition mechanisms, in the exploration of large-scale gene regulatory networks in the hope of developing drugs for lethal diseases, and in the prediction of stock market volatility in the hope of beating the market. Statistical analysis of these high-dimensional data sets is possible only if an estimation procedure exploits hidden structures underlying the data. This thesis develops flexible estimation procedures with provable theoretical guarantees for uncovering unknown hidden structures underlying the data-generating process. Of particular interest are procedures that can be used on high-dimensional data sets where the number of samples n is much smaller than the ambient dimension p. Learning in high dimensions is difficult due to the curse of dimensionality; however, special problem structure makes inference possible. Due to its importance for scientific discovery, we put emphasis on consistent structure recovery throughout the thesis. Particular focus is given to two important problems: semi-parametric estimation of networks and feature selection in multi-task learning.
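For the multi-task feature selection problem, a standard formulation applies a group-sparsity penalty so that each feature is selected or discarded jointly across tasks. The sketch below uses scikit-learn's `MultiTaskLasso` as an off-the-shelf stand-in for such an estimator, not the procedures developed in the thesis; the data are synthetic.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Shared design matrix X (n samples, p features) and one response column per task.
rng = np.random.default_rng(0)
n, p, k = 100, 200, 3                     # n << p is the regime of interest
X = rng.standard_normal((n, p))
W_true = np.zeros((p, k))
W_true[:5] = rng.standard_normal((5, k))  # the same 5 features matter in every task
Y = X @ W_true + 0.1 * rng.standard_normal((n, k))

# The l1/l2 penalty zeroes out entire rows of the coefficient matrix,
# i.e. features are included or excluded jointly across all tasks.
model = MultiTaskLasso(alpha=0.1).fit(X, Y)
selected = np.flatnonzero(np.linalg.norm(model.coef_.T, axis=1) > 1e-8)
print("selected features:", selected)
```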
37

Deep Learning Studies for Vision-based Condition Assessment and Attribute Estimation of Civil Infrastructure Systems

Fu-Chen Chen (7484339) 14 January 2021 (has links)
Structural health monitoring and building assessment are crucial for acquiring structures’ states and maintaining their condition. Compared with manual surveys, which are subjective, time-consuming, and expensive, automated image and video analysis is faster, more efficient, and non-destructive. This thesis focuses on crack detection from videos, crack segmentation from images, and building assessment from street view images. For crack detection from videos, three approaches are proposed, based on local binary patterns (LBP) and support vector machines (SVM), a deep convolutional neural network (DCNN), and a fully-connected network (FCN). A parametric Naïve Bayes data fusion scheme is introduced that registers video frames in a spatiotemporal coordinate system and fuses information based on Bayesian probability to increase detection precision. For crack segmentation from images, the rotation-invariant property of cracks is utilized to enhance segmentation accuracy. The architectures of several approximately rotation-invariant DCNNs are discussed and compared on several crack datasets. For building assessment from street view images, a framework of multiple DCNNs is proposed to detect buildings and predict attributes that are crucial for flood risk estimation, including founding heights, foundation types (pier, slab, mobile home, or others), building types (commercial, residential, or mobile home), and building stories. A feature fusion scheme is proposed that combines image features with meta information to improve the predictions, and a task relation encoding network (TREncNet) is introduced that encodes task relations as network connections to enhance multi-task learning.
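The Naïve Bayes fusion idea, combining evidence from multiple registered observations of the same physical location, can be illustrated with a log-odds accumulation over per-frame crack probabilities. The frame-registration step and the thesis's parametric model are omitted, and the prior value is an arbitrary placeholder.

```python
import numpy as np

def fuse_crack_probabilities(per_frame_probs, prior=0.05):
    """Fuse per-frame crack probabilities for one registered location.

    per_frame_probs: iterable of P(crack | frame_t) for the same physical
    point observed in several video frames. Under a naive (conditional
    independence) assumption, evidence adds in log-odds space.
    """
    prior_logit = np.log(prior / (1 - prior))
    logit = prior_logit
    for p in per_frame_probs:
        p = np.clip(p, 1e-6, 1 - 1e-6)
        # Each frame contributes its likelihood-ratio update relative to the prior.
        logit += np.log(p / (1 - p)) - prior_logit
    return 1.0 / (1.0 + np.exp(-logit))

# Example: three frames see the same point with probabilities 0.6, 0.7, 0.55.
# print(fuse_crack_probabilities([0.6, 0.7, 0.55]))
```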
38

User Attribute Inference via Mining User-Generated Data

Ding, Shichang 01 December 2020 (has links)
No description available.
39

Can Wizards be Polyglots: Towards a Multilingual Knowledge-grounded Dialogue System

Liu, Evelyn Kai Yan January 2022 (has links)
The research of open-domain, knowledge-grounded dialogue systems has been advancing rapidly due to the paradigm shift introduced by large language models (LLMs). While these strides have improved the performance of dialogue systems, the scope remains mostly monolingual and English-centric. The lack of multilingual in-task dialogue data further discourages research in this direction. This thesis explores the use of transfer learning techniques to extend English-centric dialogue systems to multiple languages. In particular, this work focuses on five typologically diverse languages, with the expectation that well-performing models could generalize to other languages in the same families as the target languages, hence widening the accessibility of the systems to speakers of various languages. I propose two approaches: the Multilingual Retrieval-Augmented Dialogue Model (xRAD) and the Multilingual Generative Dialogue Model (xGenD). xRAD is adapted from a pre-trained multilingual question answering (QA) system and comprises a neural retriever and a multilingual generation model. Prior to response generation, the retriever fetches relevant knowledge and conditions the generator on the retrieved passages as part of the dialogue context. This approach can incorporate knowledge into conversational agents, thus improving the factual accuracy of a dialogue model. In addition, xRAD has advantages over xGenD because of its modularity, which allows the fusion of QA and dialogue systems so long as appropriate pre-trained models are employed. On the other hand, xGenD takes advantage of an existing English dialogue model and performs a zero-shot cross-lingual transfer by training sequentially on English dialogue and multilingual QA datasets. Both automated and human evaluation were carried out to measure the models' performance against a machine translation baseline. The results showed that xRAD outperformed xGenD significantly and surpassed the baseline in most metrics, particularly in terms of relevance and engagingness. While xRAD's performance was promising to some extent, a detailed analysis revealed that the generated responses were not actually grounded in the retrieved paragraphs. Suggestions were offered to mitigate the issue, which could hopefully lead to significant progress in multilingual knowledge-grounded dialogue systems in the future.
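The retrieve-then-generate structure attributed to xRAD, fetching relevant passages and conditioning the generator on them together with the dialogue context, can be sketched as follows. The toy word-overlap retriever and the mT5 checkpoint are placeholders for illustration, not the components used in the thesis.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder components: any multilingual seq2seq checkpoint and any retriever
# with a similar interface could be substituted.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
generator = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")

def retrieve(query, knowledge_base, top_k=2):
    """Toy retriever: rank passages by word overlap with the query.
    A real system would use a trained dense retriever instead."""
    overlap = lambda p: len(set(query.lower().split()) & set(p.lower().split()))
    return sorted(knowledge_base, key=overlap, reverse=True)[:top_k]

def respond(dialogue_history, knowledge_base):
    passages = retrieve(dialogue_history[-1], knowledge_base)
    # Condition generation on retrieved knowledge plus the dialogue context.
    prompt = " ".join(passages) + " </s> " + " ".join(dialogue_history)
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = generator.generate(**inputs, max_new_tokens=60)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```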
