141 |
Gaze tracking using Recurrent Neural Networks: Hardware agnostic gaze estimation using temporal features, synthetic data and a geometric model
Malmberg, Fredrik, January 2022
Vision is an important tool for us humans and significant effort has been put into creating solutions that let us measure how we use it. The most common technique for measuring gaze direction is to use specialised hardware such as infrared eye trackers. Recently, several Convolutional Neural Network (CNN) based architectures have been suggested, yielding impressive results on single Red Green Blue (RGB) images. However, limited research has been done on whether using several sequential images can improve tracking performance. Expanding this research to low-frequency, low-quality RGB images could further open up the possibility of improving tracking performance for models using off-the-shelf hardware such as web cameras or smartphone cameras. GazeCapture is a well-known dataset used for training RGB-based CNN models, but it lacks sequences of images and natural eye movements. In this thesis, a geometric gaze estimation model is introduced and synthetic data is generated using Unity to create sequences of images with both RGB input data and ground-truth Point of Gaze (POG). To make these images appear more natural, domain adaptation is performed using a CycleGAN. The data is then used to train several different models to evaluate whether temporal information can increase accuracy. Even though the improvement when using a Gated Recurrent Unit (GRU) based temporal model is limited compared with simple sequence averaging, the network achieves smoother tracking than a single-image model while still updating faster over a saccade (eye movement) than averaging does. This indicates that temporal features could improve accuracy. There are several promising areas of future research that could further improve performance, such as using real sequential data or further improving the domain adaptation of synthetic data. / Vision is an important sense for us humans and considerable effort has been put into creating solutions that let us measure how we use it. The most common way to do this today is to use specialised infrared-based eye-tracking hardware. Recently, machine learning and CNN-based models have achieved impressive results on single RGB images, but only limited research has been done on whether using a sequence of high-resolution images can further improve the performance of these models. Extending this to image sequences of lower frequency and quality opens up possibilities to improve the performance of sequential models that can use data from standard hardware such as a web camera or the camera of an ordinary phone. GazeCapture is a well-known dataset that can be used to train RGB-based CNN models on single images, but it contains neither image sequences nor images capturing natural eye movements. To address this, the sequential models in this thesis were trained on data generated from 3D models in Unity. To make the synthetic data comparable to real images, it was adapted using a CycleGAN. Even though the improvement achieved with sequential GRU-based models was limited compared to a model averaging over the sequence, the trained sequential model achieved smoother tracking than single-image models while updating faster over a saccade (eye movement) than the averaging model. This indicates that temporal information can improve eye tracking even for low-frequency, lower-quality image sequences. There are several interesting areas for further research that could increase the performance of similar systems, such as using larger amounts of real sequential data or improved domain adaptation of synthetic data.
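To make the temporal-versus-averaging comparison concrete, here is a minimal sketch (not the thesis code) of a GRU regression head over per-frame eye features next to a sequence-averaging baseline; the feature dimension, hidden size, and window length are invented for illustration.

```python
import torch
import torch.nn as nn

class TemporalGazeNet(nn.Module):
    """Predict a 2D point of gaze (POG) from a sequence of frame features."""
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)          # (x, y) on the screen

    def forward(self, x):                             # x: (batch, seq, feat_dim)
        out, _ = self.gru(x)
        return self.head(out[:, -1])                  # POG at the last time step

# Baseline: average the per-frame predictions over the same window.
single_frame_head = nn.Linear(128, 2)

def averaged_baseline(x):
    return single_frame_head(x).mean(dim=1)

feats = torch.randn(4, 8, 128)                        # 4 windows of 8 frames
print(TemporalGazeNet()(feats).shape)                 # torch.Size([4, 2])
print(averaged_baseline(feats).shape)                 # torch.Size([4, 2])
```

The GRU sees the whole window and can react within a few frames of a saccade, whereas the averaging baseline lags by roughly half the window length.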
|
142 |
Automatic Analysis of Facial Actions: Learning from Transductive, Supervised and Unsupervised Frameworks
Chu, Wen-Sheng, 01 January 2017
Automatic analysis of facial actions (AFA) can reveal a person's emotion, intention, and physical state, and makes possible a wide range of applications. To enable reliable, valid, and efficient AFA, this thesis investigates automatic analysis of facial actions through transductive, supervised and unsupervised learning. Supervised learning for AFA is challenging, in part, because of individual differences among persons in face shape and appearance and variation in video acquisition and context. To improve generalizability across persons, we propose a transductive framework, Selective Transfer Machine (STM), which personalizes generic classifiers through joint sample reweighting and classifier learning. By personalizing classifiers, STM offers improved generalization to unknown persons. As an extension, we develop a variant of STM for use when partially labeled data are available. Additional challenges for supervised learning include learning an optimal representation for classification, variation in base rates of action units (AUs), correlation between AUs, and temporal consistency. While these challenges could be partly accommodated with an SVM or STM, a more powerful alternative is afforded by an end-to-end supervised framework (i.e., deep learning). We propose a convolutional network with long short-term memory (LSTM) and multi-label sampling strategies. We compared SVM, STM and deep learning approaches with respect to AU occurrence and intensity in and between the BP4D+ [282] and GFT [93] databases, which consist of around 0.6 million annotated frames. Annotated video is not always possible or desirable to obtain. We introduce an unsupervised Branch-and-Bound framework to discover correlated facial actions in un-annotated video; we term this approach Common Event Discovery (CED). We evaluate CED in video and motion-capture data. CED achieved moderate convergence with supervised approaches and enabled discovery of novel patterns occult to supervised approaches.
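The following is a deliberately simplified, two-stage sketch of the personalization idea behind STM: reweight generic training samples toward an unseen person's feature distribution, then fit a weighted classifier. The real STM optimizes the reweighting and the classifier jointly; the kernel width and toy data here are invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 16))          # generic pool spanning many persons
y_train = (X_train[:, 0] > 0).astype(int)     # toy AU present/absent labels
X_test = rng.normal(loc=0.5, size=(50, 16))   # one unseen person, unlabeled

# Weight each training sample by its average kernel similarity to the test
# person's samples, a crude stand-in for STM's distribution-matching step.
weights = rbf_kernel(X_train, X_test, gamma=0.1).mean(axis=1)
weights *= len(weights) / weights.sum()       # normalize to mean weight 1

clf = SVC(kernel="rbf", gamma=0.1)
clf.fit(X_train, y_train, sample_weight=weights)
print(clf.predict(X_test[:5]))
```

Samples that resemble the test person dominate training, which is what gives the personalized classifier its improved generalization to that person.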
|
143 |
Transfer Learning for Medication Adherence Prediction from Social Forums Self-Reported Data
Haas, Kyle (5931056), 17 January 2019
Medication non-adherence and non-compliance left unaddressed can compound into severe medical problems for patients. Identifying patients who are likely to become non-adherent can help reduce these problems. Despite these benefits, monitoring adherence at scale is cost-prohibitive. Social forums offer an easily accessible, affordable, and timely alternative to the traditional methods based on claims data. This study investigates the potential of medication adherence prediction based on social forum data for diabetes and fibromyalgia therapies by using transfer learning from the Medical Expenditure Panel Survey (MEPS).

Predictive adherence models are developed using both survey and social forum data and different random forest (RF) techniques. The first of these implementations uses binned inputs from k-means clustering. The second technique is based on ternary trees instead of the widely used binary decision trees. Both techniques are able to handle missing data, a prevalent characteristic of social forum data.

The results of this study show that transfer learning between survey models and social forum models is possible. Using MEPS survey data and the techniques listed above to derive RF models, less than 5% difference in accuracy was observed between the MEPS test dataset and the social forum test dataset. Along with these RF techniques, another RF implementation with imputed means for the missing values was developed and shown to predict adherence for social forum patients with an accuracy above 70%.

This thesis shows that a model trained with verified survey data can be used to complement traditional medication adherence models by predicting adherence from unverified, self-reported data in a dynamic and timely manner. Furthermore, this model provides a method for discovering objective insights from subjective social reports. Additional investigation is needed to improve the prediction accuracy of the proposed model and to assess biases that may be inherent to self-reported adherence measures in social health networks.
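As a rough illustration of the imputed-means variant mentioned above (not the thesis implementation), the sketch below fills missing feature values with column means before fitting a standard random forest; the synthetic data stands in for the MEPS and social-forum features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))                        # synthetic feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)               # toy adherent / non-adherent label
X[rng.random(X.shape) < 0.3] = np.nan                 # ~30% missing, like sparse forum data

model = make_pipeline(
    SimpleImputer(strategy="mean"),                   # imputed means for missing values
    RandomForestClassifier(n_estimators=200, random_state=0),
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)
print(f"accuracy: {model.score(X_te, y_te):.2f}")
```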
|
144 |
Using Convolutional Neural Networks to Detect People Around Wells in South Sudan
Kastberg, Maria, January 2019
The organization International Aid Services (IAS) provides people in East Africa with clean water through well drilling. The wells are located in areas far away from the investors, and IAS therefore wishes to monitor its wells to get a better overview of whether different types of improvements need to be made. Of particular interest is seeing the load on different water sources at different times of the day and during the year, and knowing how many people visit the wells. In this paper, a method is proposed for counting people around the wells. The goal is to choose a suitable method for detecting humans in images and to evaluate how it performs. Counting humans in images is not a new topic, but the situation here imposes some restrictions. A Raspberry Pi with an associated camera is used, a small embedded system that cannot handle large and complex software, and the amount of data in the project is limited. The method proposed in this project uses a pre-trained convolutional-network object detector called the Single Shot Detector (SSD), which is adapted to suit smaller devices and applications. The pre-trained network it is based on is MobileNet, a network developed for use on smaller systems. To assess how well the chosen detector performs, it is compared with some other models, among them a detector based on the Inception network, a significantly larger network than MobileNet. The base network is modified by transfer learning. The results show that a fine-tuned and modified network achieves better results, improving the F1-score from 0.49 for a non-fine-tuned model to 0.66 for the fine-tuned one.
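For reference, F1-scores like those quoted above can be computed by matching detections to annotations at an intersection-over-union (IoU) threshold; the following sketch shows one common way to do this (the greedy matching rule and the 0.5 threshold are assumptions, not taken from the thesis).

```python
# Boxes are (x1, y1, x2, y2) in pixel coordinates.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f1_score(pred_boxes, gt_boxes, iou_thr=0.5):
    matched, tp = set(), 0
    for p in pred_boxes:                      # assume sorted by confidence
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

preds = [(10, 10, 50, 90), (60, 15, 95, 80)]  # two detected people
gts = [(12, 8, 52, 92)]                       # one annotated person
print(f1_score(preds, gts))                   # 0.667 here: 1 TP, 1 FP, 0 FN
```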
|
145 |
Multi-Label Text Classification with Transfer Learning for Policy Documents: The Case of the Sustainable Development Goals
Rodríguez Medina, Samuel, January 2019
We created and analyzed a text classification dataset from freely available web documents related to the United Nations' Sustainable Development Goals. We then used it to train and compare different multi-label text classifiers with the aim of exploring methods that facilitate the search for information in this type of document. We explored the effectiveness of deep learning and transfer learning in text classification by fine-tuning different pre-trained language representations: Word2Vec, GloVe, ELMo, ULMFiT and BERT. We also compared these approaches against a baseline of more traditional algorithms without transfer learning, specifically multinomial Naive Bayes, logistic regression, k-nearest neighbors and Support Vector Machines. We then analyzed the results of our experiments quantitatively and qualitatively. The best results in terms of micro-averaged F1 score and AUROC are obtained by BERT. However, it is also notable that the second-best classifier in terms of micro-averaged F1 score is the Support Vector Machine, closely followed by the logistic regression classifier, both of which have the advantage of being less computationally expensive than BERT. The results also show a close relation between our dataset size and the effectiveness of the classifiers.
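As an illustration of the non-transfer baseline described above, here is a minimal sketch of a TF-IDF one-vs-rest logistic regression scored with micro-averaged F1; the toy texts and SDG labels are placeholders, not the thesis dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

texts = [
    "expand access to clean water and sanitation services",
    "universal primary education and teacher training programs",
    "renewable energy investment reduces emissions",
    "school meals improve both education and health outcomes",
]
labels = [{"SDG6"}, {"SDG4"}, {"SDG7", "SDG13"}, {"SDG4", "SDG3"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                 # one binary column per SDG label

clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(texts, Y)
pred = clf.predict(texts)
print("micro-F1:", f1_score(Y, pred, average="micro"))
```

Micro-averaging pools true/false positives across all labels before computing F1, which is why it is the natural headline metric when label frequencies are skewed.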
|
146 |
Scaling Up Reinforcement Learning without Sacrificing Optimality by Constraining Exploration
Mann, Timothy, 14 March 2013
The purpose of this dissertation is to understand how algorithms can efficiently learn to solve new tasks based on previous experience, instead of being explicitly programmed with a solution for each task we want them to solve. Here a task is a series of decisions, such as a robot vacuum deciding which room to clean next or an intelligent car deciding to stop at a traffic light. In such cases, state-of-the-art learning algorithms are difficult to employ in practice because they often make thousands of mistakes before reliably solving a task. However, humans learn solutions to novel tasks while often making fewer mistakes, which suggests that more efficient learning algorithms may exist. One advantage that humans have over state-of-the-art learning algorithms is that, while learning a new task, humans can apply knowledge gained from previously solved tasks. The central hypothesis investigated by this dissertation is that learning algorithms can solve new tasks more efficiently when they take into consideration knowledge learned from solving previous tasks. Although this hypothesis may appear to be obviously true, what knowledge to use and how to apply that knowledge to new tasks is a challenging, open research problem.
I investigate this hypothesis in three ways. First, I developed a new learning algorithm that is able to use prior knowledge to constrain the exploration space. Second, I extended a powerful theoretical framework in machine learning, called Probably Approximately Correct (PAC), so that I could formally compare the efficiency of algorithms that solve only a single task to algorithms that consider knowledge from previously solved tasks. With this framework, I found sufficient conditions under which knowledge from previous tasks improves the efficiency of learning to solve new tasks, and also identified conditions where transferring knowledge may impede learning. I present situations where transfer learning can be used to intelligently constrain the exploration space so that optimality loss is minimized. Finally, I tested the efficiency of my algorithms in various experimental domains.
These theoretical and empirical results provide support for my central hypothesis. The theory and experiments of this dissertation provide a deeper understanding of what makes a learning algorithm efficient, so that it can be widely used in practice. Finally, these results also contribute to the general goal of creating autonomous machines that can be reliably employed to solve complex tasks.
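One way to picture the exploration-constraining idea (this is an illustrative toy, not the dissertation's PAC-analyzed algorithm) is a Q-learner whose exploration is restricted to the action subset that prior tasks found useful:

```python
import random
from collections import defaultdict

def q_learning(env_step, allowed, episodes=300, alpha=0.1, gamma=0.95, eps=0.2):
    """allowed[s] is the action subset retained from previously solved tasks."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = 0
        for _ in range(100):                          # cap episode length
            acts = allowed[s]                         # constrained exploration set
            if random.random() < eps:
                a = random.choice(acts)
            else:
                a = max(acts, key=lambda b: Q[(s, b)])
            s2, r, done = env_step(s, a)
            best_next = max(Q[(s2, b)] for b in allowed[s2])
            Q[(s, a)] += alpha * (r + gamma * best_next * (not done) - Q[(s, a)])
            if done:
                break
            s = s2
    return Q

# Toy 5-state chain: action 1 moves right (reward at the end), action 0 resets.
def step(s, a):
    if a == 1:
        return (s + 1, 1.0, True) if s == 3 else (s + 1, 0.0, False)
    return 0, 0.0, False

pruned = {s: [1] for s in range(5)}                   # prior tasks ruled out "reset"
Q = q_learning(step, pruned)
print({s: round(Q[(s, 1)], 2) for s in range(4)})     # values grow toward the goal
```

Pruning shrinks the set of state-action pairs the agent must try, which is the source of the efficiency gain; the dissertation's contribution is characterizing when such pruning preserves optimality.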
|
147 |
Assessing Nurse and Medical Assistant Perceived Needs Prior to Implementation of Expanded Web-based Training in Physician Clinics
Hopkins, Pamela Jean Clinton, May 2010
The purpose of this study was to assess nurse and medical assistant perceived needs prior to implementing an expanded web-based training (WBT) program in physician clinics. This case study was conducted with a mixed-data approach using quantitative and descriptive survey data collection. A total of 239 nurses and medical assistants within the Trinity Mother Frances Hospitals and Clinics, dispersed throughout east, northeast, and north central Texas, participated.
The participants shared knowledge and behaviors common to the culture of the organization. When new and existing clinical staff traveled to the distant primary campus for training, the operations of the clinic practice were disrupted. Employees are not hired in groups comprising convenient training class sizes, and mandatory training often cannot wait until a class reaches a cost-effective size.
The data were collected using a 50-item survey evaluating computer access, computer usage, computer knowledge (satisfaction, frustration, and motivation to transfer learning), and WBT preference (employee's support and employee's perception of supervisor's support). Quantitative data were collected in the form of dichotomous yes/no responses and ordinal data from two Likert-type scales. Descriptive survey data were collected using open-ended questions emphasizing perceived strengths, weaknesses, opportunities and threats (SWOT) of WBT. Demographic data were collected to facilitate comparison of perspectives based on the demographic information gathered.
To support the reliability and validity of the Clinic WBT Needs Assessment (CWBTNA), exploratory factor analysis, Cronbach's coefficient alpha, and correlations were used to validate the survey instrument. Chi-squares, ANOVAs, and t-tests were conducted. Following the Bonferroni correction to control the Type I error rate (α), four t-tests, two chi-squares, and three ANOVAs demonstrated significance. Descriptive responses generated from the open-ended survey items were transcribed into an Excel spreadsheet, which allowed coding and sorting.
Themes consistent with the scales of the quantitative survey emerged. Among additional findings, statistical data demonstrated that staff perceived they transferred learning into the workplace best when they perceived greater supervisor support. All findings are detailed in the document.
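For readers unfamiliar with the reliability step, Cronbach's coefficient alpha has a simple closed form; the sketch below computes it from a toy response matrix (the items and responses are invented, not the CWBTNA data).

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents answering a four-item Likert scale (1-5).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")  # >= 0.7 is a common acceptability rule of thumb
```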
|
148 |
Communication and alignment of grounded symbolic knowledge among heterogeneous robots
Kira, Zsolt, 05 April 2010
Experience forms the basis of learning. It is crucial in the development of human intelligence, and more broadly allows an agent to discover and learn about the world around it. Although experience is fundamental to learning, it is costly and time-consuming to obtain. In order to speed this process up, humans in particular have developed communication abilities so that ideas and knowledge can be shared without requiring first-hand experience.
Consider the same need for knowledge sharing among robots. Based on the recent growth of the field, it is reasonable to assume that in the near future there will be a collection of robots learning to perform tasks and gaining their own experiences in the world. In order to speed this learning up, it would be beneficial for the various robots to share their knowledge with each other. In most cases, however, the communication of knowledge among humans relies on the existence of similar sensory and motor capabilities. Robots, on the other hand, widely vary in perceptual and motor apparatus, ranging from simple light sensors to sophisticated laser and vision sensing.
This dissertation defines the problem of how heterogeneous robots with widely different capabilities can share experiences gained in the world in order to speed up learning. The work focuses specifically on differences in sensing and perception, which can be used both for perceptual categorization tasks and for determining actions based on environmental features. Motivating the problem, experiments first demonstrate that heterogeneity does indeed pose a problem during the transfer of object models from one robot to another. This is true even when using state-of-the-art object recognition algorithms that use SIFT features, designed to be unique and reproducible.
It is then shown that the abstraction of raw sensory data into intermediate categories for multiple object features (such as color, texture, and shape), represented as Gaussian Mixture Models, can alleviate some of these issues and facilitate effective knowledge transfer. Object representation, heterogeneity, and knowledge transfer are framed within Gärdenfors' conceptual spaces, geometric spaces that use similarity measures as the basis of categorization. This representation is used to model object properties (e.g. color or texture) and concepts (object categories and specific objects).
A framework is then proposed to allow heterogeneous robots to build models of their differences with respect to the intermediate representation using joint interaction in the environment. Confusion matrices are used to map property pairs between two heterogeneous robots, and an information-theoretic metric is proposed to model information loss when going from one robot's representation to another. We demonstrate that these metrics allow for cognizant failure, where the robots can ascertain if concepts can or cannot be shared, given their respective capabilities.
After this period of joint interaction, the learned models are used to facilitate communication and knowledge transfer in a manner that is sensitive to the robots' differences. It is shown that heterogeneous robots are able to learn accurate models of their similarities and differences, and to use these models to transfer learned concepts from one robot to another in order to bootstrap the learning of the receiving robot. In addition, several types of communication tasks are used in the experiments; for example, how can a robot communicate a distinguishing property of an object to help another robot differentiate it from its surroundings? Throughout the dissertation, the claims are validated through both simulation and real-robot experiments.
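A minimal sketch of the mapping step described above: a confusion matrix relating two robots' property categories, with mutual information as one possible information-theoretic measure of what survives the mapping (the dissertation's actual metric may differ, and the labels below are toy data).

```python
import numpy as np

def confusion(labels_a, labels_b, n_a, n_b):
    """Counts of robot B's category for each of robot A's categories."""
    C = np.zeros((n_a, n_b))
    for a, b in zip(labels_a, labels_b):
        C[a, b] += 1
    return C

def mutual_information(C):
    """I(A; B) in bits from a joint count matrix."""
    P = C / C.sum()
    pa = P.sum(axis=1, keepdims=True)
    pb = P.sum(axis=0, keepdims=True)
    nz = P > 0
    return float((P[nz] * np.log2(P[nz] / (pa @ pb)[nz])).sum())

# Both robots categorize the same 12 objects' color; robot B's cheaper sensor
# merges two of robot A's categories.
a = [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
b = [0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2]     # B cannot tell A's 2 from A's 3
C = confusion(a, b, 4, 3)
print(C)
print(f"I(A;B) = {mutual_information(C):.2f} bits of A's 2 bits")
```

Here 1.5 of A's 2 bits survive the mapping; the shortfall quantifies exactly which concepts cannot be shared, enabling the cognizant failure described above.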
|
149 |
Adaptive trading agent strategies using market experience
Pardoe, David Merrill, 22 June 2011
Along with the growth of electronic commerce has come an interest in developing autonomous trading agents. Often, such agents must interact directly with other market participants, and so the behavior of these participants must be taken into account when designing agent strategies. One common approach is to build a model of the market, but this approach requires the use of historical market data, which may not always be available. This dissertation addresses such a case: that of an agent entering a new market in which it has no previous experience. While the agent could adapt by learning about the behavior of other market participants, it would need to do so in an online fashion. The agent would not necessarily have to learn from scratch, however. If the agent had previous experience in similar markets, it could use this experience to tailor its learning approach to its particular situation.
This dissertation explores methods that a trading agent could use to take advantage of previous market experience when adapting to a new market. Two distinct learning settings are considered. In the first, an agent acting as an auctioneer must adapt the parameters of an auction mechanism in response to bidder behavior, and a reinforcement learning approach is used. The second setting concerns agents that must adapt to the behavior of competitors in two scenarios from the Trading Agent Competition: supply chain management and ad auctions. Here, the agents use supervised learning to model the market. In both settings, methods of adaptation can be divided into four general categories: i) identifying the most similar previously encountered market, ii) learning from the current market only, iii) learning from the current market but using previous experience to tune the learning algorithm, and iv) learning from both the current and previous markets. The first contribution of this dissertation is the introduction and experimental validation of a number of novel algorithms for market adaptation fitting these categories. The second contribution is an exploration of the degree to which the quantity and nature of market experience impact the relative performance of methods from these categories.
|
150 |
Modélisation multi-échelles de la morphologie urbaine à partir de données carroyées de population et de bâti / Multiscale modelling of urban morphology using gridded data
Baro, Johanna, 25 March 2015
For some twenty years, the question of the links between urban form and transport has been at the heart of reflection on sustainable planning policies. In this context, the growing availability of data distributed on regular grids offers a new perspective for modelling urban structures from density measurements freed from the constraints of administrative divisions. Using population density and built-up area density data available across France on grids with 200-metre cells, we propose two types of classification suited to the study of travel behaviour and urban development: classifications of urban fabrics and classifications of morphotypes of urban development. The construction of such classified images rests on a theoretical and experimental modelling approach that raises significant methodological challenges for the classification of statistically varied urban spaces. To process these spaces exhaustively, we propose a method for classifying urban fabrics by supervised transfer learning. This method uses the formalism of hidden Markov random fields to take into account the dependencies present in these spatial data. The morphotype classifications are then obtained by enriching these first classified images, formalized from chorematic models and implemented through qualitative spatial reasoning. Analysing these classified images with quantitative spatial reasoning methods and factor analyses allowed us to reveal the morphological diversity of 50 French metropolitan areas, and highlighted the relevance of these classifications for characterizing urban spaces with respect to different planning issues related to density and multipolarity. / For the past two decades, the relationships between urban form and travel patterns have been central to reflection on sustainable urban planning and transport policy. The increasing distribution of regular-grid data offers in this context a new perspective for modelling urban structures from density measurements freed from the constraints of administrative divisions. Population density data are now available on 200-metre grids covering France. We complete these data with built-up area densities in order to propose two types of classified images adapted to the study of travel patterns and urban development: classifications of urban fabrics and classifications of morphotypes of urban development. The construction of such classified images is based on theoretical and experimental modelling, which raises methodological issues regarding the classification of statistically varied urban spaces. To process these spaces exhaustively, we propose a per-pixel classification method for urban fabrics based on supervised transfer learning. Hidden Markov random fields are used to take into account the dependencies in the spatial data. The classifications of morphotypes are then obtained by enriching the urban-fabric classifications; they are formalized from chorematic theoretical models and implemented by qualitative spatial reasoning. The analysis of these classifications by methods of quantitative spatial reasoning and factor analysis allowed us to reveal the morphological diversity of 50 metropolitan areas.
It highlights the relevance of these classifications for characterizing urban areas in accordance with various development issues related to density or multipolar development.
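To illustrate why a Markov-random-field prior helps per-pixel classification, the following sketch smooths a noisy per-cell labelling with iterated conditional modes (ICM) under a Potts-style neighbourhood penalty; this is a generic illustration with toy data, not the thesis's transfer-learning model.

```python
import numpy as np

def icm(unary, beta=0.8, iters=5):
    """unary[i, j, k]: cost of assigning class k to cell (i, j); lower is better."""
    h, w, n_classes = unary.shape
    labels = unary.argmin(axis=2)                     # independent per-pixel start
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                costs = unary[i, j].copy()
                # Potts penalty: pay beta for each 4-neighbour we disagree with.
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        costs += beta * (np.arange(n_classes) != labels[ni, nj])
                labels[i, j] = costs.argmin()
    return labels

rng = np.random.default_rng(1)
truth = np.zeros((12, 12), dtype=int)
truth[:, 6:] = 1                                      # two adjacent urban fabrics
noisy = np.where(rng.random(truth.shape) < 0.2, 1 - truth, truth)
unary = np.stack([(noisy != k).astype(float) for k in (0, 1)], axis=2)
print((icm(unary) == truth).mean())                   # typically well above the noisy ~0.8
```

Isolated misclassified cells are outvoted by their neighbours, which is exactly the spatial-dependency effect the hidden Markov random field formalism captures.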
|