1

Foundations of Vocabulary: Does Statistical Segmentation of Events Contribute to Word Learning?

Levine, Dani Fara January 2017 (has links)
This dissertation evaluates the untested assumption that the individuation of events into units matters for word learning, particularly the learning of terms which map onto relational event units (Gentner & Boroditsky, 2001; Maguire et al., 2006). We predicted that 3-year-old children’s statistical action segmentation abilities would relate to their verb comprehension and to their overall vocabulary knowledge (Research Question 1). We also hypothesized that statistical action segmentation would facilitate children’s learning of novel verbs (Research Question 2). Largely confirming our first prediction, children who were better able to statistically segment novel action sequences into reliable units had more sophisticated overall vocabularies and were quicker to select the correct referents of overall vocabulary items and verb vocabulary items; nevertheless, they did not have larger verb vocabularies. Unexpectedly, statistical action segmentation did not facilitate children’s learning of verbs for statistically consistent action units. However, children showed greater learning of verbs labeling statistical action part-units than verbs labeling statistical action non-units, providing some evidence for our second prediction. In sum, this dissertation takes an important step towards understanding how event segmentation may contribute to vocabulary acquisition. / Psychology
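The statistical segmentation paradigm the dissertation builds on follows the classic transitional-probability logic: within a reliable unit, each element predicts the next with high probability, while probabilities dip at unit boundaries. As an illustration only (the action labels, threshold, and stream below are invented, not taken from the dissertation's stimuli), a minimal segmenter might look like this:

```python
from collections import Counter

def segment_by_transitional_probability(stream, threshold=0.75):
    """Place a unit boundary wherever the transitional probability
    P(next | current) dips below the threshold."""
    pair_counts = Counter(zip(stream, stream[1:]))
    unit_counts = Counter(stream[:-1])  # every element that has a successor
    units, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        tp = pair_counts[(a, b)] / unit_counts[a]
        if tp < threshold:          # low predictability -> unit boundary
            units.append(current)
            current = [b]
        else:                       # high predictability -> same unit
            current.append(b)
    units.append(current)
    return units

# A toy stream built from three action triplets (A, B, C) whose internal
# order is fixed but whose sequence is variable, mimicking the statistical
# structure of habituation corpora in this literature.
A = ["wave", "spin", "hop"]
B = ["clap", "tap", "nod"]
C = ["point", "look", "reach"]
stream = A + B + C + B + A + C + A + C + B
units = segment_by_transitional_probability(stream)
print(units)  # recovers the nine triplets in their presented order
```

Within-triplet transitional probabilities are 1.0 here, while cross-boundary probabilities fall to 2/3 or below, so every boundary is detected; real behavioral stimuli are noisier, and children's sensitivity to such statistics is exactly what the dissertation measures.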
2

A CLUSTER-BASED METHOD FOR ACTION SEGMENTATION USING SPATIO-TEMPORAL AND POSITIONAL ENCODED EMBEDDINGS

GUILHERME DE AZEVEDO P MARQUES 20 April 2023 (has links)
The rise of video content as the main medium for communication is creating massive volumes of video data every second. The ability to understand these huge quantities of data automatically has become increasingly important, so better video-understanding methods are needed. A crucial task for overall video understanding is the recognition and localisation in time of different actions. To address this problem, action segmentation must be achieved. Action segmentation consists of temporally segmenting a video by labeling each frame with a specific action. In this work, we propose a novel action segmentation method that requires no prior video analysis and no annotated data. Our method involves extracting spatio-temporal features from videos using a pre-trained deep network. The features are then transformed using a positional encoder, and finally a clustering algorithm is applied in which each cluster presumably corresponds to a different, distinguishable action. In experiments, we show that our method produces competitive results on the Breakfast and Inria Instructional Videos dataset benchmarks.
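The pipeline described above — per-frame embeddings, a positional transform, then clustering — can be sketched roughly as follows. This is a simplified stand-in, not the thesis's method: the sinusoidal encoding, the `alpha` weighting, and the use of k-means are all assumptions for illustration (the actual embeddings would come from a pre-trained deep network).

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_actions(frame_embeddings, n_actions, alpha=0.1):
    """Cluster per-frame embeddings into action labels after appending a
    positional encoding, so temporally close frames tend to share a cluster."""
    n_frames, _ = frame_embeddings.shape
    # Simple sinusoidal positional encoding over the normalized time axis.
    pos = np.arange(n_frames)[:, None] / n_frames
    pe = np.hstack([np.sin(2 * np.pi * pos), np.cos(2 * np.pi * pos)])
    features = np.hstack([frame_embeddings, alpha * pe])
    # Each cluster is taken to correspond to one action.
    km = KMeans(n_clusters=n_actions, n_init=10, random_state=0)
    return km.fit_predict(features)

# Synthetic "video": three segments of 40 frames with well-separated
# 16-dimensional embeddings standing in for deep-network features.
rng = np.random.default_rng(0)
emb = np.vstack([
    rng.normal(0.0, 0.1, size=(40, 16)),
    rng.normal(5.0, 0.1, size=(40, 16)),
    rng.normal(-5.0, 0.1, size=(40, 16)),
])
labels = segment_actions(emb, n_actions=3)
```

With separable embeddings each segment maps to a single cluster; the interesting regime in the thesis is real video, where the positional term helps keep noisy frames temporally coherent.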
3

BI-DIRECTIONAL COACHING THROUGH SPARSE HUMAN-ROBOT INTERACTIONS

Mythra Varun Balakuntala Srinivasa Mur (16377864) 15 June 2023 (has links)
Robots have become increasingly common in sectors such as manufacturing, healthcare, and the service industries. With the growing demand for automation and the expectation of interactive and assistive capabilities, robots must learn to adapt to unpredictable environments the way humans can. This necessitates learning methods that effectively enable robots to collaborate with humans, learn from them, and provide guidance. Human experts commonly teach their collaborators to perform tasks via a few demonstrations, often followed by episodes of coaching that refine the trainee's performance during practice. Adopting a similar interaction-driven approach to teaching robots is highly intuitive and enables task experts to teach robots directly. Learning from Demonstration (LfD) is a popular method by which robots learn tasks from observing human demonstrations. However, for contact-rich tasks such as cleaning, cutting, or writing, LfD alone is insufficient to achieve good performance. Further, LfD methods are designed to achieve the observed goals while ignoring whether the actions maximize efficiency. By contrast, we recognize that combining the human social-learning strategies of practice and coaching enables learning tasks with improved performance and efficacy. To address these deficiencies of learning from demonstration, we propose a Coaching by Demonstration (CbD) framework that integrates LfD-based practice with sparse coaching interactions from a human expert.

The LfD-based practice in CbD was implemented as an end-to-end off-policy reinforcement learning (RL) agent whose action space and rewards are inferred from the demonstration. By modeling the reward as a similarity network trained on expert demonstrations, we eliminate the need for task-specific engineered rewards. Representation learning was leveraged to create a novel state feature that captures the interaction markers necessary for performing contact-rich skills. This LfD-based practice was combined with coaching, in which the human expert can improve or correct the objectives through a series of interactions. The dynamics of coaching interactions are formalized as a partially observable Markov decision process: the robot aims to learn the true objectives by observing corrective feedback from the human expert. We provide an approximate solution by reducing this to a policy parameter update using the KL divergence between the RL policy and a Gaussian approximation based on coaching. The proposed framework was evaluated on a dataset of 10 contact-rich tasks from the assembly (peg insertion), service (cleaning, writing, peeling), and medical (cricothyroidotomy, sonography) domains. Compared to behavioral cloning and reinforcement learning baselines, CbD demonstrates improved performance and efficiency.

During the learning process, the demonstrations and coaching feedback imbue the robot with expert knowledge of the task. To leverage this expertise, we develop a reverse coaching model in which the robot uses the knowledge from demonstrations and coaching corrections to provide guided feedback that improves human trainees' performance. Providing feedback adapted to an individual trainee's "style" is vital to coaching. To this end, we propose representing style as objectives in the task null space. Unsupervised clustering of the null-space trajectories using Gaussian mixture models allows the robot to learn different styles of executing the same skill. Given the coaching corrections and a database of style clusters, a style-conditioned RL agent was developed to provide feedback to human trainees by coaching their execution using virtual fixtures. The reverse coaching model was evaluated on two tasks, a simulated incision and obstacle avoidance, through a haptic teleoperation interface. The model improves human trainees' accuracy and completion time compared to a baseline without corrective feedback. Thus, by taking advantage of different human social-learning strategies, human-robot collaboration can be realized in human-centric environments.
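The core of the coaching update described in this abstract — nudging the RL policy toward a Gaussian fitted to the expert's corrections by descending the KL divergence — can be illustrated with a minimal sketch. The diagonal-Gaussian policy, the learning rate, and the closed-form gradient below are illustrative assumptions, not the thesis's actual parameterization:

```python
import numpy as np

def kl_gaussians(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians."""
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

def coach_update(policy_mu, coach_mu, coach_var, lr=0.25):
    """One gradient step on KL(policy || coach) with respect to the
    policy mean: d/dmu KL = (mu - coach_mu) / coach_var, so the mean
    is pulled toward the Gaussian fitted to the coaching corrections."""
    grad = (policy_mu - coach_mu) / coach_var
    return policy_mu - lr * grad

# Toy 2-D action parameters: the coach's corrections suggest a
# different target than the current policy mean.
policy_mu, policy_var = np.array([0.0, 0.0]), np.array([1.0, 1.0])
coach_mu, coach_var = np.array([1.0, -2.0]), np.array([0.5, 0.5])
new_mu = coach_update(policy_mu, coach_mu, coach_var)
# The update strictly reduces the divergence to the coaching Gaussian.
kl_before = kl_gaussians(policy_mu, policy_var, coach_mu, coach_var)
kl_after = kl_gaussians(new_mu, policy_var, coach_mu, coach_var)
```

Because only the mean term of the KL changes, each such step provably shrinks the gap to the coach's distribution; the thesis embeds this kind of update inside the full POMDP formulation of the coaching interaction.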
