1. COMPARISON OF GENERAL AND HIGH PROBABILITY MOTOR SEQUENCE ATTENTIONAL CUES FOR INCREASING VOCABULARY IDENTIFICATION IN STUDENTS WITH AUTISM
Obst, Ashleigh G. (01 January 2017)
The present study assessed whether embedding high-probability responding (high-p) into an attentional cue, compared with a general attentional (GA) cue, would produce differential responding in students with moderate and severe disabilities on grade-level science vocabulary word identification. Using an adapted alternating treatments design, three students with autism spectrum disorder received two interventions, one delivered with a GA cue and one with a high-p cue, to determine which was more efficient. It was hypothesized that the attentional cue with the high-probability motor sequence would be more effective for teaching vocabulary word identification.
2. Learning descriptive models of objects and activities from egocentric video
Fathi, Alireza (29 August 2013)
Recent advances in camera technology have made it possible to build a comfortable, wearable system which can capture the scene in front of the user throughout the day. Products based on this technology, such as GoPro and Google Glass, have generated substantial interest. In this thesis, I present my work on egocentric vision, which leverages wearable camera technology and provides a new line of attack on classical computer vision problems such as object categorization and activity recognition.
The dominant paradigm for object and activity recognition over the last decade has been based on using the web. In this paradigm, in order to learn a model for an object category such as "coffee jar", various images of that object type are fetched from the web (e.g. through Google image search), features are extracted, and classifiers are then learned. This paradigm has led to great advances in the field and has produced state-of-the-art results for object recognition. However, it has two main shortcomings: (a) objects on the web appear in isolation, stripped of the context of daily usage; and (b) web data does not represent what we actually see every day.
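To make the web-supervised paradigm concrete, the following is a minimal illustrative sketch, not taken from the thesis: images retrieved from a web image search are converted to a fixed-size descriptor and a linear classifier is trained on them. The HOG descriptor and the linear SVM are assumptions chosen for brevity, not the features or classifier used in this work.

    # Illustrative sketch of the web-supervised recognition pipeline described above.
    # Feature and classifier choices (HOG, LinearSVC) are assumptions, not the thesis's.
    import numpy as np
    from skimage.color import rgb2gray
    from skimage.feature import hog
    from skimage.transform import resize
    from sklearn.svm import LinearSVC

    def extract_features(image):
        """Resize to a fixed size and compute a HOG descriptor (illustrative choice)."""
        gray = rgb2gray(image) if image.ndim == 3 else image
        patch = resize(gray, (128, 128))
        return hog(patch, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

    def train_web_classifier(images, labels):
        """images: arrays fetched from a web image search for the target category
        (e.g. 'coffee jar') plus negatives; labels: 1 for the category, 0 otherwise."""
        X = np.stack([extract_features(img) for img in images])
        clf = LinearSVC()
        clf.fit(X, np.asarray(labels))
        return clf

The same trained classifier would then be applied to new images, which is exactly where the two shortcomings above surface: the web training images lack the context and clutter of everyday scenes.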
In this thesis, I demonstrate that egocentric vision can address these limitations as an alternative paradigm. I will demonstrate that contextual cues and the actions of a user can be exploited in an egocentric vision system to learn models of objects under very weak supervision. In addition, I will show that measurements of a subject's gaze during object manipulation tasks can provide novel feature representations to support activity recognition. Moving beyond surface-level categorization, I will showcase a method for automatically discovering object state changes during actions, and an approach to building descriptive models of social interactions between groups of individuals. These new capabilities for egocentric video analysis will enable new applications in life logging, elder care, human-robot interaction, developmental screening, augmented reality and social media.
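As a rough, hypothetical illustration of how gaze measurements could drive a feature representation (the thesis's actual representation may differ), one simple option is to crop a patch around the tracked gaze point in each frame and describe only that region, so the features concentrate on the object currently being manipulated:

    # Hypothetical sketch of a gaze-centered frame descriptor for activity recognition;
    # the crop size and color-histogram features are illustrative assumptions only.
    import numpy as np

    def gaze_region_feature(frame, gaze_xy, half_size=40, bins=8):
        """Crop a square patch around the tracked gaze point in a color frame
        and return a normalized per-channel color histogram."""
        h, w = frame.shape[:2]
        x, y = int(gaze_xy[0]), int(gaze_xy[1])
        x0, x1 = max(0, x - half_size), min(w, x + half_size)
        y0, y1 = max(0, y - half_size), min(h, y + half_size)
        patch = frame[y0:y1, x0:x1]
        hist = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
                for c in range(patch.shape[2])]
        feat = np.concatenate(hist).astype(float)
        return feat / (feat.sum() + 1e-8)

Per-frame descriptors of this kind could then be pooled over a video clip and passed to any standard classifier for activity recognition.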