21

Learning to Recognize Agent Activities and Intentions

Kerr, Wesley January 2010 (has links)
Psychological research has demonstrated that subjects shown animations consisting of nothing more than simple geometric shapes perceive the shapes as being alive, having goals and intentions, and even engaging in social activities such as chasing and evading one another. While the subjects could not directly perceive the affective state, motor commands, or beliefs and intentions of the actors in the animations, they still used intentional language to describe the moving shapes. The purpose of this dissertation is to design, develop, and evaluate computational representations and learning algorithms that learn to recognize the behaviors of agents as they perform different activities. These activities take place within simulations, both 2D and 3D. Our goal is to add as little hand-crafted knowledge to the representation as possible and to produce algorithms that perform well across a variety of activity types. Any patterns found in similar activities should be discovered by the learning algorithm and not by us, the designers. In addition, we demonstrate that if an artificial agent learns about activities through participation, where it has access to its own internal affective state, motor commands, etc., it can then infer the unobservable affective state of other agents.
22

Activity recognition in desktop environments /

Shen, Jianqiang. January 2009 (has links)
Thesis (Ph. D.)--Oregon State University, 2009. / Printout. Includes bibliographical references (leaves 129-138). Also available on the World Wide Web.
23

Automatic extraction of behavioral patterns for elderly mobility and daily routine analysis

Li, Chen 08 June 2018 (has links)
The elderly living in smart homes can have their daily movement recorded and analyzed. Given that different elders have their own living habits, a methodology that can automatically identify their daily activities and discover their daily routines is useful for better elderly care and support. In this thesis research, we focus on developing data mining algorithms for the automatic detection of behavioral patterns from the trajectory data of an individual, for activity identification, daily routine discovery, and activity prediction. The key challenges in human activity analysis include the need to consider longer-range dependency of the sensor triggering events for activity modeling, and to capture the spatio-temporal variations of the behavioral patterns exhibited by humans. We propose to represent the trajectory data using a behavior-aware flow graph, a probabilistic finite state automaton whose nodes and edges are attributed with local behavior-aware features. Subflows can then be extracted from the flow graph using kernel k-means as the underlying behavioral patterns for activity identification. Given the identified activities, we propose a novel nominal matrix factorization method under a Bayesian framework with Lasso to extract highly interpretable daily routines. To better handle the variation of activity durations within each daily routine, we further extend the Bayesian framework with a Markov jump process prior to incorporate the shift-invariant property into the model. For empirical evaluation, the proposed methodologies were compared with a number of existing activity identification and daily routine discovery methods on both synthetic and publicly available real smart home data sets, with promising results. In the thesis, we also illustrate how the proposed unsupervised methodology can be used to support exploratory behavior analysis for elderly care.
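To make the clustering step concrete, below is a minimal kernel k-means sketch of the kind used above to extract subflows from the behavior-aware flow graph. The RBF kernel and the random node features are illustrative assumptions, not the thesis's actual feature set.

```python
# A minimal kernel k-means sketch (assumed RBF kernel, synthetic node features).
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Cluster n items given an n x n kernel matrix K."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)          # random initial assignment
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            mask = labels == c
            nc = max(mask.sum(), 1)
            # squared distance to the cluster centroid in feature space:
            # K_ii - (2/|c|) * sum_j K_ij + (1/|c|^2) * sum_jl K_jl
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Hypothetical example: 100 flow-graph nodes with 5-dim behavior-aware features.
X = np.random.default_rng(1).normal(size=(100, 5))
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K = np.exp(-sq / (2 * X.var() * X.shape[1]))     # RBF kernel matrix
print(kernel_kmeans(K, k=4)[:10])                # cluster label per node
```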
24

WiFi-Based Driver Activity Recognition Using CSI Signal

Bai, Yunhao January 2020 (has links)
No description available.
25

Motivation and Quantification of Physical Activity for Hospitalised Cancer Patients

Thorsteinsdottir, Arnrun January 2015 (has links)
Previous studies have shown the positive effect of increased physical activity for cancer patients during chemotherapy and stem cell transplantation. Moderate exercise has been shown to cause significantly less loss of muscle mass, fewer symptoms of cancer-related fatigue, less need for platelet transfusions during treatment, and shorter hospitalisation. Inactivity at hospital clinics is nevertheless still a major concern, and lack of motivation appears to play a big role. It has been shown that an overview of activity level, personal goal setting, and education on the importance of physical activity can work as motivation towards increased physical activity. This project aimed to build a prototype that can quantify the physical activity of hospitalised cancer patients and present it in a motivational and informative way. An accelerometer was used to collect activity data; the data was processed and used to train a support vector machine for classification of activities. The prototype recognises the postures lying down, sitting, and standing, as well as when the user is active. Over 90% accuracy was obtained in activity recognition for specific training sets. The prototype was tested on patients at the haematology clinic at the Karolinska hospital in Huddinge. Test subjects rated the classification accuracy and the motivational value of the prototype on a scale of 1-5: the accuracy was rated 4.2 and the motivational value 3.25. A pilot study to further test the feasibility of the product will be performed in the summer of 2015.
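As a rough illustration of the classification pipeline described above, the following sketch trains a support vector machine on simple statistical features of windowed 3-axis accelerometer data. The synthetic data generator, window length, and feature choice are assumptions for illustration only.

```python
# A minimal sketch: windowed accelerometer data -> per-axis mean/std features
# -> SVM posture classifier. All data here is synthetic and assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
LABELS = ["lying", "sitting", "standing", "active"]

def make_window(label, n=128):
    """Fake a 3-axis accelerometer window (n samples) for a posture."""
    base = {"lying": [0, 0, 1], "sitting": [0.5, 0, 0.8],
            "standing": [0, 1, 0], "active": [0, 1, 0]}[label]
    noise = 0.5 if label == "active" else 0.05   # activity adds motion noise
    return rng.normal(base, noise, size=(n, 3))

def features(win):
    """Mean and standard deviation per axis: 6 features per window."""
    return np.concatenate([win.mean(axis=0), win.std(axis=0)])

X, y = [], []
for label in LABELS:
    for _ in range(200):
        X.append(features(make_window(label)))
        y.append(label)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), y, random_state=0)
clf = SVC(kernel="rbf").fit(Xtr, ytr)
print(f"accuracy: {clf.score(Xte, yte):.2f}")
```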
26

Automated Recognition of Human Activity : A Practical Perspective of the State of Research

Hansson, Hampus, Gyllström, Martin January 2021 (has links)
The rapid development of sensor technology in smartphones and wearable devices has led research to the area of human activity recognition (HAR). As a phase in HAR, applying classification models to collected sensor data is well researched, and many different models can recognize activities successfully; some methods give good results using only one or two sensors. The use of HAR within pain management is also an existing research field, but applying HAR to the pain treatment strategy of acceptance and commitment therapy (ACT) is not well documented. The relevance of HAR in this context is that ACT's core ideas are based on the perspective that daily life activities are connected to pain. In this thesis, state-of-the-art examples of sensor-based HAR applicable to ACT are provided through a literature review. Based on these findings, the practical use is assessed in order to provide a perspective on the current state of research.
27

Using a Smartphone to Detect the Standing-to-Kneeling and Kneeling-to-Standing Postural Transitions

Setterquist, Dan January 2018 (has links)
In this report we investigate how well a smartphone can be used to detect the standing-to-kneeling and kneeling-to-standing postural transitions. Possible applications include measuring time spent kneeling in groups of workers prone to knee-straining work. Accelerometer and gyroscope data were recorded from a group of 10 volunteers while they performed a set of postural transitions according to an experimental script. The set included the standing-to-kneeling and kneeling-to-standing transitions, in addition to a selection of transitions common in knee-straining occupations. Using recorded video, the data was labeled and segmented into a data set of 3-second sensor data segments in 9 different classes. The classification performance of a number of different LSTM networks was evaluated on the data set. When evaluated in a user-specific setting, the best network achieved an overall classification accuracy of 89.4%, with precision 0.982 and recall 0.917 for the standing-to-kneeling transitions, and precision 0.900 and recall 0.900 for the kneeling-to-standing transitions. When the same network was evaluated in a user-independent setting it achieved an overall accuracy of 66.3%, with precision 0.720 and recall 0.746 for the standing-to-kneeling transitions, and precision 0.707 and recall 0.604 for the kneeling-to-standing transitions. The network was also evaluated using only accelerometer data, achieving performance similar to that obtained with data from both the accelerometer and the gyroscope. The classification speed of the network was evaluated on a smartphone: on a Samsung Galaxy S7, the average time needed to perform one classification was 38.5 milliseconds, so classification can be done in real time.
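The following is a minimal PyTorch sketch of the kind of LSTM classifier described above, mapping 3-second windows of accelerometer and gyroscope data to the 9 transition classes. The sampling rate (assumed 50 Hz, giving 150 samples per window) and layer sizes are illustrative assumptions, not the thesis's actual architecture.

```python
# A minimal LSTM classifier sketch for 3-second, 6-channel sensor segments.
# Window length, hidden size, and the untrained weights are assumptions.
import torch
import torch.nn as nn

class TransitionLSTM(nn.Module):
    def __init__(self, n_channels=6, hidden=64, n_classes=9):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)          # h: (1, batch, hidden)
        return self.head(h[-1])           # logits over the 9 classes

model = TransitionLSTM()
segments = torch.randn(8, 150, 6)         # dummy batch: 150 samples x 6 channels
logits = model(segments)
print(logits.argmax(dim=1))                # predicted class per segment
```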
28

Trust in Human Activity Recognition Deep Learning Models

Simons, Ama January 2021 (has links)
Trust is explored in this thesis through an analysis of the robustness of wearable-device-based artificial intelligence models to changes in data acquisition, specifically changes in wearable device hardware and differences between recording sessions. Three human activity recognition models are used as a vehicle to explore this: Model A, trained on accelerometer signals recorded by a wearable sensor referred to as Astroskin; Model H, trained on accelerometer signals from a wearable sensor referred to as the BioHarness; and Model A Type 1, trained on Astroskin accelerometer signals recorded in the first session of the experimental protocol. On a test set recorded by Astroskin, Model A had a 99.07% accuracy; on a test set recorded by the BioHarness, it had a 65.74% accuracy. On a test set recorded by the BioHarness, Model H had a 95.37% accuracy; on a test set recorded by Astroskin, it had a 29.63% accuracy. Model A Type 1 achieved an average accuracy of 99.57% on data recorded by the same wearable sensor and the same session, 50.95% on a test set recorded by the same wearable sensor but in a different session, 41.31% on data recorded by a different wearable sensor in the same session, and 19.28% on data recorded by a different wearable sensor in a different session. An out-of-domain discriminator for Model A Type 1 was also implemented; it was able to differentiate between the data that trained Model A Type 1 and other data (recorded by different wearable devices or in different sessions) with an accuracy of 97.60%. / Thesis / Master of Applied Science (MASc) / The trustworthiness of artificial intelligence must be explored before society can fully reap its benefits. The element of trust explored in this thesis is the robustness of wearable-device-based artificial intelligence models to changes in data acquisition, specifically changes in the wearable device used to record the input data and input data from different recording sessions. Using human activity recognition models as a vehicle, the results show that performance degradation occurs when the wearable device is changed and when data comes from a different recording session. An out-of-domain discriminator is developed to alert users when performance degradation may occur.
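As a hedged illustration of the idea, the sketch below trains a simple binary discriminator to separate feature vectors from the model's training domain from vectors recorded by other devices or sessions. The synthetic features and the choice of logistic regression are assumptions; the thesis's discriminator is not necessarily built this way.

```python
# A sketch of an out-of-domain discriminator: a binary classifier over
# feature vectors. In-domain vs. out-of-domain data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
in_domain = rng.normal(0.0, 1.0, size=(500, 12))    # e.g. same device, session 1
out_domain = rng.normal(0.6, 1.3, size=(500, 12))   # other device or session

X = np.vstack([in_domain, out_domain])
y = np.array([0] * 500 + [1] * 500)                  # 1 = out of domain
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

disc = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"discriminator accuracy: {disc.score(Xte, yte):.2f}")
# At inference time, inputs flagged as out-of-domain would trigger a warning
# that the activity model's predictions may be degraded.
```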
29

Trajectory Analytics

Santiteerakul, Wasana 05 1900 (has links)
The numerous surveillance videos recorded by a single stationary wide-angle-view camera motivate the use of a moving point as the representation of each small object in a wide video scene. The sequence of positions of each moving point can be used to generate a trajectory containing both the spatial and temporal information of an object's movement. In this study, we investigate how the relationship between two trajectories can be used to recognize multi-agent interactions. For this purpose, we present a simple set of qualitative, atomic, disjoint trajectory-segment relations which can be utilized to represent the relationships between two trajectories. Given a pair of adjacent concurrent trajectories, we segment the trajectory pair to get the ordered sequence of related trajectory segments. Each pair of corresponding trajectory segments is then assigned a token associated with the trajectory-segment relation, which leads to the generation of a string called a pairwise trajectory-segment relationship sequence. From a group of pairwise trajectory-segment relationship sequences, we utilize an unsupervised learning algorithm, specifically k-medians clustering, to detect interesting patterns that can be used to classify low-level multi-agent activities. We evaluate the effectiveness of the proposed approach by comparing the activity classes predicted by our method to the actual classes from a ground-truth set obtained through crowdsourcing. The results show that the relationships between a pair of trajectories can signify low-level multi-agent activities.
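To illustrate the tokenization idea, the sketch below converts a pair of 2D trajectories into a qualitative relation string. The three relation tokens used here (approaching, receding, stable) are illustrative assumptions; the thesis defines its own set of atomic trajectory-segment relations.

```python
# A minimal sketch: turn a pair of trajectories into a relation string.
# The token alphabet {A, R, S} is an assumption for illustration.
import numpy as np

def relation_sequence(traj_a, traj_b, eps=0.05):
    """One token per step: approaching (A), receding (R), or stable (S)."""
    dists = np.linalg.norm(traj_a - traj_b, axis=1)
    tokens = []
    for d0, d1 in zip(dists[:-1], dists[1:]):
        if d1 < d0 - eps:
            tokens.append("A")      # separation shrinking
        elif d1 > d0 + eps:
            tokens.append("R")      # separation growing
        else:
            tokens.append("S")      # roughly constant separation
    return "".join(tokens)

# Hypothetical chase: B closes in on A from above.
t = np.linspace(0, 10, 50)
a = np.column_stack([t, np.zeros_like(t)])      # A moves along the x-axis
b = np.column_stack([t, 5.0 - 0.4 * t])         # B descends toward A
print(relation_sequence(a, b))                  # "AAA..." as B approaches A
```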
30

Eye Movement Analysis for Activity Recognition in Everyday Situations

Gustafsson, Anton January 2018 (has links)
The increasing number of smart devices in our everyday environment has created new problems within human-computer interaction, such as how we humans are supposed to interact with these devices efficiently and with ease. Context-aware systems are a possible candidate for solving this problem: if a system could automatically detect people's activities and intentions, it could act accordingly without any explicit input from the user. Eyes have previously been shown to be a rich source of information about a person's cognitive state and current activity, and could therefore be a viable input modality to extract activity information from. In this thesis, we examine the possibility of detecting human activity by using a low-cost, home-built monocular eye tracker. An experiment was conducted where participants performed everyday activities in a kitchen to collect eye movement data. After the experiment, the data was annotated, preprocessed, and classified using multilayer perceptron and random forest classifiers. Even though the data set collected was small, the results showed a recognition rate of 30-40% depending on the classifier used. This confirms previous work that activity recognition using eye movement data is possible, but also shows that achieving high accuracy remains challenging.
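As a rough illustration of the final classification step, the sketch below trains a random forest on made-up per-window eye movement features (fixation and saccade statistics) for four hypothetical kitchen activities; all feature and label choices are assumptions.

```python
# A minimal sketch: synthetic eye-movement feature vectors -> random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
ACTIVITIES = ["chopping", "stirring", "reading recipe", "washing up"]

# Hypothetical per-window features: mean fixation duration, fixation rate,
# mean saccade amplitude, saccade rate, blink rate.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, 5))
               for i in range(len(ACTIVITIES))])
y = np.repeat(ACTIVITIES, 100)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```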
