21

An approach to activity recognition using multiple sensors

Tran, Tien Dung January 2006
Building smart home environments that automatically or semi-automatically assist and comfort occupants is an important topic in pervasive computing, especially with the arrival of cheap, easy-to-install sensors. This has created a pressing need for human activity recognition from ubiquitous sensors, whose purpose is to observe sensory data and infer what occupants are trying to do. The dominant approach to human activity recognition is probabilistic, so as to handle uncertainty, overlapping human behaviours, and environmental noise. This thesis develops a probabilistic framework for human activity recognition using multiple multi-modal sensors in complex pervasive environments. The model is adapted from the abstract hidden Markov model (AHMM), with one layer used to fuse multiple sensors. A factored state representation is employed to represent the state transitions parsimoniously and reduce the number of required parameters. Exact methods are used to learn the model's parameters and perform inference. To incorporate a large number of sensors, several more parsimonious representations of the state transitions, including mixtures of smaller multinomials and sigmoid functions, are investigated, reducing both the number of parameters and the training time.

We then examine an approximate variational method that significantly reduces training time compared with the exact method. A system of fixed-point equations is derived to iteratively update the free variational parameters. We also present the factored model for the case where all variables are continuous, using conditional Gaussian distributions to model state transitions; the variational method is again employed to speed up training. The developed model is implemented and applied to recognizing daily activities from multiple sensors in our smart home and the Nokia lab. Experimental results show that the model is well suited to fusing multiple sensors for activity recognition, with reasonable recognition performance.
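For readers unfamiliar with probabilistic sensor fusion, the sketch below illustrates the general idea with exact filtering in a plain hidden Markov model whose emission factorizes over several binary sensors. It is a minimal illustration only, not the thesis's AHMM; the state count, probability tables, and observations are invented for the example.

```python
import numpy as np

# Minimal illustration: exact filtering in a discrete HMM whose emission
# factorizes over several binary sensors. All quantities are invented.
rng = np.random.default_rng(0)

n_states, n_sensors, T = 3, 4, 10                 # activities, binary sensors, time steps
pi = np.full(n_states, 1.0 / n_states)            # uniform initial state distribution
A = rng.dirichlet(np.ones(n_states), size=n_states)       # A[i, j] = P(next = j | current = i)
emit = rng.uniform(0.1, 0.9, size=(n_states, n_sensors))  # P(sensor s fires | state k)
obs = rng.integers(0, 2, size=(T, n_sensors))     # fake multi-sensor observations

def forward(obs, pi, A, emit):
    """Exact filtering: returns P(state_t | obs_1..t) for every time step."""
    filtered = []
    prior = pi                                    # predictive distribution at t = 0
    for o in obs:
        # Factored emission: product of per-sensor Bernoulli likelihoods
        lik = np.prod(np.where(o == 1, emit, 1.0 - emit), axis=1)
        post = prior * lik
        post = post / post.sum()                  # normalize to a distribution
        filtered.append(post)
        prior = post @ A                          # one-step prediction for t + 1
    return np.array(filtered)

beliefs = forward(obs, pi, A, emit)
print("most likely activity per step:", beliefs.argmax(axis=1))
```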
22

Learning to Recognize Agent Activities and Intentions

Kerr, Wesley January 2010
Psychological research has demonstrated that subjects shown animations consisting of nothing more than simple geometric shapes perceive the shapes as being alive, having goals and intentions, and even engaging in social activities such as chasing and evading one another. While the subjects could not directly perceive affective state, motor commands, or the beliefs and intentions of the actors in the animations, they still used intentional language to describe the moving shapes. The purpose of this dissertation is to design, develop, and evaluate computational representations and learning algorithms that learn to recognize the behaviors of agents as they perform different activities. These activities take place within simulations, both 2D and 3D. Our goal is to add as little hand-crafted knowledge to the representation as possible and to produce algorithms that perform well over a variety of different activity types. Any patterns found in similar activities should be discovered by the learning algorithm and not by us, the designers. In addition, we demonstrate that if an artificial agent learns about activities through participation, where it has access to its own internal affective state, motor commands, etc., it can then infer the unobservable affective state of other agents.
23

Activity recognition in desktop environments

Shen, Jianqiang. January 1900
Thesis (Ph. D.)--Oregon State University, 2009. / Printout. Includes bibliographical references (leaves 129-138). Also available on the World Wide Web.
24

Automatic extraction of behavioral patterns for elderly mobility and daily routine analysis

Li, Chen 08 June 2018
The elderly living in smart homes can have their daily movement recorded and analyzed. Given that different elders have their own living habits, a methodology that can automatically identify their daily activities and discover their daily routines will be useful for better elderly care and support. In this thesis research, we focus on developing data mining algorithms for the automatic detection of behavioral patterns from the trajectory data of an individual, for activity identification, daily routine discovery, and activity prediction. The key challenges for human activity analysis include the need to consider longer-range dependencies among sensor-triggering events for activity modeling and to capture the spatio-temporal variations of the behavioral patterns exhibited by humans. We propose to represent the trajectory data using a behavior-aware flow graph, a probabilistic finite state automaton whose nodes and edges are attributed with local behavior-aware features. Subflows, which serve as the underlying behavioral patterns for activity identification, can then be extracted from the flow graph using kernel k-means. Given the identified activities, we propose a novel nominal matrix factorization method under a Bayesian framework with Lasso to extract highly interpretable daily routines. To better accommodate variations in activity durations within each daily routine, we further extend the Bayesian framework with a Markov jump process prior to incorporate the shift-invariant property into the model. For empirical evaluation, the proposed methodologies are compared with a number of existing activity identification and daily routine discovery methods on both synthetic and publicly available real smart home data sets, with promising results. In the thesis, we also illustrate how the proposed unsupervised methodology could be used to support exploratory behavior analysis for elderly care.
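As a rough illustration of the subflow-extraction step, the sketch below implements generic kernel k-means on synthetic 2-D points; the thesis applies kernel k-means to subflows of a behaviour-aware flow graph, so the data, RBF kernel, and parameters here are placeholder assumptions, not the author's setup.

```python
import numpy as np

# Minimal illustration of kernel k-means on synthetic 2-D points; the data,
# RBF kernel, and parameters are placeholders, not the thesis's flow-graph setup.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 2))
               for c in ((0.0, 0.0), (3.0, 0.0), (0.0, 3.0))])

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-gamma * sq)

def kernel_kmeans(K, k, n_iter=50, seed=0):
    """Assign points to k clusters using distances computed in kernel space."""
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(0, k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            mask = labels == c
            if not mask.any():
                dist[:, c] = np.inf               # empty cluster: never assign to it
                continue
            # ||phi(x_i) - centroid_c||^2 up to the constant K[i, i] term
            dist[:, c] = -2.0 * K[:, mask].mean(axis=1) + K[np.ix_(mask, mask)].mean()
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

labels = kernel_kmeans(rbf_kernel(X), k=3)
print("cluster sizes:", np.bincount(labels))
```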
25

WiFi-Based Driver Activity Recognition Using CSI Signal

Bai, Yunhao January 2020
No description available.
26

Motivation and Quantification of Physical Activity for Hospitalised Cancer Patients

Thorsteinsdottir, Arnrun January 2015
Previous studies have shown the positive effects of increased physical activity for cancer patients during chemotherapy and stem cell transplantation. Moderate exercise has been shown to result in significantly less loss of muscle mass, fewer symptoms of cancer-related fatigue, less need for platelet transfusions during treatment, and shorter hospitalisation. Inactivity at hospital clinics is nevertheless still a major concern, and lack of motivation appears to play a large role. An overview of activity level, personal goal setting, and education on the importance of physical activity have been shown to motivate increased physical activity. This project aimed to build a prototype that quantifies the physical activity of hospitalised cancer patients and presents it in a motivational and informative way. An accelerometer was used to collect activity data; the data was processed and used to train a support vector machine for classification of activities. The prototype recognises the postures lying down, sitting, and standing, as well as periods when the user is active. Over 90% accuracy was obtained in activity recognition for specific training sets. The prototype was tested on patients at the haematology clinic at the Karolinska hospital in Huddinge. Test subjects rated the classification accuracy and the motivational value of the prototype on a scale of 1 to 5; the accuracy was rated 4.2 out of 5 and the motivational value 3.25 out of 5. A pilot study to further test the feasibility of the product will be performed in the summer of 2015.
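The sketch below shows, under invented assumptions, how a support vector machine might be trained on simple windowed accelerometer features for posture classification in the spirit of the prototype described above; the synthetic data, window length, and feature set are not taken from the thesis.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative sketch only: an SVM posture classifier over windowed tri-axial
# accelerometer features. Data, window length, and features are invented.
rng = np.random.default_rng(2)

def fake_windows(n_windows, gravity_axis):
    """Simulate n_windows of 3-axis accelerometer data, 50 samples per window."""
    acc = rng.normal(0.0, 0.05, size=(n_windows, 50, 3))
    acc[:, :, gravity_axis] += 1.0      # gravity falls on a different axis per class
    return acc

windows = np.vstack([fake_windows(100, axis) for axis in (0, 1, 2)])
labels = np.repeat([0, 1, 2], 100)      # e.g. lying / sitting / standing

# Simple per-window features: mean and standard deviation of each axis
features = np.hstack([windows.mean(axis=1), windows.std(axis=1)])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0, stratify=labels)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```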
27

Automated Recognition of Human Activity : A Practical Perspective of the State of Research

Hansson, Hampus, Gyllström, Martin January 2021
The rapid development of sensor technology in smartphones and wearable devices has driven research in human activity recognition (HAR). Applying classification models to collected sensor data, a core phase of HAR, is well researched, and many different models can recognize activities successfully. Furthermore, some methods achieve good results using only one or two sensors. The use of HAR within pain management is also an existing research field, but applying HAR to the pain treatment strategy of acceptance and commitment therapy (ACT) is not well documented. HAR is relevant in this context because ACT's core ideas are based on the perspective that daily-life activities are connected to pain. In this thesis, state-of-the-art examples of sensor-based HAR applicable to ACT are identified through a literature review. Based on these findings, their practical use is assessed in order to provide a perspective on the current state of research.
28

Using a Smartphone to Detect the Standing-to-Kneeling and Kneeling-to-Standing Postural Transitions / Smartphone-baserad detektion av posturala övergångar mellan stående och knästående ställning

Setterquist, Dan January 2018
In this report we investigate how well a smartphone can be used to detect the standing-to-kneeling and kneeling-to-standing postural transitions. Possible applications include measuring time spent kneeling in groups of workers prone to knee-straining work. Accelerometer and gyroscope data were recorded from a group of 10 volunteers while they performed a set of postural transitions according to an experimental script. The set of transitions included standing-to-kneeling and kneeling-to-standing, in addition to a selection of transitions common in knee-straining occupations. Using recorded video, the data were labeled and segmented into a data set of 3-second sensor data segments in 9 different classes. The classification performance of a number of different LSTM networks was evaluated on this data set. When evaluated in a user-specific setting, the best network achieved an overall classification accuracy of 89.4%. The network achieved precision 0.982 and recall 0.917 for the standing-to-kneeling transitions, and precision 0.900 and recall 0.900 for the kneeling-to-standing transitions. When the same network was evaluated in a user-independent setting, it achieved an overall accuracy of 66.3%, with precision 0.720 and recall 0.746 for the standing-to-kneeling transitions, and precision 0.707 and recall 0.604 for the kneeling-to-standing transitions. The network was also evaluated in a setting where only accelerometer data was used; performance was similar to that achieved using data from both the accelerometer and the gyroscope. The classification speed of the network was evaluated on a smartphone. On a Samsung Galaxy S7, the average time needed to perform one classification was 38.5 milliseconds, so classification can be done in real time.
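As a minimal sketch of the kind of LSTM classifier described above, the snippet below defines a network that maps 3-second sensor windows to 9 transition classes; the sample rate, channel count, hidden size, and training step are assumptions for illustration, not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: an LSTM classifier for fixed-length windows of
# accelerometer + gyroscope data. Shapes and hyperparameters are assumptions.
SAMPLE_RATE = 50                     # assumed Hz
SEQ_LEN = 3 * SAMPLE_RATE            # 3-second windows
N_CHANNELS = 6                       # 3-axis accelerometer + 3-axis gyroscope
N_CLASSES = 9

class TransitionLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=N_CHANNELS, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_CLASSES)

    def forward(self, x):            # x: (batch, SEQ_LEN, N_CHANNELS)
        _, (h_n, _) = self.lstm(x)   # final hidden state summarizes the window
        return self.head(h_n[-1])    # class logits

model = TransitionLSTM()
dummy = torch.randn(8, SEQ_LEN, N_CHANNELS)    # a fake batch of sensor windows
logits = model(dummy)
print(logits.shape)                            # torch.Size([8, 9])

# A single toy training step with cross-entropy loss on random labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(logits, torch.randint(0, N_CLASSES, (8,)))
loss.backward()
optimizer.step()
```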
29

Trust in Human Activity Recognition Deep Learning Models

Simons, Ama January 2021
Trust is explored in this thesis through an analysis of the robustness of wearable-device-based artificial intelligence models to changes in data acquisition, specifically changes in wearable device hardware and in recording session. Three human activity recognition models are used as a vehicle to explore this: Model A, trained on accelerometer signals recorded by a wearable sensor referred to as Astroskin; Model H, trained on accelerometer signals from a wearable sensor referred to as the BioHarness; and Model A Type 1, trained on Astroskin accelerometer signals recorded during the first session of the experimental protocol. On a test set recorded by Astroskin, Model A achieved 99.07% accuracy; on a test set recorded by the BioHarness, however, its accuracy dropped to 65.74%. On a test set recorded by the BioHarness, Model H achieved 95.37% accuracy; on a test set recorded by Astroskin, it achieved only 29.63%. Model A Type 1 achieved an average accuracy of 99.57% on data recorded by the same wearable sensor in the same session, 50.95% on a test set recorded by the same wearable sensor but in a different session, 41.31% on data recorded by a different wearable sensor in the same session, and 19.28% on data recorded by a different wearable sensor in a different session. An out-of-domain discriminator for Model A Type 1 was also implemented; it differentiated between the data used to train Model A Type 1 and other data (recorded by different wearable devices or in different sessions) with an accuracy of 97.60%. / Thesis / Master of Applied Science (MASc) / The trustworthiness of artificial intelligence must be explored before society can fully reap its benefits. The element of trust explored in this thesis is the robustness of wearable-device-based artificial intelligence models to changes in data acquisition. The specific changes examined are changes in the wearable device used to record the input data and input data from different recording sessions. Using human activity recognition models as a vehicle, the results show that performance degrades when the wearable device is changed and when data comes from a different recording session. An out-of-domain discriminator is developed to alert users when such performance degradation may occur.
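A minimal sketch of the out-of-domain discriminator idea is shown below: a binary classifier trained to separate in-domain sensor windows from windows recorded under a shifted device or session. The synthetic data, per-window features, and logistic-regression choice are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative sketch only: a binary discriminator separating "in-domain"
# sensor windows from "out-of-domain" ones. All data and features are invented.
rng = np.random.default_rng(3)

def windows(n, offset, scale):
    """Simulate n accelerometer windows; offset/scale mimic a device/session shift."""
    w = rng.normal(offset, scale, size=(n, 100, 3))
    return np.hstack([w.mean(axis=1), w.std(axis=1)])   # per-window mean and std features

in_domain = windows(300, offset=0.0, scale=1.0)    # training device, training session
out_domain = windows(300, offset=0.4, scale=1.3)   # different device or session

X = np.vstack([in_domain, out_domain])
y = np.concatenate([np.zeros(300), np.ones(300)])  # 1 = out of domain

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
disc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("discriminator accuracy:", disc.score(X_te, y_te))

# At deployment time, windows the discriminator flags as out of domain would be
# treated with caution before trusting the activity model's prediction on them.
```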
30

Classifying Pairwise Object Interactions: A Trajectory Analytics Approach

Janmohammadi, Siamak 05 1900
Widely deployed surveillance cameras produce huge amounts of video data, and the technology for recording the motion of moving objects as trajectory data continues to grow. With the proliferation of location-enabled devices, ongoing growth in smartphone penetration, and advances in image processing techniques, tracking moving objects has become increasingly reliable. In this work, we explore domain-independent qualitative and quantitative features of raw trajectory (spatio-temporal) data in videos captured by a single fixed wide-angle camera in outdoor areas. We study the efficacy of these features in classifying four basic high-level actions using two supervised learning algorithms and show how each feature affects the learning algorithms' overall accuracy, both individually and in combination with others.
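The sketch below illustrates, under invented assumptions, a few pairwise trajectory features of the domain-independent kind the abstract alludes to (relative distance, its rate of change, relative heading); the trajectories are synthetic and the feature set is not taken from the thesis.

```python
import numpy as np

# Illustrative sketch only: simple per-frame features for a pair of object
# trajectories. Trajectories and feature choices are invented for the example.
rng = np.random.default_rng(4)
T = 50
traj_a = np.cumsum(rng.normal(0.2, 0.1, size=(T, 2)), axis=0)   # (x, y) per frame
traj_b = np.cumsum(rng.normal(0.1, 0.1, size=(T, 2)), axis=0)

def pairwise_features(a, b):
    """Per-frame features for two trajectories of equal length."""
    dist = np.linalg.norm(a - b, axis=1)              # relative distance
    d_dist = np.gradient(dist)                        # closing / separating rate
    vel_a, vel_b = np.gradient(a, axis=0), np.gradient(b, axis=0)
    cos_heading = np.sum(vel_a * vel_b, axis=1) / (
        np.linalg.norm(vel_a, axis=1) * np.linalg.norm(vel_b, axis=1) + 1e-9)
    return np.column_stack([dist, d_dist, cos_heading])

feats = pairwise_features(traj_a, traj_b)
print(feats.shape)   # (50, 3); summary statistics of these rows could then
                     # feed a supervised classifier of the interaction type
```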
