  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Utilizing Convolutional Neural Networks for Specialized Activity Recognition: Classifying Lower Back Pain Risk Prediction During Manual Lifting

Snyder, Kristian 05 October 2020 (has links)
No description available.
52

Study of Semi-supervised Deep Learning Methods on Human Activity Recognition Tasks

Song, Shiping January 2019 (has links)
This project focuses on semi-supervised human activity recognition (HAR) tasks, in which the inputs are partly labeled time-series data acquired from sensors such as accelerometers, and the outputs are predefined human activities. Most state-of-the-art work in the HAR area is supervised and relies on fully labeled datasets. Since the cost of labeling instances grows quickly with the scale of the data, semi-supervised methods are increasingly in demand. This report proposes two semi-supervised methods and investigates how well they perform on a partly labeled dataset compared with the state-of-the-art supervised method. The first is designed around the state-of-the-art supervised method DeepConvLSTM combined with the semi-supervised learning concept of self-training. The second adapts a semi-supervised deep learning method, an LSTM initialized by a seq2seq autoencoder, which was first introduced for natural language processing. In experiments on a published dataset (the Opportunity Activity Recognition dataset), both semi-supervised methods outperform the state-of-the-art supervised method.
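The self-training concept used in the first method above can be sketched in a few lines. This is a hypothetical, minimal illustration of the idea only — the thesis uses DeepConvLSTM as the base model, whereas here a toy nearest-centroid classifier stands in for it, and all data and parameters are invented:

```python
# Minimal self-training sketch: train a base classifier on the labeled pool,
# then repeatedly pseudo-label the unlabeled samples it is most confident
# about and retrain on the enlarged pool.
import math

def centroid_fit(X, y):
    """Fit a toy nearest-centroid classifier: mean feature vector per class."""
    cents = {}
    for xi, yi in zip(X, y):
        cents.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(pts) for col in zip(*pts)] for c, pts in cents.items()}

def centroid_predict(cents, x):
    """Return (label, confidence), confidence = negative distance to nearest centroid."""
    conf, lab = max((-math.dist(x, c), lab) for lab, c in cents.items())
    return lab, conf

def self_train(X_lab, y_lab, X_unlab, rounds=3, per_round=2):
    """Each round, pseudo-label the most confident unlabeled points and retrain."""
    X, y, pool = list(X_lab), list(y_lab), list(X_unlab)
    for _ in range(rounds):
        if not pool:
            break
        model = centroid_fit(X, y)
        preds = [(centroid_predict(model, x), x) for x in pool]
        preds.sort(key=lambda p: p[0][1], reverse=True)  # most confident first
        for (lab, _), x in preds[:per_round]:
            X.append(x)
            y.append(lab)
        pool = [x for (_, x) in preds[per_round:]]
    return centroid_fit(X, y)
```

In the thesis's setting the base model would be retrained on windows of sensor data rather than these toy 2-D points.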
53

Efficient Wearable Big Data Harnessing and Mining with Deep Intelligence

Elijah J Basile (13161057) 27 July 2022 (has links)
Wearable devices, now ubiquitous across many areas of health, provide key insights into patient and individual status by capturing big data from sensors placed at key parts of the body. While small and low-cost, they are limited by their computational and battery capacity. One key use of wearables has been individual activity capture. For accelerometer and gyroscope data, oscillatory patterns distinguish the daily activities a user may perform. Leveraging spatial and temporal learning via CNN and LSTM layers to capture both the intra- and inter-oscillatory patterns that appear during these activities, we deployed data sparsification via autoencoders to extract the key topological properties from the data and transmit the compressed data over BLE to a central device for later decoding and analysis. Several autoencoder designs were developed to determine principles of system design, comparing encoding overhead on the sensor device with signal reconstruction accuracy. By leveraging an asymmetric autoencoder design, we were able to offload much of the computational and power cost of signal reconstruction from the wearable to the central device, while still providing robust reconstruction accuracy at several compression efficiencies. Using a high-precision Bluetooth voltmeter, the integrated sparsified data transmission configuration was tested for all quantization and compression efficiencies, showing lower power consumption than the setup without data sparsification for all autoencoder configurations.

Human activity recognition (HAR) is a key facet of lifestyle and health monitoring. Effective HAR classification mechanisms and tools can give healthcare professionals, patients, and individuals key insights into activity levels and behaviors without intrusive human or camera observation. We leverage both spatial and temporal learning mechanisms via integrated CNN and LSTM architectures to derive an optimal classification architecture that provides robust classification performance for raw activity inputs, and determine that an LSTMCNN utilizing a stacked bidirectional LSTM layer provides superior classification performance to the CNNLSTM (also utilizing a stacked bidirectional LSTM) at all input widths. All inertial data classification frameworks are based on sensor data drawn from wearable devices placed at key sections of the body. Because wearable devices lack computational and battery power, data compression techniques that limit the quantity of transmitted data and reduce on-board power consumption have been employed. While this compression methodology has been shown to reduce overall device power consumption, it comes at the cost of some information loss in the reconstructed signals. By employing an asymmetric autoencoder design and training the LSTMCNN classifier on the reconstructed inputs, we minimized the classification performance degradation due to the wearable signal reconstruction error. The classifier was further trained on the autoencoder output for several input widths and with quantized and unquantized models. Accuracy for the classifier trained on reconstructed data ranged between 93.0% and 86.5%, depending on input width and autoencoder quantization, showing the promising potential of deep learning with wearable sparsification.
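The asymmetric-codec idea above — a cheap encoder on the wearable, with reconstruction offloaded to the central device — can be sketched with a linear codec. This is an illustrative assumption, not the thesis's learned autoencoder: here a PCA projection stands in for the encoder, and all shapes and names are invented:

```python
# Asymmetric compression sketch: the wearable runs only one small matrix
# multiply (the encoder); the central device holds the decoder and does
# the reconstruction work.
import numpy as np

def fit_linear_codec(X, k):
    """Fit a rank-k linear encoder/decoder pair from training windows X (n, d)."""
    mu = X.mean(axis=0)
    # Top-k right singular vectors span the best rank-k subspace of the data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k]                      # (k, d) encoder matrix, shipped to the wearable
    return mu, W

def encode(x, mu, W):
    """On-device step: one (k, d) matvec -- cheap enough for a wearable MCU."""
    return W @ (x - mu)             # (k,) compressed code sent over BLE

def decode(z, mu, W):
    """Central-device step: reconstruction happens off the wearable."""
    return mu + W.T @ z             # (d,) reconstructed sensor window
```

A compression efficiency of k/d is achieved per window; a learned, deeper decoder on the central device (as in the thesis) could reconstruct signals that do not lie in a linear subspace.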
54

SmartWall: Novel RFID-enabled Ambient Human Activity Recognition using Machine Learning for Unobtrusive Health Monitoring

Oguntala, George A., Abd-Alhameed, Raed, Noras, James M., Hu, Yim Fun, Nnabuike, Eya N., Ali, N., Elfergani, Issa T., Rodriguez, Jonathan 05 1900 (has links)
Human activity recognition from sensor readings has proved to be an effective approach in pervasive computing for smart healthcare. Recent approaches to ambient assisted living (AAL) within a home or community setting offer people the prospect of more individually focused care and improved quality of living. However, most available AAL systems are limited by computational cost. In this paper, a simple, novel non-wearable human activity classification framework using the multivariate Gaussian is proposed. The classification framework augments prior information from passive RFID tags to obtain more detailed activity profiling. The proposed algorithm, based on the multivariate Gaussian via maximum likelihood estimation, is used to learn the features of the human activity model. Twelve sequential and concurrent experimental evaluations are conducted in a mock apartment environment. The sampled activities are predicted using a new dataset of the same activities, and high prediction accuracy is established. The proposed framework suits single- and multi-dwelling environments well and offers a pervasive sensing environment for both patients and carers. / Funded by the Tertiary Education Trust Fund of the Federal Government of Nigeria and by the European Union's Horizon 2020 research and innovation programme under Grant Agreement H2020-MSCA-ITN-2016 SECRET-722424
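The multivariate-Gaussian-via-maximum-likelihood approach named above is standard enough to sketch. This is a generic illustration, not the paper's RFID feature pipeline; the feature vectors and class names are invented:

```python
# Per-activity Gaussian classifier: fit one multivariate Gaussian per class
# by maximum likelihood, then classify a sample by highest log-likelihood.
import numpy as np

def fit_gaussian(X):
    """MLE for a multivariate Gaussian: sample mean and (biased, 1/n) covariance."""
    mu = X.mean(axis=0)
    C = (X - mu).T @ (X - mu) / len(X)          # MLE uses 1/n, not 1/(n-1)
    return mu, C

def log_likelihood(x, mu, C):
    d = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ np.linalg.solve(C, diff))

def classify(x, models):
    """Pick the activity whose fitted Gaussian assigns x the highest likelihood."""
    return max(models, key=lambda a: log_likelihood(x, *models[a]))
```

In the paper's setting the features would come from RFID tag readings rather than the synthetic clusters used here.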
55

What, When, and Where Exactly? Human Activity Detection in Untrimmed Videos Using Deep Learning

Rahman, Md Atiqur 06 December 2023 (has links)
Over the past decade, there has been an explosion in the volume of video data, including internet videos and surveillance camera footage. These videos often feature extended durations with unedited content, predominantly filled with background clutter, while the relevant activities of interest occupy only a small portion of the footage. Consequently, there is a compelling need for advanced processing techniques to automatically analyze this vast reservoir of video data, specifically with the goal of identifying the segments that contain the events of interest. Given that humans are the primary subjects in these videos, comprehending human activities plays a pivotal role in automated video analysis. This thesis seeks to tackle the challenge of detecting human activities from untrimmed videos, aiming to classify and pinpoint these activities both in their spatial and temporal dimensions. To achieve this, we propose a modular approach. We begin by developing a temporal activity detection framework, and then progressively extend the framework to support activity detection in the spatio-temporal dimension. To perform temporal activity detection, we introduce an end-to-end trainable deep learning model leveraging 3D convolutions. Additionally, we propose a novel and adaptable fusion strategy to combine both the appearance and motion information extracted from a video, using RGB and optical flow frames. Importantly, we incorporate the learning of this fusion strategy into the activity detection framework. Building upon the temporal activity detection framework, we extend it by incorporating a spatial localization module to enable activity detection both in space and time in a holistic end-to-end manner. To accomplish this, we leverage shared spatio-temporal feature maps to jointly optimize both spatial and temporal localization of activities, thus making the entire pipeline more effective and efficient. 
Finally, we introduce several novel techniques for modeling actor motion, specifically designed for efficient activity recognition. This is achieved by harnessing 2D pose information extracted from video frames and then representing human motion through bone movement, bone orientation, and body joint positions. Our experimental evaluations, conducted on benchmark datasets, showcase the effectiveness of the proposed temporal and spatio-temporal activity detection methods compared to the current state-of-the-art methods. Moreover, the proposed motion representations excel in both performance and computational efficiency. Ultimately, this research paves the way toward imbuing computers with social visual intelligence, enabling them to comprehend human activities at any given time and place, opening up exciting possibilities for the future.
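The learnable appearance/motion fusion mentioned above can be illustrated with a convex combination whose weights are learned jointly with the detector. This is a hedged sketch of the general idea only — the thesis's actual fusion strategy, shapes, and names are not reproduced here:

```python
# Learned two-stream fusion sketch: softmax-normalized logits weight the RGB
# (appearance) and optical-flow (motion) feature streams; because the weights
# are parameters, gradients from the detection loss can tune the balance.
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def fuse(rgb_feat, flow_feat, logits):
    """Convex combination of the two stream features with learned logits."""
    w = softmax(logits)             # w[0] + w[1] == 1, both positive
    return w[0] * rgb_feat + w[1] * flow_feat
```

With equal logits the fusion reduces to a plain average of the two streams; training would move the logits toward whichever stream is more informative.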
56

Step Counter and Activity Recognition Using Smartphone IMUs

Israelsson, Anton, Strandell, Max January 2022 (has links)
Fitness tracking is a rapidly growing market as more people seek to take better control of their lives, and the growing availability of smartphones with sensitive sensors makes it possible for anyone to take part. This project aims to implement a step counter and to create a model for Human Activity Recognition (HAR) that classifies activities such as walking, running, cycling, ascending and descending stairs, and standing still, using sensor data from handheld devices. The step counter is implemented by processing acceleration data to find and validate steps. HAR is implemented using three machine learning algorithms on processed sensor data: Random Forest (RF), Support Vector Machine (SVM), and Artificial Neural Network (ANN). The step counter achieved 99.48% accuracy. The HAR models achieved 99.7%, 99.6%, and 99.5% accuracy with RF, ANN, and SVM, respectively. / Bachelor's thesis in electrical engineering 2022, KTH, Stockholm
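The "find and validate steps" stage of a step counter can be sketched as peak detection on the acceleration magnitude. This is a minimal illustration in the spirit of the abstract; the threshold and minimum-gap values are assumptions, not the report's tuned parameters:

```python
# Step-counter sketch: compute the acceleration magnitude per sample, then
# count local maxima that clear a threshold and are separated by a minimum
# number of samples (to reject double-counting within one stride).
import math

def count_steps(samples, threshold=11.0, min_gap=3):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    steps, last_peak = 0, -min_gap
    for i in range(1, len(mags) - 1):
        is_peak = mags[i] > mags[i - 1] and mags[i] >= mags[i + 1]
        if is_peak and mags[i] > threshold and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps
```

At rest the magnitude sits near gravity (about 9.8 m/s^2), so a threshold above that separates step impacts from standing still.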
57

A Methodology for Extracting Human Bodies from Still Images

Tsitsoulis, Athanasios January 2013 (has links)
No description available.
58

From robotics to healthcare: toward clinically-relevant 3-D human pose tracking for lower limb mobility assessments

Mitjans i Coma, Marc 11 September 2024 (has links)
With an increase in age comes an increase in the risk of frailty and mobility decline, which can lead to dangerous falls and can even be a cause of mortality. Despite these serious consequences, healthcare systems remain reactive, highlighting the need for technologies to predict functional mobility decline. In this thesis, we present an end-to-end autonomous functional mobility assessment system that seeks to bridge the gap between robotics research and clinical rehabilitation practices. Unlike many fully integrated black-box models, our approach emphasizes the need for a system that is both reliable and transparent, to facilitate its endorsement and adoption by healthcare professionals and patients. Our proposed system is characterized by the sensor fusion of multimodal data using an optimization framework known as factor graphs. This method, widely used in robotics, enables us to obtain visually interpretable 3-D estimations of the human body in recorded footage. These representations are then used to implement autonomous versions of the standardized assessments employed by physical therapists for measuring lower-limb mobility, using a combination of custom neural networks and explainable models. To improve the accuracy of the estimations, we investigate the application of the Koopman operator framework to learn linear representations of human dynamics, and we leverage these outputs as prior information to enhance the temporal consistency across entire movement sequences. Furthermore, inspired by the inherent stability of natural human movement, we propose ways to impose stability constraints on the dynamics during the training of linear Koopman models. In this light, we propose a sufficient condition for the stability of discrete-time linear systems that can be represented as a set of convex constraints. Additionally, we demonstrate how it can be seamlessly integrated into larger-scale gradient descent optimization methods.
Lastly, we report the performance of our human pose detection and autonomous mobility assessment systems by evaluating them on outcome mobility datasets collected from controlled laboratory settings and unconstrained real-life home environments. While we acknowledge that further research is still needed, the study results indicate that the system can demonstrate promising performance in assessing mobility in home environments. These findings underscore the significant potential of this and similar technologies to revolutionize physical therapy practices.
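The stability notion underlying the constrained Koopman training above can be stated concretely. As a hedged illustration of the standard criterion only (not the thesis's convex sufficient condition): a discrete-time linear system x_{k+1} = A x_k is asymptotically stable exactly when every eigenvalue of A lies strictly inside the unit circle:

```python
# Schur (discrete-time) stability check: the spectral radius of the state
# matrix A must be strictly less than 1 for trajectories to decay.
import numpy as np

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(A)))

def is_schur_stable(A, tol=1e-9):
    """True if all eigenvalues of A lie strictly inside the unit circle."""
    return spectral_radius(A) < 1.0 - tol
```

The spectral radius itself is not convex in the entries of A, which is why a convex sufficient condition, as the thesis proposes, is valuable inside gradient-based training.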
59

Deep Learning Informed Assistive Technologies for Biomedical and Human Activity Applications

Bayat, Nasrin 01 January 2024 (has links) (PDF)
This dissertation presents a comprehensive exploration and implementation of attention mechanisms and transformers in several healthcare-related and assistive applications. The overarching goal is to demonstrate successful implementation of state-of-the-art approaches and to provide validated models whose superior performance can inform future research and development. In Chapter 1, attention mechanisms are harnessed for the fine-grained classification of white blood cells (WBCs), showcasing their efficacy in medical diagnostics. The proposed multi-attention framework ensures accurate WBC subtype classification by capturing discriminative features from various layers, leading to superior performance compared to existing approaches used in previous work. More importantly, the attention-based method showed consistently better results than the method without attention in all three backbone architectures tested (ResNet, XceptionNet, and EfficientNet). Chapter 2 introduces a self-supervised framework leveraging vision transformers for object detection and semantic segmentation, together with custom algorithms for collision prediction, in application to assistive technology for the visually impaired. In addition, a multimodal sensory feedback system was designed and fabricated to convey environmental information and potential collisions to the user for real-time navigation and grasping assistance. Chapter 3 presents a transformer-based method for operation-relevant human activity recognition (HAR) and demonstrates its performance advantage over another deep learning model, long short-term memory (LSTM). In addition, feature engineering (principal component analysis) was used to extract the most discriminatory and representative motion features from the instrumented sensors, indicating that joint angle features are more important than body segment orientations.
Further, a minimal number and placement of wearable sensors were identified for use in real-world data collection and activity recognition, addressing a critical gap in the field and enhancing the practicality and utility of wearable sensors for HAR. The premise and efficacy of attention-based mechanisms and transformers were confirmed through their demonstrated classification accuracy compared to LSTM. These research outcomes from three distinct applications, together with the demonstrated performance over existing models and methods, support the utility and applicability of attention-based mechanisms and transformers across various biomedical and human activity research fields. Sharing the custom-designed model architectures, implementation methods, and resulting classification performance allows direct adoption and implementation of the developed methods, with direct impact on the related fields.
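The attention mechanisms running through all three chapters build on one primitive, scaled dot-product attention. As a hedged refresher on that standard building block (not the dissertation's specific multi-attention framework), it can be written directly:

```python
# Scaled dot-product attention: each query attends to all keys; the softmax
# of the scaled similarities weights the corresponding values.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n, d), K: (m, d), V: (m, dv) -> (n, dv) attention output."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V
```

When a query aligns strongly with a single key, the output collapses to that key's value, which is what lets attention pick out discriminative features.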
60

Non-Bayesian Out-of-Distribution Detection Applied to CNN Architectures for Human Activity Recognition

Socolovschi, Serghei January 2022 (has links)
The Human Activity Recognition (HAR) field studies the application of artificial intelligence methods to identify activities performed by people. Many applications of HAR in healthcare and sports require safety-critical performance from the predictive models. The predictions these models produce should be not only correct but also trustworthy. In recent years, however, modern neural networks have been shown to produce predictions that are sometimes wrong yet overconfident when processing unusual inputs. This issue puts prediction credibility at risk and calls for solutions that help estimate the uncertainty of a model's predictions. In this work, we began investigating the applicability of non-Bayesian uncertainty estimation methods to deep learning classification models in HAR. We trained a Convolutional Neural Network (CNN) model on public datasets, such as UCI HAR and WISDM, which collect sensor-based time-series data about activities of daily life. Through a series of four experiments, we evaluated the performance of two non-Bayesian uncertainty estimation methods, ODIN and Deep Ensemble, on out-of-distribution detection. We found that the ODIN method is able to separate out-of-distribution samples from the in-distribution data. However, we also observed unexpected behavior when the out-of-distribution data contained exclusively dynamic activities. The Deep Ensemble method did not provide satisfactory results for our research question.
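The temperature-scaling half of ODIN is simple enough to sketch. This is a hedged illustration only: full ODIN also adds a small input perturbation (omitted here), and the temperature and threshold below are illustrative values, not the thesis's tuned ones:

```python
# ODIN-style scoring sketch: divide logits by a large temperature T before the
# softmax, then flag inputs whose maximum softmax probability falls below a
# threshold as out-of-distribution (OOD).
import numpy as np

def max_softmax_score(logits, T=1000.0):
    z = np.asarray(logits) / T
    z -= z.max()                    # numerical stability
    p = np.exp(z)
    return (p / p.sum()).max()

def is_out_of_distribution(logits, T=1000.0, threshold=0.34):
    """Flag the input as OOD when the temperature-scaled confidence is low."""
    return max_softmax_score(logits, T) < threshold
```

With a large T the scores of in- and out-of-distribution inputs both move toward uniform, but they separate enough near 1/num_classes for a threshold to discriminate.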
