1

Design and Validation of a Wearable SmartSole for Continuous Detection of Abnormal Gait

Wucherer, Karoline M, 01 June 2023
Residual gait abnormalities are common following lower-limb injury and/or stroke and can have several negative impacts on an individual’s life. Without continuous treatment and follow-up, individuals can be prone to chronic pain, as abnormal gait may lead to non-physiological loading of the musculoskeletal system. The current industry gold standard for diagnosing abnormal gait requires specialty equipment that is generally only available at designated gait facilities. Because of the inaccessibility and high cost of these facilities, a wearable SmartSole device for continuously detecting abnormal gait was proposed. A previous iteration of the SmartSole was unable to properly detect abnormal gait and also experienced fracturing throughout its 3D-printed body. In the present study, sensor placement and material selection were reconsidered to address these limitations. The objective was to determine whether a redesigned SmartSole could identify events of abnormal gait through validation and verification testing against industry-standard force plates. In total, 14 participants were selected for gait studies: 7 with pronounced gait abnormalities (e.g., limps) and 7 with physiological gait. Parameters of interest included stance time, gait cycle time, and the ratio of the force magnitudes recorded during heel strike and toe off. Results indicated that the SmartSole was effective at determining overall event timings within the gait cycle, as both stance and cycle time had strong, positive correlations with the force plates (left stance: r = 0.761, right stance: r = 0.560, left cycle: r = 0.688), with the exception of right-foot cycle time. The sole was not effective at measuring the actual force magnitudes during gait, as these showed only weak correlations with the force plates. Furthermore, when comparing the parameters of interest between the injured and non-injured sides for participants with gait abnormalities, neither the SmartSole nor the force plates detected significant differences. The inability of the sole to accurately collect force magnitudes or to detect abnormal gait leads to the conclusion that additional sensors may need to be implemented. Future iterations may consider placing additional sensors to provide a fuller picture, as well as including other types of sensors, for improved continuous tracking of gait abnormalities.
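As a rough illustration of the kind of post-processing the abstract describes, the sketch below (not the thesis code) extracts a stance time and a heel-strike/toe-off peak-force ratio from a sampled vertical force signal. The 20 N contact threshold, the 100 Hz sampling rate, the assumption that the recording begins in swing phase, and the mid-stance split used to separate the two force peaks are all illustrative assumptions.

```python
# Minimal sketch (not the thesis implementation) of extracting stance time and
# the heel-strike / toe-off peak-force ratio from a vertical force signal.
import numpy as np

def gait_parameters(force, fs=100.0, contact_threshold=20.0):
    """force: 1-D array of vertical force samples (N) sampled at fs Hz.
    Returns (stance_time_s, heel_to_toe_peak_ratio) for the first stance phase,
    or None if no complete stance phase is found."""
    in_contact = force > contact_threshold
    # Indices where the foot-contact state switches (swing -> stance -> swing).
    edges = np.flatnonzero(np.diff(in_contact.astype(int)))
    if len(edges) < 2:
        return None
    start, end = edges[0] + 1, edges[1] + 1      # first contact interval
    stance = force[start:end]
    if len(stance) < 4:                          # too short to split into halves
        return None
    stance_time = (end - start) / fs
    mid = len(stance) // 2
    heel_peak = stance[:mid].max()               # early-stance (heel-strike) peak
    toe_peak = stance[mid:].max()                # late-stance (toe-off) peak
    return stance_time, heel_peak / toe_peak
```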
2

Reconnaissance des actions humaines à partir d'une séquence vidéo (Human action recognition from a video sequence)

Touati, Redha, 12 1900
This master's thesis presents a new system for recognizing human actions from a video sequence. The system takes as input a video sequence captured by a static camera. A binary segmentation of the video sequence is first performed by a learning algorithm in order to detect and extract the people from the background. To recognize an action, the system then exploits a set of prototypes generated by an MDS-based dimensionality reduction technique from two different viewpoints on the video sequence. This dimensionality reduction, applied for each of the two viewpoints, models each human action of the training set with a set of prototypes (assumed to be similar within each class) represented in a low-dimensional non-linear space. The prototypes extracted for the two viewpoints are fed to a K-NN classifier, which identifies the human action taking place in the video sequence. Experiments on the Weizmann human action dataset yield interesting results compared with other, often more complicated, state-of-the-art methods. They show the sensitivity of the model to each viewpoint and its effectiveness at recognizing the different actions, with a variable but satisfactory recognition rate for each viewpoint taken alone, and a high recognition rate when the two viewpoints are fused.
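To make the pipeline described in the abstract concrete, here is a hedged sketch of one possible implementation, not the thesis code: background-subtracted silhouette frames are embedded with scikit-learn's MDS, and the resulting per-frame prototypes are classified with K-NN, with a majority vote over the test sequence. Embedding the training and test frames jointly (since MDS has no out-of-sample transform), the use of raw flattened silhouettes as features, and the hyperparameters are all assumptions of this sketch.

```python
# Hedged sketch (not the thesis implementation) of MDS prototypes + K-NN
# classification for silhouette-based action recognition.
import numpy as np
from sklearn.manifold import MDS
from sklearn.neighbors import KNeighborsClassifier

def recognize_action(train_frames, train_labels, test_frames, n_components=3, k=5):
    """train_frames: (n_train, h*w) flattened binary silhouettes, one label per frame.
    test_frames: (n_test, h*w) frames of one unlabelled sequence.
    Returns the predicted action label for the test sequence."""
    # Embed all frames together so train and test prototypes share one space.
    all_frames = np.vstack([train_frames, test_frames]).astype(float)
    prototypes = MDS(n_components=n_components, random_state=0).fit_transform(all_frames)
    train_proto = prototypes[:len(train_frames)]
    test_proto = prototypes[len(train_frames):]

    # Per-frame K-NN predictions, then a majority vote over the sequence.
    clf = KNeighborsClassifier(n_neighbors=k).fit(train_proto, train_labels)
    votes = clf.predict(test_proto)
    values, counts = np.unique(votes, return_counts=True)
    return values[counts.argmax()]
```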
