
Wearable brain computer interfaces with near infrared spectroscopy

Ortega, Antonio 17 January 2023 (has links)
Brain computer interfaces (BCIs) are devices capable of relaying information directly from the brain to a digital device. BCIs have been proposed for a diverse range of clinical and commercial applications; for example, to allow paralyzed subjects to communicate, or to improve human-machine interactions. At their core, BCIs need to predict the current state of the brain from variables measuring functional physiology. Functional near infrared spectroscopy (fNIRS) is a non-invasive optical technology able to measure hemodynamic changes in the brain. fNIRS and electroencephalography (EEG) are the only techniques that allow non-invasive and portable sensing of brain signals. Portability and wearability are highly desirable characteristics for BCIs, as they allow BCIs to be used in contexts beyond the laboratory, extending their usability for clinical and commercial applications, as well as for ecologically valid research. Unfortunately, due to limited access to the brain, non-invasive BCIs tend to suffer from low accuracy in their estimation of the brain state. It has been suggested that feedback could increase BCI accuracy, as the brain normally relies on sensory feedback to adjust its strategies. Despite this, presenting relevant and accurate feedback in a timely manner can be challenging when processing fNIRS signals, as they tend to be contaminated by physiological and motion artifacts. In this dissertation, I present the hardware and software solutions we proposed and developed to deal with these challenges. First, I describe ninjaNIRS, the wearable open-source fNIRS device we developed in our laboratory, which could help fNIRS neuroscience and BCIs become more accessible. Next, I present an adaptive filter strategy to recover neural responses from fNIRS signals in real time, which could be used for feedback and classification in a BCI paradigm.
We showed that our wearable fNIRS device can operate autonomously for up to three hours and can be easily carried in a backpack, while offering noise equivalent power comparable to commercial devices. Our adaptive multimodal Kalman filter strategy provided a six-fold increase in the contrast-to-noise ratio of the brain signals compared to standard filtering, while being able to process at least 24 channels at 400 samples per second on a standard computer. This filtering strategy, combined with visual feedback during a left-vs-right motor imagery task, yielded a 37.5% relative increase in accuracy compared to not using feedback. With this, we show that it is possible to present relevant feedback for fNIRS BCIs in real time. The findings of this dissertation may help improve the design of future fNIRS BCIs, and thus increase the usability and reliability of this technology.
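The adaptive Kalman filtering idea described above can be illustrated with a minimal sketch (this is not the dissertation's actual multimodal filter): a scalar random-walk Kalman filter denoising a slow, hemodynamic-like oscillation, with the contrast-to-noise ratio (CNR) compared before and after. The signal shape, noise level, and filter parameters here are illustrative assumptions.

```python
import numpy as np

def kalman_smooth(y, q=1e-4, r=1.0):
    """Scalar random-walk Kalman filter: state x_t = x_{t-1} + w, obs y_t = x_t + v."""
    x, p = 0.0, 1.0
    out = np.empty_like(y)
    for t, yt in enumerate(y):
        p += q                       # predict: state uncertainty grows by process noise
        k = p / (p + r)              # Kalman gain
        x += k * (yt - x)            # update with the innovation
        p *= (1 - k)
        out[t] = x
    return out

rng = np.random.default_rng(0)
fs = 400                             # samples per second, as in the abstract
t = np.arange(0, 10, 1 / fs)
truth = 0.5 * np.sin(2 * np.pi * 0.1 * t)      # slow hemodynamic-like response
y = truth + rng.normal(0, 0.5, t.size)         # physiological + instrument noise
est = kalman_smooth(y, q=1e-5, r=0.25)

cnr = lambda s: np.abs(truth).mean() / np.abs(s - truth).std()
print(f"CNR raw: {cnr(y):.2f}, filtered: {cnr(est):.2f}")
```

Because each update touches only a scalar state per channel, this kind of filter runs comfortably in real time at hundreds of samples per second, which is consistent with the throughput figures quoted in the abstract.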

To Interstitial Fluid and Beyond: Microneedles and Electrochemical Aptamer Based Sensors as a Generalizable, Wearable Biosensor Platform

Friedel, Mark January 2022 (has links)
No description available.

Trust in Human Activity Recognition Deep Learning Models

Simons, Ama January 2021 (has links)
Trust is explored in this thesis through an analysis of the robustness of wearable-device-based artificial intelligence models to changes in data acquisition. Specifically, changes in wearable device hardware and across recording sessions are explored. Three human activity recognition models are used as a vehicle for this exploration: Model A, trained on accelerometer signals recorded by a wearable sensor referred to as Astroskin; Model H, trained on accelerometer signals from a wearable sensor referred to as the BioHarness; and Model A Type 1, trained on Astroskin accelerometer signals recorded during the first session of the experimental protocol. On a test set recorded by Astroskin, Model A had a 99.07% accuracy; however, on a test set recorded by the BioHarness, its accuracy dropped to 65.74%. On a test set recorded by the BioHarness, Model H had a 95.37% accuracy; however, on a test set recorded by Astroskin, its accuracy dropped to 29.63%. Model A Type 1 achieved an average accuracy of 99.57% on data recorded by the same wearable sensor in the same session; 50.95% on data recorded by the same wearable sensor in a different session; 41.31% on data recorded by a different wearable sensor in the same session; and 19.28% on data recorded by a different wearable sensor in a different session. An out-of-domain discriminator for Model A Type 1 was also implemented. The discriminator was able to differentiate between the data that trained Model A Type 1 and other data (recorded by different wearable devices or in different sessions) with an accuracy of 97.60%. / Thesis / Master of Applied Science (MASc) / The trustworthiness of artificial intelligence must be explored before society can fully reap its benefits.
The element of trust explored in this thesis is the robustness of wearable-device-based artificial intelligence models to changes in data acquisition. The specific changes explored are changes in the wearable device used to record the input data, as well as input data from different recording sessions. Using human activity recognition models as a vehicle, the results show that performance degrades when the wearable device is changed and when data comes from a different recording session. An out-of-domain discriminator is developed to alert users when performance degradation may occur.
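The out-of-domain discriminator idea can be sketched with a simple distance-based detector; this is an illustrative assumption, not the thesis's actual discriminator, which is a trained model. Here, samples whose Mahalanobis distance to the training-feature distribution exceeds a chi-square threshold are flagged as potentially coming from a different device or session.

```python
import numpy as np

rng = np.random.default_rng(1)

# In-domain: accelerometer-like features from the training device/session (synthetic).
train = rng.normal(0.0, 1.0, (500, 3))

# Fit a Gaussian to the training features.
mu = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def ood_score(x):
    """Squared Mahalanobis distance of each sample to the training distribution."""
    d = x - mu
    return np.einsum('...i,ij,...j->...', d, cov_inv, d)

threshold = 11.34   # chi-square 99th percentile for 3 degrees of freedom

in_dom = rng.normal(0.0, 1.0, (200, 3))    # same device/session statistics
out_dom = rng.normal(1.5, 2.0, (200, 3))   # shifted: different device/session

flag_in = (ood_score(in_dom) > threshold).mean()
flag_out = (ood_score(out_dom) > threshold).mean()
print(f"flagged in-domain: {flag_in:.0%}, out-of-domain: {flag_out:.0%}")
```

The threshold trades false alarms on in-domain data against missed detections on shifted data, which mirrors how such a discriminator would be tuned before warning users about possible accuracy degradation.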

Gait Analysis from Wearable Devices using Image and Signal Processing

Schneider, Bradley A. January 2017 (has links)
No description available.

Fractal Structure and Complexity Matching in Naturalistic Human Behavior

Rigoli, Lillian M. 24 September 2018 (has links)
No description available.

The birth of the cyberkid: a genealogy of the educational arena for assistive technology

Savas, Thomas 26 February 2007 (has links)
No description available.

Sensory: Designing an AI-Powered Interactive Artefact for Managing Sensory Overload Experiences

Trăistar, Bianca January 2023 (has links)
This project explores how interactive technological artefacts can support young adults experiencing sensory overload. It investigates the potential of carefully designing technologies tailored to the user's sensory needs: managing the sensory experience, reflecting on it, and understanding its patterns. The project adopted an iterative design process and applied several methods, with an emphasis on designing meaningful user experiences. User research was conducted with people experiencing sensory overload to understand the experiential aspects of trying to manage the negative experiences it entails. The final design consists of two prototypes. The first was a Role Prototype created with Figma, taking the form of an mHealth app powered by Artificial Intelligence for detecting patterns in the user's data and documenting sensory experiences through journaling. The second is an Implementation Prototype in the form of a tangible electronic prototype built with an Arduino Nano 33 BLE Sense and sensors for recognising biometric data and environmental cues. Additionally, technological exploration was undertaken through sketching in hardware, which led to an investigation of how Machine Learning and gas sensors can be combined to create a scent-detecting sensor. The results suggest that the design concept provides the user with valuable tools to manage overwhelming sensory experiences. Lastly, the results indicate that a sensory experience prediction component would be a valuable feature for people who need to manage sensory overload.

Validity Parameters for Step Counting Wearable Technologies During Treadmill Walking in Young People 6-20 Years of Age

Gould, Zachary 18 December 2020 (has links) (PDF)
Introduction: Wearable technologies play an important contemporary role in the measurement of physical activity (PA) and the promotion of human health across the lifespan, including for young people (i.e., children, adolescents, and young adults). As new objective wearable technologies continue to develop, standardized approaches to documenting validation parameters (i.e., measures of accuracy, precision, and bias) are needed to ensure confidence and comparability in step-defined PA. Purpose: To produce validity parameters for step counting wearable technologies during treadmill walking in young people 6-20 years of age. Methods: 120 participants completed 5-minute treadmill bouts from 13.4 to 134.1 m·min-1. Participants wore eight technologies (two at the arm/wrist, four at the waist, one on the thigh, and one on the ankle) while steps were directly observed. Speed-, wear location-, and age-specific measures of accuracy (mean absolute percent error; MAPE), precision (correlation coefficient; standard deviation, SD; coefficient of variation, CoV), and bias (percent error; PE) were computed and cataloged. Results: Speed and wear location had a significant effect on accuracy and bias measures for wearable technologies (p < …). Conclusion: While the analyses indicate the significance of speed and wear location for wearable technology performance, the useful and comprehensive validity reference values cataloged herein will help optimize the measurement of PA in youth. Future research should continue to rigorously validate new wearable technologies as they are developed, and also extend these standardized reference values developed in the laboratory to the free-living environment.
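The accuracy, precision, and bias measures named in the abstract (MAPE, correlation, SD, CoV, PE) follow standard definitions, which a short sketch can make concrete. The step counts below are hypothetical illustration data, not values from the study.

```python
import numpy as np

def step_count_validity(observed, device):
    """Accuracy, precision, and bias measures for step counts, as named in the abstract."""
    observed = np.asarray(observed, float)
    device = np.asarray(device, float)
    pe = 100 * (device - observed) / observed           # signed percent error per bout (bias)
    mape = np.mean(np.abs(pe))                          # mean absolute percent error (accuracy)
    r = np.corrcoef(observed, device)[0, 1]             # correlation coefficient (precision)
    cov = 100 * device.std(ddof=1) / device.mean()      # coefficient of variation (precision)
    return {"MAPE": mape, "PE": pe.mean(), "r": r, "CoV": cov}

# Hypothetical directly observed vs device-recorded steps for one device at one speed.
obs = [520, 540, 510, 530, 525]
dev = [500, 548, 495, 522, 510]
print(step_count_validity(obs, dev))
```

Cataloging these four numbers per speed, wear location, and age group is what makes devices directly comparable in a standardized way, as the abstract argues.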

Multiscale Quantitative Analytics of Human Visual Searching Tasks

Chen, Xiaoyu 16 July 2021 (has links)
Benefiting from recent advancements in artificial intelligence (AI) methods, industrial automation has replaced human labor in many tasks. However, humans still play the central role when visual searching tasks are heavily involved in manufacturing decision-making. For example, highly customized products fabricated by additive manufacturing processes have posed significant challenges to AI methods in terms of performance and generalizability. As a result, in practice, human visual searching tasks are still widely involved in manufacturing contexts (e.g., human resource management, quality inspection, etc.) based on various visualization techniques. Quantitatively modeling visual searching behaviors and performance will not only contribute to the understanding of the decision-making process in a visualization system, but also advance AI methods by incubating them with human expertise. In general, visual searching can be quantitatively understood at multiple scales, namely: 1) the population scale, which treats individuals equally and models the general relationship between individuals' physiological signals and visual searching decisions; 2) the individual scale, which models the relationship between individual differences and visual searching decisions; and 3) the attention scale, which models the relationship between individuals' attention during visual searching and visual searching decisions. Advancements in wearable sensing techniques enable such multiscale quantitative analytics of human visual searching performance. For example, by equipping human users with an electroencephalogram (EEG) device, an eye tracker, and a logging system, the multiscale quantitative relationships among human physiological signals, behaviors, and performance can be readily established.
This dissertation attempts to quantify the visual searching process at multiple scales by proposing (1) a data-fusion method to model the quantitative relationship between physiological signals and humans' perceived task complexities (population scale, Chapter 2); (2) a recommender system to quantify and decompose individual differences into explicit and implicit differences via personalized recommender system-based sensor analytics (individual scale, Chapter 3); and (3) a visual language processing modeling framework to identify visual cues (i.e., identified from fixations) and correlate them with humans' quality inspection decisions in visual searching tasks (attention scale, Chapter 4). Finally, Chapter 5 summarizes the contributions and proposes future research directions. The proposed methodologies can be readily extended to other applications and research studies to support multiscale quantitative analytics. Besides, the quantitative understanding of human visual searching behaviors and performance can also generate insights to further incubate AI methods with human expertise. Merits of the proposed methodologies are demonstrated in a visualization evaluation user study and a cognitive hacking user study. Detailed notes to guide implementation and deployment are provided for practitioners and researchers in each chapter. / Doctor of Philosophy / Existing industrial automation is limited by the performance and generalizability of artificial intelligence (AI) methods. Therefore, various human visual searching tasks are still widely involved in manufacturing contexts based on many visualization techniques, e.g., searching for specific information and making decisions based on sequentially gathered information. Quantitatively modeling visual searching performance will not only contribute to the understanding of human behaviors in a visualization system, but also advance AI methods by incubating them with human expertise.
In this dissertation, visual searching performance is characterized at multiple scales, namely: 1) the population scale, to understand visual searching performance regardless of individual differences; 2) the individual scale, to model performance by quantifying individual differences; and 3) the attention scale, to quantify the human visual searching-based decision-making process. Thanks to advancements in wearable sensing techniques, this dissertation attempts to quantify the visual searching process at multiple scales by proposing (1) a data-fusion method to model the quantitative relationship between physiological signals and humans' perceived task complexities (population scale, Chapter 2); (2) a recommender system to suggest the best visualization design to the right person at the right time via sensor analytics (individual scale, Chapter 3); and (3) a visual language processing modeling framework to model humans' quality inspection decisions (attention scale, Chapter 4). Finally, Chapter 5 summarizes the contributions and proposes future research directions. Merits of the proposed methodologies are demonstrated in a visualization evaluation user study and a cognitive hacking user study. The proposed methodologies can be readily extended to other applications and research studies to support multiscale quantitative analytics.

Joint Angle Estimation Method for Wearable Human Motion Capture

Redhouse, Amanda Jean 27 May 2021 (has links)
This thesis presents a method for estimating the positions of human limbs during motion that can be applied to wearable, textile-based sensors. The method was validated for the elbow and shoulder joints with data from two garments with resistive, thread-based sensors sewn in at multiple locations. The proposed method was able to estimate the elbow joint position with an average error of 2.2 degrees. The method also produced an average difference in Euclidean distance of 3.7 degrees for the estimated shoulder joint position using data from nine sensors placed around the subject's shoulder. The most accurate combination of sensors on the shoulder garment produced an average difference in distance of 3.4 degrees and used only six sensors. The characteristics of the resistive, thread-based sensors used to validate the method are also detailed, as some of their behaviors were found to negatively affect the method's accuracy. / Master of Science / Human motion capture systems gather data on the position of the human body during motion. The data is then used to recreate and analyze the motion digitally. There is a need for motion capture devices capable of measuring long-term data on human motion, especially in physical therapy. However, the currently available motion capture systems have limitations that make long-term or daily use either impossible or uncomfortable. This thesis presents a method that uses data from wearable, textile-based sensors to estimate the positions of human limbs during motion. Two garments were used to validate the method on the elbow and shoulder joints. The proposed method was able to measure the elbow and shoulder joints with an average accuracy that is within the acceptable range for clinical settings.
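One common way to map textile sensor readings to a joint angle is a calibrated linear least-squares fit from the resistance channels to the measured angle. This is offered as an illustrative sketch under assumed synthetic data, not necessarily the estimation method developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration data: six resistive thread sensors stretched by elbow flexion.
angles = np.linspace(0, 120, 60)                  # ground-truth joint angle (degrees)
true_gain = rng.uniform(0.5, 2.0, 6)              # each sensor's assumed sensitivity
R = angles[:, None] * true_gain + rng.normal(0, 3.0, (60, 6))  # noisy resistance readings

# Fit a linear map (with intercept) from the six sensor channels to the angle.
X = np.column_stack([R, np.ones(len(R))])
w, *_ = np.linalg.lstsq(X, angles, rcond=None)

est = X @ w
err = np.abs(est - angles).mean()
print(f"mean absolute error: {err:.1f} degrees")
```

Combining several redundant channels this way averages out per-sensor noise, which is one reason a six-sensor combination can outperform a larger, noisier set, as the shoulder results above suggest.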
