About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Take the Lead: Toward a Virtual Video Dance Partner

Farris, Ty 01 August 2021 (has links) (PDF)
My work focuses on taking a single person as input and predicting the intentional movement of one dance partner from the other partner's movement. Human pose estimation has been applied to dance in computer vision research, but most existing applications focus on one or more individuals performing alone. Very few works focus specifically on dance couples combined with pose prediction. This thesis is applicable to the entertainment and gaming industries, where it could be used to train people to dance with a virtual dance partner. Many existing interactive or virtual dance partners require a motion capture system, multiple cameras, or a robot, all of which are expensive. This thesis uses no motion capture system; instead, it combines OpenPose with swing dance YouTube videos to create a virtual dance partner. Taking the current dancer's moves as input, the system predicts the dance partner's corresponding moves in the video frames. Creating a virtual dance partner requires datasets containing skeleton keypoint information from which a partner's pose can be predicted. Existing dance datasets cover specific dance styles, but most do not cover swing, and the few that do include only a limited number of videos. The contribution of this thesis is a large swing dataset containing three different types of swing dance: East Coast, Lindy Hop, and West Coast. I also provide a basic framework for extending the work to a real-time, interactive dance partner.
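As an illustration only (not the thesis's code), the sketch below shows one way per-frame OpenPose keypoints of the lead dancer could be regressed onto the follower's keypoints. The file names, array shapes, and model choice are assumptions.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Assumed data layout: one row per video frame, 25 OpenPose body keypoints
    # per dancer, each keypoint an (x, y) pair flattened to 50 values.
    lead_poses = np.load("lead_keypoints.npy")      # shape (n_frames, 50), hypothetical file
    follow_poses = np.load("follow_keypoints.npy")  # shape (n_frames, 50), hypothetical file

    # Train on the first 80% of frames, hold out the rest.
    split = int(0.8 * len(lead_poses))
    model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
    model.fit(lead_poses[:split], follow_poses[:split])

    # Predict the partner's pose for an unseen frame of the lead dancer.
    predicted_follow = model.predict(lead_poses[split:split + 1])
    print(predicted_follow.reshape(25, 2))          # 25 predicted (x, y) keypoints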
182

A Pareto-Frontier Analysis of Performance Trends for Small Regional Coverage LEO Constellation Systems

Hinds, Christopher Alan 01 December 2014 (has links) (PDF)
As satellites become smaller, cheaper, and quicker to manufacture, constellation systems will be an increasingly attractive means of meeting mission objectives. Optimizing satellite constellation geometries is therefore a topic of considerable interest. As constellation systems become more achievable, providing coverage to specific regions of the Earth will become more commonplace. Small countries or companies that currently cannot afford large and expensive constellation systems will, now or in the near future, be able to afford their own systems to meet their individual requirements for small coverage regions. The focus of this thesis was to optimize constellation geometries for small coverage regions, with the constellation design limited to 1-6 satellites in a Walker-delta configuration at an altitude of 200-1500 km, providing remote sensing coverage with a minimum ground elevation angle of 60 degrees. Few Pareto-frontiers have been developed and analyzed to show the tradeoffs among the various performance metrics, especially for this type of constellation system. The performance metrics focus on geometric coverage and include revisit time, daily visibility time, constellation altitude, ground elevation angle, and the number of satellites. The objective space containing these performance metrics was characterized for five different regions at latitudes of 0, 22.5, 45, 67.5, and 90 degrees. In addition, the effect of minimum ground elevation angle on the achievable performance of this type of constellation system was studied. Finally, the traditional Walker-delta pattern constraint was relaxed to allow for asymmetrical designs, which were compared against the Walker-delta results to see how the symmetric pattern performs relative to a more relaxed design space. The goal of this thesis was both to provide a framework and to obtain and analyze Pareto-frontiers for constellation performance relating to small regional coverage LEO constellation systems. This work provides an in-depth analysis of the trends in both the design and objective spaces of the obtained Pareto-frontiers. A variation on the εNSGA-II algorithm was used along with a MATLAB/STK interface to produce these Pareto-frontiers. The εNSGA-II algorithm is an evolutionary algorithm developed by Kalyanmoy Deb to solve complex multi-objective optimization problems, and it proved very efficient at obtaining the various Pareto-frontiers. The study was also successful in characterizing the design and solution space of small LEO remote sensing constellation systems providing small regional coverage.
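The thesis pairs an εNSGA-II variant with a MATLAB/STK interface; as a much simpler illustration of the underlying idea, the sketch below filters a set of synthetic candidate constellation designs down to the non-dominated (Pareto-optimal) ones. The metric ranges and random designs are assumptions, not data from the study.

    import numpy as np

    # Synthetic candidate designs: columns are (mean revisit time [min],
    # -daily visibility [min], altitude [km], number of satellites).
    # All objectives are expressed so that smaller is better.
    rng = np.random.default_rng(0)
    designs = rng.uniform([30, -400, 200, 1], [300, -30, 1500, 6], size=(200, 4))

    def pareto_front(points):
        """Return indices of non-dominated rows (all objectives minimized)."""
        n = len(points)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            if not keep[i]:
                continue
            # Row j dominates row i if it is <= in every objective and < in at least one.
            dominated = np.all(points <= points[i], axis=1) & np.any(points < points[i], axis=1)
            if dominated.any():
                keep[i] = False
        return np.where(keep)[0]

    front = pareto_front(designs)
    print(f"{len(front)} non-dominated designs out of {len(designs)}")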
183

Proceedings of Cyberworlds 2009

Ugail, Hassan, Qahwaji, Rami S.R., Earnshaw, Rae A., Willis, P.J. 11 1900 (has links)
184

A Bridge between Graph Neural Networks and Transformers: Positional Encodings as Node Embeddings

Manu, Bright Kwaku 01 December 2023 (has links) (PDF)
Graph Neural Networks (GNNs) and Transformers are powerful frameworks for machine learning tasks. Although they evolved separately in different fields, recent research has revealed similarities and links between them. This work focuses on bridging the gap between GNNs and Transformers by offering a unified framework that highlights their similarities and distinctions. We compute positional encodings and identify the key properties that allow them to serve as node embeddings, finding that expressiveness, efficiency, and interpretability are achieved in the process. We show that positional encodings can be used as node embeddings for machine learning tasks such as node classification, graph classification, and link prediction. We discuss some challenges and provide future directions.
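Laplacian eigenvector positional encodings are one common choice for graph Transformers; the abstract does not say which encoding the thesis uses, so the sketch below is only an assumed illustration of computing such encodings and treating each row as a node embedding.

    import numpy as np
    import networkx as nx

    # Toy graph; in practice this would be the dataset's input graph.
    G = nx.karate_club_graph()
    A = nx.to_numpy_array(G)

    # Normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
    deg = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

    # The k smallest non-trivial eigenvectors of L serve as positional encodings;
    # each row is then used as (or concatenated to) that node's embedding.
    k = 8
    eigvals, eigvecs = np.linalg.eigh(L)
    pos_enc = eigvecs[:, 1:k + 1]   # skip the trivial constant eigenvector
    print(pos_enc.shape)            # (num_nodes, k)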
185

Designing an Artificial Immune inspired Intrusion Detection System

Anderson, William Hosier 08 December 2023 (has links) (PDF)
The domain of Intrusion Detection Systems (IDS) has witnessed growing interest in recent years due to the escalating threats posed by cyberattacks. As the Internet of Things (IoT) becomes increasingly integrated into our everyday lives, we widen our attack surface and expose more of our personal lives to risk. In the same way that the Human Immune System (HIS) safeguards our physical selves, a similar solution is needed to safeguard our digital selves. This thesis presents the Artificial Immune inspired Intrusion Detection System (AIS-IDS), an IDS modeled after the HIS. It proposes an architecture for the AIS-IDS, instantiates an AIS-IDS model for evaluation, conducts a robust set of experiments to ascertain the efficacy of the AIS-IDS, and answers key research questions aimed at evaluating its validity. Finally, two expansions to the AIS-IDS are proposed with the goal of further infusing the HIS into the AIS-IDS design.
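The abstract does not detail the AIS-IDS internals. As an assumed illustration of the immune-system analogy only, the sketch below implements negative selection, a classic artificial-immune-system scheme, over made-up network-traffic features; the feature dimensions, radius, and data are not from the thesis.

    import numpy as np

    rng = np.random.default_rng(42)

    # "Self" set: feature vectors of normal network traffic (scaled to [0, 1]).
    self_traffic = rng.normal(0.5, 0.05, size=(500, 4)).clip(0, 1)

    def generate_detectors(self_set, n_detectors=200, radius=0.15):
        # Negative selection: keep random detectors that do NOT match any self sample.
        detectors = []
        while len(detectors) < n_detectors:
            candidate = rng.uniform(0, 1, size=self_set.shape[1])
            if np.min(np.linalg.norm(self_set - candidate, axis=1)) > radius:
                detectors.append(candidate)
        return np.array(detectors)

    detectors = generate_detectors(self_traffic)

    def is_anomalous(sample, detectors, radius=0.15):
        # A sample matched by any detector is flagged as non-self (possible intrusion).
        return bool(np.any(np.linalg.norm(detectors - sample, axis=1) <= radius))

    print(is_anomalous(np.array([0.5, 0.5, 0.5, 0.5]), detectors))   # likely False (normal)
    print(is_anomalous(np.array([0.95, 0.05, 0.9, 0.1]), detectors)) # likely True (anomalous)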
186

Biomarker Identification for Breast Cancer Types Using Feature Selection and Explainable AI Methods

La Rosa Giraud, David E 01 January 2023 (has links) (PDF)
This paper investigates the impact of the LASSO, mRMR, SHAP, and reinforcement feature selection techniques on random forest models for the breast cancer subtype markers ER, HER2, PR, and TN, and identifies a small subset of biomarkers that could potentially cause the disease, explaining them using explainable AI techniques. This matters because in areas such as healthcare, where a model's output serves as a diagnostic for an individual, understanding why the model makes a specific decision requires reliable AI. Another contribution is using feature selection methods to identify a small subset of biomarkers capable of predicting whether a specific RNA sequence will be positive for one of the cancer labels. The study begins by obtaining a baseline accuracy metric with a random forest model on The Cancer Genome Atlas's breast cancer database, then explores the effects of feature selection, showing that the number of features selected significantly influences model accuracy, and selects a small number of potential biomarkers that may produce a specific type of breast cancer. Once the biomarkers were selected, the explainable AI techniques SHAP and LIME were applied to the models, providing insight into influential biomarkers and their impact on predictions. The main results are that some biomarkers with high influence on model predictions are shared across subsets; that the LASSO and reinforcement feature selection sets score the highest accuracy of all sets; and that applying the existing explainable AI methods SHAP and LIME yields insight into how the selected features affect the models' predictions.
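As an assumed, simplified illustration of one of the pipelines described above (not the paper's code), the sketch below runs LASSO-style feature selection into a random forest and explains it with SHAP. It uses synthetic data in place of the TCGA expression matrix, and all parameters and shapes are placeholders.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectFromModel
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    import shap  # third-party explainability package

    # Synthetic stand-in for an RNA-seq expression matrix (samples x genes)
    # and a binary subtype label such as ER+/ER-.
    X, y = make_classification(n_samples=400, n_features=1000, n_informative=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # L1-penalized (LASSO-style) logistic regression to pick a small biomarker panel.
    selector = SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1), max_features=30
    ).fit(X_train, y_train)
    X_train_sel, X_test_sel = selector.transform(X_train), selector.transform(X_test)

    # Random forest on the selected features, then SHAP to explain its predictions.
    rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train_sel, y_train)
    print("test accuracy:", rf.score(X_test_sel, y_test))

    explainer = shap.TreeExplainer(rf)
    shap_values = explainer.shap_values(X_test_sel)  # per-feature contributions per sample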
187

Predicting Location and Training Effectiveness (PLATE)

Bruenner, Erik Rolf 01 June 2023 (has links) (PDF)
Physical activity and exercise have been shown to have an enormous impact on many areas of human health and can reduce the risk of many chronic diseases. In order to better understand how exercise may affect the body, current kinesiology studies are designed to track human movements over large intervals of time. Procedures used in these studies provide a way for researchers to quantify an individual’s activity level over time, along with tracking the various types of activities that individuals may engage in. Movement data of research subjects is often collected through sensors such as accelerometers. Data from these specialized sensors may be fed into a deep learning model that can accurately predict what movements a person is making based on aggregated sensor data. However, for prediction models to produce accurate classifications of activities, they must be ‘trained’. Training occurs through supervised learning on large amounts of data where the movements are already known. These training data sets are also known as ‘validation’ data or ‘ground truth’. Currently, generation of these ground truth sets is very labor-intensive: research assistants must analyze many hours of video footage of research subjects, painstakingly categorizing each video, second by second, with a description of the activity the subject was engaging in. Using only labeled video, the PLATE project facilitates the generation of ground truth data by developing an artificial intelligence (AI) that predicts video quality labels, along with labels that denote the physical location in which these activities occurred. The PLATE project builds on previous work by a former graduate student, Roxanne Miller, who developed a classification system to categorize subject activities into groups such as ‘Stand’, ‘Sit’, ‘Walk’, and ‘Run’. The PLATE project focuses instead on developing AI to generate ground truth training data that accurately identifies the quality of video data and the location in which it was recorded. In the context of the PLATE project, video quality refers to whether or not a test subject is visible in the frame. Location classifications include ‘indoors’, ‘outdoors’, and ‘traveling’. More specifically, indoor locations are further identified as ‘house’, ‘office’, ‘school’, ‘store’, or ‘commercial’ space, while outdoor locations are further classified as ‘commercial space’, ‘park/greenspace’, ‘residential’, or ‘neighborhood’. The nature of this location classification problem lends itself particularly well to a hierarchical classification approach, in which general indoor, outdoor, or travel categories are predicted first and separate models then predict the subclassifications of these categories. The PLATE project uses three convolutional neural networks in its hierarchical location prediction pipeline and one convolutional neural network to predict whether video frames are high or low quality. Results from the PLATE project demonstrate that quality can be predicted with an accuracy of 96%, general location with an accuracy of 75%, and specific locations with an accuracy of 31%. The findings and model produced by the PLATE project are used in the PathML project as part of ground truth prediction software for activity monitoring studies. PathML is a project funded by the NIH as part of a Small Business Research Initiative.
Cal Poly partnered with Sentimetrix Inc., a data analytics and machine learning company, to build a methodology for automated labeling of human physical activity. The partnership aims to use this methodology to develop a software tool that performs automatic labeling and facilitates subsequent human inspection. Phase I (proof of concept) of the project took place from September 2021 to August 2022; Phase II (final software production) is pending. This thesis is part of the research that took place during the Phase I lifetime and continues to support Phase II development.
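As an assumed, simplified illustration of the hierarchical approach described above: a coarse indoor/outdoor/travel classifier routes each frame to a per-category sub-classifier. PLATE uses convolutional neural networks on video frames; the logistic-regression models and random features below are placeholders only.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Placeholder frame features; PLATE instead runs CNNs on raw video frames.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(600, 32))
    coarse_y = rng.integers(0, 3, size=600)   # 0=indoor, 1=outdoor, 2=travel
    fine_y = rng.integers(0, 5, size=600)     # subclass within each coarse label

    # Stage 1: coarse location model; Stage 2: one sub-model per coarse class.
    coarse_model = LogisticRegression(max_iter=1000).fit(X, coarse_y)
    sub_models = {
        c: LogisticRegression(max_iter=1000).fit(X[coarse_y == c], fine_y[coarse_y == c])
        for c in np.unique(coarse_y)
    }

    def predict_location(frame_features):
        coarse = int(coarse_model.predict(frame_features.reshape(1, -1))[0])
        fine = int(sub_models[coarse].predict(frame_features.reshape(1, -1))[0])
        return coarse, fine

    print(predict_location(X[0]))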
188

Design And Implementation Of A Vision-Based Deep-Learning Protocol For Kinematic Feature Extraction With Application To Stroke Rehabilitation

Luna Inga, Juan Diego 01 June 2024 (has links) (PDF)
Stroke is a leading cause of long-term disability, affecting thousands of individuals annually and significantly impairing their mobility, independence, and quality of life. Traditional methods for assessing motor impairments are often costly and invasive, creating substantial barriers to effective rehabilitation. This thesis explores the use of DeepLabCut (DLC), a deep-learning-based pose estimation tool, to extract clinically meaningful kinematic features from video data of stroke survivors with upper-extremity (UE) impairments. To conduct this investigation, a specialized protocol was developed to tailor DLC for analyzing movements characteristic of UE impairments in stroke survivors. This protocol was validated through comparative analysis using peak acceleration (PA), mean squared jerk (MSJ), and area under the curve (AUC) as kinematic features. These features were extracted from the DLC output and compared to those derived from the assumed ground-truth data from IMU sensors worn by the participants. The accuracy of this analysis was quantified using percent mean squared error (PMSE) between each IMU sensor and DLC. PMSE analysis indicates that DLC-based kinematic features capture aspects of both accelerometer and gyroscope for the control participant. PA (8.78%) and AUC (3.28%) align more closely with the gyroscope, while MSJ (5.20%) demonstrates greater agreement with the accelerometer. On the other hand, for the stroke participant, DLC estimations for all kinematic features predominantly reflect data from the accelerometer. Across all datasets, AUC has the smallest PMSE values, suggesting that, based on our data, motor effort and energy expenditure in the tasks are best represented by DLC. Additionally, PMSE values for the stroke dataset are higher than those for the control, highlighting DLC's limitations in accurately detecting finer details of motion data in individuals with UE impairments. The results indicate that DLC reasonably estimates kinematic data for both participants, although further refinement of the methods is necessary to enhance the analysis of stroke data.
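As an illustration (not code from the thesis), the sketch below computes PA, MSJ, and AUC from a synthetic 2-D keypoint trajectory using finite differences, along with one plausible form of the PMSE comparison against an IMU-derived signal. The frame rate, trajectory, and exact PMSE formula are assumptions.

    import numpy as np

    fps = 30.0                          # assumed camera frame rate
    t = np.arange(0, 5, 1 / fps)
    # Stand-in wrist trajectory (x, y per frame); real data would come from DLC output.
    traj = np.column_stack([np.sin(t), 0.5 * np.cos(2 * t)])

    vel = np.gradient(traj, 1 / fps, axis=0)
    acc = np.gradient(vel, 1 / fps, axis=0)
    jerk = np.gradient(acc, 1 / fps, axis=0)

    acc_mag = np.linalg.norm(acc, axis=1)
    peak_acceleration = acc_mag.max()                               # PA
    mean_squared_jerk = np.mean(np.linalg.norm(jerk, axis=1) ** 2)  # MSJ
    auc = np.trapz(acc_mag, dx=1 / fps)                             # AUC of the acceleration profile

    def pmse(dlc_signal, imu_signal):
        # Percent mean squared error between DLC- and IMU-derived signals,
        # assuming both are resampled to the same length and units.
        return 100.0 * np.mean((dlc_signal - imu_signal) ** 2) / np.mean(imu_signal ** 2)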
189

Exploring Algorithmic Literacy for College Students: An Educator’s Roadmap

Archambault, Susan Gardner 01 January 2022 (has links) (PDF)
Research shows that college students are largely unaware of the impact of algorithms on their everyday lives. Also, most university students are not being taught about algorithms as part of the regular curriculum. This exploratory, qualitative study aimed to explore subject-matter experts’ insights and perceptions of the knowledge components, coping behaviors, and pedagogical considerations to aid faculty in teaching algorithmic literacy to college students. Eleven individual, semi-structured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. Findings suggested three sets of knowledge components that would contribute to students’ algorithmic literacy: general characteristics and distinguishing traits of algorithms, key domains in everyday life using algorithms (including the potential benefits and risks), and ethical considerations for the use and application of algorithms. Findings also suggested five behaviors that students could use to help them better cope with algorithmic systems and nine teaching strategies to help improve students’ algorithmic literacy. Suggestions also surfaced for alternative forms of assessment, potential placement in the curriculum, and how to distinguish between basic algorithmic awareness compared to algorithmic literacy. Recommendations for expanding on the current Association of College and Research Libraries’ Framework for Information Literacy for Higher Education (2016) to more explicitly include algorithmic literacy were presented.
190

Learning Preference Models for Autonomous Mobile Robots in Complex Domains

Silver, David 01 December 2010 (has links)
Achieving robust and reliable autonomous operation even in complex unstructured environments is a central goal of field robotics. As the environments and scenarios to which robots are applied have continued to grow in complexity, so has the challenge of properly defining preferences and tradeoffs between various actions and the terrains they result in traversing. These definitions and parameters encode the desired behavior of the robot; therefore their correctness is of the utmost importance. Current manual approaches to creating and adjusting these preference models and cost functions have proven to be incredibly tedious and time-consuming, while typically not producing optimal results except in the simplest of circumstances. This thesis presents the development and application of machine learning techniques that automate the construction and tuning of preference models within complex mobile robotic systems. Utilizing the framework of inverse optimal control, expert examples of robot behavior can be used to construct models that generalize demonstrated preferences and reproduce similar behavior. Novel learning from demonstration approaches are developed that offer the possibility of significantly reducing the amount of human interaction necessary to tune a system, while also improving its final performance. Techniques to account for the inevitability of noisy and imperfect demonstration are presented, along with additional methods for improving the efficiency of expert demonstration and feedback. The effectiveness of these approaches is confirmed through application to several real world domains, such as the interpretation of static and dynamic perceptual data in unstructured environments and the learning of human driving styles and maneuver preferences. Extensive testing and experimentation both in simulation and in the field with multiple mobile robotic systems provides empirical confirmation of superior autonomous performance, with less expert interaction and no hand tuning. These experiments validate the potential applicability of the developed algorithms to a large variety of future mobile robotic systems.
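The abstract describes learning cost functions from expert demonstrations via inverse optimal control. As a toy, assumed illustration of the maximum-margin flavor of that idea (not the thesis's algorithms), the sketch below nudges terrain-feature cost weights so a demonstrated path becomes no more expensive than the planner's current best path; the grid, features, planner, and step size are all made up.

    import numpy as np

    rng = np.random.default_rng(3)

    # Each cell of a toy costmap has a feature vector; path cost = sum of w . features.
    features = rng.uniform(0, 1, size=(10, 10, 3))   # 10x10 grid, 3 terrain features
    w = np.ones(3)                                    # initial cost weights

    expert_path = [(0, c) for c in range(10)]         # demonstrated route (top row)

    def path_cost(path, w):
        return sum(features[r, c] @ w for r, c in path)

    def cheapest_row_path(w):
        # Stand-in "planner": pick the single cheapest full row under current weights.
        costs = [path_cost([(r, c) for c in range(10)], w) for r in range(10)]
        return [(int(np.argmin(costs)), c) for c in range(10)]

    # Subgradient update: lower the cost of features on the expert's path and
    # raise the cost of features on the planner's path until they agree.
    for _ in range(100):
        planned = cheapest_row_path(w)
        grad = sum(features[r, c] for r, c in expert_path) - sum(features[r, c] for r, c in planned)
        w = np.clip(w - 0.1 * grad, 1e-3, None)       # keep weights positive

    print("expert cost:", path_cost(expert_path, w),
          "planned cost:", path_cost(cheapest_row_path(w), w))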
