About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Evaluating Flexibility Metrics on Simple Temporal Networks with Reinforcement Learning

Khan, Hamzah I 01 January 2018 (has links)
Simple Temporal Networks (STNs) were introduced by Tsamardinos (2002) as a means of describing graphically the temporal constraints for scheduling problems. Since then, many variations on the concept have been used to develop and analyze algorithms for multi-agent robotic scheduling problems. Many of these algorithms for STNs utilize a flexibility metric, which measures the slack remaining in an STN under execution. Various metrics have been proposed by Hunsberger (2002); Wilson et al. (2014); Lloyd et al. (2018). This thesis explores how adequately these metrics convey the desired information by using them to build a reward function in a reinforcement learning problem.
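As a minimal illustration of the kind of flexibility metric the abstract refers to, the sketch below computes the sum of pairwise slack ranges on an STN, in the spirit of the metric attributed to Hunsberger; the three-timepoint network and its constraints are invented for the example.

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths on the STN distance graph."""
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        d[i][j] = min(d[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def naive_flexibility(n, edges):
    """Sum over pairs of flex(i, j) = D[i][j] + D[j][i], the slack range of t_j - t_i."""
    d = floyd_warshall(n, edges)
    return sum(d[i][j] + d[j][i] for i in range(n) for j in range(i + 1, n))

# Three timepoints: t1 and t2 each occur 1 to 5 units after t0, with t2 at
# least 1 unit after t1. Edge (i, j) -> w encodes the constraint t_j - t_i <= w.
edges = {(0, 1): 5, (1, 0): -1,
         (0, 2): 5, (2, 0): -1,
         (2, 1): -1}
```

A reward function of the kind the thesis studies would evaluate such a metric on the STN remaining after each scheduling decision.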
72

Leveraging Overhead Imagery for Localization, Mapping, and Understanding

Workman, Scott 01 January 2018 (has links)
Ground-level and overhead images provide complementary viewpoints of the world. This thesis proposes methods which leverage dense overhead imagery, in addition to sparsely distributed ground-level imagery, to advance traditional computer vision problems, such as ground-level image localization and fine-grained urban mapping. Our work focuses on three primary research areas: learning a joint feature representation between ground-level and overhead imagery to enable direct comparison for the task of image geolocalization, incorporating unlabeled overhead images by inferring labels from nearby ground-level images to improve image-driven mapping, and fusing ground-level imagery with overhead imagery to enhance understanding. The ultimate contribution of this thesis is a general framework for estimating geospatial functions, such as land cover or land use, which integrates visual evidence from both ground-level and overhead image viewpoints.
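Once ground-level and overhead images share a joint feature space, geolocalization reduces to nearest-neighbor search over candidate locations. The sketch below illustrates only that final matching step, with made-up feature vectors and location names; it is not the thesis's actual networks.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def geolocalize(ground_feat, overhead_db):
    """Return the location whose overhead embedding best matches the query."""
    return max(overhead_db, key=lambda loc: cosine(ground_feat, overhead_db[loc]))

# Hypothetical embeddings produced by a (not shown) joint feature extractor.
ground = [0.9, 0.1, 0.4]
db = {"downtown": [0.8, 0.2, 0.5],
      "park":     [0.1, 0.9, 0.2],
      "harbor":   [0.3, 0.3, 0.9]}
```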
73

Image-Based Roadway Assessment Using Convolutional Neural Networks

Song, Weilian 01 January 2019 (has links)
Road crashes are one of the main causes of death in the United States. To reduce the number of accidents, roadway assessment programs take a proactive approach, collecting data and identifying high-risk roads before crashes occur. However, the cost of data acquisition and manual annotation has restricted the effect of these programs. In this thesis, we propose methods to automate the task of roadway safety assessment using deep learning. Specifically, we trained convolutional neural networks on publicly available roadway images to predict safety-related metrics: the star rating score and free-flow speed. Inference speeds for our methods are mere milliseconds, enabling large-scale roadway study at a fraction of the cost of manual approaches.
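The networks described above are built from stacked convolution filters; as a minimal, framework-free illustration of that core operation, here is a direct 2-D "valid" convolution (the trained roadway models are, of course, far deeper and learn their filter weights):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

# A vertical-edge filter responds strongly where intensity changes left to right.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
```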
74

Detecting Rip Currents from Images

Maryan, Corey C 18 May 2018 (has links)
Rip current images are useful for assisting in climate studies but time-consuming to annotate by hand over thousands of images. Object detection is a possible solution for automatic annotation because of its success and popularity in identifying regions of interest in images, such as human faces. Like faces, rip currents have distinct features that set them apart from other areas of an image, such as the more generic patterns of the surf zone. There are many distinct methods of object detection applied in face detection research. In this thesis, the best fit for a rip current object detector is found by comparing these methods. In addition, the methods are improved with Haar features created exclusively for rip current images. The compared methods include max distance from the average, support vector machines, convolutional neural networks, the Viola-Jones object detector, and a meta-learner. The presented results are compared for accuracy, false positive rate, and detection rate. Viola-Jones has the best baseline performance, achieving a detection rate of 0.88 and identifying only 15 false positives in the test image set of 53 rip currents. The described meta-learner integrates the presented Haar features, which are developed in accordance with the original Viola-Jones algorithm. AdaBoost, a feature-ranking algorithm, shows that the newly presented Haar features extract more meaningful data from rip current images than some of the current features. The meta-classifier improves upon the stand-alone Viola-Jones detector when applying these features, reducing its false positives by 47% while retaining a similar computational cost and detection rate.
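The Haar features mentioned above are rectangle-sum differences evaluated in constant time via an integral image, the mechanism underlying the Viola-Jones detector; the thesis's rip-current-specific features follow the same pattern. A small sketch with an invented toy image:

```python
def integral_image(img):
    """Integral image with a zero-padded first row and column."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = img[y][x] + ii[y][x + 1] + ii[y + 1][x] - ii[y][x]
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle whose top-left corner is (x, y)."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle Haar feature over a 2w-by-h window."""
    return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)

# Toy image with a bright left half and dark right half.
img = [[5, 5, 1, 1],
       [5, 5, 1, 1]]
ii = integral_image(img)
```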
75

Detecting Metagame Shifts in League of Legends Using Unsupervised Learning

Peabody, Dustin P 18 May 2018 (has links)
Over the many years since their inception, the complexity of video games has risen considerably. With this increase in complexity comes an increase in the number of possible choices for players and increased difficulty for developers who try to balance the effectiveness of these choices. In this thesis we demonstrate that unsupervised learning can give game developers extra insight into their own games, providing them with a tool that can potentially alert them to problems faster than they would otherwise find them. Specifically, we use DBSCAN to look at League of Legends and the metagame players have formed with their choices, and we attempt to detect when the metagame shifts, potentially giving the developer insight into what changes to make to achieve a more balanced, fun game.
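To make the clustering step concrete, here is a from-scratch DBSCAN sketch (the thesis presumably uses a library implementation) applied to invented 2-D "pick" vectors: two dense groups, which could represent pre- and post-patch metagames, plus one off-meta outlier labeled as noise.

```python
def dbscan(points, eps, min_pts):
    """Label each point with a cluster id, or -1 for noise."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def neighbors(i):
        return [j for j in range(len(points)) if dist(points[i], points[j]) <= eps]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # not a core point (may later become a border point)
            continue
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:   # core point: keep expanding
                seeds.extend(jn)
        cluster += 1
    return labels

points = [(0, 0), (0.5, 0), (0, 0.5),        # one dense group of picks
          (10, 10), (10.5, 10), (10, 10.5),  # a second, shifted group
          (5, 5)]                             # an off-meta outlier
labels = dbscan(points, eps=1.0, min_pts=3)
```

A metagame shift would then show up as the appearance of a new cluster, or the migration of points between clusters, across patches.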
76

Using Autoencoder to Reduce the Length of the Autism Diagnostic Observation Schedule (ADOS)

Daghustani, Sara Hussain 01 March 2018 (has links)
This thesis uses autoencoders to explore the possibility of reducing the length of the Autism Diagnostic Observation Schedule (ADOS), a series of tests and observations used to diagnose autism spectrum disorders in children, adolescents, and adults of different developmental levels. The length of the ADOS, directly and indirectly, creates barriers to access for many individuals, which means that individuals who need testing are unable to get it. Reducing the length of the ADOS without significantly sacrificing its accuracy would increase its accessibility. The autoencoders used in this thesis have specific connections between layers that mimic the sectional structure of the original ADOS. They reduce the length of the ADOS by reducing its dimensionality, combining the original variables into new variables. By examining the weights of variables entering the reduced diagnostic, this thesis explores which variables are prioritized and deprioritized by the autoencoder. This information yields insights as to which variables, and underlying concepts, should be prioritized in a shorter ADOS. After training, all autoencoders used were able to reduce dimensionality with minimal accuracy losses. Examination of the weights yielded many keen insights as to which ADOS variables are the least important to their modules and can thus be eliminated or deprioritized in a reduced diagnostic. In particular, the observation of self-injurious behavior was found to be entirely unnecessary in the first three modules of the ADOS, a finding that corroborates other recent experimental results in the domain. This observation suggests that the solutions converged upon by the model have real-world significance.
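A toy illustration of the bottleneck idea (hand-set weights, not a trained model): when test items are redundant, a two-unit latent layer can reconstruct four items exactly, which is the intuition behind shortening the instrument. The item scores and pairings below are invented.

```python
def encode(x):
    # Each latent variable averages a pair of (assumed) redundant items,
    # playing the role of the trained encoder weights.
    return [(x[0] + x[1]) / 2, (x[2] + x[3]) / 2]

def decode(z):
    # The decoder maps each latent variable back to both items in its pair.
    return [z[0], z[0], z[1], z[1]]

scores = [2, 2, 1, 1]           # hypothetical item scores with pairwise redundancy
recon = decode(encode(scores))  # reconstruction from the 2-unit bottleneck
```

In the trained models, examining which items receive near-zero encoder weights is what identifies candidates for elimination from a shorter ADOS.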
77

Joint Angle Tracking with Inertial Sensors

El-Gohary, Mahmoud Ahmed 22 February 2013 (has links)
The need to characterize normal and pathological human movement has consistently driven researchers to develop new tracking devices and to improve movement analysis systems. Movement has traditionally been captured by optical, magnetic, mechanical, structured light, or acoustic systems. All of these systems have inherent limitations. Optical systems are costly, require fixed cameras in a controlled environment, and suffer from problems of occlusion. Similarly, acoustic and structured light systems suffer from the occlusion problem. Magnetic and radio frequency systems suffer from electromagnetic disturbances, noise, and multipath problems. Mechanical systems have physical constraints that limit natural body movement. Recently, the availability of low-cost wearable inertial sensors containing accelerometers, gyroscopes, and magnetometers has provided an alternative means to overcome the limitations of other motion capture systems. Inertial sensors can be used to track human movement in and outside of a laboratory, cannot be occluded, and are low cost. To calculate changes in orientation, researchers often integrate the angular velocity. However, a relatively small error or drift in the measured angular velocity leads to large integration errors. This restricts the time of accurate measurement and tracking to a few seconds. To compensate for that drift, complementary data from accelerometers and magnetometers are normally integrated in tracking systems that use the Kalman filter (KF) or the extended Kalman filter (EKF) to fuse the nonlinear inertial data. Orientation estimates are only accurate for brief moments when the body is not moving and acceleration is due only to gravity. Moreover, the success of using magnetometers to compensate for drift about the vertical axis is limited by magnetic field disturbance.
We combine kinematic models designed for the control of robotic arms with state space methods to estimate angles of the human shoulder and elbow using two wireless wearable inertial measurement units. The same method can be used to track the movement of other joints using a minimal sensor configuration with one sensor on each segment. Each limb is modeled as one kinematic chain. Velocity and acceleration are recursively tracked and propagated from one limb segment to another using Newton-Euler equations implemented in state space form. To mitigate the effect of sensor drift on tracking accuracy, our system incorporates natural physical constraints on the range of motion for each joint, models gyroscope and accelerometer random drift, and uses zero-velocity updates. The combined effect of imposing physical constraints on state estimates and modeling the sensor random drift results in superior joint angle estimates. The tracker uses the unscented Kalman filter (UKF), an improvement over the EKF that removes the need to linearize the system equations, a step that introduces tracking errors. We validate the performance of the inertial tracking system over long durations of slow, normal, and fast movements. Joint angles obtained from our inertial tracker are compared to those obtained from an optical tracking system and a high-precision industrial robot arm. Results show excellent agreement between joint angles estimated by the inertial tracker and those obtained from the two reference systems.
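The drift problem described above can be seen in a few lines: integrating a gyroscope signal that carries a small constant bias produces an orientation error that grows with time, even when the sensor is stationary. The sampling rate and bias value below are invented for illustration.

```python
def integrate_gyro(omega_samples, dt):
    """Integrate angular velocity samples (rad/s) into an angle (rad)."""
    angle = 0.0
    for w in omega_samples:
        angle += w * dt
    return angle

dt = 0.01                            # 100 Hz sampling
true_rate = 0.0                      # the sensor is actually stationary
bias = 0.02                          # hypothetical constant gyro bias, rad/s
samples = [true_rate + bias] * 1000  # ten seconds of data
drift = integrate_gyro(samples, dt)  # ~0.2 rad of pure drift after 10 s
```

Techniques like zero-velocity updates and the complementary accelerometer/magnetometer measurements mentioned above exist precisely to bound this accumulating error.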
78

Vision-Based Motion for a Humanoid Robot

Alkhulayfi, Khalid Abdullah 13 July 2016 (has links)
The overall objective of this thesis is to build an integrated, inexpensive, human-sized humanoid robot from scratch that looks and behaves like a human. More specifically, my goal is to build an android robot called the Marie Curie robot that can act like a human actor in the Portland Cyber Theater in the play Quantum Debate, with a known script for every robot behavior. In order to achieve this goal, the humanoid robot needs to have degrees of freedom (DOF) similar to human DOFs. Each part of the Curie robot was built to achieve the goal of building a complete humanoid robot. The important additional constraints of this project were: 1) to build the robot from available components, 2) to minimize costs, and 3) to be simple enough that the design can be replicated by non-experts, so they can create robot theaters worldwide. Furthermore, the robot appears lifelike because it executes two main behaviors like a human being. The first behavior is tracking, where the humanoid robot uses a tracking algorithm to follow a human being. In other words, the tracking algorithm allows the robot to control its neck using information taken from the vision system to look at the nearest human face. In addition, the robot uses the same vision system to track labeled objects. The second behavior is grasping, where the inverse kinematics (IK) is calculated so the robot can move its hand to a specific coordinate in the surrounding space. IK gives the robot the ability to move its end-effector (hand) in a way closer to how humans move their hands.
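The simplest instance of the IK mentioned above is the closed-form solution for a planar two-link arm, sketched below with a forward-kinematics check; the link lengths are made up, and the Curie robot's actual geometry and DOF count differ.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles reaching (x, y); elbow-down solution."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

# Reach the point (1, 1) with two unit-length links.
shoulder, elbow = two_link_ik(1.0, 1.0, l1=1.0, l2=1.0)
```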
79

Augmented Terrain-Based Navigation to Enable Persistent Autonomy for Underwater Vehicles in GPS-Denied Environments

Reis, Gregory M 14 June 2018 (has links)
Aquatic robots, such as Autonomous Underwater Vehicles (AUVs), play a major role in the study of ocean processes that require long-term sampling efforts, and they commonly navigate via dead reckoning using an accelerometer, a magnetometer, a compass, an IMU, and a depth sensor for feedback. However, these instruments are subject to large drift, leading to unbounded uncertainty in location. Moreover, the spatio-temporal dynamics of the ocean environment, coupled with limited communication capabilities, make navigation and localization difficult, especially in coastal regions where the majority of interesting phenomena occur. To add to this, the interesting features are themselves spatio-temporally dynamic, and effective sampling requires a good understanding of vehicle localization relative to the sampled feature. Therefore, our work is motivated by the desire to enable intelligent data collection of the complex dynamics and processes that occur in coastal ocean environments to further our understanding and prediction capabilities. The study originated from the need to localize and navigate aquatic robots in a GPS-denied environment and to examine the role of the spatio-temporal dynamics of the ocean in the localization and navigation processes. The methods and techniques needed range from data collection to the localization and navigation algorithms used on board the aquatic vehicles. The focus of this work is to develop algorithms for the localization and navigation of AUVs in GPS-denied environments. We developed an augmented terrain-based framework that incorporates physical science data (e.g., temperature, salinity, and pH) to enhance the topographic map that the vehicle uses to navigate. In this navigation scheme, the bathymetric data are combined with the physical science data to enrich the uniqueness of the underlying terrain map and increase the accuracy of underwater localization.
Another technique developed in this work addresses the problem of tracking an underwater vehicle when the GPS signal suddenly becomes unavailable. The method includes whitening the data to reveal the true statistical distance between data points and also incorporates physical science data to enhance the topographic map. Simulations were performed at Lake Nighthorse, Colorado, USA, between April 25 and May 2, 2018, and at Big Fisherman's Cove, Santa Catalina Island, California, USA, on July 13 and 14, 2016. Different missions were executed in different environments (snow, rain, and the presence of plumes). Results showed that these two methodologies for localization and tracking work for reference maps recorded within the previous week, and that the average localization error is comparable to the error found when using GPS, provided the observations are taken during the same period of the day (morning, afternoon, or night). Whitening the data gave positive results compared to localizing without whitening.
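The matching and whitening steps can be sketched as follows: a measurement of (depth, temperature) is compared against map cells after each field is whitened to zero mean and unit variance, so that neither field dominates the distance. The map values, the measurement, and the use of only two fields are invented simplifications of the augmented terrain map described above.

```python
def whiten(values):
    """Scale a field to zero mean and unit variance."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

def localize(measurement, cells):
    """Return the index of the map cell closest to the whitened measurement."""
    depths = whiten([c[0] for c in cells] + [measurement[0]])
    temps = whiten([c[1] for c in cells] + [measurement[1]])
    mz = (depths[-1], temps[-1])
    best, best_d = None, float("inf")
    for i in range(len(cells)):
        d = (depths[i] - mz[0]) ** 2 + (temps[i] - mz[1]) ** 2
        if d < best_d:
            best, best_d = i, d
    return best

# Hypothetical map cells of (depth in m, temperature in C) and one measurement.
cells = [(10.0, 15.0), (12.0, 14.5), (30.0, 9.0), (31.0, 8.8)]
measured = (29.5, 9.1)
```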
80

An Evolutionary Approach to Optimization of Compound Stock Trading Indicators Used to Confirm Buy Signals

Teeples, Allan W. 01 December 2010 (has links)
This thesis examines the application of genetic algorithms to the optimization of a composite set of technical indicator filters used to confirm or reject buy signals in stock trading, based on probabilistic values derived from historical data. The simplicity of the design, which gives each filter within the composite filter the ability to act independently of the other filters, is outlined, and the cumulative indirect effect each filter has on all the others is discussed. This system is contrasted with the complexity of systems from previous research that attempt to merge several indicator filters by giving each one a weight as a percentage of the whole, or that build a decision-tree-based rule composed of several indicators. The detrimental effects of short-term market fluctuations on the effectiveness of the optimization are considered, and attempts to mitigate these effects by reducing the length of the optimization interval are discussed. Finally, the optimized indicators are used in simulated trading on historical data. The results from the simulation are compared with the annual returns of the NASDAQ-100 Index over a period of four years. The comparison shows that the composite indicator filter is proficient enough at filtering out inferior buy signals to substantially outperform the NASDAQ-100 Index during each year of the simulation.
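The composite-filter design described above can be sketched in a few lines: each indicator filter votes independently, and a buy signal is confirmed only if every filter passes. The two filters and their thresholds below are invented placeholders, not the thesis's optimized parameters (which a genetic algorithm would tune).

```python
def sma(prices, n):
    """Simple moving average of the last n prices."""
    return sum(prices[-n:]) / n

def above_short_sma(prices):
    """Filter 1: last price is above its 3-period moving average."""
    return prices[-1] > sma(prices, 3)

def rising(prices):
    """Filter 2: last price is higher than the previous one."""
    return prices[-1] > prices[-2]

FILTERS = [above_short_sma, rising]

def confirm_buy(prices):
    """Confirm a buy signal only when all independent filters agree."""
    return all(f(prices) for f in FILTERS)
```

Because each filter acts independently, adding or removing one never requires re-weighting the others, which is the simplicity contrast the abstract draws with weighted-merge and decision-tree designs.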
