11

Reinforcement Learning enabled hummingbird-like extreme maneuvers of a dual-motor at-scale flapping wing robot

Fan Fei (7461581) 31 January 2022 (has links)
Insects and hummingbirds exhibit extraordinary flight capabilities and can simultaneously master seemingly conflicting goals: stable hovering and aggressive maneuvering, a combination unmatched by small-scale man-made vehicles. Given a sudden looming visual stimulus at hover, a hummingbird initiates a fast backward translation coupled with a 180-degree yaw turn, followed by instant posture stabilization in just under 10 wingbeats. At a wingbeat frequency of 40 Hz, this aggressive maneuver is accomplished in just 0.2 seconds. Flapping Wing Micro Air Vehicles (FWMAVs) hold great promise for closing this performance gap given their agility. However, the design and control of such systems remain challenging due to various constraints.

First, the design, optimization, and system integration of a high-performance, at-scale, biologically inspired tail-less hummingbird robot is presented. Designing such an FWMAV is a challenging task under size, weight, power, and actuation constraints. It is even more challenging to design such a vehicle with independently controlled wings driven by a total of only two actuators and still achieve animal-like flight performance. A detailed, systematic design solution is presented, including system modeling and analysis of the wing-actuation system, body dynamics, and control and sensing requirements. Optimization is conducted to search for the optimal system parameters, and a hummingbird robot is built and validated experimentally.

An open-source, high-fidelity dynamic simulation for FWMAVs is developed to serve as a testbed for onboard sensing and flight control algorithms, as well as for the design and optimization of FWMAVs. For simulation validation, the hummingbird robot was recreated in simulation, and system identification was performed to obtain the dynamics parameters. Force generation and the open-loop and closed-loop dynamic responses of simulated and experimental flights were compared and validated. The unsteady aerodynamics and highly nonlinear flight dynamics present challenging control problems for both conventional and learning-based control algorithms such as reinforcement learning.

For robust transient and steady-state flight performance, a robust adaptive controller is developed to achieve stable hovering and fast maneuvering. The model-based nonlinear controller stabilizes the system and adapts to parameter changes such as wear and tear or thermal effects on the actuators, as well as to strong disturbances such as ground effect. The controller is tuned in simulation and experimentally verified through hovering, fast point-to-point traversal, and rapid figure-of-eight trajectory tracking. The experimental results demonstrate state-of-the-art FWMAV performance in stationary hovering and fast trajectory tracking, with minimal transient and steady-state error.

To achieve animal-level maneuvering performance, especially the hummingbirds' near-maximal performance during rapid escape maneuvers, we developed a hybrid flight control strategy for aggressive maneuvers. The proposed hybrid control policy combines model-based nonlinear control with model-free reinforcement learning. The model-based nonlinear control stabilizes the system's closed-loop dynamics under disturbance and parameter variation. With the stabilized system, a model-free reinforcement learning policy trained in simulation can be optimized to achieve the desired fast movements by temporarily "destabilizing" the system during flight. Two test cases demonstrate the effectiveness of the hybrid control method: 1) a rapid escape maneuver observed in real hummingbirds, and 2) a drift-free fast 360-degree body flip. Direct simulation-to-real transfer is achieved, demonstrating hummingbird-like fast evasive maneuvers on the at-scale hummingbird robot.
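As an illustration of the hybrid control idea described in this abstract, here is a minimal sketch that combines a stabilizing controller with a learned residual action; the PD-style stabilizer, gains, and function names are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def model_based_stabilizer(state, setpoint, kp=2.0, kd=0.4):
    """Hypothetical PD-style attitude stabilizer standing in for the
    model-based nonlinear controller (gains are illustrative)."""
    err = setpoint - state[:3]   # attitude error
    derr = -state[3:6]           # angular-rate damping
    return kp * err + kd * derr

def hybrid_policy(state, setpoint, rl_policy, maneuver_active):
    """Add a learned residual that temporarily 'destabilizes' the vehicle
    on top of the stabilizing command to produce a fast maneuver."""
    u_stab = model_based_stabilizer(state, setpoint)
    u_rl = rl_policy(state) if maneuver_active else np.zeros(3)
    return np.clip(u_stab + u_rl, -1.0, 1.0)   # saturate to actuator limits

# usage: a dummy learned policy commanding a large yaw torque during a maneuver
dummy_rl_policy = lambda s: np.array([0.0, 0.0, 0.8])
u = hybrid_policy(np.zeros(6), np.zeros(3), dummy_rl_policy, maneuver_active=True)
```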
12

Online Covering: Efficient and Learning-Augmented Algorithms

Young-san Lin (12868319) 14 June 2022 (has links)
We start by slightly modifying the generic framework for solving online covering and packing linear programs (LPs) proposed in the seminal work of Buchbinder and Naor (Mathematics of Operations Research, 34, 2009) to obtain efficient implementations in settings where one has access to a separation oracle.

We then apply the generic framework to several online network connectivity problems with LP formulations, namely pairwise spanners and directed Steiner forests. Our results are comparable to the previous state-of-the-art results for these problems in the offline setting.

Further, we extend the generic framework to online optimization problems enhanced with machine-learning predictions. In particular, we present learning-augmented algorithms for online covering LPs and semidefinite programs (SDPs), which outperform any optimal online algorithm when the prediction is accurate while maintaining reasonable guarantees when the prediction is misleading. Specifically, we obtain general online learning-augmented algorithms for covering LPs with fractional advice and general constraints, and we initiate the study of learning-augmented algorithms for covering SDPs.
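For context, a minimal sketch of a multiplicative-update scheme for online covering LPs in the spirit of Buchbinder and Naor: covering constraints arrive one at a time and variables are only ever increased. The update rule and step size here are illustrative, not the exact framework developed in the thesis.

```python
import numpy as np

def online_covering_step(x, a, cost, rate=0.01):
    """Handle one arriving covering constraint a @ x >= 1 by multiplicatively
    increasing the variables until it is satisfied (monotone updates only)."""
    while a @ x < 1.0:
        # grow each variable proportionally to its coefficient / cost ratio
        x += rate * (a / cost) * (x + 1.0 / len(x))
    return x

# usage: three variables with unit costs, two covering constraints arriving online
cost = np.ones(3)
x = np.zeros(3)
for a in [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])]:
    x = online_covering_step(x, a, cost)
print(x, "objective:", cost @ x)
```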
13

The Hanabi challenge: From Artificial Teams to Mixed Human-Machine Teams

Inferadi, Salam; Johnsson, Olof January 2022 (has links)
This report describes the continued development of the graphical user interface (GUI) for the Hanabi benchmark. Hanabi is a card game that has been introduced as a new frontier for artificial intelligence (AI) research. The goal of the project was to implement a human user in the GUI and make it possible to play with machine learning (ML) based agents, i.e., non-human players. To achieve these goals, we implemented human controls in the GUI so that a human user can play the game. Agent models were integrated into the GUI for the human user to play with. Finally, a small user study was conducted to evaluate the performance of the different agents.
14

Machine learning-based mobile device in-air signature authentication

Yubo Shao (14210069) 05 December 2022 (has links)
In the last decade, people have been surrounded by mobile devices such as smartphones, smartwatches, laptops, smart TVs, tablets, and IoT devices. Since sensitive personal information such as photos, messages, contact information, schedules, and bank accounts is stored on mobile devices today, the security and protection of such information are becoming more and more important. Today's mobile devices are equipped with a variety of embedded sensors, such as accelerometers, gyroscopes, magnetometers, cameras, GPS sensors, and acoustic sensors, that produce raw data on location, motion, and the environment around us. Based on these sensor data, this dissertation proposes novel in-air signature authentication technologies for both smartphones and smartwatches. In-air signature authentication, an essential behavioral biometric trait, has been adopted for identity verification and user authorization, and the development of deep neural networks has vastly facilitated this field. This dissertation examines two challenging problems. One is how to deploy machine learning techniques to authenticate user in-air signatures in more convenient, intuitive, and secure ways using a smartphone or smartwatch in daily settings. The other is how to deal with the limited computational resources of today's mobile devices, which restrict the use of machine learning models due to the substantial computational costs introduced by millions of parameters.

To address these two problems, we conduct the following research. 1) The first work, AirSign, leverages both the built-in acoustic and motion sensors of today's smartphones for user authentication by signing signatures in the air, without requiring any special hardware. The system actively transmits inaudible acoustic signals from the earpiece speaker, receives echoes back through both built-in microphones to "illuminate" the signature and hand geometry, and authenticates users according to unique features extracted from the echoes and motion sensors. 2) The second work, DeepWatchSign, leverages the built-in motion sensors of today's smartwatches for user in-air signature authentication. The system adopts an LSTM-AutoEncoder to generate negative signature data automatically from the enrolled signatures and authenticates each user with a deep neural network model. 3) We close this dissertation with an l0-based sparse group lasso approach called MobilePrune, which compresses deep learning models for both desktop and mobile platforms. This approach adopts a group lasso penalty to enforce sparsity at the group level, which benefits General Matrix Multiply (GEMM), and optimizes the l0 norm in an exact manner. We observe substantial compression ratios and reduced computational costs for deep learning models. The method also achieves lower response delay and battery consumption on mobile devices.
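As an illustration of the group-level sparsity that MobilePrune builds on, here is a minimal sketch of a group-lasso-style penalty over the filters of a convolutional layer; the exact l0-based formulation and the grouping used in the dissertation differ, so this only sketches the general idea under assumed shapes and weights.

```python
import numpy as np

def group_lasso_penalty(conv_weights, lam=1e-3):
    """Sum of l2 norms over filter groups: pushing a whole filter's norm to
    zero lets that filter be pruned, which benefits dense GEMM kernels."""
    # conv_weights shape: (out_channels, in_channels, kH, kW); one group per filter
    penalty = 0.0
    for filt in conv_weights:
        group = filt.ravel()
        penalty += np.sqrt(group.size) * np.linalg.norm(group)
    return lam * penalty

# usage: 8 filters of shape 3x3x3; the penalty would be added to the task loss
w = np.random.randn(8, 3, 3, 3)
print("group-lasso penalty:", group_lasso_penalty(w))
```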
15

Multi-Scale and Multi-Rate Neural Networks for Intelligent Bearing Fault Diagnosis System

Xiaofan Liu (14265413) 15 December 2022 (has links)
Roller bearings are among the most common components in the machine industry, and their operating condition is closely tied to production efficiency. Bearing failure during operation causes downtime and severe economic losses. To prevent this, developing effective bearing fault diagnosis methods has become a popular research topic. This thesis first validates several popular bearing diagnosis methods based on signal processing and machine learning. Second, a novel signal feature extraction method, sparse wavelet packet transform (WPT) decomposition, and a corresponding feature learning model, the multi-scale and multi-rate convolutional neural network (MSMR-CNN), are proposed. Finally, the proposed method is verified on both the Case Western Reserve University (CWRU) dataset and a self-collected dataset. The results demonstrate that the proposed MSMR-CNN method achieves higher bearing fault classification accuracy than recently proposed machine learning and neural network methods.
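For illustration, a minimal sketch of wavelet packet energy features of the kind a WPT front end could feed into a fault classifier, using the PyWavelets library; the wavelet, decomposition level, and feature choice are assumptions, not the configuration used in the thesis.

```python
import numpy as np
import pywt  # PyWavelets

def wpt_energy_features(signal, wavelet="db4", level=3):
    """Decompose a vibration signal with a wavelet packet transform and
    return the normalized energy of each terminal sub-band."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    return energies / energies.sum()

# usage: a synthetic one-second segment standing in for a bearing vibration signal
t = np.linspace(0, 1, 2048)
x = np.sin(2 * np.pi * 120 * t) + 0.3 * np.random.randn(t.size)
print(wpt_energy_features(x))   # 2**level = 8 sub-band energy features
```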
16

mustafa_ali_dissertation.pdf

Mustafa Fayez Ahmed Ali (14171313) 30 November 2022 (has links)
Energy-efficient machine learning accelerator design
17

Maskininlärningsmetoder för bildklassificering av elektroniska komponenter / Machine learning based image classification of electronic components

Goobar, Leonard January 2013 (has links)
Micronic Mydata AB develops and builds machines for automatically mounting electronic components onto printed circuit boards (PCBs), so-called pick-and-place (PnP) machines. Before being mounted, the components are located and inspected optically to ensure that they are intact and picked correctly. A component may, for example, be picked sideways, picked vertically, or not picked at all. The current vision system computes parameters such as length, width, and contrast.

The project aims to investigate and test machine learning approaches for classifying the picking errors that can occur in the machine, and thereby to reduce both the number of defective components that are mounted and the number of components that are falsely rejected. A large database of manually classified components, with their computed parameters and images, is available and can be used as training data for the machine learning approaches being investigated. The project also examines how such machine learning approaches fit into mechatronic products in general, with respect to issues such as real-time constraints.

Four machine learning approaches have been evaluated and verified against a test set on which the current implementation performs very well, both using the currently computed parameters and using an alternative method that extracts parameters (so-called SIFT descriptors) from the raw images. The current parameters can be used with an SVM or an ANN to achieve results that reduce the number of defective mounted components by up to 64 %, meaning these errors can be reduced without upgrading the current vision algorithms. Using SIFT descriptors together with an ANN or an SVM, the more common error classes can be classified with accuracies of up to approximately 97 %, which greatly exceeds the results achieved with the currently computed parameters.
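As an illustration of the SIFT-plus-classifier pipeline discussed above, here is a minimal sketch using OpenCV and scikit-learn; pooling descriptors by averaging and the SVM settings are assumptions for illustration, not the thesis or Micronic Mydata implementation.

```python
import cv2                      # OpenCV build that includes SIFT
import numpy as np
from sklearn.svm import SVC

def sift_feature(image_gray, dim=128):
    """Average the SIFT descriptors of an image into one fixed-length vector."""
    sift = cv2.SIFT_create()
    _, descriptors = sift.detectAndCompute(image_gray, None)
    if descriptors is None:                 # no keypoints found
        return np.zeros(dim, dtype=np.float32)
    return descriptors.mean(axis=0)

def train_pick_error_classifier(images, labels):
    """Fit an SVM on pooled SIFT features of manually labelled pick images."""
    X = np.stack([sift_feature(img) for img in images])
    clf = SVC(kernel="rbf", C=10.0)
    return clf.fit(X, labels)

# usage (dummy data): two random 64x64 grayscale 'component' images
imgs = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(2)]
clf = train_pick_error_classifier(imgs, labels=[0, 1])
```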
18

Digital Image Processing And Machine Learning Research: Digital Color Halftoning, Printed Image Artifact Detection And Quality Assessment, And Image Denoising.

Yi Yang (12481647) 29 April 2022 (has links)
To begin with, we describe a project in which three screens for the cyan, magenta, and yellow colorants were designed jointly using the Direct Binary Search (DBS) algorithm. The screen set generated by the algorithm can be used to halftone color images easily and quickly. The halftoning results demonstrate that, by utilizing the screen sets, it is possible to obtain high-quality color halftone images while significantly reducing computational complexity.

Our next research focus is defect detection and quality assessment of printed images. We measure and analyze macro-uniformity, banding, and color plane misregistration. For these three defects, we designed separate pipelines and developed a series of digital image processing and computer vision algorithms to quantify and evaluate the printed image defects. Additionally, we conducted a human psychophysical experiment to collect perceptual assessments and used machine learning approaches to predict image quality scores based on human vision.

We also study modern deep convolutional neural networks for image denoising and propose a network designed for AWGN image denoising. Our network removes the bias at each layer to obtain the benefits of a scaling-invariant network, and it uses a mixed loss function to boost performance. We train and evaluate our denoising results using PSNR, SSIM, and LPIPS, and demonstrate impressive performance on both objective and subjective IQA assessments.
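As an illustration of the bias-removal idea for an AWGN denoiser, here is a minimal PyTorch sketch; the depth, width, and mixed-loss weighting are placeholders, not the network from the dissertation.

```python
import torch
import torch.nn as nn

class BiasFreeDenoiser(nn.Module):
    """Small residual CNN with all biases removed so the mapping is
    scale-invariant: denoising a * x gives a times the denoised x."""
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1, bias=False), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1, bias=False)]
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)     # predict and subtract the noise

def mixed_loss(pred, target, alpha=0.8):
    """Illustrative mix of L1 and MSE terms."""
    return alpha * nn.functional.l1_loss(pred, target) + \
           (1 - alpha) * nn.functional.mse_loss(pred, target)

# usage: run a random batch through the untrained network
x = torch.randn(2, 1, 32, 32)
print(mixed_loss(BiasFreeDenoiser()(x), x).item())
```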
19

UBIQUITOUS HUMAN SENSING NETWORK FOR CONSTRUCTION HAZARD IDENTIFICATION USING WEARABLE EEG

Jungho Jeon (13149345) 25 July 2022 (has links)
Hazard identification is one of the most significant components of safety management at construction jobsites for preventing fatalities and injuries of construction workers. The current practice, which relies on a limited number of safety managers' manual and subjective inspections, and existing research efforts that analyze workers' physical and physiological signals have achieved limited success, leaving many hazards unidentified at jobsites. Motivated by this critical need, this research aims to develop a human sensing network that allows ubiquitous hazard identification in the construction workplace.

To attain this overarching goal, this research analyzes construction workers' collective EEG signals, collected from wearable EEG sensors, using machine learning, virtual reality (VR), and advanced signal processing techniques. The three specific research objectives are: (1) establishing a relationship between EEG signals and the existence of construction hazards, (2) identifying correlations between EEG signals and physiological states (e.g., emotion) and different hazard types, and (3) developing an integrated platform for real-time construction hazard mapping and comparing results obtained in VR and real-world experimental settings.

Specifically, the first objective establishes the relationship by investigating the feasibility of identifying construction hazards using a binary EEG classifier developed in VR, which can capture EEG signals associated with perceived hazards. The second objective discovers the correlations by testing the feasibility of differentiating construction hazard types with a multi-class classifier constructed in VR. In the first and second objectives, the complex relationships are also analyzed in terms of brain dynamics and EEG signal components. In the third objective, the platform is developed by fusing EEG signals with heterogeneous data (e.g., location), and the discrepancies between VR and real-world environments are quantitatively assessed in terms of hazard identification performance and human behavioral responses.

The primary outcome of this research is that the proposed approach can be applied to actual construction jobsites and used to detect all potential hazards, which has been difficult to achieve with current practice and existing research efforts. The human cognitive mechanisms examined in this research also reveal new neurocognitive knowledge about construction workers' hazard perception. As a result, this research contributes to enhancing current hazard identification capability and improving construction workers' safety and health.
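For illustration, a minimal sketch of a band-power-based binary EEG hazard classifier of the general kind described above, using SciPy and scikit-learn; the frequency bands, window length, and classifier are assumptions, not the configuration used in this research.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}   # assumed bands

def band_power_features(eeg_window, fs=256):
    """Per-channel spectral power in a few canonical EEG bands."""
    freqs, psd = welch(eeg_window, fs=fs, axis=-1)   # psd shape: (channels, freqs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

# usage: dummy 8-channel, 2-second windows labelled hazard / no hazard
rng = np.random.default_rng(1)
X = np.stack([band_power_features(rng.normal(size=(8, 512))) for _ in range(40)])
y = rng.integers(0, 2, size=40)
clf = RandomForestClassifier(n_estimators=50).fit(X, y)
```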
20

ADDRESSING DATA IMBALANCE IN BREAST CANCER PREDICTION USING SUPERVISED MACHINE LEARNING

Shuning Yin (13169550) 28 July 2022 (has links)
Every 12 minutes, 12 women are diagnosed with breast cancer in the US, and one dies of it. Globally, a woman loses her life to breast cancer every 46 seconds, amounting to more than 1,800 deaths every day. These figures make breast cancer prediction very important. To this end, supervised machine learning (ML) methods are used to predict breast cancer likelihood. However, because real-world data are imbalanced, with a very low proportion of positive cases, the prediction accuracy of ML models for positive cancer cases is limited. Two procedures were carried out in this study to address the issue. First, four supervised ML models, Naïve Bayes (NB), Logistic Regression (LR), Support Vector Machine (SVM), and Multilayer Perceptron (MLP), were applied in WEKA, an industry-standard tool, to the Breast Cancer Surveillance Consortium (BCSC) dataset to assess the impact of data imbalance on breast cancer prediction. Second, balanced (24,558 cases; 12,279 for each of the positive and negative classes) and unbalanced (99,000 negative cases) training datasets and a non-overlapping testing dataset (11,000 cases) were manually built from the same data, and a decision support system was developed for two ML models, NB and LR, to tackle the class imbalance issue in breast cancer prediction. Overall, the results indicate that MLP had the best performance on positive breast cancer prediction, with a sensitivity of 0.959 and a PPV of 0.907, and that the balanced dataset produced better results than the unbalanced dataset for all ML models. Furthermore, the proposed method improved the sensitivity of positive cancer case prediction from 0.687 to 0.936 using the NB model and from 0.358 to 0.8306 using the LR model. This improvement demonstrates that the approach keeps higher-confidence ML-based predictions and filters out weaker ones, and that the technique can efficiently address the class imbalance issue in breast cancer likelihood prediction and be used in clinical practice.
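As an illustration of comparing balanced and unbalanced training sets, analogous to the WEKA experiments described above, here is a minimal scikit-learn sketch on synthetic data; the undersampling strategy, model, and synthetic dataset are assumptions for illustration, not the BCSC study setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

def balance_by_undersampling(X, y, rng=np.random.default_rng(0)):
    """Undersample the majority (negative) class so both classes have equal counts."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep_neg = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep_neg])
    return X[idx], y[idx]

# usage: a synthetic 5%-positive dataset standing in for imbalanced clinical data
rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 6)); y = (rng.random(4000) < 0.05).astype(int)
X[y == 1] += 0.8                      # give positives a detectable shift
Xb, yb = balance_by_undersampling(X, y)
for name, (Xt, yt) in {"unbalanced": (X, y), "balanced": (Xb, yb)}.items():
    model = LogisticRegression(max_iter=200).fit(Xt, yt)
    # sensitivity evaluated on the full set, for brevity of the sketch
    print(name, "sensitivity:", recall_score(y, model.predict(X)))
```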
