11

Modeling Of Interfacial Instability, Conductivity And Particle Migration In Confined Flows

Daihui Lu (11730407) 03 December 2021 (has links)
This thesis analyzes three fundamental fluid dynamics problems arising from multiphase flows that may be encountered in hydraulically fractured flow passages. During hydraulic fracturing ("fracking"), complex fluids laden with proppants are pumped into tight rock formations. Flow passages in these formations are naturally heterogeneous with geometric variations, which become even more pronounced due to fracking. Upon increasing the flow area (and, thus, the conductivity of the rock), crude oil, shale gas, or other hydrocarbons can flow out of the formation more easily. In this context, we encounter three fluid mechanical phenomena: fluid–fluid interfacial instabilities, flow-wise variation of the hydraulic conductivity, and particle migration in the pumped fluids.

First, we studied the (in)stability of the interface between two immiscible liquids in angled (tapered) Hele-Shaw cells, as a model of a non-uniform flow passage. We derived expressions for the growth rate of perturbations to the flat interface and for the critical capillary number, as functions of the small gap gradient (taper). On this basis, we formulated a three-regime theory for the interface's stability. Specifically, we found a new regime in which the growth rate changes from negative to positive (converging cells), or from positive to negative (diverging cells), so that the interface's stability can change type at some location in the cell. We conducted three-dimensional OpenFOAM simulations of the Navier–Stokes equations, using the continuous surface force method, to validate the theory.

Next, we investigated the flow-wise variation of the hydraulic conductivity inside a non-uniformly shaped fracture with permeable walls. Using lubrication theory for viscous flow, in conjunction with the Beavers–Joseph–Saffman boundary condition at the permeable walls, we obtained analytical expressions for the velocity profile, the conductivity, and the wall permeation velocity. The new expressions highlight the effects of geometric variation, the permeability of the walls, and flow inertia. The theory was validated against OpenFOAM simulations of the Navier–Stokes equations subject to a tensorial slip boundary condition.

Finally, we extended the utility of phenomenological models for particle migration in shear flow using the physics-informed neural networks (PINNs) approach. We first verified the approach by solving the inverse problem of radial particle migration in a non-Brownian suspension in an annular Couette flow. Then, we applied this approach to both non-Brownian and Brownian suspensions in Poiseuille slot flow, for which a definitive calibration of the phenomenological migration model has been lacking. Using PINNs, we identified the unknown/empirical parameters in the physical model, showing that (contrary to assumptions made in the literature) they depend on the bulk volume fraction and the shear Péclet number.
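To make the PINN-based parameter identification concrete, here is a minimal sketch of an inverse problem in PyTorch, in which an unknown coefficient is trained jointly with the network weights. The residual below is a generic placeholder ODE, not the thesis's particle-migration model, and all names and values are illustrative assumptions.

```python
# Minimal PINN inverse-problem sketch (assumption: the ODE residual is a
# generic placeholder, not the calibrated migration model from the thesis).
import torch
import torch.nn as nn

class PINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 32), nn.Tanh(),
            nn.Linear(32, 32), nn.Tanh(),
            nn.Linear(32, 1),
        )
        # Unknown model parameter, identified jointly with the network weights.
        self.kappa = nn.Parameter(torch.tensor(0.1))

    def forward(self, r):
        return self.net(r)

def physics_residual(model, r):
    r = r.requires_grad_(True)
    phi = model(r)
    dphi = torch.autograd.grad(phi, r, torch.ones_like(phi), create_graph=True)[0]
    d2phi = torch.autograd.grad(dphi, r, torch.ones_like(dphi), create_graph=True)[0]
    # Placeholder steady-state balance: d2phi + kappa * dphi = 0.
    return d2phi + model.kappa * dphi

model = PINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
r_data = torch.linspace(0.1, 1.0, 50).unsqueeze(1)    # measurement locations
phi_data = torch.exp(-0.5 * r_data)                   # synthetic "measured" profile
r_col = torch.linspace(0.1, 1.0, 200).unsqueeze(1)    # collocation points

for step in range(2000):
    opt.zero_grad()
    loss = ((model(r_data) - phi_data) ** 2).mean() \
         + (physics_residual(model, r_col) ** 2).mean()
    loss.backward()
    opt.step()
print("identified parameter:", float(model.kappa))
```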
12

Graph Representation Learning for Unsupervised and Semi-supervised Learning Tasks

Mengyue Hang (11812658) 19 December 2021 (has links)
Graph representation learning and graph neural network (GNN) models provide flexible tools for modeling and representing relational data (graphs) in various application domains. In particular, node embedding methods provide continuous representations of vertices that have proved quite useful for prediction tasks, and GNNs have recently been used for semi-supervised node and graph classification with great success.

However, most node embedding methods for unsupervised tasks consider a simple, sparse graph and are mostly optimized to encode aspects of the network structure (typically local connectivity) with random walks. Moreover, GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels, which limits their expressiveness for semi-supervised node classification.

This thesis investigates methods to address these limitations, including:

(1) For heterogeneous graphs: development of a method for dense(r), heterogeneous graphs that incorporates global statistics into the negative sampling procedure, with applications in recommendation tasks (see the sketch after this list);

(2) For capturing long-range role equivalence: formalized notions of representation-based equivalence, with respect to regular/automorphic equivalence, in a single graph or multiple graph samples, employed in embedding-based models to capture long-range equivalence patterns that reflect topological roles;

(3) For collective classification: since GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels, we develop an add-on collective learning framework for GNNs that provably boosts their expressiveness for node classification beyond that of an optimal WL-GNN, using self-supervised learning and Monte Carlo sampled embeddings to incorporate node labels during inductive learning for semi-supervised node classification.
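As an illustration of item (1), the sketch below biases negative sampling by a global degree statistic (a word2vec-style 3/4-power smoothing). The toy graph, the statistic, and the sampling scheme are assumptions made for illustration; the thesis's actual procedure for dense heterogeneous graphs may differ.

```python
# Illustrative degree-biased negative sampling (graph and node names invented).
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]   # toy graph
num_nodes = 5

degree = np.zeros(num_nodes)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# Global statistic: smoothed degree distribution (3/4-power, word2vec-style).
neg_prob = degree ** 0.75
neg_prob /= neg_prob.sum()

def sample_negatives(u, k=5):
    """Draw k negative nodes for anchor u, biased by the global degree statistic."""
    negatives = rng.choice(num_nodes, size=k, p=neg_prob)
    return [int(v) for v in negatives if v != u]

print(sample_negatives(2))
```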
13

Data-Driven Anomaly and Precursor Detection in Metroplex Airspace Operations

Raj Deshmukh (8704416) 17 April 2020 (has links)
The air traffic system is one of the most complex and safety-critical systems, and operational activity within the National Airspace System is expected to grow at an average rate of 0.9% a year, from 51.8 million operations in 2018 to 62 million in 2039. In such systems, it is important to identify degradations in system performance, especially in terms of safety and efficiency. Among the operations of the air traffic system's various subsystems, arrival and departure operations in the terminal airspace require particular attention because of their outsized impact on the entire system's safety (about 75% of incidents), ranging from single-aircraft incidents to multi-airport congestion.

The first goal of this dissertation is to identify degradations of the air traffic system, called anomalies, in the multi-airport terminal (metroplex) airspace, by developing anomaly detection models that can separate anomalous flights from normal ones. Within the metroplex airspace, airport operational parameters such as runway configuration and coordination between proximal airports are major drivers of aircraft behavior. As a substantial amount of data continually records such behavior through sensing technologies and data collection capabilities, modern machine learning techniques provide powerful tools for identifying anomalous flights in the metroplex airspace. The proposed algorithm ingests heterogeneous data comprising a surveillance dataset, which represents an aircraft's physical behavior, and an airport operations dataset, which reflects operational procedures at airports. Such aviation data is typically unlabeled, so the proposed algorithm is built on hierarchical unsupervised learning approaches for anomaly detection. This base algorithm has been extended to an anomaly monitoring algorithm that uses the developed anomaly detection models to detect anomalous flights within real-time streaming data.

A natural next step after detecting anomalies is to determine their causes. This involves identifying the occurrence of precursors: triggers or conditions that precede an anomaly and have some operational correlation with its occurrence. A precursor detection algorithm is developed that learns the causes of the detected anomalies using supervised learning approaches. If detected, a precursor could be used to trigger actions that prevent the anomaly from occurring.

All proposed algorithms are demonstrated with real air traffic surveillance and operations datasets comprising departure and arrival operations at LaGuardia Airport, John F. Kennedy International Airport, and Newark Liberty International Airport, thereby detecting and predicting anomalies for all airborne operations in the terminal airspace within the New York metroplex. Critical insight into air traffic management is gained from visualizations and analysis of the results of these extensive tests, which show that the proposed algorithms have the potential to serve as decision-support tools that help pilots and air traffic controllers prevent anomalies, thus improving the safety and efficiency of metroplex airspace operations.
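The following is a minimal stand-in for unsupervised anomaly scoring on heterogeneous flight features. A Gaussian mixture density is used here as an assumed scorer, and the feature names are invented; the dissertation's hierarchical unsupervised pipeline and its surveillance and airport-operations features are more elaborate.

```python
# Toy unsupervised anomaly scoring on mixed surveillance + operations features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Pretend features: [avg altitude, avg speed] + runway-configuration one-hots.
surveillance = rng.normal(size=(500, 2))
airport_ops = rng.integers(0, 2, size=(500, 2)).astype(float)
X = np.hstack([surveillance, airport_ops])

X_scaled = StandardScaler().fit_transform(X)
gmm = GaussianMixture(n_components=4, random_state=0).fit(X_scaled)

# Flights with low likelihood under the learned density are flagged as anomalous.
scores = gmm.score_samples(X_scaled)
threshold = np.percentile(scores, 2)          # flag the lowest 2% of flights
anomalous = np.where(scores < threshold)[0]
print(f"{len(anomalous)} flights flagged as anomalous")
```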
14

Inferential GANs and Deep Feature Selection with Applications

Yao Chen (8892395) 15 June 2020 (has links)
Deep neural networks (DNNs) have become popular due to their predictive power and flexibility in model fitting. In unsupervised learning, variational autoencoders (VAEs) and generative adversarial networks (GANs) are the two most popular and successful generative models; how to combine the best of VAEs and GANs in a principled, unifying framework is a challenging task. In supervised learning, the demand for high-dimensional data analysis has grown significantly, especially in social networking, bioinformatics, and neuroscience applications; how to simultaneously approximate the true underlying nonlinear system and identify relevant features from high-dimensional data (typically with the sample size smaller than the dimension, a.k.a. small-n-large-p) is another challenging task.

In this dissertation, we provide satisfactory answers to these two challenges. In addition, we illustrate some promising applications of modern machine learning methods.

In the first chapter, we introduce a novel inferential Wasserstein GAN (iWGAN) model, a principled framework that fuses autoencoders and WGANs. GANs have been impactful on many problems and applications but suffer from unstable training. The Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the caveats of the minimax two-player training of GANs but has other defects, such as mode collapse and the lack of a metric to detect convergence. The iWGAN model jointly learns an encoder network and a generator network motivated by the iterative primal-dual optimization process. The encoder network maps observed samples to the latent space, and the generator network maps samples from the latent space to the data space. We establish a generalization error bound for iWGANs to theoretically justify their performance, and we provide a rigorous probabilistic interpretation of the model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criterion, has many advantages over other autoencoder GANs. Empirical experiments show that the iWGAN greatly mitigates mode collapse, speeds up convergence, and provides a quality-check measurement for each individual sample. We illustrate the ability of iWGANs by obtaining stable performance on benchmark datasets that is competitive with the state of the art.

In the second chapter, we present a general framework for high-dimensional nonlinear variable selection using deep neural networks in a supervised-learning setting. The network architecture includes both a selection layer and approximation layers. The problem can be cast as sparsity-constrained optimization, with a sparse parameter in the selection layer and other parameters in the approximation layers. This problem is challenging due to the sparsity constraint and the nonconvex optimization. We propose a novel algorithm, called Deep Feature Selection, to estimate both the sparse parameter and the other parameters. Theoretically, we establish algorithm convergence and selection consistency when the objective function has a Generalized Stable Restricted Hessian. This result provides theoretical justification for our method and generalizes known results for high-dimensional linear variable selection. Simulations and real data analysis are conducted to demonstrate the superior performance of our method.

In the third chapter, we develop a novel methodology to classify electrocardiograms (ECGs) into normal rhythm, atrial fibrillation, and other cardiac dysrhythmias, as defined by the PhysioNet Challenge 2017. Specifically, we use piecewise linear splines for the feature selection and a gradient boosting algorithm for the classifier. In the algorithm, the ECG waveform is fitted by a piecewise linear spline, and morphological features related to the spline coefficients are extracted. XGBoost is used to classify the morphological coefficients and heart rate variability features. The performance of the algorithm was evaluated on the PhysioNet Challenge database (3,658 ECGs classified by experts). Our algorithm achieves an average F1 score of 81% under 10-fold cross-validation and likewise 81% on the independent testing set, similar to the 9th-ranked score (81%) in the official phase of the PhysioNet Challenge 2017.

In the fourth chapter, we introduce a novel region-selection penalty in the framework of image-on-scalar regression to impose sparsity on pixel values and extract active regions simultaneously. This method helps identify regions of interest (ROIs) associated with certain diseases, which has a great impact on public health. Our penalty combines the Smoothly Clipped Absolute Deviation (SCAD) regularization, enforcing sparsity, and the SCAD of total variation (TV) regularization, enforcing spatial contiguity, into one group, which segments contiguous spatial regions against a zero-valued background. An efficient algorithm is based on the alternating direction method of multipliers (ADMM), which decomposes the nonconvex problem into two iterative optimization problems with explicit solutions. Another virtue of the proposed method is a divide-and-conquer learning algorithm that allows scaling to large images. Several examples are presented, and the experimental results are compared with other state-of-the-art approaches.
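To illustrate the selection-layer idea from the second chapter, the sketch below gates each input feature with a trainable weight placed ahead of approximation layers. Plain L1 regularization is used here as a stand-in for the thesis's sparsity-constrained Deep Feature Selection algorithm, and the synthetic small-n-large-p data are purely illustrative.

```python
# Sketch of a selection layer followed by approximation layers (L1 penalty
# stands in for the thesis's sparsity-constrained optimization).
import torch
import torch.nn as nn

class SelectionNet(nn.Module):
    def __init__(self, p, hidden=64):
        super().__init__()
        self.selection = nn.Parameter(torch.ones(p))   # one weight per input feature
        self.approx = nn.Sequential(
            nn.Linear(p, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.approx(x * self.selection)          # selection layer gates features

p, n = 200, 100                                          # small-n-large-p setting
X = torch.randn(n, p)
y = X[:, :3].sum(dim=1, keepdim=True)                    # only 3 features matter

model = SelectionNet(p)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean() + 1e-2 * model.selection.abs().sum()
    loss.backward()
    opt.step()

selected = torch.topk(model.selection.abs(), 5).indices
print("top-ranked features:", selected.tolist())
```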
15

COPING WITH LIMITED DATA: MACHINE-LEARNING-BASED IMAGE UNDERSTANDING APPLICATIONS TO FASHION AND INKJET IMAGERY

Zhi Li (8067608) 02 December 2019 (has links)
Machine learning has been revolutionizing our approach to image understanding problems. However, due to the unique nature of each problem, finding suitable data, or learning properly from limited data, is a constant challenge. In this work, we focus on building machine learning pipelines for fashion and inkjet image analysis with limited data.

We first look into the dire issue of missing and incorrect information on online fashion marketplaces. Unlike professional online fashion retailers, sellers on peer-to-peer marketplaces tend not to provide correct color category information, which is pivotal for fashion shopping. To assist users in providing correct color information, we aim to build an image understanding pipeline that can extract the garment region in a fashion image and match the color of the fashion item to pre-defined color categories on the fashion marketplace. To cope with the lack of suitable data, we propose an autonomous garment color extraction system that uses both clustering and semantic segmentation algorithms to identify and extract fashion garments in the image. In addition, a psychophysical experiment is designed to collect human subjects' color naming schema, and a random forest classifier is trained to give close predictions of the color label for the fashion item. Our system is able to perform pixel-level segmentation on fashion product portraits and to parse human body parts and various fashion categories when a person is present.

We also develop an inkjet printing analysis pipeline using pre-trained neural networks. Our pipeline is able to learn to perceive print quality, namely the high-frequency noise level, of the test targets without intensive training. Our research also suggests that, despite being trained on a large-scale dataset for object recognition, features generated from neural networks react to the textural components of the image without any localized features. In addition, we extend our pipeline to printer forensics, where it is able to identify the printer model by examining the inkjet dot pattern at a microscopic level. Overall, the data-driven computer vision approach presents great value and potential for improving future inkjet imaging technology, even when the data source is limited.
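A minimal sketch of the color pipeline's last two stages is shown below: k-means to obtain a dominant color from (already segmented) garment pixels, then a random forest mapping RGB values to marketplace color names. The pixel data, color list, and training examples are placeholders, not the thesis's psychophysical color-naming data.

```python
# Illustrative dominant-color extraction + color-name classification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend these are RGB values of pixels inside a segmented garment region.
garment_pixels = rng.integers(0, 256, size=(5000, 3)).astype(float)

# Dominant color = centroid of the largest k-means cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(garment_pixels)
largest = np.bincount(km.labels_).argmax()
dominant_rgb = km.cluster_centers_[largest]

# Toy color-naming data standing in for the human color-naming experiment.
train_rgb = np.array([[250, 10, 10], [10, 250, 10], [10, 10, 250], [240, 240, 240]], dtype=float)
train_name = np.array(["red", "green", "blue", "white"])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(train_rgb, train_name)

print("predicted color label:", clf.predict(dominant_rgb.reshape(1, -1))[0])
```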
16

Learning-Based Testing of Microservices: An Exploratory Case Study Using LBTest

Nycander, Peter January 2015 (has links)
Learning-based testing (LBT) is a relatively new testing paradigm that automatically generates test cases for black-box testing of a system under test (SUT). LBT uses machine learning to model the SUT and combines this with model-based testing. This thesis uses LBTest, a research tool created at CSC, to apply LBT to a new architectural style of distributed systems called microservices. Two new approaches to using LBT were implemented to test a commercial product for counterparty credit risk. The first approach monitors internal processes to extract the states of the software; the second is based on fault injection at the software level. Errors were found using the fault-injection approach. Finally, some general recommendations are given on how to implement LBT.
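The toy loop below sketches the learning-based testing cycle of generating inputs, observing the SUT, refining a behavioral model, and checking a property. The microservice, its states, and the safety property are invented stand-ins; LBTest itself learns Kripke structures and model-checks temporal-logic requirements, which this sketch does not reproduce.

```python
# Toy generate -> observe -> refine -> check loop (all names are invented).
import random

def sut(state, inp):
    """Pretend microservice: returns the next observable state."""
    if state == "idle" and inp == "open":
        return "session"
    if state == "session" and inp == "close":
        return "idle"
    if state == "session" and inp == "trade":
        return "session"
    return "error"                       # an injected fault would surface here

random.seed(0)
learned = {}                             # (state, input) -> observed next state
violations = []

for _ in range(200):                     # test-case generation loop
    state, trace = "idle", []
    for _ in range(5):
        inp = random.choice(["open", "close", "trade"])
        nxt = sut(state, inp)
        learned[(state, inp)] = nxt      # refine the behavioral model
        trace.append((state, inp, nxt))
        state = nxt
    # Safety property checked against the observed run: never reach "error".
    if any(nxt == "error" for _, _, nxt in trace):
        violations.append(trace)

print(f"learned {len(learned)} transitions, {len(violations)} violating traces")
```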
17

TRAJECTORY PATTERN IDENTIFICATION AND CLASSIFICATION FOR ARRIVALS IN VECTORED AIRSPACE

Chuhao Deng (11184909) 26 July 2021 (has links)
As the demand for and complexity of air traffic increase, it becomes crucial to maintain the safety and efficiency of airspace operations, which, however, can lead to an increased workload for air traffic controllers (ATCs) and delays in their decision-making processes. Although terminal airspaces are highly structured, with flight procedures such as standard terminal arrival routes and standard instrument departures, aircraft are frequently instructed by ATCs to deviate from these procedures to accommodate given traffic situations, e.g., to maintain separation from neighboring aircraft or to take shortcuts to meet scheduling requirements. Such deviation, called vectoring, can further increase delays and ATC workload. This thesis focuses on developing a framework for trajectory pattern identification and classification that can provide ATCs in vectored airspace with real-time information about which vectoring pattern a new incoming aircraft could take, so that such delays and workload can be reduced. The thesis consists of two parts: trajectory pattern identification and trajectory pattern classification.

In the first part, a framework for trajectory pattern identification is proposed based on agglomerative hierarchical clustering, with dynamic time warping and squared Euclidean distance as the dissimilarity measure between trajectories. Binary trees built on the fixes provided in the aeronautical information publication data are proposed to categorize the trajectory patterns. In the second part, multiple recurrent-neural-network-based binary classification models are trained and used at the nodes of the binary trees to compute the possible fixes an incoming aircraft could take. The trajectory pattern identification framework and the classification models are illustrated with automatic dependent surveillance-broadcast data recorded between January and December 2019 at Incheon International Airport, South Korea.
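A compact sketch of the first part's clustering step is shown below: pairwise dynamic time warping with a squared Euclidean local cost, followed by agglomerative (average-linkage) clustering. The synthetic trajectories and the choice of two clusters are illustrative assumptions.

```python
# DTW dissimilarities + agglomerative hierarchical clustering on toy trajectories.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Dynamic time warping with squared Euclidean local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum((a[i - 1] - b[j - 1]) ** 2)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
# Toy 2-D trajectories: two arrival "patterns" plus noise.
trajectories = [np.cumsum(rng.normal(loc=(0.1 * (k % 2), 0.05), scale=0.02, size=(50, 2)), axis=0)
                for k in range(10)]

n = len(trajectories)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(trajectories[i], trajectories[j])

Z = linkage(squareform(dist), method="average")   # agglomerative clustering
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster labels:", labels)
```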
18

Physics-based data-driven modeling of composite materials and structures through machine learning

Fei Tao (12437451) 21 April 2022 (has links)
Composite materials have been successfully applied in various industries, such as aerospace, automobiles, and wind turbines. Although the material properties of composites are desirable, their behaviors are complicated. Many efforts have been made to model the constitutive behavior and failure of composites, but a complete and validated methodology has not yet been achieved. Recently, machine learning techniques have attracted many researchers from the mechanics field, who seek to construct surrogate models with machine learning, such as deep neural networks (DNNs), to improve computational speed, or to employ machine learning to discover unknown governing laws and improve accuracy. Currently, the majority of studies focus on improving computational speed; few works apply machine learning to discover unknown governing laws from experimental data. In this study, we demonstrate the use of machine learning to discover unknown governing laws of composites. Additionally, we present an application of machine learning to accelerate the design optimization of a composite rotor blade.

To enable a machine learning model to discover constitutive laws directly from experimental data, we propose a framework that couples finite elements (FE) with a DNN to form a fully coupled mechanics system, FE-DNN. The framework enables data communication between FE and DNN, taking advantage of the powerful learning ability of DNNs and the versatile problem-solving ability of FE. To apply the framework to composites, we introduce a positive definite deep neural network (PDNN), forming FE-PDNN, which resolves the convergence robustness issue of learning the constitutive law of a severely damaged material. In addition, lamination theory is incorporated into the FE-PDNN mechanics system so that it can discover the lamina constitutive law from structural-level responses.

We also develop a framework that combines sparse regression with compressed sensing, leveraging advances in sparsity techniques and machine learning, to discover the failure criterion of composites from experimental data. One advantage of the proposed approach is that it does not need big data to train the model, which suits the limited size of available failure data. Unlike traditional curve-fitting techniques, which produce solutions with nonzero coefficients for all candidate functions, this framework can identify the most significant features that govern the dataset. We also compare sparse regression with DNNs to show the superiority of sparse regression on limited datasets. Additionally, we use an optimization approach to enforce a constraint on the discovered criterion so that its predictions are more conservative than the experimental data; this modification yields a conservative failure criterion that satisfies design needs.

Finally, we demonstrate the use of machine learning to accelerate the planform design of a composite rotor blade with strength considerations. Composite rotor blade planform design focuses on optimizing planform parameters to achieve higher performance. However, the strength of the material is rarely considered in planform design, as physics-based strength analysis is expensive: millions of load cases can accumulate during the optimization. Ignoring strength analysis may leave the blade operating in an unsafe or low-safety-factor region, since composite materials are anisotropic and susceptible to failure. To reduce the computational cost of blade cross-section strength analysis, we propose constructing a surrogate model, using an artificial neural network (ANN), for a beam-level failure criterion to replace the physics-based strength analysis. The surrogate model is built on the Timoshenko beam model, mapping blade loads to the strength ratios of the cross-section. The results show that the machine-learned surrogate constraint achieves the same accuracy as the physics-based simulation while significantly reducing computing time.
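The sketch below illustrates the sparse-regression idea for failure-criterion discovery: a library of candidate stress terms is assembled and a sparse fit retains only the significant ones. Lasso is used here as an assumed stand-in for the thesis's sparse-regression/compressed-sensing formulation, and the synthetic "experimental" data follow a made-up quadratic criterion.

```python
# Sparse regression over a library of candidate stress terms (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
s1, s2, s12 = rng.normal(size=(3, 300))                 # lamina stress components

# Candidate library: low-order polynomial terms of the stress state.
library = np.column_stack([s1, s2, s12, s1**2, s2**2, s12**2, s1 * s2])
names = ["s1", "s2", "s12", "s1^2", "s2^2", "s12^2", "s1*s2"]

# Synthetic failure index generated from a few of the candidates plus noise.
f = 0.8 * s1**2 + 1.5 * s12**2 - 0.3 * s1 * s2 + 0.01 * rng.normal(size=300)

model = Lasso(alpha=0.02).fit(library, f)
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-3:
        print(f"{name}: {coef:+.3f}")                    # only significant terms survive
```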
19

CODESIGN AND CONTROL OF SMART POWERED LOWER LIMB PROSTHESES

Abdelhadi, Mohamed January 2021 (has links)
No description available.
20

Reinforcement Learning enabled hummingbird-like extreme maneuvers of a dual-motor at-scale flapping wing robot

Fan Fei (7461581) 31 January 2022 (has links)
Insects and hummingbirds exhibit extraordinary flight capabilities and can simultaneously master seemingly conflicting goals: stable hovering and aggressive maneuvering, unmatched by small-scale man-made vehicles. Given a sudden looming visual stimulus at hover, a hummingbird initiates a fast backward translation coupled with a 180-degree yaw turn, followed by instant posture stabilization, in just under 10 wingbeats. At a wingbeat frequency of 40 Hz, this aggressive maneuver is accomplished in just 0.2 seconds. Flapping-wing micro air vehicles (FWMAVs) hold great promise for closing this performance gap given their agility. However, the design and control of such systems remain challenging due to various constraints.

First, the design, optimization, and system integration of a high-performance, at-scale, biologically inspired tailless hummingbird robot is presented. Designing such an FWMAV is a challenging task under size, weight, power, and actuation constraints. It is even more challenging to design such a vehicle with independently controlled wings driven by a total of only two actuators while still achieving animal-like flight performance. A detailed, systematic design solution is presented, including system modeling and analysis of the wing-actuation system, body dynamics, and control and sensing requirements. Optimization is conducted to search for the optimal system parameters, and a hummingbird robot is built and validated experimentally.

An open-source, high-fidelity dynamic simulation for FWMAVs is developed to serve as a testbed for the onboard sensing and flight control algorithms, as well as for the design and optimization of FWMAVs. For simulation validation, the hummingbird robot was recreated in the simulation, and system identification was performed to obtain the dynamics parameters. The force generation and the open-loop and closed-loop dynamic responses of simulated and experimental flights were compared and validated. The unsteady aerodynamics and highly nonlinear flight dynamics present challenging control problems for conventional and learning-based control algorithms such as reinforcement learning.

For robust transient and steady-state flight performance, a robust adaptive controller is developed to achieve stable hovering and fast maneuvering. The model-based nonlinear controller can stabilize the system and adapt to system parameter changes, such as wear and tear or thermal effects on the actuators, as well as to strong disturbances such as the ground effect. The controller is tuned in simulation and verified experimentally through hovering, fast point-to-point traversal, and rapid figure-of-eight trajectory tracking. The experimental results demonstrate state-of-the-art performance of the FWMAV in stationary hovering and fast trajectory tracking, with minimal transient and steady-state error.

To achieve animal-level maneuvering performance, especially the hummingbird's near-maximal performance during rapid escape maneuvers, we developed a hybrid flight control strategy for aggressive maneuvers. The proposed hybrid control policy combines model-based nonlinear control with model-free reinforcement learning. The model-based nonlinear controller stabilizes the system's closed-loop dynamics under disturbance and parameter variation. With the stabilized system, a model-free reinforcement learning policy trained in simulation can be optimized to achieve the desired fast movement by temporarily "destabilizing" the system during flight. Two test cases demonstrate the effectiveness of the hybrid control method: 1) a rapid escape maneuver observed in real hummingbirds, and 2) a drift-free fast 360-degree body flip. Direct simulation-to-real transfer is achieved, demonstrating hummingbird-like fast evasive maneuvers on the at-scale hummingbird robot.
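The fragment below sketches one possible structure for such a hybrid policy: a model-based stabilizing law with a learned correction added to its output. The additive-residual form, the toy double-integrator plant, and the "trained" parameters are assumptions for illustration only, not the robot's dynamics or the actual reinforcement-learning policy.

```python
# Toy hybrid control: model-based stabilizing feedback + learned residual action.
import numpy as np

def baseline_controller(x):
    """Model-based stabilizing feedback (toy PD law on a double integrator)."""
    K = np.array([4.0, 2.8])
    return -K @ x

def rl_residual(x, theta):
    """Stand-in for a trained policy: a small learned correction."""
    return float(np.tanh(theta @ x))

def step(x, u, dt=0.01):
    # Double-integrator plant standing in for the vehicle dynamics.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([0.0, 1.0])
    return x + dt * (A @ x + B * u)

theta = np.array([0.3, -0.1])            # pretend these came from RL training
x = np.array([1.0, 0.0])                 # initial error state
for _ in range(500):
    u = baseline_controller(x) + rl_residual(x, theta)   # hybrid action
    x = step(x, u)
print("final state:", x)
```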
