1.
DATA-DRIVEN MULTISCALE PREDICTION OF MATERIAL PROPERTIES USING MACHINE LEARNING ALGORITHMS
Moonseop Kim (7326788), 16 October 2019
The objective of this study is to combine molecular dynamics (MD) simulations and machine learning so that the two approaches complement each other. The study is conducted in four steps.
The first step, covering the theoretical side of molecular dynamics, concerns the development of empirical potentials for silicon nanowires. Many-body empirical potentials have been developed over the last three decades, and with the advance of supercomputers these potentials are expected to become even more useful over the next three decades. Atomistic calculations using empirical potentials can be particularly useful for understanding the structural aspects of Si or Si-H systems; however, the parameters of existing empirical potentials contain many errors. We propose a novel technique for understanding and constructing interatomic potentials with an emphasis on parameter fitting, in which the relationship between material properties and potential parameters is explained. The input database is obtained from density functional theory (DFT) calculations with the Vienna ab initio simulation package (VASP) using the projector augmented-wave method within the generalized gradient approximation. The DFT data are used in the fitting process to guarantee compatibility within the context of multiscale modeling.
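A minimal sketch of the parameter-fitting idea described above, assuming a hypothetical two-parameter pair potential and synthetic reference energies standing in for the VASP-derived database; the actual many-body (e.g. MEAM/Tersoff-style) functional forms and DFT data are not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical Lennard-Jones-like pair potential standing in for the
# many-body empirical potential whose parameters are being fitted.
def pair_energy(r, epsilon, sigma):
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

# Placeholder "DFT" reference data: interatomic distances (angstrom) and
# energies (eV) generated from known parameters plus noise, so the fit
# has a self-consistent target (the real database comes from VASP).
rng = np.random.default_rng(0)
r_ref = np.array([2.1, 2.35, 2.6, 3.0, 3.5])
e_ref = pair_energy(r_ref, 2.0, 2.1) + 0.02 * rng.standard_normal(r_ref.size)

def residuals(params):
    epsilon, sigma = params
    return pair_energy(r_ref, epsilon, sigma) - e_ref

# Fit the potential parameters so the empirical energies reproduce the reference data.
fit = least_squares(residuals, x0=[1.0, 2.5])
print("fitted epsilon, sigma:", fit.x)
```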
Second, as an application of MD simulations, this research focuses on the enhancement of mechanical properties using MEAM potentials. For instance, Young's modulus, ultimate tensile strength, true strain, true stress, and the stress-strain relationship are calculated for nanosized Cu precipitates produced by quenching and partitioning (Q&P) processing and for nanosized Fe3C-strengthened ultrafine-grained (UFG) ferritic steel. For the stress-strain calculation, the simulated structure, defined in the constant-particle-number, constant-energy, constant-volume (NVE) ensemble, is pulled in the y-direction, perpendicular to the boundary interface, to increase the strain. The strain is increased a specified number of times in a loop, and the stress is calculated at each point in the loop.
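A companion Python sketch of how the listed quantities can be recovered from the recorded stress-strain history, using made-up engineering stress-strain data; the MEAM/NVE deformation loop itself is not reproduced, and the elastic-region cutoff is an illustrative assumption. The true stress/strain conversions and the linear elastic fit are standard.

```python
import numpy as np

# Placeholder engineering stress-strain data as would be logged at each
# strain increment of the NVE tensile-loading loop (synthetic curve, Pa).
eng_strain = np.linspace(0.0, 0.10, 51)
eng_stress = 70e9 * eng_strain * np.exp(-8.0 * eng_strain)

# Standard conversions to true stress and true strain.
true_strain = np.log(1.0 + eng_strain)
true_stress = eng_stress * (1.0 + eng_strain)

# Young's modulus from a linear fit of the small-strain (elastic) region.
elastic = eng_strain < 0.01
youngs_modulus = np.polyfit(eng_strain[elastic], eng_stress[elastic], 1)[0]

# Ultimate tensile strength is the peak engineering stress.
uts = eng_stress.max()
print(f"E = {youngs_modulus / 1e9:.1f} GPa, UTS = {uts / 1e9:.2f} GPa")
```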
Third, building on the MD simulations, machine learning and peridynamics are applied to the prediction of disk damage patterns. Peridynamics is a nonlocal extension of classical continuum mechanics that is analogous to the MD model. In particular, FEM is based on partial differential equations, but partial derivatives do not exist on crack and damage surfaces. To address this problem, peridynamics, which is based on integral equations, is used to overcome deficiencies in the modeling of deformation discontinuities. In the forward problem (i), given images of damage and cracks, crack patterns are predicted from the trained data and compared to the true solutions, which are generated by varying the x and y coordinates at which the disk is hit. In the inverse problem (ii), given images of damage and cracks, the corresponding hitting location, indenter velocity, and indenter size are predicted from the trained data. Furthermore, regression analysis of the crack-pattern images is performed with neural processes to predict the crack patterns. In the regression problem, plotting the predictive variance against the training epochs confirms that the variance decreases as the number of epochs increases; the training result therefore improves gradually, and the variance ranges from 0 to 0.035. The most critical point of this study is that neural processes make accurate predictions even when the training data are missing or insufficient. With the number of context points set to 10, 100, 300, and 784, training information is deliberately omitted (context points of 10, 100, and 300), and the predictions differ when the number of context points is significantly lower. However, comparing the results for 100 and 784 context points, the predictions are very similar to each other because of the Gaussian processes underlying the neural processes. Therefore, when the training data are modeled with neural processes, missing training information can be compensated for in the predictions.
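A heavily simplified conditional-neural-process sketch in PyTorch illustrating the context-point experiment above, assuming 28x28 (784-pixel) crack images flattened into (coordinate, intensity) pairs; the network sizes, placeholder images, and untrained weights are assumptions, and only the interface of varying context points (10, 100, 300, 784) mirrors the study.

```python
import torch
import torch.nn as nn

class ConditionalNeuralProcess(nn.Module):
    """Minimal CNP: encode context (x, y) pairs, aggregate, decode targets."""
    def __init__(self, x_dim=2, y_dim=1, r_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim + y_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, r_dim))
        self.decoder = nn.Sequential(nn.Linear(x_dim + r_dim, r_dim), nn.ReLU(),
                                     nn.Linear(r_dim, 2 * y_dim))  # mean and log-variance

    def forward(self, x_ctx, y_ctx, x_tgt):
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=1, keepdim=True)
        r = r.expand(-1, x_tgt.size(1), -1)
        mean, log_var = self.decoder(torch.cat([x_tgt, r], dim=-1)).chunk(2, dim=-1)
        return mean, log_var.exp()          # predictive mean and variance per pixel

# Flatten a batch of 28x28 placeholder crack images into coordinates and intensities.
images = torch.rand(8, 28, 28)
ys = images.reshape(8, 784, 1)
grid = torch.stack(torch.meshgrid(torch.linspace(0, 1, 28),
                                  torch.linspace(0, 1, 28), indexing="ij"), -1)
xs = grid.reshape(1, 784, 2).expand(8, -1, -1)

# Untrained model, shown only to illustrate the context-point interface.
model = ConditionalNeuralProcess()
for n_context in (10, 100, 300, 784):                # deliberately omit information
    idx = torch.randperm(784)[:n_context]
    mean, var = model(xs[:, idx], ys[:, idx], xs)    # predict every pixel
    print(n_context, "context points -> mean predictive variance", var.mean().item())
```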
Finally, deep learning is applied to additional datasets beyond the MD simulation data: cryo-EM images and line trip (LT) data from power systems. For the cryo-EM data, deep learning is used to reduce the effort of selecting high-quality particles; the study proposes a learning framework whose ultimate goal is to free researchers from manually selecting high-quality particles. For line trip prediction and bad-data detection, the frequency signal is analyzed, because the frequency of a large power system changes suddenly during events such as a generator trip, a line trip, or load shedding.
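To make the reasoning behind the frequency-based analysis concrete, a minimal sketch with a synthetic frequency trace is given below: a sudden trip-like event shows up as a large rate of change of frequency (ROCOF) that can be flagged with a simple threshold. The sampling rate, noise level, event shape, and threshold are illustrative assumptions, not values from the LT dataset or the deep learning model itself.

```python
import numpy as np

# Synthetic 60 Hz system frequency sampled at 10 Hz, with a sudden dip
# standing in for a generator-trip event at t = 30 s.
t = np.arange(0, 60, 0.1)
freq = 60.0 + 0.0005 * np.random.randn(t.size)
freq[300:] -= 0.15 * (1 - np.exp(-(t[300:] - t[300]) / 2.0))

# Rate of change of frequency (Hz/s); large magnitudes indicate an event.
rocof = np.gradient(freq, t)
event_indices = np.where(np.abs(rocof) > 0.02)[0]
print("first flagged time (s):", t[event_indices[0]] if event_indices.size else None)
```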
2.
Prediction of disease spread phenomena in large dynamic topology with application to malware detection in ad hoc networks
Nadra M Guizani (8848631), 18 May 2020
Prediction techniques based on data are applied in a broad range of applications such as bioinformatics, disease spread, and mobile intrusion detection, to name a few. With the rapid emergence of online technologies, numerous techniques for collecting and storing data for prediction-based analysis have been proposed in the literature. With the growing size of the global population, the spread of epidemics is increasing at an alarming rate. Consequently, public and private health care officials are in dire need of technological solutions for managing epidemics. Most existing syndromic surveillance and disease detection systems deal with only a small portion of a real dataset. From the communication network perspective, the results reported in the literature generally deal with commonly known network topologies. Scalability of a disease detection system is a real challenge when it comes to modeling and predicting disease spread across a large population or large-scale networks.

In this dissertation, we address this challenge by proposing a hierarchical aggregation approach that classifies dynamic disease spread phenomena at different scalability levels. Specifically, we present a finite state model (SEIR-FSM) for predicting disease spread; the model manifests itself at three different levels of data aggregation and accordingly makes predictions of disease spread at various scales. We present experimental results of this model for different disease spread behaviors at all levels of granularity. Subsequently, we present a mechanism for mapping the population interaction network model to a wireless mobile network topology, with the objective of analyzing the phenomenon of malware spread based on vulnerabilities. The goal is to develop and evaluate a wireless mobile intrusion detection system that uses a hidden Markov model in connection with the FSM disease spread model (HMM-FSM).

Subsequently, we propose a software-based architecture that acts as a network function virtualization (NFV) to combat malware spread in IoT-based networks, taking advantage of the NFV infrastructure's potential to provide new security solutions for IoT environments against malware attacks. We propose a scalable and generalized IDS that uses a recurrent neural network with long short-term memory (RNN-LSTM) learning model to predict malware attacks in a timely manner so that the NFV can deploy the appropriate countermeasures. The analysis utilizes the susceptible (S), exposed (E), infected (I), and resistant (R) (SEIR) model to capture the dynamics of the spread of the malware attack and subsequently provides a patching mechanism for the network. Our analysis focuses primarily on the feasibility and performance evaluation of the proposed NFV RNN-LSTM model.
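For reference, a minimal SEIR compartment model of the kind underlying the SEIR-FSM and the SEIR-based malware analysis; the rate constants and population below are illustrative placeholders, not values from the dissertation.

```python
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma):
    s, e, i, r = y
    n = s + e + i + r
    ds = -beta * s * i / n            # susceptible become exposed on contact
    de = beta * s * i / n - sigma * e
    di = sigma * e - gamma * i        # exposed become infectious, then recover
    dr = gamma * i
    return [ds, de, di, dr]

# Illustrative parameters and initial state (one infected node in a population of 10,000).
beta, sigma, gamma = 0.5, 1 / 5.0, 1 / 7.0
y0 = [9999, 0, 1, 0]
t = np.linspace(0, 160, 161)
s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
print("peak infections:", int(i.max()), "on day", int(t[i.argmax()]))
```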
3.
SYSTEMATICALLY LEARNING OF INTERNAL RIBOSOME ENTRY SITE AND PREDICTION BY MACHINE LEARNING
Junhui Wang (5930375), 15 May 2019
Internal ribosome entry sites (IRES) are segments of mRNA found in untranslated regions that can recruit the ribosome and initiate translation independently of the more widely used 5' cap-dependent translation initiation mechanism. IRES play an important role in conditions where 5' cap-dependent translation initiation has been blocked or repressed. They have been found to play important roles in viral infection, cellular apoptosis, and response to other external stimuli. It has been suggested that about 10% of mRNAs, both viral and cellular, can utilize IRES. However, due to the limitations of the IRES bicistronic assay, which is the gold standard for identifying IRES, relatively few IRES have been definitively described and functionally validated compared to the potential overall population. Viral and cellular IRES may be mechanistically different, but this is difficult to analyze because the mechanistic differences are still not clearly defined. Identifying additional IRES is an important step towards better understanding IRES mechanisms. Development of a new bioinformatics tool that can accurately predict IRES from sequence would be a significant step forward in identifying IRES-based regulation and in elucidating IRES mechanisms. This dissertation systematically studies the features that can distinguish IRES from non-IRES sequences. Sequence features such as kmer words, and structural features such as the predicted minimum free energy (MFE) of folding, Q_MFE, and sequence/structure triplets, are evaluated as possible discriminative features. These potential features are incorporated into an IRES classifier based on XGBoost, a machine learning model, to classify novel sequences as belonging to the IRES or non-IRES group. The XGBoost model performs better than previous predictors, with higher accuracy and lower computational time. The number of features in the model has been greatly reduced, compared to previous predictors, by adding global kmer and structural features. The trained XGBoost model has been implemented as the first high-throughput bioinformatics tool for IRES prediction, IRESpy. This website provides a public tool for all IRES researchers and can be used in other genomics applications such as gene annotation and analysis of differential gene expression.
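A sketch of the global kmer-counting and XGBoost-classification step described above, assuming short placeholder sequences and binary IRES/non-IRES labels; the structural features (MFE, Q_MFE, sequence/structure triplets) and the tuned IRESpy model are not reproduced here.

```python
from itertools import product
import numpy as np
from xgboost import XGBClassifier

def kmer_counts(seq, k=3):
    """Global k-mer frequency vector over the RNA alphabet."""
    kmers = ["".join(p) for p in product("ACGU", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        if seq[i:i + k] in counts:
            counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return np.array([counts[km] / total for km in kmers])

# Placeholder training data: sequences with binary IRES (1) / non-IRES (0) labels.
sequences = ["AUGGCUACGUAGCUAGCUA", "GCGCGCGCAUAUAUAUGCG",
             "UUUACGUACGGAUCCGAUC", "CAGCAGCAGCAGCAGCAGC"]
labels = [1, 0, 1, 0]
X = np.vstack([kmer_counts(s) for s in sequences])

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, labels)
print(model.predict_proba(X)[:, 1])   # predicted IRES probability
```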
4.
Modeling Of Interfacial Instability, Conductivity And Particle Migration In Confined Flows
Daihui Lu (11730407), 03 December 2021
This thesis analyzed three fundamental fluid dynamics problems arising from multiphase flows that may be encountered in hydraulically fractured flow passages. During hydraulic fracturing ("fracking"), complex fluids laden with proppants are pumped into tight rock formations. Flow passages in these formations are naturally heterogeneous with geometric variations, which become even more pronounced due to fracking. Upon increasing the flow area (and, thus, the conductivity of the rock), crude oil, shale gas, or other hydrocarbons can then flow out of the formation more easily. In this context, we encounter the following three fluid mechanical phenomena: fluid-fluid interfacial instabilities, flow-wise variation of the hydraulic conductivity, and particle migration in the pumped fluids.

First, we studied the (in)stability of the interface between two immiscible liquids in angled (tapered) Hele-Shaw cells, as a model of a non-uniform flow passage. We derived an expression for the growth rate of perturbations to the flat interface and for the critical capillary number, as functions of the small gap gradient (taper). On this basis, we formulated a three-regime theory to describe the interface's stability. Specifically, we found a new regime in which the growth rate changes from negative to positive (converging cells), or from positive to negative (diverging cells), so the interface's stability can change type at some location in the cell. We conducted three-dimensional OpenFOAM simulations of the Navier-Stokes equations, using the continuous surface force method, to validate the theory.

Next, we investigated the flow-wise variation of the hydraulic conductivity inside a non-uniformly shaped fracture with permeable walls. Using lubrication theory for viscous flow, in conjunction with the Beavers-Joseph-Saffman boundary condition at the permeable walls, we obtained an analytical expression for the velocity profile, conductivity, and wall permeation velocity. The new expression highlights the effects of geometric variation, the permeability of the walls, and flow inertia. The theory was validated against OpenFOAM simulations of the Navier-Stokes equations subject to a tensorial slip boundary condition.

Finally, we extended the utility of phenomenological models for particle migration in shear flow using the physics-informed neural networks (PINNs) approach. We first verified the approach for solving the inverse problem of radial particle migration in a non-Brownian suspension in an annular Couette flow. Then, we applied this approach to both non-Brownian and Brownian suspensions in Poiseuille slot flow, for which a definitive calibration of the phenomenological migration model has been lacking. Using PINNs, we identified the unknown/empirical parameters in the physical model, showing that (unlike assumptions made in the literature) they depend on the bulk volume fraction and shear Péclet number.
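For reference, the Beavers-Joseph-Saffman slip condition invoked above is commonly written as below, where u is the tangential velocity at the permeable wall, k the wall permeability, alpha a dimensionless slip coefficient, and n the wall-normal direction; the exact nondimensional form used in the thesis may differ.

```latex
u \big|_{\text{wall}} = \frac{\sqrt{k}}{\alpha}\,
    \left. \frac{\partial u}{\partial n} \right|_{\text{wall}} + \mathcal{O}(k)
```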
5.
Graph Representation Learning for Unsupervised and Semi-supervised Learning Tasks
Mengyue Hang (11812658), 19 December 2021
Graph representation learning and Graph Neural Network (GNN) models provide flexible tools for modeling and representing relational data (graphs) in various application domains. Specifically, node embedding methods provide continuous representations for vertices that have proved to be quite useful for prediction tasks, and GNNs have recently been used for semi-supervised node and graph classification tasks with great success.

However, most node embedding methods for unsupervised tasks consider a simple, sparse graph, and are mostly optimized to encode aspects of the network structure (typically local connectivity) with random walks. Moreover, GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels, which makes them not expressive enough for semi-supervised node classification tasks.

This thesis investigates methods to address these limitations, including:

(1) For heterogeneous graphs: development of a method for dense(r), heterogeneous graphs that incorporates global statistics into the negative sampling procedure, with applications in recommendation tasks;
(2) For capturing long-range role equivalence: formalized notions of representation-based equivalence with respect to regular/automorphic equivalence in a single graph or multiple graph samples, which are employed in embedding-based models to capture long-range equivalence patterns that reflect topological roles;
(3) For collective classification: since GNNs model dependencies among the attributes of nearby neighboring nodes rather than dependencies among observed node labels, we develop an add-on collective learning framework for GNNs that provably boosts their expressiveness for node classification tasks, beyond that of an optimal WL-GNN, utilizing self-supervised learning and Monte Carlo sampled embeddings to incorporate node labels during inductive learning for semi-supervised node classification.
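A minimal numpy illustration of the kind of neighborhood aggregation a GNN layer performs (a single GCN-style propagation), which makes concrete the point that attributes of nearby nodes, not observed labels, drive the representation; the toy graph and random weights are placeholders, and the thesis's collective-learning add-on and heterogeneous-graph methods are not sketched here.

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN-style propagation: symmetric-normalized neighborhood average, linear map, ReLU."""
    a_hat = adj + np.eye(adj.shape[0])                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(a_norm @ features @ weights, 0.0)

# Toy graph: 4 nodes in a path, 3 attributes per node, 2 output dimensions.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.random.randn(4, 3)
weights = np.random.randn(3, 2)
print(gcn_layer(adj, features, weights))
```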
6.
Data-Driven Anomaly and Precursor Detection in Metroplex Airspace Operations
Raj Deshmukh (8704416), 17 April 2020
The air traffic system is one of the most complex and safety-critical systems, and it is expected to grow at an average rate of 0.9% a year, from 51.8 million operational activities in 2018 to 62 million in 2039, within the National Airspace System. In such systems, it is important to identify degradations in system performance, especially in terms of safety and efficiency. Among the operations of the various subsystems of the air traffic system, the arrival and departure operations in the terminal airspace require more attention because of their higher impact (about 75% of incidents) on the safety of the entire system, ranging from single-aircraft incidents to multi-airport congestion incidents.

The first goal of this dissertation is to identify the air traffic system's degradations, called anomalies, in the multi-airport terminal airspace or metroplex airspace, by developing anomaly detection models that can separate anomalous flights from normal ones. Within the metroplex airspace, airport operational parameters such as runway configuration and coordination between proximal airports are a major driving factor in aircraft behavior. As a substantial amount of data continually records such behaviors through sensing technologies and data collection capabilities, modern machine learning techniques provide powerful tools for the identification of anomalous flights in the metroplex airspace. The proposed algorithm ingests heterogeneous data comprising a surveillance dataset, which represents an aircraft's physical behaviors, and an airport operations dataset, which reflects operational procedures at airports. Typically, such aviation data is unlabeled, and thus the proposed algorithm is developed based on hierarchical unsupervised learning approaches for anomaly detection. This base algorithm has been extended to an anomaly monitoring algorithm that uses the developed anomaly detection models to detect anomalous flights within real-time streaming data.

A natural next step after detecting anomalies is to determine their causes. This involves identifying the occurrence of precursors, which are triggers or conditions that precede an anomaly and have some operational correlation to its occurrence. A precursor detection algorithm is developed that learns the causes of the detected anomalies using supervised learning approaches. If detected, a precursor could be used to trigger actions that prevent the anomaly from ever occurring.

All proposed algorithms are demonstrated with real air traffic surveillance and operations datasets, comprising departure and arrival operations at LaGuardia Airport, John F. Kennedy International Airport, and Newark Liberty International Airport, thereby detecting and predicting anomalies for all airborne operations in the terminal airspace within the New York metroplex. Critical insight regarding air traffic management is gained from visualizations and analysis of the results of these extensive tests, which show that the proposed algorithms have the potential to be used as decision-support tools that can aid pilots and air traffic controllers in mitigating anomalies before they occur, thus improving the safety and efficiency of metroplex airspace operations.
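A toy sketch of unsupervised anomaly flagging on unlabeled per-flight features; the dissertation's hierarchical models are not reproduced, and an isolation forest with made-up feature values is used here purely as a stand-in (nothing below comes from the actual surveillance or airport-operations datasets).

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Placeholder per-flight features: e.g. mean altitude deviation (ft),
# path-length ratio vs. nominal route, and a runway-configuration code.
rng = np.random.default_rng(0)
normal_flights = rng.normal([0.0, 1.05, 2.0], [50.0, 0.05, 0.5], size=(500, 3))
odd_flights = rng.normal([400.0, 1.60, 2.0], [50.0, 0.05, 0.5], size=(5, 3))
flights = np.vstack([normal_flights, odd_flights])

# Unsupervised detector: no labels are required, mirroring the unlabeled aviation data.
detector = IsolationForest(contamination=0.01, random_state=0).fit(flights)
labels = detector.predict(flights)          # -1 = anomalous, +1 = normal
print("flagged flights:", np.where(labels == -1)[0])
```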
7.
Inferential GANs and Deep Feature Selection with Applications
Yao Chen (8892395), 15 June 2020
Deep neural networks (DNNs) have become popular due to their predictive power and flexibility in model fitting. In unsupervised learning, variational autoencoders (VAEs) and generative adversarial networks (GANs) are the two most popular and successful generative models. How to provide a unifying framework combining the best of VAEs and GANs in a principled way is a challenging task. In supervised learning, the demand for high-dimensional data analysis has grown significantly, especially in applications to social networking, bioinformatics, and neuroscience. How to simultaneously approximate the true underlying nonlinear system and identify relevant features based on high-dimensional data (typically with the sample size smaller than the dimension, a.k.a. small-n-large-p) is another challenging task.

In this dissertation, we provide satisfactory answers to these two challenges. In addition, we illustrate some promising applications using modern machine learning methods.

In the first chapter, we introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse autoencoders and WGANs. GANs have been impactful on many problems and applications but suffer from unstable training. The Wasserstein GAN (WGAN) leverages the Wasserstein distance to avoid the caveats in the minimax two-player training of GANs but has other defects, such as mode collapse and the lack of a metric to detect convergence. The iWGAN model jointly learns an encoder network and a generator network motivated by the iterative primal-dual optimization process. The encoder network maps the observed samples to the latent space, and the generator network maps samples from the latent space to the data space. We establish the generalization error bound of iWGANs to theoretically justify their performance. We further provide a rigorous probabilistic interpretation of our model under the framework of maximum likelihood estimation. The iWGAN, with a clear stopping criterion, has many advantages over other autoencoder GANs. The empirical experiments show that the iWGAN greatly mitigates the symptom of mode collapse, speeds up convergence, and is able to provide a measurement of quality check for each individual sample. We illustrate the ability of iWGANs by obtaining competitive and stable performance against the state of the art on benchmark datasets.

In the second chapter, we present a general framework for high-dimensional nonlinear variable selection using deep neural networks under the framework of supervised learning. The network architecture includes both a selection layer and approximation layers. The problem can be cast as a sparsity-constrained optimization with a sparse parameter in the selection layer and other parameters in the approximation layers. This problem is challenging due to the sparse constraint and the nonconvex optimization. We propose a novel algorithm, called Deep Feature Selection, to estimate both the sparse parameter and the other parameters. Theoretically, we establish algorithm convergence and selection consistency when the objective function has a Generalized Stable Restricted Hessian. This result provides theoretical justification for our method and generalizes known results for high-dimensional linear variable selection. Simulations and real data analysis are conducted to demonstrate the superior performance of our method.

In the third chapter, we develop a novel methodology to classify electrocardiograms (ECGs) as normal, atrial fibrillation, or other cardiac dysrhythmias as defined by the PhysioNet Challenge 2017. More specifically, we use piecewise linear splines for feature selection and a gradient boosting algorithm for the classifier. In the algorithm, the ECG waveform is fitted by a piecewise linear spline, and morphological features related to the piecewise linear spline coefficients are extracted. XGBoost is used to classify the morphological coefficients and heart rate variability features. The performance of the algorithm was evaluated on the PhysioNet Challenge database (3,658 ECGs classified by experts). Our algorithm achieves an average F1 score of 81% for a 10-fold cross-validation and also achieved an F1 score of 81% on the independent testing set. This score is similar to the 9th-best score (81%) in the official phase of the PhysioNet Challenge 2017.

In the fourth chapter, we introduce a novel region-selection penalty in the framework of image-on-scalar regression to impose sparsity of pixel values and extract active regions simultaneously. This method helps identify regions of interest (ROIs) associated with certain diseases, which has a great impact on public health. Our penalty combines the Smoothly Clipped Absolute Deviation (SCAD) regularization, enforcing sparsity, and the SCAD of total variation (TV) regularization, enforcing spatial contiguity, into one group, which segments contiguous spatial regions against a zero-valued background. An efficient algorithm is based on the alternating direction method of multipliers (ADMM), which decomposes the nonconvex problem into two iterative optimization problems with explicit solutions. Another virtue of the proposed method is that a divide-and-conquer learning algorithm is developed, thereby allowing scaling to large images. Several examples are presented, and the experimental results are compared with other state-of-the-art approaches.
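A toy PyTorch sketch of the selection-layer idea from the second chapter: an elementwise gating layer in front of a small approximation network, with an L1 penalty standing in for the sparsity constraint. The synthetic data, network sizes, and penalty weight are assumptions; the actual Deep Feature Selection algorithm and its Generalized Stable Restricted Hessian analysis are not reproduced here.

```python
import torch
import torch.nn as nn

class SelectionNet(nn.Module):
    """Elementwise selection (gating) layer followed by a small approximation network."""
    def __init__(self, p, hidden=32):
        super().__init__()
        self.gates = nn.Parameter(torch.ones(p))          # one gate per input feature
        self.approx = nn.Sequential(nn.Linear(p, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.approx(x * self.gates)

# Synthetic small-n-large-p data where only the first two features matter.
n, p = 100, 50
X = torch.randn(n, p)
y = torch.sin(X[:, :1]) + 0.5 * X[:, 1:2] + 0.05 * torch.randn(n, 1)

model = SelectionNet(p)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((model(X) - y) ** 2).mean() + 1e-2 * model.gates.abs().sum()  # L1 proxy for sparsity
    loss.backward()
    opt.step()
print("features with largest |gates|:", model.gates.abs().topk(5).indices.tolist())
```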
8.
COPING WITH LIMITED DATA: MACHINE-LEARNING-BASED IMAGE UNDERSTANDING APPLICATIONS TO FASHION AND INKJET IMAGERY
Zhi Li (8067608), 02 December 2019
Machine learning has been revolutionizing our approach to image understanding problems. However, due to the unique nature of the problem, finding suitable data or learning properly from limited data is a constant challenge. In this work, we focus on building machine learning pipelines for fashion and inkjet image analysis with limited data.

We first look into the dire issue of missing and incorrect information on online fashion marketplaces. Unlike professional online fashion retailers, sellers on P2P marketplaces tend not to provide correct color categorical information, which is pivotal for fashion shopping. Therefore, to assist users in providing correct color information, we aim to build an image understanding pipeline that can extract the garment region in a fashion image and match the color of the fashion item to pre-defined color categories on the fashion marketplace. To cope with the lack of suitable data, we propose an autonomous garment color extraction system that uses both clustering and semantic segmentation algorithms to identify and extract fashion garments in the image. In addition, a psychophysical experiment is designed to collect human subjects' color naming schema, and a random forest classifier is trained to give close predictions of the color label for the fashion item. Our system is able to perform pixel-level segmentation on fashion product portraits and parse human body parts and various fashion categories with human presence.

We also develop an inkjet printing analysis pipeline using a pre-trained neural network. Our pipeline is able to learn to perceive print quality, namely the high-frequency noise level, of the test targets without intensive training. Our research also suggests that, in spite of being trained on a large-scale dataset for object recognition, features generated from neural networks react to the textural component of the image without any localized features. In addition, we expand our pipeline to printer forensics, and the pipeline is able to identify the printer model by examining the inkjet dot pattern at a microscopic level. Overall, the data-driven computer vision approach presents great value and potential for improving future inkjet imaging technology, even when the data source is limited.
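A simplified sketch of the color-labeling step: extract a dominant color from an (already segmented) garment region with k-means and map it to a marketplace color category with a random forest. The pixel values, the tiny training table of color names, and the two-step layout are illustrative assumptions; the semantic-segmentation stage and the psychophysical color-naming data are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

# Placeholder garment region: RGB pixels assumed already segmented out of the portrait.
rng = np.random.default_rng(1)
garment_pixels = rng.normal([180, 30, 40], 15, size=(2000, 3)).clip(0, 255)

# Step 1: dominant color of the garment region via k-means clustering.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(garment_pixels)
dominant = km.cluster_centers_[np.bincount(km.labels_).argmax()]

# Step 2: map the dominant RGB value to a color category with a random forest
# trained on a hypothetical table of human color-naming responses.
train_rgb = np.array([[200, 20, 30], [30, 30, 200], [240, 240, 240], [20, 20, 20]])
train_label = np.array(["red", "blue", "white", "black"])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(train_rgb, train_label)
print("dominant RGB:", dominant.round(), "->", clf.predict([dominant])[0])
```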
9.
TRAJECTORY PATTERN IDENTIFICATION AND CLASSIFICATION FOR ARRIVALS IN VECTORED AIRSPACE
Chuhao Deng (11184909), 26 July 2021
As the demand for and complexity of air traffic increase, it becomes crucial to maintain the safety and efficiency of operations in airspaces, which, however, can lead to an increased workload for Air Traffic Controllers (ATCs) and delays in their decision-making processes. Although terminal airspaces are highly structured, with flight procedures such as standard terminal arrival routes and standard instrument departures, aircraft are frequently instructed by ATCs to deviate from such procedures to accommodate given traffic situations, e.g., maintaining separation from neighboring aircraft or taking shortcuts to meet scheduling requirements. Such deviation, called vectoring, can further increase delays and the workload of ATCs. This thesis focuses on developing a framework for trajectory pattern identification and classification that can provide ATCs, in vectored airspace, with real-time information about which vectoring pattern a new incoming aircraft is likely to take, so that such delays and workload can be reduced. The thesis consists of two parts: trajectory pattern identification and trajectory pattern classification.
In the first part, a framework for trajectory pattern identification is proposed based on agglomerative hierarchical clustering, with dynamic time warping and the squared Euclidean distance as the dissimilarity measure between trajectories. Binary trees built from the fixes provided in the aeronautical information publication data are proposed in order to categorize the trajectory patterns. In the second part, multiple recurrent-neural-network-based binary classification models are trained and used at the nodes of the binary trees to compute the possible fixes an incoming aircraft could take. The trajectory pattern identification framework and the classification models are illustrated with automatic dependent surveillance-broadcast data recorded between January and December 2019 at Incheon International Airport, South Korea.
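A toy sketch of the first-part pipeline: pairwise dynamic-time-warping dissimilarities with a squared Euclidean point cost feeding agglomerative clustering. The synthetic trajectories and the complete-linkage choice are illustrative assumptions, not the thesis's ADS-B data or exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def dtw(a, b):
    """Dynamic time warping with squared Euclidean point-to-point cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.sum((a[i - 1] - b[j - 1]) ** 2)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Synthetic 2-D arrival trajectories (lat/lon stand-ins): two vectoring patterns.
rng = np.random.default_rng(2)
straight = [np.column_stack([np.linspace(0, 1, 40), np.zeros(40)])
            + 0.01 * rng.standard_normal((40, 2)) for _ in range(5)]
dogleg = [np.column_stack([np.linspace(0, 1, 40), np.linspace(0, 0.4, 40)])
          + 0.01 * rng.standard_normal((40, 2)) for _ in range(5)]
trajectories = straight + dogleg

# Pairwise DTW dissimilarities, then agglomerative (complete-linkage) clustering.
n = len(trajectories)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw(trajectories[i], trajectories[j])
labels = fcluster(linkage(squareform(dist), method="complete"), t=2, criterion="maxclust")
print("cluster labels:", labels)
```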
10.
Physics-based data-driven modeling of composite materials and structures through machine learning
Fei Tao (12437451), 21 April 2022
Composite materials have been successfully applied in various industries such as aerospace, automobiles, and wind turbines. Although the material properties of composites are desirable, their behaviors are complicated. Many efforts have been made to model the constitutive behavior and failure of composites, but a complete and validated methodology has not yet been achieved. Recently, machine learning techniques have attracted many researchers in the mechanics field, who seek either to construct surrogate models with machine learning, such as deep neural networks (DNNs), to improve computational speed, or to employ machine learning to discover unknown governing laws to improve accuracy. Currently, the majority of studies focus on improving computational speed; few works focus on applying machine learning to discover unknown governing laws from experimental data. In this study, we demonstrate the use of machine learning to discover unknown governing laws of composites. Additionally, we present an application of machine learning to accelerate the design optimization of a composite rotor blade.

To enable a machine learning model to discover constitutive laws directly from experimental data, we propose a framework that couples the finite element method (FE) with a DNN to form a fully coupled mechanics system, FE-DNN. The framework enables data communication between FE and the DNN, which takes advantage of the powerful learning ability of the DNN and the versatile problem-solving ability of FE. To apply the framework to composites, we introduce a positive definite deep neural network (PDNN), forming FE-PDNN, which resolves the convergence robustness issue of learning the constitutive law of a severely damaged material. In addition, lamination theory is introduced into the FE-PDNN mechanics system to enable FE-PDNN to discover the lamina constitutive law from structural-level responses.

We also developed a framework that combines sparse regression with compressed sensing, leveraging advances in sparsity techniques and machine learning, to discover the failure criterion of composites from experimental data. One advantage of the proposed approach is that the framework does not need big data to train the model, a feature that satisfies the current constraint on the size of failure datasets. Unlike traditional curve-fitting techniques, which result in a solution with nonzero coefficients for all the candidate functions, this framework can identify the most significant features that govern the dataset. In addition, we conducted a comparison between sparse regression and a DNN to show the superiority of sparse regression on a limited dataset. We also used an optimization approach to enforce a constraint on the discovered criterion so that the predictions are more conservative than the experimental data; this modification yields a conservative failure criterion that satisfies design needs.

Finally, we demonstrated the use of machine learning to accelerate the planform design of a composite rotor blade with strength considerations. Composite rotor blade planform design focuses on optimizing planform parameters to achieve higher performance. However, the strength of the material is rarely considered in planform design, as physics-based strength analysis is expensive: millions of load cases can be accumulated during the optimization. Ignoring strength analysis may result in the blade operating in an unsafe or low-safety-factor region, since composite materials are anisotropic and susceptible to failure. To reduce the computational cost of the blade cross-section strength analysis, we propose constructing a surrogate model using an artificial neural network (ANN) for a beam-level failure criterion to replace the physics-based strength analysis. The surrogate model is constructed based on the Timoshenko beam model, where the mapping is between the blade loads and the strength ratios of the cross-section. The results show that the machine learning surrogate constraint achieves the same accuracy as the physics-based simulation while significantly reducing computing time.
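A minimal illustration of the sparse-regression idea for failure-criterion discovery, using a Lasso fit over a small library of candidate stress terms; the synthetic quadratic criterion, the candidate library, and the regularization weight are assumptions standing in for the thesis's compressed-sensing formulation and experimental failure data.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic "failure" stress states generated from a hypothetical quadratic
# criterion  0.6*s1^2 + 0.9*s2^2 - 0.3*s1*s2 + 2.0*t12^2 = 1  (not thesis data).
rng = np.random.default_rng(3)
s1, s2, t12 = rng.uniform(-1.5, 1.5, (3, 400))
scale = np.sqrt(0.6 * s1**2 + 0.9 * s2**2 - 0.3 * s1 * s2 + 2.0 * t12**2)
s1, s2, t12 = s1 / scale, s2 / scale, t12 / scale     # project points onto the envelope

# Candidate-function library; sparse regression should keep only the four true terms.
library = np.column_stack([s1, s2, s1**2, s2**2, s1 * s2, t12**2, s1 * t12, s2 * t12])
names = ["s1", "s2", "s1^2", "s2^2", "s1*s2", "t12^2", "s1*t12", "s2*t12"]

# Fit f(sigma) = 1 at failure with an L1 penalty to drive irrelevant coefficients to zero.
model = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(library, np.ones(len(s1)))
for name, coef in zip(names, model.coef_):
    if abs(coef) > 1e-2:
        print(f"{name}: {coef:.3f}")
```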