
ANALYSIS OF CONTINUOUS LEARNING MODELS FOR TRAJECTORY REPRESENTATION

Kendal Graham Norman (15344170), 24 April 2023
Trajectory planning is a field with widespread utility, and imitation learning pipelines show promise as an accessible training method for trajectory planning. MPNet is the state of the art for imitation learning with respect to success rates. MPNet has two general components to its runtime: a neural network predicts the location of the next anchor point in a trajectory, and then planning infrastructure applies sampling-based techniques to produce near-optimal, collision-free paths. This distinction between the two parts of MPNet prompts investigation into the role of the neural architectures in the Neural Motion Planning pipeline, to discover where improvements can be made. This thesis explores the importance of neural architecture choice by removing the planning structures and comparing MPNet's feedforward anchor point predictor with a continuous model trained to output a continuous trajectory from start to goal. A new state-of-the-art model in continuous learning is the Neural Flow model. As a continuous model, it possesses a runtime with low standard deviation, which can be properly leveraged in the absence of planning infrastructure. Neural Flows also output smooth, continuous trajectory curves that reduce noisy path outputs in the absence of lazy vertex contraction. This project analyzes the performance of MPNet, ResNet Flow, and Coupling Flow models when sampling-based planning tools such as dropout, lazy vertex contraction, and replanning are removed. Each neural planner is trained end-to-end in an imitation learning pipeline using a simple feedforward encoder, a CNN-based encoder, and a PointNet encoder to encode the environment, for purposes of comparison. Results indicate that performance is competitive, with Neural Flows slightly outperforming MPNet's success rates on our reduced dataset in Simple2D, and being slightly outperformed by MPNet with respect to collision penetration distance in our UR5 Cubby test suite. These results indicate that continuous models can compete with the performance of anchor point predictor models when sampling-based planning techniques are not applied. Neural Flow models also have benefits that anchor point predictors do not, such as continuity guarantees, smoothness, and the ability to select a proportional location in a trajectory to output.
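
To make the architectural distinction concrete, the sketch below is a hypothetical PyTorch illustration, not code from the thesis: it contrasts an MPNet-style anchor point predictor, which must be called repeatedly to extend a path one point at a time, with a Neural-Flow-style continuous model that can be queried at any proportional location s in [0, 1] along the trajectory. The class names, layer sizes, and activations are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class AnchorPointPredictor(nn.Module):
        """MPNet-style: maps (environment encoding, current point, goal) to the next anchor point."""
        def __init__(self, env_dim, state_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(env_dim + 2 * state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, state_dim),
            )

        def forward(self, env_code, current, goal):
            # Must be iterated: each call extends the path by one anchor point.
            return self.net(torch.cat([env_code, current, goal], dim=-1))

    class ContinuousTrajectoryModel(nn.Module):
        """Neural-Flow-style: maps (environment encoding, start, goal, s) to the path point at s."""
        def __init__(self, env_dim, state_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(env_dim + 2 * state_dim + 1, hidden), nn.Tanh(),
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, state_dim),
            )

        def forward(self, env_code, start, goal, s):
            # s in [0, 1] is a proportional location along the trajectory; the whole
            # path is a single smooth function of s, so one batched forward pass over
            # a grid of s values yields a discretized path without iterative planning.
            return self.net(torch.cat([env_code, start, goal, s], dim=-1))

Under these assumptions, querying the continuous model at a grid of s values produces a smooth discretized path in a single batched forward pass, which is one way to read the abstract's point about low-variance runtimes in the absence of planning infrastructure.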

Machine Learning with Hard Constraints: Physics-Constrained Constitutive Models with Neural ODEs and Diffusion

Vahidullah Tac (19138804), 15 July 2024
Our current constitutive models of material behavior fall short of describing the mechanics of soft tissues. This is because soft tissues like skin and rubber, unlike traditional engineering materials, exhibit extremely nonlinear mechanical behavior and usually undergo large deformations. Developing accurate constitutive models for such materials requires flexible tools at the forefront of science, such as machine learning methods. However, our past experience shows that it is crucial to incorporate physical knowledge in models of physical phenomena. The past few years have witnessed the rise of physics-informed models, where the goal is to impose governing physical laws by incorporating them in the loss function. However, we argue that such "soft" constraints are not enough. This "persuasion" method has no theoretical guarantees on the satisfaction of physics and results in overly complicated loss functions that make training of the models cumbersome.

We propose imposing the relevant physical laws as "hard" constraints. In this approach, the physics of the problem is "baked into" the structure of the model, preventing the model from ever violating it. We demonstrate the power of this paradigm on a number of constitutive models of soft tissue, including hyperelasticity, viscoelasticity, and continuum damage models.

We also argue that new uncertainty quantification strategies have to be developed to address the rise in dimensionality and the inherent symmetries present in most machine learning models compared to traditional constitutive models. We demonstrate that diffusion models can be used to construct a generative framework for physics-constrained hyperelastic constitutive models.
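
As a hedged illustration of the difference between "soft" and "hard" constraints described above, the sketch below (hypothetical PyTorch code, not taken from the dissertation) builds two properties of a one-invariant hyperelastic strain energy W(I1) directly into the network structure: non-negativity for admissible isochoric deformations (I1 >= 3) and zero energy at the undeformed reference state. The class name, layer sizes, and the specific monotone parameterization are assumptions for illustration; a "soft" variant would instead penalize violations in the loss.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HardConstrainedEnergy(nn.Module):
        """Strain energy W(I1) = g(I1) - g(3), with g non-decreasing by construction.
        For admissible isochoric deformations (I1 >= 3) this gives W >= 0, and W = 0
        at the reference state, without any penalty term in the training loss."""
        def __init__(self, hidden=64):
            super().__init__()
            # Raw parameters are passed through softplus so the effective weights are
            # positive, which together with increasing activations makes g monotone in I1.
            self.w1 = nn.Parameter(torch.randn(hidden, 1) * 0.1)
            self.b1 = nn.Parameter(torch.zeros(hidden))
            self.w2 = nn.Parameter(torch.randn(1, hidden) * 0.1)

        def g(self, I1):
            h = F.softplus(I1 @ F.softplus(self.w1).T + self.b1)
            return h @ F.softplus(self.w2).T

        def forward(self, I1):
            ref = torch.full_like(I1, 3.0)
            return self.g(I1) - self.g(ref)

    def dW_dI1(model, I1):
        """Derived stress-like quantity dW/dI1 via automatic differentiation."""
        I1 = I1.clone().requires_grad_(True)
        W = model(I1)
        return torch.autograd.grad(W.sum(), I1, create_graph=True)[0]

    # A soft-constrained alternative would train an unconstrained network and add a
    # penalty such as F.relu(-W).mean() to the loss, which does not guarantee W >= 0.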
