1 |
Prediction Performance of Survival Models. Yuan, Yan. January 2008.
Statistical models are often used for the prediction of
future random variables. There are two types of prediction: point
prediction and probabilistic prediction. Prediction accuracy is
quantified by performance measures, which are typically based on
loss functions. We study the estimators of these performance
measures, the prediction error and performance scores, for point and
probabilistic predictors, respectively. The focus of this thesis is
to assess the prediction performance of survival models that analyze
censored survival times. To accommodate censoring, we extend the
inverse probability censoring weighting (IPCW) method so that
arbitrary loss functions can be handled. We also develop confidence
interval procedures for these performance measures.
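To make the censoring adjustment concrete, the following is a minimal numpy sketch of an IPCW-weighted prediction error estimate. The helper names (censoring_survival, ipcw_prediction_error), the sequential handling of ties, and the absolute relative error loss at the end are illustrative assumptions, not the thesis code.

```python
import numpy as np

def censoring_survival(times, events):
    # Kaplan-Meier estimate of the censoring survival function G(t),
    # treating censoring (events == 0) as the "event"; ties are handled
    # sequentially, a common simplification.
    order = np.argsort(times)
    t, cens = times[order], 1 - events[order]
    at_risk = len(t) - np.arange(len(t))
    return t, np.cumprod(1.0 - cens / at_risk)

def ipcw_prediction_error(times, events, predictions, loss):
    # IPCW estimate of expected loss: each uncensored subject is weighted
    # by 1 / G(T_i-); censored subjects receive weight zero.
    t_sorted, surv = censoring_survival(times, events)
    idx = np.searchsorted(t_sorted, times, side="left") - 1
    G_minus = np.where(idx < 0, 1.0, surv[np.clip(idx, 0, len(surv) - 1)])
    weights = events / np.clip(G_minus, 1e-12, None)
    return np.mean(weights * loss(times, predictions))

# Absolute relative error loss, as studied in chapter 3
are_loss = lambda t, pred: np.abs(t - pred) / t
```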
We compare model-based, apparent-loss-based, and cross-validation
estimators of prediction error under model misspecification and
variable selection, for absolute relative error loss (in chapter 3)
and misclassification error loss (in chapter 4). Simulation results
indicate that cross-validation procedures typically produce reliable
point estimates and confidence intervals, whereas model-based
estimates are often sensitive to model misspecification. The methods
are illustrated for two medical contexts in chapter 5. The apparent-loss-based
and cross-validation estimators of performance scores for
probabilistic predictors are discussed and illustrated with an
example in chapter 6. We also make connections between these performance measures.
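As a sketch of how the cross-validation estimator of prediction error might be computed on top of the IPCW weighting above: fit_model and predict_time are hypothetical placeholders for any survival model's training and point-prediction routines, and the fold count is arbitrary.

```python
import numpy as np
from sklearn.model_selection import KFold

def cv_prediction_error(X, times, events, fit_model, predict_time, loss,
                        k=10, seed=0):
    # K-fold cross-validation estimate of prediction error: fit on k-1 folds,
    # evaluate the IPCW-weighted loss (ipcw_prediction_error above) on the
    # held-out fold, and average over folds.
    fold_errors = []
    for train, test in KFold(n_splits=k, shuffle=True, random_state=seed).split(X):
        model = fit_model(X[train], times[train], events[train])
        preds = predict_time(model, X[test])
        fold_errors.append(ipcw_prediction_error(times[test], events[test], preds, loss))
    return np.mean(fold_errors)
```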
|
2 |
ANALYSIS OF CONTINUOUS LEARNING MODELS FOR TRAJECTORY REPRESENTATION. Kendal Graham Norman (15344170). 24 April 2023.
Trajectory planning is a field with widespread utility, and imitation learning pipelines
show promise as an accessible training method for trajectory planning. MPNet is the state
of the art for imitation learning with respect to success rates. MPNet has two general
components to its runtime: a neural network predicts the location of the next anchor point in
a trajectory, and then planning infrastructure applies sampling-based techniques to produce
near-optimal, collision-free paths. This distinction between the two parts of MPNet prompts
investigation into the role of the neural architectures in the Neural Motion Planning pipeline,
to discover where improvements can be made. This thesis explores the importance
of neural architecture choice by removing the planning structures and comparing MPNet's
feedforward anchor point predictor with a continuous model trained to output a
continuous trajectory from start to goal. A recent state-of-the-art model in continuous learning
is the Neural Flow model. As a continuous model, its runtime has a low standard deviation,
which can be properly leveraged in the absence of planning infrastructure. Neural Flows also
output smooth, continuous trajectory curves that reduce noisy path outputs in the
absence of lazy vertex contraction. This project analyzes the performance of MPNet, ResNet
Flow, and Coupling Flow models when sampling-based planning tools such as dropout, lazy
vertex contraction, and replanning are removed. Each neural planner is trained end-to-end in
an imitation learning pipeline using a simple feedforward encoder, a CNN-based encoder,
and a PointNet encoder to encode the environment, for purposes of comparison. Results
indicate that performance is competitive: Neural Flows slightly outperform MPNet's
success rates on our reduced dataset in Simple2D and are slightly outperformed by MPNet
with respect to collision penetration distance in our UR5 Cubby test suite. These results
indicate that continuous models can compete with anchor point predictor
models when sampling-based planning techniques are not applied. Neural Flow models also
offer benefits that anchor point predictors do not, such as continuity guarantees, smoothness,
and the ability to select a proportional location in a trajectory to output.
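For illustration, here is a minimal PyTorch sketch of a feedforward anchor point predictor in the spirit of MPNet's planning network, queried iteratively without any sampling-based planning. The layer sizes, encoding dimension, and the rollout helper are assumptions for a 2D workspace, not the thesis implementation.

```python
import torch
import torch.nn as nn

class AnchorPointPredictor(nn.Module):
    # Given an environment encoding, the current point, and the goal,
    # predict the next anchor point along the trajectory.
    def __init__(self, env_dim=28, state_dim=2, hidden=256, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(env_dim + 2 * state_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, env_code, current, goal):
        return self.net(torch.cat([env_code, current, goal], dim=-1))

@torch.no_grad()
def rollout(model, env_code, start, goal, max_steps=50, tol=0.05):
    # Build a path by iterating the predictor; with planning infrastructure
    # removed, the raw network outputs form the path directly (no replanning
    # or lazy vertex contraction).
    path, current = [start], start
    for _ in range(max_steps):
        current = model(env_code, current, goal)
        path.append(current)
        if torch.norm(current - goal) < tol:
            break
    return path
```

A continuous planner of the kind compared in the thesis instead maps a start, a goal, and a time t in [0, 1] directly to a point on the trajectory, which is what allows querying a proportional location along the path rather than stepping through anchor points.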
|