1.
On the bias-variance tradeoff: textbooks need an update. Neal, Brayden, 12 1900
The main goal of this thesis is to point out that the bias-variance tradeoff is not always
true (e.g. in neural networks). We advocate for this lack of universality to be acknowledged
in textbooks and taught in introductory courses that cover the tradeoff.
We first review the history of the bias-variance tradeoff, its prevalence in textbooks,
and some of the main claims made about the bias-variance tradeoff. Through extensive
experiments and analysis, we show a lack of a bias-variance tradeoff in neural networks
when increasing network width. Our findings seem to contradict the claims of the landmark
work by Geman et al. (1992). Motivated by this contradiction, we revisit the experimental
measurements in Geman et al. (1992). We argue that there was never strong evidence
for a tradeoff in neural networks when varying the number of parameters. We observe a
similar phenomenon beyond supervised learning, with a set of deep reinforcement learning
experiments.
We argue that textbook and lecture revisions are in order to convey this nuanced modern
understanding of the bias-variance tradeoff.
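The width sweep described above can be reproduced in miniature: train several networks of each width on independently drawn training sets, then estimate the bias² and variance of their test-set predictions. The sketch below is a hedged illustration of that protocol on a toy regression task using scikit-learn MLPs; the dataset, widths, trial counts, and training settings are assumptions for illustration, not the thesis's actual experimental setup.

```python
# Minimal sketch (illustrative assumptions, not the thesis's code): estimate
# bias^2 and variance of MLP predictions as network width grows.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def make_data(n, noise=0.3):
    x = rng.uniform(-3, 3, size=(n, 1))
    y = np.sin(x).ravel() + rng.normal(0.0, noise, size=n)
    return x, y

x_test = np.linspace(-3, 3, 200).reshape(-1, 1)
y_true = np.sin(x_test).ravel()      # noiseless target used for the bias term

widths = [2, 8, 32, 128, 512]
n_trials = 20                        # independent training sets per width

for width in widths:
    preds = []
    for trial in range(n_trials):
        x_train, y_train = make_data(100)
        net = MLPRegressor(hidden_layer_sizes=(width,),
                           max_iter=2000, random_state=trial)
        net.fit(x_train, y_train)
        preds.append(net.predict(x_test))
    preds = np.array(preds)          # shape (n_trials, n_test)

    mean_pred = preds.mean(axis=0)
    bias_sq = np.mean((mean_pred - y_true) ** 2)
    variance = np.mean(preds.var(axis=0))
    print(f"width={width:4d}  bias^2={bias_sq:.4f}  variance={variance:.4f}")
```

Under the classical tradeoff, the variance term would be expected to grow with width; the abstract above argues that this is not what is observed as networks get wider.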
2.
Increasing Policy Network Size Does Not Guarantee Better Performance in Deep Reinforcement Learning. Zachery Peter Berg (12455928), 25 April 2022
The capacity of deep reinforcement learning policy networks has been found to affect the performance of trained agents. It has been observed that policy networks with more parameters have better training performance and generalization ability than smaller networks. In this work, we find cases where this does not hold. We observe unimodal variance in the zero-shot test return of varying-width policies, accompanied by a drop in both train and test return. Empirically, we demonstrate mostly monotonically increasing or mostly optimal performance as the width of deep policy networks increases, except near the variance mode. Finally, we find a scenario in which performance increases with network size up to a point and then decreases. We hypothesize that these observations align with the theory of double descent in supervised learning, albeit with specific differences.
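As a hedged sketch of the kind of width sweep this abstract describes, the snippet below trains PPO policies of increasing width for a few seeds each and reports the spread of their evaluation returns. The environment (CartPole-v1), widths, seed count, and timestep budget are stand-in assumptions, and evaluating on the training environment is only a proxy for the zero-shot test-return measurements in the thesis.

```python
# Illustrative only: sweep policy-network width, train a few seeds per width,
# and inspect the variance of evaluation returns across seeds.
import numpy as np
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

widths = [4, 16, 64, 256]
seeds = [0, 1, 2]

for width in widths:
    returns = []
    for seed in seeds:
        model = PPO("MlpPolicy", "CartPole-v1", seed=seed, verbose=0,
                    policy_kwargs=dict(net_arch=[width, width]))
        model.learn(total_timesteps=50_000)
        # A true zero-shot test would evaluate on held-out task variations;
        # here we evaluate on the training environment as a stand-in.
        mean_ret, _ = evaluate_policy(model, model.get_env(),
                                      n_eval_episodes=10)
        returns.append(mean_ret)
    print(f"width={width:4d}  mean return={np.mean(returns):7.1f}  "
          f"variance across seeds={np.var(returns):9.1f}")
```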
3.
Comparative Study of Methods for Linguistic Modeling of Numerical Data. Visa, Sofia, January 2002
No description available.
4.
ASSESSMENT AND PREDICTION OF CARDIOVASCULAR STATUS DURING CARDIAC ARREST THROUGH MACHINE LEARNING AND DYNAMICAL TIME-SERIES ANALYSIS. Shandilya, Sharad, 02 July 2013
In this work, new methods of feature extraction, feature selection, stochastic data characterization/modeling, variance reduction, and measures for parametric discrimination are proposed. These methods have implications for data mining, machine learning, and information theory. A novel decision-support system is developed to guide intervention during cardiac arrest. The models are built on knowledge extracted with signal-processing, nonlinear-dynamics, and machine-learning methods. The proposed ECG characterization, combined with information extracted from PetCO2 signals, shows viability for decision support in clinical settings. The approach, which focuses on integrating multiple features through machine-learning techniques, is well suited to the inclusion of multiple physiologic signals. Ventricular fibrillation (VF) is a common presenting dysrhythmia in the setting of cardiac arrest; its main treatment is defibrillation by direct-current countershock to achieve return of spontaneous circulation. Defibrillation is often unsuccessful, however, and may even lead to the transition of VF to more ominous rhythms such as asystole or pulseless electrical activity. Multiple methods have been proposed for predicting defibrillation success from the VF waveform, but to date no analytical technique has been widely accepted. For a given desired sensitivity, the proposed model provides significantly higher accuracy and specificity than the state of the art; notably, within the range of 80-90% sensitivity, it provides about 40% higher specificity. This means that, when trained to the same level of sensitivity, the model yields far fewer false positives (unnecessary shocks). Also introduced is a new model that predicts recurrence of arrest after a successful countershock is delivered; to date, no other work has sought to build such a model. The method is validated by reporting multiple performance metrics calculated on (blind) test sets.
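As a hedged illustration of the general pipeline the abstract outlines (waveform features feeding a classifier that predicts shock outcome), the sketch below computes two commonly cited VF-waveform features, amplitude spectrum area (AMSA) and median frequency, and fits an off-the-shelf classifier. The feature set, frequency band, sampling rate, and placeholder data are assumptions for illustration; they are not the characterization or models proposed in the thesis.

```python
# Hedged sketch of a VF-waveform -> features -> classifier pipeline.
# Features, band limits, and placeholder data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vf_features(segment, fs=250.0, f_lo=4.0, f_hi=48.0):
    """Return [AMSA, median frequency] for one pre-shock ECG segment."""
    amps = np.abs(np.fft.rfft(segment))
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    amsa = np.sum(amps[band] * freqs[band])      # amplitude spectrum area
    power = amps[band] ** 2
    cum = np.cumsum(power) / np.sum(power)
    median_freq = freqs[band][np.searchsorted(cum, 0.5)]
    return np.array([amsa, median_freq])

# Placeholder data standing in for real pre-shock segments and shock outcomes.
rng = np.random.default_rng(0)
segments = rng.normal(size=(40, 1000))           # 40 segments, 4 s at 250 Hz
outcomes = rng.integers(0, 2, size=40)           # 1 = return of circulation

X = np.vstack([vf_features(s) for s in segments])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, outcomes)
print("training accuracy on placeholder data:", clf.score(X, outcomes))
```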