1 |
Simple Bivalency Proofs of the Lower Bounds in Synchronous Consensus Problems. Wang, Xianbing; Teo, Yong Meng; Cao, Jiannong. 01 1900 (has links)
A fundamental problem of fault-tolerant distributed computing is for the reliable processes to reach a consensus. For a synchronous distributed system of n processes in which up to t crash failures may occur and f failures actually occur, we prove using a straightforward bivalency argument that the lower bound for reaching uniform consensus is (f + 2) rounds in the case of 0 < f ≤ t - 2, and a new lower bound for early-stopping consensus is min (t + 1, f + 2) rounds, where 0 ≤ f ≤ t. Both proofs are simpler and more intuitive than traditional methods such as backward induction. Our main contribution is that we solve the open problem of proving that bivalency can be applied to show the (f + 2)-round lower bound for synchronous uniform consensus. / Singapore-MIT Alliance (SMA)
|
2 |
A Low-Power Implementation of Turbo Decoders. Tang, Weihua. January 2007 (has links)
In the 3G standards, a wireless communication system can support 2 Mb/s. With this data rate, multimedia communication is realized on handsets. However, it is expected that new applications will require even higher data rates in the future. In order to fulfil the growing requirement for high data rates, 100 Mb/s is considered the aim of the 4G standards. Such a high data rate will result in very large power consumption, which is unacceptable given current battery capability. Therefore, reducing the power consumption of turbo decoders becomes a major issue to be solved. This report explores new techniques for implementing low-power, small-area and high-throughput turbo decoders.
|
3 |
Lower Bounds for Achieving Synchronous Early Stopping Consensus with Orderly Crash Failures. Wang, Xianbing; Teo, Yong Meng; Cao, Jiannong. 01 1900 (has links)
In this paper, we discuss the consensus problem for synchronous distributed systems with orderly crash failures. For a synchronous distributed system of n processes in which up to t crash failures may occur and f failures actually occur, first, we present a bivalency argument proof to solve the open problem of proving the lower bound, min (t + 1, f + 2) rounds, for early-stopping synchronous consensus with orderly crash failures, where t < n - 1. Then, we extend the system model with orderly crash failures to a new model in which a process is allowed to send multiple messages to the same destination process in a round and the failing processes still respect the order specified by the protocol in sending messages. For this new model, we present a uniform consensus protocol, in which all non-faulty processes always decide and stop immediately by the end of f + 1 rounds. We prove that the lower bound of early-stopping protocols for both consensus and uniform consensus is f + 1 rounds under the new model, and our proposed protocol is optimal. / Singapore-MIT Alliance (SMA)
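As background for the round bounds discussed above, the classic flooding protocol for synchronous crash-failure consensus can be sketched in a few lines. This is a textbook analogue, not the authors' multiple-message protocol; the function name and the crash-schedule encoding are ours:

```python
def flooding_consensus(values, crash_schedule, rounds):
    """Synchronous flooding consensus under crash failures (textbook sketch).

    values:         initial value of process i at index i.
    crash_schedule: {pid: (round, k)} -- pid crashes in the given round
                    after sending to only its first k peers.
    rounds:         number of synchronous rounds (t + 1 suffices when at
                    most t processes crash).
    Returns the decision (minimum of all values seen) of each survivor.
    """
    n = len(values)
    known = [{v} for v in values]   # value sets accumulated by each process
    crashed = set()
    for r in range(1, rounds + 1):
        inbox = [set() for _ in range(n)]
        for p in range(n):
            if p in crashed:
                continue
            crash = crash_schedule.get(p)
            if crash and crash[0] == r:
                for q in range(crash[1]):   # partial broadcast, then crash
                    inbox[q] |= known[p]
                crashed.add(p)
            else:
                for q in range(n):          # full broadcast
                    inbox[q] |= known[p]
        for p in range(n):
            if p not in crashed:
                known[p] |= inbox[p]
    return {p: min(known[p]) for p in range(n) if p not in crashed}
```

With at most t crashes and t + 1 rounds, some round is crash-free, so all survivors end that round with identical value sets and decide the same minimum; the lower bounds above show why fewer rounds cannot suffice in the worst case.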
|
5 |
New Methods to Reduce Turbo Decoding Latency and the Complexity of Bit Insertion Techniques. AlMahamdy, Mohammed A. H. 12 June 2017 (has links)
No description available.
|
6 |
Evaluating cascade correlation neural networks for surrogate modelling needs and enhancing the Nimrod/O toolkit for multi-objective optimisation. Riley, Mike J. W. January 2011 (has links)
Engineering design often requires the optimisation of multiple objectives, and becomes significantly more difficult and time consuming when the response surfaces are multimodal, rather than unimodal. A surrogate model, also known as a metamodel, can be used to replace expensive computer simulations, accelerating single and multi-objective optimisation and the exploration of new design concepts. The main research focus of this work is to investigate the use of a neural network surrogate model to improve optimisation of multimodal surfaces.

Several significant contributions derive from evaluating the Cascade Correlation neural network as the basis of a surrogate model. The contributions to the neural network community ultimately outnumber those to the optimisation community.

The effects of training this surrogate on multimodal test functions are explored. The Cascade Correlation neural network is shown to map such response surfaces poorly. A hypothesis for this weakness is formulated and tested. A new subdivision technique is created that addresses this problem; however, this new technique requires excessively large datasets upon which to train.

The primary conclusion of this work is that Cascade Correlation neural networks form an unreliable basis for a surrogate model, despite successes reported in the literature.

A further contribution of this work is the enhancement of an open source optimisation toolkit, achieved by the first integration of a truly multi-objective optimisation algorithm.
|
8 |
Early Stopping of a Neural Network via the Receiver Operating Curve. Yu, Daoping. 13 August 2010 (has links) (PDF)
This thesis presents the area under the ROC (Receiver Operating Characteristics) curve, abbreviated AUC, as an alternative measure for evaluating the predictive performance of ANN (Artificial Neural Network) classifiers. Conventionally, neural networks are trained until the total error converges to zero, which may give rise to over-fitting. To ensure that they do not over-fit the training data and then fail to generalize to new data, it appears effective to stop training as early as possible once the AUC is sufficiently large, by integrating ROC/AUC analysis into the training process. In order to reduce the learning costs arising from imbalanced data sets with uneven class distributions, random sampling and k-means clustering are implemented to draw a smaller subset of representatives from the original training data set. Finally, the confidence interval for the AUC is estimated with a non-parametric approach.
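As a minimal illustration of the AUC-based stopping signal described above (the helper names are ours, not the thesis's), the AUC can be computed from validation scores via the Mann-Whitney rank statistic and checked after each training epoch:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    Tied scores receive their average rank."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        for k in range(i, j + 1):            # average 1-based rank of ties
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def train_until_auc(train_epoch, validate, target=0.95, max_epochs=100):
    """Stop training once validation AUC reaches `target`.
    `train_epoch` and `validate` are caller-supplied hooks (hypothetical)."""
    for epoch in range(1, max_epochs + 1):
        train_epoch()
        scores, labels = validate()
        if auc(scores, labels) >= target:
            return epoch                     # early stop: AUC large enough
    return max_epochs
```

The target AUC here stands in for whatever "sufficiently large" criterion the training process adopts.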
|
9 |
Reducing Training Time in Text Visual Question Answering. Behboud, Ghazale. 15 July 2022 (has links)
Artificial Intelligence (AI) and Computer Vision (CV) have brought the promise of many applications along with many challenges to solve. The majority of current AI research has been dedicated to single-modal data processing, meaning it uses only one modality such as visual recognition or text recognition. However, real-world challenges are often a combination of different modalities of data such as text, audio and images. This thesis focuses on solving the Visual Question Answering (VQA) problem, a significant multi-modal challenge. VQA is defined as a computer vision task in which a system, given a question about an image, answers based on an understanding of both the question and the image. The goal is to reduce the training time of VQA models. In this thesis, Look, Read, Reason and Answer (LoRRA), a state-of-the-art architecture, is used as the base model. Then, Reduce Uni-modal Biases (RUBi) is applied to this model to reduce the importance of uni-modal biases in training. Finally, an early stopping strategy is employed to halt training once the model accuracy has converged, to prevent the model from overfitting. Numerical results are presented which show that training LoRRA with RUBi and early stopping can converge in less than 5 hours. The impact of batch size, learning rate and warm-up hyperparameters is also investigated and experimental results are presented. / Graduate
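The early stopping strategy described above can be sketched generically; the patience and threshold values here are illustrative, not those used in the thesis:

```python
class EarlyStopping:
    """Stop training when the monitored validation metric stops improving."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience    # epochs to wait without improvement
        self.min_delta = min_delta  # smallest change counted as improvement
        self.best = float("-inf")
        self.bad_epochs = 0

    def step(self, metric):
        """Record one epoch's validation metric; return True to stop."""
        if metric > self.best + self.min_delta:
            self.best = metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a training loop one would create `stopper = EarlyStopping(patience=2)` and break out of the loop on the first epoch where `stopper.step(val_accuracy)` returns True, i.e. once accuracy has stopped improving for `patience` consecutive epochs.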
|
10 |
Early stopping for iterative estimation procedures. Stankewitz, Bernhard. 07 June 2024 (has links)
This dissertation is a contribution to the research field of early stopping in the context of iterative estimation procedures. We consider early stopping both from the perspective of implicit regularization methods and from the perspective of adaptive methods. Analogous to explicit regularization, stopping an estimation procedure reduces the stochastic error/variance of the final estimator at the cost of an additional approximation error/bias. In this area, we present a new analysis of gradient descent for convex learning problems in an abstract Hilbert space. From the perspective of adaptive methods, iterative estimation procedures always have to be combined with a data-driven final iteration m that prevents both under- and over-fitting. In this area, we present two contributions: In a statistical inverse problem regularized by iteratively truncating the singular value decomposition, we examine under what circumstances optimal adaptivity can be achieved by stopping at the first iteration m at which the smoothed residuals are smaller than a critical value. For L2-boosting via orthogonal matching pursuit (OMP) in high-dimensional linear models, we prove that sequential stopping rules can guarantee statistical optimality. The proofs involve a subtle pointwise analysis of a stochastic bias-variance decomposition induced by the greedy algorithm underlying OMP. Simulation studies show that, at substantially reduced computational cost, sequential methods can match the performance of standard algorithms such as the cross-validated Lasso or non-sequential model selection via a high-dimensional Akaike criterion. / This dissertation contributes to the growing literature on early stopping in modern statistics and machine learning. We consider early stopping from the perspective of both implicit regularization and adaptive estimation. From the former, analogous to an explicit regularization method, halting an iterative estimation procedure reduces the stochastic error/variance of the final estimator at the cost of some bias. In this area, we present a novel analysis of gradient descent learning for convex loss functions in an abstract Hilbert space setting, which combines techniques from inexact optimization and concentration of measure. From the perspective of adaptive estimation, iterative estimation procedures have to be combined with a data-driven choice m of the effectively selected iteration in order to avoid under- as well as over-fitting. In this area, we present two contributions: For truncated SVD estimation in statistical inverse problems, we examine under what circumstances optimal adaptation can be achieved by early stopping at the first iteration at which the smoothed residuals are smaller than a critical value. For L2-boosting via orthogonal matching pursuit (OMP) in high dimensional linear models, we prove that sequential early stopping rules can preserve statistical optimality in terms of a general oracle inequality for the empirical risk and recently established optimal convergence rates for the population risk.
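The residual-based stopping rule described above can be illustrated on a toy iterative scheme. The sketch below uses Landweber-type gradient iterations for least squares rather than the truncated SVD analysed in the dissertation, and stops at the first iterate whose residual norm falls below a critical value; the function name and constants are ours:

```python
def landweber_discrepancy(A, y, step, critical, max_iter=10_000):
    """Gradient iterations for ||A x - y||^2, stopped at the first iterate
    whose residual norm is at most `critical` (toy discrepancy-style rule).

    A is a list of rows; y the observed data; step the gradient step size.
    Returns the early-stopped estimate and the stopping iteration.
    """
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for it in range(max_iter):
        residual = [sum(A[i][j] * x[j] for j in range(n)) - y[i]
                    for i in range(m)]
        if sum(r * r for r in residual) ** 0.5 <= critical:
            return x, it                    # smoothed-residual analogue: stop
        grad = [sum(A[i][j] * residual[i] for i in range(m))
                for j in range(n)]          # gradient A^T (A x - y)
        x = [x[j] - step * grad[j] for j in range(n)]
    return x, max_iter
```

In the noisy setting the critical value would be calibrated to the noise level, so that iterating past this point would mostly fit noise, which is precisely the bias-variance trade-off discussed above.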
|