1 |
Accelerating convex optimization in machine learning by leveraging functional growth conditions. Xu, Yi. 01 August 2019.
In recent years, unprecedented growth in the scale and dimensionality of data has raised big computational challenges for traditional optimization algorithms; it has therefore become very important to develop efficient and effective optimization algorithms for the numerous problems arising in machine learning. Many traditional algorithms (e.g., the gradient descent method) are black-box algorithms, which are simple to implement but ignore the underlying geometrical properties of the objective function. A recent trend in accelerating these traditional black-box algorithms is to leverage geometrical properties of the objective function such as strong convexity. However, most existing methods rely heavily on knowledge of strong convexity, which makes them inapplicable to problems that lack strong convexity or for which strong convexity is unknown. To bridge the gap between traditional black-box algorithms, which ignore the problem's geometry, and accelerated algorithms that require strong convexity, can we develop algorithms that adapt to the objective function's underlying geometrical properties? To answer this question, this dissertation focuses on convex optimization problems and explores an error bound condition that characterizes the growth of the objective function around a global minimum. Under this error bound condition, we develop algorithms that (1) adapt to the problem's geometrical properties to enjoy faster convergence in stochastic optimization; (2) leverage the problem's structured regularizer to further improve the convergence speed; (3) address both deterministic and stochastic optimization problems with explicit max-structural loss; and (4) leverage the objective function's smoothness to improve the convergence rate in stochastic optimization. We first consider stochastic optimization problems with a general stochastic loss. We propose two accelerated stochastic subgradient (ASSG) methods with theoretical guarantees that iteratively solve the original problem approximately in a local region around a historical solution, with the size of the local region gradually decreasing as the solution approaches the optimal set. Second, we develop a new theory of the alternating direction method of multipliers (ADMM) with a new adaptive scheme for the penalty parameter, for both deterministic and stochastic optimization problems with structured regularizers. Under a local error bound (LEB) condition with exponent $\theta\in(0,1]$, the proposed deterministic and stochastic ADMM enjoy improved iteration complexities of $\widetilde O(1/\epsilon^{1-\theta})$ and $\widetilde O(1/\epsilon^{2(1-\theta)})$, respectively, where $\widetilde O$ hides a logarithmic factor. Third, we consider a family of optimization problems with an explicit max-structural loss. We develop a novel homotopy smoothing (HOPS) algorithm that employs Nesterov's smoothing technique and the accelerated gradient method and runs in stages; under the LEB condition, it improves the iteration complexity from $O(1/\epsilon)$ to $\widetilde O(1/\epsilon^{1-\theta})$. Next, we propose new restarted stochastic primal-dual (RSPD) algorithms for solving problems with a stochastic explicit max-structural loss, obtaining an iteration complexity better than $O(1/\epsilon^2)$ without a bilinear structure assumption, which is a key challenge in obtaining faster convergence for this class of problems.
Finally, we consider finite-sum optimization problems with a smooth loss and a simple regularizer. We propose novel techniques to automatically search for the unknown parameter on the fly during optimization, while maintaining almost the same convergence rate as an oracle setting in which the parameter is given. Under the Hölderian error bound (HEB) condition with $\theta\in(0,1/2)$, the proposed algorithm also enjoys faster intermediate convergence rates than its standard counterparts that assume only smoothness.
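The first contribution lends itself to a compact illustration. Below is a minimal sketch of the restarting idea behind ASSG, assuming a user-supplied stochastic subgradient oracle `subgrad`; the stage count, halving schedule, and step sizes are illustrative placeholders, not the dissertation's actual parameter choices.

```python
import numpy as np

def assg(subgrad, x0, D0, stages=8, T=1000, eta0=0.1):
    # Sketch of the restarting scheme: each stage runs projected
    # stochastic subgradient descent in a ball of radius D around the
    # previous stage's averaged solution, then halves D and the step
    # size. All schedules here are illustrative, not the paper's tuning.
    x_bar = np.asarray(x0, dtype=float)
    D, eta = D0, eta0
    for _ in range(stages):
        x = x_bar.copy()
        x_sum = np.zeros_like(x_bar)
        for _ in range(T):
            x = x - eta * subgrad(x)          # stochastic subgradient step
            d = x - x_bar                     # project back onto the ball
            norm = np.linalg.norm(d)
            if norm > D:
                x = x_bar + (D / norm) * d
            x_sum += x
        x_bar = x_sum / T                     # restart from the average
        D, eta = D / 2.0, eta / 2.0           # shrink region and step size
    return x_bar
```

The shrinking ball is how such a scheme exploits a growth condition: the faster the objective grows away from the optimal set, the faster the local region can contract without excluding the optimum.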
|
2 |
REVIEW OF BANDWIDTH EFFICIENT MODULATION SCHEMES. Osborne, William P.; Ara, Sharmin. November 1995.
International Telemetering Conference Proceedings / October 30-November 02, 1995 / Riviera Hotel, Las Vegas, Nevada / The national telemetry ranges are being pushed to provide higher-data-rate telemetry services by users with increasingly complex test procedures for increasingly complex weapon systems. At the same time, the ranges are having trouble obtaining more spectrum in which to provide these higher rates, because of the demand for spectrum in the SHF range from mobile/cellular Personal Communications Services (PCS), as well as Congress's desire to auction spectrum and to transfer as much spectrum as possible to commercial uses. In light of these pressures, the industry needs a modulation standard that will outperform the existing PCM/FM standard.
This issue motivates the present review and analysis of the performance of various coded and uncoded modulation schemes. The comparison of these schemes is used in the work that follows to find a suitable solution to the existing problem.
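As a rough illustration of the kind of comparison such a review performs, the sketch below evaluates textbook bit-error-rate formulas for two classical schemes over an AWGN channel. It is a baseline comparison only and does not reproduce the PCM/FM or coded-modulation curves analyzed in the paper.

```python
import numpy as np
from scipy.special import erfc

def ber_bpsk(ebn0):
    # Coherent BPSK over AWGN: Pb = 0.5 * erfc(sqrt(Eb/N0))
    return 0.5 * erfc(np.sqrt(ebn0))

def ber_coherent_fsk(ebn0):
    # Coherent orthogonal binary FSK: Pb = 0.5 * erfc(sqrt(Eb/(2*N0)))
    return 0.5 * erfc(np.sqrt(ebn0 / 2))

ebn0_db = np.arange(0, 12, 2)
ebn0 = 10 ** (ebn0_db / 10)                  # convert dB to linear scale
for db, pb, pf in zip(ebn0_db, ber_bpsk(ebn0), ber_coherent_fsk(ebn0)):
    print(f"Eb/N0 = {db:2d} dB   BPSK: {pb:.2e}   FSK: {pf:.2e}")
```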
|
3 |
Error Propagation Dynamics of PIV-based Pressure Field Calculation. Pan, Zhao. 01 May 2016.
Particle Image Velocimetry (PIV) based pressure field calculation is becoming increasingly popular in experimental fluid dynamics due to its non-intrusive nature. Errors propagated from PIV results to the pressure field calculation are unavoidable and, in most cases, non-negligible; however, the dynamics of this error propagation process have not been unveiled. This dissertation examines both why and how errors in the experimental data propagate to the pressure field, by direct analysis of the pressure Poisson equation. The error in the calculated pressure is bounded in terms of the error level of the experimental data. The error bounds quantitatively explain why and how several factors (e.g., the geometry and length scale of the flow domain and the type of boundary conditions) determine the error propagation. The effects of the flow type and of the error profile on the error propagation are also illustrated qualitatively. Numerical and experimental validations are conducted to verify these results. The results and framework introduced in this research can be used to guide the optimization of experimental designs and, potentially, to estimate the error in the reconstructed pressure field before performing PIV experiments.
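A minimal numerical sketch of the setting analyzed here: reconstructing pressure from a 2D velocity field through the pressure Poisson equation. The grid, homogeneous boundary conditions, and steady incompressible source term are illustrative assumptions, not the dissertation's setup; running it once with clean velocities and once with perturbed ones gives an empirical view of the error propagation the work bounds analytically.

```python
import numpy as np

def pressure_poisson(u, v, h, rho=1.0, iters=5000):
    # Solve lap(p) = -rho * (u_x^2 + 2*u_y*v_x + v_y^2) on a uniform
    # grid with spacing h, homogeneous Dirichlet boundaries, by Jacobi
    # iteration on the five-point stencil. Arrays are indexed [y, x].
    uy, ux = np.gradient(u, h)
    vy, vx = np.gradient(v, h)
    f = -rho * (ux**2 + 2 * uy * vx + vy**2)
    p = np.zeros_like(u)
    for _ in range(iters):
        p[1:-1, 1:-1] = 0.25 * (p[2:, 1:-1] + p[:-2, 1:-1]
                                + p[1:-1, 2:] + p[1:-1, :-2]
                                - h**2 * f[1:-1, 1:-1])
    return p
```

Comparing `pressure_poisson(u, v, h)` against `pressure_poisson(u + noise, v + noise, h)` yields an empirical propagated-error field for a given noise level.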
|
4 |
On the Tightness of the Balanced Truncation Error Bound with an Application to Arrowhead Systems. Reiter, Sean Joseph. 28 January 2022.
Balanced truncation model reduction for linear systems yields reduced-order models that satisfy a well-known error bound in terms of a system's Hankel singular values. This bound is known to hold with equality under certain conditions, such as when the full-order system is state-space symmetric.
In this work, we derive more general conditions in which the balanced truncation error bound holds with equality. We show that this holds for single-input, single-output systems that exhibit a generalized type of state-space symmetry based on the sign parameters corresponding to a system's Hankel singular values. We prove an additional result that shows how to determine this state-space symmetry from the arrowhead realization of a system, if available. In particular, we provide a formula for the sign parameters of an arrowhead system in terms of the off-diagonal entries of its arrowhead realization.
We then illustrate these results with an example of an arrowhead system arising naturally in power systems modeling, which motivated our study. / Master of Science / Mathematical modeling of dynamical systems provides a powerful means for studying physical phenomena. Due to the complexities of real-world problems, accurate models of large-scale systems are often computationally expensive. Model-order reduction circumvents this by approximating the large-scale model with a "smaller" one that still accurately describes the problem of interest. Balanced truncation model reduction for linear systems is one such method, yielding reduced-order models that satisfy a tractable upper bound on the approximation error. This work investigates conditions under which this bound is known to hold with equality, becoming an exact formula for the error in reduction. We additionally show how to verify these conditions for a special class of linear dynamical systems known as arrowhead systems, which arise in applications of network modeling. We provide an example of one such system from power systems modeling that motivated our study.
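For concreteness, the following sketch computes the Hankel singular values of a stable state-space system $(A, B, C)$ and the classical balanced truncation bound $\|G - G_r\|_{\mathcal{H}_\infty} \le 2\sum_{k>r}\sigma_k$ discussed above. It checks nothing about sign parameters or state-space symmetry; for the symmetric systems characterized in the thesis the bound holds with equality.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def bt_error_bound(A, B, C, r):
    # Controllability and observability Gramians of a stable system:
    #   A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
    P = solve_continuous_lyapunov(A, -B @ B.T)
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Hankel singular values: square roots of eigenvalues of P Q
    hsv = np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
    bound = 2 * hsv[r:].sum()       # truncating to order r
    return bound, hsv

# Tiny usage example with a random stable system (illustrative only)
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A - 6 * np.eye(5)  # shift to stability
B = rng.standard_normal((5, 1)); C = rng.standard_normal((1, 5))
print(bt_error_bound(A, B, C, r=2))
```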
|
5 |
Paving the Randomized Gauss-Seidel. Wu, Wei. 01 January 2017.
The Randomized Gauss-Seidel method (RGS) is an iterative algorithm that solves overdetermined systems of linear equations $Ax = b$. This paper studies an extension of the RGS method, the Randomized Block Gauss-Seidel method (RBGS). At each step, the algorithm greedily minimizes the objective function $L(x) = \|Ax - b\|_2^2$ with respect to a subset of coordinates, using a randomized control method to choose the subset at each step. This algorithm is the first block RGS method with an expected linear convergence rate that can be described in terms of the properties of the matrix $A$ and its column submatrices. The analysis demonstrates that RBGS improves on RGS when given an appropriate column paving of the matrix, that is, a partition of the columns into well-conditioned blocks. The main result yields an RBGS method that is more efficient than the simple RGS method.
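A minimal sketch of the block update described above, assuming the column blocks are supplied (ideally as a column paving into well-conditioned blocks); uniform sampling of blocks is an illustrative stand-in for the paper's randomized control method.

```python
import numpy as np

def rbgs(A, b, blocks, iters=500, x0=None):
    # At each step, pick a block J of columns at random and minimize
    # ||Ax - b||_2 over the coordinates in J: a least-squares solve
    # against the current residual r = b - A x.
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    r = b - A @ x                       # maintain the residual explicitly
    rng = np.random.default_rng(0)
    for _ in range(iters):
        J = blocks[rng.integers(len(blocks))]
        dxJ, *_ = np.linalg.lstsq(A[:, J], r, rcond=None)
        x[J] += dxJ
        r -= A[:, J] @ dxJ
    return x

# Naive paving for illustration: equal-size contiguous column blocks
# blocks = np.array_split(np.arange(A.shape[1]), 8)
```

The analysis indicates the gains over plain RGS grow when each submatrix `A[:, J]` is well conditioned, which is what a proper column paving guarantees.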
|
6 |
Analysis of Nonlinear Oscillations Using Computer Algebra / 計算機代数を用いた非線形振動の解析. Yagi, Masakazu. 23 May 2008.
Kyoto University (京都大学) / Doctor of Engineering, new-system doctoral program (Degree No. 14049; Engineering Doctorate No. 2961) / Department of Electrical Engineering, Graduate School of Engineering, Kyoto University / Examination committee: Prof. Osami Wada (chief examiner), Prof. Takashi Hikihara, Assoc. Prof. Takashi Hisakado, Prof. Tomomichi Hagiwara / Conferred under Article 4, Paragraph 1 of the Degree Regulations
|
7 |
Komprese výškových map / Height map compression techniques. Lašan, Michal. January 2016.
The goal of this thesis is to design a suitable method for lossy compression of heightmap terrain data. The method should accept blocks of float samples of dimensions $2^n \times 2^n$ as input, for which it should be able to perform progressive decompression of mip-maps (lower-resolution representations). It should keep the reconstructed data within a maximum per-sample error bound for each mip-map level; this bound should be in meters and adjustable by the user. Given these constraints, it should be as efficient as possible. Our method is inspired by a second-generation progressive wavelet-based compression scheme, modified to satisfy the maximum-error constraint. We simplified this scheme by factoring out unnecessary computations in order to improve efficiency. Our method can compress a 256x256 block in about 30 ms and decompress it in about 2 ms. Thanks to these attributes, the method can be used in a real-time planet renderer. It achieves a compression ratio of 37:1 on the whole-Earth 90 m/sample terrain dataset, transformed and separated into square blocks, while respecting a maximum error of 5 m.
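The closed-loop prediction-plus-quantization structure implied above can be sketched as follows. This is a simplified stand-in for the thesis's wavelet scheme (entropy coding of the residuals is omitted), but it shows how quantizing each level's residual with step `2*eps` keeps every reconstructed mip level within `eps` meters per sample.

```python
import numpy as np

def compress(hm, eps):
    # hm: float heightmap of shape (2^n, 2^n); eps: max error in meters.
    # Build a mip pyramid by 2x2 averaging, coarsest level first.
    levels = [hm]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(0.25 * (a[::2, ::2] + a[1::2, ::2]
                              + a[::2, 1::2] + a[1::2, 1::2]))
    levels = levels[::-1]
    # Store the coarsest level, then per-level quantized residuals
    # against the *reconstructed* prediction (closed loop), so the
    # per-sample error never exceeds eps at any mip level.
    stream, recon = [levels[0]], levels[0]
    for fine in levels[1:]:
        pred = np.repeat(np.repeat(recon, 2, 0), 2, 1)  # nearest upsample
        q = np.round((fine - pred) / (2 * eps)).astype(np.int32)
        stream.append(q)                 # residuals, to be entropy-coded
        recon = pred + q * (2 * eps)     # decoder-side reconstruction
    return stream
```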
|
8 |
Continuum limits of evolution and variational problems on graphs / Limites continues de problèmes d'évolution et variationnels sur graphes. Hafiene, Yosra. 05 December 2018.
The non-local p-Laplacian operator, the associated evolution equation and variational regularization, governed by a given kernel, have applications in various areas of science and engineering. In particular, they are modern tools for massive data processing (including signals, images, geometry) and machine learning tasks such as classification. In practice, however, these models are implemented in discrete form (in space and time, or in space for variational regularization) as a numerical approximation to a continuous problem, where the kernel is replaced by an adjacency matrix of a graph. Yet, few results on the consistency of these discretizations are available. In particular, it is largely open to determine when the solutions of either the evolution equation or the variational problem of graph-based tasks converge (in an appropriate sense), as the number of vertices increases, to a well-defined object in the continuum setting, and if so, at which rate.
In this manuscript, we lay the foundations to address these questions. Combining tools from graph theory, convex analysis, nonlinear semigroup theory and evolution equations, we give a rigorous interpretation of the continuous limit of the discrete nonlocal p-Laplacian evolution and variational problems on graphs. More specifically, we consider a sequence of (deterministic) graphs converging to a limit object known as the graphon. If the continuous p-Laplacian evolution and variational problems are properly discretized on this graph sequence, we prove that the solutions of the sequence of discrete problems converge to the solution of the continuous problem governed by the graphon as the number of graph vertices grows to infinity. Along the way, we provide consistency/error bounds. In turn, this allows us to establish convergence rates for different graph models. In particular, we highlight the role of the graphon geometry/regularity. For random graph sequences, using sharp deviation inequalities, we deliver nonasymptotic convergence rates in probability and exhibit the different regimes depending on p, the regularity of the graphon, and the initial data.
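For intuition, the discrete evolution whose continuum limit is studied here can be sketched as explicit Euler steps of the nonlocal p-Laplacian flow $\dot{x}_i = \sum_j W_{ij} |x_j - x_i|^{p-2}(x_j - x_i)$ on a weighted graph. The step size and the explicit scheme are illustrative choices, not the discretizations analyzed in the thesis.

```python
import numpy as np

def p_laplacian_flow(W, x0, p=1.5, dt=1e-3, steps=2000):
    # W: symmetric nonnegative weight matrix (graph adjacency),
    # x0: initial values on the vertices. Uses the identity
    # |d|^(p-2) * d = sign(d) * |d|^(p-1), which avoids the
    # singularity at d = 0 for 1 < p < 2.
    x = x0.astype(float).copy()
    for _ in range(steps):
        diff = x[None, :] - x[:, None]          # diff[i, j] = x_j - x_i
        flux = W * np.sign(diff) * np.abs(diff) ** (p - 1)
        x += dt * flux.sum(axis=1)
    return x
```

As the number of vertices grows and W samples a fixed graphon, the thesis's results describe when and how fast such trajectories converge to the continuum solution.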
|
9 |
On the geometry of optimization problems and their structure / Sur la géométrie de problèmes d'optimisation et leur structure. Roulet, Vincent. 21 December 2017.
In numerous fields such as machine learning, operational research or circuit design, a task is modeled by a set of parameters to be optimized in order to take the best possible decision. Formally, the problem amounts to minimizing a function describing the desired objective with iterative algorithms. The development of these algorithms then depends on the characterization of the geometry of the function or the structure of the problem. In a first part, this thesis studies how the sharpness of a function around its minimizers can be exploited by restarting classical algorithms. Optimal schemes are presented for general convex problems. They require, however, a complete description of the function, which is rarely available. Adaptive strategies are therefore developed and shown to achieve nearly optimal rates. A specific analysis is then carried out for sparse problems, which seek compressed representations of the variables of the problem. Their underlying conic geometry, which describes the sharpness of the objective, is shown to control both the statistical performance of the problem and the efficiency of dedicated optimization methods through a single quantity. A second part is dedicated to machine learning problems, which perform predictive analysis of data from large sets of examples. A generic framework is presented to both solve the prediction problem and simplify it by grouping either features, samples or tasks. Systematic algorithmic approaches are developed by analyzing the geometry induced by partitions of the data. A theoretical analysis is finally carried out for grouping features, by analogy with sparse methods.
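The restart idea studied in the first part can be sketched as follows, with a doubling schedule standing in for the (near-)optimal schedules derived in the thesis, which depend on the unknown sharpness of the function around its minimizers; the gradient oracle `grad` and the smoothness constant `L` are assumed given.

```python
import numpy as np

def nesterov(grad, x0, L, t):
    # Plain Nesterov accelerated gradient for t iterations
    # (smooth convex f with L-Lipschitz gradient).
    x, y, lam = x0.copy(), x0.copy(), 1.0
    for _ in range(t):
        lam_next = (1 + np.sqrt(1 + 4 * lam**2)) / 2
        x_next = y - grad(y) / L
        y = x_next + ((lam - 1) / lam_next) * (x_next - x)
        x, lam = x_next, lam_next
    return x

def restarted_nesterov(grad, x0, L, rounds=10, t0=50):
    # Restart the accelerated method from its last iterate on a
    # schedule; doubling the inner budget is an illustrative choice.
    x, t = x0.copy(), t0
    for _ in range(rounds):
        x = nesterov(grad, x, L, t)
        t *= 2
    return x
```

Restarting resets the momentum, which is what lets acceleration exploit sharpness: each round starts closer to the minimizer, where the sharpness condition gives a stronger progress guarantee.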
|
10 |
A Limit Theorem in Cryptography. Lynch, Kevin. 16 August 2005.
Cryptography is the study of encrypting and decrypting messages, and of deciphering encrypted messages when the code is unknown. We consider $\Lambda_\pi(\Delta x, \Delta y)$, a count of how many ways a permutation satisfies a certain property. According to Hawkes and O'Connor, the distribution of $\Lambda_\pi(\Delta x, \Delta y)$ tends to a Poisson distribution with parameter $1/2$ as $q^m \to \infty$, for all $\Delta x, \Delta y \in (\mathbb{Z}/q\mathbb{Z})^m \setminus \{0\}$. We give a proof of this theorem using the Stein-Chen method: as $q^m$ approaches infinity, the distribution of $\Lambda_\pi(\Delta x, \Delta y)$ is approximately Poisson with parameter $1/2$. Error bounds for this approximation are provided.
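As an analogous illustration of the Stein-Chen phenomenon (not the thesis's statistic $\Lambda_\pi$, whose definition is not reproduced here), the sketch below compares the empirical distribution of the number of fixed points of a uniform random permutation, a classic count of rare, weakly dependent events, against its Poisson limit; here the parameter is 1 rather than the 1/2 arising above.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
m, trials = 1000, 20000
# Count fixed points of uniform random permutations of {0, ..., m-1}.
counts = np.array([(rng.permutation(m) == np.arange(m)).sum()
                   for _ in range(trials)])
# Stein-Chen: this count is approximately Poisson(1), with an explicit
# total-variation error bound shrinking as m grows.
for k in range(5):
    emp = (counts == k).mean()
    poi = exp(-1) / factorial(k)
    print(f"k={k}: empirical {emp:.4f}   Poisson(1) {poi:.4f}")
```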
|