1.
Continuous, Nowhere-Differentiable Functions with No Finite or Infinite One-Sided Derivative Anywhere
Lee, Jae S. (Jae Seung), 12 1900
In this paper, we study continuous functions with no finite or infinite one-sided derivative anywhere. In 1925, A. S. Besicovitch published an example of such a function, and functions with this property have since been called Besicovitch functions. His construction, presented in Chapter 2, was simple enough to dispel doubts about the existence of Besicovitch functions. In 1932, S. Saks showed that the set of Besicovitch functions is only a meager set in C[0,1], so the Baire category method cannot be applied directly to establish their existence. In 1938, A. P. Morse gave another construction of Besicovitch functions. In 1984, Malý revived the Baire category method by finding a non-empty compact subspace of (C[0,1], || · ||) with respect to which the set of Morse-Besicovitch functions is comeager.
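For intuition about continuous, nowhere (finitely) differentiable functions, a classical example that is much easier to write down than Besicovitch's is the Takagi function T(x) = Σ 2⁻ⁿ·dist(2ⁿx, Z); note it is not claimed here to be a Besicovitch function, since the stronger one-sided condition in the abstract need not hold for it. A minimal numerical sketch using partial sums:

```python
def takagi(x, terms=50):
    """Partial sum of the Takagi function: sum over n of dist(2^n x, Z) / 2^n.

    dist(y, Z) is the distance from y to the nearest integer; each term is
    continuous and piecewise linear, and the series converges uniformly,
    so the limit is continuous (but nowhere finitely differentiable).
    """
    return sum(abs(2**n * x - round(2**n * x)) / 2**n for n in range(terms))

# Difference quotients at a point do not settle down as h shrinks,
# which is the numerical signature of nowhere-differentiability.
for h in (1e-2, 1e-4, 1e-6):
    q = (takagi(1 / 3 + h) - takagi(1 / 3)) / h
```

At dyadic rationals the partial sums are exact once 2ⁿx is an integer, e.g. T(1/4) = 1/4 + 1/4 = 1/2.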
2.
Nondifferentiable optimization algorithms with application to solving Lagrangian dual problems
Choi, Gyunghyun, 19 June 2006
In this research effort, we consider nondifferentiable optimization (NDO) problems that arise in several applications in science, engineering, and management, as well as in the context of other mathematical programming approaches such as Dantzig-Wolfe decomposition, Benders decomposition, Lagrangian duality, penalty function methods, and minimax problems. The importance and necessity of having effective solution methods for NDO problems has long been recognized by many scientists and engineers. However, the practical use of NDO techniques has been somewhat limited, principally due to the lack of computationally viable procedures that are supported by theoretical convergence properties and are suitable for solving large-scale problems. In this research, we present some new algorithms that are based on popular, computationally effective strategies while not compromising on theoretical convergence issues.
First, a new variable target value method (VTVM) is introduced that has an ε-convergence property, and that differs from other known methods in that it does not require any prior assumption regarding bounds on an optimum or regarding the solution space. In practice, the step length is often calculated by using an estimate of the optimal objective function value. For general nondifferentiable optimization problems, however, this may not be readily available. Hence, we design an algorithm that does not assume the availability of an a priori estimate of the optimal objective function value. Furthermore, along with this new step-length rule, we present a new subgradient deflection strategy in which a selected subgradient is rotated optimally toward a point whose objective function value is less than the incumbent target value. We also develop another deflection strategy, based on Shor's space dilation algorithm, in which the resulting direction of motion turns out to be a particular convex combination of two successive subgradients, and we establish suitable convergence results.
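The dissertation's VTVM is not reproduced here, but the underlying idea, a Polyak-type subgradient step driven by a target value that the algorithm adjusts as it runs, can be sketched as follows. The objective, the target-update schedule, and all parameter names below are illustrative simplifications, not the dissertation's actual rules:

```python
import numpy as np

def f(x):
    """Example nondifferentiable objective: f(x) = ||x||_1, minimized at 0."""
    return float(np.abs(x).sum())

def subgrad(x):
    """One valid subgradient of the l1 norm: the sign vector (0 where x == 0)."""
    return np.sign(x)

def vtvm(x0, delta=1.0, iters=2000, patience=20):
    """Subgradient method with a variable target value (illustrative sketch).

    The target w = best - delta stands in for the unavailable optimal-value
    estimate in the Polyak step; when no progress is made for `patience`
    iterations, the gap delta is halved. This schedule is a simplification
    of variable-target-value ideas, not the dissertation's exact method.
    """
    x = np.asarray(x0, dtype=float)
    best = f(x)
    stall = 0
    for _ in range(iters):
        g = subgrad(x)
        g2 = g @ g
        if g2 == 0:                          # 0 is a subgradient: x is optimal
            break
        target = best - delta                # current target value
        x = x - ((f(x) - target) / g2) * g   # Polyak-type step toward the target
        fx = f(x)
        if fx < best - 1e-12:
            best, stall = fx, 0
        else:
            stall += 1
            if stall >= patience:            # target too ambitious: shrink the gap
                delta, stall = delta / 2, 0
    return x, best
```

On this toy instance the iterates reach the minimizer at the origin; no a priori bound on the optimal value is supplied, which is the point of the variable-target idea.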
In the second part of this dissertation, we consider Lagrangian dual problems. Our motivation here is the inadequacy of the simplex method, or even interior point methods, for obtaining quick, near-optimal solutions to large linear programming relaxations of certain discrete or combinatorial optimization problems. Lagrangian dual methods, on the other hand, are well equipped to deal with complicating constraints and ill-conditioning. However, available optimization methods for such problems are not very satisfactory and can stall far from the optimal objective function value for some problems. Also, there is no practical implementation strategy for recovering a primal optimal solution. This is a major shortcoming, even if the method is used only for bounding purposes in the context of a branch-and-bound scheme. With this motivation, we present new primal convergence theorems that generalize existing results on recovering primal optimal solutions, and we describe a scheme for embedding these results within a practical primal-dual Lagrangian optimization procedure. / Ph. D.
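The flavor of primal recovery can be illustrated on a toy LP: relax the single complicating constraint, maximize the dual with a projected subgradient method, and average the primal iterates (an ergodic scheme in the spirit of the recovery results described above). The specific problem, step rule, and averaging below are illustrative choices, not the dissertation's:

```python
import numpy as np

# Toy LP: minimize c @ x subject to a @ x <= 1, x in [0,1]^2.
# The optimal set is {x : x1 + x2 = 1}, with objective value -1.
c = np.array([-1.0, -1.0])
a = np.array([1.0, 1.0])

def lagrangian_argmin(u):
    """Minimize c @ x + u * (a @ x - 1) over the box [0,1]^2 (separable)."""
    coef = c + u * a
    return (coef < 0).astype(float)   # x_i = 1 exactly where its coefficient is negative

u, x_avg = 0.0, np.zeros(2)
for k in range(1, 5001):
    x = lagrangian_argmin(u)          # primal iterate from the Lagrangian subproblem
    g = a @ x - 1.0                   # subgradient of the (concave) dual at u
    u = max(0.0, u + g / k)           # projected step, divergent-series rule t_k = 1/k
    x_avg += (x - x_avg) / k          # running (ergodic) average of the primal iterates
```

The individual primal iterates only oscillate between the box corners (0, 0) and (1, 1), neither of which is optimal, while their average approaches (0.5, 0.5), which is near-feasible with objective value near the optimum of -1. This is exactly the gap that primal recovery schemes address.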