351 |
Asymptotic behaviour of solutions in stochastic optimization : nonsmooth analysis and the derivation of non-normal limit distributions / King, Alan Jonathan. January 1986 (has links)
Thesis (Ph. D.)--University of Washington, 1986. / Vita. Bibliography: leaves [81]-83.
|
352 |
Symmetries, colorings, and polyanumeration / Nieman, Jeremy. January 2007 (has links)
Thesis (M.S.)--Rochester Institute of Technology, 2007. / Typescript. Includes bibliographical references (leaf 34).
|
353 |
Optimal operation policies with heterogeneous demand / Zhou, Weihua. January 2007 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2007. / Includes bibliographical references (leaves 120-125). Also available in electronic version.
|
354 |
An advanced tabu search approach to the airlift loading problem / Roesener, August G., January 1900 (has links) (PDF)
Thesis (Ph. D.)--University of Texas at Austin, 2006. / Vita. Includes bibliographical references.
|
355 |
Implementation and multiple dimensional extension to rectangle elimination methods for biobjective decision making / Spanos, Costas J. January 1982 (has links)
Thesis (M.S.)--Carnegie-Mellon University, 1983. / Bibliography: p. 72-73.
|
356 |
Portfolio optimization based on robust estimation procedures / Gao, Weiguo. January 2004 (has links)
Thesis (M.S.)--Worcester Polytechnic Institute. / Keywords: Robust estimation; portfolio optimization. Includes bibliographical references (leaf 24).
|
357 |
Dynamic Memory Optimization using Pool Allocation and Prefetching / Zhao, Qin; Rabbah, Rodric; Wong, Weng Fai. 01 1900 (has links)
Heap memory allocation plays an important role in modern applications. Conventional heap allocators, however, generally ignore the underlying memory hierarchy of the system, favoring instead a low runtime overhead and fast response times. Unfortunately, with little concern for the memory hierarchy, the data layout may exhibit poor spatial locality and degrade cache performance. In this paper, we describe a dynamic heap allocation scheme called pool allocation. The strategy aims to improve cache performance by inspecting memory allocation requests and allocating memory from appropriate heap pools as dictated by the requesting context. The advantages are twofold. First, by pooling together data with a common context, we expect to improve spatial locality, as data fetched to the caches will contain fewer items from different contexts. If the allocation patterns are closely matched to the traversal patterns, the end result is faster memory performance. Second, by pooling heap objects, we expect access patterns to exhibit more regularity, thus creating more opportunities for data prefetching. Our dynamic memory optimizer exploits the increased regularity to insert prefetch instructions at runtime. The optimizations are implemented in DynamoRIO, a dynamic optimization framework. We evaluate the work using various benchmarks, and measure a 17% speedup over gcc -O3 on an Athlon MP, and a 13% speedup on a Pentium 4. / Singapore-MIT Alliance (SMA)
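The context-keyed pooling idea in this abstract can be sketched abstractly. The following minimal Python model (the class, names, and pool layout are invented for illustration; this is not the DynamoRIO implementation, which operates on real machine addresses) shows how serving each allocation context from its own bump-allocated pool keeps same-context objects contiguous even when contexts interleave:

```python
from collections import defaultdict

class PooledAllocator:
    """Toy model of context-keyed pool allocation: requests from the same
    allocation context are served from the same contiguous pool, so objects
    that are likely traversed together end up adjacent in "memory"."""

    def __init__(self, pool_size=1024):
        self.pool_size = pool_size
        self.pool_of_context = {}            # context -> current pool id
        self.next_offset = defaultdict(int)  # context -> bump pointer
        self.next_pool_id = 0

    def alloc(self, context, size):
        # Open a fresh pool for a new context, or when the current pool is full.
        if (context not in self.pool_of_context
                or self.next_offset[context] + size > self.pool_size):
            self.pool_of_context[context] = self.next_pool_id
            self.next_pool_id += 1
            self.next_offset[context] = 0
        # Bump-allocate within the context's pool.
        addr = (self.pool_of_context[context], self.next_offset[context])
        self.next_offset[context] += size
        return addr

alloc = PooledAllocator()
# Two allocation contexts interleaved, e.g. two linked lists built in lockstep.
a_nodes, b_nodes = [], []
for _ in range(4):
    a_nodes.append(alloc.alloc("list_a", 32))
    b_nodes.append(alloc.alloc("list_b", 32))
# Every node of list_a lands in the same pool at consecutive offsets, even
# though its allocations were interleaved with list_b's -- the spatial
# locality (and stride regularity for prefetching) the paper aims for.
```

A conventional allocator would instead interleave the two lists' nodes in one heap region, so a traversal of either list would touch twice as many cache lines.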
|
358 |
Solving optimization problems with generalized orthogonality constraints / Zhu, Hong. 08 July 2016 (has links)
This thesis focuses on optimization problems with generalized orthogonality constraints, which may also contain linear equality constraints. These problems appear in many areas, such as machine learning, signal processing, and computer vision. Many problems in this form are NP-hard. One challenge posed by generalized orthogonality constraints is the presence of local minimizers induced by the nonconvex constraints. Moreover, the generalized orthogonality constraints are numerically expensive to preserve during iterations.
This thesis is mainly divided into two parts. The first part focuses on solving generalized orthogonality constrained optimization problems with differentiable objective functions. For this class of problems, a generalized gradient flow is proposed, which remains on the constraint set whenever the initial condition satisfies the generalized orthogonality constraints. The weak convergence of the generalized gradient flow is established. A discrete iterative scheme is also proposed to make the gradient flow method computable. In addition, we analyze the relationship between our discrete iterative scheme and some existing constraint-preserving methods, and between our scheme and the inexact forward-backward method. Several other problems that can be solved by the generalized gradient flow are given. Furthermore, we propose an optimal gradient flow by analyzing the first-order optimality condition.
The second part of this thesis is devoted to the study of generalized orthogonality constrained optimization problems with nondifferentiable objective functions. An approximate augmented Lagrangian method is used to deal with this class of problems. The global convergence is presented. We also extend the proximal alternating linearized minimization method (EPALM) to handle the generalized orthogonality constraints that appear in the subproblem of the approximate augmented Lagrangian method. Moreover, to accelerate the EPALM method, an inertial proximal alternating linearized minimization method (IPALM) is proposed for unconstrained nonconvex, nonsmooth problems with coupled objective functions.
Keywords: Generalized Orthogonality Constraints; Stiefel Manifold; Tangent Space; Gradient Flow; Approximate Augmented Lagrangian Method; Proximal Alternating Linearized Minimization Method.
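The abstract does not reproduce the flow itself. For the classical orthogonality constraint $X^{\top}X = I_p$ (the Stiefel manifold, the special case named in the keywords; the thesis's generalized constraints may differ), one standard constraint-preserving gradient flow for minimizing a differentiable $f$ takes the form:

```latex
\dot X(t) = -\bigl(\nabla f(X)\,X^{\top} - X\,\nabla f(X)^{\top}\bigr)\,X,
\qquad X(0)^{\top}X(0) = I_p .
```

The matrix $A = \nabla f(X)\,X^{\top} - X\,\nabla f(X)^{\top}$ is skew-symmetric, so $\tfrac{d}{dt}\bigl(X^{\top}X\bigr) = \dot X^{\top}X + X^{\top}\dot X = X^{\top}A X - X^{\top}A X = 0$: the flow stays on the constraint set whenever the initial condition is feasible, which is exactly the feasibility-preservation property the abstract claims for its generalized flow.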
|
359 |
Topics in optimization. January 2009 (has links)
Song, Haifeng. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 67-69). / Abstract also in Chinese. / Chapter 1 --- Introduction --- p.5 / Chapter 2 --- Preliminary --- p.8 / Chapter 2.1 --- Introduction --- p.8 / Chapter 2.2 --- Notations and fundamental properties --- p.8 / Chapter 2.3 --- Properties of polyhedra --- p.14 / Chapter 3 --- Results on Efficient Point Sets --- p.23 / Chapter 3.1 --- Introduction --- p.23 / Chapter 3.2 --- Geometric results on efficient point sets --- p.24 / Chapter 3.3 --- Density of positive proper efficient point sets --- p.33 / Chapter 4 --- Pareto Solutions of Polyhedral-valued Vector Optimization --- p.42 / Chapter 4.1 --- Introduction --- p.42 / Chapter 4.2 --- The structure of weak Pareto solution sets --- p.43 / Chapter 4.2.1 --- The general ordering cone case --- p.46 / Chapter 4.2.2 --- The polyhedral ordering cone case --- p.54 / Chapter 4.3 --- Connectedness of solution sets and optimal value sets --- p.55 / Chapter 4.4 --- Optimality conditions of piecewise linear mappings --- p.60 / Bibliography --- p.67
|
360 |
A Framework for Automated Generation of Specialized Function Variants / Chaimov, Nicholas. January 2012 (has links)
Efficient large-scale scientific computing requires efficient code, yet optimizing code for efficiency simultaneously renders it less readable, less maintainable, and less portable, and demands detailed knowledge of low-level computer architecture that the developers of scientific applications may lack. The necessary knowledge also changes over time as new architectures become prominent, such as GPGPU platforms like CUDA, which require very different optimizations than CPU-targeted code. With the rise of scientific cloud computing, developers may not even know what machine their code will run on while they are developing it.
This work takes steps towards automating the generation of code variants that are automatically optimized for both the execution environment and the input dataset. We demonstrate that augmenting an autotuning framework with a performance database, which captures metadata about the environment and the input, and performing decision tree learning over that data can help more fully automate the process of enhancing software performance.
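The variant-selection step described above can be sketched in miniature. The following Python snippet (the feature names, variant labels, and database rows are invented for illustration; the thesis's actual framework and features are not reproduced here) learns a one-level decision tree over a toy performance database and uses it to pick a code variant:

```python
from collections import Counter

# Toy performance database: (environment/input features, fastest variant).
# Features here are (n_cores, log2 of input size); both names are invented.
records = [
    ((2, 10), "scalar"),  ((2, 20), "scalar"),
    ((16, 10), "blocked"), ((16, 20), "blocked"),
]

def learn_stump(records):
    """Learn a one-level decision tree (stump): choose the single
    feature/threshold split that best separates the variant labels,
    scored by majority-vote accuracy on the database."""
    best = None
    n_features = len(records[0][0])
    for f in range(n_features):
        for thresh in sorted({x[f] for x, _ in records}):
            left = [lab for x, lab in records if x[f] <= thresh]
            right = [lab for x, lab in records if x[f] > thresh]
            if not left or not right:
                continue  # degenerate split
            l_lab = Counter(left).most_common(1)[0][0]
            r_lab = Counter(right).most_common(1)[0][0]
            acc = (left.count(l_lab) + right.count(r_lab)) / len(records)
            if best is None or acc > best[0]:
                best = (acc, f, thresh, l_lab, r_lab)
    _, f, thresh, l_lab, r_lab = best
    # The learned predictor: route an unseen environment/input to a variant.
    return lambda x: l_lab if x[f] <= thresh else r_lab

select_variant = learn_stump(records)
variant = select_variant((16, 15))  # e.g. a 16-core machine, 32K-element input
```

A real autotuner would use richer features, deeper trees, and measured timings, but the shape is the same: the database turns past measurements into a predictor that dispatches to a specialized variant at run time.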
|