61 
A Quick-and-Dirty Approach to Robustness in Linear Optimization. Karimi, Mehdi. January 2012.
We introduce methods for dealing with linear programming (LP) problems
with uncertain data, using the notion of weighted analytic centers.
Our methods rely on close interaction with the decision maker (DM) and seek
solutions that satisfy as many of the DM's important criteria and goals as possible.
Starting with the drawbacks of different methods for dealing with
uncertainty in LP, we explain how our methods improve most of them. We prove
that, besides many practical advantages, our approach is theoretically
as strong as robust optimization. Interactive cutting-plane algorithms are
developed for concave and quasiconcave utility functions. We present some
probabilistic bounds for feasibility and evaluate our approach by means
of computational experiments.
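The weighted analytic center the abstract builds on can be sketched numerically. The sketch below is illustrative only (the constraint matrix, weights, and solver choice are invented, not taken from the thesis): it maximizes the weighted log-barrier potential over the interior of a polytope Ax <= b.

```python
import numpy as np
from scipy.optimize import minimize

def weighted_analytic_center(A, b, w, x0):
    """Maximize sum_i w_i * log(b_i - a_i^T x) over the interior of {x : Ax <= b}."""
    def neg_potential(x):
        s = b - A @ x              # constraint slacks; must stay positive
        if np.any(s <= 0):
            return np.inf          # outside the interior
        return -w @ np.log(s)
    return minimize(neg_potential, x0, method="Nelder-Mead").x

# unit box in 2D, written as Ax <= b
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])
w = np.ones(4)                     # equal weights give the ordinary analytic center
xc = weighted_analytic_center(A, b, w, np.array([0.3, 0.7]))
```

Raising the weight on a constraint pushes the center away from that facet, which is one way DM preferences can steer the candidate solution.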

62 
Low-Level Haskell Code: Measurements and Optimization Techniques. Peixotto, David. 06 September 2012.
Haskell is a lazy functional language with a strong static type system and
excellent support for parallel programming. The language features of Haskell
make it easier to write correct and maintainable programs, but execution speed
often suffers from the high levels of abstraction. While much past research
focuses on high-level optimizations that take advantage of the functional
properties of Haskell, relatively little attention has been paid to the
optimization opportunities in the low-level imperative code generated during
translation to machine code. One problem with current low-level optimizations
is that their effectiveness is limited by the obscured control flow caused by
Haskell's high-level abstractions. My thesis is that trace-based optimization
techniques can be used to improve the effectiveness of low-level optimizations
for Haskell programs. I claim three unique contributions in this work.
The first contribution is to expose some properties of low-level Haskell code
by looking at the mix of operations performed by the selected benchmark codes
and comparing them to the low-level code generated from traditional programming
languages. The low-level measurements reveal that the control flow is obscured
by indirect jumps caused by the implementation of lazy evaluation,
higher-order functions, and the separately managed stacks used by Haskell
programs.
My second contribution is a study on the effectiveness of a dynamic binary
trace-based optimizer running on Haskell programs. My results show that while
viable program traces frequently occur in Haskell programs, the overhead
associated with maintaining the traces in a dynamic optimization system
outweighs the benefits gained from running the traces. To reduce the runtime overheads,
I explore a way to find traces in a separate profiling step.
My final contribution is to build and evaluate a static trace-based optimizer
for Haskell programs. The static optimizer uses profiling data to find traces
in a Haskell program and then restructures the code around the traces to
increase the scope available to the low-level optimizer. My results show that
we can successfully build traces in Haskell programs, and the optimized code
yields a speedup over existing low-level optimizers of up to 86%
with an average speedup of 5% across 32 benchmarks.
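A minimal sketch of profile-guided trace selection, in the spirit of (but not reproducing) the optimizer described above: starting from a seed block, repeatedly follow the hottest successor edge recorded in an edge profile until the trace would cycle or hit a length cap. The block names and profile counts are invented.

```python
# Toy trace selection: follow the hottest outgoing edge from a seed block.
def grow_trace(seed, succ_counts, max_len=8):
    """succ_counts maps block -> {successor: execution count} from a profiling run."""
    trace, seen = [seed], {seed}
    cur = seed
    while len(trace) < max_len:
        nxt = succ_counts.get(cur)
        if not nxt:
            break                       # block has no profiled successors
        best = max(nxt, key=nxt.get)    # most frequently taken edge
        if best in seen:
            break                       # stop instead of forming a cycle
        trace.append(best)
        seen.add(best)
        cur = best
    return trace

profile = {"A": {"B": 90, "C": 10}, "B": {"D": 80, "E": 20}, "D": {"A": 100}}
print(grow_trace("A", profile))   # ['A', 'B', 'D']
```

A static optimizer can then lay the blocks of each trace out contiguously, widening the scope available to straight-line low-level optimizations.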

64 
Mathematical optimization techniques for cognitive radar networks. Rossetti, Gaia. January 2018.
This thesis discusses mathematical optimization techniques for waveform design in cognitive radars. These techniques have been designed with an increasing level of sophistication, starting from a bistatic model (i.e. two transmitters and a single receiver) and ending with a cognitive network (i.e. multiple transmitting and multiple receiving radars). The environment under investigation always features strong signal-dependent clutter and noise. All algorithms are based on an iterative waveform-filter optimization. The waveform optimization is based on convex optimization techniques and the exploitation of initial radar waveforms characterized by desired auto- and cross-correlation properties. Finally, robust optimization techniques are introduced to account for the assumptions made by cognitive radars on certain second-order statistics, such as the covariance matrix of the clutter. More specifically, initial optimization techniques were proposed for the case of bistatic radars. By maximizing the signal-to-interference-and-noise ratio (SINR) under certain constraints on the transmitted signals, it was possible to iteratively optimize both the orthogonal transmission waveforms and the receiver filter. Subsequently, the above work was extended to a convex optimization framework for a waveform design technique for bistatic radars where both radars transmit and receive to detect targets. The method exploited prior knowledge of the environment to maximize the accumulated target return signal power while keeping the disturbance power to unity at both radar receivers. The thesis further proposes convex-optimization-based waveform designs for multiple-input multiple-output (MIMO) cognitive radars. All radars within the system are able to both transmit and receive signals for detecting targets. The proposed model investigated two complementary optimization techniques.
The first technique aims at optimizing the signal-to-interference-and-noise ratio (SINR) of a specific radar while keeping the SINR of the remaining radars at desired levels. The second optimizes the SINR of all radars using a max-min optimization criterion. To account for possible mismatches between actual parameters and estimated ones, this thesis includes robust optimization techniques. Initially, the multistatic, signal-dependent model was tested against existing worst-case and probabilistic methods. These methods proved overly conservative and generic for the considered signal-dependent clutter scenario. Therefore a new approach was derived in which uncertainty was assumed directly on the radar cross-section and Doppler parameters of the clutter. Approximations based on Taylor series were invoked to make the optimization problem convex and subsequently determine robust waveforms with specific SINR outage constraints. Finally, this thesis introduces robust optimization techniques for through-the-wall radars. These are also cognitive but rely on different optimization techniques from the ones previously discussed. By noticing the similarities between the minimum variance distortionless response (MVDR) problem and the matched-illumination one, this thesis introduces robust optimization techniques that consider uncertainty on environment-related parameters. Various performance analyses demonstrate the effectiveness of all the above algorithms in providing a significant increase in SINR in an environment affected by very strong clutter and noise.
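The MVDR problem mentioned above has a closed-form baseline solution that is easy to sketch: minimize output power subject to a distortionless response toward the target. The covariance matrix and steering vectors below are invented for illustration; the thesis's robust formulations add uncertainty handling on top of this baseline.

```python
import numpy as np

def mvdr_weights(R, s):
    """MVDR beamformer: minimize w^H R w subject to w^H s = 1.
    Closed form: w = R^{-1} s / (s^H R^{-1} s)."""
    Rinv_s = np.linalg.solve(R, s)
    return Rinv_s / (s.conj() @ Rinv_s)

# white noise plus one strong interferer (all values illustrative)
n = 4
s = np.ones(n, dtype=complex) / np.sqrt(n)        # steering vector toward the target
i = np.exp(1j * np.pi * np.arange(n))             # interferer steering vector
R = np.eye(n) + 100.0 * np.outer(i, i.conj())     # interference-plus-noise covariance
w = mvdr_weights(R, s)
```

The distortionless constraint w^H s = 1 holds by construction, while the interferer direction is strongly attenuated because R concentrates power along it.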

65 
Interactions of Uncertainty and Optimization: Theory, Algorithms, and Applications to Chemical Site Operations. Amaran, Satyajith. 01 September 2014.
This thesis explores different paradigms for incorporating uncertainty within optimization frameworks for applications in chemical engineering and site-wide operations. First, we address the simulation optimization problem, which deals with the search for optimal input parameters to black-box stochastic simulations that are potentially expensive to evaluate. We include a comprehensive literature survey of the state of the art in the area, propose a new provably convergent trust-region-based algorithm, and discuss implementation details along with extensive computational experience, including examples from chemical engineering applications. Next, we look at the problem of long-term site-wide maintenance turnaround planning. Turnarounds involve the disruption of production for significant periods of time, and may incur enormous costs in terms of maintenance manpower as well as lost sales. The problem involves (1) the simulation of profit deterioration due to wear and tear, followed by the determination of how frequently a particular turnaround should take place; and (2) the consideration of site network structure and turnaround frequencies to determine how turnarounds of different plants may be coordinated over a long-term horizon. We investigate two mixed-integer models, the first of which determines optimal frequencies of individual plant turnarounds, while the second considers maximizing long-term profit through coordination of turnarounds across the site. We then turn to more conventional methods of dealing with optimization under uncertainty, and make use of a combined robust optimization and stochastic programming approach for medium-term maintenance planning in integrated chemical sites. The nature of the uncertainty considered affects two aspects of maintenance planning, one of which is most suitably addressed through a robust optimization framework, while the other is better handled with stochastic programming models.
In summary, we highlight the importance of considering uncertainty in optimization as well as the choice of approach or paradigm used through chemical engineering applications that span varied domains and time scales.
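A toy illustration of the trust-region idea behind simulation optimization (a generic sketch, not the provably convergent algorithm developed in the thesis; the test function, replication counts, and contraction factor are all invented): average noisy replications at a few points, fit a local quadratic model, and step to its minimizer within the trust region.

```python
import numpy as np

def trust_region_simopt(sim, x0, delta=1.0, iters=30, reps=100, rng=None):
    """Toy trust-region loop for a noisy 1-D black-box simulation.
    Average `reps` replications at three points, fit a quadratic model, move to
    its minimizer clipped to the current trust region, then shrink the region."""
    rng = rng or np.random.default_rng(0)
    x = x0
    for _ in range(iters):
        pts = np.array([x - delta, x, x + delta])
        vals = np.array([np.mean([sim(p, rng) for _ in range(reps)]) for p in pts])
        a, b, _ = np.polyfit(pts, vals, 2)          # local quadratic model
        if a > 1e-9:
            cand = -b / (2.0 * a)                   # model minimizer
        else:
            cand = x + (delta if vals[2] < vals[0] else -delta)
        x = float(np.clip(cand, x - delta, x + delta))
        delta = max(0.8 * delta, 0.05)              # contract, but keep a floor
    return x

# noisy quadratic with optimum at x = 2 (an illustrative stand-in for a simulation)
noisy = lambda p, rng: (p - 2.0) ** 2 + 0.05 * rng.standard_normal()
xstar = trust_region_simopt(noisy, x0=0.0)
```

Real simulation-optimization methods must also decide how many replications to spend per point and how to certify model accuracy, which is where convergence proofs come in.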

66 
Green Petroleum Refining: Mathematical Models for Optimizing Petroleum Refining Under Emission Constraints. Ali Yusuf, Yusuf. 07 August 2013.
Petroleum refining processes provide the daily energy requirements of the global market. Each refining process produces wastes that can harm the environment if not properly disposed of. The treatment of refinery waste is one of the most complex issues faced by refinery managers, and the hazardous nature of these wastes makes them costly for refineries to dispose of. In this thesis, system analysis tools are used to design a program that allows for the selection of the optimal control, minimization, and treatment options for petroleum refinery waste streams. The performance of the developed model is demonstrated via a case study. Optimal mitigation alternatives to meet the emission reduction targets were studied by evaluating their relative impact on the profitable operation of the given facility. It was found that the optimal mitigation step was to reduce emission precursors by conducting feed switches at the refinery. In all cases, the optimal solution did not include a capital expansion of the emission control facilities and equipment.
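The kind of trade-off such a model resolves can be sketched as a tiny linear program: choose feed rates to maximize profit subject to a throughput limit and an emission cap. All coefficients below are invented for illustration; in this toy instance the optimum happens to favor a switch to the low-emission feed.

```python
from scipy.optimize import linprog

# maximize profit = 40*x_sweet + 30*x_sour (linprog minimizes, so negate)
c = [-40.0, -30.0]                 # illustrative profit per unit of each crude
A_ub = [[1.0, 1.0],                # total throughput <= 100 units
        [0.2, 0.9]]                # illustrative SO2 emission factors, cap of 40
b_ub = [100.0, 40.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x_sweet, x_sour = res.x
```

Because the low-sulfur feed is both more profitable and cleaner here, the solver fills all capacity with it; a realistic refinery model adds many units, streams, and candidate abatement investments.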

67 
Nonsmooth analysis and optimization. January 1993.
Huang, Liren. Thesis (Ph.D.), Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves 96).

Contents:
Abstract (p. 1)
Introduction (p. 2)
References (p. 5)
Chapter 1. Some elementary results in nonsmooth analysis and optimization (p. 6)
  1. Some properties of "lim sup" and "lim inf" (p. 6)
  2. The directional derivative of the sup-type function (p. 8)
  3. Some results in nonsmooth analysis and optimization (p. 12)
  References (p. 19)
Chapter 2. On generalized second-order derivatives and Taylor expansions in nonsmooth optimization (p. 20)
  1. Introduction (p. 20)
  2. Dini directional derivatives, Clarke's directional derivatives and generalized second-order directional derivatives (p. 20)
  3. On Cominetti and Correa's conjecture (p. 28)
  4. Generalized second-order Taylor expansion (p. 36)
  5. Detailed proof of Theorem 2.4.2 (p. 40)
  6. Corollaries of Theorem 2.4.2 and Theorem 2.4.3 (p. 43)
  7. Some applications in optimization (p. 46)
  References (p. 51)
Chapter 3. Second-order necessary and sufficient conditions in nonsmooth optimization (p. 53)
  1. Introduction (p. 53)
  2. Second-order necessary and sufficient conditions without constraints (p. 56)
  3. Second-order necessary conditions with constraints (p. 66)
  4. Sufficient conditions theorem with constraints (p. 77)
  References (p. 87)
Appendix (p. 89)
References (p. 96)
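The Clarke-type objects named in the chapter titles have compact standard definitions; stating them may help readers parse the table of contents. These are the textbook definitions for a locally Lipschitz function f, not excerpts from the thesis:

```latex
% Clarke's generalized directional derivative of f at x in direction v
f^{\circ}(x; v) = \limsup_{\substack{y \to x \\ t \downarrow 0}}
  \frac{f(y + t v) - f(y)}{t},
\qquad
% and the generalized gradient (Clarke subdifferential) it induces
\partial f(x) = \left\{ \xi \in \mathbb{R}^n :
  f^{\circ}(x; v) \ge \langle \xi, v \rangle \ \text{for all } v \in \mathbb{R}^n \right\}.
```

Second-order optimality conditions of the kind studied in Chapter 3 replace the classical Hessian tests with conditions phrased in terms of such generalized derivatives.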

68 
Distributionally Robust Optimization and its Applications in Machine Learning. Kang, Yang. January 2017.
The goal of Distributionally Robust Optimization (DRO) is to minimize the cost of running a stochastic system, under the assumption that an adversary can replace the underlying baseline stochastic model by another model within a family known as the distributional uncertainty region. This dissertation focuses on a class of DRO problems which are data-driven, which generally speaking means that the baseline stochastic model corresponds to the empirical distribution of a given sample.
One of the main contributions of this dissertation is to show that the class of data-driven DRO problems that we study unifies many successful machine learning algorithms, including square-root Lasso, support vector machines, and generalized logistic regression, among others. A key distinctive feature of the class of DRO problems that we consider here is that our distributional uncertainty region is based on optimal transport costs. In contrast, most of the DRO formulations that exist to date take advantage of a likelihood-based formulation (such as the Kullback-Leibler divergence, among others). Optimal transport costs include as a special case the so-called Wasserstein distance, which is popular in various statistical applications.
The use of optimal transport costs is advantageous relative to the use of divergence-based formulations because the region of distributional uncertainty contains distributions which explore samples outside of the support of the empirical measure, thereby explaining why many machine learning algorithms have the ability to improve generalization. Moreover, the DRO representations that we use to unify the previously mentioned machine learning algorithms provide a clear interpretation of the so-called regularization parameter, which is known to play a crucial role in controlling generalization error. As we establish, the regularization parameter corresponds exactly to the size of the distributional uncertainty region.
Another contribution of this dissertation is the development of statistical methodology to study data-driven DRO formulations based on optimal transport costs. Using this theory, for example, we provide a sharp characterization of the optimal selection of regularization parameters in machine learning settings such as square-root Lasso and regularized logistic regression.
Our statistical methodology relies on the construction of a key object which we call the robust Wasserstein profile function (RWP function). The RWP function is similar in spirit to the empirical likelihood profile function in the context of empirical likelihood (EL), but the asymptotic analysis of the RWP function is different because of a certain lack of smoothness which arises in a suitable Lagrangian formulation.
Optimal transport costs have many advantages in terms of statistical modeling. For example, we show how to define a class of novel semi-supervised learning estimators which are natural companions of the standard supervised counterparts (such as square-root Lasso, support vector machines, and logistic regression). We also show how to define the distributional uncertainty region in a purely data-driven way. Precisely, the optimal transport formulation allows us to inform the shape of the distributional uncertainty, not only its center (which is given by the empirical distribution). This shape is informed by establishing connections to the metric learning literature. We develop a class of metric learning algorithms which are based on robust optimization, and use these robust-optimization-based metric learning algorithms to inform the distributional uncertainty region in our data-driven DRO problem. This means that we endow the adversary with additional power, forcing it to spend effort on regions of importance and thereby further improving the generalization properties of machine learning algorithms.
In summary, we explain how the use of optimal transport costs allows constructing what we call double-robust statistical procedures. We test all of the procedures proposed in this dissertation on various data sets, showing significant improvement in generalization ability over a wide range of state-of-the-art procedures.
Finally, we also discuss a class of stochastic optimization algorithms of independent interest which are particularly useful to solve DRO problems, especially those which arise when the distributional uncertainty region is based on optimal transport costs.
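The connection between Wasserstein DRO and square-root Lasso mentioned above amounts to the statement that the distributionally robust linear-regression objective collapses to a root-mean-square loss plus an l1 penalty whose weight is the radius of the uncertainty region. A sketch of that regularized objective follows; the data and radius are invented, and the exact duality conditions are in the dissertation itself.

```python
import numpy as np

def sqrt_lasso_objective(beta, X, y, delta):
    """RMS loss plus delta * ||beta||_1; under the optimal-transport DRO duality
    this plays the role of the worst-case loss with uncertainty radius delta."""
    resid = y - X @ beta
    return np.sqrt(np.mean(resid ** 2)) + delta * np.sum(np.abs(beta))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true                  # noiseless synthetic data
# the true coefficients attain zero RMS loss, so for a small radius they
# beat the all-zeros vector under the robust objective
f_true = sqrt_lasso_objective(beta_true, X, y, delta=0.01)
f_zero = sqrt_lasso_objective(np.zeros(3), X, y, delta=0.01)
```

Reading delta as the size of the distributional uncertainty region is exactly the interpretation of the regularization parameter described in the abstract.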

69 
Computational studies of some static and dynamic optimisation problems. Lee, Wei R. January 1999.
In this thesis we investigate numerical solutions to several important practical static and dynamic optimization problems in engineering and physics. The thesis is organized as follows.

In Chapter 1 a general literature review is presented, including the motivation and development of the problems and existing results. Furthermore, some existing computational methods for optimal control problems are also discussed.

In Chapter 2 the design of a semiconductor device is posed as an optimization problem: given an ideal voltage-current (V-I) characteristic, find one or more physical and geometrical parameters so that the V-I characteristic of the device matches the ideal one optimally with respect to a prescribed performance criterion. The voltage-current characteristic of a semiconductor device is governed by a set of nonlinear partial differential equations (PDEs), and thus a black-box approach is taken for the numerical solution of the PDEs. Various existing numerical methods are proposed for the solution of the nonlinear optimization problem. The Jacobian of the cost function is ill-conditioned, and a scaling technique is thus proposed to stabilize the resulting linear system. Numerical experiments, performed to show the usefulness of this approach, demonstrate that the approach always gives optimal or near-optimal solutions to the test problems in both two and three dimensions.

In Chapter 3 we propose an efficient approach to numerical integration in one and two dimensions, where a grid set with a fixed number of vertices is to be chosen so that the error between the numerical integral and the exact integral is minimized. For the one-dimensional problem, two schemes are developed for sufficiently smooth functions, based on the midpoint rectangular quadrature rule and the trapezoidal rule respectively, and another method is developed for integrands which are not sufficiently smooth. For two-dimensional problems, two schemes are first developed for sufficiently smooth functions: one is based on the barycenter rule on a rectangular partition, while the other uses a triangular partition. A scheme for insufficiently smooth functions is also developed. For illustration, several examples are solved using the proposed schemes, and the numerical results show the effectiveness of the approach.

Chapter 4 deals with optimal recharge and driving plans for a battery-powered electric vehicle. A major problem facing battery-powered electric vehicles lies in their batteries: weight and charge capacity. Thus a battery-powered electric vehicle has only a short driving range, and to travel a longer distance the batteries must be recharged frequently. In this chapter we construct a model for a battery-powered electric vehicle, in which a driving strategy is to be obtained so that the total traveling time between two locations is minimized. The problem is formulated as an unconventional optimization problem. However, by using the control parameterization enhancing transformation (CPET) (see [100]) it is shown that this unconventional optimization problem is equivalent to a conventional optimal parameter selection problem. Numerical examples are solved using the proposed method.

In Chapter 5 we consider the numerical solution of a class of optimal control problems involving variable time points in their cost functions. The CPET is first used to convert the optimal control problem with variable time points into an equivalent optimal control problem with fixed multiple characteristic times (MCT). Using the control parameterization technique, the time horizon is partitioned into several subintervals, and the partition points are also taken as decision variables. The control functions are approximated by piecewise constant or piecewise linear functions in accordance with these variable partition points. We thus obtain a finite-dimensional optimization problem. The CPET transform is again used to convert the approximate optimal control problems with variable partition points into equivalent standard optimal control problems with MCT, where the control functions are piecewise constant or piecewise linear functions with prefixed partition points. The transformed problems are essentially optimal parameter selection problems with MCT. Gradient formulae are obtained for the objective function as well as the constraint functions with respect to the relevant decision variables. Numerical examples are solved using the proposed method.

A numerical approach is proposed in Chapter 6 for constructing an approximate optimal feedback control law for a class of nonlinear optimal control problems. In this approach, the state space is partitioned into subdivisions, and the controllers are approximated by a linear combination of third-order B-spline basis functions. Furthermore, the partition points are also taken as decision variables in this formulation. To show the effectiveness of the proposed approach, a two-dimensional and a three-dimensional example are solved by the approach. The numerical results demonstrate that the method is superior to the existing methods with fixed partition points.
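Control parameterization, as used throughout these chapters, replaces a control function by finitely many decision variables. A minimal sketch with piecewise-constant controls on a double integrator follows; the dynamics, cost weights, and grid size are invented for illustration, and the thesis's CPET machinery for variable partition points is not shown.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(u_pieces, T=1.0):
    """Double integrator x'' = u with piecewise-constant control on a uniform grid."""
    dt = T / len(u_pieces)
    x, v = 0.0, 0.0
    for u in u_pieces:
        # exact update within one piece (constant acceleration)
        x += v * dt + 0.5 * u * dt * dt
        v += u * dt
    return x, v

def cost(u_pieces):
    # reach x = 1 with zero velocity at t = T, plus a small control penalty
    x, v = simulate(u_pieces)
    return (x - 1.0) ** 2 + v ** 2 + 1e-4 * np.sum(np.asarray(u_pieces) ** 2)

res = minimize(cost, np.zeros(4), method="BFGS")  # 4 control pieces as decision vars
x_final, v_final = simulate(res.x)
```

Because the state is linear in the control pieces, this reduced problem is a smooth finite-dimensional one, which is exactly what makes gradient-based optimal parameter selection tractable.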

70 
VHTR Core Shuffling Algorithm Using Particle Swarm Optimization (ReloPSO-3D). Lakshmipathy, Sathish Kumar. May 2012.
Improving core performance by reshuffling/reloading the fuel blocks within the core is one of the in-core fuel management methods, with two major benefits: the possibility of improving core life and of increasing core safety. The VHTR is a hexagonal annular-core reactor with reflectors in the center and outside the three fuel rings. With block-type fuel assemblies, there is an opportunity for multidimensional fuel block movement within the core during scheduled reactor refueling operations.
As the core is symmetric, by optimizing the shuffle operation for one sixth of the core, the same process can be repeated through the remaining five sixths. The VHTR has 170 fuel blocks in the core, of which 50 are control rod blocks that cannot be moved to regular fuel block locations. The reshuffling problem is then to find, among the 120! possible arrangements, the loading of the 120 movable fuel blocks that minimizes power peaking and/or maximizes core life under safety constraints.
Evaluating each loading pattern (LP) during shuffling requires a fitness function developed from the parameters affecting power peaking and core life. Calculating the power peaking at each step using Monte Carlo simulations on a whole-core exact geometry model is time consuming and not feasible. A parameter called the localized reactivity potential is therefore developed from the definitions of reactivity and power peaking factor; it can be estimated for every block movement based on the reaction rates and atom densities of the initial core burnup at the time of shuffling.
The algorithm (ReloPSO) is based on the Particle Swarm Optimization algorithm: it drives the search toward the optimum from a set of random LPs, guided by the fitness function built on the reactivity potential parameter. The algorithm works as expected, and the output obtained has a flatter reactivity profile than the input. The core criticality is found to increase when the core is shuffled closer to its end of life. Detailed analysis of the burn runs after shuffling at different times of core operation is required to correlate the estimated and actual values of the reactivity parameter and to optimize the time of shuffle.
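One common way to run PSO over permutations, plausibly related to (though not necessarily identical to) ReloPSO's encoding, is the random-key trick: particles live in a continuous space and each position vector's argsort yields a loading pattern. The reactivity values and flatness objective below are invented stand-ins for the thesis's fitness function.

```python
import numpy as np

def pso_permutation(fitness, n, swarm=30, iters=200, seed=0):
    """Random-key PSO: each particle is a real vector whose argsort gives a
    permutation; a standard PSO velocity update searches permutation space."""
    rng = rng_init = np.random.default_rng(seed)
    pos = rng.standard_normal((swarm, n))
    vel = np.zeros((swarm, n))
    pbest = pos.copy()
    pbest_f = np.array([fitness(np.argsort(p)) for p in pos])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(iters):
        r1, r2 = rng.random((2, swarm, n))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([fitness(np.argsort(p)) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        g = np.argmin(pbest_f)
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return np.argsort(gbest), gbest_f

# toy stand-in for the reactivity-flatness objective: penalize adjacent
# blocks whose (made-up) reactivity potentials differ sharply
react = np.array([0.9, 0.1, 0.8, 0.2, 0.7, 0.3])
flatness = lambda perm: float(np.sum(np.diff(react[perm]) ** 2))
best_perm, best_f = pso_permutation(flatness, len(react))
```

The argsort decoding guarantees every particle always represents a valid permutation, so no repair step is needed after the velocity update.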
