1

Integrating Lagrangian relaxation and constraint programming for multicommodity network routing

Ouaja Ajili, Wided January 2004 (has links)
No description available.
2

Equivalence of linear, free, liberal, structured program schemas is decidable in polynomial time

Laurence, Michael Rupen January 2004 (has links)
No description available.
3

The modelling and analysis of mathematical programming problems : tools and applications

Domínguez-Ballesteros, María Belén January 2001 (has links)
No description available.
4

Structural and decomposition results for binet matrices, bidirected graphs and signed-graphic matroids

Papalamprou, Konstantinos January 2009 (has links)
In this thesis we deal with binet matrices and the class of signed-graphic matroids, which is the class of matroids represented over R by binet matrices. The thesis is divided into three parts. In the first part, we provide the vast majority of the notions used throughout the thesis and some results regarding the class of binet matrices. In this part, we focus on the class of linear and integer programming problems in which the constraint matrix is binet and provide methods and algorithms which solve these problems efficiently. The main new result is that the existing combinatorial methods cannot solve the {0, 1/2}-separation problem (a special case of the well-known separation problem) with integral binet matrices. The main new results of the whole thesis are provided in the next two parts. In the second part, we present a polynomial-time algorithm to construct a bidirected graph for any totally unimodular matrix B by finding node-edge incidence matrices Q and S such that QB = S. Seymour's famous decomposition theorem for regular matroids states that any totally unimodular matrix can be constructed through a series of composition operations called k-sums, starting from network matrices and their transposes and two compact representation matrices B1 and B2 of a certain ten-element matroid. Given that B1 and B2 are binet matrices, we examine the k-sums of network and binet matrices (k = 1, 2, 3). It is shown that the k-sum of a network and a binet matrix is a binet matrix, but binet matrices are not closed under this operation for k = 2, 3. A new class of matrices is introduced, the so-called tour matrices, which generalises network and totally unimodular matrices. For any such matrix there exists a bidirected graph such that the columns represent a collection of closed tours in the graph. It is shown that tour matrices are closed under 1-, 2- and 3-sums as well as under elementary operations on their rows and columns. Given the constructive proofs of the above results regarding the k-sum operations and existing recognition algorithms for network and binet matrices, an algorithm is presented which constructs a bidirected graph for any totally unimodular matrix. In the third part of this thesis we deal with the frame matroid of a signed graph, or simply the signed-graphic matroid. Several new results are provided in this last part of the thesis. Specifically, given a signed graph, we provide methods to find representation matrices of the associated signed-graphic matroid over GF(2), GF(3) and R. Furthermore, two new matroid recognition algorithms are presented in this last part. The first one determines whether a binary matroid is signed-graphic or not, and the second one determines whether a (general) matroid is binary signed-graphic or not. Finally, one of the most important new results of this thesis is the decomposition theory for the class of binary signed-graphic matroids, which is provided in the last chapter. In order to achieve this result, we employed Tutte's theory of bridges. The proposed decomposition differs from previous decomposition results on matroids that have appeared in the literature in that it is not based on k-sums, but rather on the operation of deletion of a cocircuit. Specifically, it is shown that certain minors resulting from the deletion of a cocircuit of a binary matroid will be graphic matroids, except for one that will be signed-graphic if and only if the matroid is signed-graphic.
The decomposition theory for binary signed-graphic matroids is joint work with G. Appa and L. Pitsoulis.
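The total-unimodularity property at the heart of this abstract can be checked directly, if very inefficiently, on small matrices. The sketch below is a brute-force illustration only, assuming a NumPy environment; it enumerates square submatrices, which is nothing like the structural recognition and k-sum machinery developed in the thesis, and the function name `is_totally_unimodular` is ours.

```python
# Brute-force total-unimodularity check for a small integer matrix:
# every square submatrix must have determinant in {-1, 0, 1}.
# Enumeration is exponential in the matrix size -- illustration only.
from itertools import combinations

import numpy as np


def is_totally_unimodular(A: np.ndarray) -> bool:
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                det = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if det not in (-1, 0, 1):
                    return False
    return True


# Node-edge incidence matrix of a directed 3-cycle: a network matrix,
# hence totally unimodular.
A = np.array([[ 1,  0, -1],
              [-1,  1,  0],
              [ 0, -1,  1]])
print(is_totally_unimodular(A))  # True
```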
5

A general algebra of business rules for heterogeneous systems

Ramsey, Frederick V. January 2007 (has links)
No description available.
6

A tolerance-tube approach to sequential quadratic programming with applications

Zoppke-Donaldson, Christine January 1995 (has links)
Many optimization problems can be modelled as nonlinear programming (NLP) problems. Optimization problems arise from a large number of different applications, such as chemical and electrical engineering and economics. For example, in chemical engineering the aim is to maximize the efficiency of a chemical plant under certain operating restrictions. The model of the efficiency forms the objective function and the operating restrictions form the constraints. In electrical engineering, fuel costs or system losses are minimized in electrical networks. The fuel cost can be expressed as a function of the generated power; this function is used as the objective function. The constraints are given by the consumer demand for power, which has to be met, and by certain system restrictions. In order to provide an accurate enough model for the plant or network, it is often necessary to use nonlinear functions for both the objective function and the constraint functions. Higher accuracy of the model also leads to a larger number of variables and constraints. Some models even require non-smooth constraint functions. If this is the case, NLP methods cannot solve the problem, because first and second derivatives do not exist at all points of the constraint functions.
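As a rough illustration of the NLP structure described above (not of the tolerance-tube method developed in the thesis), the sketch below sets up a toy power-dispatch problem with a nonlinear cost objective and a nonlinear demand constraint, and solves it with SciPy's SLSQP routine, a standard sequential quadratic programming implementation. The cost and demand functions are invented for the example.

```python
# A toy "power dispatch" NLP: nonlinear fuel-cost objective, nonlinear
# demand constraint, simple bounds.  Solved with SciPy's SLSQP, a
# standard sequential quadratic programming routine (generic SQP, not
# the tolerance-tube variant developed in the thesis).
import numpy as np
from scipy.optimize import minimize


def cost(x):
    # Invented fuel-cost model in two generation variables.
    return 0.5 * x[0] ** 2 + x[1] ** 2 + 0.1 * x[0] * x[1]


def demand(x):
    # Inequality constraint g(x) >= 0: meet a demand of 4 units after
    # a small nonlinear transmission loss.
    return x[0] + x[1] - 0.05 * (x[0] ** 2 + x[1] ** 2) - 4.0


result = minimize(
    cost,
    x0=np.array([2.0, 2.0]),
    method="SLSQP",
    bounds=[(0.0, 5.0), (0.0, 5.0)],
    constraints=[{"type": "ineq", "fun": demand}],
)
print(result.x, result.fun)
```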
7

Importance sampling for stochastic programming

Tran, Quang Kha January 2016 (has links)
Stochastic programming models are large-scale optimization problems that are used to facilitate decision-making under uncertainty. Optimization algorithms for such problems need to evaluate the expected future costs of current decisions, often referred to as the recourse function. In practice, this calculation is computationally difficult as it requires the evaluation of a multidimensional integral whose integrand is an optimization problem. In turn, the recourse function has to be estimated using techniques such as scenario trees or Monte Carlo methods, both of which require numerous function evaluations to produce accurate results for large-scale problems with multiple periods and high-dimensional uncertainty. In this thesis, we introduce an importance sampling framework for stochastic programming that can produce accurate estimates of the recourse function using a small number of samples. Previous approaches to importance sampling in stochastic programming were limited to problems where the uncertainty was modelled using discrete random variables and the recourse function was additively separable in the uncertain dimensions. Our framework avoids these restrictions by pairing Markov chain Monte Carlo methods with kernel density estimation algorithms to build a non-parametric importance sampling distribution, which can then be used to produce a low-variance estimate of the recourse function. We demonstrate the increased accuracy and efficiency of our approach using variants of well-known multistage stochastic programming problems. Our numerical results show that our framework produces more accurate estimates of the optimal value of stochastic programming models, especially for problems with moderate-to-high-variance distributions or rare-event distributions. For example, in some applications we found that if the random variables are drawn from a rare-event distribution, our proposed algorithm achieves a fourfold reduction in mean square error and variance compared with existing methods (e.g. SDDP with crude Monte Carlo or SDDP with quasi-Monte Carlo sampling) for the same number of samples. When the random variables are drawn from a high-variance distribution, our proposed algorithm reduces the variance by a factor of two on average compared with the results obtained by other methods, for approximately the same mean square error and a fixed number of samples. We use our proposed algorithm to solve a capacity expansion planning problem in the electric power industry. The model includes the unit commitment problem and maintenance scheduling, and allows investors to make optimal decisions on the capacity and type of generators to build in order to minimize capital and operating costs over a long period of time. Our model computes the optimal schedule for each of the generators while meeting the demand and respecting the engineering constraints of each generator. We use an aggregation method to group generators with similar features in order to reduce the problem size. The numerical experiments show that by clustering generators of the same technology and similar size together and applying the SDDP algorithm with our proposed sampling framework to this simplified formulation, we are able to solve the problem in only a quarter of the time required to solve the original problem with conventional algorithms. The speed-up is achieved without a significant reduction in the quality of the solution.
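A minimal sketch of the underlying idea, stripped of the SDDP machinery: estimate an expected recourse cost by sampling from a kernel-density-estimate proposal concentrated on high-cost outcomes and correcting with likelihood-ratio weights. The toy cost function and the way the proposal is seeded are our assumptions, not the thesis's construction.

```python
# Importance sampling with a KDE proposal for an expected recourse cost
# E[c(xi)], xi ~ N(0, 1).  The cost function and the pilot-sample
# seeding of the proposal are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)


def recourse_cost(xi):
    # Toy rare-event cost: only large realisations are expensive.
    return np.maximum(xi - 2.5, 0.0) ** 2


n = 2_000

# Crude Monte Carlo estimate.
crude = recourse_cost(rng.standard_normal(n)).mean()

# Build a proposal: resample pilot draws proportionally to their cost
# and fit a kernel density estimate to the survivors.
pilot = rng.standard_normal(20_000)
w_pilot = recourse_cost(pilot)
survivors = rng.choice(pilot, size=500, p=w_pilot / w_pilot.sum())
proposal = gaussian_kde(survivors)

# Importance sampling estimate with likelihood-ratio weights.
xi = proposal.resample(n).ravel()
weights = norm.pdf(xi) / proposal.pdf(xi)
is_estimate = np.mean(recourse_cost(xi) * weights)

print(f"crude MC: {crude:.5f}   importance sampling: {is_estimate:.5f}")
```

With the same sample budget, the importance sampling estimate places almost all of its draws where the cost is non-zero, which is exactly what makes it useful for rare-event distributions.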
8

Temporal planning for rich numeric contexts

Bajada, Josef January 2016 (has links)
Real-world planning problems often feature complex temporal and numeric characteristics. These include concurrent activities and effects that involve continuous change. This work presents the formalism behind reasoning with required concurrency that involves continuous change in temporal planning problems, together with a set of techniques to solve a class of tasks that are, to date, hard to solve with state-of-the-art temporal planners. The motivation for this work is scenarios where planning actions have rich numeric effects on some physical system. One such example is automated demand dispatch for electricity provision, where actions that fulfil customer requirements have an effect on various metrics, such as wattage or costs, which could be subject to operational or commercial constraints. An algorithm that handles discrete interference of linear continuous effects, referred to as constants in context, is presented. This technique allows discrete actions to update the rate of change of a continuous effect taking place concurrently. This work builds on techniques used in current temporal planners that make use of linear programming, and extends the heuristic used to guide the search to a solution. The algorithm was implemented in a new temporal and numeric planner called DICE and evaluated on several benchmark domains. PDDL, the current de facto standard language for planning domains and corresponding planning tasks, was extended to support interactions with external class modules. The proposed extension, PDDLx, defines a generic planner-solver interface for both discrete and continuous effects. This enables planners that implement this interface to interact with external solvers and incorporate context-specific effects in a black-box fashion, enabling complex numeric behaviour to be encapsulated within such modules. Non-linear monotonic continuous effects, defined in the proposed PDDLx extension, are integrated within the planner using a non-linear iterative convergence algorithm. It searches for a linear approximation within an acceptable configurable error margin, which is then used within the linear program computed for each temporal state. This algorithm proves to be effective in various domains where non-linear continuous behaviour is prevalent. The technique was implemented as an extension to DICE, called uNICOrn, which performs non-linear iterative convergence for continuous effects whose duration needs to be determined by the planner. uNICOrn was also evaluated on several benchmark non-linear domains. A case study on the automated demand dispatch domain is presented to demonstrate the use of the planning algorithms proposed in this thesis. Linear and non-linear planning problems are evaluated, and the performance of uNICOrn on these problem instances is analysed. This work builds on current techniques for temporal planning with continuous numeric behaviour using linear programming, and enhances them to remove some of their intrinsic limitations. The result is a set of algorithms that are more effective in solving real-world applications that involve continuous change and rich numeric behaviour.
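The "linear approximation within a configurable error margin" idea can be illustrated with a small sketch that chops a monotonic non-linear effect into chords, splitting intervals until each chord stays within a tolerance; these pieces are the kind of linear material a temporal planner's LP can consume. This is a generic illustration under our own assumptions, not the uNICOrn convergence algorithm.

```python
# Approximate a monotonic non-linear continuous effect f(t) by chords
# that stay within a configurable error margin, splitting the interval
# until the tolerance holds on a sample grid.  Generic illustration of
# "linearise within an error bound, then feed the pieces to an LP".
from typing import Callable, List, Tuple

import numpy as np


def piecewise_linearise(
    f: Callable[[float], float], a: float, b: float, eps: float
) -> List[Tuple[float, float]]:
    """Breakpoints (t, f(t)) whose chords deviate from f by at most eps."""
    ts = np.linspace(a, b, 64)
    chord = f(a) + (f(b) - f(a)) * (ts - a) / (b - a)
    if np.max(np.abs(chord - np.array([f(t) for t in ts]))) <= eps:
        return [(a, f(a)), (b, f(b))]
    mid = 0.5 * (a + b)
    left = piecewise_linearise(f, a, mid, eps)
    return left[:-1] + piecewise_linearise(f, mid, b, eps)


def charge(t):
    # Invented example: battery charge rising non-linearly over 10 s.
    return 100.0 * (1.0 - np.exp(-0.3 * t))


pieces = piecewise_linearise(charge, 0.0, 10.0, eps=0.5)
print(len(pieces) - 1, "linear pieces:", pieces)
```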
9

Carathéodory cubature measures

Tchernychova, Maria January 2015 (has links)
We introduce an efficient algorithm for computing Carathéodory cubature measures, which are defined as interpolatory cubature measures with positive weights whose cardinality grows polynomially with dimension, as proved in [16]. We discuss two Carathéodory cubature problem formulations. Both are based on thinning the support of the Cartesian product cubature measure, whose cardinality grows exponentially with dimension, via the formulation of a suitable feasibility LP (Linear Programming) problem. A basic feasible solution to the latter fully characterises a Carathéodory cubature measure. The first problem formulation, initially presented in [48], employs the Simplex Algorithm or an Interior Point Method to construct a basic feasible solution to the aforementioned LP problem. The complexity of this method depends on the number of nodes in the Cartesian product cubature and thus grows exponentially with dimension. The second problem formulation constitutes the main contribution of the present work. Starting from the LP problem arising from the Cartesian product cubature construction, we employ a hierarchical cluster representation of the underlying constraint matrix and the strictly feasible solution arising from the weights of the Cartesian product cubature. Applying the Recombination Algorithm, introduced in [96], to this hierarchical data structure, we recursively generate a sequence of smaller LP problems. We construct a basic feasible solution to each LP problem in turn by employing a novel algorithm based on the SVD (Singular Value Decomposition) of the constraint matrix, culminating in a basic feasible solution for the original LP problem. The complexity of this algorithm is independent of the number of nodes in the Cartesian product cubature and can be shown to grow polynomially rather than exponentially with dimension. Moreover, the novel SVD-based method for computing basic feasible solutions produces an order-of-magnitude speed-up of the overall algorithm when compared to the algorithm in [96], and is therefore preferable.
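The core support-thinning step can be sketched directly: with N weighted nodes and m < N moment functions, a null-space direction of the moment matrix (obtained from an SVD) lets one shift the weights until a node drops out, preserving all m moments and positivity. The sketch below, assuming NumPy, shows this reduction loop in isolation; the thesis embeds an SVD-based basic-feasible-solution construction inside a hierarchical recombination scheme, which is not reproduced here.

```python
# Carathéodory-style support reduction: with N weighted nodes and
# m < N moment functions, move along a null-space direction of the
# moment matrix (from an SVD) until a weight hits zero, preserving all
# m moments and non-negativity.  Spirit of the construction only.
import numpy as np


def reduce_measure(nodes, weights, moment_fns, tol=1e-12):
    x = np.asarray(nodes, dtype=float)
    w = np.asarray(weights, dtype=float).copy()
    n_moments = len(moment_fns)
    while True:
        active = np.where(w > tol)[0]
        if len(active) <= n_moments:
            break
        # Moment matrix restricted to the active nodes.
        M = np.array([[phi(xi) for xi in x[active]] for phi in moment_fns])
        # Last right-singular vector lies in the null space of M.
        _, _, vt = np.linalg.svd(M)
        c = vt[-1]
        pos = c > tol
        if not np.any(pos):          # flip so the step can zero a weight
            c, pos = -c, -c > tol
        t = np.min(w[active][pos] / c[pos])
        w[active] = w[active] - t * c
        w[np.abs(w) < tol] = 0.0     # clean up the weight driven to zero
    keep = w > tol
    return x[keep], w[keep]


# Preserve mass, mean and second moment of 200 uniform samples on [0, 1].
rng = np.random.default_rng(0)
pts = rng.random(200)
wts = np.full(200, 1.0 / 200)
phis = [lambda x: 1.0, lambda x: x, lambda x: x * x]
new_pts, new_wts = reduce_measure(pts, wts, phis)
print(len(new_pts), "nodes remain out of", len(pts))
for phi in phis:
    before = sum(wi * phi(pi) for pi, wi in zip(pts, wts))
    after = sum(wi * phi(pi) for pi, wi in zip(new_pts, new_wts))
    print(f"{before:.6f} -> {after:.6f}")
```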
10

Problem-driven scenario generation for stochastic programs

Fairbrother, Jamie January 2016 (has links)
Stochastic programming concerns mathematical programming in the presence of uncertainty. In a stochastic program, uncertain parameters are modelled as random vectors and one aims to minimize the expectation, or some risk measure, of a loss function. However, stochastic programs are computationally intractable when the underlying uncertain parameters are modelled by continuous random vectors. Scenario generation is the construction of a finite discrete random vector to use within a stochastic program. Scenario generation can consist of the discretization of a parametric probabilistic model, or of the direct construction of a discrete distribution. There is typically a trade-off in the number of scenarios that are used: one must use enough to represent the uncertainty faithfully, but not so many that the resulting problem is computationally intractable. Standard scenario generation methods are distribution-based; that is, they do not take the underlying problem into account when constructing the discrete distribution. In this thesis we promote the idea of problem-based scenario generation. By taking into account the structure of the underlying problem, one may be able to represent uncertainty in a more parsimonious way. The first two papers of this thesis focus on scenario generation for problems which use a tail-risk measure, such as the conditional value-at-risk, with particular attention to portfolio selection problems. In the final paper we present a constraint-driven approach to scenario generation for simple recourse problems, a class of stochastic programs for minimizing the expected shortfall and surplus of some resources with respect to uncertain demands.
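For context, a minimal sketch of the kind of distribution-based scenario generation the thesis argues can be improved upon: draw Monte Carlo samples of the uncertain parameters and compress them into a small scenario set by k-means clustering, with probabilities given by cluster frequencies. The lognormal "returns" model and the choice of twenty scenarios are arbitrary assumptions for illustration; the problem-driven constructions of the thesis are not shown.

```python
# Distribution-based scenario generation baseline: Monte Carlo samples
# of the uncertain parameters compressed into k scenarios by k-means,
# with probabilities given by cluster frequencies.  The lognormal model
# and k = 20 are arbitrary choices for illustration.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)

# 10,000 samples of a 2-dimensional lognormal "asset return" vector.
samples = rng.lognormal(mean=0.0, sigma=0.2, size=(10_000, 2))

k = 20  # number of scenarios
centres, labels = kmeans2(samples, k, minit="++")
probs = np.bincount(labels, minlength=k) / len(samples)

print(centres.shape, probs.sum())  # 20 two-dimensional scenarios, probabilities sum to 1
```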
