211

Decomposition and diet problems

Hamilton, Daniel January 2010 (has links)
The purpose of this thesis is to solve real-life problems efficiently. We study LPs, an NLP and an MINLP based on what is known as the generalised pooling problem (GPP), and an MIP that we call the cattle mating problem. These problems are often very large or otherwise difficult to solve by direct methods, and are best solved by decomposition methods. Throughout the thesis we introduce algorithms that exploit the structure of the problems to decompose them. We are able to solve row-linked, column-linked and general LPs efficiently by modifying the tableau simplex method, and suggest how this work could be applied to the revised simplex method. We modify an existing sequential linear programming solver that is currently used by Format International to solve GPPs, and show that the modified solver takes less time and is at least as likely to find the global minimum as the old solver. We solve multifactory versions of the GPP by augmented Lagrangian decomposition, and show this is more efficient than solving the problems directly. We introduce a decomposition algorithm to solve an MINLP version of the GPP by decomposing it into NLP and ILP subproblems; this is able to solve large problems that could not be solved directly. We introduce an efficient decomposition algorithm to solve the MIP cattle mating problem, which has been adopted for use by the Irish Cattle Breeding Federation. Most of the solution methods we introduce are designed only to find local minima. However, for the multifactory version of the GPP we introduce two methods that give a good chance of finding the global minimum, both of which succeed in finding it on test problems.
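As a sketch of the augmented Lagrangian decomposition named above, consider a toy two-factory consensus problem (not the thesis's GPP model, whose objectives are bilinear): each factory keeps its own copy of a shared pool quantity, the copies are forced to agree through a multiplier and a quadratic penalty, and each subproblem has a closed-form minimizer. All costs and parameters here are invented for illustration.

```python
# Hypothetical two-factory surrogate: each factory's cost is a simple
# quadratic and the shared pool quantity must agree (x = z). The real GPP
# objectives are bilinear pooling costs; quadratics only show the mechanics.
rho, lam = 1.0, 0.0          # penalty parameter and Lagrange multiplier
x, z = 0.0, 0.0

for k in range(50):
    # x-step: argmin (x-3)^2 + lam*x + (rho/2)(x-z)^2, in closed form
    x = (6.0 - lam + rho * z) / (2.0 + rho)
    # z-step: argmin 2(z-1)^2 - lam*z + (rho/2)(x-z)^2, in closed form
    z = (4.0 + lam + rho * x) / (4.0 + rho)
    # multiplier update drives the two copies toward consensus
    lam += rho * (x - z)

print(round(x, 4), round(z, 4))  # both approach the joint optimum 5/3
```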
212

An enhanced implementation of models for electric power grid interdiction

Carnal, David D. 09 1900 (has links)
This thesis evaluates the ability of the Xpress-MP software package to solve complex, iterative mathematical programming problems. The impetus is the need to improve solution times for the VEGA software package, which identifies vulnerabilities to terrorist attacks in electric power grids. VEGA employs an iterative, optimizing heuristic, which may need to solve hundreds of related linear programs. This heuristic has been implemented in GAMS (General Algebraic Modeling System), whose inefficiencies in data handling and model generation mean that a modest, 50-iteration solution of a real-world problem can require over five hours to run. This slowness defeats VEGA's ultimate purpose, evaluating vulnerability-reducing structural improvements to a power grid. We demonstrate that Xpress-MP can reduce run times by 60%-85% because of its more efficient data handling, faster model generation, and ability, lacking entirely in GAMS, to solve related models without regenerating each from scratch. Xpress-MP's modeling language, Mosel, encompasses a full-featured procedural language, also lacking in GAMS. This language enables a simpler, more modular and more maintainable implementation. We also demonstrate the value of VEGA's optimizing heuristic by comparing it to rule-based heuristics adapted from the literature. The optimizing heuristic is much more powerful.
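The payoff from re-solving related models without regeneration can be shown in miniature. The sketch below assumes a made-up three-bus load-shedding LP rather than anything from VEGA: the constraint data is built once, and only line capacities change between solves, mirroring the pattern the thesis exploits.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-bus load-shed LP: variables = [flow_12, flow_13, flow_23, shed_2, shed_3].
# Bus 1 generates; buses 2 and 3 have demands d2, d3. Minimize total shed load.
d2, d3 = 60.0, 40.0
c = np.array([0, 0, 0, 1, 1])           # objective: total shed load
A_eq = np.array([[1, 0, -1, 1, 0],      # bus 2: inflow + shed = demand
                 [0, 1,  1, 0, 1]])     # bus 3: inflow + shed = demand
b_eq = np.array([d2, d3])
base_cap = np.array([50.0, 50.0, 30.0]) # line capacities (invented)

# The interdiction heuristic re-solves the same LP for many attack plans;
# model data is built once and only capacities change between solves.
for attack in [set(), {0}, {1}, {0, 2}]:
    cap = base_cap.copy()
    for line in attack:
        cap[line] = 0.0                 # an interdicted line carries no flow
    bounds = [(-cap[0], cap[0]), (-cap[1], cap[1]), (-cap[2], cap[2]),
              (0, None), (0, None)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    print(sorted(attack), "shed =", round(res.fun, 1))
```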
213

Approximate dynamic programming and aerial refueling

Panos, Dennis C. 06 1900 (has links)
Aerial refueling is an integral part of the United States military's ability to strike targets around the world with an overwhelming and continuous projection of force. However, with an aging fleet of refueling tankers and an indefinite replacement schedule, the optimization of tanker usage is vital to national security. Optimizing tanker and receiver refueling operations is a complicated endeavor, as it can involve over a thousand missions during a 24-hour period, as in Operation Iraqi Freedom and Operation Enduring Freedom. Therefore, a planning model which increases receiver mission capability while reducing demands on tankers can be used by the military to extend the capabilities of the current tanker fleet. Aerial refueling optimization software, created in CASTLE Laboratory, solves the aerial refueling problem through a multi-period approximate dynamic programming approach. The multi-period approach is built around sequential linear programs, which incorporate value functions, to find the optimal refueling tracks for receivers and tankers. The use of value functions allows for a solution which optimizes over the entire horizon of the planning period. This approach varies greatly from the myopic optimization currently in use by the Air Force and produces superior results. The aerial refueling model produces fast, consistent, robust results which require fewer tankers than current planning methods. The results are flexible enough to incorporate stochastic inputs, such as varying refueling times and receiver mission loads, while still meeting all receiver refueling requirements. The model's ability to handle real-world uncertainties while optimizing better than current methods provides a great leap forward in aerial refueling optimization. The aerial refueling model, created in CASTLE Lab, can extend the capabilities of the current tanker fleet. / Contract number: N00244-99-G-0019 / US Navy (USN) author.
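A minimal illustration of the value-function idea, with invented numbers and a single fuel stock rather than the thesis's tanker model: a forward pass decides whether to deliver fuel now or carry it, and a smoothed update learns the marginal value v[t] of fuel entering each period, so early decisions account for the rest of the horizon instead of being myopic.

```python
import numpy as np

# Toy forward approximate dynamic programming with a linear value function.
# All demands, rewards, and the single-stock model are invented for this sketch.
T, capacity = 6, 10.0
demand = np.array([3, 8, 2, 9, 4, 7], dtype=float)
reward = np.array([5, 2, 6, 3, 7, 4], dtype=float)  # per-unit delivery value
v = np.zeros(T + 1)  # v[t]: estimated marginal value of fuel entering period t

for it in range(500):
    alpha = 1.0 / (1.0 + it)                        # declining step size
    s = capacity
    for t in range(T):
        # deliver now only if it beats the estimated value of carrying forward
        serve = min(demand[t], s) if reward[t] >= v[t + 1] else 0.0
        # an extra unit here is worth the better of delivering or carrying it
        marginal = max(reward[t], v[t + 1]) if serve < demand[t] else v[t + 1]
        v[t] = (1 - alpha) * v[t] + alpha * marginal  # smoothed slope update
        s -= serve

print(np.round(v, 2))
```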
214

A Time-Evolving Optimization Model for an Intermodal Distribution Supply Chain Network: A Case Study at a Healthcare Company

Johansson, Sara, Westberg, My January 2016 (has links)
Enticed by the promise of larger sales and better access to customers, consumer goods companies (CGCs) are increasingly looking to evade traditional retailers and reach their customers directly, with a direct-to-customer (DTC) policy. The DTC trend has come to have a major impact on logistics operations and distribution channels. It offers significant opportunities for CGCs and wholesale brands to better control their supply chain network by circumventing the middlemen or retailers. However, to do so, CGCs may need to develop their omni-channel strategies and fortify their supply chain parameters, such as fulfillment, inventory flow, and goods distribution. This may give rise to changes in the supply chain network at the strategic, tactical and operational levels. Motivated by recent interest in the DTC trend, this master thesis considers the time-evolving supply chain system of an international healthcare company with a preordained configuration. The input is the bottleneck part of the company's distribution network and involves 20%-25% of its total market. A mixed-integer linear programming (MILP) multiperiod optimization model is developed, aiming to make tactical decisions for designing the distribution network, or more specifically, for determining the best strategy for distributing the products from the manufacturing plant to the primary distribution center and/or regional distribution centers and from them to customers. The company has one manufacturing site (Mfg), one primary distribution center (PDP) and three different regional distribution centers (RDPs) worldwide, and the customers can be supplied from different plants with various transportation modes at different costs and lead times. The company's motivation is to investigate the possibility of reducing distribution costs by supplying most of their demand in time directly from the plants. The model selects the best option for each customer by making trade-offs among criteria involving distribution costs and lead times. Due to the seasonal variability, and to account for market fluctuations, the model considers a full time horizon of one year. The model is analyzed and developed step by step, and its functionality is demonstrated by conducting experiments on the distribution network from our case study. In addition, the case study distribution network topology is utilized to create random instances with random parameters, and the model is also evaluated on these instances. The computational experiments on these instances show that the model finds good-quality solutions, and demonstrate that significant cost reduction and modality improvement can be achieved in the distribution network. Using one year of actual data, it has been shown that the ratio of direct shipments could substantially improve. However, there may be many factors that can impact the results, such as short-term decisions at the operational level (like scheduling) as well as demand fluctuations, taxes, business rules, etc. Based on the results and managerial considerations, some possible extensions and final recommendations for the distribution chain are offered. Furthermore, an extensive sensitivity analysis is conducted to show the effect of the model's parameters on its performance. The sensitivity analysis employs a set of data from our case study and randomly generated data to highlight certain features of the model and provide some insights regarding its behaviour.
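A minimal MILP sketch in the spirit of the model described, with invented costs, capacities, and demands (the labels Mfg/PDP/RDP1 echo the abstract but carry none of the company's data): binary variables open shipping lanes at a fixed cost, continuous variables carry period-by-period flows, and each customer's demand must be met in every period.

```python
import pulp

periods = range(4)
sources = ["Mfg", "PDP", "RDP1"]            # hypothetical subset of the network
customers = ["C1", "C2"]
cost = {("Mfg", "C1"): 4, ("Mfg", "C2"): 6,  # per-unit shipping cost (assumed)
        ("PDP", "C1"): 5, ("PDP", "C2"): 3,
        ("RDP1", "C1"): 7, ("RDP1", "C2"): 2}
demand = {("C1", t): 10 + 2 * t for t in periods}
demand.update({("C2", t): 8 for t in periods})
supply = {"Mfg": 60, "PDP": 40, "RDP1": 40}  # per-period capacity (assumed)

prob = pulp.LpProblem("distribution", pulp.LpMinimize)
x = pulp.LpVariable.dicts("ship", (sources, customers, periods), lowBound=0)
use = pulp.LpVariable.dicts("use", (sources, customers), cat="Binary")

# variable shipping cost plus a fixed cost for every lane that is opened
prob += pulp.lpSum(cost[s, c] * x[s][c][t]
                   for s in sources for c in customers for t in periods) \
      + pulp.lpSum(50 * use[s][c] for s in sources for c in customers)

for c in customers:
    for t in periods:
        prob += pulp.lpSum(x[s][c][t] for s in sources) == demand[c, t]
for s in sources:
    for t in periods:
        prob += pulp.lpSum(x[s][c][t] for c in customers) <= supply[s]
for s in sources:
    for c in customers:
        for t in periods:
            prob += x[s][c][t] <= demand[c, t] * use[s][c]  # flow needs open lane

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(prob.objective))
```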
215

Applications of optimization to sovereign debt issuance

Abdel-Jawad, Malek January 2013 (has links)
This thesis investigates different issues related to the issuance of debt by sovereign bodies such as governments, under uncertainty about future interest rates. Several dynamic models of interest rates are presented, along with extensive numerical experiments for calibration of the models and comparison of their performance on real financial market data. The main contribution of the thesis is the construction and demonstration of a stochastic optimisation model for debt issuance under interest rate uncertainty. When the uncertainty is modelled using a model from a certain class of single-factor interest rate models, one can construct a scenario tree such that the number of scenarios grows linearly with the time steps. An optimization model is constructed using such a one-factor scenario tree. For a real government debt issuance remit, a multi-stage stochastic optimization is performed to choose the type and the amount of debt to be issued, and the results are compared with the real issuance. The simulation models currently used by the government, which are in the public domain, are also reviewed. The results suggest that using an optimization model, such as the one proposed in this work, can lead to substantial savings in the servicing costs of the issued debt.
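The linear growth claim is easy to see on a recombining lattice. The sketch below builds a Ho-Lee-style short-rate tree as one example of a one-factor model (illustrative parameters, not a calibrated model): step t has only t + 1 distinct nodes, so the tree size grows linearly in the number of time steps instead of exponentially.

```python
import numpy as np

# Illustrative parameters: initial rate, volatility, drift, step length, steps.
r0, sigma, mu, dt, T = 0.02, 0.004, 0.001, 1.0, 5

# Recombining binomial tree: node (t, i) has seen i up-moves out of t, so
# level t holds t + 1 rates and the node count grows linearly with T.
tree = [np.array([r0 + mu * t * dt + sigma * np.sqrt(dt) * (2 * i - t)
                  for i in range(t + 1)])
        for t in range(T + 1)]

for t, level in enumerate(tree):
    print(t, np.round(level, 4))
```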
216

Energy system analysis

Soundararajan, Ranjith January 2017 (has links)
The purpose of this thesis is to optimize the heat exchanger network for a process industry and to estimate the minimum cost of the network, without compromising the energy demand of each stream as far as possible, using MATLAB. Here, the optimization is done without considering stream splitting or stream combining. The first phase involves deriving a simple heat exchanger network consisting of four streams, i.e. two hot streams and two cold streams, using the traditional pinch analysis method. The second phase deals with randomly placing heat exchangers between the hot and cold streams and calculating the minimum cost of the network using a genetic algorithm, in which thousands of randomly created heat exchanger networks are evolved over a series of populations.
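A stripped-down version of the second phase might look like the following, a mutation-only evolutionary loop with invented duties and costs standing in for the thesis's MATLAB implementation (a full GA would add crossover): each individual is a binary hot-cold match matrix, and fitness combines a capital cost per exchanger with a penalty for unmet heat duty.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hot, n_cold = 2, 2
pop_size, n_gen = 50, 200

def cost(match):
    # Toy cost: fixed cost per exchanger plus a penalty for unmet duty.
    # Duties and the 150 kW per-match transfer are invented for this sketch.
    duty_hot = np.array([300.0, 200.0])    # kW to remove from each hot stream
    duty_cold = np.array([250.0, 250.0])   # kW required by each cold stream
    served = match * 150.0
    unmet = (np.maximum(duty_hot - served.sum(1), 0).sum()
             + np.maximum(duty_cold - served.sum(0), 0).sum())
    return 1000.0 * match.sum() + 20.0 * unmet  # capital cost + utility penalty

pop = rng.integers(0, 2, size=(pop_size, n_hot, n_cold))
for gen in range(n_gen):
    fit = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fit)[: pop_size // 2]]  # truncation selection
    children = parents.copy()
    mask = rng.random(children.shape) < 0.1          # bitwise mutation
    children = np.where(mask, 1 - children, children)
    pop = np.concatenate([parents, children])

best = pop[np.argmin([cost(ind) for ind in pop])]
print(best, cost(best))
```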
217

Effective Network Partitioning to Find MIP Solutions to the Train Dispatching Problem

Snellings, Christopher 19 June 2013 (has links)
Each year the Railway Applications Section (RAS) of the Institute for Operations Research and the Management Sciences (INFORMS) poses a research problem to the world in the form of a competition. For 2012, the contest involved solving the Train Dispatching Problem (TDP) on a realistic 85-edge network for three different sets of input data. This work is an independent attempt to match or improve upon the results of the top three finishers in the contest using mixed integer programming (MIP) techniques while minimizing the use of heuristics. The primary focus is to partition the network in a manner that reduces the number of binary variables in the formulation as much as possible without compromising the ability to satisfy any of the contest requirements. This resulted in the ability to solve this model optimally for RAS Data Set 1 in 29 seconds without any problem-specific heuristics, variable restrictions, or variable fixing. Applying some assumptions about train movements allowed the same Data Set 1 solution to be found in 5.4 seconds. After breaking the larger Data Sets 2 and 3 into smaller sub-problems, solutions for Data Sets 2 and 3 were 28% and 1% better, respectively, than those of the competition winner. The time to obtain solutions for Data Sets 2 and 3 was 90 and 318 seconds, respectively.
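A back-of-envelope count suggests why partitioning pays off; the formulation assumption and the block count below are hypothetical, and only the 85-edge figure comes from the contest. If one ordering binary were needed per train pair per track resource, merging runs of edges between sidings into single blocks would shrink the binary count in direct proportion:

```python
from math import comb

# Hypothetical accounting, not the thesis's formulation: assume one ordering
# binary per pair of trains for every track resource they could both occupy.
n_trains = 6
n_edges = 85     # edge count of the RAS contest network
n_blocks = 30    # assumed resource count after merging edges between sidings

def n_binaries(n_resources):
    return comb(n_trains, 2) * n_resources

print("per-edge model: ", n_binaries(n_edges))    # 1275 ordering binaries
print("per-block model:", n_binaries(n_blocks))   # 450 ordering binaries
```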
218

Deriving Consensus Rankings from Benchmarking Experiments

Hornik, Kurt, Meyer, David January 2006 (has links) (PDF)
Whereas benchmarking experiments are very frequently used to investigate the performance of statistical or machine learning algorithms for supervised and unsupervised learning tasks, overall analyses of such experiments are typically only carried out on a heuristic basis, if at all. We suggest determining winners and, more generally, deriving a consensus ranking of the algorithms, as the linear order on the algorithms which minimizes the average symmetric distance (Kemeny-Snell distance) to the performance relations on the individual benchmark data sets. This leads to binary programming problems which can typically be solved reasonably efficiently. We apply the approach to a medium-scale benchmarking experiment to assess the performance of Support Vector Machines in regression and classification problems, and compare the obtained consensus ranking with rankings obtained by simple scoring and Bradley-Terry modeling. / Series: Research Report Series / Department of Statistics and Mathematics
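The binary program has a compact statement as a linear ordering problem. The sketch below (toy algorithm names and rankings, solved with PuLP/CBC rather than the authors' tooling) uses one binary per ordered pair, pairwise antisymmetry, no-3-cycle constraints for transitivity, and an objective counting the data sets that disagree with each chosen pairwise order:

```python
import pulp

algs = ["svm", "rf", "nnet", "lda"]
# rankings[k][a] = position of algorithm a on data set k (1 = best); toy data
rankings = [{"svm": 1, "rf": 2, "nnet": 3, "lda": 4},
            {"svm": 2, "rf": 1, "nnet": 3, "lda": 4},
            {"svm": 1, "rf": 3, "nnet": 2, "lda": 4}]

prob = pulp.LpProblem("kemeny", pulp.LpMinimize)
x = {(a, b): pulp.LpVariable(f"x_{a}_{b}", cat="Binary")
     for a in algs for b in algs if a != b}   # x[a,b] = 1 iff a ranked above b

# cost of placing a above b = number of data sets that prefer b to a
w = {(a, b): sum(r[b] < r[a] for r in rankings) for a, b in x}
prob += pulp.lpSum(w[a, b] * x[a, b] for a, b in x)

for a, b in x:
    if a < b:
        prob += x[a, b] + x[b, a] == 1                     # antisymmetry
for a in algs:
    for b in algs:
        for c in algs:
            if len({a, b, c}) == 3:
                prob += x[a, b] + x[b, c] + x[c, a] <= 2   # transitivity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
order = sorted(algs, key=lambda a: -sum(x[a, b].value()
                                        for b in algs if b != a))
print(order)
```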
219

What is the Minimal Systemic Risk in Financial Exposure Networks? INET Oxford Working Paper, 2019-03

Diem, Christian, Pichler, Anton, Thurner, Stefan January 2019 (has links) (PDF)
Management of systemic risk in financial markets is traditionally associated with setting (higher) capital requirements for market participants. There are indications that while equity ratios have been increased massively since the financial crisis, systemic risk levels might not have decreased, and may even have increased (see ECB data; SRISK time series). It has been shown that systemic risk is to a large extent related to the underlying network topology of financial exposures. A natural question arising is how much systemic risk can be eliminated by optimally rearranging these networks, without increasing capital requirements. Overlapping portfolios with minimized systemic risk which provide the same market functionality as empirical ones have been studied by Pichler et al. (2018). Here we propose a similar method for direct exposure networks, and apply it to cross-sectional interbank loan networks, consisting of 10 quarterly observations of the Austrian interbank market. We show that the suggested framework rearranges the network topology such that systemic risk is reduced by a factor of approximately 3.5, and leaves the relevant economic features of the optimized network and its agents unchanged. The presented optimization procedure is not intended to actually re-configure interbank markets, but to demonstrate the huge potential for systemic risk management through rearranging exposure networks, in contrast to increasing capital requirements, which were shown to have only marginal effects on systemic risk (Poledna et al., 2017). Ways to actually incentivize a self-organized formation toward optimal network configurations were introduced in Thurner and Poledna (2013) and Poledna and Thurner (2016). For regulatory policies concerning financial market stability, the knowledge of minimal systemic risk for a given economic environment can serve as a benchmark for monitoring actual systemic risk in markets.
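The rearrangement question can be posed as an optimization over exposure matrices with fixed marginals. The toy below keeps each bank's total lending and borrowing fixed, forbids self-loops, and minimizes an invented linear risk proxy; the paper itself minimizes a DebtRank-based systemic risk measure on real Austrian interbank data.

```python
import numpy as np
from scipy.optimize import linprog

# Toy interbank exposure matrix L[i, j]: bank i's loan to bank j (invented).
L = np.array([[0., 5., 2.],
              [3., 0., 4.],
              [1., 6., 0.]])
n = L.shape[0]
row, col = L.sum(axis=1), L.sum(axis=0)  # lending/borrowing totals, kept fixed

risk = np.array([0.2, 0.9, 0.4])   # assumed riskiness of lending to bank j
c = np.tile(risk, n)               # cost of exposure (i, j) is risk[j]

A_eq, b_eq = [], []
for i in range(n):                 # fix each bank's total lending (row sum)
    a = np.zeros(n * n); a[i * n:(i + 1) * n] = 1.0
    A_eq.append(a); b_eq.append(row[i])
for j in range(n):                 # fix each bank's total borrowing (col sum)
    a = np.zeros(n * n); a[j::n] = 1.0
    A_eq.append(a); b_eq.append(col[j])

# zero diagonal (no self-loans), all other exposures nonnegative
bounds = [(0.0, 0.0) if k // n == k % n else (0.0, None) for k in range(n * n)]
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds,
              method="highs")
print(res.x.reshape(n, n).round(2))  # rearranged exposure matrix
```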
220

Heurísticas de programação linear inteira para resolução de problemas de programação de frota com restrições de sincronização. / Integer linear programming heuristics to solve fleet scheduling problems with synchronization constraints.

Tamura, Kelvin Yuso 09 May 2019 (has links)
A presente pesquisa aborda um problema de programação de veículos rico, em que a característica mais importante é a demanda de múltiplas embarcações para atendimento a uma única tarefa. Trata-se de uma aplicação real do setor de apoio marítimo "offshore", das embarcações que fazem o reboque e o lançamento de linhas de ancoragem de sondas de perfuração e unidades de produção. Como método de solução, aplicaram-se duas heurísticas com uma abordagem híbrida que incluem uma inserção baseada em programação linear inteira, visando a minimização do custo total da operação, dentro de um tempo de processamento aceitável. / This research deals with a rich vehicle scheduling problem, whose most important feature is the demand for multiple vessels per task. It is a real problem from the offshore maritime support sector, concerning the vessels that undertake the towing and the launching of mooring lines of drilling rigs and production units. As a solution method, two heuristics with a hybrid approach were applied, which include an insertion based on integer linear programming, aiming to minimize the total cost of the operation within an acceptable processing time.
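The defining feature, several vessels synchronized on one task, fits naturally into a small ILP. The sketch below uses invented vessels, tasks, and costs for a single time window, in which assigning a vessel to a task implies it works that task's window, so meeting each task's vessel count enforces the synchronization:

```python
import pulp

vessels = ["V1", "V2", "V3", "V4", "V5"]
need = {"anchor_rig_A": 2, "tow_platform_B": 3}  # vessels required together (assumed)
cost = {("V1", "anchor_rig_A"): 10, ("V1", "tow_platform_B"): 14,
        ("V2", "anchor_rig_A"): 12, ("V2", "tow_platform_B"): 9,
        ("V3", "anchor_rig_A"): 8,  ("V3", "tow_platform_B"): 11,
        ("V4", "anchor_rig_A"): 15, ("V4", "tow_platform_B"): 10,
        ("V5", "anchor_rig_A"): 9,  ("V5", "tow_platform_B"): 13}

prob = pulp.LpProblem("vessel_assignment", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", cost.keys(), cat="Binary")
prob += pulp.lpSum(cost[k] * x[k] for k in cost)

for t, k in need.items():
    prob += pulp.lpSum(x[v, t] for v in vessels) == k  # task needs k vessels at once
for v in vessels:
    prob += pulp.lpSum(x[v, t] for t in need) <= 1     # one task per vessel per window

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([(v, t) for (v, t) in cost if x[v, t].value() == 1])
```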
