281 | Complex question answering: minimizing the gaps and beyond. Hasan, Sheikh Sadid Al, January 2013
Current Question Answering (QA) systems have advanced significantly in their ability to answer simple factoid and list questions. Such questions are easier to process because they require only small snippets of text as answers. However, there is a category of questions that represents a more complex information need, one that cannot be satisfied by simply extracting a single entity or a single sentence. For example, the question “How was Japan affected by the earthquake?” suggests that the inquirer is looking for information from a wider perspective. We call these “complex questions” and focus on the task of answering them, with the intention of minimizing the existing gaps in the literature.
The major limitation of available search and QA systems is that they lack a way of measuring whether a user is satisfied with the information provided. This motivated us to propose a reinforcement learning formulation of the complex question answering problem. Next, we presented an integer linear programming (ILP) formulation in which sentence compression models were applied to the query-focused multi-document summarization task, in order to investigate whether sentence compression improves overall performance. Both compression and summarization were treated as global optimization problems. We also investigated the impact of syntactic and semantic information in a graph-based random walk method for answering complex questions. Decomposing a complex question into a series of simple questions, and then reusing the techniques developed for answering simple questions, is an effective means of answering complex questions; in this work we proposed a supervised approach for automatically learning good decompositions of complex questions. Finally, since a complex question often asks about a topic of the user's interest, the problem of complex question decomposition is closely related to topic-to-question generation. We addressed this challenge and proposed a topic-to-question generation approach to broaden the scope of our problem domain. / xi, 192 leaves : ill. ; 29 cm
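The compression-and-summarization step above treats query-focused summarization as a global optimization. As a hedged illustration (not the thesis's actual ILP; the relevance scores, lengths and budget below are invented), a tiny instance can be solved exactly by enumerating subsets, which plays the role an ILP solver would play at scale:

```python
from itertools import combinations

def select_sentences(relevance, lengths, budget):
    """Exact solution of a tiny sentence-selection problem:
    maximize total relevance subject to a length budget.
    (Brute force stands in for an ILP solver on small instances.)"""
    n = len(relevance)
    best, best_val = (), 0.0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(lengths[i] for i in subset) <= budget:
                val = sum(relevance[i] for i in subset)
                if val > best_val:
                    best, best_val = subset, val
    return set(best), best_val

# Toy instance: relevance scores and word counts for 4 candidate sentences.
chosen, score = select_sentences([0.9, 0.7, 0.4, 0.8], [20, 15, 10, 25], 40)
```

A real query-focused model would additionally penalize redundancy between selected sentences and couple selection with compression decisions, which is what makes the joint problem a genuine global optimization.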

282 | Decomposition and diet problems. Hamilton, Daniel, January 2010
The purpose of this thesis is to solve real-life problems efficiently. We study LPs, an NLP and an MINLP based on what is known as the generalised pooling problem (GPP), and an MIP that we call the cattle mating problem. These problems are often very large or otherwise difficult to solve by direct methods, and are best solved by decomposition methods. Throughout the thesis we introduce algorithms that exploit the structure of the problems to decompose them. We are able to solve row-linked, column-linked and general LPs efficiently by modifying the tableau simplex method, and suggest how this work could be applied to the revised simplex method. We modify an existing sequential linear programming solver that is currently used by Format International to solve GPPs, and show that the modified solver takes less time and is at least as likely to find the global minimum as the old solver. We solve multi-factory versions of the GPP by augmented Lagrangian decomposition, and show this is more efficient than solving the problems directly. We introduce a decomposition algorithm to solve an MINLP version of the GPP by decomposing it into NLP and ILP subproblems; this is able to solve large problems that could not be solved directly. We also introduce an efficient decomposition algorithm to solve the MIP cattle mating problem, which has been adopted for use by the Irish Cattle Breeding Federation. Most of the solution methods we introduce are designed only to find local minima. However, for the multi-factory version of the GPP we introduce two methods that give a good chance of finding the global minimum, both of which succeed in finding the global minimum on test problems.
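The tableau simplex method that the thesis modifies can be sketched in a few lines. This is a minimal textbook implementation, not the author's row-linked/column-linked variant: it assumes a maximization problem of the form Ax <= b with b >= 0, uses Dantzig's most-negative-reduced-cost pivot rule, and omits anti-cycling safeguards:

```python
def simplex(c, A, b):
    """Minimal tableau simplex for max c'x s.t. Ax <= b, x >= 0 (b >= 0).
    Returns the optimal objective value. Toy sketch only."""
    m, n = len(A), len(c)
    # Tableau: constraint rows with slack variables, then the objective row.
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]] for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * (m + 1))
    while True:
        # Entering variable: most negative reduced cost.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] >= -1e-9:
            return T[-1][-1]  # all reduced costs nonnegative: optimal
        # Leaving variable: minimum ratio test.
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("unbounded LP")
        _, row = min(ratios)
        # Pivot on (row, col).
        piv = T[row][col]
        T[row] = [x / piv for x in T[row]]
        for i in range(m + 1):
            if i != row:
                f = T[i][col]
                T[i] = [a - f * r for a, r in zip(T[i], T[row])]

# max 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6  ->  optimum 12 at (4, 0)
val = simplex([3.0, 2.0], [[1.0, 1.0], [1.0, 3.0]], [4.0, 6.0])
```

Exploiting row-linked or column-linked structure, as the thesis does, amounts to restricting which rows and columns each pivot may touch so that independent blocks can be processed largely separately.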

283 | An enhanced implementation of models for electric power grid interdiction. Carnal, David D., 09 1900
This thesis evaluates the ability of the Xpress-MP software package to solve complex, iterative mathematical programming problems. The impetus is the need to improve solution times for the VEGA software package, which identifies vulnerabilities to terrorist attacks in electric power grids. VEGA employs an iterative, optimizing heuristic, which may need to solve hundreds of related linear programs. This heuristic has been implemented in GAMS (General Algebraic Modeling System), whose inefficiencies in data handling and model generation mean that a modest, 50-iteration solution of a real-world problem can require over five hours to run. This slowness defeats VEGA's ultimate purpose: evaluating vulnerability-reducing structural improvements to a power grid. We demonstrate that Xpress-MP can reduce run times by 60%-85% because of its more efficient data handling, faster model generation, and its ability, lacking entirely in GAMS, to solve related models without regenerating each from scratch. Xpress-MP's modeling language, Mosel, encompasses a full-featured procedural language, also lacking in GAMS, which enables a simpler, more modular and more maintainable implementation. We also demonstrate the value of VEGA's optimizing heuristic by comparing it to rule-based heuristics adapted from the literature; the optimizing heuristic is much more powerful.

284 | Approximate dynamic programming and aerial refueling. Panos, Dennis C., 06 1900
Aerial refueling is an integral part of the United States military's ability to strike targets around the world with an overwhelming and continuous projection of force. However, with an aging fleet of refueling tankers and an indefinite replacement schedule, optimizing tanker usage is vital to national security. Optimizing tanker and receiver refueling operations is a complicated endeavor, as it can involve over a thousand missions during a 24-hour period, as in Operation Iraqi Freedom and Operation Enduring Freedom. Therefore, a planning model that increases receiver mission capability while reducing demands on tankers can be used by the military to extend the capabilities of the current tanker fleet. Aerial refueling optimization software, created in CASTLE Laboratory, solves the aerial refueling problem through a multi-period approximate dynamic programming approach. The multi-period approach is built around sequential linear programs, which incorporate value functions, to find the optimal refueling tracks for receivers and tankers. The use of value functions allows for a solution that optimizes over the entire horizon of the planning period. This approach differs greatly from the myopic optimization currently in use by the Air Force and produces superior results. The aerial refueling model produces fast, consistent, robust results that require fewer tankers than current planning methods. The results are flexible enough to incorporate stochastic inputs, such as varying refueling times and receiver mission loads, while still meeting all receiver refueling requirements. The model's ability to handle real-world uncertainties while optimizing better than current methods provides a great leap forward in aerial refueling optimization. The aerial refueling model, created in CASTLE Lab, can extend the capabilities of the current tanker fleet. / Contract number: N00244-99-G-0019 / US Navy (USN) author.
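The contrast drawn above between planning with value functions and myopic optimization can be illustrated with a deliberately tiny stand-in. The fuel amounts, demands and priorities below are invented, and exact backward induction replaces the sequential-LP approximation used in the actual model:

```python
def dp_plan(fuel, demands, priorities):
    """Backward induction: V[t][f] is the best priority-weighted demand served
    from period t onward with f units of fuel left. A toy stand-in for the
    value functions used in approximate dynamic programming."""
    T = len(demands)
    V = [[0.0] * (fuel + 1) for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for f in range(fuel + 1):
            # Offload a units now, carry f - a into the future value function.
            V[t][f] = max(priorities[t] * min(a, demands[t]) + V[t + 1][f - a]
                          for a in range(f + 1))
    return V[0][fuel]

def myopic_plan(fuel, demands, priorities):
    """Serve each period's demand in full while fuel lasts (no look-ahead)."""
    total = 0.0
    for d, p in zip(demands, priorities):
        a = min(d, fuel)
        total += p * a
        fuel -= a
    return total

# Two periods, 3 units of fuel; the later mission is higher priority.
# The myopic plan spends everything early; the DP reserves fuel for period 2.
dp_val = dp_plan(3, [3, 3], [1.0, 2.0])
my_val = myopic_plan(3, [3, 3], [1.0, 2.0])
```

Here the value function V[t+1] is what lets period-t decisions account for the rest of the horizon, which is exactly the advantage claimed over myopic planning.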

285 | Linear programming to determine molecular orientation at surfaces through vibrational spectroscopy. Chen, Fei, 03 May 2017
Applying linear programming (LP) to spectroscopic techniques such as IR, Raman and SFG is a new approach to extracting molecular orientation information at surfaces. Previous research by Hung showed that applying LP reduces the computational cost from O(n!) to O(n). However, this LP approach does not always return the known molecular orientation distribution when mock spectral information is used to build the model instance. The first goal of our study is to determine the cause of the failed LP instances. Beyond that, we want to know, for different cases, with what spectral information the correct molecular orientation can be expected when using LP. To achieve these goals, a simplified molecular model is designed to study the nature of our LP model. With the insight gained, we apply the LP approach to various test cases to verify whether it can be applied systematically in different circumstances. We reach the following conclusions. With the help of the simplified molecular model, we find that the LP solver fails to return the target composition when a sufficient data set cannot be extracted from the given spectral information to build the LP instance. When the candidates come from one and the same molecule, even combining all three kinds of spectral information (IR, Raman and SFG) does not yield a sufficient data set to obtain the target composition in most cases. When the candidates come from different molecules, Raman or SFG spectral information alone contains a sufficient data set when the candidates of each molecule are spread over [0°, 90°) in θ. When the candidates of each molecule are spread over [0°, 180°] in θ, excluding 90°, SFG spectral information must be combined with IR or Raman to obtain a sufficient data set.
When a slack variable is introduced for each spectral technique, for candidates coming from different molecules, Raman spectral information alone carries a sufficient data set to obtain the target composition when the candidates are spread over [0°, 90°) in θ. When the candidates are spread over [0°, 180°] in θ, excluding 90°, SFG and Raman spectral information together carry a sufficient data set to obtain the target composition. / Graduate
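The notion of a "sufficient data set" can be illustrated on the smallest possible case. In this hypothetical two-candidate sketch (the spectra below are invented, not real IR/Raman/SFG data), the composition is recoverable exactly when the candidate spectra are linearly independent:

```python
def recover_composition(s1, s2, measured):
    """Solve w1*s1 + w2*s2 = measured for two candidate orientations using
    two spectral data points (Cramer's rule). Returns (w1, w2), or None if
    the candidate spectra are linearly dependent -- i.e. the spectral
    information is insufficient to pin down the composition."""
    det = s1[0] * s2[1] - s1[1] * s2[0]
    if abs(det) < 1e-12:
        return None  # insufficient data: candidates are indistinguishable
    w1 = (measured[0] * s2[1] - measured[1] * s2[0]) / det
    w2 = (s1[0] * measured[1] - s1[1] * measured[0]) / det
    return w1, w2

# Independent candidate spectra: the 50/50 composition is recovered uniquely.
ok = recover_composition([1.0, 0.2], [0.3, 1.0], [0.65, 0.6])
# Parallel candidate spectra: no amount of this data suffices.
bad = recover_composition([1.0, 0.5], [2.0, 1.0], [1.5, 0.75])
```

The full LP approach generalizes this idea to many candidate orientations and inequality constraints, but the failure mode is the same: dependent candidate responses leave the composition underdetermined.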

286 | A Time-Evolving Optimization Model for an Intermodal Distribution Supply Chain Network: A Case Study at a Healthcare Company. Johansson, Sara; Westberg, My, January 2016
Enticed by the promise of larger sales and better access to customers, consumer goods companies (CGCs) are increasingly looking to bypass traditional retailers and reach their customers directly, with a direct-to-customer (DTC) policy. The DTC trend has come to have a major impact on logistics operations and distribution channels. It offers significant opportunities for CGCs and wholesale brands to better control their supply chain network by circumventing middlemen or retailers. However, to do so, CGCs may need to develop their omni-channel strategies and strengthen supply chain parameters such as fulfillment, inventory flow, and goods distribution. This may give rise to changes in the supply chain network at the strategic, tactical and operational levels. Motivated by recent interest in the DTC trend, this master thesis considers the time-evolving supply chain system of an international healthcare company with a preordained configuration. The input is a bottleneck part of the company's distribution network, involving 20%-25% of its total market. A mixed-integer linear programming (MILP) multi-period optimization model is developed to make tactical decisions for designing the distribution network, or more specifically, for determining the best strategy for distributing products from the manufacturing plant to the primary distribution center and/or regional distribution centers, and from them to customers. The company has one manufacturing site (Mfg), one primary distribution center (PDP) and three different regional distribution centers (RDPs) worldwide, and customers can be supplied from different plants with various transportation modes at different costs and lead times. The company's motivation is to investigate the possibility of reducing distribution costs by supplying most of the demand in time directly from the plants. The model selects the best option for each customer by making trade-offs among criteria involving distribution costs and lead times.
To capture seasonal variability and account for market fluctuations, the model considers a full time horizon of one year. The model is analyzed and developed step by step, and its functionality is demonstrated by conducting experiments on the distribution network from our case study. In addition, the topology of the case-study distribution network is used to create random instances with random parameters, and the model is evaluated on these instances as well. The computational experiments show that the model finds good-quality solutions, and demonstrate that significant cost reduction and modality improvement can be achieved in the distribution network. Using one year of actual data, it is shown that the ratio of direct shipments could improve substantially. However, many factors can impact the results, such as short-term decisions at the operational level (like scheduling) as well as demand fluctuations, taxes, business rules, etc. Based on the results and managerial considerations, some possible extensions and final recommendations for the distribution chain are offered. Furthermore, an extensive sensitivity analysis is conducted to show the effect of the model's parameters on its performance. The sensitivity analysis employs a set of data from our case study and randomly generated data to highlight certain features of the model and provide some insights into its behaviour.
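The per-customer trade-off between distribution cost and lead time can be sketched on invented toy data. Plain enumeration stands in for the MILP here, since in this stripped-down setting each customer's routing choice is independent (the real model couples customers through capacities and time periods):

```python
def choose_routes(customers, options):
    """Pick, for each customer, the cheapest shipping option that meets its
    lead-time requirement. options[c] is a list of (name, cost, lead_time)."""
    plan = {}
    for c, max_lead in customers.items():
        feasible = [(cost, name) for name, cost, lt in options[c] if lt <= max_lead]
        if not feasible:
            raise ValueError(f"no feasible route for customer {c}")
        cost, name = min(feasible)  # cheapest feasible option
        plan[c] = (name, cost)
    return plan

# Hypothetical data: direct shipment is cheaper but slower than via the RDC.
customers = {"A": 10, "B": 3}  # maximum acceptable lead time in days
options = {
    "A": [("direct", 50, 7), ("via_RDC", 80, 2)],
    "B": [("direct", 50, 7), ("via_RDC", 80, 2)],
}
plan = choose_routes(customers, options)
# Customer A tolerates 10 days, so direct shipping wins; B needs 3 days,
# so only the faster RDC route is feasible.
```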

287 | Applications of optimization to sovereign debt issuance. Abdel-Jawad, Malek, January 2013
This thesis investigates different issues related to the issuance of debt by sovereign bodies, such as governments, under uncertainty about future interest rates. Several dynamic models of interest rates are presented, along with extensive numerical experiments on the calibration of the models and comparison of their performance on real financial market data. The main contribution of the thesis is the construction and demonstration of a stochastic optimisation model for debt issuance under interest rate uncertainty. When the uncertainty is modelled using a model from a certain class of single-factor interest rate models, one can construct a scenario tree such that the number of scenarios grows linearly with the number of time steps. An optimization model is constructed using such a one-factor scenario tree. For a real government debt issuance remit, a multi-stage stochastic optimization is performed to choose the type and the amount of debt to be issued, and the results are compared with the actual issuance. The simulation models currently used by the government, which are in the public domain, are also reviewed. Using an optimization model such as the one proposed in this work can lead to substantial savings in the servicing costs of the issued debt.
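The linear growth of the scenario tree comes from using a recombining lattice for a single-factor rate model. A minimal sketch (the initial rate, step size and horizon below are invented; a real tree would also carry transition probabilities and debt decisions at each node):

```python
def rate_lattice(r0, step, T):
    """Recombining binomial lattice for a one-factor short-rate model: at
    each period the rate moves up or down by `step`. Because up-down and
    down-up paths recombine, period t has only t + 1 distinct nodes --
    the linear growth that keeps the scenario-tree optimization tractable."""
    return [[r0 + (2 * j - t) * step for j in range(t + 1)] for t in range(T + 1)]

lattice = rate_lattice(0.03, 0.005, 4)
node_counts = [len(level) for level in lattice]  # grows linearly: 1, 2, 3, ...
```

By contrast, a non-recombining tree has 2^t nodes at period t, which is what makes multi-factor or path-dependent models so much harder to embed in a stochastic program.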

288 | Energy system analysis. Soundararajan, Ranjith, January 2017
The purpose of this thesis is to optimize the heat exchanger network for a process industry and to estimate the minimum cost of the network, while satisfying the energy demand of each stream as far as possible, with the help of MATLAB. Here, the optimization is done without considering stream splitting or stream combining. The first phase involves deriving a simple heat exchanger network consisting of four streams, i.e. two hot streams and two cold streams, using the traditional pinch analysis method. The second phase deals with randomly placing heat exchangers between the hot and cold streams and calculating the minimum cost of the network using a genetic algorithm, in which thousands of randomly generated heat exchanger networks are evolved over a series of populations.
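The genetic-algorithm phase can be sketched with a much simpler encoding than the thesis uses (and in Python rather than MATLAB): here a chromosome is just an assignment of hot streams to cold streams, and the "cost" is an invented duty-mismatch measure rather than a real exchanger costing:

```python
import random

def ga_match(hot, cold, generations=200, pop_size=30, seed=1):
    """Toy genetic algorithm: match each hot stream to a cold stream (a
    permutation) so the total duty mismatch is minimized. Selection keeps
    the better half of the population; swap mutation generates children."""
    rng = random.Random(seed)
    n = len(hot)
    def cost(perm):
        return sum(abs(hot[i] - cold[perm[i]]) for i in range(n))
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]
        children = []
        for p in survivors:
            child = p[:]
            i, j = rng.randrange(n), rng.randrange(n)
            child[i], child[j] = child[j], child[i]  # swap mutation
            children.append(child)
        pop = survivors + children
    best = min(pop, key=cost)
    return best, cost(best)

# Invented stream duties; the best matching pairs 100<->95, 60<->65, 30<->32.
hot = [100.0, 60.0, 30.0]
cold = [32.0, 95.0, 65.0]
best, best_cost = ga_match(hot, cold)
```

A real heat exchanger network GA would encode exchanger placements, areas and utility usage, and cost them against capital and energy prices, but the evolve-select-mutate loop is the same.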

289 | Effective Network Partitioning to Find MIP Solutions to the Train Dispatching Problem. Snellings, Christopher, 19 June 2013
Each year the Railway Applications Section (RAS) of the Institute for Operations Research and the Management Sciences (INFORMS) poses a research problem to the world in the form of a competition. For 2012, the contest involved solving the Train Dispatching Problem (TDP) on a realistic 85-edge network for three different sets of input data. This work is an independent attempt to match or improve upon the results of the top three finishers in the contest using mixed integer programming (MIP) techniques while minimizing the use of heuristics. The primary focus is to partition the network in a manner that reduces the number of binary variables in the formulation as much as possible without compromising the ability to satisfy any of the contest requirements. This made it possible to solve the model optimally for RAS Data Set 1 in 29 seconds without any problem-specific heuristics, variable restrictions, or variable fixing. Applying some assumptions about train movements allowed the same Data Set 1 solution to be found in 5.4 seconds. After breaking the larger Data Sets 2 and 3 into smaller sub-problems, the solutions for Data Sets 2 and 3 were 28% and 1% better, respectively, than those of the competition winner. The times to obtain solutions for Data Sets 2 and 3 were 90 and 318 seconds, respectively.
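Why partitioning shrinks the formulation can be seen from a rough variable count: meet/pass ordering typically needs on the order of one binary per pair of trains per shared edge, so confining train pairs to small blocks cuts the count sharply. The block sizes below are invented for illustration; only the 85-edge total echoes the contest network:

```python
def binary_count(trains_per_block, edges_per_block):
    """Rough count of pairwise ordering binaries: one per train pair per
    edge of the block in which the pair can meet. Partitioning the network
    so that few train pairs share a block shrinks this count sharply."""
    total = 0
    for t, e in zip(trains_per_block, edges_per_block):
        total += (t * (t - 1) // 2) * e  # train pairs in block * block edges
    return total

# Monolithic model: 20 trains could in principle meet on any of 85 edges.
mono = binary_count([20], [85])
# Partitioned model: four blocks of ~5 trains over ~21 edges each.
part = binary_count([5, 5, 5, 5], [21, 21, 21, 22])
```

This back-of-envelope count drops from 16,150 to 850 binaries, which is the kind of reduction that turns an intractable MIP into one solvable in seconds.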

290 | Deriving Consensus Rankings from Benchmarking Experiments. Hornik, Kurt; Meyer, David, January 2006
Whereas benchmarking experiments are very frequently used to investigate the performance of statistical or machine learning algorithms on supervised and unsupervised learning tasks, overall analyses of such experiments are typically carried out only on a heuristic basis, if at all. We suggest determining winners, and more generally deriving a consensus ranking of the algorithms, as the linear order on the algorithms which minimizes the average symmetric distance (Kemeny-Snell distance) to the performance relations on the individual benchmark data sets. This leads to binary programming problems which can typically be solved reasonably efficiently. We apply the approach to a medium-scale benchmarking experiment assessing the performance of Support Vector Machines in regression and classification problems, and compare the obtained consensus ranking with rankings obtained by simple scoring and Bradley-Terry modeling. / Series: Research Report Series / Department of Statistics and Mathematics
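For tiny instances the Kemeny-Snell consensus can be computed exactly by enumerating all linear orders, which makes the binary-programming objective concrete (the three example rankings below are invented, not from the paper's SVM benchmark):

```python
from itertools import combinations, permutations

def kemeny_consensus(rankings):
    """Exact Kemeny consensus by enumeration: the linear order minimizing
    total pairwise disagreement with the input rankings. (The paper solves
    this as a binary program; brute force suffices for tiny instances.)"""
    items = sorted(rankings[0])
    def disagreements(order):
        pos = {x: i for i, x in enumerate(order)}
        d = 0
        for r in rankings:
            rpos = {x: i for i, x in enumerate(r)}
            for a, b in combinations(items, 2):
                # Count each pair ordered differently than in ranking r.
                if (pos[a] < pos[b]) != (rpos[a] < rpos[b]):
                    d += 1
        return d
    return min(permutations(items), key=disagreements)

# Three "benchmark data sets" rank algorithms A, B, C; two prefer A over B,
# all prefer A over C, and two prefer B over C, so the consensus is A, B, C.
consensus = kemeny_consensus([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]])
```

The binary-program formulation replaces the enumeration with 0/1 precedence variables x_ab plus transitivity constraints, which is what keeps the problem solvable for realistically many algorithms.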