281
Improving the Scalability of an Exact Approach for Frequent Item Set Hiding. LaMacchia, Carolyn, 01 January 2013
Technological advances have led to the generation of large databases of organizational data recognized as an information-rich, strategic asset for internal analysis and sharing with trading partners. Data mining techniques can discover patterns in large databases, including relationships considered strategically relevant to the owner of the data. The frequent item set hiding problem is an area of active research that studies approaches for hiding sensitive knowledge patterns before disclosing the data outside the organization. Several methods address hiding sensitive item sets, including an exact approach that generates an extension to the original database that, when combined with the original database, limits the discovery of sensitive association rules without impacting other non-sensitive information. To generate the database extension, this method formulates a constraint optimization problem (COP). Solving the COP formulation is the dominant factor in the computational resource requirements of the exact approach. This dissertation developed heuristics that address the scalability of the exact hiding method. The heuristics are directed at improving the performance of the COP solver by reducing the size of the COP formulation without significantly affecting the quality of the solutions generated. The first heuristic decomposes the COP formulation into multiple smaller problem instances that are processed separately by the COP solver to generate partial extensions of the database. The smaller database extensions are then combined to form a database extension that is close to the database extension generated with the original, larger COP formulation. The second heuristic evaluates the revised border used to formulate the COP and reduces the number of variables and constraints by selectively substituting multiple item sets with composite variables. Solving the COP with fewer variables and constraints reduces the computational cost of the processing. Results of heuristic processing were compared with an existing exact approach in terms of the size of the database extension, the ability to hide sensitive data, and the impact on non-sensitive data.
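To make the COP idea concrete, here is a minimal sketch of the kind of model such an extension-based approach might build, assuming a toy three-item database, a fixed extension size, and the PuLP library; it illustrates the general technique only, not the dissertation's formulation or its heuristics.

```python
# Toy illustration only: choose the items placed in a small database extension so
# that a sensitive itemset drops below the mining threshold while a non-sensitive
# itemset stays frequent. Item names, counts and the threshold are made up.
import math
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

ITEMS = ["A", "B", "C"]
N_ORIG, K_EXT = 6, 3            # original transactions / extension transactions (fixed)
SIGMA = 0.5                     # relative support threshold used by the miner
FREQ = math.ceil(SIGMA * (N_ORIG + K_EXT))   # frequent iff count >= FREQ in combined DB

support = {("A", "B"): 4, ("C",): 4}          # itemset counts in the original database
sensitive = [("A", "B")]                      # must become infrequent
keep = [("C",)]                               # must remain frequent

prob = LpProblem("itemset_hiding_extension", LpMinimize)
q = LpVariable.dicts("q", [(t, i) for t in range(K_EXT) for i in ITEMS], cat=LpBinary)

# z[t, S] = 1 iff extension transaction t contains every item of itemset S.
z = {}
for S in sensitive + keep:
    for t in range(K_EXT):
        z[t, S] = LpVariable(f"z_{t}_{'_'.join(S)}", cat=LpBinary)
        for i in S:
            prob += z[t, S] <= q[t, i]
        prob += z[t, S] >= lpSum(q[t, i] for i in S) - (len(S) - 1)

prob += lpSum(q[t, i] for t in range(K_EXT) for i in ITEMS)   # minimal distortion

for S in sensitive:   # sensitive itemsets fall below the threshold in the combined DB
    prob += support[S] + lpSum(z[t, S] for t in range(K_EXT)) <= FREQ - 1
for S in keep:        # protected itemsets stay at or above the threshold
    prob += support[S] + lpSum(z[t, S] for t in range(K_EXT)) >= FREQ

prob.solve(PULP_CBC_CMD(msg=False))
for t in range(K_EXT):
    print(f"extension transaction {t}:", [i for i in ITEMS if q[t, i].value() > 0.5])
```

In this toy instance the solver places a single item C in the extension: that keeps {C} frequent, while the otherwise empty extension transactions dilute the relative support of the sensitive itemset {A, B} below the threshold.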
282
FINITE DISJUNCTIVE PROGRAMMING METHODS FOR GENERAL MIXED INTEGER LINEAR PROGRAMS. Chen, Binyuan, January 2011
In this dissertation, a finitely convergent disjunctive programming procedure, the Convex Hull Tree (CHT) algorithm, is proposed to obtain the convex hull of a general mixed-integer linear program with bounded integer variables. The CHT algorithm constructs a linear program that has the same optimal solution as the associated mixed-integer linear program. The standard notion of sequential cutting planes is then combined with ideas underlying the CHT algorithm to help guide the choice of disjunctions to use within a new cutting plane method, the Cutting Plane Tree (CPT) algorithm. We show that the CPT algorithm converges to an integer optimal solution of the general mixed-integer linear program with bounded integer variables in finitely many steps. We also enhance the CPT algorithm with several techniques including a “round-of-cuts” approach and an iterative method for solving the cut generation linear program (CGLP). Two normalization constraints are discussed in detail for solving the CGLP. For moderately sized instances, our study shows that the CPT algorithm provides significant gap closures with a pure cutting plane method.
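As a hedged illustration of the disjunctive idea behind such methods, the sketch below performs one round of a split cut on a two-variable toy problem, assuming the PuLP library: if one side of a split disjunction is infeasible for the LP relaxation, the other side's bound is valid for every integer point and can be added as a cut. This is a generic textbook-style step, not the CHT or CPT algorithm itself.

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpStatus, value, PULP_CBC_CMD

def lp_relaxation():
    """max x + y  subject to  2x + 2y <= 3,  x, y >= 0 (integrality relaxed)."""
    prob = LpProblem("relaxation", LpMaximize)
    x = LpVariable("x", lowBound=0)
    y = LpVariable("y", lowBound=0)
    prob += x + y                    # objective
    prob += 2 * x + 2 * y <= 3
    return prob, x, y

# Split disjunction on x + y with threshold 1: every integer point satisfies
# x + y <= 1 or x + y >= 2. If the ">= 2" side is infeasible over the relaxation,
# then x + y <= 1 holds for all integer-feasible points, i.e. it is a valid cut.
side, sx, sy = lp_relaxation()
side += sx + sy >= 2
side.solve(PULP_CBC_CMD(msg=False))

prob, x, y = lp_relaxation()
if LpStatus[side.status] == "Infeasible":
    prob += x + y <= 1               # the surviving side of the disjunction becomes a cut

prob.solve(PULP_CBC_CMD(msg=False))
print("optimum after one cut round:", value(prob.objective),
      "at x =", x.value(), "y =", y.value())
```

On this instance the relaxation optimum x + y = 1.5 is fractional; the cut x + y <= 1 tightens the bound to the integer optimum of 1.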
283
Complex question answering: minimizing the gaps and beyond. Hasan, Sheikh Sadid Al, January 2013
Current Question Answering (QA) systems have advanced significantly in answering
simple factoid and list questions. Such questions are easier to process as they
require only small snippets of text as answers. However, there is
a category of questions that represents a more complex information need, which cannot
be satisfied easily by simply extracting a single entity or a single sentence. For example,
the question: “How was Japan affected by the earthquake?” suggests that the inquirer is
looking for information in the context of a wider perspective. We call these “complex questions”
and focus on the task of answering them with the aim of minimizing the existing
gaps in the literature.
The major limitation of the available search and QA systems is that they lack a way of
measuring whether a user is satisfied with the information provided. This was our motivation
to propose a reinforcement learning formulation of the complex question answering
problem. Next, we presented an integer linear programming formulation where sentence
compression models were applied for the query-focused multi-document summarization
task in order to investigate if sentence compression improves the overall performance.
Both compression and summarization were considered as global optimization problems.
We also investigated the impact of syntactic and semantic information in a graph-based
random walk method for answering complex questions. Decomposing a complex question
into a series of simple questions and then reusing the techniques developed for answering
simple questions is an effective means of answering complex questions. We proposed a
supervised approach for automatically learning good decompositions of complex questions
in this work. A complex question often asks about a topic of interest to the user. Therefore, the
problem of complex question decomposition is closely related to the problem of topic-to-question
generation. We addressed this challenge and proposed a topic-to-question generation
approach to enhance the scope of our problem domain.
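The summarization step described above lends itself to a compact illustration. The following is a minimal sketch, assuming the PuLP library and made-up relevance scores, of a query-focused sentence-selection ILP in the same global-optimization spirit; the thesis's actual model additionally incorporates sentence compression, which is omitted here.

```python
# Minimal sketch (not the thesis's model) of a query-focused summarization ILP:
# pick sentences that maximize estimated query relevance under a length budget.
from pulp import LpProblem, LpVariable, LpMaximize, lpSum, LpBinary, PULP_CBC_CMD

# (relevance score, length in words) per candidate sentence; scores are made up.
sentences = {
    "s1": (0.90, 28),
    "s2": (0.75, 35),
    "s3": (0.60, 12),
    "s4": (0.40, 20),
    "s5": (0.20, 15),
}
BUDGET = 60  # summary length budget in words

prob = LpProblem("query_focused_summary", LpMaximize)
pick = LpVariable.dicts("pick", sentences, cat=LpBinary)

prob += lpSum(sentences[s][0] * pick[s] for s in sentences)            # total relevance
prob += lpSum(sentences[s][1] * pick[s] for s in sentences) <= BUDGET  # length budget

# Simple redundancy control: if two sentences overlap heavily, keep at most one.
redundant_pairs = [("s1", "s2")]
for a, b in redundant_pairs:
    prob += pick[a] + pick[b] <= 1

prob.solve(PULP_CBC_CMD(msg=False))
print("summary:", [s for s in sentences if pick[s].value() > 0.5])
```

With these toy numbers the solver selects s1, s3 and s4: the highest-relevance set that fits the 60-word budget once the redundant pair (s1, s2) is limited to one member.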
284
Decomposition and diet problems. Hamilton, Daniel, January 2010
The purpose of this thesis is to solve real-life problems efficiently. We study LPs, an NLP and an MINLP based on what is known as the generalised pooling problem (GPP), and an MIP that we call the cattle mating problem. These problems are often very large or otherwise difficult to solve by direct methods, and are best solved by decomposition methods. During the thesis we introduce algorithms that exploit the structure of the problems to decompose them. We are able to solve row-linked, column-linked and general LPs efficiently by modifying the tableau simplex method, and suggest how this work could be applied to the revised simplex method. We modify an existing sequential linear programming solver that is currently used by Format International to solve GPPs, and show that the modified solver takes less time and is at least as likely to find the global minimum as the old solver. We solve multifactory versions of the GPP by augmented Lagrangian decomposition, and show that this is more efficient than solving the problems directly. We introduce a decomposition algorithm to solve an MINLP version of the GPP by decomposing it into NLP and ILP subproblems. This is able to solve large problems that could not be solved directly. We introduce an efficient decomposition algorithm to solve the MIP cattle mating problem, which has been adopted for use by the Irish Cattle Breeding Federation. Most of the solution methods we introduce are designed only to find local minima. However, for the multifactory version of the GPP we introduce two methods that give a good chance of finding the global minimum, both of which succeed in finding the global minimum on test problems.
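As a rough sketch of the augmented Lagrangian decomposition idea mentioned above, the toy below splits a two-block quadratic problem with a single linking constraint into closed-form subproblems coordinated by a multiplier update; the objective, penalty parameter and starting point are made up, and this is not the thesis's GPP solver.

```python
# Toy augmented Lagrangian decomposition: minimise (x - 3)^2 + (z - 1)^2 subject to
# the linking constraint x = z, with each block minimised separately.
RHO = 2.0      # penalty parameter (assumed)
lam = 0.0      # multiplier on the linking constraint x - z = 0
x = z = 0.0

for it in range(60):
    # Block 1: argmin_x (x - 3)^2 + lam*(x - z) + (RHO/2)*(x - z)^2  (closed form)
    x = (6.0 - lam + RHO * z) / (2.0 + RHO)
    # Block 2: argmin_z (z - 1)^2 + lam*(x - z) + (RHO/2)*(x - z)^2  (closed form)
    z = (2.0 + lam + RHO * x) / (2.0 + RHO)
    # Price update pushes the blocks toward agreement on the linking constraint.
    lam += RHO * (x - z)

print(f"x = {x:.4f}, z = {z:.4f}, multiplier = {lam:.4f}")   # both tend to 2.0
```

Each pass solves the two blocks independently against the augmented Lagrangian and then adjusts the price until the blocks agree on the linked variable.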
285
Optimizing cost versus time shipping of U.S. Navy retrograde materiel. Colbert, Charles W., 03 1900
Approved for public release; distribution is unlimited.
.08B in shipping and redistribution costs of Not Ready for Issue (NRFI) materiel. This thesis models the Naval Inventory Control Point (NAVICP) shipping of unserviceable but repairable (retrograde) Navy materiel, or Depot Level Repairables (DLRs). It develops an integer linear program to prescribe minimum-cost shipment recommendations for DLRs from fleet to repair locations within the NAVICP and Defense Logistics Agency (DLA) distribution system, subject to constraints on average shipping time (AveTime). NAVICP provided data on DLR shipments for one year, from which we construct six representative DLRs, three of aviation and three of maritime cognizance. We find that cost and time savings can be achieved for all representative DLRs by avoiding the use of DLA as storage prior to induction for repair. In this study we compare shipping costs for each of the six DLRs when we constrain AveTime from 2 to 8 days. We find that 2-day constrained AveTime shipping, on average, costs 18 times that of 7-day AveTime shipping, twice that of 3-day shipping, and between 5 and 11 times the cost of 4- through 6-day shipping.
Lieutenant Commander, United States Navy
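A minimal sketch of the kind of model described, assuming the PuLP library and made-up costs and transit times: each retrograde shipment is assigned to one candidate route so that total cost is minimized while the average shipping time stays within a cap. It illustrates the cost-versus-AveTime trade-off only, not the thesis's actual formulation or data.

```python
# Toy routing ILP: minimise shipping cost subject to an average-transit-time cap.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

# (cost in $, transit time in days) per candidate route, per shipment; numbers are made up.
routes = {
    "dlr_aviation_1": {"air_direct": (900, 2), "ground_via_depot": (220, 6)},
    "dlr_aviation_2": {"air_direct": (700, 2), "ground_via_depot": (180, 7)},
    "dlr_maritime_1": {"air_direct": (500, 3), "ground_via_depot": (120, 8)},
}
AVE_TIME_CAP = 5.0   # allowed average shipping time, in days

prob = LpProblem("retrograde_routing", LpMinimize)
x = LpVariable.dicts("x", [(d, r) for d in routes for r in routes[d]], cat=LpBinary)

prob += lpSum(routes[d][r][0] * x[d, r] for d in routes for r in routes[d])   # total cost

for d in routes:                                   # every shipment takes exactly one route
    prob += lpSum(x[d, r] for r in routes[d]) == 1

# Average transit time over all shipments must respect the cap.
prob += (lpSum(routes[d][r][1] * x[d, r] for d in routes for r in routes[d])
         <= AVE_TIME_CAP * len(routes))

prob.solve(PULP_CBC_CMD(msg=False))
for d in routes:
    print(d, "->", [r for r in routes[d] if x[d, r].value() > 0.5][0])
```

Re-solving over a range of AVE_TIME_CAP values traces out the kind of cost-versus-time curve the study reports.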
286
A Time-Evolving Optimization Model for an Intermodal Distribution Supply Chain Network: A Case Study at a Healthcare Company. Johansson, Sara; Westberg, My, January 2016
Enticed by the promise of larger sales and better access to customers, consumer goods companies (CGCs) are increasingly looking to bypass traditional retailers and reach their customers directly, with a direct-to-customer (DTC) policy. The DTC trend has come to have a major impact on logistics operations and distribution channels. It offers significant opportunities for CGCs and wholesale brands to better control their supply chain network by circumventing the middlemen or retailers. However, to do so, CGCs may need to develop their omni-channel strategies and fortify their supply chain parameters, such as fulfillment, inventory flow, and goods distribution. This may give rise to changes in the supply chain network at all strategic, tactical and operational levels. Motivated by recent interest in the DTC trend, this master thesis considers the time-evolving supply chain system of an international healthcare company with a preordained configuration. The input is a bottleneck part of the company's distribution network and involves 20-25% of its total market. A mixed-integer linear programming (MILP) multiperiod optimization model is developed to make tactical decisions for designing the distribution network, or more specifically, for determining the best strategy for distributing the products from the manufacturing plant to the primary distribution center and/or regional distribution centers and from them to customers. The company has one manufacturing site (Mfg), one primary distribution center (PDP) and three different regional distribution centers (RDPs) worldwide, and customers can be supplied from different plants with various transportation modes at different costs and lead times. The company's motivation is to investigate the possibility of reducing distribution costs by supplying most of the demand in time directly from the plants. The model selects the best option for each customer by making trade-offs among criteria involving distribution costs and lead times. To capture seasonal variability and account for market fluctuations, the model considers a full time horizon of one year. The model is analyzed and developed step by step, and its functionality is demonstrated by conducting experiments on the distribution network from our case study. In addition, the case-study distribution network topology is used to create random instances with random parameters, and the model is also evaluated on these instances. The computational experiments show that the model finds good-quality solutions and demonstrate that significant cost reduction and modality improvement can be achieved in the distribution network. Using one year of actual data, it has been shown that the share of direct shipments could be substantially improved. However, many factors can impact the results, such as short-term decisions at the operational level (like scheduling) as well as demand fluctuations, taxes, business rules, etc. Based on the results and managerial considerations, some possible extensions and final recommendations for the distribution chain are offered. Furthermore, an extensive sensitivity analysis is conducted to show the effect of the model's parameters on its performance. The sensitivity analysis employs a set of data from our case study and randomly generated data to highlight certain features of the model and provide some insights regarding its behaviour.
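The core trade-off the model captures (ship directly from the plant each period, or route through a distribution centre that can hold stock between periods) can be sketched compactly. The toy below assumes the PuLP library, a single customer, three periods, and made-up costs and capacities; it is not the thesis's model.

```python
# Toy multiperiod MILP (made-up data): per period, demand is met either directly
# from the plant (with a fixed cost whenever the direct channel is used) or from a
# regional distribution centre (RDC) that can carry inventory between periods.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum, PULP_CBC_CMD

periods = [1, 2, 3]
demand = {1: 40, 2: 90, 3: 60}                 # units demanded per period
COST_DIRECT, FIXED_DIRECT = 7.0, 100.0         # per-unit and per-period fixed direct cost
COST_VIA_RDC, HOLDING = 6.0, 0.5               # per-unit cost via the RDC, holding cost
RDC_INBOUND_CAP = 70                           # units the RDC can receive per period

prob = LpProblem("multiperiod_distribution", LpMinimize)
direct = LpVariable.dicts("direct", periods, lowBound=0)        # plant -> customer
to_rdc = LpVariable.dicts("to_rdc", periods, lowBound=0)        # plant -> RDC
from_rdc = LpVariable.dicts("from_rdc", periods, lowBound=0)    # RDC -> customer
inv = LpVariable.dicts("inv", [0] + periods, lowBound=0)        # RDC stock at period end
use_direct = LpVariable.dicts("use_direct", periods, cat=LpBinary)

prob += lpSum(COST_DIRECT * direct[t] + FIXED_DIRECT * use_direct[t]
              + COST_VIA_RDC * from_rdc[t] + HOLDING * inv[t] for t in periods)

prob += inv[0] == 0
for t in periods:
    prob += direct[t] + from_rdc[t] == demand[t]              # meet demand in full
    prob += inv[t] == inv[t - 1] + to_rdc[t] - from_rdc[t]    # RDC inventory balance
    prob += to_rdc[t] <= RDC_INBOUND_CAP                      # inbound capacity at the RDC
    prob += direct[t] <= demand[t] * use_direct[t]            # pay the fixed cost if used

prob.solve(PULP_CBC_CMD(msg=False))
for t in periods:
    print(f"period {t}: direct {direct[t].value():.0f}, via RDC {from_rdc[t].value():.0f}")
```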
287
Applications of optimization to sovereign debt issuance. Abdel-Jawad, Malek, January 2013
This thesis investigates different issues related to the issuance of debt by sovereign bodies, such as governments, under uncertainty about future interest rates. Several dynamic models of interest rates are presented, along with extensive numerical experiments for calibrating the models and comparing their performance on real financial market data. The main contribution of the thesis is the construction and demonstration of a stochastic optimisation model for debt issuance under interest rate uncertainty. When the uncertainty is modelled using a model from a certain class of single-factor interest rate models, one can construct a scenario tree such that the number of scenarios grows linearly with the number of time steps. An optimization model is constructed using such a one-factor scenario tree. For a real government debt issuance remit, a multi-stage stochastic optimization is performed to choose the type and the amount of debt to be issued, and the results are compared with the actual issuance. The simulation models currently used by the government, which are in the public domain, are also reviewed. It appears that using an optimization model, such as the one proposed in this work, can lead to substantial savings in the servicing costs of the issued debt.
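The remark about linear scenario growth comes from using a recombining lattice for the single factor. A small sketch (illustrative numbers, not a calibrated model) of an additive binomial short-rate lattice shows that step t carries only t + 1 distinct nodes rather than 2^t paths:

```python
# Additive binomial (Ho-Lee style) short-rate lattice; rates recombine, so the
# number of distinct nodes grows linearly with the time step. Numbers are made up.
R0 = 0.03      # initial short rate
DRIFT = 0.001  # per-step additive drift (assumed)
STEP = 0.005   # per-step up/down move (assumed)

def lattice_nodes(t):
    """Distinct short-rate nodes at time step t: j up-moves and t - j down-moves."""
    return [R0 + DRIFT * t + STEP * (2 * j - t) for j in range(t + 1)]

for t in range(5):
    nodes = lattice_nodes(t)
    print(f"step {t}: {len(nodes)} nodes vs {2 ** t} paths ->",
          [f"{r:.3%}" for r in nodes])
```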
288
Contingency-constrained unit commitment with post-contingency corrective recourse. Chen, Richard Li-Yang; Fan, Neng; Pinar, Ali; Watson, Jean-Paul, 05 December 2014
We consider the problem of minimizing costs in the generation unit commitment problem, a cornerstone in electric power system operations, while enforcing an N-k-ε reliability criterion. This reliability criterion is a generalization of the well-known N-k criterion and dictates that at least a (1 − ε_j) fraction of the total system demand (for j = 1, …, k) must be met following the failure of j or fewer system components. We refer to this problem as the contingency-constrained unit commitment problem, or CCUC. We present a mixed-integer programming formulation of the CCUC that accounts for both transmission and generation element failures. We propose novel cutting plane algorithms that avoid the need to explicitly consider an exponential number of contingencies. Computational studies are performed on several IEEE test systems and a simplified model of the Western US interconnection network. These studies demonstrate the effectiveness of our proposed methods relative to the current state of the art.
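A heavily simplified, single-period sketch of the reliability idea, assuming the PuLP library and made-up generator data: committed capacity must still cover at least a (1 − ε) share of demand after the loss of any single unit. The paper's model is far richer (transmission, corrective recourse, cutting planes over an exponential contingency set); this only illustrates the constraint pattern.

```python
# Toy contingency-constrained commitment: survive the loss of any one generator.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, PULP_CBC_CMD

gens = {        # capacity (MW), commitment cost, energy cost per MW; toy numbers
    "g1": (100, 300, 10),
    "g2": (80,  200, 14),
    "g3": (60,  120, 20),
    "g4": (50,  100, 25),
}
DEMAND = 150
EPS = 0.1       # up to 10% of demand may be shed after a single failure

prob = LpProblem("contingency_constrained_uc", LpMinimize)
u = LpVariable.dicts("commit", gens, cat=LpBinary)
p = LpVariable.dicts("dispatch", gens, lowBound=0)

prob += lpSum(gens[g][1] * u[g] + gens[g][2] * p[g] for g in gens)

prob += lpSum(p[g] for g in gens) == DEMAND            # pre-contingency balance
for g in gens:
    prob += p[g] <= gens[g][0] * u[g]                  # dispatch only committed units

# N-1 style constraints: after losing any single unit j, the surviving committed
# capacity must still cover at least (1 - EPS) of demand.
for j in gens:
    prob += lpSum(gens[g][0] * u[g] for g in gens if g != j) >= (1 - EPS) * DEMAND

prob.solve(PULP_CBC_CMD(msg=False))
print("committed:", [g for g in gens if u[g].value() > 0.5],
      "cost:", prob.objective.value())
```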
289
Automated Selection of Mixed Integer Program Solver Parameters. Stewart, Charles, 30 April 2010
This paper presents a method that uses designed experiments and statistical models to extract information about how solver parameter settings perform for classes of mixed integer programs. The use of experimental design facilitates fitting a model that describes the response surface across all combinations of parameter settings, even those not explicitly tested, allowing identification of both desirable and poor settings. Identifying parameter settings that give the best expected performance for a specific class of instances and a specific solver can be used to more efficiently solve a large set of similar instances, or to ensure solvers are being compared at their best.
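A hedged sketch of the workflow the abstract describes, with synthetic solve times and invented parameter names: run a two-level factorial design over a few solver parameters and fit a main-effects model to see which settings improve performance.

```python
# Synthetic illustration (parameter names and timings are made up): a two-level full
# factorial over three solver parameters, followed by a least-squares main-effects fit.
import itertools
import numpy as np

params = ["presolve", "heuristics", "cuts"]      # each coded -1 (off/low) or +1 (on/high)
design = np.array(list(itertools.product([-1, 1], repeat=len(params))), dtype=float)

rng = np.random.default_rng(0)
# Pretend observed solve times: presolve helps a lot, cuts help a little, heuristics hurt.
solve_time = (60 - 12 * design[:, 0] + 3 * design[:, 1] - 5 * design[:, 2]
              + rng.normal(0, 1, len(design)))

# Fit intercept + main effects across the whole design.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, solve_time, rcond=None)

print(f"estimated mean solve time: {coef[0]:.1f} s")
for name, c in zip(params, coef[1:]):
    print(f"  effect of setting {name} to +1: {c:+.1f} s")

best = design[np.argmin(X @ coef)]               # settings with best predicted time
print("recommended settings:", dict(zip(params, best.astype(int))))
```

A fractional design fit the same way would also yield predictions for combinations not explicitly tested, which is the response-surface point the abstract makes.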
290
A Case Study of Network Design for Middle East Water Distribution. Bullene, Rachel, 28 May 2010
The Middle Eastern region encompassing Israel, Jordan, and the Palestinian Territories (West Bank and Gaza) is an arid region with fast-growing populations. Adequate and equitable access to water for all the people of the region is crucial to the future of Middle East peace. However, the current water distribution system not only fails to provide an adequate and equitable allocation of water, but also results in adverse impacts on the environment. This project involves building a mathematical model to aid decision-makers in designing an optimal water distribution network. A new method for incorporating uncertainty in optimization, based on Bayesian simulation of posterior predictive distributions, is used to represent uncertainty in demands and costs. The output of the model is a most-probable least-cost modification to the existing water distribution infrastructure. Additionally, the model output includes the probability that a network component (new desalination plant, new pipe, new canal) is part of a least-cost installation.
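A minimal sketch of that last output, with made-up demands, capacities and costs: sample demand scenarios from a posterior predictive distribution, find the cheapest feasible expansion for each scenario, and report how often each candidate component appears in the least-cost design. The actual project optimizes a full network; this only illustrates the scenario-frequency idea.

```python
# Toy illustration (assumed numbers): estimate the probability that each candidate
# component belongs to the least-cost expansion under posterior predictive demand.
import numpy as np

rng = np.random.default_rng(42)
N_SCENARIOS = 2000

# Posterior predictive draws for regional demand (million m^3 / yr), assumed normal.
demand = rng.normal(loc=55.0, scale=8.0, size=N_SCENARIOS)

EXISTING_CAP = 40.0                      # capacity already in place
PIPE_CAP, PIPE_COST = 20.0, 30.0         # candidate new pipe
DESAL_CAP, DESAL_COST = 45.0, 80.0       # candidate new desalination plant

designs = {                              # design -> (added capacity, annualized cost)
    frozenset(): (0.0, 0.0),
    frozenset({"pipe"}): (PIPE_CAP, PIPE_COST),
    frozenset({"desal"}): (DESAL_CAP, DESAL_COST),
    frozenset({"pipe", "desal"}): (PIPE_CAP + DESAL_CAP, PIPE_COST + DESAL_COST),
}

included = {"pipe": 0, "desal": 0}
for d in demand:
    shortfall = max(0.0, d - EXISTING_CAP)
    feasible = [(cost, comps) for comps, (cap, cost) in designs.items() if cap >= shortfall]
    best = min(feasible, key=lambda t: t[0])[1]   # cheapest design covering the shortfall
    for comp in best:
        included[comp] += 1

for comp, c in included.items():
    print(f"P({comp} in least-cost design) ~ {c / N_SCENARIOS:.2f}")
```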