61

Reducing waste with an optimized trimming model in production planning

Hallbäck, Sofia, Paulsson, Ellen January 2020 (has links)
In which ways can the process of trimming dispersion-coated board products be optimized so as to reduce material waste and increase production efficiency? This is the question that this master's thesis seeks to answer. In paper production, a lot of waste is generated when cutting production reels into customer reels. Some material waste is necessary in order to ensure good quality; however, a large amount of the waste could be reduced if the cutting process were optimized. During this project, carried out at a forest-industry company, a mathematical optimization model was developed to reduce waste and save costs. The model is formulated as a cutting stock problem solved with a column generation approach. As output it provides cutting patterns and an optimal allocation of rolls for production, which minimizes the number of production rolls needed. The visualization of the results could also be used to achieve optimal stock levels and to keep track more easily of how to use the material available in stock. Findings show that there are potential savings to be made. Simulations suggest that an implementation of this model could yield material savings of around 7%. This would also translate into environmental savings in CO2, where every decrease of one tonne of material corresponds to a decrease in CO2 emissions of 500 kg.
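
As a companion to the abstract, below is a minimal sketch of the pricing subproblem that drives a column generation approach to the cutting stock problem: given dual prices from the restricted master LP, an unbounded-knapsack dynamic program finds the cutting pattern with the highest total dual value. The reel width and customer widths are illustrative assumptions, not data from the thesis.

```python
# Pricing subproblem for cutting-stock column generation: given dual
# prices from the restricted master LP, find the cutting pattern with
# maximum total dual value via an unbounded-knapsack DP. The widths and
# parent reel width W below are illustrative, not from the thesis.

def best_pattern(W, widths, duals):
    """Return (value, pattern) of the max-dual-value cutting pattern."""
    best = [0.0] * (W + 1)     # best[c] = max dual value using capacity c
    choice = [None] * (W + 1)  # last width added to reach best[c]
    for c in range(1, W + 1):
        best[c] = best[c - 1]  # inherit: leave one unit of width as trim
        for i, w in enumerate(widths):
            if w <= c and best[c - w] + duals[i] > best[c]:
                best[c] = best[c - w] + duals[i]
                choice[c] = i
    # Recover how many rolls of each width the pattern cuts.
    pattern, c = [0] * len(widths), W
    while c > 0:
        if choice[c] is None:
            c -= 1
        else:
            pattern[choice[c]] += 1
            c -= widths[choice[c]]
    return best[W], pattern

# In the roll-minimizing master LP, a pattern with dual value > 1 has
# negative reduced cost and is added as a new column.
value, pattern = best_pattern(W=600, widths=[137, 200, 271], duals=[0.25, 0.4, 0.5])
print(value, pattern)
```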
62

Acceleration of PSO Particle Swarms Using GPUs / Particle Swarm Optimization on GPUs

Záň, Drahoslav January 2013 (has links)
This thesis deals with a population-based stochastic optimization technique, PSO (Particle Swarm Optimization), and its acceleration. This simple but very effective technique is designed for solving difficult multidimensional problems in a wide range of applications. The aim of this work is to develop a parallel implementation of the algorithm with an emphasis on accelerating the search for a solution. For this purpose, a graphics card (GPU), which provides massive computational performance, was chosen. To evaluate the benefits of the proposed implementation, CPU and GPU implementations were created for solving a problem derived from the well-known NP-hard Knapsack problem. The GPU application achieves an average speedup of 5 times, and a maximum speedup of almost 10 times, over the optimized CPU application on which it is based.
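
For readers unfamiliar with the algorithm, here is a minimal CPU-side sketch of global-best PSO minimizing the sphere function. The inertia and acceleration coefficients are common textbook defaults, not the thesis's tuned values; a GPU version would parallelize the per-particle update loop.

```python
import random

# Minimal global-best PSO minimizing the sphere function f(x) = sum(x_i^2).
# Coefficients (w, c1, c2) are common textbook defaults, not values from
# the thesis; a GPU version would parallelize the per-particle update.

def sphere(x):
    return sum(v * v for v in x)

def pso(f, dim=10, particles=30, iters=200, lo=-5.0, hi=5.0,
        w=0.729, c1=1.49445, c2=1.49445):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest_val

print(pso(sphere))  # approaches 0 as the swarm converges
```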
63

New Heuristics For The 0-1 Multi-dimensional Knapsack Problems

Akin, Haluk 01 January 2009 (has links)
This dissertation introduces new heuristic methods for the 0-1 multi-dimensional knapsack problem (0-1 MKP). The 0-1 MKP can be informally stated as the problem of packing items into a knapsack while staying within the limits of several different constraints (dimensions). Each item has a profit assigned to it. The constraints can be, for instance, the maximum weight that can be carried, the maximum available volume, or the maximum amount that can be spent on the items. One main assumption is that we have only one item of each type, hence the problem is binary (0-1). The single-dimensional version of the 0-1 MKP is the classical 0-1 knapsack problem, which can be solved in pseudo-polynomial time; the 0-1 MKP, however, is strongly NP-hard. Reduced cost values are a rarely used resource in 0-1 MKP heuristics; using reduced cost information, we introduce several new heuristics as well as improvements to past heuristics. We introduce two new ordering strategies, decision variable importance (DVI) and reduced cost based ordering (RCBO). We also introduce a new greedy heuristic concept which we call the "sliding concept", and a sub-branch of it which we call "sliding enumeration"; the sliding enumeration heuristic again uses reduced cost values. RCBO is a brand-new ordering strategy which proved useful in several methods, such as improving Pirkul's MKHEUR, a triangular-distribution-based probabilistic approach, and our own sliding enumeration. We show how Pirkul's shadow-price-based ordering strategy fails to order the partial variables, and we present a possible fix; since hard problems tend to have a high number of partial variables, this insight should help future researchers solve hard problems with more success. Even though sliding enumeration is a simple method, it found optima within a few seconds for most of our problems. We present different levels of sliding enumeration and discuss potential improvements to the method. Finally, we show that in metaheuristic approaches such as Drexl's simulated annealing, where random numbers are used abundantly, it is better to use well-designed probability distributions instead of plain random numbers.
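
To illustrate the complexity contrast drawn in the abstract, the classic O(nb) dynamic program below solves the single-constraint 0-1 knapsack in pseudo-polynomial time. It is a textbook routine, not one of the dissertation's heuristics, and no comparable scheme is known for the strongly NP-hard 0-1 MKP.

```python
# Classic O(n*b) dynamic program for the single-constraint 0-1 knapsack,
# illustrating the pseudo-polynomial claim in the abstract. This is a
# textbook routine, not one of the dissertation's heuristics.

def knapsack_01(profits, weights, capacity):
    """Max profit of a 0-1 selection with total weight <= capacity."""
    dp = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + p)
    return dp[capacity]

print(knapsack_01(profits=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```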
64

An efficient group-theoretic algorithm for an assignment problem with a single knapsack constraint

Dhamankar, Sunil Yashwant January 1991 (has links)
No description available.
65

New Heuristic And Metaheuristic Approaches Applied To The Multiple-choice Multidimensional Knapsack Problem

Hiremath, Chaitr 29 February 2008 (has links)
No description available.
66

Valid Inequalities for The 0-1 Mixed Knapsack Polytope with Upper Bounds

Cimren, Emrah 30 July 2010 (has links)
No description available.
67

An Efficient Knapsack-Based Approach for Calculating the Worst-Case Demand of AVR Tasks

Bijinemula, Sandeep Kumar 01 February 2019 (has links)
Engine-triggered tasks are real-time tasks that are released when the crankshaft arrives at certain positions in its path of rotation. This makes the rate of release of these jobs a function of the crankshaft's angular speed and acceleration. In addition, several properties of engine-triggered tasks, such as the execution time and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand of AVR tasks. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times for randomly generated task sets, when compared to the state-of-the-art technique. / Master of Science / Real-time systems require temporal correctness along with accuracy. This notion of temporal correctness is achieved by assigning deadlines to each of the tasks. In order to ensure that all deadlines are met, it is important to know the processor requirement, also known as demand, of a task over a given interval. For some tasks the demand is not constant; instead, it depends on several external factors. For such tasks it becomes necessary to calculate the worst-case demand. Engine-triggered tasks are activated when the crankshaft in an engine is at certain points in its path of rotation. This makes their activation rate dependent on the angular speed and acceleration of the crankshaft. In addition, several properties of engine-triggered tasks, such as the execution time and deadlines, depend on the speed profile of the crankshaft. Such tasks are referred to as adaptive variable-rate (AVR) tasks. Existing methods to calculate the worst-case demand of AVR tasks are either inaccurate or computationally intractable. We propose a method to efficiently calculate the worst-case demand of AVR tasks by transforming the problem into a variant of the knapsack problem. We then propose a framework to systematically narrow down the search space associated with finding the worst-case demand of AVR tasks. Experimental results show that our approach is at least 10 times faster, with an average runtime improvement of 146 times for randomly generated task sets, when compared to the state-of-the-art technique.
68

Exact synchronized simultaneous uplifting over arbitrary initial inequalities for the knapsack polytope

Beyer, Carrie Austin January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are mathematical models that can provide an optimal solution to a variety of different problems. They have been used to reduce costs and optimize organizations. However, IP is NP-complete, so many instances cannot be solved in reasonable time. Cutting planes, or valid inequalities, have been used to decrease the time required to solve IPs. Lifting is a technique that strengthens existing valid inequalities. Lifting can yield facet-defining inequalities, which are the theoretically strongest valid inequalities. Because of these properties, lifting procedures are used in software to reduce the time required to solve an IP. This thesis introduces a new algorithm for exact synchronized simultaneous uplifting over an arbitrary initial inequality for knapsack problems. Synchronized Simultaneous Lifting (SSL) is a pseudo-polynomial-time algorithm requiring O(nb + n³) effort. It exactly uplifts two sets simultaneously into an initial arbitrary valid inequality and creates multiple inequalities of a particular form. This previously undiscovered class of inequalities generated by SSL can be facet defining. A small computational study shows that SSL is quick to execute, requiring on average less than a quarter of a second. Additionally, applying SSL inequalities to a knapsack problem enabled commercial software to solve problems that it could not solve without them.
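
As background for the lifting discussion, here is a sketch of classical exact sequential up-lifting of a single variable into a cover inequality. This is the textbook procedure, not the thesis's SSL algorithm, and the instance is an illustrative assumption.

```python
# Background sketch of classical exact sequential up-lifting (the textbook
# procedure, not the thesis's SSL algorithm). For a knapsack constraint
# sum(a_i * x_i) <= b and a cover C (its weights sum to more than b), the
# cover inequality sum_{i in C} x_i <= |C| - 1 is valid. Lifting a single
# variable x_j outside C uses the exact coefficient
#   alpha_j = (|C| - 1) - max{ sum_{i in C} x_i : sum_{i in C} a_i x_i <= b - a_j }.

def lift_coefficient(cover_weights, b, a_j):
    """Exact lifting coefficient for one variable outside the cover.

    Maximizing the number of cover items within weight budget b - a_j is
    solved greedily: take the lightest cover items first.
    """
    budget = b - a_j
    count, used = 0, 0
    for w in sorted(cover_weights):
        if used + w <= budget:
            used += w
            count += 1
    return (len(cover_weights) - 1) - count

# Knapsack 5x1 + 5x2 + 5x3 + 9x4 <= 11; C = {1, 2, 3} is a cover, giving
# x1 + x2 + x3 <= 2. With a_4 = 9 the budget 11 - 9 = 2 fits no cover item,
# so alpha_4 = 2 - 0 = 2 and the lifted inequality is x1 + x2 + x3 + 2x4 <= 2.
print(lift_coefficient([5, 5, 5], b=11, a_j=9))  # 2
```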
69

Lattices and Their Application: A Senior Thesis

Goodwin, Michelle 01 January 2016 (has links)
Lattices are an easy and clean class of periodic arrangements that are not only discrete but also associated with algebraic structures. We specifically discuss applying lattice theory to computing the areas of polygons in the plane and to some optimization problems. This thesis details Pick's Theorem and the higher-dimensional cases of Ehrhart Theory. Closely related to Pick's Theorem and Ehrhart Theory are the Frobenius Problem and the Integer Knapsack Problem. Both of these problems have higher-dimensional applications, where the difficulties are similar to those of Pick's Theorem and Ehrhart Theory. We directly relate these problems to optimization problems and operations research.
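
A small numerical check of Pick's Theorem, A = I + B/2 - 1, for a simple lattice polygon: the area comes from the shoelace formula and the boundary count from a gcd argument. The rectangle below is an illustrative choice, not an example from the thesis.

```python
from math import gcd

# Numerical check of Pick's theorem A = I + B/2 - 1 for a simple lattice
# polygon: area via the shoelace formula, boundary points via gcd counts.

def shoelace_area(verts):
    n = len(verts)
    s = sum(verts[i][0] * verts[(i + 1) % n][1]
            - verts[(i + 1) % n][0] * verts[i][1] for i in range(n))
    return abs(s) / 2

def boundary_points(verts):
    # An edge from (x1, y1) to (x2, y2) passes through gcd(|dx|, |dy|)
    # lattice steps, so summing over edges counts each boundary point once.
    n = len(verts)
    return sum(gcd(abs(verts[(i + 1) % n][0] - verts[i][0]),
                   abs(verts[(i + 1) % n][1] - verts[i][1])) for i in range(n))

verts = [(0, 0), (4, 0), (4, 3), (0, 3)]  # a 4x3 lattice rectangle
A, B = shoelace_area(verts), boundary_points(verts)
I = A - B / 2 + 1                         # interior points by Pick's theorem
print(A, B, I)                            # 12.0 14 6.0, matching 3*2 = 6 interior points
```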
70

Improving the solution time of integer programs by merging knapsack constraints with cover inequalities

Vitor, Fabio Torres January 1900 (has links)
Master of Science / Department of Industrial and Manufacturing Systems Engineering / Todd Easton / Integer programming is used to solve numerous optimization problems. This class of mathematical models aims to maximize or minimize a cost function subject to constraints, with the requirement that the solution be integer. One widely studied class of integer programs (IPs) is the Multiple Knapsack Problem (MKP). Unfortunately, both IPs and MKPs are NP-hard, potentially requiring exponential time to solve. Cutting planes are one common method for improving the solution time of IPs. A cutting plane is a valid inequality that cuts off a portion of the linear relaxation space. This thesis presents a new class of cutting planes referred to as merged knapsack cover inequalities (MKCIs). These valid inequalities combine information from a cover inequality with a knapsack constraint to generate stronger inequalities. Merged knapsack cover inequalities are generated by the Merging Knapsack Cover Algorithm (MKCA), which runs in linear time. These inequalities may be strengthened further by the Exact Improvement Through Dynamic Programming Algorithm (EITDPA). Theoretical results demonstrate that this new class of cutting planes can cut off part of the linear relaxation region. A computational study was performed to determine whether implementing merged knapsack cover inequalities is computationally effective. Results show that MKCIs decrease solution time by an average of 8% and decrease the number of ticks in CPLEX, a commercial IP solver, by approximately 4% when implemented in appropriate instances.
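
For context on the ingredient that MKCA merges with a knapsack row, the sketch below generates a minimal cover inequality from a single knapsack constraint. The routine is a standard construction, not the thesis's algorithm, and the instance is an illustrative assumption.

```python
# Standard construction of a minimal cover inequality from a knapsack
# constraint sum(a_i * x_i) <= b (context for MKCA, not the thesis's own
# algorithm). A cover C has total weight > b; it is minimal if dropping
# any item makes it fit, and sum_{i in C} x_i <= |C| - 1 is then valid.

def minimal_cover(weights, b):
    """Greedily build a cover from the heaviest items, then minimalize."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    cover, total = [], 0
    for i in order:
        cover.append(i)
        total += weights[i]
        if total > b:
            break
    if total <= b:
        return None  # all items fit together; no cover exists
    # Minimalize: drop any item whose removal keeps the cover property.
    for i in list(cover):
        if total - weights[i] > b:
            cover.remove(i)
            total -= weights[i]
    return sorted(cover)

weights, b = [8, 7, 6, 3, 2], 14
C = minimal_cover(weights, b)
print(C, "=>", " + ".join(f"x{i}" for i in C), "<=", len(C) - 1)  # [0, 1] => x0 + x1 <= 1
```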
