291

Modeling the homeschool timetabling problem using integer programming

Srinivasan, Subhashini 14 June 2011 (has links)
Home schooling has been steadily increasing over the past decade. According to a 2007 survey, about 2.5 million children were being home schooled in the US. Typically, parents provide instruction in their own home, and in some cases an outside instructor is hired. The Home School Timetabling Problem (HSTP) deals with assigning subjects, timeslots, and rooms to every student. In doing so, certain hard and specialty constraints must be satisfied. Integer programming (IP) is used to solve the HSTP because it can provide information about the relative significance of each constraint with respect to the objective. A prototype GUI has been built in which the parent can enter each student's name, the student's subjects, the duration, days, and times for each subject, the parent's availability times, and so on. This data is then fed into the IP model, which generates a feasible timetable satisfying all of the constraints. When a solution is found, it is formatted to provide a weekly timetable for each individual student as well as a complete daily timetable for all students.
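The abstract does not reproduce the IP formulation, but the assignment structure it describes is straightforward to sketch. Below is a minimal, hypothetical model in Python with the PuLP library; all names and data are illustrative, not from the thesis. Each (student, subject) lesson gets exactly one timeslot, a student takes at most one lesson per slot, and the parent teaches at most one lesson at a time and none while unavailable.

```python
# Minimal homeschool-timetabling sketch (hypothetical data, not the
# thesis's model): x[s, c, t] = 1 if student s takes subject c in slot t.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

students = ["Ana", "Ben"]
subjects = {"Ana": ["math", "reading"], "Ben": ["math", "science"]}
slots = ["Mon9", "Mon10", "Tue9", "Tue10", "Wed9"]
parent_free = {"Mon9", "Mon10", "Tue9", "Tue10"}   # parent availability

prob = LpProblem("homeschool_timetable", LpMinimize)
x = LpVariable.dicts(
    "x",
    [(s, c, t) for s in students for c in subjects[s] for t in slots],
    cat=LpBinary,
)
# constant objective (every lesson is scheduled exactly once), so this
# is a pure feasibility problem
prob += lpSum(x.values())

for s in students:
    for c in subjects[s]:
        # each of a student's subjects is taught exactly once per week
        prob += lpSum(x[s, c, t] for t in slots) == 1
for t in slots:
    for s in students:
        # a student attends at most one lesson per slot
        prob += lpSum(x[s, c, t] for c in subjects[s]) <= 1
    # the parent teaches at most one lesson at a time, none when busy
    prob += lpSum(
        x[s, c, t] for s in students for c in subjects[s]
    ) <= (1 if t in parent_free else 0)

prob.solve()
print(LpStatus[prob.status])
print(sorted(k for k, v in x.items() if v.value() == 1))
```

The thesis's specialty constraints (rooms, per-subject day and time preferences) would enter as additional linear constraints on the same binary variables.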
292

Effective Network Partitioning to Find MIP Solutions to the Train Dispatching Problem

Snellings, Christopher 19 June 2013 (has links)
Each year the Railway Applications Section (RAS) of the Institute for Operations Research and the Management Sciences (INFORMS) poses a research problem to the world in the form of a competition. For 2012, the contest involved solving the Train Dispatching Problem (TDP) on a realistic 85-edge network for three different sets of input data. This work is an independent attempt to match or improve upon the results of the top three finishers in the contest using mixed integer programming (MIP) techniques while minimizing the use of heuristics. The primary focus is to partition the network in a manner that reduces the number of binary variables in the formulation as much as possible without compromising the ability to satisfy any of the contest requirements. The resulting model solved RAS Data Set 1 to optimality in 29 seconds without any problem-specific heuristics, variable restrictions, or variable fixing. Applying some assumptions about train movements allowed the same Data Set 1 solution to be found in 5.4 seconds. After breaking the larger Data Sets 2 and 3 into smaller sub-problems, solutions for Data Sets 2 and 3 were 28% and 1% better, respectively, than those of the competition winner. The time to obtain solutions for Data Sets 2 and 3 was 90 and 318 seconds, respectively.
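The contest formulation is not reproduced in the abstract, but the binary variables it seeks to prune typically come from disjunctive meet/pass constraints. Here is a hedged sketch of that building block for two hypothetical trains sharing one single-track segment (Python with PuLP; data and costs are illustrative, not the RAS 2012 instances):

```python
# Disjunctive ("either-or") constraint at the heart of MIP train
# dispatching: a binary decides which of two trains traverses a shared
# single-track segment first, linked to entry times via big-M terms.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

run = {"A": 10.0, "B": 12.0}     # minutes each train occupies the segment
release = {"A": 0.0, "B": 3.0}   # earliest entry time of each train
M = 1000.0                       # big-M, larger than any plausible time

prob = LpProblem("single_segment_dispatch", LpMinimize)
enter = {i: LpVariable(f"enter_{i}", lowBound=release[i]) for i in run}
y = LpVariable("A_before_B", cat=LpBinary)

# if y == 1, train A clears the segment before B enters; if y == 0,
# the roles are reversed (the big-M term relaxes the inactive branch)
prob += enter["B"] >= enter["A"] + run["A"] - M * (1 - y)
prob += enter["A"] >= enter["B"] + run["B"] - M * y

# minimize the sum of exit times (a stand-in for the contest's delay costs)
prob += lpSum(enter[i] + run[i] for i in run)

prob.solve()
print(y.value(), {i: enter[i].value() for i in run})
```

One such binary is needed for every pair of trains that can conflict on every shared resource, which is why partitioning the network to rule out impossible conflicts shrinks the model so dramatically.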
293

Dispatch, Delivery, and Location Logistics for the Aeromedical Evacuation of Time-Sensitive Military Casualties Under Uncertainty

Grannan, Benjamin 01 January 2014 (has links)
Effective aeromedical evacuation of casualties is one of the most important problems in military medical systems because high-priority casualties will not survive without timely medical care. The decision-making process for aeromedical evacuation consists of the following components: (1) identifying which aeromedical evacuation asset to dispatch to the casualty, (2) locating aeromedical evacuation assets strategically in anticipation of incoming demand, and (3) deciding to which medical treatment facility to transport the casualty. These decisions are further complicated because prioritization of casualties is based on severity of injury, while aeromedical evacuation assets and medical treatment facilities operate with varying capabilities. In this dissertation, discrete optimization models are developed to examine dispatch, delivery, and location logistics for the effective aeromedical evacuation of casualties in military medical systems.
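The dissertation's models are not given in the abstract; as a flavor of the dispatch component (1), here is a toy priority-weighted assignment model (Python with PuLP; all names, priorities, and travel times are hypothetical):

```python
# Toy dispatch model: assign each available evacuation asset to at most
# one casualty, cover every casualty, and minimize priority-weighted
# travel time. Illustrative only, not the dissertation's formulation.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary

assets = ["helo1", "helo2"]
casualties = ["c1", "c2"]
priority = {"c1": 3.0, "c2": 1.0}                # higher = more urgent
travel = {("helo1", "c1"): 12, ("helo1", "c2"): 30,
          ("helo2", "c1"): 25, ("helo2", "c2"): 10}

prob = LpProblem("medevac_dispatch", LpMinimize)
x = LpVariable.dicts("x", travel.keys(), cat=LpBinary)

# urgent casualties weigh more heavily in the objective
prob += lpSum(priority[c] * travel[a, c] * x[a, c] for a, c in travel)
for c in casualties:                              # every casualty is served
    prob += lpSum(x[a, c] for a in assets) == 1
for a in assets:                                  # one mission per asset
    prob += lpSum(x[a, c] for c in casualties) <= 1

prob.solve()
print({k: v.value() for k, v in x.items()})
```

The delivery and location components would add facility-capability and positioning decisions on top of this core assignment structure.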
294

Petroleum refinery scheduling with consideration for uncertainty

Hamisu, Aminu Alhaji January 2015 (has links)
Scheduling refinery operations promises a large cut in logistics costs, maximizes efficiency, organizes the allocation of materials and resources, and ensures that production meets the targets set by the planning team. Obtaining accurate and reliable schedules for execution in refinery plants under different scenarios has been a serious challenge. This research was undertaken with the aim of developing robust methodologies and solution procedures to address refinery scheduling problems with uncertainties in process parameters. The research goal was achieved by first developing a methodology for short-term crude oil unloading and transfer, as an extension to the scheduling model reported by Lee et al. (1996). The extended model considers real-life technical issues not captured in the original model and has been shown, through case studies, to be more reliable. Uncertainties due to disruptive events and low inventory at the end of the scheduling horizon were addressed. With the extended model, the crude oil scheduling problem was formulated under a receding horizon control framework to address demand uncertainty. This work proposes a strategy called the fixed end horizon, whose performance was investigated and found to be better than that of an existing approach. For the main refinery production area, a novel scheduling model was developed. A large-scale refinery problem was used as a case study to test the model, with the scheduling horizon discretized into a number of time periods of variable length. An equivalent formulation with equal interval lengths was also presented and compared with the variable-length formulation; the results clearly show the advantage of using variable timing. A methodology under the self-optimizing control (SOC) framework was then developed to address uncertainty in problems involving mixed integer formulations. Through a case study and scenarios, the approach has proven effective in dealing with uncertainty in crude oil composition.
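The abstract's "fixed end horizon" strategy can be sketched generically. The loop below is an illustration only: solve_schedule is a hypothetical placeholder where the thesis's crude-scheduling MILP would be solved, and only the structure of the rolling procedure is shown.

```python
# Generic receding-horizon loop under the "fixed end horizon" strategy:
# the window's end stays pinned to the end of the overall horizon, so
# windows shrink as time advances; only the first period is committed.
HORIZON = 8                      # total number of scheduling periods

def solve_schedule(start, end, state):
    """Placeholder: would solve the scheduling MILP for periods
    [start, end) given observed tank levels and demand in `state`."""
    return [f"decision_t{t}" for t in range(start, end)]

state = {"tank_levels": "initial"}
committed = []
for t in range(HORIZON):
    plan = solve_schedule(t, HORIZON, state)   # fixed end horizon window
    committed.append(plan[0])                  # commit only period t
    state = {"tank_levels": f"observed_after_t{t}"}  # roll forward
print(committed)
```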
295

Generating cutting planes through inequality merging for integer programming problems

Hickman, Randal Edward January 1900 (has links)
Doctor of Philosophy / Department of Industrial and Manufacturing Systems Engineering / Todd Easton / Integer Programming (IP) problems are a common type of optimization problem used to solve numerous real-world problems. IPs can require exponential computational effort to solve using the branch and bound technique. A popular method to improve solution times is to generate valid inequalities that serve as cutting planes. This dissertation introduces a new category of cutting planes for general IPs called inequality merging. The inequality merging technique combines two or more low-dimensional inequalities, yielding valid inequalities of potentially higher dimension. The dissertation describes several theoretical results about merged inequalities. This research applies inequality merging to a frequently used class of IPs called multiple knapsack (MK) problems. Theoretical results related to merging cover inequalities are presented, including: conditions for validity, conditions for facet-defining inequalities, merging simultaneously over multiple cover inequalities, sequentially merging several cover inequalities on multiple variables, and algorithms that facilitate the development of merged inequalities. Examples demonstrate each of the theoretical discoveries. A computational study experiments with inequality merging techniques using benchmark MK instances. This study provides recommendations for implementing merged inequalities, which result in an average decrease of about 9% in computational time for both small and large MK instances. The research validates the effectiveness of using merged inequalities for MK problems and motivates substantial theoretical and computational extensions as future research.
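The merging procedure itself is specific to the dissertation and not given in the abstract, but cover inequalities, the raw material it combines, are easy to illustrate. For the knapsack constraint 5x1 + 4x2 + 3x3 <= 8, the set {x1, x2} is a cover (5 + 4 > 8), so x1 + x2 <= 1 is valid. A brute-force check in Python confirms this:

```python
# For knapsack 5*x1 + 4*x2 + 3*x3 <= 8, {x1, x2} is a cover (5 + 4 > 8),
# giving the valid cover inequality x1 + x2 <= 1. Verify over all binary
# points that every knapsack-feasible point also satisfies the cut.
from itertools import product

for x1, x2, x3 in product((0, 1), repeat=3):
    if 5 * x1 + 4 * x2 + 3 * x3 <= 8:      # feasible for the knapsack
        assert x1 + x2 <= 1                 # ...so the cover cut holds
print("cover inequality x1 + x2 <= 1 is valid for every feasible point")
```

Merging would take two or more such low-dimensional inequalities, drawn from different constraints of an MK instance, and combine them into a single inequality of potentially higher dimension.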
296

The existence and usefulness of equality cuts in the multi-demand multidimensional knapsack problem

DeLissa, Levi January 1900 (has links)
Master of Science / Department of Industrial and Manufacturing Systems Engineering / Todd Easton / Integer programming (IP) is a class of mathematical models useful for modeling and optimizing many theoretical and industrial problems. Unfortunately, IPs are NP-complete, and many integer programs cannot currently be solved. Valid inequalities and their respective cuts are commonly used to reduce the effort required to solve IPs. This thesis poses the questions: do valid equality cuts exist, and can they be useful for solving IPs? Several theoretical results related to valid equalities are presented. It is shown that equality cuts exist if and only if the convex hull is not full dimensional. Furthermore, the addition of an equality cut can arbitrarily reduce the dimension of the linear relaxation. In addition to the theory on equality cuts, the idea of infeasibility conditions is presented. Infeasibility conditions introduce a set of valid inequalities whose intersection is the empty set, and they can be used to rapidly terminate a branch and cut algorithm. Applying the idea of equality cuts to the multi-demand multidimensional knapsack problem (MDMKP) resulted in a new class of cutting planes named anticover cover equality (ACE) cuts. A simple algorithm, FACEBT, is presented for finding ACE cuts in a branching tree with complexity O(m n log n). A brief computational study shows that ACE cuts arise frequently in the MDMKP instances studied: every instance had at least one equality cut, while one instance had over 500,000. Additionally, computationally challenging instances saw an 11% improvement in computational effort. Therefore, equality cuts are a new topic of research in IP that can be beneficial for solving some IP instances.
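A tiny example, not an ACE cut from the thesis, illustrates the central observation. With the demand constraint x1 + x2 >= 1 and the knapsack constraint 3x1 + 3x2 <= 3 over binaries, the only feasible points are (1,0) and (0,1), so x1 + x2 == 1 holds everywhere: the convex hull is a line segment rather than full dimensional, and the equality cut captures exactly that lost dimension.

```python
# Demand x1 + x2 >= 1 plus knapsack 3*x1 + 3*x2 <= 3 leaves only (0,1)
# and (1,0) feasible, so the equality x1 + x2 == 1 is valid everywhere.
from itertools import product

feasible = [(x1, x2) for x1, x2 in product((0, 1), repeat=2)
            if x1 + x2 >= 1 and 3 * x1 + 3 * x2 <= 3]
print(feasible)                          # [(0, 1), (1, 0)]
assert all(x1 + x2 == 1 for x1, x2 in feasible)
```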
297

The NFL true fan problem

Whittle, Scott January 1900 (has links)
Master of Science / Department of Industrial and Manufacturing Systems Engineering / Todd Easton / Throughout an NFL season, 256 games are played over 17 weeks. For a given fan who follows one team, only 16 of those games usually matter, and the rest carry little significance. The goal of this research is to provide substantial reasons for fans to watch other games. This research finds the easiest path to a division championship for each team, where the easiest path requires winning the fewest games. Due to the NFL's complicated tiebreaker rules, games not involving the fan's team can have major implications for that team. The research calls these games critical because, if the wrong team wins, the fan's team must win additional games to become the division champion. To identify both the easiest path and the critical games, integer programming is used. Given the number of two-team, three-team, and four-team division tie scenarios that can occur, 31 separate integer programs are solved for each team to identify the easiest path to the division championship. A new algorithm, Shortest Path of Remaining Teams (SPORT), is used to iteratively search through every game of the upcoming week to determine critical games. These integer programs and the SPORT algorithm were applied to data from the previous two NFL seasons. Over these two seasons, the earliest a team was eliminated from the possibility of winning a division championship was week 12, which occurred in both 2012 and 2013. There was an average of 65 critical games per season, with more critical games occurring in the 2013-2014 season. Additionally, the 2012 season was used to compare flex-scheduled games with the critical games for those weeks, and it was found that in three weeks the NFL missed the opportunity to schedule a critical game.
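A stripped-down "easiest path" model conveys the idea. The sketch below (Python with PuLP; standings and schedule hypothetical) chooses winners of the remaining games so the fan's team finishes with at least as many wins as each division rival, while winning as few of its own remaining games as possible. The tiebreaker logic, the hard part that drives the thesis's 31 separate IPs, is ignored here.

```python
# Toy "easiest path to the division title" model, ignoring tiebreakers:
# pick outcomes of remaining games minimizing the fan team's own wins.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, LpStatus

fan = "T"
wins_so_far = {"T": 6, "R1": 7, "R2": 5}              # current standings
remaining = [("T", "R1"), ("T", "R2"), ("R1", "R2")]  # games left

prob = LpProblem("easiest_path", LpMinimize)
# out[i] = 1 if the first-listed team wins game i
out = LpVariable.dicts("win", range(len(remaining)), cat=LpBinary)

def final_wins(team):
    return wins_so_far[team] + lpSum(
        out[i] if home == team else (1 - out[i]) if away == team else 0
        for i, (home, away) in enumerate(remaining)
    )

# objective: number of remaining games the fan's team must win itself
prob += lpSum(out[i] for i, (h, _) in enumerate(remaining) if h == fan) \
      + lpSum(1 - out[i] for i, (_, a) in enumerate(remaining) if a == fan)
for rival in ("R1", "R2"):
    prob += final_wins(fan) >= final_wins(rival)

prob.solve()
print(LpStatus[prob.status],
      {remaining[i]: out[i].value() for i in range(len(remaining))})
```

A game between two rivals whose required outcome changes the fan team's minimum win count is exactly what the thesis labels critical.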
298

Octanary branching algorithm

Bailey, James Patrick January 1900 (has links)
Master of Science / Department of Industrial and Manufacturing Systems Engineering / Todd Easton / Integer programs (IPs) are a class of discrete optimization problems that have been used commercially to improve various systems. IPs are often used to reach an optimal financial objective with constraints based upon resources, operations, and other restrictions. While incredibly beneficial, IPs have been shown to be NP-complete, and many IPs remain unsolvable. Traditionally, Branch and Bound (BB) has been used to solve IPs. BB is an iterative algorithm that implicitly enumerates all potential integer solutions for a given IP. BB can guarantee an optimal solution, if one exists, in finite time. However, BB can require an exponential number of nodes to be evaluated before terminating; as a result, BB can exhaust a computer's memory or take an excessively long time to find the solution. This thesis introduces a modified BB scheme called the Octanary Branching Algorithm (OBA). OBA creates eight children at each iteration to partition the feasible region of the linear relaxation of the IP more effectively, and it introduces equality constraints in four of the children in order to reduce the dimension of the remaining nodes. Like BB, OBA can guarantee an optimal solution, if one exists, in finite time, and it has been shown to have some theoretical improvements over traditional BB. During computational tests, OBA found the first, second, and third integer solutions with 64.8%, 27.9%, and 29.3% fewer nodes evaluated, respectively, than CPLEX, and these solutions were 44.9%, 54.7%, and 58.2% closer to the optimal solution, respectively. It is recommended that commercial solvers incorporate OBA in the initialization and random diving phases of BB.
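For readers unfamiliar with the baseline scheme OBA modifies, here is a compact standard branch and bound for a 0/1 knapsack (illustrative data; items pre-sorted by decreasing profit/weight ratio so the greedy fractional bound equals the LP-relaxation bound). Each node branches into two children by fixing one variable to 1 or 0; OBA would instead spawn eight children per iteration, four of them carrying equality constraints that reduce the node's dimension.

```python
# Standard two-way branch and bound on a 0/1 knapsack. Nodes are pruned
# when infeasible or when the fractional bound cannot beat the incumbent.
profit = [10, 5, 7, 13]    # sorted so profit[i]/weight[i] is decreasing
weight = [4, 2, 3, 6]
cap = 9

def bound(i, w, p):
    """Fractional (LP-relaxation) upper bound from item i onward."""
    for j in range(i, len(profit)):
        if w + weight[j] <= cap:
            w += weight[j]; p += profit[j]
        else:
            return p + profit[j] * (cap - w) / weight[j]
    return p

best = 0
def branch(i, w, p):
    global best
    if w > cap:
        return                      # prune: infeasible
    if i == len(profit):
        best = max(best, p)         # leaf: record incumbent
        return
    if bound(i, w, p) <= best:
        return                      # prune: bound can't beat incumbent
    branch(i + 1, w + weight[i], p + profit[i])   # child: x_i = 1
    branch(i + 1, w, p)                           # child: x_i = 0

branch(0, 0, 0)
print(best)                         # optimal value: 22
```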
299

Metoda tvorby tras přepravní úlohy / A method for generating transport routes

Bartásková, Petra January 2010 (has links)
This thesis focuses on optimizing the overnight routes along which goods are transported between designated central cities within the country. It deals with creating cyclic routes along which goods can be transported effectively and at minimal cost. The procedure for constructing these routes is a heuristic method for generating cyclic paths. The algorithm uses results provided by a model based on searching a multi-commodity graph; the model yields the minimum number of vehicles needed to provide the transport and the individual amounts of goods transported. The principle of the heuristic is to create cyclic paths that can serve all transportation requirements with the lowest number of reloads, an approach that favors direct paths.
300

What is the Minimal Systemic Risk in Financial Exposure Networks? INET Oxford Working Paper, 2019-03

Diem, Christian, Pichler, Anton, Thurner, Stefan January 2019 (has links) (PDF)
Management of systemic risk in financial markets is traditionally associated with setting (higher) capital requirements for market participants. There are indications that while equity ratios have increased massively since the financial crisis, systemic risk levels might not have fallen, and may even have increased (see ECB data; SRISK time series). It has been shown that systemic risk is to a large extent related to the underlying network topology of financial exposures. A natural question is how much systemic risk can be eliminated by optimally rearranging these networks, without increasing capital requirements. Overlapping portfolios with minimized systemic risk that provide the same market functionality as empirical ones have been studied by Pichler et al. (2018). Here we propose a similar method for direct exposure networks and apply it to cross-sectional interbank loan networks, consisting of 10 quarterly observations of the Austrian interbank market. We show that the suggested framework rearranges the network topology such that systemic risk is reduced by a factor of approximately 3.5, while leaving the relevant economic features of the optimized network and its agents unchanged. The presented optimization procedure is not intended to actually re-configure interbank markets, but to demonstrate the huge potential for systemic risk management through rearranging exposure networks, in contrast to increasing capital requirements, which have been shown to have only marginal effects on systemic risk (Poledna et al., 2017). Ways to actually incentivize self-organized formation of optimal network configurations were introduced in Thurner and Poledna (2013) and Poledna and Thurner (2016). For regulatory policies concerning financial market stability, knowledge of the minimal systemic risk for a given economic environment can serve as a benchmark for monitoring actual systemic risk in markets.
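A toy version of the rearrangement idea can be sketched as follows (Python with PuLP; the exposure matrix is hypothetical). The paper minimizes a DebtRank-based systemic risk measure; the sketch substitutes a crude linear proxy, the largest single exposure, and only illustrates the key constraint set: every bank's total interbank assets and liabilities are preserved, so the "market functionality" stays fixed while the topology changes.

```python
# Rearrange interbank exposures x[i][j] while preserving each bank's
# total interbank assets (row sums) and liabilities (column sums),
# minimizing the largest single exposure as a toy concentration proxy.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

banks = range(3)
L = [[0, 8, 2],      # hypothetical observed exposure matrix
     [1, 0, 9],
     [6, 3, 0]]
assets = [sum(L[i][j] for j in banks) for i in banks]   # row sums
liabs = [sum(L[i][j] for i in banks) for j in banks]    # column sums

prob = LpProblem("min_concentration_rearrangement", LpMinimize)
x = LpVariable.dicts("x", [(i, j) for i in banks for j in banks if i != j],
                     lowBound=0)
z = LpVariable("max_exposure", lowBound=0)

prob += z                                   # minimize the largest exposure
for i in banks:
    prob += lpSum(x[i, j] for j in banks if j != i) == assets[i]
    prob += lpSum(x[j, i] for j in banks if j != i) == liabs[i]
    for j in banks:
        if i != j:
            prob += x[i, j] <= z            # z bounds every exposure

prob.solve()
print(z.value())
```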
