271

Planning a Public Transportation System with a View Towards Passengers' Convenience

Harbering, Jonas 01 February 2016 (has links)
No description available.
272

Simultaneously lifting multiple sets in binary knapsack integer programs

Kubik, Lauren Ashley January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / Integer programs (IPs) are mathematical models that can provide organizations with the ability to meet their goals optimally through appropriate utilization and allocation of available resources. Unfortunately, IPs are NP-complete in the strong sense, and many integer programs cannot be solved in practice. Introduced by Gomory, lifting is a technique that takes a valid inequality and strengthens it. Lifting can result in facet-defining inequalities, which are the theoretically strongest inequalities; because of this, lifting techniques are commonly used in commercial IP software to reduce the time required to solve an IP. This thesis introduces two new algorithms for exact simultaneous up-lifting of multiple sets into binary knapsack problems and introduces sequential simultaneous lifting. The Dynamic Programming Multiple Lifting Set Algorithm (DPMLSA) is a pseudopolynomial-time algorithm, bounded by O(nb) effort, that can exactly up-lift an arbitrary number of sets. The Three Set Simultaneous Lifting Algorithm (TSSLA) is a polynomial-time algorithm, bounded by O(n²), that can exactly and simultaneously up-lift three sets. The simultaneously lifted inequalities generated by the DPMLSA and the TSSLA can be facet defining, and neither algorithm requires starting with a minimal cover. A brief computational study shows that the DPMLSA is fast, requiring an average of only 0.070 seconds. The study also shows that these sequential simultaneously lifted inequalities are useful, decreasing solution time by an overall average of 18.4%. Therefore, implementing the DPMLSA should be beneficial for large IPs.
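The lifting idea in this abstract can be made concrete with a small sketch. The following Python sketch performs classical exact sequential up-lifting of a minimal cover inequality for a binary knapsack constraint; it is not the thesis's DPMLSA or TSSLA (which lift sets simultaneously), and the function names and example data are our own. The inner maximization is brute-forced here; a dynamic program over right-hand-side values would give the O(nb) behavior the abstract describes.

```python
from itertools import product

def lift_coefficient(weights, rhs, coeffs, b, j):
    # Exact lifting coefficient: alpha_j = b - max{ sum(coeffs[i]*x_i) :
    #   sum(weights[i]*x_i) <= rhs - weights[j] }, i.e. the inequality must
    # stay valid when x_j is fixed to 1.
    residual = rhs - weights[j]
    idx = list(coeffs)
    best = 0
    for x in product((0, 1), repeat=len(idx)):   # brute force; a DP scales better
        if sum(weights[i] * xi for i, xi in zip(idx, x)) <= residual:
            best = max(best, sum(coeffs[i] * xi for i, xi in zip(idx, x)))
    return b - best

def sequentially_lift_cover(weights, rhs, cover, order):
    # Start from the minimal cover inequality sum_{i in C} x_i <= |C| - 1,
    # then lift the remaining variables one at a time in the given order.
    coeffs = {i: 1 for i in cover}
    b = len(cover) - 1
    for j in order:
        coeffs[j] = lift_coefficient(weights, rhs, coeffs, b, j)
    return coeffs, b
```

For the knapsack 5x0 + 4x1 + 3x2 + 2x3 <= 6, the minimal cover {1, 2} gives x1 + x2 <= 1, and lifting x0 then x3 yields x0 + x1 + x2 <= 1 (x3 receives coefficient 0).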
273

Simulating epidemics in rural areas and optimizing preplanned quarantine areas using a clustering heuristic

Anderson, Joseph Edward January 1900 (has links)
Master of Science / Department of Industrial & Manufacturing Systems Engineering / Todd W. Easton / With the present threat of bioterrorist attacks and new natural disease strains developing, efficient and effective countermeasures must be in place in case of an epidemic outbreak. The best strategy is to stop the attack or natural phenomenon before it happens, but governments and individual citizens must have measures in place to limit the spread of a biological threat or infectious disease if it is ever introduced into society. The objective of this research is to determine the best quarantine areas before an outbreak occurs. Quarantines force similar individuals together and can be mathematically modeled as clustering people into distinct groups. To effectively determine the clustering solution to use as a quarantine plan, this research develops a simulation core that is highly adaptable to different disease types and different contact networks. The simulation core takes as input the characteristics of the disease and the contact network of the area to be modeled. Clustering is a mathematical problem that groups entities based on their similarities while keeping dissimilar entities in separate groups; it has been widely used by civilian and military researchers to provide quality solutions to numerous problems. This research builds a mathematical model to find clusters from a community's contact network, and these clusters then serve as the preplanned quarantine areas. To find quality clusters, a Clustering Heuristic using Integer Programming (CHIP) is developed. CHIP is a large-neighborhood, hill-climbing heuristic, and computational results verify that it quickly generates good clustering solutions. Through a small computational study, CHIP is shown to produce clustering solutions that are about 25% better than those of the commonly used K-means clustering heuristic. CHIP is thus an effective tool for grouping people into clusters to be used as quarantine areas prior to an outbreak or biological attack: it can help combat the spread of an infectious disease and may even serve as a deterrent to terrorist attacks, since it would limit their destructive power. CHIP leads to the next level of preparation, which could save countless lives in the event of an epidemic.
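As an illustration of the hill-climbing idea behind CHIP (though not CHIP itself, which uses integer programming to explore large neighborhoods), the following Python sketch repeatedly moves each person to the cluster containing most of their contacts until the assignment stabilizes; the function names and toy contact network are our own.

```python
def hill_climb_clusters(adj, assign, max_iter=100):
    """Greedy clustering on a contact network: adj maps each person to a
    list of contacts, assign maps each person to an initial cluster id.
    Each pass moves a person to the cluster holding most of their contacts."""
    for _ in range(max_iter):
        changed = False
        for v in adj:
            counts = {}
            for u in adj[v]:
                counts[assign[u]] = counts.get(assign[u], 0) + 1
            if not counts:
                continue                       # isolated person: leave in place
            best = max(counts, key=counts.get)
            if counts[best] > counts.get(assign[v], 0):
                assign[v] = best
                changed = True
        if not changed:
            break                              # local optimum reached
    return assign
```

On two triangles joined by a single edge, this recovers the two natural quarantine groups even from a scrambled start.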
274

Using integer programming and constraint programming to solve sports scheduling problems

Easton, Kelly King 12 1900 (has links)
No description available.
275

Optimization methods for physician scheduling

Smalley, Hannah Kolberg 24 August 2012 (has links)
This thesis considers three physician scheduling problems in health care systems. Specifically, we focus on improving current physician scheduling practices through mathematical modeling. In the first part of the thesis, we present a physician shift scheduling problem focused on maximizing continuity of care (i.e., ensuring that patients are familiar with their treating physicians, and vice versa). We develop an objective scoring method for measuring the continuity of a physician schedule and combine it with a mixed integer programming model. We apply our methods to the problem faced in the pediatric intensive care unit at Children's Healthcare of Atlanta at Egleston and show that our schedule generation approach outperforms manual methods of schedule construction with regard to both solution time and continuity. The second part of the thesis focuses on two scheduling problems: (i) the assignment of residents to rotations over a one-year period and, given that assignment, (ii) the scheduling of residents' night and weekend shifts. We present an integer programming model for the assignment of residents to rotations such that residents of the same type receive similar educational experiences; the model allows flexible input of parameters and varying groups of residents and rotations without altering the constraints. We also present a simple model for scheduling first-year residents to night and weekend shifts. We apply these approaches to problems faced in the Department of Surgery Residency Program at Emory University School of Medicine. Rotation assignment is made more efficient through automated schedule generation, and the shift scheduling model highlights infeasibilities that occur when shift lengths exceed a certain value; we discuss the impact of duty-hour restrictions under the limitations of current scheduling practices. The final part of the thesis focuses on the assignment of physicians to various tasks while promoting equity of assignments and maximizing space utilization. We present an integer programming model for this problem, apply it to the physician scheduling problem faced in the Department of Gynecology and Obstetrics at Emory University Hospital, and generate high-quality solutions very quickly.
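A toy version of the continuity objective can be sketched as follows: choose one physician per day so that consecutive days share a physician as often as possible, subject to a maximum-consecutive-days limit. This brute-force Python sketch is our own illustration, not the thesis's MIP model or scoring method, and is only practical for tiny instances.

```python
from itertools import product

def best_schedule(physicians, days, max_streak):
    """Enumerate all day-by-day assignments, keep the one maximizing the
    number of handoff-free day-to-day transitions (a crude continuity score),
    subject to no physician working more than max_streak consecutive days."""
    best, best_score = None, -1
    for sched in product(physicians, repeat=days):
        ok, streak = True, 1
        for a, b in zip(sched, sched[1:]):
            streak = streak + 1 if a == b else 1
            if streak > max_streak:
                ok = False
                break
        if not ok:
            continue
        score = sum(a == b for a, b in zip(sched, sched[1:]))
        if score > best_score:
            best, best_score = sched, score
    return best, best_score
```

With two physicians over five days and at most three consecutive days per physician, the best achievable continuity score is 3 (e.g. A-A-A-B-B); an IP model reaches the same optimum without enumerating all 2^5 schedules.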
276

Robust and Survivable Network Design Considering Uncertain Node and Link Failures

Sadeghi, Elham January 2016 (has links)
Network design is the planning process of placing system components to provide service or meet certain needs in an economical way. It has strong links to real application areas, such as transportation networks, communication networks, supply chains, power grids, and water distribution systems. In practice, these infrastructures are very vulnerable to failures of system components. Therefore, the design of such infrastructure networks should be robust and survivable against failures caused by many factors, for example, natural disasters, intentional attacks, and system limits. In this dissertation, we first summarize the background and motivations of our research topic on network design problems. Unlike most of the literature on network design, we consider both uncertain node and link failures during the network design process. The first part of our research is to design a survivable network with mixed connectivity requirements, or (k,l)-connectivity. The designed network remains connected after the failure of any k vertices and (l-1) edges, or of any (k-1) vertices and l edges. After formally proving this property's relationship to edge-disjoint and vertex-disjoint paths, we present two integer programming (IP) formulations, valid inequalities to strengthen them, and a cutting plane algorithm. Numerical experiments are performed on randomly generated graphs to compare these approaches. Special cases of this problem include: when k=0 and l=1, the well-known minimum spanning tree problem; when k=0 and l ≥ 1, finding a minimum-cost l-edge-connected spanning subgraph; and when k ≥ 2 and l=0, finding a minimum-cost k-vertex-connected spanning subgraph. As a generalization of the k-minimum spanning tree and λ-edge-connected spanning subgraph problems, we consider the minimum-cost λ-edge-connected k-subgraph problem, or the (k, λ)-subgraph problem, which is to find a minimum-cost λ-edge-connected subgraph of a graph with at least k vertices. This problem can be viewed as designing a k-minimum spanning tree with higher connectivity requirements. We propose several IP formulations for exactly solving the (k, λ)-subgraph problem, based on graph properties such as cutset requirements for divisions of the graph and paths between any two vertices. In addition, we study the properties of (k,2)-subgraphs, such as connectivity, bridgelessness, and strong orientation properties. Based on these properties, we propose several stronger and more compact IP formulations for solving the (k,2)-subgraph problem, a direct generalization of the k-minimum spanning tree problem. Serving as a virtual backbone for wireless ad hoc networks, the connected dominating set problem has been widely studied. We design a robust and survivable connected dominating set to act as a virtual backbone of a larger graph for an ad hoc network. More specifically, we study the (k,l)-connected d-dominating set problem. Given a graph G=(V,E), a subset D ⊆ V is a (k,l)-connected d-dominating set if the subgraph induced by D has mixed connectivity at least (k,l) and every vertex outside of D has at least d neighbors in D. This type of virtual backbone is survivable and remains able to relay messages under a certain number of both node and link failures. We study the properties of such dominating sets and corresponding IP formulations, and we design a cutting plane algorithm to solve the problem.
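The mixed connectivity requirement can be checked by brute force on small graphs, which is useful for validating candidate designs. The following Python sketch (our own illustration, not one of the dissertation's IP formulations) tests (k,l)-connectivity by enumerating every relevant combination of vertex and edge failures.

```python
from itertools import combinations

def _connected(nodes, edges):
    """Depth-first search connectivity check on an undirected graph."""
    nodes = set(nodes)
    if len(nodes) <= 1:
        return True
    adj = {v: [] for v in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].append(v)
            adj[v].append(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == nodes

def has_mixed_connectivity(nodes, edges, k, l):
    """(k,l)-connectivity: the graph stays connected after any k vertex and
    (l-1) edge failures, and after any (k-1) vertex and l edge failures.
    Exponential enumeration; suitable only for small graphs."""
    for dv, de in ((k, l - 1), (k - 1, l)):
        if dv < 0 or de < 0:
            continue
        for vs in combinations(nodes, dv):
            left = [v for v in nodes if v not in vs]
            live = [e for e in edges if e[0] in left and e[1] in left]
            for es in combinations(range(len(live)), de):
                kept = [e for i, e in enumerate(live) if i not in es]
                if not _connected(left, kept):
                    return False
    return True
```

For example, the complete graph K4 is (1,1)-connected, while a three-vertex path is not (removing its middle vertex disconnects it).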
277

Optimization approaches for designing baseball scout networks under uncertainty

Ozlu, Ahmet Oguzhan 27 May 2016 (has links)
Major League Baseball (MLB) is a 30-team North American professional baseball league, and Minor League Baseball (MiLB) is the hierarchy of developmental professional baseball teams for MLB. Most MLB players first develop their skills in MiLB, and MLB teams employ scouts, experts who evaluate the strengths, weaknesses, and overall potential of these players. In this dissertation, we study the problem of designing a scouting network for an MLB team. We introduce the problem to the operations research literature to help teams make strategic and operational decisions when managing their scouting resources. The thesis consists of three chapters that address decisions such as how scouts should be assigned to the available MiLB teams, how scouts should be routed around the country, how many scouts are needed to perform the major scouting tasks, and whether there are trade-offs between the scouting objectives and, if so, what their implications are. In the first chapter, we study the problem of assigning and scheduling minor league scouts for MLB teams. There are multiple objectives in this problem. We formulate it as an integer program; use decomposition, column-generation-based heuristics, and problem-specific heuristics to solve it; and evaluate policies on multiple objective dimensions based on 100 bootstrapped season schedules. Our approach can allow teams to improve operationally by finding better scout schedules, to understand quantitatively the strategic trade-offs inherent in scout assignment policies, and to select the assignment policy whose strategic and operational performance best meets their needs. In the second chapter, we study the problem under uncertainty. In reality, there are always disruptions to the schedules: players are injured, scouts become unavailable, games are delayed due to bad weather, and so on. We present a minor league baseball season simulator that generates random disruptions to the scouts' schedules and uses optimization-based heuristic models to recover the disrupted schedules. We evaluate the strategic benefits of different policies for team-to-scout assignment using the simulator. Our results demonstrate that the deterministic approach is insufficient for evaluating the benefits and costs of each policy, and that a simulation approach is much more effective at determining the value of adding an additional scout to the network. The real scouting network design instances we solve in the first two chapters have several complexities that make them hard to study, such as idle day constraints, varying season lengths, off days for teams in the schedule, and days when some teams play and others do not. In the third chapter, we analyze a simplified version of the Single Scout Problem (SSP), stripping away much of the real-world complexity that complicates SSP instances. Even for this stylized, archetypal version of SSP, we find that small instances can be computationally difficult. We show, by reduction from the minimum-cost Hamiltonian path problem, that the archetypal version of SSP is NP-complete, even without the additional complexity introduced by real scheduling and scouting operations.
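The routing core of the stylized SSP is a minimum-cost Hamiltonian path problem, which suggests simple constructive heuristics. As an illustration (our own sketch with invented site data, not a method from the thesis), a nearest-neighbor heuristic routes a single scout through all sites:

```python
from math import hypot

def dist(a, b):
    # Euclidean distance between two (x, y) site locations
    return hypot(a[0] - b[0], a[1] - b[1])

def nearest_neighbor_route(sites, start):
    """Greedy Hamiltonian-path heuristic: from the current site, always
    travel to the nearest unvisited site. sites maps name -> (x, y)."""
    unvisited = set(sites) - {start}
    route, cur, total = [start], start, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda s: dist(sites[cur], sites[s]))
        total += dist(sites[cur], sites[nxt])
        route.append(nxt)
        unvisited.remove(nxt)
        cur = nxt
    return route, total
```

Nearest-neighbor gives no optimality guarantee, which is consistent with the NP-completeness result above; exact approaches require IP or dynamic programming over subsets.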
278

Portfolio optimisation models

Arbex Valle, Cristiano January 2013 (has links)
In this thesis we consider three different problems in the domain of portfolio optimisation. The first is that of selecting an Absolute Return Portfolio (ARP). ARPs are usually seen as financial portfolios that aim to produce a good return regardless of how the underlying market performs, but our literature review shows that there is little agreement on what constitutes an ARP. We present a clear definition via a three-stage mixed-integer zero-one program for the problem of selecting an ARP. The second problem considered is that of designing a Market Neutral Portfolio (MNP). MNPs are generally defined as financial portfolios that (ideally) exhibit performance independent of that of an underlying market, but, once again, the existing literature is very fragmented. We formulate the problem of constructing an MNP as a mixed-integer non-linear program (MINLP) which minimises the absolute value of the correlation between the portfolio return and the underlying benchmark return. The third problem is related to Exchange-Traded Funds (ETFs). ETFs are funds traded on the open market which typically have their performance tied to a benchmark index. They are composed of a basket of assets; most attempt to reproduce the returns of an index, but a growing number try to achieve a multiple of the benchmark return, such as two times the return or its negative. We present a detailed performance study of the current ETF market and find, among other conclusions, consistent underperformance among ETFs that aim to do more than simply track an index. We present a MINLP for the problem of selecting the basket of assets that compose an ETF which, to the best of our knowledge, is the first in the literature. For all three models we present extensive computational results for portfolios derived from universes defined by S&P international equity indices with up to 1200 stocks. We use CPLEX to solve the ARP problem and the software package Minotaur for both our MINLPs, for MNP and ETF construction.
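The MNP objective, minimising the absolute correlation between portfolio and benchmark returns, can be illustrated on a toy two-asset case by grid search rather than an MINLP solver. This Python sketch and its return series are our own invention, purely to show the objective being evaluated:

```python
def pearson(x, y):
    """Pearson correlation; a constant series is treated as uncorrelated."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0 or sy == 0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def market_neutral_weight(r1, r2, bench, steps=100):
    """Grid-search the weight w on asset 1 (1-w on asset 2) minimising
    |corr(portfolio return, benchmark return)| — a toy stand-in for the
    MINLP, which also handles cardinality and trading constraints."""
    best_w, best_c = 0.0, float('inf')
    for i in range(steps + 1):
        w = i / steps
        port = [w * a + (1 - w) * b for a, b in zip(r1, r2)]
        c = abs(pearson(port, bench))
        if c < best_c:
            best_w, best_c = w, c
    return best_w, best_c
```

With one asset perfectly correlated and one perfectly anti-correlated with the benchmark, the search correctly lands on the 50/50 mix that neutralises market exposure.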
279

Islanding model for preventing wide-area blackouts and the issue of local solutions of the optimal power flow problem

Bukhsh, Waqquas Ahmed January 2014 (has links)
Optimization plays a central role in the control and operation of electricity power networks. In this thesis we focus on two very important optimization problems in power systems. The first is the optimal power flow (OPF) problem, an old and well-known nonconvex optimization problem in power systems. The existence of local solutions of OPF has been a question of interest for decades. Both local and global solution techniques have been put forward to solve the OPF problem, but without any documented cases of local solutions. We have produced test cases of power networks with local solutions and have collected these test cases in a publicly available online archive (http://www.maths.ed.ac.uk/optenergy/LocalOpt/), which researchers and practitioners can now use to test the robustness of their solution techniques. A new nonlinear relaxation of OPF is also presented, and it is shown that in practice this relaxation gives tight lower bounds on the global solution of OPF. The second problem considered is how to split a network into islands so as to prevent cascading blackouts over wide areas. A mixed integer linear programming (MILP) model for islanding a power system is presented. In recent years, islanding of power networks has been attracting attention because of the increasing occurrence and risk of blackouts. Our proposed approach is quite flexible and incorporates line switching and load shedding. We also give the motivation behind the islanding operation and test our model on a variety of test cases. The islanding model uses the DC model of the power flow equations. We describe some of the shortcomings of this model and later improve it by using a piecewise linear approximation of the nonlinear terms. The improved model yields good feasible results very quickly, and numerical results on large networks show its promising performance.
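The DC power flow model used by the islanding MILP linearizes the AC power flow equations: with voltage magnitudes fixed and angle differences small, each line flow is proportional to the angle difference across the line. A minimal sketch for a 3-bus network (our own illustration with invented susceptance and injection data) solves the reduced susceptance system directly:

```python
def dc_power_flow_3bus(b12, b13, b23, p2, p3):
    """DC power flow for a 3-bus network with bus 1 as the slack bus
    (theta1 = 0). Solves the 2x2 reduced system
        [b12+b23, -b23 ] [th2]   [p2]
        [-b23,  b13+b23] [th3] = [p3]
    by Cramer's rule and returns angles plus line flows."""
    a, b, c, d = b12 + b23, -b23, -b23, b13 + b23
    det = a * d - b * c
    th2 = (d * p2 - b * p3) / det
    th3 = (a * p3 - c * p2) / det
    flows = {
        (1, 2): b12 * (0.0 - th2),   # flow from bus 1 to bus 2
        (1, 3): b13 * (0.0 - th3),
        (2, 3): b23 * (th2 - th3),
    }
    return th2, th3, flows
```

With unit susceptances, injecting 1 unit at bus 2 and withdrawing 1 unit at bus 3 splits the transfer 2/3 over the direct line and 1/3 around the longer path, the classic DC load-flow result.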
280

Cyber-physical acquisition strategy for COTS-based agility-driven engineering

Knisely, Nathan C. L. 27 May 2016 (has links)
The rising cost of military aircraft has driven the DoD to increase the utilization of commercial off-the-shelf (COTS) components in new acquisitions. Despite several demonstrated advantages of COTS-based systems, challenges relating to obsolescence arise when attempting to design and sustain such systems using traditional acquisition processes. This research addresses these challenges through the creation of an Agile Systems Engineering framework specifically aimed at COTS-based systems. This framework, known as the Cyber-physical Acquisition Strategy for COTS-based Agility-Driven Engineering (CASCADE), amends the traditional systems engineering process with an "identification phase" during which requirements are balanced against the capabilities of commercially available components. The CASCADE framework motivates a new Mixed Integer Linear Programming (MILP) formulation that enables the construction of optimal obsolescence mitigation plans. Using this CASCADE MILP formulation, two sets of experiments are carried out: first, verification experiments demonstrate that the CASCADE MILP conforms to expected trends and agrees with existing results; next, the CASCADE MILP is applied to a representative set of COTS-based systems to determine the appropriate level of obsolescence forecast accuracy and to uncover new system-level cost-versus-reliability trends associated with COTS component modification.
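The flavor of an obsolescence mitigation plan can be conveyed with a toy dynamic program (our own simplification, not the CASCADE MILP): choose redesign years over a planning horizon so that running a design beyond its supportable life incurs a per-year penalty, and minimize total redesign plus penalty cost.

```python
def plan_redesigns(horizon, life, redesign_cost, gap_cost):
    """dp[t] = min cost to operate from a fresh design at year t to the end
    of the horizon; each segment longer than `life` years pays `gap_cost`
    per extra year, and each redesign pays `redesign_cost`."""
    def overrun(gap):
        return gap_cost * max(0, gap - life)

    dp = [0] * (horizon + 1)
    plan = [[] for _ in range(horizon + 1)]
    for t in range(horizon, -1, -1):
        dp[t], plan[t] = overrun(horizon - t), []    # option: no more redesigns
        for y in range(t + 1, horizon + 1):          # option: next redesign at y
            c = overrun(y - t) + redesign_cost + dp[y]
            if c < dp[t]:
                dp[t], plan[t] = c, [y] + plan[y]
    return dp[0], plan[0]
```

Over a 10-year horizon with a 4-year supportable life, a redesign cost of 3, and an overrun penalty of 2 per year, two redesigns (at years 2 and 6) beat both a single mid-life redesign and no redesign at all. The real MILP additionally models per-component obsolescence dates, forecast uncertainty, and reliability, which this sketch omits.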
