241
Analytic search methods in online social networks / Marks, Christopher E. (Christopher Edward) / January 2017
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 175-185). / This thesis presents and evaluates methods for searching and analyzing social media data in order to improve situational awareness. We begin by proposing a method for network vertex search that looks for the target vertex by sequentially examining the neighbors of a set of "known" vertices. Using a dynamic programming approach, we show that there is always an optimal "block" search policy, in which all of the neighbors of a known vertex are examined before moving on to another vertex. We provide a precise characterization of the optimal policy in two specific cases: (1) when the connections between the known vertices and the target vertex are independent, and (2) when the target vertex is connected to at most one known vertex. We then apply this result to the problem of finding new accounts belonging to Twitter users whose previous accounts had been suspended for extremist activity, quantifying the performance of our optimal search policy in this application against other policies. In this application we use thousands of Twitter accounts related to the Islamic State in Iraq and Syria (ISIS) to develop behavioral models for these extremist users. These models are used to identify new extremist accounts, identify pairs of accounts belonging to the same user, and predict to whom a user will connect when opening an account. We use this final model to inform our network search application. Finally, we develop a more general application of network search and classification that obtains a set of social media users from a specified location or group. We propose an expand-classify methodology which recursively collects users that have social network connections to users inside the target location, and then classifies all of the users by maximizing the probability over a factor graph model. This factor graph model accounts for the implications of both observed user profile features and social network connections in inferring location. Using geo-located data to evaluate our method, we find that our classification method typically outperforms Twitter's native search methods in building a dataset of Twitter users in a specific location. / by Christopher E. Marks. / Ph. D.
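To make the "block" policy concrete, here is a minimal Python sketch of a search that examines all unexamined neighbors of one known vertex before moving to the next; the vertex names, neighbor sets, and the probability-based ordering are illustrative assumptions, not data from the thesis.

```python
def block_search(known_neighbors, target, block_order):
    """Examine neighbor sets one known vertex at a time (a "block" policy).

    known_neighbors: dict mapping each known vertex to its unexamined neighbors.
    target: the vertex being searched for.
    block_order: order in which the known vertices' neighborhoods are examined,
        e.g. sorted by an estimated probability of containing the target.
    Returns the number of examinations until the target is found, or None.
    """
    examinations = 0
    for v in block_order:
        for u in known_neighbors[v]:
            examinations += 1
            if u == target:
                return examinations
    return None

# Toy example with hypothetical vertices: the target "t" hides among the
# neighbors of one of three known vertices.
known = {"a": ["x1", "x2", "t"], "b": ["y1", "y2"], "c": ["z1"]}
order = ["a", "b", "c"]  # suppose prior beliefs rank "a" as most promising
print(block_search(known, "t", order))  # -> 3 examinations
```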
242
Multi-mission optimized re-planning in Air Mobility Command's channel route execution / Koepke, Corbin G. (Corbin Gene), 1977- / January 2004
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2004. / Includes bibliographical references (p. 143-145). / The United States Air Force's Air Mobility Command is responsible for creating and executing the schedule for a large-scale air mobility network that encompasses different mission areas. One of the mission areas is channel route. Channel route execution often experiences disruptions that motivate a need for changes in the current channel route schedule. Traditionally, re-planning the channel route schedule has been a manual process that usually stops after the first feasible set of changes is found, due to the challenges posed by large amounts of data and the urgency of a re-plan. Other challenges include subjective trade-offs and a desire for minimal changes to the channel route schedule. We re-plan the channel route schedule using a set of integer programs and heuristics that overcomes these challenges. The integer programs' variables incorporate many of Air Mobility Command's operating constraints, so they do not have to be explicitly included in the formulations. The re-plan uses opportunities in the other mission areas and reroutes channel route aircraft. Finally, our methods can quickly find a solution, allow for "what-if" analysis and interaction with the user, and can be adapted to an evolution in Air Mobility Command's operations while the underlying models remain constant. / by Corbin G. Koepke. / S.M.
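As a rough illustration of the kind of integer program described above (not Air Mobility Command's actual formulation), the sketch below uses the open-source PuLP modeler to pick one re-planning option per disrupted mission while respecting aircraft availability; all mission names, tail numbers, and penalties are invented.

```python
import pulp

# Hypothetical data: option -> (mission it covers, aircraft it uses,
# penalty for deviating from the original schedule).
options = {
    "keep_m1":    ("m1", "tail_101", 0),
    "reroute_m1": ("m1", "tail_202", 4),
    "delay_m2":   ("m2", "tail_101", 2),
    "swap_m2":    ("m2", "tail_303", 5),
}
missions = {"m1", "m2"}
aircraft = {"tail_101", "tail_202", "tail_303"}

prob = pulp.LpProblem("channel_route_replan", pulp.LpMinimize)
x = {o: pulp.LpVariable(o, cat="Binary") for o in options}

# Minimize total deviation penalty.
prob += pulp.lpSum(options[o][2] * x[o] for o in options)

# Cover every disrupted mission exactly once.
for m in missions:
    prob += pulp.lpSum(x[o] for o in options if options[o][0] == m) == 1

# Each aircraft flies at most one re-planned option.
for a in aircraft:
    prob += pulp.lpSum(x[o] for o in options if options[o][1] == a) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({o: int(x[o].value()) for o in options})
```

Because each variable stands for a complete candidate option, operating rules (crew rest, route legality, and so on) can be enforced when the options are generated rather than written as explicit constraints, which is the spirit of the approach described in the abstract.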
243
Modeling human dynamics and lifestyles using digital traces / Xu, Sharon / January 2018
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 63-69). / In this thesis, we present algorithms to model and identify shared patterns in human activity with respect to three applications. First, we propose a novel model to characterize the bursty dynamics found in human activity. This model couples excitation from past events with weekly periodicity and circadian rhythms, giving the first descriptive understanding of mechanisms underlying human behavior. The proposed model infers directly from event sequences both the transition rates between tasks and the nonhomogeneous rates depending on daily and weekly cycles. We focus on credit card transactions to test the model, and find it performs well in prediction and is a good statistical fit for individuals. Second, using credit card transactions, we identify lifestyles in urban regions and add temporal context to behavioral patterns. We find that these lifestyles not only correspond to demographics, but also show a clear relationship with one's social network. Third, we analyze household load profiles for segmentation based on energy consumption, focusing on capturing peak times and overall magnitude of consumption. We propose novel metrics to measure the representative accuracy of centroids, and develop a method that outperforms standard and state-of-the-art baselines with respect to these metrics. In addition, we show that this method is able to separate consumers well based on their solar PV and storage needs, thus helping consumers understand their needs and assisting utilities in making good recommendations. / by Sharon Xu. / S.M.
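A minimal sketch of the kind of intensity function the abstract describes, combining a weekly/circadian baseline with self-excitation from past events; the functional form shown is a generic Hawkes-style rate, and the parameter values are assumptions for illustration only.

```python
import math

def intensity(t, past_events, weekly_profile, alpha=0.8, beta=1.5):
    """Illustrative event rate coupling a periodic baseline with excitation
    from past events (parameter values are made up for the example).

    t: current time in hours.
    past_events: times (hours) of earlier events by the same individual.
    weekly_profile: 168 baseline rates, one per hour of the week.
    """
    baseline = weekly_profile[int(t) % 168]            # weekly/circadian cycle
    excitation = sum(alpha * math.exp(-beta * (t - s))
                     for s in past_events if s < t)     # bursts after past events
    return baseline + excitation

# Flat daytime profile with quiet nights, purely for illustration.
profile = [0.05 if (h % 24) < 7 else 0.3 for h in range(168)]
print(intensity(t=50.0, past_events=[49.2, 49.7], weekly_profile=profile))
```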
244
Models for project management / Messmacher, Eduardo B. (Eduardo Bernhart), 1972- / January 2000
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2000. / Also available online at the DSpace at MIT website. / Includes bibliographical references (p. 119-122). / Organizations perform work essentially through operations and projects. The characteristics of projects make them extremely difficult to manage: their non-repetitive nature rules out trial-and-error learning, while their short life span is particularly unforgiving of misjudgments. Some authors have found that effective scheduling is an important contributor to the success of research and development (R&D), as well as construction projects. The widely used critical path method for scheduling projects and identifying important activities fails to capture two important dimensions of the problem: the availability of different technologies (or options) to perform the activities, and the inherent problem of limited availability of resources that most managers face. Nevertheless, when one tries to account for such additional constraints, the problems become very hard to solve. In this thesis we propose an approach to the scheduling problem using a genetic algorithm, and compare its performance to more traditional approaches, such as an extension of a recently proposed and innovative Lagrangian relaxation approach. The purpose of using genetic algorithms is twofold: first, to obtain good approximations to very hard problems, and second, to assess the limitations and virtues of this search technique. The purpose of this thesis is not only to develop the algorithms, but also to obtain insight into the implications of the additional constraints from the perspective of a project manager. / by Eduardo B. Messmacher. / S.M.
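The following sketch illustrates the genetic-algorithm idea on a toy resource-constrained scheduling instance: chromosomes are activity priority lists, a serial schedule-generation scheme decodes each list into start times, and selection plus swap mutation evolve the population. The project data and GA parameters are invented, and the thesis's actual encoding may differ.

```python
import random

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
demand    = {"A": 2, "B": 1, "C": 2, "D": 1}     # resource units required
preds     = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
CAPACITY  = 3                                     # single renewable resource

def makespan(priority):
    """Decode a priority list into start times (serial schedule generation)."""
    start, finish = {}, {}
    for act in priority:
        t = max([finish[p] for p in preds[act]], default=0)
        while True:  # earliest time with enough capacity over the duration
            used = lambda u: sum(demand[a] for a in finish
                                 if start[a] <= u < finish[a])
            if all(used(u) + demand[act] <= CAPACITY
                   for u in range(t, t + durations[act])):
                break
            t += 1
        start[act], finish[act] = t, t + durations[act]
    return max(finish.values())

def repair(order):
    """Reorder so every activity appears after its predecessors."""
    out, remaining = [], list(order)
    while remaining:
        act = next(a for a in remaining if all(p in out for p in preds[a]))
        out.append(act)
        remaining.remove(act)
    return out

population = [repair(random.sample(list(durations), 4)) for _ in range(10)]
for _ in range(30):                               # simple evolutionary loop
    population.sort(key=makespan)
    parents = population[:5]
    children = []
    for _ in range(5):
        child = random.choice(parents)[:]
        i, j = random.sample(range(4), 2)
        child[i], child[j] = child[j], child[i]   # swap mutation
        children.append(repair(child))
    population = parents + children
print(makespan(min(population, key=makespan)))
```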
245
Practical applications of large-scale stochastic control for learning and optimization / Gutin, Eli / January 2018
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2018. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 183-188). / This thesis explores a variety of techniques for large-scale stochastic control. These range from simple heuristics that are motivated by the problem structure and are amenable to analysis, to more general deep reinforcement learning (RL), which applies to broader classes of problems but is trickier to reason about. In the first part of this thesis, we explore a lesser-known application of stochastic control: multi-armed bandits. By assuming a Bayesian statistical model, we get enough problem structure so that we can formulate an MDP to maximize total rewards. If the objective involved total discounted rewards over an infinite horizon, then the celebrated Gittins index policy would be optimal. Unfortunately, the analysis there does not carry over to the non-discounted, finite-horizon problem. In this work, we propose a tightening sequence of 'optimistic' approximations to the Gittins index. We show that the use of these approximations together with the use of an increasing discount factor appears to offer a compelling alternative to state-of-the-art algorithms. We prove that these optimistic indices constitute a regret-optimal algorithm, in the sense of meeting the Lai-Robbins lower bound, including matching constants. The second part of the thesis focuses on the collateral management problem (CMP). In this work, we study the CMP, faced by a prime brokerage, through the lens of multi-period stochastic optimization. We find that, for a large class of CMP instances, algorithms that select collateral based on appropriately computed asset prices are near-optimal. In addition, we back-test the method on data from a prime brokerage and find substantial increases in revenue. Finally, in the third part, we propose novel deep reinforcement learning (DRL) methods for option pricing and portfolio optimization problems. Our work on option pricing enables one to compute tighter confidence bounds on the price, using the same number of Monte Carlo samples, than existing techniques. We also examine constrained portfolio optimization problems and test out policy gradient algorithms that work with somewhat different objective functions. These new objectives measure the performance of a projected version of the policy and penalize constraint violation. / by Eli Gutin. / Ph. D.
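For context on the option-pricing bounds mentioned in the third part, the sketch below shows the plain Monte Carlo baseline: a European call price estimate with the usual normal-approximation confidence interval. This is the kind of bound the thesis's methods aim to tighten, not the thesis's own algorithm; the Black-Scholes parameters are made up.

```python
import math, random

def mc_call_price(s0=100, k=100, r=0.05, sigma=0.2, t=1.0, n=100_000, seed=0):
    """Standard Monte Carlo price of a European call under GBM, with a 95% CI."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)
    payoffs = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
        payoffs.append(disc * max(s_t - k, 0.0))
    mean = sum(payoffs) / n
    var = sum((p - mean) ** 2 for p in payoffs) / (n - 1)
    half_width = 1.96 * math.sqrt(var / n)        # normal-approximation CI
    return mean, (mean - half_width, mean + half_width)

price, ci = mc_call_price()
print(price, ci)
```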
246
Designing a Robust Supply Chain Network Against Disruptions / Pariazar, Mahmood / 16 April 2019
Supply chains are vulnerable to disruptions at any stage of the distribution system. These disruptions can be caused by natural disasters, production problems, or labor defects, and their consequences may include significant economic losses or even human deaths. Therefore, it is important to treat disruptions as a key factor in strategic supply chain design. Consequently, the primary outputs of this dissertation are insights for designing robust supply chains that are not significantly or adversely impacted by disruptions. The impact of correlated supplier failures is examined, and this problem is modeled as a variant of a facility location problem. Two main problems are defined: the first is the design of a robust supply chain, and the second is the optimization of operational inspection schedules to maintain the quality of an already established supply chain. In this regard, both strategic and operational decisions are considered in the model, and (1) a two-stage stochastic programming model, (2) a multi-objective stochastic programming model, and (3) a dynamic programming model are developed to explore the tradeoffs between cost and risk. Three methods are developed to identify optimal and robust solutions: an integer L-shaped method; a hybrid genetic algorithm using Data Envelopment Analysis; and an approximate dynamic programming method. Several sensitivity analyses are performed to see how the model output would be affected by uncertainty. The findings from this dissertation will help both practitioners designing supply chains and policy makers who need to understand the impact of different disruption mitigation strategies on cost and risk in the supply chain.
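A minimal sketch of a two-stage stochastic supplier-selection model in extensive form, in the spirit of model (1) above: stage one opens suppliers, and stage two assigns demand after observing which suppliers survive each disruption scenario. It uses the open-source PuLP modeler, and all costs, capacities, and scenario probabilities are illustrative assumptions.

```python
import pulp

suppliers = ["s1", "s2", "s3"]
open_cost = {"s1": 10, "s2": 8, "s3": 6}
unit_cost = {"s1": 1.0, "s2": 1.5, "s3": 2.0}
demand, penalty, capacity = 10, 20, 8   # penalty = cost per unit of unmet demand

# Scenarios: (probability, set of suppliers that survive the disruption).
scenarios = [(0.7, {"s1", "s2", "s3"}), (0.2, {"s2", "s3"}), (0.1, {"s3"})]

m = pulp.LpProblem("two_stage_supply", pulp.LpMinimize)
y = {s: pulp.LpVariable(f"open_{s}", cat="Binary") for s in suppliers}
x = {(s, k): pulp.LpVariable(f"ship_{s}_{k}", lowBound=0)
     for s in suppliers for k in range(len(scenarios))}
u = {k: pulp.LpVariable(f"unmet_{k}", lowBound=0) for k in range(len(scenarios))}

# First-stage opening costs plus expected second-stage shipping and shortage costs.
m += (pulp.lpSum(open_cost[s] * y[s] for s in suppliers)
      + pulp.lpSum(p * (pulp.lpSum(unit_cost[s] * x[s, k] for s in suppliers)
                        + penalty * u[k])
                   for k, (p, _) in enumerate(scenarios)))

for k, (p, alive) in enumerate(scenarios):
    m += pulp.lpSum(x[s, k] for s in suppliers) + u[k] >= demand
    for s in suppliers:
        # Ship only from suppliers that are open and not disrupted in scenario k.
        m += x[s, k] <= (capacity if s in alive else 0) * y[s]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(y[s].value()) for s in suppliers}, pulp.value(m.objective))
```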
247
Inferring noncompensatory choice heuristics / Yee, Michael, 1978- / January 2006
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2006. / Includes bibliographical references (p. 121-128). / Human decision making is a topic of great interest to marketers, psychologists, economists, and others. People are often modeled as rational utility maximizers with unlimited mental resources. However, due to the structure of the environment as well as cognitive limitations, people frequently use simplifying heuristics for making quick yet accurate decisions. In this research, we apply discrete optimization to infer from observed data if a person is behaving in a way consistent with a choice heuristic (e.g., a noncompensatory lexicographic decision rule). We analyze the computational complexity of several inference-related problems, showing that while some are easy due to possessing a greedoid language structure, many are hard and likely do not have polynomial-time solutions. For the hard problems we develop an exact dynamic programming algorithm that is robust and scalable in practice, and analyze several local search heuristics. We conduct an empirical study of smartphone preferences and find that the behavior of many respondents can be explained by lexicographic strategies. / (cont.) Furthermore, we find that lexicographic decision rules predict better on holdout data than some standard compensatory models. Finally, we look at a more general form of noncompensatory decision process in the context of consideration set formation. Specifically, we analyze the computational complexity of rule-based consideration set formation, develop solution techniques for inferring rules given observed consideration data, and apply the techniques to a real dataset. / by Michael J. Yee. / Ph.D.
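A small illustration of the noncompensatory lexicographic rule the thesis infers: alternatives are compared one aspect at a time in a fixed importance order, and the first aspect that discriminates decides the choice. The phone features and ordering below are hypothetical.

```python
def lexicographic_choice(a, b, aspect_order):
    """Return the preferred alternative under a lexicographic rule.

    a, b: dicts mapping aspect name -> 1 if the alternative has that aspect.
    aspect_order: aspects listed from most to least important.
    """
    for aspect in aspect_order:
        if a.get(aspect, 0) != b.get(aspect, 0):
            return a if a.get(aspect, 0) > b.get(aspect, 0) else b
    return a  # no aspect discriminates: indifferent, return either

phone_1 = {"large_screen": 1, "long_battery": 0, "low_price": 1}
phone_2 = {"large_screen": 1, "long_battery": 1, "low_price": 0}
order = ["large_screen", "long_battery", "low_price"]
print(lexicographic_choice(phone_1, phone_2, order))  # phone_2 wins on battery
```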
248
Analysis and optimization of the Emergency Department at Beth Israel Deaconess Medical Center via simulation / Noyes, Clay W. / January 2008
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 65-67). / We develop a simulation model based on patient data from 2/1/05 to 1/31/06 that represents the operations of the Emergency Department at Beth Israel Deaconess Medical Center, a Harvard teaching hospital and a leading medical institution. The model uses a multiclass representation of patients, a time-varying arrival process module that uses multivariate regression to predict future patient arrivals, and a service module that accounts for the fact that service times decrease and capacity increases when the system becomes congested. We show that the simulation model results in predictions of waiting times that closely match those observed in the data. Most importantly, we use the simulation model to propose and analyze new policies such as increasing the number of beds, reducing the downtime between patients, and introducing a point-of-care lab testing device. The model predicts that incorporating a suite of these proposed changes will result in a 21% reduction in waiting times. / by Clay W. Noyes. / S.M.
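A minimal sketch of the kind of simulation described above: a multi-bed emergency department with an hourly-varying Poisson arrival rate and exponential treatment times. All rates and the bed count are assumptions; the thesis model is considerably richer (patient classes, congestion-dependent service times, and so on).

```python
import heapq, random

def simulate_ed(hours=24 * 7, beds=10, seed=1):
    """Simulate a multi-bed FCFS queue with piecewise-constant arrival rates."""
    rng = random.Random(seed)
    hourly_rate = [2.0 if 8 <= (h % 24) <= 20 else 0.8 for h in range(hours)]
    free_at = [0.0] * beds          # time each bed next becomes free
    heapq.heapify(free_at)
    waits = []
    for h in range(hours):
        t = float(h)
        while True:                 # Poisson arrivals within hour h
            t += rng.expovariate(hourly_rate[h])
            if t >= h + 1:
                break
            bed_free = heapq.heappop(free_at)
            start = max(t, bed_free)             # wait if all beds are busy
            service = rng.expovariate(1 / 3.0)   # mean 3-hour treatment time
            heapq.heappush(free_at, start + service)
            waits.append(start - t)
    return sum(waits) / len(waits)

print(f"mean wait: {simulate_ed():.2f} hours")
```

Policies like adding beds or shortening turnover time can then be compared by re-running the simulation with different parameters.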
249
Statistical learning for decision making : interpretability, uncertainty, and inference / Letham, Benjamin / January 2015
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 183-196). / Data and predictive modeling are an increasingly important part of decision making. Here we present advances in several areas of statistical learning that are important for gaining insight from large amounts of data, and ultimately using predictive models to make better decisions. The first part of the thesis develops methods and theory for constructing interpretable models from association rules. Interpretability is important for decision makers to understand why a prediction is made. First we show how linear mixtures of rules can be used to make sequential predictions. Then we develop Bayesian Rule Lists, a method for learning small, ordered lists of rules. We apply Bayesian Rule Lists to a large database of patient medical histories and produce a simple, interpretable model that solves an important problem in healthcare, with little sacrifice to accuracy. Finally, we prove a uniform generalization bound for decision lists. In the second part of the thesis we focus on decision making from sales transaction data. We develop models and inference procedures for using transaction data to estimate quantities such as willingness-to-pay and lost sales due to stock unavailability. We develop a copula estimation procedure for making optimal bundle pricing decisions. We then develop a Bayesian hierarchical model for inferring demand and substitution behaviors from transaction data with stockouts. We show how posterior sampling can be used to directly incorporate model uncertainty into the decisions that will be made using the model. In the third part of the thesis we propose a method for aggregating relevant information from across the Internet to facilitate informed decision making. Our contributions here include an important theoretical result for Bayesian Sets, a popular method for identifying data that are similar to seed examples. We provide a generalization bound that holds for any data distribution, and moreover is independent of the dimensionality of the feature space. This result justifies the use of Bayesian Sets on high-dimensional problems, and also explains its good performance in settings where its underlying independence assumption does not hold. / by Benjamin Letham. / Ph. D.
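To illustrate the decision-list idea behind Bayesian Rule Lists, the sketch below applies a toy ordered rule list to a patient record: the prediction comes from the first rule whose conditions all hold, falling back to a default. The rules, features, and risk numbers are invented for illustration only.

```python
def predict_risk(patient, rule_list, default):
    """Return the risk estimate of the first rule whose conditions all hold."""
    for conditions, risk in rule_list:
        if all(patient.get(f, False) for f in conditions):
            return risk, conditions
    return default, ("default",)

# Hypothetical ordered rules, most specific first.
rules = [
    ({"hemiplegia", "age_over_60"}, 0.58),
    ({"cerebrovascular_disorder"}, 0.47),
    ({"transient_ischaemic_attack"}, 0.23),
]
patient = {"age_over_60": True, "cerebrovascular_disorder": True}
print(predict_risk(patient, rules, default=0.05))
```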
250
The Gomory-Chvátal closure : polyhedrality, complexity, and extensions / Dunkel, Juliane / January 2011
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / Vita. Cataloged from PDF version of thesis. / Includes bibliographical references (p. 163-166). / In this thesis, we examine theoretical aspects of the Gomory-Chvátal closure of polyhedra. A Gomory-Chvátal cutting plane for a polyhedron P is derived from any rational inequality that is valid for P by shifting the boundary of the associated half-space towards the polyhedron until it intersects an integer point. The Gomory-Chvátal closure of P is the intersection of all half-spaces defined by its Gomory-Chvátal cuts. While it was known that the separation problem for the Gomory-Chvátal closure of a rational polyhedron is NP-hard, we show that this remains true for the family of Gomory-Chvátal cuts for which all coefficients are either 0 or 1. Several combinatorially derived cutting planes belong to this class. Furthermore, as the hyperplanes associated with these cuts have very dense and symmetric lattices of integer points, these cutting planes are in some sense the "simplest" cuts in the set of all Gomory-Chvátal cuts. In the second part of this thesis, we answer a question raised by Schrijver (1980) and show that the Gomory-Chvátal closure of any non-rational polytope is a polytope. Schrijver (1980) had established the polyhedrality of the Gomory-Chvátal closure for rational polyhedra. In essence, his proof relies on the fact that the set of integer points in a rational polyhedral cone is generated by a finite subset of these points. This is not true for non-rational polyhedral cones. Hence, we develop a completely different proof technique to show that the Gomory-Chvátal closure of a non-rational polytope can be described by a finite set of Gomory-Chvátal cuts. Our proof is geometrically motivated and applies classic results from polyhedral theory and the geometry of numbers. Last, we introduce a natural modification of Gomory-Chvátal cutting planes for the important class of 0/1 integer programming problems. If the hyperplane associated with a Gomory-Chvátal cut for a polytope P ⊆ [0, 1]^n does not contain any 0/1 point, shifting the hyperplane further towards P until it intersects a 0/1 point guarantees that the resulting half-space contains all feasible solutions. We formalize this observation and introduce the class of M-cuts that arises by strengthening the family of Gomory-Chvátal cuts in this way. We study the polyhedral properties of the resulting closure, its complexity, and the associated cutting plane procedure. / by Juliane Dunkel. / Ph.D.
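A small worked illustration of how a single Gomory-Chvátal cut is obtained: scale a valid inequality so its left-hand coefficients are integral, then round the right-hand side down, which shifts the boundary of the half-space toward the polyhedron until it meets an integer value. The example inequality is made up.

```python
import math
from fractions import Fraction

def gomory_chvatal_cut(a, b):
    """Return the GC cut obtained from the valid inequality a.x <= b.

    a: list of rational coefficients; b: rational right-hand side.
    The coefficients are first scaled so that they are all integral,
    and then the right-hand side is rounded down.
    """
    fracs = [Fraction(c) for c in a]
    lcm = 1
    for f in fracs:
        lcm = lcm * f.denominator // math.gcd(lcm, f.denominator)
    a_int = [int(f * lcm) for f in fracs]
    b_scaled = Fraction(b) * lcm
    return a_int, math.floor(b_scaled)   # shift the boundary to an integer value

# The inequality x1 + x2 <= 3/2 is valid for some polytope; its GC cut is
# x1 + x2 <= 1, which every integer point of the polytope must satisfy.
print(gomory_chvatal_cut([1, 1], Fraction(3, 2)))  # -> ([1, 1], 1)
```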