351

A data mining approach for acoustic diagnosis of cardiopulmonary disease

Flietstra, Bryan C January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2008. / Includes bibliographical references (p. 107-111). / Variations in training and individual doctors' listening skills make diagnosing a patient via stethoscope-based auscultation problematic. Doctors have now turned to more advanced devices such as X-rays and computed tomography (CT) scans to make diagnoses. However, recent advances in lung sound analysis techniques allow the auscultation to be performed with an array of microphones, which send the lung sounds to a computer for processing. The computer automatically identifies adventitious sounds using time-expanded waveform analysis and allows for a more precise auscultation. We investigate three data mining techniques in order to diagnose a patient based solely on the sounds heard within the chest by a "smart" stethoscope. We achieve excellent recognition performance by using k-nearest neighbors, neural networks, and support vector machines to make classifications in pair-wise comparisons. We also extend the research to a multi-class scenario and are able to separate patients with interstitial pulmonary fibrosis with 80% accuracy. Adding clinical data also improves recognition performance. Our results show that computerized lung auscultation offers a low-cost, non-invasive diagnostic procedure that gives doctors better clinical utility, especially in situations where X-rays and CT scans are not available. / by Bryan C. Flietstra. / S.M.
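The abstract names the classifiers but not the feature set or tuning; purely as an illustration of the pairwise (two-class) setup it describes, a minimal k-nearest-neighbors sketch on synthetic stand-in features might look like this (the data, feature count, and neighbor count are all invented, not taken from the thesis):

```python
# Hypothetical sketch: pairwise diagnosis from acoustic features.
# The features here are random stand-ins; the thesis derives its features
# from time-expanded waveform analysis of multi-microphone lung sounds.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in acoustic features for two patient groups (e.g., fibrosis vs. healthy).
X_healthy = rng.normal(loc=0.0, scale=1.0, size=(60, 5))
X_disease = rng.normal(loc=1.0, scale=1.2, size=(60, 5))
X = np.vstack([X_healthy, X_disease])
y = np.array([0] * 60 + [1] * 60)

# k-nearest neighbors, one of the three classifier families compared in the thesis.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, X, y, cv=5)
print("pairwise classification accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```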
352

From data to decisions in healthcare : an optimization perspective

Weinstein, Alexander Michael January 2017 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2017. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (pages 107-110). / The past few decades have seen many methodological and technological advances in optimization, statistics, and machine learning. What is still not well understood is how to combine these tools to take data as inputs and give decisions as outputs. The healthcare arena offers fertile ground for improvement in data-driven decision-making. Every day, medical researchers develop and test novel therapies via randomized clinical trials, which, when designed efficiently, can provide evidence for efficacy and harm. Over the last two decades, electronic medical record systems have become increasingly prevalent in hospitals and other care settings. The growth of these and other data sources, combined with the aforementioned advances in the field of operations research, enables new modes of study and analysis in healthcare. In this thesis, we take a data-driven approach to decision-making in healthcare through the lenses of optimization, statistics, and machine learning. In Parts I and II of the thesis, we apply mixed-integer optimization to enhance the design and analysis of clinical trials, a critical step in the approval process for innovative medical treatments. In Part I, we present a robust mixed-integer optimization algorithm for allocating subjects to treatment groups in sequential experiments. By improving covariate balance across treatment groups, the proposed method yields statistical power at least as high as, and sometimes significantly higher than, state-of-the-art covariate-adaptive randomization approaches. In Part II, we present a mixed-integer optimization approach for identifying exceptional responders in randomized trial data. In practice, this approach can be used to extract added value from costly clinical trials that may have failed to identify a positive treatment effect for the general study population, but could be beneficial to a subgroup of the population. In Part III, we present a personalized approach to diabetes management using electronic medical records. The approach is based on a k-nearest neighbors algorithm. By harnessing the power of optimization and machine learning, we can improve patient outcomes and move from the one-size-fits-all approach that dominates the medical landscape today to a personalized, patient-centered approach. / by Alexander Michael Weinstein. / Ph. D.
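The thesis's allocation method is robust and sequential; the toy sketch below shows only the underlying idea of assigning subjects to two arms so that covariate sums stay balanced, posed as a small mixed-integer program. The covariate matrix, problem size, and solver are assumptions for illustration, not taken from the thesis.

```python
# Hypothetical sketch: static covariate-balancing assignment as a MIP.
import numpy as np
import pulp

rng = np.random.default_rng(1)
n, p = 20, 3                      # subjects, covariates (invented sizes)
A = rng.normal(size=(n, p))       # stand-in covariate matrix

prob = pulp.LpProblem("covariate_balance", pulp.LpMinimize)
x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)]   # 1 = treatment arm
t = [pulp.LpVariable(f"t_{j}", lowBound=0) for j in range(p)]     # per-covariate imbalance

prob += pulp.lpSum(t)                                             # total imbalance (objective)
prob += pulp.lpSum(x) == n // 2                                   # equal arm sizes
for j in range(p):
    diff = pulp.lpSum(float(A[i, j]) * x[i] for i in range(n)) - \
           pulp.lpSum(float(A[i, j]) * (1 - x[i]) for i in range(n))
    prob += t[j] >= diff                                           # linearize |diff|
    prob += t[j] >= -diff

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("total covariate imbalance:", pulp.value(prob.objective))
```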
353

The case for coordination : equity, efficiency and passenger impacts in air traffic flow management

Fearing, Douglas (Douglas Stephen) January 2010 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 121-123). / In this thesis, we develop multi-resource integer optimization formulations for coordinating Traffic Flow Management (TFM) programs with equity considerations. Our multi-resource approaches ignore aircraft connectivity between flights, but allow a single flight to utilize multiple capacity-controlled resources. For example, when both Ground Delay Programs (GDPs) and Airspace Flow Programs (AFPs) are simultaneously in effect, a single flight may be impacted by a GDP and one or more AFPs. We show that due to the similarity with current practice, our models can be applied directly in the current Collaborative Decision-Making (CDM) environment. In the first part of the thesis, we develop these formulations as extensions of a well-studied, existing nationwide TFM formulation and compare them to approaches utilized in practice. In order to make these comparisons, we first develop a metric, Time-Order Deviation, for evaluating schedule fairness in the multi-resource setting. We use this metric to compare approaches in terms of both schedule fairness and allocated flight delays. Using historical scenarios derived from 2007 data, we show that, even with limited interaction between TFM programs, our Ration-by-Schedule Exponential Penalty model can improve the utilization of air transportation system resources. Skipping ahead, in the last part of the thesis, we develop a three-stage sequential evaluation procedure in order to analyze the TFM allocation process in the context of a dynamic CDM environment. To perform this evaluation we develop an optimization-based airline disruption response model, which utilizes passenger itinerary data to approximate the underlying airline objective, resulting in estimated flight cancellations and aircraft swaps between flight legs. Using this three-stage sequential evaluation procedure, we show that the benefits of an optimization-based allocation are likely overstated based on a simple flight-level analysis. The difference between these results and those in the first part of the thesis suggests the importance of the multi-stage evaluation procedure. Our results also suggest that there may be significant benefits to incorporating aircraft flow balance considerations into the Federal Aviation Administration's (FAA's) TFM allocation procedures. The passenger itinerary data required for the airline disruption response model in the last part of the thesis are not publicly available, thus in the second part of the thesis, we develop a method for modeling passenger travel and delays. In our approach for estimating historical passenger travel, we develop a discrete choice model trained on one quarter of proprietary booking data to disaggregate publicly available passenger demand. Additionally, we extend a network-based heuristic for calculating passenger delays to estimate historical passenger delays for 2007. To demonstrate the value in this approach, we investigate how passenger delays are affected by various features of the itinerary, such as carrier and time of travel. 
Beyond its applications in this thesis, we believe the estimated passenger itinerary data will have broad applicability, allowing a passenger-centric focus to be incorporated in many facets of air transportation research. To facilitate these endeavors, we have publicly shared our estimated passenger itinerary data for 2007. / by Douglas Fearing. / Ph.D.
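For readers unfamiliar with the baseline the thesis compares against, Ration-by-Schedule (RBS) allocates capacity-reduced arrival slots to flights in their originally scheduled order. The toy sketch below uses invented flight times and slots; the thesis's multi-resource, equity-penalized optimization models generalize this idea and are not reproduced here.

```python
# Toy sketch of Ration-by-Schedule: slots at a capacity-reduced airport are
# handed out in original scheduled order. All times (minutes) are made up.
def ration_by_schedule(scheduled_minutes, slot_minutes):
    """Assign each flight, in scheduled order, the earliest unused feasible slot."""
    assignment, used = {}, set()
    for flight, sched in sorted(scheduled_minutes.items(), key=lambda kv: kv[1]):
        slot = next(s for s in slot_minutes if s not in used and s >= sched)
        used.add(slot)
        assignment[flight] = slot
    return assignment

scheduled = {"AA1": 0, "UA2": 5, "DL3": 10, "WN4": 15}
slots = [0, 20, 40, 60]                     # reduced capacity: one arrival per 20 min
alloc = ration_by_schedule(scheduled, slots)
delays = {f: alloc[f] - scheduled[f] for f in scheduled}
print(alloc, delays)
```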
354

Regression under a modern optimization lens

King, Angela, Ph. D. Massachusetts Institute of Technology January 2015 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 131-139). / In the last twenty-five years (1990-2014), algorithmic advances in integer optimization combined with hardware improvements have resulted in an astonishing 200 billion factor speedup in solving mixed integer optimization (MIO) problems. The common mindset of MIO as theoretically elegant but practically irrelevant is no longer justified. In this thesis, we propose a methodology for regression modeling that is based on optimization techniques and centered around MIO. In Part I we propose a method to select a subset of variables to include in a linear regression model using continuous and integer optimization. Despite the natural formulation of subset selection as an optimization problem with an ℓ0-norm constraint, current methods for subset selection do not attempt to use integer optimization to select the best subset. We show that, although this problem is non-convex and NP-hard, it can be practically solved for large-scale problems. We numerically demonstrate that our approach outperforms other sparse learning procedures. In Part II of the thesis, we build on Part I to modify the objective function and include constraints that produce linear regression models with other desirable properties, in addition to sparsity. We develop a unified framework based on MIO which aims to algorithmize the process of building a high-quality linear regression model. This is the only methodology we are aware of for constructing models that impose statistical properties simultaneously rather than sequentially. Finally, we turn our attention to logistic regression modeling. It is the goal of Part III of the thesis to efficiently solve the mixed integer convex optimization problem of logistic regression with cardinality constraints to provable optimality. We develop a tailored algorithm to solve this challenging problem and demonstrate its speed and performance. We then show how this method can be used within the framework of Part II, thereby also creating an algorithmic approach to fitting high-quality logistic regression models. In each part of the thesis, we illustrate the effectiveness of our proposed approach on both real and synthetic datasets. / by Angela King. / Ph. D.
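For reference, the ℓ0-constrained subset-selection problem mentioned here is commonly posed as a mixed-integer quadratic program; a standard big-M style formulation (shown only as an illustration of the general idea, not necessarily the thesis's exact formulation) is

```latex
\begin{aligned}
\min_{\beta \in \mathbb{R}^p,\; z \in \{0,1\}^p} \quad & \tfrac{1}{2}\,\lVert y - X\beta \rVert_2^2 \\
\text{s.t.} \quad & -\mathcal{M} z_i \;\le\; \beta_i \;\le\; \mathcal{M} z_i, \qquad i = 1,\dots,p, \\
& \textstyle\sum_{i=1}^{p} z_i \;\le\; k,
\end{aligned}
```

where z_i = 0 forces β_i = 0, so at most k coefficients are nonzero, and ℳ is any valid bound on the optimal coefficient magnitudes; the cardinality constraint on z is exactly the ℓ0-norm constraint referred to above.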
355

Constructing learning models from data : the dynamic catalog mailing problem

Sun, Peng, 1974- January 2003 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2003. / Includes bibliographical references (p. 105-107). / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / The catalog industry is a large and important industry in the US economy. One of the most important and challenging business decisions in the industry is to decide who should receive catalogs, due to the significant mailing cost and the low response rate. The problem is a dynamic one - when a customer is ready to purchase, s/he may order from a previous catalog if s/he does not have the most recent one. In this sense, customers' purchasing behavior depends not only on the firm's most recent mailing decision, but also on prior mailing decisions. From the firm's perspective, in order to maximize its long-term profit it should make a series of optimal mailing decisions to each customer over time. Contrary to the traditional myopic catalog mailing decision process that is generally implemented in the catalog industry, we propose a model that allows firms to design optimal dynamic mailing policies using their own business data. We constructed the model from a large data set provided by a catalog mailing company. The computational results from the historical data show great potential profit improvement. This application differs from many other applications of (approximate) dynamic programming in that an underlying Markov model is not a priori available, nor can it be derived in a principled manner. Instead, it has to be estimated or "learned" from available data. The thesis furthers the discussion on issues related to constructing learning models from data. More specifically, we discuss the so-called "endogeneity problem" and the effects of inaccuracy in model parameter estimation. The fact that the model parameter estimation depends on data collected according to a specific policy introduces an endogeneity problem. As a result, the derived optimal policy depends on the original policy used to collect the data. / (cont.) In the thesis we discuss a specific endogeneity problem, "attribution error." We also investigate whether online learning can solve this problem. More specifically, we discuss the existence of fixed point policies for potential online learning algorithms. Imprecision in model parameter estimation also creates the potential for bias. We illustrate this problem and offer a method for detecting it. Finally, we report preliminary results from a large-scale field test that tests the effectiveness of the proposed approach in a real business decision setting. / by Peng Sun. / Ph.D.
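As a schematic of the "learn a Markov model from data, then optimize mailing decisions" idea, the following toy value-iteration sketch uses invented customer states, transition probabilities, and profits; the thesis estimates these quantities from company records and works at far larger scale, and also confronts the endogeneity issues discussed above, none of which appear here.

```python
# Toy sketch: mail/no-mail decisions as a small Markov decision process.
# All numbers are invented for illustration only.
import numpy as np

n_states, gamma = 3, 0.95                      # customer segments, discount factor
actions = ["no_mail", "mail"]

# P[a][s, s']: estimated transition probabilities; R[a][s]: expected net profit.
P = {
    "no_mail": np.array([[0.9, 0.1, 0.0], [0.3, 0.6, 0.1], [0.1, 0.3, 0.6]]),
    "mail":    np.array([[0.6, 0.3, 0.1], [0.1, 0.6, 0.3], [0.0, 0.2, 0.8]]),
}
R = {"no_mail": np.array([0.0, 0.5, 1.0]),
     "mail":    np.array([-0.5, 1.0, 3.0])}    # mailing cost netted against expected orders

V = np.zeros(n_states)
for _ in range(500):                           # value iteration to near-convergence
    V = np.max([R[a] + gamma * P[a] @ V for a in actions], axis=0)
policy = [actions[int(np.argmax([R[a][s] + gamma * P[a][s] @ V for a in actions]))]
          for s in range(n_states)]
print("mail decision by customer state:", policy)
```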
356

Large scale queueing systems : asymptotics and insights

Goldberg, David Alan, Ph. D. Massachusetts Institute of Technology January 2011 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 195-203). / Parallel server queues are a family of stochastic models useful in a variety of applications, including service systems and telecommunication networks. A particular application that has received considerable attention in recent years is the analysis of call centers. A feature common to these models is the notion of the 'trade-off' between quality and efficiency. It is known that if the underlying system parameters scale together according to a certain 'square-root scaling law', then this trade-off can be precisely quantified, in which case the queue is said to be in the Halfin-Whitt regime. A common approach to understanding this trade-off involves restricting one's models to have exponentially distributed call lengths, and restricting one's analysis to the steady-state behavior of the system. However, these are considered shortcomings of much work in the area. Although several recent works have moved beyond these assumptions, many open questions remain, especially w.r.t. the interplay between the transient and steady-state properties of the relevant models. These questions are the primary focus of this thesis. In the first part of this thesis, we prove several results about the rate of convergence to steady-state for the M/M/n queue, i.e. the n-server queue with exponentially distributed inter-arrival and processing times, in the Halfin-Whitt regime. We identify the limiting rate of convergence to steady-state, discover an asymptotic phase transition that occurs w.r.t. this rate, and prove explicit bounds on the distance to stationarity. The results of the first part of this thesis represent an important step towards understanding how to incorporate transient effects into the analysis of parallel server queues. In the second part of this thesis, we prove several results regarding the steady-state G/G/n queue, i.e. the n-server queue with generally distributed inter-arrival and processing times, in the Halfin-Whitt regime. We first prove that under minor technical conditions, the steady-state number of jobs waiting in queue scales like the square root of the number of servers. We then establish bounds for the large deviations behavior of this model, partially resolving a conjecture made by Gamarnik and Momcilovic in [43]. We also derive bounds for a related process studied by Reed in [91]. We then derive the first qualitative insights into the steady-state probability that an arriving job must wait for service in the Halfin-Whitt regime, for generally distributed processing times. We partially characterize the behavior of this probability when a certain excess parameter B approaches either 0 or ∞. We conclude by studying the large deviations of the number of idle servers, proving that this random variable has a Gaussian-like tail. We prove our main results by combining tools from the theory of stochastic comparison [99] with the theory of heavy-traffic approximations [113]. We compare the system of interest to a 'modified' queue, in which all servers are kept busy at all times by adding artificial arrivals whenever a server would otherwise go idle, and certain servers can permanently break down. We then analyze the modified system using heavy-traffic approximations. 
The proven bounds hold for all n, have representations as the suprema of certain natural processes, and may prove useful in a variety of settings. The results of the second part of this thesis enhance our understanding of how parallel server queues behave in heavy traffic, when processing times are generally distributed. / by David Alan Goldberg. / Ph.D.
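For context, the "square-root scaling law" referenced above is the standard background result (not a contribution of this thesis): with offered load R = λ/μ, the number of servers is scaled as

```latex
n \;=\; R + \beta \sqrt{R} + o\!\left(\sqrt{R}\right), \qquad \beta > 0 \ \text{fixed}.
```

Under this staffing, the steady-state probability that an arriving customer must wait in the M/M/n queue converges, as n grows, to a limit strictly between 0 and 1, while utilization approaches 1; this simultaneous balancing of service quality and efficiency is what defines the Halfin-Whitt regime studied throughout the thesis.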
357

Competition and loss of efficiency : from electricity markets to pollution control

Kluberg, Lionel J. (Lionel Jonathan) January 2011 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 221-230). / The thesis investigates the costs and benefits of free competition as opposed to central regulation to coordinate the incentives of various participants in a market. The overarching goal of the thesis is to decide whether deregulated competition is beneficial for society, or more precisely, in which context and under what market structure and what conditions deregulation is beneficial. We consider oligopolistic markets in which a few suppliers with significant market power compete to supply differentiated substitute goods. In such markets, competition is modeled through the game theoretic concept of Nash equilibrium. The thesis compares the Nash equilibrium competitive outcome of these markets with the regulated situation in which a central authority coordinates the decision of the market participants to optimize social welfare. The thesis analyzes the impact of deregulation for producers, for consumers and for society as a whole. The thesis begins with a general quantity (Cournot) oligopolistic market where each producer faces independent production constraints. We then study how a company with multiple subsidiaries can reduce its global energy consumption in a decentralized manner while ensuring that the subsidiaries adopt a globally optimal behavior. We finally propose a new model of supply function competition for electricity markets and show how the number of competing generators and the electrical network constraints affect the performance of deregulation. / by Lionel J. Kluberg. / Ph.D.
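A textbook special case (not the thesis's model, which involves production constraints, supply functions, and network effects) illustrates the efficiency loss being measured: with N symmetric Cournot firms, linear inverse demand, and constant marginal cost, welfare at the Nash equilibrium can be compared directly with the centrally coordinated optimum.

```python
# Textbook Cournot illustration: N symmetric firms, inverse demand p(Q) = a - b*Q,
# constant marginal cost c. Compare Nash-equilibrium welfare with the regulated
# (welfare-maximizing) benchmark. All parameter values are arbitrary.
def welfare(Q, a, b, c):
    # consumer surplus + producer profit = integral of (p(x) - c) dx from 0 to Q
    return (a - c) * Q - 0.5 * b * Q**2

def cournot_vs_regulated(N, a=10.0, b=1.0, c=2.0):
    Q_nash = N * (a - c) / ((N + 1) * b)   # symmetric Cournot equilibrium output
    Q_reg = (a - c) / b                    # price = marginal cost (first-best)
    return welfare(Q_nash, a, b, c) / welfare(Q_reg, a, b, c)

for N in (1, 2, 5, 20):
    print(N, "firms -> welfare ratio vs. regulation: %.3f" % cournot_vs_regulated(N))
```

In this special case the ratio works out to N(N+2)/(N+1)^2, which approaches 1 as competition intensifies; the thesis carries out this kind of deregulated-versus-coordinated comparison in richer, constrained market models.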
358

New statistical techniques for designing future generation retirement and insurance solutions

Zhu, Zhe January 2014 (has links)
Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2014. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 103-106). / This thesis presents new statistical techniques for designing future generation retirement and insurance solutions. It addresses two major challenges for retirement and insurance products: asset allocation and policyholder behavior. In the first part of the thesis, we focus on estimating the covariance matrix for multidimensional data, which is used in the application of asset allocation. Classical sample mean and covariance estimates are very sensitive to outliers, and therefore their robust counterparts are considered to overcome the problem. We propose a new robust covariance estimator using the regular vine dependence structure and pairwise robust partial correlation estimators. The resulting robust covariance estimator delivers high performance for identifying outliers under the Barrow Wheel Benchmark for large, high-dimensional datasets. Finally, we demonstrate a financial application of active asset allocation using the proposed robust covariance estimator. In the second part of the thesis, we expand the regular vine robust estimation technique proposed in the first part, and provide a theory and algorithm for selecting the optimal vine structure. Only two special cases of the regular vine structure were discussed in the first part, but there are many other regular vine structures that belong to neither type. In many applications, restricting our selection to just those two special types is not appropriate, and therefore we propose a vine selection theory based on optimizing the entropy function, as well as an approximation heuristic using the maximum spanning tree to find an appropriate vine structure. Finally, we demonstrate the idea with two financial applications. In the third part of the thesis, we focus on policyholder behavior modeling for insurance and retirement products. In particular, we choose the variable annuity product, which has many desirable features for retirement saving purposes, such as stock-linked growth potential and protection against losses in the investment. Policyholder behavior is one of the most important profit or loss factors for the variable annuity product, and insurance companies generally do not have sophisticated models at the current time. We discuss a few new approaches using modern statistical learning techniques to model policyholder withdrawal behavior, and the results are promising. / by Zhe Zhu. / Ph. D.
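The vine construction is the heart of the proposal and is not reproduced here; the sketch below shows only the "pairwise robust" ingredient, assembling a correlation matrix entry by entry from a rank-based estimator (Kendall's tau with the sine transform), which resists the injected block of outliers. The data, dimensions, and contamination pattern are invented.

```python
# Illustration of a pairwise, outlier-resistant correlation estimate.
import numpy as np
from scipy.stats import kendalltau

def pairwise_robust_corr(X):
    """Kendall-tau-based correlation, sin(pi/2 * tau), computed pair by pair."""
    p = X.shape[1]
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(X[:, i], X[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi * tau / 2.0)
    return R

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0, 0], [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], size=500)
X[:10] += 20.0                                    # inject a block of gross outliers
print(np.round(pairwise_robust_corr(X), 2))
```

A matrix assembled pairwise this way need not be positive semidefinite; building the estimate from partial correlations organized on a regular vine, as the thesis does, is one structured way around that issue.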
359

An Operations Research approach to aviation security

Martonosi, Susan Elizabeth January 2005 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. / Includes bibliographical references (p. 151-163). / Since the terrorist attacks of September 11, 2001, aviation security policy has remained a focus of national attention. We develop mathematical models to address some prominent problems in aviation security. We explore first whether securing aviation deserves priority over other potential targets. We compare the historical risk of aviation terrorism to that posed by other forms of terrorism and conclude that the focus on aviation might be warranted. Secondly, we address the usefulness of passenger pre-screening systems to select potentially high-risk passengers for additional scrutiny. We model the probability that a terrorist boards an aircraft with weapons, incorporating deterrence effects and potential loopholes. We find that despite the emphasis on the pre-screening system, of greater importance is the effectiveness of the underlying screening process. Moreover, the existence of certain loopholes could occasionally decrease the overall chance of a successful terrorist attack. Next, we discuss whether proposed explosives detection policies for cargo, airmail and checked luggage carried on passenger aircraft are cost-effective. / (cont.) We define a threshold time such that if an attempted attack is likely to occur before this time, it is cost-effective to implement the policy, otherwise not. We find that although these three policies protect against similar types of attacks, their cost-effectiveness varies considerably. Lastly, we explore whether dynamically assigning security screeners at various airport security checkpoints can yield major gains in efficiency. We use approximate dynamic programming methods to determine when security screeners should be switched between checkpoints in an airport to accommodate stochastic queue imbalances. We compare the performance of such dynamic allocations to that of pre-scheduled allocations. We find that unless the stochasticity in the system is significant, dynamically reallocating servers might reduce only marginally the average waiting time. Without knowing certain parameter values or understanding terrorist behavior, it can be difficult to draw concrete conclusions about aviation security policies. / (cont.) Nevertheless, these mathematical models can guide policy-makers in adopting security measures, by helping to identify parameters most crucial to the effectiveness of aviation security policies, and helping to analyze how varying key parameters or assumptions can affect strategic planning. / by Susan Elizabeth Martonosi. / Ph.D.
360

The aircraft sequencing problem with arrivals and departures

Muharremoğlu, Alp, 1975- January 2000 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, Operations Research Center, 2000. / Includes bibliographical references (leaves 57-58). / This thesis investigates the Aircraft Sequencing Problem (ASP) with Arrivals and Departures. The ASP is the problem of sequencing the arriving and departing aircraft on a single runway to minimize certain performance criteria. We focus on minimizing the total weighted delay. Both the theoretical aspects of the problem and some practical issues are discussed. The static version of the problem is basically a scheduling problem with sequence-dependent processing times and ready times, with the objective of minimizing total weighted delay. Exact algorithms for this problem are not fast enough for practical implementation. We give several algorithms that can be used both for the static and the dynamic versions of the problem. These algorithms are not exact, but they are much faster than an exact algorithm and address some very important practical issues related to the ASP. Computational results from these algorithms are given. The computational results demonstrate that the potential benefits of using optimization in the sequencing of arrivals and departures in the Terminal Area are fairly significant. For example, the algorithm HWTW with parameters set to (0,0) reduces delays by 40% compared to FCFS. Certain fairness and safety issues are addressed as well. Acknowledgments: I would like to thank my advisor, Prof. Amedeo R. Odoni, for his support during the past two years. This research was partially supported by the Federal Aviation Administration (FAA) under the project "Advanced Concepts for Collaborative Decision Making (CDM)," award number SA1603JB, and by the Charles Stark Draper Laboratory Inc., under Contract Number DLH-505328. / by Alp Muharremoglu. / S.M.
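To make the objective concrete, the toy sketch below brute-forces the minimum total weighted delay for a handful of runway operations and compares it with first-come-first-served. All times and weights are invented, the separation is simplified to a constant (the thesis uses sequence-dependent separations), and the thesis's heuristics are designed for realistic instances where enumeration is impossible.

```python
# Toy sketch of the total-weighted-delay objective on a single runway.
from itertools import permutations

ready = {"A1": 0, "A2": 1, "D1": 2, "A3": 4}      # earliest feasible time (min), invented
weight = {"A1": 1.0, "A2": 2.0, "D1": 1.5, "A3": 1.0}
sep = 2                                            # constant separation (min), simplified

def weighted_delay(order):
    t, total = 0, 0.0
    for f in order:
        t = max(t, ready[f])                       # cannot serve before ready time
        total += weight[f] * (t - ready[f])
        t += sep
    return total

fcfs = tuple(sorted(ready, key=ready.get))
best = min(permutations(ready), key=weighted_delay)
print("FCFS:", fcfs, weighted_delay(fcfs))
print("best:", best, weighted_delay(best))
```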
