About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
331

Patterns of heart attacks

Shenk, Kimberly N January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 66-68). / Myocardial infarction is a form of heart disease that is a growing concern in the United States today. With heart disease becoming increasingly predominant, it is important not only to take steps toward preventing myocardial infarction, but also toward predicting future myocardial infarctions. If we can predict that the dynamic pattern of an individual's diagnostic history matches a pattern already identified as high-risk for myocardial infarction, more rigorous preventative measures can be taken to alter that individual's trajectory of health so that it leads to a better outcome. In this paper we use classification and clustering data mining methods together to determine whether a patient is at risk for a future myocardial infarction. Specifically, we apply the algorithms to medical claims data from more than 47,000 members over five years to: 1) find different groups of members that have interesting temporal diagnostic patterns leading to myocardial infarction and 2) provide out-of-sample predictions of myocardial infarction for these groups. Using clustering methods in conjunction with classification algorithms yields improved predictions of myocardial infarction over using classification alone. In addition to improved prediction accuracy, we found that the clustering methods also effectively split the members into groups with distinct and meaningful temporal diagnostic patterns leading up to myocardial infarction. The patterns found can serve as a useful reference profile for identifying patients at high risk for myocardial infarction in the future. / by Kimberly N. Shenk. / S.M.
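The two-stage idea in this abstract — first cluster members by their diagnostic history, then fit a classifier within each cluster — can be sketched in a few lines. This is an illustrative toy only: the 2-means clustering, farthest-point initialisation, majority-vote "classifier", and synthetic data below are stand-ins, not the thesis's actual claims-data features or algorithms.

```python
import numpy as np

def cluster_then_classify(X, y, iters=10):
    """Two-stage sketch: a tiny 2-means clustering step, then one
    (trivial) majority-vote classifier fitted per cluster."""
    # Farthest-point initialisation of the two centroids.
    far = int(np.argmax(((X - X[0]) ** 2).sum(axis=1)))
    centroids = np.stack([X[0], X[far]])
    for _ in range(iters):  # a few Lloyd iterations
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d, axis=1)
        for k in range(2):
            if np.any(labels == k):
                centroids[k] = X[labels == k].mean(axis=0)
    # Per-cluster "classifier": the majority outcome in that cluster.
    rules = {k: (int(round(y[labels == k].mean()))
                 if np.any(labels == k) else 0) for k in range(2)}

    def predict(x):
        k = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))
        return rules[k]

    return predict

# Toy data: two well-separated "member" groups with different outcomes.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [10., 10.], [10., 11.], [11., 10.]])
y = np.array([0, 0, 0, 1, 1, 1])
predict = cluster_then_classify(X, y)
```

In a real pipeline each cluster would get its own full classifier (e.g. a regression or tree model) rather than a majority vote, which is what allows the combined approach to beat classification alone.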
332

Human machine collaborative decision making in a complex optimization system

Malasky, Jeremy S January 2005 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2005. / Includes bibliographical references (p. 149-151). / Numerous complex real-world applications are either theoretically intractable or unable to be solved in a practical amount of time. Researchers and practitioners are forced to implement heuristics in solving such problems that can lead to highly sub-optimal solutions. Our research focuses on inserting a human "in-the-loop" of the decision-making or problem-solving process in order to generate solutions in a timely manner that improve upon those generated either solely by a human or solely by a computer. We refer to this as Human-Machine Collaborative Decision-Making (HMCDM). The typical design process for developing human-machine approaches either starts with a human approach and augments it with decision support or starts with an automated approach and augments it with operator input. We provide an alternative design process by presenting an HMCDM methodology that addresses collaboration from the outset of the design of the decision-making approach. We apply this design process to a complex military resource allocation and planning problem which selects, sequences, and schedules teams of unmanned aerial vehicles (UAVs) to perform sensing (Intelligence, Surveillance, and Reconnaissance - ISR) and strike activities against enemy targets. Specifically, we examined varying degrees of human-machine collaboration in the creation of variables in the solution of this problem. We also introduce an HMCDM method that combines traditional goal decomposition with a model formulation into an Iterative Composite Variable Approach for solving large-scale optimization problems. / (cont.) Finally, we show through experimentation the potential for improvement in the quality and speed of solutions that can be achieved through the use of an HMCDM approach. / by Jeremy S. Malasky. / S.M.
333

Développement d'un magnétomètre à balayage à température cryogénique basé sur la résonance du spin électronique du centre coloré NV du diamant / Development of scanning magnetic field microscope at cryogenic temperatures based on the electron spin resonance of the NV center

De guillebon, Timothée 11 December 2018 (has links)
The development of magnetic imaging systems at the nanoscale has been central to many scientific advances, particularly in nanomagnetism. Some of the field's current challenges concern materials that exhibit magnetic properties only at low temperature. Imaging under these conditions adds constraints to existing instruments and raises the experimental stakes. Recent years have seen the development of scanning magnetic field microscopes based on the electron spin resonance of the NV center, a colored center in diamond. These devices, which combine high magnetic field sensitivity with excellent spatial resolution, have delivered numerous results to the room-temperature nanomagnetism community. The technique relies on measuring the Zeeman shift between two electron spin sublevels of a single NV center, and it can be adapted to cryogenic temperature, holding great promise for sensitive, high-resolution magnetic imaging. This thesis describes the realization of such a microscope at cryogenic temperature. In the first chapter, we present the context of this work, in particular the study of magnetic domain walls, a concept widely studied in recent years with a view to applications in magnetic memories. After discussing a few magnetic imaging techniques at cryogenic temperature, we present the use of the NV center as a magnetic sensor. Chapter 2 is devoted to the experimental development at cryogenic temperature and to the imaging techniques accessible with this setup. The cryogenic NV center magnetometer is then used to study (Ga,Mn)(As,P), a ferromagnetic semiconductor of interest for the development of high-performance memory architectures.
We present the first domain wall images together with a study of the homogeneity of the saturation magnetization in this material. The last part of the manuscript is devoted to the processes at play in the relaxation of the NV center in nanodiamonds, with a view to using it as a sensor of fluctuating magnetic fields.
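For reference, the ESR measurement underlying this kind of magnetometry rests on the standard NV ground-state spin physics: to first order, the two spin resonance frequencies shift linearly with the magnetic field component along the NV axis. The relation below is textbook NV-center physics, not a result specific to this thesis:

```latex
\nu_{\pm} \;=\; D \pm \frac{g_e \mu_B}{h}\, B_{\parallel}
        \;\approx\; 2.87\,\mathrm{GHz} \;\pm\; \left(28\,\mathrm{GHz/T}\right) B_{\parallel}
```

where $D$ is the zero-field splitting and $g_e \mu_B / h \approx 28\,\mathrm{GHz/T}$ is the electron gyromagnetic ratio; measuring the splitting $\nu_{+} - \nu_{-}$ therefore yields $B_{\parallel}$ directly.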
334

Midwifery centers as enabled midwifery: women's experiences of care with a human rights-based approach, before and during the pandemic

Stevens, Jennifer Rebecca 25 January 2022 (has links)
BACKGROUND: A human rights-based approach (HRBA) to maternal health care is generally recognized as key to improving quality and acceptability of care. Yet examples of a HRBA in practice are limited. Crises exacerbate underlying challenges in current approaches to maternal child healthcare (MCH) and provide an ideal, if unfortunate, opportunity to assess alternatives. The midwifery model of care is a HRBA based on the relationship between the midwife and woman. It is appropriate for the majority of healthy pregnant women, and has been found to provide safe, cost effective, evidence based, and satisfying care. Yet midwives working in the medical model may struggle to fully express midwifery. A quasi-experimental design was used to assess the impact of three models of care on women’s experiences of respectful care and trust, and their fear and knowledge around COVID-19, before and during the COVID-19 pandemic. The models were: the fully enabled midwifery (“FEM”) model in a midwifery center, the midwifery and medicine (“MAM”) model in facilities with midwives working alongside medical practitioners, and the no midwifery (“NoM”) model in facilities without midwives. METHODS: Phone survey data were collected and analyzed from all women (n=1191) who delivered from Jan 2020-June 2020 at 7 health care facilities in Bangladesh. Descriptive statistics and ANOVA, post hoc Tukey and effect size analyses were used to explore the relationships between the models, outcomes and time periods. Linear regression was used to explore relationships between outcomes, models and covariates. RESULTS: The experiences of respectful care and trust were significantly higher (p<0.01) and the experience of COVID fear/stigma was significantly lower (p<0.01) for women who gave birth in the FEM model, compared to the other models, in both the pre-pandemic and pandemic periods, with the exception of respectful care compared to the MAM model in the pre-pandemic period.
CONCLUSION: Midwives working in the fully enabled environment of midwifery centers provided care that women experienced more positively. As midwives are deployed in many countries to prevent maternal mortality, the importance of an enabling environment should not be overlooked. Midwifery centers are an example of an HRBA that should be considered wherever midwives work, and an important response during a crisis. / 2023-01-24T00:00:00Z
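The one-way ANOVA cited in the methods reduces to comparing between-group and within-group variance. A minimal hand-rolled sketch of the F statistic (the numbers below are invented for illustration, not the study's data):

```python
import numpy as np

def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    k, n = len(groups), len(all_x)
    # Sum of squares between groups (each group weighted by its size).
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    # Sum of squared deviations within each group.
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two toy "model of care" groups with clearly different means.
F = one_way_anova_F([np.array([1., 2., 3.]), np.array([4., 5., 6.])])
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom is what licenses the p<0.01 claims; a post hoc Tukey test then identifies which pairs of models differ.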
335

Adaptive robust optimization with applications in inventory and revenue management

Iancu, Dan Andrei January 2010 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 201-213). / In this thesis, we examine a recent paradigm for solving dynamic optimization problems under uncertainty, whereby one considers decisions that depend directly on the sequence of observed disturbances. The resulting policies, called recourse decision rules, originated in Stochastic Programming, and have been widely adopted in recent works in Robust Control and Robust Optimization; the specific subclass of affine policies has been found to be tractable and to deliver excellent empirical performance in several relevant models and applications. In the first chapter of the thesis, using ideas from polyhedral geometry, we prove that disturbance-affine policies are optimal in the context of a one-dimensional, constrained dynamical system. Our approach leads to policies that can be computed by solving a single linear program, and which bear an interesting decomposition property, which we explore in connection with a classical inventory management problem. The result also underscores a fundamental distinction between robust and stochastic models for dynamic optimization, with the former resulting in qualitatively simpler problems than the latter. In the second chapter, we introduce a hierarchy of polynomial policies that are also directly parameterized in the observed uncertainties, and that can be efficiently computed using semidefinite optimization methods. The hierarchy is asymptotically optimal and guaranteed to improve over affine policies for a large class of relevant problems. 
To test our framework, we consider two problem instances arising in inventory management, for which we find that quadratic policies considerably improve over affine ones, while cubic policies essentially close the optimality gap. In the final chapter, we examine the problem of dynamically pricing inventories of multiple items in order to maximize revenues. For a linear demand function, we propose a distributionally robust uncertainty model, argue how it can be constructed from limited historical data, and show how pricing policies depending on the observed model mis-specifications can be computed by solving second-order conic or semidefinite optimization problems. We calibrate and test our model using both synthetic data and real data from a large US retailer. Extensive Monte-Carlo simulations show that adaptive robust policies considerably improve over open-loop formulations, and are competitive with popular heuristics in the literature. / by Dan Andrei Iancu. / Ph.D.
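The disturbance-affine policies discussed in this abstract take a simple parametric form. In the notation commonly used for this class of decision rules (our notation, not necessarily the thesis's):

```latex
x_t(\xi) \;=\; x_t^{0} \;+\; \sum_{s=1}^{t-1} X_{t,s}\, \xi_s
```

where $\xi_1,\dots,\xi_{t-1}$ are the disturbances observed before stage $t$, and the intercepts $x_t^{0}$ and coefficients $X_{t,s}$ are the decision variables. Substituting this form into the robust constraints and maximizing over the uncertainty set yields a single tractable program (a linear program for polyhedral uncertainty); the polynomial hierarchy mentioned above replaces the linear map with higher-degree terms in $\xi$.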
336

Provably near-optimal algorithms for multi-stage stochastic optimization models in operations management

Shi, Cong January 2012 (has links)
Thesis (Ph. D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2012. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 157-165). / Many, if not most, of the core problems studied in operations management fall into the category of multi-stage stochastic optimization models, whereby one considers multiple, often correlated decisions to optimize a particular objective function under uncertainty about the system's evolution over the future horizon. Unfortunately, computing the optimal policies is usually computationally intractable due to the curse of dimensionality. This thesis is focused on providing provably near-optimal and tractable policies for some of these challenging models arising in the context of inventory control, capacity planning and revenue management; specifically, on the design of approximation algorithms that admit worst-case performance guarantees. In the first chapter, we develop new algorithmic approaches to compute provably near-optimal policies for multi-period stochastic lot-sizing inventory models with positive lead times, general demand distributions and dynamic forecast updates. The proposed policies have worst-case performance guarantees of 3 and typically perform very close to optimal in extensive computational experiments. We also describe a 6-approximation algorithm for the counterpart model under uniform capacity constraints. In the second chapter, we study a class of revenue management problems in systems with reusable resources and advanced reservations. A simple control policy called the class selection policy (CSP) is proposed based on solving a knapsack-type linear program (LP). We show that the CSP and its variants perform provably near-optimally in the Halfin-Whitt regime. The analysis is based on modeling the problem as loss network systems with advanced reservations. In particular, asymptotic upper bounds on the blocking probabilities are derived.
In the third chapter, we examine the problem of capacity planning in joint ventures to meet stochastic demand in a newsvendor-type setting. When resources are heterogeneous, there exists a unique revenue-sharing contract such that the corresponding Nash Bargaining Solution, the Strong Nash Equilibrium, and the system-optimal solution coincide. The optimal scheme rewards every participant proportionally to her marginal cost. When resources are homogeneous, there does not exist a revenue-sharing scheme which induces the system optimum. Nonetheless, we propose provably good revenue-sharing contracts, which suggest that the reward should be inversely proportional to the marginal cost of each participant. / by Cong Shi. / Ph.D.
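The knapsack-type LP behind a class selection policy has a familiar greedy intuition: admit fare classes in decreasing order of revenue per unit of capacity they consume. A toy sketch of that intuition (the class data are invented for illustration; the thesis's actual CSP solves an LP over a loss-network model, which this does not reproduce):

```python
def select_classes(classes, capacity):
    """Greedy knapsack-style sketch: admit whole classes in decreasing
    revenue-per-unit-load order while capacity remains."""
    order = sorted(classes, key=lambda c: c["revenue"] / c["load"], reverse=True)
    admitted, used = [], 0.0
    for c in order:
        if used + c["load"] <= capacity:  # admit the class if it fits
            admitted.append(c["name"])
            used += c["load"]
    return admitted

# Hypothetical fare classes: revenue earned vs. capacity consumed.
classes = [
    {"name": "A", "revenue": 10, "load": 2},
    {"name": "B", "revenue": 3, "load": 1},
    {"name": "C", "revenue": 4, "load": 4},
]
chosen = select_classes(classes, capacity=3)
```

The LP relaxation of this selection is what admits the asymptotic optimality analysis in the Halfin-Whitt regime.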
337

Probabilistic models and optimization algorithms for large-scale transportation problems

Lu, Jing January 2020 (has links)
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Thesis: Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2020 / Cataloged from student-submitted PDF of thesis. / Includes bibliographical references (pages 179-186). / This thesis tackles two major challenges of urban transportation optimization problems: (i) high dimensionality and (ii) uncertainty in both demand and supply. These challenges are addressed from both modeling and algorithm design perspectives. The first part of this thesis focuses on the formulation of analytical transient stochastic link transmission models (LTM) that are computationally tractable and suitable for large-scale network analysis and optimization. We first formulate a stochastic LTM based on the model of Osorio and Flötteröd (2015). We propose a formulation with enhanced scalability. In particular, the dimension of the state space is linear, rather than cubic, in the link's space capacity. We then propose a second formulation that has a state space of dimension two; it scales independently of the link's space capacity. Both link models are validated against benchmark models, both analytical and simulation-based. The proposed models are used to address a probabilistic formulation of a city-wide signal control problem and are benchmarked against other existing network models. Compared to the benchmarks, both models derive signal plans that perform systematically better across various performance metrics. The second model, compared to the first, reduces the computational runtime by at least two orders of magnitude. The second part of this thesis proposes a technique to enhance the computational efficiency of simulation-based optimization (SO) algorithms for high-dimensional discrete SO problems. The technique is based on an adaptive partitioning strategy.
It is embedded within the Empirical Stochastic Branch-and-Bound (ESB&B) algorithm of Xu and Nelson (2013). This combination leads to a discrete SO algorithm that is both globally convergent and has good small-sample performance. The proposed algorithm is validated and used to address a high-dimensional car-sharing optimization problem. / by Jing Lu. / Ph. D., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
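The adaptive partitioning idea — sample regions of the discrete decision space, then refine the most promising region and repeat — can be caricatured in one dimension. This sketch is our own illustration of the partition-and-refine loop under a noiseless objective, not the ESB&B algorithm itself:

```python
import random

def adaptive_partition_search(f, low, high, budget=60, seed=0):
    """Toy adaptive partitioning for 1-D discrete minimisation: sample
    one point per region, keep the most promising region, split it,
    and repeat until the sampling budget is spent."""
    random.seed(seed)
    regions = [(low, high)]
    best_x, best_val = None, float("inf")
    while budget > 0:
        scored = []
        for lo, hi in regions:
            x = random.randint(lo, hi)  # one sample per region
            v = f(x)
            budget -= 1
            if v < best_val:
                best_x, best_val = x, v
            scored.append((v, lo, hi))
        # Refine the region whose sample scored best.
        _, lo, hi = min(scored)
        mid = (lo + hi) // 2
        regions = [(lo, mid), (mid + 1, hi)] if lo < hi else [(lo, hi)]
    return best_x, best_val

best_x, best_val = adaptive_partition_search(lambda x: (x - 17) ** 2, 0, 63)
```

The real ESB&B setting replaces the single noiseless evaluation per region with repeated stochastic simulation replications and a branching rule with convergence guarantees.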
338

Cost Optimization of Modular Data Centers

Nayak, Suchitra January 2018 (has links)
During the past two decades, the increasing demand for digital telecommunications, data storage and data processing, coupled with simultaneous advances in computer and electronic technology, has resulted in a dramatic growth rate in the data center (DC) industry. It has been estimated that almost 2% of total US energy consumption and 1.5% of the world's total power consumption is by DCs. With fossil fuels and the earth's natural energy sources depleting every day, greater efforts have to be made to save energy and improve efficiencies. As yet, most DCs are highly inefficient in energy usage. A significant part of this inherent inefficiency comes from poor design and rudimentary operation of current DCs. Thus, there is an urgent need to optimize the power consumption of DCs. This has led to the advent of modular DCs, newer scalable DC architectures that reduce cost and increase efficiency by eliminating overdesign and allowing for scalable growth. This concept has been particularly appealing to small businesses that find it difficult to commit to setting up a traditional DC with a huge upfront capital investment. However, their adoption and implementation are still limited by the lack of a systematic approach for quickly identifying a modular DC design. Given the many different choices for subcomponents, such as cooling systems, enclosures and power systems, this is a non-trivial exercise, especially considering the complex multiphysics interactions among components that drive system efficiency. Little research is available on designing such DCs. Therefore, most of the time, engineers and designers rely on experience to avoid lengthy, elaborate engineering analysis, particularly during the conception stages of a DC deployment project. Here, we develop a design tool that not only optimizes the design of modular DCs but also makes the design process much faster than manual design by engineers.
One of the major problems in designing modular DCs is finding the optimum placement of the cooling units to keep temperatures within ASHRAE guidelines (recommended safe temperature thresholds). In addition to finding the optimum selection and placement of the cooling units and their auxiliary components, the tool also gives an optimum design for the power connection to the cooling units and IT racks, with redundancy. A bill of materials and key performance indicators (KPIs) for those designs are also generated by the tool. Overall, this tool in the hands of bidders or sales representatives can significantly increase their chance of winning a project. / Thesis / Master of Applied Science (MASc)
339

Methods for Naval Ship Concept Exploration Interfacing Model Center and ASSET with Machinery System Tools

Strock, Justin William 24 June 2008 (has links)
In response to the Fiscal Year 2006 National Defense Authorization Act, the US Navy conducted an evaluation of alternative propulsion methods for surface combatants and amphibious warfare ships. The study looked at current and future propulsion technology and propulsion alternatives for three sizes of warships. In its analysis the Navy developed 23 ship concepts, only 7 of which were variants of medium-size surface combatants (MSC, 21,000-26,000 MT). The report to Congress was based on a cost analysis and operational effectiveness analysis of these variants. The conclusions drawn were based only on the ship variants developed, not on a representative sample of the feasible, non-dominated designs in the design space. This thesis revisits the Alternative Propulsion Study results for a MSC, which were constrained by the inability of the Navy's design tools to adequately search the full design space. This thesis also assesses automated methods to improve the APS approach, and examines a range of power generation alternatives using realistic operational profiles and requirements to develop a notional medium surface combatant (CGXBMD). It is essential to base conclusions on the non-dominated design space, and this new approach uses a multi-objective optimization to find non-dominated designs in the specified design space and new visualization tools to assess the characteristics of these designs. This automated approach and the new tools are evaluated in the context of the revisited study. / Master of Science
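The non-dominated (Pareto) filtering at the heart of this argument is easy to state. A minimal sketch, assuming all objectives are to be minimised (e.g. cost and a negated effectiveness score); the points are invented, not APS design data:

```python
def non_dominated(points):
    """Return the non-dominated subset of objective vectors: a point is
    dominated if some other point is <= in every objective and strictly
    < in at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (cost, -effectiveness) pairs for five ship variants.
designs = [(1, 5), (2, 4), (3, 3), (2, 6), (4, 4)]
frontier = non_dominated(designs)
```

Drawing conclusions from hand-picked variants risks comparing dominated designs; a multi-objective optimizer instead searches for this frontier directly.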
340

Aesthetic Movement Ideals in Contemporary Architecture: The President Garfield Historic Site Visitors Center

Redenshek, Julie 24 July 2006 (has links)
The James A. Garfield National Historic Site in Mentor, Ohio includes numerous structures of mid-19th-century Victorian Era architecture. After the grounds became a national landmark in 1945, all new additions conformed to the existing historic style. This thesis proposes that the existing visitors center be relocated from the carriage house to a new structure on site. The new visitor center is sensitive to the existing architecture, yet visually different from it. This architectural position contradicts the additions of the past 50 years. Therefore, to draw a parallel and allude to the past, the contemporary visitor center embodies the same philosophical ideals as the Victorian reform Aesthetic Movement. Three of those ideals present in the visitor center are horizontality, dynamic space and honesty of structure. For the Aesthetes, horizontality was an influence from Japanese design, while the creation of dynamic space was meant to elicit an emotional response. Honesty of structure meant that a building should possess a clear and evident expression of its structural system and materials; in other words, using materials for their own sake. Even though over one hundred years have passed since the beginning of the Aesthetic Movement, this thesis is an exploration and continuation of those main ideals in contemporary architecture. / Master of Architecture
