31

Cheeger sets for unit cube : analytical and numerical solutions for L∞ and L² norms

Hussain, Mohammad Tariq January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / In title on t.p., "L" appears as italic letters and "[infinity]" appears as the symbol. / Includes bibliographical references (leaves 47-48). / The Cheeger constant h(Q) of a domain Q is defined as the minimum value of ...... with D varying over all smooth sub-domains of Q. The D that achieves this minimum is called the Cheeger set of Q. We present some analytical and numerical work on the Cheeger set for the unit cube ... using the ... and the ... norms for measuring ||D||. We look at the equivalent max-flow min-cut problem for continuum flows, and use it to obtain numerical results for the problem. We then use these results to suggest analytical solutions to the problem and optimize these shapes using calculus and numerical methods. Finally, we make some observations about the general shapes we obtain, and how they can be derived using an algorithm similar to the one for finding Cheeger sets for domains in ... / by Mohammad Tariq Hussain. / S.M.
32

Optimization for recipe-based, diet-planning inventory management

Kang, Sheng January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 40-41). / This thesis presents a new modeling framework and research methodology for the study of recipe-based, diet-planning inventory management. The thesis begins with an exploration of the classic optimization problem - the diet problem - based upon mixed-integer linear programming. Real diet planning, however, is more sophisticated: one plans recipes rather than the raw materials for individual meals. The thesis therefore develops a modeling framework in which, given the recipes and the different purchasing options for the raw materials they list, we examine the nutrition facts and compute the purchasing decisions that achieve the minimum yearly cost of food consumption. The thesis further discusses scenarios for different groups of raw materials that differ in shelf life. The modeling implementation includes a preprocessing part and an optimization part: the former converts the user's customized selection into quantitative relations with the stored recipes and measures the nutrition factors; the latter solves the cost optimization problem. / by Sheng Kang. / S.M.
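The classic diet problem this abstract starts from can be sketched as a tiny linear program (the mixed-integer version adds integrality constraints on purchases). All foods, costs, and nutrient figures below are hypothetical, not from the thesis:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical mini diet problem: choose quantities of two foods to meet
# nutrient minimums at minimum cost.
costs = np.array([2.0, 3.5])             # $ per unit of food A, food B
nutrition = np.array([[10.0, 20.0],      # protein per unit
                      [300.0, 250.0]])   # calories per unit
minimums = np.array([50.0, 2000.0])      # required protein, calories

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so negate to
# express the ">= minimum nutrition" constraints.
res = linprog(c=costs, A_ub=-nutrition, b_ub=-minimums,
              bounds=[(0, None)] * 2)
print(res.x, res.fun)   # optimal purchase quantities and total cost
```

Swapping `linprog` for a mixed-integer solver and replacing raw foods with recipe-derived ingredient requirements gives the flavor of the thesis's framework.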
33

Optimization of ship-pack in a two-echelon distribution system

Wen, Naijun January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 61-62). / The traditional Economic Order Quantity (EOQ) model ignores the physical limitations of distribution practices. Very often distribution centers (DC) have to deliver merchandise in manufacturer-specified packages, which can impose restrictions on the application of the economic order quantity. These manufacturer-specified packages, or ship-packs, include cases (e.g., cartons containing 24 or 48 units), inners (packages of 6 or 8 units) and eaches (individual units). For each Stock Keeping Unit (SKU), a retailer decides which of these ship-pack options to use when replenishing its retail stores. Working with a major US retailer, we have developed a cost model to help determine the optimum warehouse ship-pack. Besides recommending the most economical ship-pack, the model is also capable of identifying candidates for warehouse dual-slotting, i.e., two picking modules for the same SKU that carry two different pack sizes. We find that SKUs whose sales volumes vary greatly over time will benefit more from dual-slotting. Finally, we extend our model to investigate the ideal case configuration for a particular SKU, that is, the ideal size for an inner package. / by Naijun Wen. / S.M.
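The tension the abstract describes between the unconstrained EOQ and manufacturer-specified pack sizes can be illustrated with a minimal sketch: compute the EOQ, then evaluate the standard annual ordering-plus-holding cost at the nearest feasible multiples of each candidate ship-pack size (case, inner, each). The demand and cost figures are hypothetical, and the thesis's cost model includes factors this sketch omits:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Unconstrained Economic Order Quantity: sqrt(2*D*K / h)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def annual_cost(q, demand, order_cost, holding_cost):
    """Annual ordering cost plus average holding cost for order quantity q."""
    return demand * order_cost / q + holding_cost * q / 2

def best_ship_pack(demand, order_cost, holding_cost, packs=(24, 8, 1)):
    """Round the EOQ down and up to a multiple of each candidate pack size
    and return the (cost, quantity, pack) triple with the lowest cost."""
    q_star = eoq(demand, order_cost, holding_cost)
    candidates = []
    for p in packs:
        lo = p * max(1, math.floor(q_star / p))
        hi = p * math.ceil(q_star / p)
        for q in {lo, hi}:
            candidates.append((annual_cost(q, demand, order_cost, holding_cost), q, p))
    return min(candidates)

cost, qty, pack = best_ship_pack(1000, 50.0, 2.0)
```

The coarser the pack, the further the feasible quantity can sit from the EOQ, which is the cost penalty the thesis's model quantifies per SKU.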
34

Development of discontinuous Galerkin method for nonlocal linear elasticity

Bala Chandran, Ram January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 75-81). / A number of constitutive theories have arisen describing materials which, by nature, exhibit a non-local response. The formulation of boundary value problems, in this case, leads to a system of equations involving higher-order derivatives which, in turn, results in requirements of continuity of the solution of higher order. Discontinuous Galerkin methods are particularly attractive toward this end, as they provide a means to naturally enforce higher interelement continuity in a weak manner without the need of modifying the finite element interpolation. In this work, a discontinuous Galerkin formulation for boundary value problems in small strain, non-local linear elasticity is proposed. The underlying theory corresponds to the phenomenological strain-gradient theory developed by Fleck and Hutchinson within the Toupin-Mindlin framework. The single-field displacement method obtained enables the discretization of the boundary value problem with a conventional continuous interpolation inside each finite element, whereas the higher-order interelement continuity is enforced in a weak manner. The proposed method is shown to be consistent and stable both theoretically and with suitable numerical examples. / by Ram Bala Chandran. / S.M.
35

Return on investment and library complexity analysis for DNA sequencing

Hogstrom, Larson J January 2016 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (page 49). / Understanding the profiles of information acquisition during DNA sequencing experiments is critical to the design and implementation of large-scale studies in medical and population genetics. One known technical challenge and cost driver in next-generation sequencing data is the occurrence of non-independent observations that are created from sequencing artifacts and duplication events from polymerase chain reaction (PCR). The current study demonstrates improved return on investment (ROI) modeling strategies to better anticipate the impact of non-independent observations in multiple forms of next-generation sequencing data. Here, a physical modeling approach based on a Pólya urn was evaluated using both multi-point estimation and duplicate set occupancy vectors. The results of this study can be used to reduce sequencing costs by improving aspects of experimental design including sample pooling strategies, top-up events, and termination of non-informative samples. / by Larson J. Hogstrom. / S.M.
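The urn idea behind duplicate modeling can be shown with a toy simulation: each draw selects a molecule with probability proportional to its current copy count and then adds a copy, so early-amplified molecules attract further draws, mimicking PCR bias. This is an illustrative sketch, not the thesis's estimator, and the molecule counts are arbitrary:

```python
import random

def polya_urn_duplicates(n_molecules, n_draws, seed=0):
    """Polya-urn style sampling: each draw picks a molecule with probability
    proportional to its current copy count, then adds another copy.
    Returns the fraction of draws that hit an already-seen molecule."""
    rng = random.Random(seed)
    counts = [1] * n_molecules
    total = n_molecules
    seen = set()
    dup = 0
    for _ in range(n_draws):
        r = rng.randrange(total)
        # walk the cumulative counts to find which molecule was drawn
        i = 0
        while r >= counts[i]:
            r -= counts[i]
            i += 1
        if i in seen:
            dup += 1
        seen.add(i)
        counts[i] += 1
        total += 1
    return dup / n_draws
```

Plotting this duplicate fraction against the number of draws gives a diminishing-returns curve of the kind an ROI model uses to decide when further sequencing stops being informative.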
36

Methods for design optimization using high fidelity turbulent flow simulations

Talnikar, Chaitanya Anil January 2015 (has links)
Thesis: S.M., Massachusetts Institute of Technology, School of Engineering, Center for Computational Engineering, Computation for Design and Optimization Program, 2015. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 75-79). / Design optimization with high-fidelity turbulent flow simulations can be challenging due to noisy and expensive objective function evaluations. The noise decays slowly as computational cost increases and is therefore significant in most simulations. It is often unpredictable owing to the chaotic dynamics of turbulence: it can differ entirely between almost identical simulations. This thesis presents a modified parallel Bayesian optimization algorithm designed for performing optimization with high-fidelity simulations. It strives to find the optimum in a minimum number of evaluations by judiciously exploring the design space. Additionally, to potentially augment the optimization algorithm with the availability of a gradient, a massively parallel discrete unsteady adjoint solver for the compressible Navier-Stokes equations is derived and implemented. Both methods are demonstrated on a large scale transonic fluid flow problem in a turbomachinery component. / by Chaitanya Anil Talnikar. / S.M.
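Bayesian optimization algorithms of the kind described typically rank candidate designs with an acquisition function; a common choice is expected improvement, sketched here for minimization under a Gaussian surrogate posterior. The thesis's modified parallel algorithm is more involved; this only illustrates the standard building block:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """Expected improvement (for minimization) at a point where the
    surrogate posterior is N(mu, sigma^2) and f_best is the best
    objective value observed so far."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    return (f_best - mu) * cdf + sigma * pdf
```

The sigma term rewards uncertain regions, which is how the search "judiciously explores" the design space instead of only refining the current best.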
37

Particle filtering with Lagrangian data in a point vortex model

Mitra, Subhadeep January 2012 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2012. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 131-138). / Particle filtering is a technique used for state estimation from noisy measurements. In fluid dynamics, a popular problem called Lagrangian data assimilation (LaDA) uses Lagrangian measurements in the form of tracer positions to learn about the changing flow field. Particle filtering can be applied to LaDA to track the flow field over a period of time. As opposed to techniques like the Extended Kalman Filter (EKF) and Ensemble Kalman Filter (EnKF), particle filtering does not rely on linearization of the forward model and can provide very accurate estimates of the state, as it represents the true Bayesian posterior distribution using a large number of weighted particles. In this work, we study the performance of various particle filters for LaDA using a two-dimensional point vortex model; this is a simplified fluid dynamics model wherein the positions of vortex singularities (point vortices) define the state. We consider various parameters associated with the algorithm and examine their effect on filtering performance under several vortex configurations. Further, we study the effect of different tracer release positions on filtering performance. Finally, we relate the problem of optimal tracer deployment to the Lagrangian coherent structures (LCS) of the point vortex system. / by Subhadeep Mitra. / S.M.
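The weighted-particle idea can be sketched with a bootstrap particle filter on a toy one-dimensional random-walk model standing in for the vortex dynamics; propagate, weight by the observation likelihood, estimate, resample. The model and noise levels are arbitrary choices for illustration:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=1000,
                              proc_std=0.5, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a toy 1-D random-walk state with
    Gaussian observation noise. Returns the posterior-mean state
    estimate at each time step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # propagate each particle through the (random walk) dynamics
        particles = particles + rng.normal(0.0, proc_std, n_particles)
        # weight by the Gaussian observation likelihood
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # multinomial resampling to counter weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return estimates
```

In the thesis the state is the set of vortex positions and the observations are tracer positions, but the propagate-weight-resample loop is the same.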
38

Testing for jumps and cojumps in financial markets

Ju, Cheng, S.M. Massachusetts Institute of Technology January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 63-64). / In this thesis, we introduce a new testing methodology to detect cojumps in multi-asset returns. We define a cojump as a jump in at least one dimension of the return processes. For a multivariate process that follows a semimartingale, and with no other specific assumptions on the process, we form a test statistic which can easily disentangle jumps from continuous paths of the process. We prove that the test statistics are chi-square distributed in the absence of jumps in any dimension. We propose a hypothesis test based on the extreme distribution of the test statistics. If the observed test statistic is beyond the extreme level, then most likely a cojump has occurred. Monte Carlo simulation is performed to assess the effectiveness of the test by examining its size and power. We apply the test to a pair of empirical asset returns and the findings of jump timing are consistent with existing literature. / by Cheng Ju. / S.M.
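A hedged sketch of the general flavor of such a test (not the thesis's actual statistic): standardize each asset's return by a local volatility, which is assumed known here; sum the squares across assets, which is chi-square with d degrees of freedom under no jumps; and flag periods exceeding an extreme quantile:

```python
import numpy as np
from scipy.stats import chi2

def cojump_flags(returns, vol, alpha=1e-4):
    """Flag candidate cojump periods in a (T, d) array of returns.
    `vol` is the local volatility (scalar or same shape as `returns`),
    assumed known for this sketch. Under no jumps the statistic is
    approximately chi-square with d degrees of freedom."""
    z = returns / vol                 # standardized returns
    stat = np.sum(z ** 2, axis=1)     # one statistic per period
    threshold = chi2.ppf(1 - alpha, df=returns.shape[1])
    return stat > threshold, stat
```

Choosing alpha via the extreme-value distribution of the maximum statistic over many periods, as the abstract describes, controls the family-wise false-detection rate.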
39

A diagnostic analysis of retail out-of-stocks / Diagnostic analysis of retail OOSs

Foo, Yong Ning January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 101). / In the highly competitive retail industry, merchandise out-of-stock (OOS) is a significant and pertinent problem. This thesis performs a diagnostic analysis on retail out-of-stocks using empirical data from a major retailer. In this thesis, we establish the empirical relationship of OOS rate with the amount of safety stock carried, the time between orders, and the forecast error, providing insights into the effects of these three factors on the probability of OOS occurrences. The root causes of OOS are also examined in the thesis. We find that up to 34% of OOS can be attributed to forecast error while up to 22% can be attributed to delay in order replenishment. For the OOSs that were associated with order delay, we can trace 60% of these to out-of-stock at the store's distribution center (DC). The thesis also examines a peculiarity in the occurrence of OOSs. We find that the OOS rate of Class C items is significantly higher in stores with higher sales volume. We can attribute much of this phenomenon to three factors: stores with higher sales volume hold less safety stock for Class C items, have a shorter time between orders and have relatively larger forecast errors. / by Yong Ning Foo. / S.M.
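The direction of the safety-stock effect the abstract reports is consistent with the textbook relation between safety stock and stockout probability under normally distributed lead-time demand; this sketch is that standard relation, not the thesis's empirical model:

```python
import math

def stockout_probability(safety_stock, demand_std, lead_time):
    """Probability of a stockout during the replenishment lead time,
    assuming lead-time demand is normal with the given per-period
    standard deviation: P = 1 - Phi(SS / (sigma * sqrt(L)))."""
    z = safety_stock / (demand_std * math.sqrt(lead_time))
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
```

Less safety stock, a longer exposure period, and a larger effective demand uncertainty (forecast error) all raise this probability, matching the three factors the thesis identifies.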
40

Methods and applications in computational protein design

Biddle, Jason Charles January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student-submitted PDF version of thesis. / Includes bibliographical references (p. 107-111). / In this thesis, we summarize our work on applications and methods for computational protein design. First, we apply computational protein design to address the problem of degradation in stored proteins. Specifically, we target cysteine, asparagine, glutamine, and methionine amino acid residues to reduce or eliminate a protein's susceptibility to degradation via aggregation, deamidation, and oxidation. We demonstrate this technique on a subset of degradation-prone amino acids in phosphotriesterase, an enzyme that hydrolyzes toxic organophosphates including pesticides and chemical warfare agents. Second, we introduce BroMAP/A*, an exhaustive branch-and-bound search technique with enumeration. We compare performance of BroMAP/A* to DEE/A*, the current standard for conformational search with enumeration in the protein design community. When limited computational resources are available, DEE/A* sometimes fails to find the global minimum energy conformation and/or enumerate the lowest-energy conformations for large designs. Given the same computational resources, we show how BroMAP/A* is able to solve large designs by efficiently dividing the search space into small, solvable subproblems. / by Jason Charles Biddle. / S.M.
