61

Improving the efficiency of an automated manufacturing system through a tri-part approach

Song, Chen, S.M. Massachusetts Institute of Technology January 2013 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 71-72). / This research investigates a complex automated manufacturing system at three levels to improve its efficiency. In the system there are parallel loops of stations connected by a single closed conveyor. In each loop there is a series of identical stations, each with multiple storage slots and the capability to process several jobs simultaneously. At the system level we undertake capacity planning and explore Work-in-Process (WIP) control. We build an Excel model to calculate the implied load of each station, applying the model to sensitivity analyses of the system capacity. In addition, we identify a concave relationship between output and WIP based on actual factory data from our industrial partner. Surprisingly, we observe a reduction in output when WIP is high. Therefore, we suggest adopting a CONWIP policy in the system in order to increase and smooth the output. At the loop level we study the assignment policy. The complexity of this study is highlighted by non-trivial travel time between stations. We build a simulation model in Matlab to compare different assignment policies. The objective is to find the assignment policy that balances the station load, decreases the flow time for jobs, and reduces the rejection or blockage rate for the system. At the station level we investigate the holding time between simultaneous processes. We model this as a semi-Markov process, building a simulation model in Matlab to confirm the analytical results. We discover a tradeoff between flow time and production rate under different holding times, and propose new holding rules to further improve station performance. The conclusions from this research are useful for our industrial partner in its efforts to improve the operation of the system and to increase its capacity. Moreover, the methodologies and insights of this work can benefit further research on related industry practice. / by Chen Song. / S.M.
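The CONWIP recommendation in this abstract can be illustrated with a deliberately small sketch: a release rule that admits a new job into the line only while current WIP is below a fixed cap. Everything here is an invented toy, not the factory model built in the thesis: the cap values, the geometric processing times, the Bernoulli release opportunities, and the choice to count rather than queue blocked releases are all illustrative assumptions.

```python
import random

def simulate_conwip(wip_cap, horizon=10_000, p_finish=0.2, p_arrival=0.5, seed=0):
    """Toy discrete-time CONWIP loop: a new job is released into the line
    only when current WIP is below the cap, so WIP never exceeds wip_cap."""
    random.seed(seed)
    wip, completed, rejected = 0, 0, 0
    for _ in range(horizon):
        # Each job in process finishes this period with probability p_finish.
        finished = sum(1 for _ in range(wip) if random.random() < p_finish)
        wip -= finished
        completed += finished
        # A release opportunity occurs with probability p_arrival; it is
        # admitted only if the WIP cap allows it (the CONWIP rule).
        if random.random() < p_arrival:
            if wip < wip_cap:
                wip += 1
            else:
                rejected += 1
    return completed / horizon, rejected / horizon

# Sweep the cap to see throughput saturate while WIP stays bounded.
for cap in (2, 5, 10, 20):
    rate, rej = simulate_conwip(cap)
    print(f"cap={cap:2d}  throughput/period={rate:.3f}  rejected/period={rej:.3f}")
```

The sketch only shows the mechanics of the release rule; it does not reproduce the concave output-WIP relationship observed in the factory data, which is the empirical finding motivating the CONWIP suggestion.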
62

A simulation-based resource optimization and time reduction model using design structure matrix

Zhang, Yifeng, S.M. Massachusetts Institute of Technology January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 87-89). / Project scheduling is an important research and application area in engineering management. Recent research in this area addresses resource constraints as well as stochastic durations. This thesis presents a simulation-based optimization model for solving resource-constrained product development project scheduling problems. The model uses a design structure matrix (DSM) to represent the information exchange among the various tasks of a project. Beyond simple binary precedence relationships, the DSM can also quantify the extent of these interactions. In particular, the interactions are characterized by rework probabilities, rework impacts, and learning. As a result, modeling based on the DSM allows iterations to take place. This stochastic characteristic is not well addressed in the earlier literature on project scheduling problems. Adding resource factors to DSM simulation is a relatively new topic. We not only model the constraints posed by resource requirements, but also explore the effect of allocating different amounts of resources on iterations. A genetic algorithm (GA) is chosen to optimize the model over a weighted sum of a set of heuristics. The GA is known for its robustness in solving many types of problems. While a typical branch-and-bound method depends on problem-specific information to generate tight bounds, the GA requires virtually no information about the search space, which makes this simulation-optimization model more general. Results are shown for several fictitious examples, each with a distinctive DSM structure. Managerial insights are derived from comparing the GA solutions for these examples with other known solutions. / by Yifeng Zhang. / S.M.
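A minimal sketch of the kind of DSM-driven Monte Carlo simulation described above is given below. The three-task matrix of rework probabilities, the rework-impact and learning factors, and the purely sequential execution order are all invented for illustration; the thesis's model, and its GA layer for optimizing resource allocation, is considerably richer than this.

```python
import random

# dsm[i][j]: probability that finishing task j triggers rework on an
# earlier task i (the above-diagonal entries drive iteration/feedback).
duration = [5.0, 3.0, 4.0]          # nominal task durations (illustrative)
dsm = [[0.0, 0.3, 0.2],
       [0.0, 0.0, 0.4],
       [0.0, 0.0, 0.0]]
rework_impact = 0.5                  # fraction of the original work redone
learning = 0.7                       # each repeat of a task gets cheaper

def simulate_project(seed):
    """One Monte Carlo pass: execute tasks in order, sampling rework."""
    random.seed(seed)
    total, repeats = 0.0, [0, 0, 0]
    queue = [0, 1, 2]                # simple sequential execution order
    while queue:
        task = queue.pop(0)
        if repeats[task] == 0:
            factor = 1.0
        else:                        # reworked pass: reduced scope + learning
            factor = rework_impact * learning ** (repeats[task] - 1)
        total += duration[task] * factor
        repeats[task] += 1
        # Completing this task may send earlier tasks back for rework.
        for earlier in range(task):
            if random.random() < dsm[earlier][task]:
                queue.append(earlier)
    return total

samples = [simulate_project(s) for s in range(1000)]
print("mean project duration:", sum(samples) / len(samples))
```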
63

Pricing with quality perception : theory and experiment

Sinchaisri, Wichinpong (Wichinpong Park) January 2016 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2016. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 111-115). / Quality is one of the most important factors behind a decision to purchase any product. Consumers have long assumed that price and quality are highly correlated, and that as the price of a product increases, its quality also increases ("you get what you pay for"). Several researchers have studied how consumers use price to infer quality, but very few have investigated the impact of pricing strategies, particularly price markdowns, on quality perception and how a retailer should react to such behavior. Our key research questions, viewed through both an empirical and a theoretical lens, concern how markdowns with different discount levels may induce different consumer behaviors and how the firm should incorporate them when optimizing its markdown policy. We empirically elicit the relationship between a consumer's quality perception and the available price information, and refine a consumer demand model to capture these insights, together with other motives: reference dependence, loss aversion, patience, and optimism. For the retailer, we characterize the structure of the market segmentation and analyze its optimal markdown strategy when consumers are sensitive to quality. We present conditions under which it is optimal for the firm to apply a markdown to its products. When consumers are more sensitive to the product's original price than to the discount, or are too impatient to wait for future discounts, the retailer earns the maximum revenue by applying a markdown strategy. Furthermore, we advocate that the firm pre-announce information about future markdowns in order to avoid the negative effect of consumers' inaccurate estimates. / by Wichinpong Sinchaisri. / S.M.
64

Multilevel spectral clustering : graph partitions and image segmentation

Kong, Tian Fook January 2008 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / Includes bibliographical references (p. 145-146). / While the spectral graph partitioning method gives high-quality segmentations, segmenting large graphs by the spectral method is computationally expensive. Numerous multilevel graph partitioning algorithms have been proposed to reduce the segmentation time for the spectral partitioning of large graphs. However, the greedy local refinement used in these multilevel schemes tends to trap the partition in poor local minima. In this thesis, I develop a multilevel graph partitioning algorithm that combines the inverse powering method with greedy local refinement. This combination ensures that the partition quality of the multilevel method is as good as, if not better than, segmenting the large graph by the spectral method. In addition, I present a scheme to construct the adjacency matrix W and the degree matrix D for the coarse graphs. The proposed multilevel graph partitioning algorithm is able to bisect a graph (k = 2) in significantly less time than segmenting the original graph without the multilevel implementation, while achieving the same normalized cut (Ncut) value. The starting eigenvector, obtained by solving a generalized eigenvalue problem on the coarsest graph, is close to the Fiedler vector of the original graph. Hence, only a few inverse iterations are needed for the starting vector to converge. In the k-way multilevel graph partitioning, the larger the graph, the greater the reduction in the time needed to segment it. For image segmentation, the multilevel scheme gives better segmentations than segmenting the original image directly, and it is more successful at preserving the salient parts of an object. In this work, I also show that the Ncut value is not the ultimate yardstick of segmentation quality: a partition with a lower Ncut value does not necessarily yield a better segmentation. Segmenting large images by the multilevel method offers both speed and quality. / by Tian Fook Kong. / S.M.
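As background for the multilevel scheme, the flat (single-level) normalized-cut bisection it accelerates can be sketched in a few lines: form the graph Laplacian, solve the generalized eigenproblem (D - W)v = lambda*D*v, and split on the sign of the Fiedler vector. The toy two-clique graph and the dense eigensolver below are illustrative only; the point of the thesis is precisely to avoid solving this eigenproblem directly on large graphs, by coarsening, solving on the coarsest graph, and refining with inverse iteration plus greedy local refinement.

```python
import numpy as np
from scipy.linalg import eigh

def ncut_bisect(W):
    """Spectral bisection by normalized cut: solve (D - W) v = lambda * D v
    and split on the second eigenvector (the Fiedler vector).
    Dense solver; a multilevel scheme would coarsen W first and refine."""
    d = W.sum(axis=1)
    D = np.diag(d)
    L = D - W
    vals, vecs = eigh(L, D)          # generalized symmetric eigenproblem
    fiedler = vecs[:, 1]             # eigenvector of 2nd-smallest eigenvalue
    return fiedler > np.median(fiedler)

# Two loosely connected 4-node cliques (illustrative graph).
W = np.zeros((8, 8))
W[:4, :4] = 1.0
W[4:, 4:] = 1.0
np.fill_diagonal(W, 0.0)
W[3, 4] = W[4, 3] = 0.1              # weak bridge between the cliques
print(ncut_bisect(W))                # expect the two cliques to separate
```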
65

Sensitivity analysis of oscillating hybrid systems

Saxena, Vibhu Prakash January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 137-140). / Many models of physical systems oscillate periodically and exhibit both discrete-state and continuous-state dynamics. These systems are called oscillating hybrid systems and find applications in diverse areas of science and engineering, including robotics, power systems, and systems biology. Sensitivity analysis is a useful tool that can provide valuable insights into the influence of parameters on the dynamic behavior of such systems. A theory for sensitivity analysis of oscillating hybrid systems with respect to initial conditions and/or parameters is developed and discussed. Boundary-value formulations are presented for the initial conditions, the period, the period sensitivity, and the initial conditions of the sensitivities. A difference-equation analysis of general homogeneous equations and parametric sensitivity equations with linear, periodic, piecewise-continuous coefficients is presented. It is noted that the monodromy matrix for these systems is not a fundamental matrix evaluated after one period, but depends on one. Based on this analysis, a three-part decomposition of the sensitivities is presented; the three parts classify the influence of the parameters on the period, amplitude, and relative phase of the limit cycles of hybrid systems, respectively. The theory is then applied to the computation of sensitivity information for several examples of oscillating hybrid systems using existing numerical techniques and methods. The information given by the sensitivity trajectory and its parts can be used in algorithms for applications such as parameter estimation, control system design, stability analysis, and dynamic optimization. / by Vibhu Prakash Saxena. / S.M.
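For orientation, a standard way to propagate parametric sensitivities through a hybrid trajectory (assuming the state itself is continuous across each mode switch) combines the usual forward sensitivity ODE on smooth segments with a jump update at each switching time. The relations below state that textbook form only; they are not the specific boundary-value formulation or three-part decomposition developed in the thesis.

```latex
% Smooth segment, \dot{x} = f(x, p, t), with sensitivities S = \partial x / \partial p:
\dot{S} = \frac{\partial f}{\partial x}\,S + \frac{\partial f}{\partial p},
\qquad S(t_0) = \frac{\partial x_0}{\partial p}.
% Switch at t_e defined by g(x(t_e), p, t_e) = 0, with vector fields f^- / f^+
% immediately before/after the switch and continuous state across it:
\frac{\mathrm{d}t_e}{\mathrm{d}p}
  = -\,\frac{\dfrac{\partial g}{\partial x}\,S^- + \dfrac{\partial g}{\partial p}}
            {\dfrac{\partial g}{\partial x}\,f^- + \dfrac{\partial g}{\partial t}},
\qquad
S^+ = S^- + \bigl(f^- - f^+\bigr)\,\frac{\mathrm{d}t_e}{\mathrm{d}p}.
```

Composing the segment solutions and jump updates over one period is what produces the monodromy-type matrix discussed in the abstract.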
66

The effectiveness of a simple policy for coordinating inventory control and pricing strategies

Sun, Zhibo, S.M. Massachusetts Institute of Technology January 2010 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 51-54). / We investigate the effectiveness of an (s, S, p) policy relative to an (s, S, A, p) policy in a single-product, periodic-review, finite-horizon model with stochastic multiplicative demand and a fixed ordering cost, in which an (s, S, A, p) policy is optimal. An extensive numerical study shows that an (s, S, p) policy is empirically highly effective relative to an (s, S, A, p) policy. We also formulate two alternative benchmark policies and find that the (s, S, p) policy is superior in terms of profit. In addition, we propose an efficient algorithm, combining simulated annealing with a modified binary search, to determine the (s, S, p) policy for the model. / by Zhibo Sun. / S.M.
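A sample path under a fixed (s, S, p) policy is easy to sketch and makes the structure of the policy concrete: whenever inventory falls below s, order up to S; in every period, post the price p and realize multiplicative demand. All numbers below (costs, the linear expected-demand curve, the lognormal shock) are illustrative assumptions, and the s, S, and p values are fixed rather than optimized, whereas the thesis searches over them with simulated annealing and a modified binary search.

```python
import random

def simulate_sSp(s, S, prices, periods=52, K=100.0, c=2.0, h=0.5, b=3.0, seed=1):
    """One sample path of an (s, S, p) policy with multiplicative demand.
    If inventory is below s, order up to S (paying fixed cost K); the posted
    price p sets expected demand (linear in p here), scaled by a lognormal
    multiplicative shock. All parameter values are illustrative."""
    random.seed(seed)
    x, profit = 0.0, 0.0
    for t in range(periods):
        p = prices[t % len(prices)]
        if x < s:                               # order-up-to-S rule
            profit -= K + c * (S - x)
            x = S
        demand = max(0.0, 20.0 - 1.5 * p) * random.lognormvariate(0.0, 0.3)
        sales = min(x, demand)
        profit += p * sales
        x -= sales
        profit -= h * x + b * max(0.0, demand - sales)   # holding + shortage cost
    return profit

print(simulate_sSp(s=5, S=30, prices=[8.0]))
```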
67

Simulation and design optimization for linear wave phenomena on metamaterials

Saà-Seoane, Joel January 2011 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 87-91). / Periodicity can change material properties in very unintuitive ways. Many wave propagation devices, such as waveguides, light-bending structures, or frequency filters, can be modeled through finite periodic structures designed using optimization techniques. Two different kinds of problems arise: those involving linear waves and those involving nonlinear waves. The former have been widely studied and analyzed in recent years, and many interesting results have been found: cloaking devices, superlensing, and fiber optics. The latter are a topic of great current interest, and much work remains to be done, since nonlinear problems are far more complicated and much less is known about them. Nonlinear wave phenomena include acoustic amplitude filters, sound bullets, and elastic shock-mitigation structures, among others. The wave equation can be solved accurately using the Hybridizable Discontinuous Galerkin (HDG) method in both the time and frequency domains. Furthermore, convex optimization techniques can be used to obtain the desired material properties. Thus, the path followed here is to implement a wave simulator in one and two dimensions and then formulate specific optimization problems that lead to materials with particular, desired properties. These include eigenvalue optimization problems as well as more general optimal-control and topology optimization problems. This thesis focuses on linear phenomena. An HDG simulation code has been developed, and optimization problems for the design of several model devices have been formulated. A series of numerical results is included, showing how effective and unintuitive such designs are. / by Joel Saà-Seoane. / S.M.
68

On the Gap-Tooth direct simulation Monte Carlo method

Armour, Jessica D January 2012 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, February 2012. / "February 2012." Cataloged from PDF version of thesis. / Includes bibliographical references (p. [73]-74). / This thesis develops and evaluates Gap-tooth DSMC (GT-DSMC), a direct Monte Carlo simulation procedure for dilute gases combined with the Gap-tooth method of Gear, Li, and Kevrekidis. The latter was proposed as a means of reducing the computational cost of microscopic (e.g. molecular) simulation methods by using simulation particles only in small regions of space (teeth) separated by (ideally) large gaps. This scheme requires an algorithm for transporting particles between teeth. Such an algorithm can be readily developed and implemented within direct Monte Carlo simulations of dilute gases due to the non-interacting nature of the particle simulators. The present work develops and evaluates the particle treatment at boundaries with diffuse-wall boundary conditions and investigates the drawbacks of GT-DSMC implementations that detract from the theoretically large computational benefit of the algorithm (the cost reduction is linear in the gap-to-tooth ratio). Particular attention is paid to the additional numerical error introduced by the gap-tooth algorithm as well as the additional statistical uncertainty introduced by the smaller number of particles. We find the numerical error introduced by transporting particles to adjacent teeth to be considerable. Moreover, because of the reduced number of particles in the simulation domain, correlations persist longer, and thus statistical uncertainties are larger than in standard DSMC for the same number of particles per cell. This considerably reduces the computational benefit of the GT-DSMC algorithm. We conclude that the GT-DSMC method requires more development, particularly in the area of error and uncertainty reduction, before it can be used as an effective simulation method. / by Jessica D. Armour. / S.M.
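The transport step that GT-DSMC needs, moving a particle that leaves its tooth into the adjacent tooth, can be sketched for free-streaming particles in one dimension. This is one plausible minimal transport rule written purely for illustration; it is not the redistribution and diffuse-wall treatment developed in the thesis, and the tooth geometry and velocity parameters are invented.

```python
import random

def gap_tooth_advect(positions, velocities, dt, n_teeth=10, tooth_frac=0.2, L=1.0):
    """Free-streaming advection for particles that live only inside 'teeth'.
    A particle crossing a tooth edge is handed to the adjacent tooth and
    re-enters through the matching edge, skipping over the empty gap."""
    spacing = L / n_teeth
    w = tooth_frac * spacing
    moved = []
    for x, v in zip(positions, velocities):
        tooth = int(x // spacing)
        local = (x - tooth * spacing) + v * dt   # offset inside the tooth
        while local < 0.0 or local > w:          # may cross several teeth
            if local > w:
                tooth = (tooth + 1) % n_teeth    # hand off to the right neighbour
                local -= w
            else:
                tooth = (tooth - 1) % n_teeth    # hand off to the left neighbour
                local += w
        moved.append(tooth * spacing + local)
    return moved

# Seed particles uniformly inside the teeth with random thermal velocities.
random.seed(0)
spacing, w = 1.0 / 10, 0.2 * (1.0 / 10)
pos = [k * spacing + random.uniform(0.0, w) for k in range(10) for _ in range(5)]
vel = [random.gauss(0.0, 1.0) for _ in pos]
print(gap_tooth_advect(pos, vel, dt=0.01)[:5])
```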
69

Algorithms for particle remeshing applied to smoothed particle hydrodynamics

Galagali, Nikhil January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 57-59). / This thesis outlines adaptivity schemes for particle-based methods for the simulation of nearly incompressible fluid flows. As with the remeshing schemes used in mesh- and grid-based methods, there is a need to use localized refinement in particle methods to reduce computational costs. Various forms of particle refinement have been proposed for particle-based methods such as Smoothed Particle Hydrodynamics (SPH). However, none of the currently available techniques is able to retain the original degree of randomness among the particles; existing methods reinitialize particle positions on a regular grid. Using such a method for region-localized refinement can lead to discontinuities at the interfaces between refined and unrefined particle domains, which in turn can produce inaccurate results or solution divergence. This thesis outlines the development of new localized refinement algorithms that are capable of retaining the initial randomness of the particles, thus eliminating transition-zone discontinuities. The algorithms were tested through SPH simulations of Couette flow and Poiseuille flow with spatially varying particle spacing, and the computed velocity profiles agree well with theoretical results. In addition, the algorithms were tested on a flow-past-a-cylinder problem with a complete domain remeshing; the original and the remeshed particle distributions showed similar velocity profiles. The algorithms can be extended to 3-D flows with few changes, and allow the simulation of multi-scale flows at reduced computational costs. / by Nikhil Galagali. / S.M.
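The basic operation behind any particle remeshing step, transferring field values from the old particle set onto a new one by kernel-weighted (Shepard-normalized) interpolation, can be sketched as follows. The cubic-spline kernel, the 1-D setting, and the choice to refine a sub-interval by adding randomly jittered particles are illustrative assumptions; the thesis's contribution is the specific refinement algorithms that keep the particle distribution random across the refined/unrefined interface, which this sketch does not claim to reproduce.

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1-D cubic-spline SPH kernel with support 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

def remesh(old_x, old_u, old_m, old_rho, new_x, h):
    """Kernel-weighted transfer of a velocity field u from old particles
    onto new particle positions (Shepard-normalized SPH interpolation)."""
    new_u = np.zeros_like(new_x)
    for i, x in enumerate(new_x):
        w = cubic_spline_kernel(x - old_x, h) * old_m / old_rho
        s = w.sum()
        new_u[i] = (w * old_u).sum() / s if s > 0 else 0.0
    return new_u

# Old particles on a slightly irregular layout; new particles refine [0.4, 0.6]
# and keep a random jitter rather than being reinitialized on a regular grid.
rng = np.random.default_rng(0)
old_x = np.sort(rng.uniform(0.0, 1.0, 50))
old_u = np.sin(2 * np.pi * old_x)
old_m = np.full(50, 1.0 / 50)
old_rho = np.full(50, 1.0)
new_x = np.sort(np.concatenate([old_x, rng.uniform(0.4, 0.6, 50)]))
print(remesh(old_x, old_u, old_m, old_rho, new_x, h=0.05)[:5])
```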
70

The (travel) times they are a changing : a computational framework for the diagnosis of non-alcoholic fatty liver disease (NAFLD) / Computational framework for the diagnosis of non-alcoholic fatty liver disease (NAFLD)

Benjamin, Alex (Alex Robert) January 2017 (has links)
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 57-61). / We propose and validate a non-invasive method to diagnose Non-Alcoholic Fatty Liver Disease (NAFLD). The proposed method is based on two fundamental concepts: 1) the speed of sound in a fatty liver is lower than that in a healthy liver, and 2) the quality of an ultrasound image is maximized when the beamforming speed of sound used in image formation matches the speed in the medium under examination. The proposed method uses image brightness and sharpness as quantitative image-quality metrics to predict the true sound speed and capture the effects of fat infiltration, while accounting for transmission through subcutaneous fat. Validation using nonlinear acoustic simulations indicated the proposed method's ability to predict the speed of sound within a medium under examination with little sensitivity to the transducer's frequency (errors less than 2%). Additionally, ex vivo testing on sheep liver, mouse livers, and tissue-mimicking phantoms indicated the method's ability to predict the true speed of sound with errors less than 0.5% (despite the presence of subcutaneous fat) and to quantify the relationship between fat content and speed of sound. Finally, this work begins to build a framework for determining the spatial distribution of the longitudinal speed of sound, thereby providing a promising approach for diagnosing and tracking NAFLD over time. / by Alex Benjamin. / S.M.
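The core search the abstract describes, sweeping candidate beamforming sound speeds and keeping the one that maximizes an image-quality metric, can be outlined as below. The gradient-energy sharpness metric, the candidate speed range, and the beamforming routine are all assumptions for illustration: beamform(rf_data, c) stands in for a hypothetical user-supplied delay-and-sum reconstruction and is not defined here, and the thesis uses both brightness and sharpness metrics while also correcting for the subcutaneous fat layer.

```python
import numpy as np

def sharpness(image):
    """Normalized gradient energy: one simple sharpness metric of the kind
    used to score image quality as a function of beamforming sound speed."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    return np.sum(gx**2 + gy**2) / np.sum(img**2)

def estimate_sound_speed(beamform, rf_data, candidates=np.arange(1400, 1621, 5)):
    """Sweep candidate beamforming speeds (m/s) and return the one whose
    reconstructed image scores highest. 'beamform(rf_data, c)' is a
    hypothetical delay-and-sum routine supplied by the caller."""
    scores = [sharpness(beamform(rf_data, c)) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Usage idea: a speed estimate well below the healthy-liver range would point
# toward fat infiltration, which is the physical premise of the method.
```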
