91.
Modeling travel time uncertainty in traffic networks. Chen, Daizhuo. January 2010.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 147-154). / Uncertainty in travel time is one of the key factors in understanding and managing congestion in transportation networks. Models that incorporate travel time uncertainty must specify two mechanisms: the mechanism through which travel time uncertainty is generated and the mechanism through which it influences users' behavior. Existing traffic equilibrium models do not capture these two mechanisms in an integrated way. This thesis proposes a new stochastic traffic equilibrium model that incorporates travel time uncertainty in an integrated manner. We focus on how uncertainty in travel time induces uncertainty in traffic flow and vice versa. Travelers independently make probabilistic path choice decisions, inducing stochastic traffic flows in the network, which in turn result in uncertain travel times. Our model, based on the distribution of travel time, uses the mean-variance approach to evaluate travelers' travel times and thereby induce a stochastic traffic equilibrium flow pattern. We also examine when the new model has a solution and when that solution is unique. We discuss algorithms for solving the model, and compare it with existing traffic equilibrium models in the literature. We find that existing models tend to overestimate traffic flows on links with high travel time variance-to-mean ratios. To benchmark the various traffic network equilibrium models in the literature against the model we introduce, we investigate the total system cost, namely the total travel time in the network, for all of these models. We prove three bounds that compare the system cost of the new model with that of existing models, discuss the tightness of these bounds, and test them through numerical experiments on test networks. / by Daizhuo Chen. / S.M.
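As an aside for readers unfamiliar with the mean-variance approach the abstract invokes, a traveler's perceived cost of a path is typically written as follows (a sketch in our own notation, not necessarily the thesis's):

```latex
C_p \;=\; \mathbb{E}[T_p] \;+\; \lambda\,\mathrm{Var}[T_p]
```

Here T_p is the random travel time on path p and lambda >= 0 is a risk-aversion weight; at equilibrium, all used paths between an origin-destination pair attain the minimal perceived cost.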
92.
A Bayesian approach to feed reconstruction. Conjeevaram Krishnakumar, Naveen Kartik. January 2013.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2013. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 83-86). / In this thesis, we developed a Bayesian approach to estimate the detailed composition of an unknown feedstock in a chemical plant by combining a few bulk measurements of the feedstock taken in the plant with detailed composition measurements of a similar feedstock made in a laboratory. The complexity of the Bayesian model, combined with the simplex-type constraints on the weight fractions, makes it difficult to sample from the resulting high-dimensional posterior distribution. We reviewed and implemented different algorithms to generate samples from this posterior that satisfy the given constraints, and tested our approach on a data set from a plant. / by Naveen Kartik Conjeevaram Krishnakumar. / S.M.
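The hard part here is the simplex constraint on the weight fractions. As a minimal illustration of one standard remedy, not the thesis's algorithm (all names below are ours), a Metropolis-Hastings sampler can walk the simplex directly by proposing from a Dirichlet distribution centered at the current point:

```python
import numpy as np
from scipy.stats import dirichlet

def mh_on_simplex(log_post, x0, n_steps=5000, conc=200.0, seed=0):
    """Metropolis-Hastings on the probability simplex.

    Proposals y ~ Dirichlet(conc * x) stay on the simplex by construction;
    larger `conc` means smaller steps. The proposal is asymmetric, so the
    acceptance ratio includes the Hastings correction."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        y = rng.dirichlet(conc * x)
        log_accept = (log_post(y) - log_post(x)
                      + dirichlet.logpdf(x, conc * y)   # q(x | y)
                      - dirichlet.logpdf(y, conc * x))  # q(y | x)
        if np.log(rng.random()) < log_accept:
            x = y
        samples.append(x)
    return np.asarray(samples)

# Sanity check: recover a known Dirichlet(2, 3, 4) target on the 3-simplex.
target = lambda w: dirichlet.logpdf(w, [2.0, 3.0, 4.0])
draws = mh_on_simplex(target, x0=np.ones(3) / 3)
print(draws.mean(axis=0))  # should approach [2, 3, 4] / 9
```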
93.
Must linear algebra be block cyclic? : and other explorations into the expressivity of data parallel and task parallel languages. Sundaresh, Harish Peruvamba. January 2007.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (leaves 68-69). / Prevailing parallel linear algebra software distributes data block cyclically across its processors for good load balancing and communication between nodes. The block cyclic distribution scheme, characterized by cyclic allocation of row and column data blocks followed by consecutive elimination, is widely used in scientific computing and is the default approach in ScaLAPACK. We are not aware of any software outside of linear algebra that uses cyclic distributions, which creates an incompatibility. This calls for a possible change in approach as advanced computing platforms like Star-P emerge, allowing for interoperability of algorithms. This work demonstrates a data parallel column block cyclic elimination technique for LU and QR factorization. The technique yields good load balance and communication between nodes, and also eliminates superfluous overheads. The algorithms are implemented with consecutive allocation and cyclic elimination on the high-level platform Star-P. Block updating provides an extensive performance enhancement, using Level-3 Basic Linear Algebra Subroutines (BLAS-3) to deliver substantial speedup. This project also provides an overview of threading in parallel systems through implementations of important task parallel algorithms: prefix sums, hexadecimal digits of Pi, and Monte Carlo simulation. / by Harish Peruvamba Sundaresh. / S.M.
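For readers unfamiliar with the layout under discussion, a 1-D block-cyclic distribution groups consecutive rows into blocks and deals the blocks out to processes in round-robin order. The following sketch (our own illustration, not code from the thesis) shows the ScaLAPACK-style owner computation:

```python
def block_cyclic_owner(i, block_size, nprocs):
    """Process that owns global row i in a 1-D block-cyclic layout:
    rows are grouped into blocks, and blocks are dealt out cyclically."""
    return (i // block_size) % nprocs

# With block_size=2 and 3 processes, rows 0..9 map to processes
# [0, 0, 1, 1, 2, 2, 0, 0, 1, 1]:
print([block_cyclic_owner(i, 2, 3) for i in range(10)])
```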
94.
Loss of coordination in competitive supply chains. Teo, Koon Soon. January 2009.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Cataloged from student submitted PDF version of thesis. / Includes bibliographical references (p. 161-163). / The loss of coordination in supply chains quantifies the inefficiency (i.e. the loss of total profit) due to the presence of competition in the supply chain. In this thesis, we discuss four models: one model with multiple retailers facing multinomial logit demand, and three supply chain configurations with one supplier and multiple retailers producing differentiated products under an affine demand function, with i) quantity competition among retailers with substitute products, ii) price competition among retailers with substitute products, and iii) quantity competition among retailers with complementary products. As a special case, we also consider the symmetric setting in the four models, where all retailers encounter identical demand, marginal costs, and quality differences and, in the multinomial logit demand case, identical variances in the consumers' utility functions. The main contribution of this thesis lies in the precise quantification of the loss of profit due to lack of coordination, through analytical lower bounds. We provide bounds in terms of the eigenvalues of the demand sensitivity matrix, or the demand sensitivities. For the multinomial logit demand model, the lower bounds are in terms of the number of retailers and the predictability of consumer behaviour. We use simulations to provide further insights into the loss of coordination and the tightness of the bounds. We find that a supply chain with retailers operating under Bertrand competition offering substitute products is the most efficient, with an average profit loss of less than 15%. We also find that competitive supply chains can be coordinated when offering substitute products. / (cont.) This occurs under the symmetric setting when there is a 'reasonable' number of Cournot retailers under intense competition, or when demand is 'more' inelastic in a Bertrand competition setting. As an example, in the presence of six Cournot retailers under intense competition, the profit loss is 2.04%, and when demand is perfectly inelastic in a Bertrand competition, the supply chain is perfectly coordinated with a profit loss of 0%. For the multinomial logit demand case, we find that higher predictability of consumer behaviour (i.e., when consumers' choices are more deterministic) increases profits both under coordination and under competition, and that a larger number of retailers decreases profits under competition but increases profits under coordination. The net result is that efficiency deteriorates as the number of competitive retailers and the predictability of consumer behaviour increase. / by Koon Soon Teo. / S.M.
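The loss of coordination the abstract quantifies reduces to a single ratio; in our own notation (not necessarily the thesis's), with Pi^eq the total equilibrium profit under competition and Pi^c the total profit under a coordinated (centralized) supply chain:

```latex
\text{profit loss} \;=\; 1 - \frac{\Pi^{\mathrm{eq}}}{\Pi^{\mathrm{c}}},
\qquad 0 \le \Pi^{\mathrm{eq}} \le \Pi^{\mathrm{c}}.
```

The reported figures (e.g. a 2.04% loss with six Cournot retailers) are values of this quantity, so lower bounds on the ratio Pi^eq / Pi^c translate directly into upper bounds on the loss.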
95.
Optimal corporate investment and financing policies with time-varying investment opportunities. Cai, Linjiang. January 2011.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2011. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 65-68). / Bolton, Chen and Wang (2009) propose a model (the BCW model) of dynamic corporate investment, financing, and risk management for a financially constrained firm. In the BCW model, corporate risk management is a combination of internal liquidity management, financial hedging, investment, and payout decisions. However, Bolton et al. (2009) assume that the firm's investment opportunities are constant over time, which is unrealistic in many situations. I extend the analytically tractable dynamic framework of Bolton et al. (2009) to firms facing stochastic investment opportunities. The extended model can help financially constrained firms optimally choose external financing (equity or credit line), internal cash accumulation, corporate investment, risk management, and payout policies in an environment subject to time-varying productivity shocks. I also compare the policies of the BCW model with those of the extended model, as well as optimal with non-optimal policies. / by Linjiang Cai. / S.M.
96.
Surrogate modeling for large-scale black-box systems. Liem, Rhea Patricia. January 2007.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 105-110). / This research introduces a systematic method to reduce the complexity of large-scale black-box systems for which the governing equations are unavailable. For such systems, surrogate models are critical for many applications, such as Monte Carlo simulations; however, existing surrogate modeling methods often are not applicable, particularly when the dimension of the input space is very high. In this research, we develop a systematic approach to represent the high-dimensional input space of a large-scale system by a smaller set of inputs. This collection of representatives is called a multi-agent collective, forming a surrogate model in which an inexpensive computation replaces the original complex task. The mathematical criteria used to derive the collective aim to avoid overlap of characteristics between representatives, in order to achieve an effective surrogate model and avoid redundancy. The surrogate modeling method is demonstrated on a flight inventory that contains flight data corresponding to 82 aircraft types. Ten aircraft types are selected by the method to represent the full flight inventory for the computation of fuel burn estimates, yielding an error between outputs from the surrogate and full models of just 2.08%. The ten representative aircraft types are selected by first aggregating similar aircraft types into agents, and then selecting a representative aircraft type for each agent. In assessing the similarity between aircraft types, the characteristics of each aircraft type are determined from available flight data instead of solving the fuel burn computation model, which makes the assessment procedure inexpensive. / (cont.) Aggregation criteria are specified to quantify the similarity between aircraft types, together with a stringency that controls the tradeoff between the two competing objectives in the modeling: the number of representatives and the estimation error. The surrogate modeling results are compared to a model obtained via manual aggregation, that is, aggregation of aircraft types based on engineering judgment. The surrogate model derived using the systematic approach yields fewer representatives in the collective, giving a surrogate model with lower computational cost, while achieving better accuracy. Further, the systematic approach eliminates the subjectivity that is inherent in the manual aggregation method. The surrogate model is also applied to other flight inventories, yielding errors of similar magnitude to the case where the reference flight inventory is considered. / by Rhea Patricia Liem. / S.M.
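To make the aggregation idea concrete, here is a deliberately simplified greedy scheme in the same spirit (our illustration only; the thesis's actual aggregation criteria and stringency definition differ): items are scanned once, joining the closest existing agent if its representative lies within the stringency threshold, and otherwise founding a new agent.

```python
import numpy as np

def greedy_aggregate(features, stringency):
    """Greedy aggregation into agents. `features` is an (n, d) array of
    per-item characteristic vectors; a larger `stringency` tolerates more
    dissimilarity, giving fewer representatives but a coarser surrogate."""
    reps = []                                   # indices of agent representatives
    labels = np.empty(len(features), dtype=int)
    for i, f in enumerate(features):
        dists = [np.linalg.norm(f - features[r]) for r in reps]
        if dists and min(dists) <= stringency:
            labels[i] = int(np.argmin(dists))   # join the closest existing agent
        else:
            labels[i] = len(reps)               # found a new agent
            reps.append(i)
    return reps, labels
```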
97.
On the predictive capability and stability of rubber material models. Zheng, Haining. January 2008.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2008. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 99-101). / Due to the high non-linearity and incompressibility constraint of rubber materials, the predictive capability and stability of rubber material models require specific attention for practical engineering analysis. In this thesis, the predictive capability of various rubber material models, namely the Mooney-Rivlin, Arruda-Boyce, Ogden, and newly proposed Sussman-Bathe models, is investigated theoretically with continuum mechanics methods and tested numerically in various deformation situations using the finite element analysis software ADINA. In addition, a recently available stability criterion for rubber material models is re-derived and verified through numerical experiments for the above four models with ADINA. Thereafter, the predictive capability and stability of the material models are studied jointly for non-homogeneous deformations. The Mooney-Rivlin, Arruda-Boyce, and Ogden models have difficulty describing the uniaxial compression data, while the Sussman-Bathe model can fit both compression and extension data well; thus, the Sussman-Bathe model has the best predictive capability for pure shear deformations. Furthermore, with respect to more complex non-homogeneous deformations, we conclude that all three major deformations, namely uniaxial deformation, biaxial deformation, and pure shear deformation, must satisfy the stability criterion to obtain physically correct non-homogeneous simulation results. / by Haining Zheng. / S.M.
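For reference, the simplest of the models compared, the two-parameter incompressible Mooney-Rivlin model, posits the strain energy density (standard notation, not taken from the thesis):

```latex
W = C_{1}\,(\bar I_1 - 3) + C_{2}\,(\bar I_2 - 3),
```

where I1-bar and I2-bar are the first and second invariants of the isochoric left Cauchy-Green deformation tensor and C1, C2 are material constants fit to test data; the richer Ogden, Arruda-Boyce, and Sussman-Bathe models generalize this form in different ways.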
98.
Fairness and optimality in trading. Nguyen, Van Vinh. January 2010.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2010. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 50-51). / This thesis proposes a novel approach to the issues of efficiency and fairness that arise when multiple portfolios are rebalanced simultaneously. A fund manager who rebalances multiple portfolios needs not only to optimize total efficiency, i.e., maximize net risk-adjusted return, but also to guarantee that trading costs are fairly split among the clients. The existing approaches in the literature, namely the Social Welfare and Competitive Equilibrium schemes, do not trade off efficiency and fairness effectively. To this end, we suggest an approach that utilizes popular and well-accepted resource allocation ideas from the fields of communications and economics, such as Max-Min fairness, Proportional fairness, and α-fairness. We incorporate in our formulation a quadratic model of market impact cost to reflect the cumulative effect of trade pooling. Total trading costs are split fairly among accounts using the so-called pro rata scheme. We solve the resulting multi-objective optimization problem under the Max-Min fairness, Proportional fairness, and α-fairness schemes. Under these schemes, the resulting optimization problems have non-convex objectives and non-convex constraints, and are NP-hard in general. We solve them using a local search method based on linearization techniques, and assess the efficiency of this approach by comparing it with a deterministic global optimization method on small optimization problems with structure similar to those above. We present computational results for a small data set (2 funds, 73 assets) and a large one (6 funds, 73 assets). These results suggest that the solution obtained from our model provides a better compromise between efficiency and fairness than existing approaches. An important implication of our work is that, given a level of fairness we want to maintain, we can always find Pareto-efficient trade sets. / by Van Vinh Nguyen. / S.M.
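The α-fairness family named above unifies these schemes. In the standard formulation from the resource allocation literature (notation ours, not the thesis's), each account's allocation x contributes the utility

```latex
U_\alpha(x) =
\begin{cases}
\dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \ne 1,\\[4pt]
\log x, & \alpha = 1,
\end{cases}
```

and the planner maximizes the sum of utilities: α = 0 recovers the pure total-efficiency (Social Welfare) objective, α = 1 gives Proportional fairness, and the limit α → ∞ gives Max-Min fairness.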
99.
An analysis of the TR-BDF2 integration scheme / Analysis of the Trapezoidal Rule with the second order Backward Difference Formula integration scheme. Dharmaraja, Sohan. January 2007.
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / Includes bibliographical references (p. 75-76). / We aim to better understand how the combined L-stable 'Trapezoidal Rule with the second order Backward Difference Formula' (TR-BDF2) integrator and the standard A-stable Trapezoidal integrator perform on systems of coupled non-linear partial differential equations (PDEs). It was originally Professor Klaus-Jürgen Bathe who suggested that further analysis was needed in this area. We draw attention to numerical instabilities that arise from insufficient numerical damping in the Crank-Nicolson method (which is based on the Trapezoidal rule) and demonstrate how these problems can be rectified with the TR-BDF2 scheme. Several examples are presented, including an advection-diffusion-reaction (ADR) problem and the (chaotic) damped driven pendulum. We also briefly show how splitting methods can be coupled with the TR-BDF2 scheme and applied to the ADR equation, to take advantage of excellent modern explicit techniques for hyperbolic equations. / by Sohan Dharmaraja. / S.M.
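To fix notation, one TR-BDF2 step advances from t to t + h via a trapezoidal stage at t + gamma*h followed by a BDF2 stage, with gamma = 2 - sqrt(2) the usual choice. The sketch below is our own illustration using the standard coefficients, not code from the thesis; a generic nonlinear solver stands in for the implicit stage solves:

```python
import numpy as np
from scipy.optimize import fsolve

GAMMA = 2.0 - np.sqrt(2.0)  # lets both implicit stages reuse one Jacobian

def tr_bdf2_step(f, t, y, h):
    """One L-stable TR-BDF2 step from t to t + h.
    Stage 1: trapezoidal rule to t + GAMMA*h; Stage 2: BDF2 to t + h."""
    g = GAMMA
    yg = fsolve(lambda z: z - y - 0.5 * g * h * (f(t, y) + f(t + g * h, z)), y)
    c1 = 1.0 / (g * (2.0 - g))             # weight on the stage value
    c2 = (1.0 - g) ** 2 / (g * (2.0 - g))  # weight on the old value
    c3 = (1.0 - g) / (2.0 - g)             # weight on the new slope
    return fsolve(lambda z: z - c1 * yg + c2 * y - c3 * h * f(t + h, z), yg)

# Usage on a stiff scalar test problem y' = -50 (y - cos t):
f = lambda t, y: -50.0 * (y - np.cos(t))
t, y = 0.0, np.array([1.0])
for _ in range(10):
    y = tr_bdf2_step(f, t, y, 0.1)
    t += 0.1
```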
100.
Silence of the lamb waves. Benjamin, Rishon Robert. January 2017.
Thesis: S.M., Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 101-104). / Roll-to-Roll (R2R) manufacturing has seen great interest in the past decade due to the proliferation of personalized and wearable devices for monitoring a variety of biometrics. Given the sensitive nature of the potential applications of these sensors, the manufacturing throughput demanded, and the scale of the electrical components being manufactured, R2R flexible electronics manufacturing technologies require new sensing and measurement capabilities for defect detection and process control. The work presented herein investigates the use of ultrasound, specifically Lamb and longitudinal waves, as a sensing modality and measurement technique for thin-film R2R manufacturing substrates. Contact (transducer-based) and non-contact (photoacoustic) generation methods, along with deterministic and probabilistic tomographic reconstruction algorithms, were implemented to evaluate their suitability for non-destructive evaluation (NDE) and in-line control of surface additions on 76 μm aluminum and polyethylene terephthalate (PET) films. The ultrasonic waves were used to ascertain properties of these substrates such as substrate thickness, applied load, the presence of defects (holes/cracks), defect size, the presence of surface features (fluid drops, multi-layer structures), and the nature of surface features (differing chemistries). Because surface features alter the behavior of sound waves, those features may then be imaged to create tomographic maps. The results presented show that, currently, a quasi-contact acoustic generation scheme can successfully image defects and surface features on the order of 1 mm. Furthermore, the algorithm is able to distinguish qualitatively between surface features of differing physicochemical properties. The author hopes that the information collected in this thesis will become part of a rich data set contributing to advanced machine-learning frameworks for predictive maintenance, failure, and process control analysis for the R2R process. / by Rishon Robert Benjamin. / S.M.
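For context, Lamb waves are guided plate modes whose phase velocity depends on frequency through the classical Rayleigh-Lamb equations (standard textbook notation, not drawn from the thesis): for a plate of thickness 2h,

```latex
\frac{\tan(qh)}{\tan(ph)} = -\frac{4k^{2}pq}{(q^{2}-k^{2})^{2}}
\quad \text{(symmetric modes)}, \qquad
\frac{\tan(qh)}{\tan(ph)} = -\frac{(q^{2}-k^{2})^{2}}{4k^{2}pq}
\quad \text{(antisymmetric modes)},
```

with p^2 = omega^2/c_L^2 - k^2, q^2 = omega^2/c_T^2 - k^2, wavenumber k, and longitudinal and transverse bulk speeds c_L and c_T. It is this dispersive, thickness-sensitive behavior that makes Lamb waves attractive for probing thin films and their surface additions.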