891

Multi-period optimal network flow and pricing strategy for commodity online retailer

Wang, Jie, S.M. Massachusetts Institute of Technology January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / Cataloged from PDF version of thesis. / Includes bibliographical references (p. 65). / This thesis aims to study the network of a nationwide distributor of a commodity product. As we cannot disclose the actual product for competitive reasons, we present the research in terms of a similar, representative product, namely salt for ice prevention across the United States. The distribution network includes four kinds of nodes: sources, buffer locations at sources, storage points, and demand regions. It also includes four types of arcs: from sources to buffer locations and to storage points, from buffer locations to storage points, and from storage points to demand regions. The goal is to maximize the total gross margin subject to a set of supply, demand, and inventory constraints. In this thesis, we establish two mathematical models to achieve this goal. The first is a basic model that identifies the optimal flows along the arcs over time by treating product prices and market demand as fixed parameters. The model is built in OPL and solved with CPLEX. We then carry out numerical analyses and tests to validate the correctness of the model and demonstrate its utility. The second is an advanced model that treats product prices and market demand as additional decision variables. The product price and market demand are related by an exponential function, which makes the model difficult to solve with available commercial solver codes. We therefore propose several algorithms to reduce the computational complexity of the model so that it can be solved with CPLEX. Finally, we compare the algorithms to identify the best one. We provide additional numerical tests to show the benefit of including the pricing decisions along with the optimization of the network flows. / by Jie Wang. / S.M.
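The basic model described above is, in essence, a time-expanded network-flow linear program. As a rough, hypothetical illustration of that structure (not the thesis's OPL/CPLEX model), a single-period toy version with made-up nodes, costs, and capacities might look like the following, solved with the open-source PuLP/CBC stack as a stand-in for CPLEX:

```python
# Hypothetical single-period gross-margin maximization over a tiny
# source -> demand-region network. All names, prices, costs, and capacities
# are illustrative; the thesis model adds buffer/storage nodes, multiple
# periods, and inventory balance constraints.
import pulp

sources = {"S1": 500, "S2": 400}                 # supply capacity (tons)
demands = {"D1": 300, "D2": 350}                 # regional demand (tons)
price = {"D1": 120.0, "D2": 110.0}               # selling price per ton
cost = {("S1", "D1"): 20.0, ("S1", "D2"): 35.0,
        ("S2", "D1"): 40.0, ("S2", "D2"): 25.0}  # sourcing + transport cost

prob = pulp.LpProblem("gross_margin", pulp.LpMaximize)
x = {a: pulp.LpVariable(f"x_{a[0]}_{a[1]}", lowBound=0) for a in cost}

# Gross margin = revenue at demand regions minus arc costs.
prob += pulp.lpSum((price[j] - cost[(i, j)]) * x[(i, j)] for (i, j) in cost)

for i, cap in sources.items():                   # supply constraints
    prob += pulp.lpSum(x[(i, j)] for j in demands) <= cap
for j, dem in demands.items():                   # cannot sell more than demand
    prob += pulp.lpSum(x[(i, j)] for i in sources) <= dem

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: x[a].value() for a in cost}, pulp.value(prob.objective))
```

The multi-period model in the thesis additionally links consecutive periods through inventory constraints at the buffer and storage nodes.
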
892

Robust option pricing : An [epsilon]-arbitrage approach

Chen, Si, S.M. Massachusetts Institute of Technology January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / In title on title-page, "[epsilon]" appears as the lower case Greek letter. Cataloged from PDF version of thesis. / Includes bibliographical references (p. 59-60). / This research aims to provide tractable approaches to pricing options using robust optimization. The pricing problem is reduced to the problem of identifying the replicating portfolio that minimizes the worst-case arbitrage possible for a given uncertainty set on the underlying asset returns. We construct the corresponding uncertainty sets based on different levels of investor risk aversion and make no assumptions about specific probability distributions of asset returns. The most significant benefits of our approach are (a) computational tractability, illustrated by our ability to price multi-dimensional options, and (b) modeling flexibility, illustrated by our ability to model the "volatility smile". Specifically, we report extensive computational results that provide empirical evidence that the "implied volatility smile" observed in practice arises from different levels of risk aversion for different strikes. We are able to capture this phenomenon by finding the appropriate risk aversion as a function of the strike price. Besides European-style options, which have a fixed exercise date, our method can also be adapted to price American-style options, which can be exercised early. We also show the applicability of this pricing method to exotic and multi-dimensional options; in particular, we provide formulations to price Asian options, lookback options, and index options. These prices are compared with market prices, and we observe close matches when we use our formulations with appropriate uncertainty sets constructed from market-implied risk aversion. / by Si Chen. / S.M.
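As a loose sketch of the epsilon-arbitrage idea (not the thesis's formulation), a single-period version can be posed as a linear program: choose a cash-and-stock replicating portfolio that minimizes the worst-case replication error over a finite uncertainty set of returns, then read off the option price as the cost of that portfolio. All numbers below are illustrative.

```python
# Hypothetical single-period epsilon-arbitrage sketch for a European call:
# minimize the worst-case absolute replication error over a discretized
# uncertainty set of one-period returns.
import numpy as np
from scipy.optimize import linprog

S0, K, rf = 100.0, 100.0, 0.01
returns = np.linspace(-0.15, 0.15, 31)        # illustrative uncertainty set

# Decision variables: [cash x0, shares x1, worst-case error t]; minimize t.
c = np.array([0.0, 0.0, 1.0])
A_ub, b_ub = [], []
for r in returns:
    S1 = S0 * (1.0 + r)
    payoff = max(S1 - K, 0.0)
    # |x0*(1+rf) + x1*S1 - payoff| <= t  expressed as two linear inequalities
    A_ub.append([(1 + rf), S1, -1.0]);   b_ub.append(payoff)
    A_ub.append([-(1 + rf), -S1, -1.0]); b_ub.append(-payoff)

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)], method="highs")
x0, x1, t = res.x
print("replicating portfolio cost (price):", x0 + x1 * S0, "worst-case error:", t)
```
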
893

A linear multigrid preconditioner for the solution of the Navier-Stokes equations using a discontinuous Galerkin discretization

Diosady, Laslo Tibor January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 69-72). / A Newton-Krylov method is developed for the solution of the steady compressible Navier-Stokes equations using a Discontinuous Galerkin (DG) discretization on unstructured meshes. An element Line-Jacobi preconditioner is presented which solves a block tridiagonal system along lines of maximum coupling in the flow. An incomplete block-LU factorization (Block-ILU(0)) is also presented as a preconditioner, where the factorization is performed using a reordering of elements based upon the lines of maximum coupling used for the element Line-Jacobi preconditioner. This reordering is shown to be far superior to standard reordering techniques (Nested Dissection, One-way Dissection, Quotient Minimum Degree, Reverse Cuthill-McKee), especially for viscous test cases. The Block-ILU(0) factorization is performed in place, and a novel algorithm is presented for the application of the linearization which reduces both the memory and CPU time over the traditional dual matrix storage format. A linear p-multigrid algorithm using element Line-Jacobi and Block-ILU(0) smoothing is presented as a preconditioner to GMRES. / (cont.) The coarse-level Jacobians are obtained using a simple Galerkin projection which is shown to closely approximate the linearization of the restricted problem except for perturbations due to artificial dissipation terms introduced for shock capturing. The linear multigrid preconditioner is shown to significantly improve convergence in terms of the number of linear iterations as well as to reduce the total CPU time required to obtain a converged solution. A parallel implementation of the linear multigrid preconditioner is presented and a grid repartitioning strategy is developed to ensure scalable parallel performance. / by Laslo Tibor Diosady. / S.M.
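For readers unfamiliar with the general pattern, the following is a small generic sketch (not the thesis's DG solver) of using an incomplete-LU factorization as a preconditioner for GMRES with SciPy; the matrix is a made-up 1D Laplacian, and SuperLU's incomplete factorization stands in for the Block-ILU(0) described above.

```python
# Generic incomplete-LU preconditioned GMRES sketch with SciPy.
# The matrix and right-hand side are illustrative, not a DG Jacobian.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")  # 1D Laplacian
b = np.ones(n)

ilu = spla.spilu(A)                                  # incomplete LU factors
M = spla.LinearOperator(A.shape, ilu.solve)          # preconditioner action

x, info = spla.gmres(A, b, M=M)
print("converged:", info == 0, "residual:", np.linalg.norm(b - A @ x))
```
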
894

Runge-Kutta Discontinuous Galerkin method for the Boltzmann equation / RKDG method for the Boltzmann equation

Lui, Ho Man January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 85-87). / In this thesis we investigate the ability of the Runge-Kutta Discontinuous Galerkin (RKDG) method to provide accurate and efficient solutions of the Boltzmann equation. Solutions of the Boltzmann equation are desirable in connection to small scale science and technology because when characteristic flow length scales become of the order of, or smaller than, the molecular mean free path, the Navier-Stokes description fails. The prevalent Boltzmann solution method is a stochastic particle simulation scheme known as Direct Simulation Monte Carlo (DSMC). Unfortunately, DSMC is not very effective in low speed flows (typical of small scale devices of interest) because of the high statistical uncertainty associated with the statistical sampling of macroscopic quantities employed by this method. This work complements the recent development of an efficient low noise method for calculating the collision integral of the Boltzmann equation, by providing a high-order discretization method for the advection operator balancing the collision integral in the Boltzmann equation. One of the most attractive features of the RKDG method is its ability to combine high-order accuracy, both in physical space and time, with the ability to capture discontinuous solutions. / (cont.) The validity of this claim is thoroughly investigated in this thesis. It is shown that, for a model collisionless Boltzmann equation, high-order accuracy can be achieved for continuous solutions; whereas for discontinuous solutions, the RKDG method, with or without the application of a slope limiter such as a viscosity limiter, displays high-order accuracy away from the vicinity of the discontinuity. Given these results, we developed a RKDG solution method for the Boltzmann equation by formulating the collision integral as a source term in the advection equation. Solutions of the Boltzmann equation, in the form of mean velocity and shear stress, are obtained for a number of characteristic flow length scales and compared to DSMC solutions. With a small number of elements and a low order of approximation in physical space, the RKDG method achieves similar results to the DSMC method. When the characteristic flow length scale is small compared to the mean free path (i.e. when the effect of collisions is small), oscillations are present in the mean velocity and shear stress profiles when a coarse velocity space discretization is used. With a finer velocity space discretization, the oscillations are reduced, but the method becomes approximately five times more computationally expensive. / (cont.) We show that these oscillations (due to the presence of propagating discontinuities in the distribution function) can be removed using a viscosity limiter at significantly smaller computational cost. / by Ho Man Lui. / S.M.
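As a minimal illustration of the time-marching structure described above, with the collision integral treated as a source term alongside the advection operator, the sketch below applies a third-order SSP Runge-Kutta step to a semi-discrete equation df/dt = R(f). Both operators here are crude placeholders, not the thesis's DG discretization or collision integral.

```python
# Placeholder semi-discrete operators and an SSP-RK3 step, illustrating
# "advection + collision source" time marching. Not the thesis's RKDG scheme.
import numpy as np

def advection(f, dx=0.01, v=1.0):
    # placeholder first-order upwind advection, standing in for the DG operator
    return -v * (f - np.roll(f, 1)) / dx

def collision(f, tau=0.1):
    # placeholder BGK-style relaxation toward the cell-average value
    return (f.mean() - f) / tau

def ssp_rk3_step(f, dt):
    rhs = lambda g: advection(g) + collision(g)
    f1 = f + dt * rhs(f)
    f2 = 0.75 * f + 0.25 * (f1 + dt * rhs(f1))
    return f / 3.0 + 2.0 / 3.0 * (f2 + dt * rhs(f2))

f = np.exp(-((np.linspace(0, 1, 100) - 0.5) ** 2) / 0.01)   # initial condition
for _ in range(50):
    f = ssp_rk3_step(f, dt=0.005)                            # CFL ~ 0.5
```
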
895

Machine learning algorithm performance optimization: solving issues of big data analysis

Sohangir, Soroosh 01 December 2015 (has links) (PDF)
Because of the high time and space complexity involved, generating machine learning models for big data is difficult. This research introduces a novel approach to optimizing the performance of learning algorithms, with a particular focus on big data manipulation. To implement this method, a machine learning platform incorporating eighteen machine learning algorithms is built. This platform is tested on four different use cases, and the results are illustrated and analyzed.
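A hypothetical sketch of the platform idea, benchmarking several learning algorithms on one dataset under a common interface, is shown below; scikit-learn and this tiny dataset are used purely for illustration and are not the platform or use cases described in the abstract.

```python
# Benchmark a few classifiers under a common cross-validation harness.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
models = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "svm_rbf": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)       # 5-fold accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```
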
896

A Parsimonious Two-Way Shooting Algorithm for Connected Automated Traffic Smoothing

Zhou, Fang 14 August 2015 (has links)
Advanced connected and automated vehicle technologies offer new opportunities for highway traffic smoothing by optimizing automated vehicle trajectories. As one of the pioneering attempts, this study proposes an efficient trajectory optimization algorithm that can simultaneously improve a range of performance measures for a platoon of vehicles on a signalized highway section. This optimization is centered on a novel shooting heuristic (SH) for trajectory construction that considers realistic constraints including vehicle kinematic limits, traffic arrival patterns, car-following safety, and signal operations. SH has a very parsimonious structure (e.g., only four acceleration parameters) and very low computational complexity. Therefore, it is suitable for real-time applications once the relevant technologies are in place in the near future. This study lays a solid foundation for devising holistic cooperative control strategies on a general transportation network with emerging technologies.
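A simplified, hypothetical illustration of the "shooting" ingredient is sketched below: given one acceleration level (SH uses only a handful of such parameters), a trajectory is integrated forward under kinematic limits. The actual heuristic stitches several such segments together to satisfy signal timing and car-following constraints.

```python
# Forward-integrate a vehicle trajectory for one constant acceleration command,
# capped at a speed limit. Values are illustrative, not from the paper.
import numpy as np

def shoot(x0, v0, a, v_max, dt=0.1, horizon=30.0):
    """Position/speed profile under a constant acceleration, clipped to [0, v_max]."""
    steps = int(horizon / dt)
    x, v = np.empty(steps + 1), np.empty(steps + 1)
    x[0], v[0] = x0, v0
    for k in range(steps):
        v[k + 1] = np.clip(v[k] + a * dt, 0.0, v_max)
        x[k + 1] = x[k] + 0.5 * (v[k] + v[k + 1]) * dt   # trapezoidal update
    return x, v

# Example: decelerate toward a red signal, then a separate segment accelerates.
x_dec, v_dec = shoot(x0=0.0, v0=15.0, a=-1.5, v_max=20.0)
x_acc, v_acc = shoot(x0=x_dec[-1], v0=v_dec[-1], a=1.0, v_max=20.0)
```
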
897

Sensitivity Analysis and Parameter Estimation for the APEX Model on Runoff, Sediments and Phosphorus

Jiang, Yi 09 December 2016 (has links)
Sensitivity analysis is essential for hydrologic models: it helps gain insight into a model's behavior and assess the model structure and conceptualization. Parameter estimation in distributed hydrologic models is difficult due to their high-dimensional parameter spaces; sensitivity analysis identifies the influential and non-influential parameters in the modeling process and thus benefits the calibration process. This study identified, applied, and evaluated two sensitivity analysis methods for the APEX model. The screening methods, the Morris method and the LH-OAT method, were applied at the experimental site in North Carolina for modeling runoff, sediment loss, and TP and DP losses. Before the main application, a run-number evaluation was conducted for the Morris method; the result suggested that 2760 runs were sufficient for 45 input parameters to obtain reliable sensitivity results. Sensitivity results for the five management scenarios at the study site indicated that the Morris method and the LH-OAT method provided similar results on the sensitivity of the input parameters, except for differences in the importance of PARM2, PARM8, PARM12, PARM15, PARM20, PARM49, PARM76, PARM81, PARM84, and PARM85. Across the five management scenarios, the most influential parameters were consistent in most cases, such as PARM23, PARM34, and PARM84, and the "sensitive" parameters overlapped well between scenarios. In addition, little variation was observed in the importance of the sensitive parameters across scenarios, such as PARM26. Optimization with the most influential parameters from the sensitivity analysis greatly improved APEX modeling performance in all scenarios as measured by the objective functions PI1, NSE, and GLUE.
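As a small illustration of Morris elementary-effects screening (assuming the SALib package is available), the sketch below ranks three made-up parameters of a toy function by mu*; the study applies the same idea to 45 APEX parameters driven by full model runs.

```python
# Morris screening on a toy 3-parameter function with SALib (illustrative only).
import numpy as np
from SALib.sample.morris import sample
from SALib.analyze import morris

problem = {
    "num_vars": 3,
    "names": ["PARM_a", "PARM_b", "PARM_c"],      # hypothetical parameter names
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

def toy_model(x):                                  # stand-in for an APEX run
    return 4.0 * x[0] + 0.5 * x[1] ** 2 + 0.01 * x[2]

X = sample(problem, N=100, num_levels=4)           # Morris trajectories
Y = np.array([toy_model(row) for row in X])
Si = morris.analyze(problem, X, Y, num_levels=4)
for name, mu_star in zip(problem["names"], Si["mu_star"]):
    print(f"{name}: mu* = {mu_star:.3f}")          # higher mu* = more influential
```
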
898

Development of an integrated suite of methods to reduce computational effort in groundwater modeling validation and testing

Pettway, Jacqueline 01 May 2010 (has links)
A suite of tools to reduce the computational effort in groundwater modeling validation and testing has been developed. The work herein explores reduction of computational effort via smart adaptive meshing, optimization techniques that require fewer model calls, and the development of surrogate models. Adaptive meshing reduces the computational domain by allowing mesh refinement in areas of interest determined dynamically by the model through error indicators, instead of requiring a priori knowledge or a posteriori determination and rebuilding of the computational domain. As the areas of interest change with the physics, the refinement is removed (unrefinement) to lower computational time. The computational time for dynamic mesh adaption versus uniform refinement is orders of magnitude smaller. Further reduction in computational time may be required, especially when using parameter estimation techniques that require on the order of 2n computations, where n is the number of parameters being estimated. A demonstration of the usefulness of parameter estimation techniques is given, followed by a discussion of methods to further reduce computational time. It may also be necessary to look at reduced-physics methods to further lower the cost of the physics-based model. Surrogate models, such as proper orthogonal decomposition (POD), greatly reduce the computational time while maintaining the most important aspects of the physics being solved. The idea is to run the full model, create the POD basis, and then use this basis to run parameter estimation. Once a better fit has been determined, the full model is run again to capture the full-physics results. The technique is repeated as necessary to capture the "best" parameters to numerically represent the observed behavior.
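A compact sketch of building a POD basis from model snapshots via the singular value decomposition is given below; the snapshot data are synthetic stand-ins for full groundwater-model runs.

```python
# POD basis from snapshots: keep the dominant left singular vectors and
# project/reconstruct a full-model state. Snapshot data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_snapshots = 1000, 40
snapshots = np.outer(np.sin(np.linspace(0, np.pi, n_nodes)),
                     rng.random(n_snapshots)) \
            + 0.01 * rng.standard_normal((n_nodes, n_snapshots))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1      # modes capturing 99.9% energy
basis = U[:, :r]                                 # reduced POD basis

full_state = snapshots[:, 0]                     # a "full model" state
reduced_coeffs = basis.T @ full_state            # project onto the POD basis
reconstruction = basis @ reduced_coeffs          # reduced-order reconstruction
print("modes kept:", r, "relative error:",
      np.linalg.norm(full_state - reconstruction) / np.linalg.norm(full_state))
```
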
899

Dynamic control of a tidal hydro-electric plant

Kerr, Wayne R. January 1974 (has links)
No description available.
900

Computer aided optimization of non-equally spaced linear arrays.

Lau, Honkan January 1971 (has links)
No description available.
