  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
221

Robust option pricing : An [epsilon]-arbitrage approach

Chen, Si, S.M. Massachusetts Institute of Technology January 2009 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2009. / In title on title-page, "[epsilon]" appears as the lower case Greek letter. Cataloged from PDF version of thesis. / Includes bibliographical references (p. 59-60). / This research aims to provide tractable approaches to price options using robust optimization. The pricing problem is reduced to identifying the replicating portfolio that minimizes the worst-case arbitrage possible for a given uncertainty set on underlying asset returns. We construct the corresponding uncertainty sets based on different levels of investor risk aversion and make no assumptions about specific probability distributions of asset returns. The most significant benefits of our approach are (a) computational tractability, illustrated by our ability to price multi-dimensional options, and (b) modeling flexibility, illustrated by our ability to model the "volatility smile". Specifically, we report extensive computational results that provide empirical evidence that the "implied volatility smile" observed in practice arises from different levels of risk aversion for different strikes. We capture this phenomenon by identifying the appropriate risk aversion as a function of the strike price. Besides European-style options, which have a fixed exercise date, our method can also be adapted to price American-style options, which can be exercised early. We also show the applicability of this pricing method to exotic and multi-dimensional options; in particular, we provide formulations to price Asian options, lookback options, and index options. These prices are compared with market prices, and we observe close matches when we use our formulations with appropriate uncertainty sets constructed based on market-implied risk aversion. / by Si Chen. / S.M.
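As a rough illustration of the worst-case-replication idea described in this abstract (not the thesis's own formulation), the sketch below prices a one-period European call by choosing a stock/bond portfolio that minimizes the maximum absolute replication error over a finite uncertainty set of returns; all numerical values are assumed for illustration.

```python
import numpy as np
from scipy.optimize import linprog

S0, K, rf = 100.0, 100.0, 0.01            # spot, strike, one-period risk-free rate (assumed)
returns = np.linspace(-0.20, 0.20, 41)    # finite uncertainty set of one-period stock returns

# Decision variables x = [a, b, t]: shares of stock, units of bond, worst-case replication error.
c = np.array([0.0, 0.0, 1.0])             # minimize the worst-case error t
A_ub, b_ub = [], []
for r in returns:
    payoff = max(S0 * (1 + r) - K, 0.0)              # call payoff in this return scenario
    hedge = np.array([S0 * (1 + r), 1.0 + rf])       # terminal value of one share / one bond unit
    A_ub.append(np.append(hedge, -1.0)); b_ub.append(payoff)     # hedge - payoff <= t
    A_ub.append(np.append(-hedge, -1.0)); b_ub.append(-payoff)   # payoff - hedge <= t
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None), (None, None), (0, None)])
a, b, eps = res.x
print(f"robust price ~ {a * S0 + b:.2f}, worst-case replication error {eps:.4f}")
```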
222

A linear multigrid preconditioner for the solution of the Navier-Stokes equations using a discontinuous Galerkin discretization

Diosady, Laslo Tibor January 2007 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2007. / This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. / Includes bibliographical references (p. 69-72). / A Newton-Krylov method is developed for the solution of the steady compressible Navier-Stokes equations using a Discontinuous Galerkin (DG) discretization on unstructured meshes. An element Line-Jacobi preconditioner is presented which solves a block-tridiagonal system along lines of maximum coupling in the flow. An incomplete block-LU factorization (Block-ILU(0)) is also presented as a preconditioner, where the factorization is performed using a reordering of elements based upon the lines of maximum coupling used for the element Line-Jacobi preconditioner. This reordering is shown to be far superior to standard reordering techniques (Nested Dissection, One-way Dissection, Quotient Minimum Degree, Reverse Cuthill-McKee), especially for viscous test cases. The Block-ILU(0) factorization is performed in place, and a novel algorithm is presented for the application of the linearization which reduces both memory usage and CPU time relative to the traditional dual matrix storage format. A linear p-multigrid algorithm using element Line-Jacobi and Block-ILU(0) smoothing is presented as a preconditioner to GMRES. / (cont.) The coarse-level Jacobians are obtained using a simple Galerkin projection, which is shown to closely approximate the linearization of the restricted problem except for perturbations due to artificial dissipation terms introduced for shock capturing. The linear multigrid preconditioner is shown to significantly improve convergence in terms of the number of linear iterations as well as to reduce the total CPU time required to obtain a converged solution. A parallel implementation of the linear multigrid preconditioner is presented, and a grid repartitioning strategy is developed to ensure scalable parallel performance. / by Laslo Tibor Diosady. / S.M.
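For readers unfamiliar with ILU-preconditioned Krylov solvers, the generic sketch below shows the basic pattern with SciPy's scalar incomplete LU and GMRES on a toy sparse system; it is only a stand-in and does not implement the element-block ILU(0) with line-based reordering or the p-multigrid preconditioner developed in the thesis.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # toy sparse operator
b = np.ones(n)

ilu = spla.spilu(A)                                  # incomplete LU factorization of A
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # action of the preconditioner M^{-1}
x, info = spla.gmres(A, b, M=M, restart=30)          # preconditioned GMRES
print("gmres info:", info, "| residual norm:", np.linalg.norm(A @ x - b))
```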
223

Runge-Kutta Discontinuous Galerkin method for the Boltzmann equation / RKDG method for the Boltzmann equation

Lui, Ho Man January 2006 (has links)
Thesis (S.M.)--Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2006. / Includes bibliographical references (p. 85-87). / In this thesis we investigate the ability of the Runge-Kutta Discontinuous Galerkin (RKDG) method to provide accurate and efficient solutions of the Boltzmann equation. Solutions of the Boltzmann equation are desirable in connection with small-scale science and technology because when characteristic flow length scales become of the order of, or smaller than, the molecular mean free path, the Navier-Stokes description fails. The prevalent Boltzmann solution method is a stochastic particle simulation scheme known as Direct Simulation Monte Carlo (DSMC). Unfortunately, DSMC is not very effective in low-speed flows (typical of the small-scale devices of interest) because of the high statistical uncertainty associated with the sampling of macroscopic quantities employed by this method. This work complements the recent development of an efficient low-noise method for calculating the collision integral of the Boltzmann equation by providing a high-order discretization method for the advection operator balancing the collision integral in the Boltzmann equation. One of the most attractive features of the RKDG method is its ability to combine high-order accuracy, both in physical space and time, with the ability to capture discontinuous solutions. / (cont.) The validity of this claim is thoroughly investigated in this thesis. It is shown that, for a model collisionless Boltzmann equation, high-order accuracy can be achieved for continuous solutions, whereas for discontinuous solutions the RKDG method, with or without the application of a slope limiter such as a viscosity limiter, displays high-order accuracy away from the vicinity of the discontinuity. Given these results, we developed an RKDG solution method for the Boltzmann equation by formulating the collision integral as a source term in the advection equation. Solutions of the Boltzmann equation, in the form of mean velocity and shear stress, are obtained for a number of characteristic flow length scales and compared to DSMC solutions. With a small number of elements and a low order of approximation in physical space, the RKDG method achieves results similar to the DSMC method. When the characteristic flow length scale is small compared to the mean free path (i.e., when the effect of collisions is small), oscillations are present in the mean velocity and shear stress profiles when a coarse velocity space discretization is used. With a finer velocity space discretization, the oscillations are reduced, but the method becomes approximately five times more computationally expensive. / (cont.) We show that these oscillations (due to the presence of propagating discontinuities in the distribution function) can be removed using a viscosity limiter at significantly smaller computational cost. / by Ho Man Lui. / S.M.
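The method-of-lines structure referred to above, an explicit Runge-Kutta step advancing an advection operator balanced by a collision source term, can be sketched as follows; the first-order upwind operator and the BGK-style relaxation below are simple placeholders for the DG residual and the low-noise collision integral, not the thesis's discretization, and all parameter values are assumed.

```python
import numpy as np

nx = 200
v, dx, tau = 1.0, 1.0 / nx, 0.1          # advection speed, cell size, relaxation time (assumed)
dt = 0.4 * dx / v                        # CFL-limited time step
x = np.linspace(0.0, 1.0, nx, endpoint=False)
f = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)    # discontinuous initial data

def rhs(f):
    adv = -v * (f - np.roll(f, 1)) / dx           # first-order upwind advection (periodic)
    f_eq = np.full_like(f, f.mean())              # crude "equilibrium" for the BGK-style stand-in
    return adv + (f_eq - f) / tau                 # advection balanced by a collision source term

def ssp_rk3(f, dt):                               # Shu-Osher SSP Runge-Kutta (third-order) step
    f1 = f + dt * rhs(f)
    f2 = 0.75 * f + 0.25 * (f1 + dt * rhs(f1))
    return f / 3.0 + 2.0 / 3.0 * (f2 + dt * rhs(f2))

for _ in range(200):
    f = ssp_rk3(f, dt)
print("total mass (conserved by both terms):", float(f.sum() * dx))
```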
224

Variational and adaptive non-local image denoising using edge detection and k-means clustering

Mujahid, Shiraz 12 May 2023 (has links) (PDF)
With the increased presence of image-based data in modern applications, the need for robust methods of image denoising grows greater. The work presented herein considers two of the most ubiquitous approaches towards image denoising: variational and non-local methods. The effectiveness of these methods is assessed quantitatively using the peak signal-to-noise ratio and structural similarity index measure metrics. This study employs k-means clustering, an unsupervised machine learning algorithm, to isolate the most dominant cluster centroids within the incoming data and proposes the introduction of a new adaptive parameter into the non-local means framework. Motivated by the fact that a majority of discrepancies between clean and denoised images occur at feature edges, this study examines several convolution-based edge detection methods to isolate relevant features. The resultant gradient and edge information is used to further parameterize the k-means non-local method. An additional hybrid method involving the combined contributions of variational and k-means non-local denoising is proposed, with the weighting determined by edge intensities. This method outperforms the other methods outlined in the study, both conventional and newly presented.
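A minimal sketch of the edge-weighted hybrid blend mentioned in this abstract is given below, using off-the-shelf TV and non-local means denoisers from scikit-image; it omits the k-means adaptive parameter, and all weights and parameter values are illustrative assumptions rather than those tuned in the thesis.

```python
import numpy as np
from skimage import data, util, filters, restoration, metrics

clean = util.img_as_float(data.camera())
noisy = util.random_noise(clean, mode="gaussian", var=0.01)

tv = restoration.denoise_tv_chambolle(noisy, weight=0.1)                            # variational branch
nlm = restoration.denoise_nl_means(noisy, patch_size=5, patch_distance=6, h=0.08)   # non-local branch

edges = filters.sobel(noisy)
w = edges / (edges.max() + 1e-12)        # edge intensity normalised to [0, 1]
hybrid = w * tv + (1.0 - w) * nlm        # illustrative choice: lean on TV near edges, NLM in flat regions

for name, img in [("TV", tv), ("NLM", nlm), ("hybrid", hybrid)]:
    print(name, "PSNR:", round(metrics.peak_signal_noise_ratio(clean, img), 2))
```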
225

The Design and Implementation of a Simple Incremental Assembler on the Hewlett Packard 2100A Computer

Forrester, James Alan 05 1900 (has links)
The basic concepts of batch, conversational, and incremental computing are presented, along with a brief discussion of their influence on computing. The design and implementation considerations for the assembly-language implementation of a simple incremental assembler are presented. An assembler that accepts simple assembly language programs, scanning them as they are received and assembling them into machine code, has been implemented on the Hewlett Packard 2100A computer and is discussed in full detail. The assembler has been designed to execute incomplete programs so that debugging printouts of registers and specified core locations are possible. The assembler also provides an editor to perform delete, insert, and replace operations on user programs input to the assembler. The assembler is oriented toward the naive user, but assumes the user has some knowledge of assembly language programming. Important considerations in writing interactive programs are also discussed. / Thesis / Master of Science (MSc)
226

A quantitative approach to patient risk assessment and safety optimization in intensive care units

Hu, Yiqun (Computational design expert) Massachusetts Institute of Technology January 2017 (has links)
Thesis: S. M. in Computation for Design and Optimization, Massachusetts Institute of Technology, Computation for Design and Optimization Program, 2017. / Cataloged from PDF version of thesis. / Includes bibliographical references (pages 101-104). / Health care quality and patient safety have gained an increasing amount of attention over the past two decades. The quality of care nowadays refers not only to the successful cure of disease but to a much broader concept involving the health care community, the inter-relationships among care providers, patients, and families, efficiency, humanity, and satisfaction. Intensive care units (ICUs) typically admit and care for the most clinically complex patients. While much effort has been put into patient safety improvement, the critical care system still continues to see many human errors occur each day, despite the fact that the people who work in such environments have received exceptional training. Traditional interventions to mitigate patient harm events in the ICU generally focus on individual harms and greatly underestimate the overall risk patients face during their stay. This thesis aims to establish a new framework that more accurately accounts for patient risk and is capable of providing recommendations for operational decision making when launching intervention strategies that improve care quality and patient safety. Our approach is based on theories regarding the underlying causes of human error and on a systems engineering as well as an analytics perspective. We use various statistical methodologies to produce rigorous but clinically intuitive insights. The core concept is to study and exploit how system-level conditions, including both human and environmental factors, affect the likelihood of harm events in the ICU. These insights can be used to reduce patient harm and promote patient safety by eliminating unfavorable conditions that are most strongly correlated with these events, or by promoting safe conditions. We first create a quantitative metric to assess the total burden of harm that patients face, including both high-frequency harms, which are typically measured in ICUs today, and harms that can bring highly negative outcomes to the patient but are ignored because of their low frequency. It is an aggregated measure that aims to reflect the true risk level in the ICUs. Then, unlike the traditional approach of targeting intervention strategies at specific harms, we rely on the concept of risk drivers, which describe relevant ICU system conditions, and investigate which drivers affect the probability of harm events in the ICU. These conditions are defined as Risky States and are suggested by the model for elimination in order to avoid a variety of consequent risks and improve patient safety. The underlying assumption is that the same risk drivers (risky states) may affect many harms. Finally, we propose a new ensemble statistical learning algorithm based on regression trees that is not only powerful in examining the relationship between drivers and outcomes but also descriptive in defining the risky states. The framework was applied to retrospective data from 2012 and 2013 from 9 ICUs at the Beth Israel Deaconess Medical Center (BIDMC), with both clinical and administrative records of more than ten thousand patients.
Based on our analysis, we see strong evidence that system conditions are associated with harm events; these include, for example, ICU patient flow (e.g., how many patients are admitted to and discharged from a unit), patient acuity level, nurse workload, and unit service type. The model is capable of providing insights such as "when a medical unit has more than 3 newly admitted patients during a day shift, its risk level is approximately 35% higher than the average day-shift risk level in medical units," which can motivate decisions such as assigning a new patient to another medical unit when the current one has already admitted 3 patients during the shift, in order to keep the risky state from occurring. The model output was further presented to BIDMC experts for validation from a clinical perspective. It is also being implemented and integrated into the BIDMC ICU tablet application to provide guidance to ICU staff as an alerting system. The Risky State framework is unique in its innovative approach to assessing patient risk and in its capability to offer leverage for overall patient safety improvement, while at the same time being designed to be compatible with and transferable across different hospital settings. / by Yiqun Hu. / S. M. in Computation for Design and Optimization
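As an illustration only, the sketch below fits a generic regression-tree ensemble to synthetic shift-level data (hypothetical features and harm rates, not BIDMC records) to show how a "risky state" such as a high number of new admissions can surface as elevated predicted risk; it is not the ensemble algorithm proposed in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
admits = rng.poisson(2.0, n)                  # new admissions during the shift (hypothetical)
workload = rng.uniform(0.5, 1.5, n)           # nurse workload index (hypothetical)
acuity = rng.normal(0.0, 1.0, n)              # average patient acuity (hypothetical)
harm_rate = (0.10 * (1 + 0.35 * (admits > 3)) * workload
             + 0.02 * np.maximum(acuity, 0) + rng.normal(0, 0.01, n))   # synthetic outcome

X = np.column_stack([admits, workload, acuity])
model = RandomForestRegressor(n_estimators=200, min_samples_leaf=50, random_state=0)
model.fit(X, harm_rate)

busy_shift = model.predict([[4, 1.0, 0.0]])[0]   # a shift with more than 3 new admissions
typical = model.predict(X).mean()                # average predicted risk across all shifts
print(f"predicted risk with 4 admissions vs. average: {busy_shift / typical - 1:.0%} higher")
```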
227

A methodology for analyzing hardware accelerated continuous-time methods for mixed signal simulation

Durbha, Sriram 07 October 2004 (has links)
No description available.
228

Face Lattice Computation under Symmetry

Li, Johnathan 08 1900 (has links)
The last 15 years have seen significant progress in the development of general-purpose algorithms and software for polyhedral computation. Many polytopes of practical interest have enormous output complexity and are often highly degenerate, posing severe difficulties for known general-purpose algorithms. They are, however, highly structured, and attention has turned to exploiting this structure, particularly symmetry. We focus on polytopes arising from combinatorial optimization problems. In particular, we study the face lattice of the metric polytope associated with the well-known max-cut and multicommodity flow problems, as well as with finite metric spaces. Exploiting the high degree of symmetry, we provide the first complete orbitwise description of the higher layers of the face lattice of the metric polytope for any dimension. Further computational and combinatorial issues are presented. / Thesis / Master of Applied Science (MASc)
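For context, the metric polytope referred to above is the polytope cut out by the standard triangle inequalities; the short sketch below enumerates these 4·C(n,3) triangle facets for a small n, which is the facet description whose higher layers the thesis describes orbitwise.

```python
from itertools import combinations

def triangle_facets(n):
    """Yield the triangle facets of the metric polytope on n points as (coeffs, rhs),
    with coeffs a dict over index pairs encoding  sum(coeffs[p] * x[p]) <= rhs."""
    for i, j, k in combinations(range(n), 3):
        pairs = [(i, j), (i, k), (j, k)]
        for p in range(3):                      # x_p <= x_q + x_r for each choice of p
            coeffs = {pairs[p]: 1, pairs[(p + 1) % 3]: -1, pairs[(p + 2) % 3]: -1}
            yield coeffs, 0
        yield {pairs[0]: 1, pairs[1]: 1, pairs[2]: 1}, 2   # perimeter inequality

n = 5
facets = list(triangle_facets(n))
print(f"metric polytope on {n} points: {len(facets)} triangle facets")   # 4 * C(5, 3) = 40
```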
229

Optimal Mobile Computation Offloading With Hard Task Deadlines

Hekmati, Arvin January 2019 (has links)
This thesis considers mobile computation offloading where task completion times are subject to hard deadline constraints. Hard deadlines are difficult to meet in conventional computation offloading because of the stochastic nature of the wireless channels involved. Rather than using binary offload decisions, we permit concurrent remote and local job execution when it is needed to ensure task completion deadlines. The thesis addresses this problem for homogeneous Markovian wireless channels. Two online energy-optimal computation offloading algorithms, OnOpt and MultiOpt, are proposed. OnOpt uploads the job to the server continuously, and MultiOpt uploads the job in separate parts, each of which requires a separate offload initiation decision. The energy optimality of the algorithms is shown by constructing a time-dilated absorbing Markov process and applying dynamic programming. Closed-form results are derived for general Markovian channels. The Gilbert-Elliott channel model is used to show how a particular Markov chain structure can be exploited to compute optimal offload initiation times more efficiently. The performance of the proposed algorithms is compared to three others, namely, Immediate Offloading, Channel Threshold, and Local Execution. Performance results show that the proposed algorithms can significantly reduce mobile device energy consumption compared to the other approaches while guaranteeing hard task execution deadlines. / Thesis / Master of Applied Science (MASc)
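The channel setting can be pictured with the toy simulation below: a two-state Gilbert-Elliott channel driven as a Markov chain, with local execution started as a last-resort fallback so the hard deadline is always met. The fallback rule and all numbers are illustrative assumptions; this is not the energy-optimal OnOpt or MultiOpt algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1],        # transition probabilities: good -> {good, bad}
              [0.3, 0.7]])       #                            bad  -> {good, bad}
rate = {0: 5.0, 1: 1.0}          # Mbits uploaded per slot in the good/bad state (assumed)
job_bits, deadline, local_slots = 40.0, 20, 8    # job size, hard deadline, local run time (assumed)

state, sent, t, local_started = 0, 0.0, 0, None
while t < deadline and sent < job_bits:
    if local_started is None and (deadline - t) <= local_slots:
        local_started = t            # last slot at which finishing locally still meets the deadline
    sent += rate[state]              # concurrent remote upload continues regardless
    state = int(rng.choice(2, p=P[state]))
    t += 1

print("remote upload finished:", sent >= job_bits,
      "| local fallback started at slot:", local_started)
```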
230

Distributed computation in networked systems

Costello, Zachary Kohl 27 May 2016 (has links)
The objective of this thesis is to develop a theoretical understanding of computation in networked dynamical systems and demonstrate practical applications supported by the theory. We are interested in understanding how networks of locally interacting agents can be controlled to compute arbitrary functions of the initial node states. In other words, can a dynamical networked system be made to behave like a computer? In this thesis, we take steps towards answering this question with a particular model class for distributed, networked systems which can be made to compute linear transformations.
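A minimal example of the idea that purely local, networked iterations can compute a linear transformation of the initial node states is average consensus, sketched below with an assumed ring topology and mixing weights; the computed linear map in this case is (1/n) times the all-ones matrix.

```python
import numpy as np

n = 8
W = np.zeros((n, n))
for i in range(n):                   # each node mixes only with its two ring neighbours
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.random.default_rng(2).normal(size=n)   # initial node states
target = np.full(n, x.mean())                 # the linear transformation being computed
for _ in range(200):
    x = W @ x                                 # purely local updates: x(t+1) = W x(t)
print("max deviation from the computed average:", float(np.max(np.abs(x - target))))
```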
