361

GARCH Option Pricing Model Fitting With Taiwan Stock Market

Lo, Hao-yuan 03 July 2007 (has links)
This article focuses on fitting the GARCH option pricing model to the Taiwan stock market. Duan's (1995) NGARCH option pricing model is adopted. Duan priced European options by simulation; this article follows that method and extends it to pricing American options. In general, the simulation approach is not as convenient for American options as for European options, but the least-squares method proposed by Longstaff and Schwartz is a simple and powerful tool, so this article tests it. The NGARCH model's parameters are fitted to empirical observations by maximizing the log-likelihood function. With the fitted parameters we simulate stock price paths, and once stock prices are simulated the option value can be priced. Since the article prices options by simulation, the results should be checked against benchmark approaches other than simulation. In practice, the Black-Scholes model is the benchmark for pricing European options, so this article compares the simulated European option prices with Black-Scholes. For American options, it compares the simulated prices obtained by the least-squares method with a trinomial tree (finite difference method).
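A minimal sketch of the Longstaff-Schwartz least-squares Monte Carlo step is given below. For brevity, plain geometric Brownian motion stands in for the fitted NGARCH paths, and all parameter values are illustrative rather than taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0   # illustrative values
n_paths, n_steps = 20000, 50
dt = T / n_steps

# Simulate stock price paths (a stand-in for the NGARCH-simulated paths).
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

# Backward induction: regress the discounted continuation value on a
# polynomial basis of the current price, over in-the-money paths only.
cash = np.maximum(K - S[:, -1], 0.0)          # put payoff at maturity
for t in range(n_steps - 1, 0, -1):
    cash *= np.exp(-r * dt)                   # discount one step back
    itm = K - S[:, t] > 0
    if itm.sum() < 3:
        continue
    coeffs = np.polyfit(S[itm, t], cash[itm], 2)
    continuation = np.polyval(coeffs, S[itm, t])
    exercise = K - S[itm, t]
    ex_now = exercise > continuation          # exercise where immediate payoff wins
    idx = np.where(itm)[0][ex_now]
    cash[idx] = exercise[ex_now]

print("American put (LSM):", np.exp(-r * dt) * cash.mean())
```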
362

Development Of Property Equations For Butane And Isobutane

Cuylan, Gokhan 01 June 2009 (has links) (PDF)
This study aims to simulate a vapor compression refrigeration cycle working with either butane (R-600) or isobutane (R-600a). For this purpose a computer program is written to design a household refrigerator by modeling a steady-state vapor compression cycle with user-defined input data. Each refrigerator component can be designed separately in the program, or as part of a single refrigeration system. In order to determine the refrigerant thermophysical properties at different states, least-squares polynomial equations for different properties of R-600 and R-600a have been developed. The program is used for refrigeration cycle analysis, variable-speed compressor design, and calculation of the coefficient of performance (COP) and irreversibility of the cycle. Sample preliminary designs have been carried out with the program at different refrigeration loads, room temperatures, and cold space temperatures in order to compare the performance characteristics of the refrigerants. It is observed that, under the same conditions, R-600 has slightly better performance characteristics than R-600a.
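The property-equation development can be illustrated with a least-squares polynomial fit of a saturation property against temperature. The data points below are rough illustrative placeholders, not the values used in the thesis.

```python
import numpy as np

# Illustrative saturation-pressure samples for isobutane (R-600a); the
# thesis fits many such properties from reference property data.
T = np.array([250.0, 260.0, 270.0, 280.0, 290.0, 300.0])   # K
P = np.array([0.60, 0.95, 1.45, 2.10, 3.00, 4.05])         # bar (placeholder values)

# Fit ln(P) as a cubic polynomial in T by least squares.
coeffs = np.polyfit(T, np.log(P), 3)

def p_sat(temp_K: float) -> float:
    """Evaluate the fitted saturation-pressure equation (bar)."""
    return float(np.exp(np.polyval(coeffs, temp_K)))

print(p_sat(285.0))   # interpolated saturation pressure at 285 K
```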
363

Development Of An Incompressible, Laminar Flow Solver Based On Least Squares Spectral Element Method With P-type Adaptive Refinement Capabilities

Ozcelikkale, Altug 01 June 2010 (has links) (PDF)
The aim of this thesis is to develop a flow solver that has the ability to obtain an accurate numerical solution quickly and efficiently with minimum user intervention. In this study, a two-dimensional viscous, laminar, incompressible flow solver based on the Least-Squares Spectral Element Method (LSSEM) is developed. The LSSEM flow solver can work on hp-type non-conforming grids and can perform p-type adaptive refinement. Several benchmark problems are solved in order to validate the solver, and successful results are obtained. In particular, it is demonstrated that p-type adaptive refinement on hp-type non-conforming grids can be used to improve the quality of the solution. Moreover, it is found that the mass conservation performance of the LSSEM can be enhanced by using p-type adaptive refinement strategies while keeping computational costs reasonable.
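The least-squares principle behind LSSEM can be shown on a 1-D model problem: expand the unknown in a high-order polynomial basis, evaluate the equation residual at many points, and solve the overdetermined system in the least-squares sense. The monomial basis and penalty-weighted boundary row below are simplifying assumptions, not the thesis's spectral-element implementation.

```python
import numpy as np

p = 8                                   # polynomial order (p-refinement raises this)
x = np.linspace(0.0, 1.0, 40)           # residual evaluation points

# Columns: basis functions x^k and their derivatives k*x^(k-1).
V  = np.vander(x, p + 1, increasing=True)
dV = np.zeros_like(V)
for k in range(1, p + 1):
    dV[:, k] = k * x ** (k - 1)

# Rows enforcing the model equation u'(x) + u(x) = 1 in residual form ...
A = dV + V
b = np.ones_like(x)
# ... plus one heavily weighted row enforcing the boundary condition u(0) = 0.
w = 1e6
A = np.vstack([A, w * V[0]])
b = np.append(b, 0.0)

coef, *_ = np.linalg.lstsq(A, b, rcond=None)
u = V @ coef
print(np.max(np.abs(u - (1 - np.exp(-x)))))   # error vs the exact solution 1 - e^(-x)
```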
364

Identifying Factors That Facilitate The Use Of Multi-purpose Smart Cards By University Students: An Empirical Investigation

Teker, Mahmut 01 February 2011 (has links) (PDF)
The aim of this thesis is to identify factors that affect university students' acceptance of multi-purpose Smart Cards. The findings of this study will be beneficial in facilitating the use of Smart-Card-enabled systems both in universities and in other institutions that either have these systems in use or plan to invest in them in the future. The research methodology employed in this study is based on quantitative methods. A survey instrument comprising 51 five-point Likert-type questions was developed and administered to 207 Middle East Technical University students. The collected data were analyzed using Exploratory Factor Analysis to group the items into factors. According to the analysis results, the data were classified under five factors: Perceived Usefulness, Perceived Ease of Use, Behavioral Intention, Anxiety, and Technological Complexity. Then the relations between these five factors were identified and a measurement model was created. To assess the proposed model, discriminant and convergent validity scores were calculated by Confirmatory Factor Analysis. Then Structural Equation Modeling was conducted with Partial Least Squares to validate the model's estimated influences. The study has shown that the main Technology Acceptance Model constructs fit for determining university students' intention of Smart Card usage, except for the effect of Perceived Ease of Use on Behavioral Intention. Moreover, the study showed that Anxiety and Technological Complexity were the external factors affecting willingness to use multi-purpose Smart Cards. If students have Anxiety, it affects their perception of the easiness of the system; this has a negative indirect effect on perceived usefulness and a direct effect on intention. Technological Complexity is another factor with a direct effect on the perceptions of easiness and usefulness and on intention.
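The Partial Least Squares step can be sketched as follows on synthetic Likert-style responses. The item blocks, loadings, and noise levels below are entirely hypothetical; a full PLS-SEM tool would estimate the measurement and structural models jointly rather than a single regression.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 207                                  # sample size reported above

# Synthetic item blocks: 4 PU items + 4 PEOU items predicting 3 BI items.
latent = rng.normal(size=(n, 2))         # hidden PU and PEOU "scores" (hypothetical)
X = np.clip(np.rint(3 + latent.repeat(4, axis=1) + rng.normal(0, 0.7, (n, 8))), 1, 5)
y_latent = 0.6 * latent[:, 0] + 0.2 * latent[:, 1]
Y = np.clip(np.rint(3 + y_latent[:, None] + rng.normal(0, 0.7, (n, 3))), 1, 5)

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print("R^2 of BI block explained by PU+PEOU items:", pls.score(X, Y))
```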
365

Inverse Analysis of Transient Heat Source from Arc Erosion

Li, Yung-Yuan 02 July 2001 (has links)
An inverse method is developed to analyze the transient heat source from arc erosion. The temperature at the contour of the arc erosion is assumed to be the melting point, and the temperature at grid points at the final time is calculated by interpolation, which introduces measurement errors. The unknown parameters of the transient heat source can then be solved for by the linear least-squares error method. These parameters are the plasma radius at the anode surface (which grows with time), the arc power, and the plasma flushing efficiency at the anode. Because the temperatures at the measuring points include measurement errors, an exact solution can be found only when fewer unknowns are considered. The inverse method is sensitive to measurement errors.
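When the temperature field depends linearly on the unknown source parameters, stacking one equation per measurement point yields an overdetermined system solvable in one least-squares step. The 3-parameter model and sensitivity matrix below are illustrative placeholders, not the thesis's actual arc-erosion formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
true_params = np.array([2.5, 40.0, 0.8])   # e.g. radius growth rate, power, efficiency

# G[i, j] = sensitivity of the temperature at measurement point i to
# parameter j (in the thesis these come from the forward conduction model).
G = rng.uniform(0.1, 1.0, size=(30, 3))
T_measured = G @ true_params + rng.normal(0, 0.05, 30)   # add measurement noise

est, res, rank, sv = np.linalg.lstsq(G, T_measured, rcond=None)
print("estimated:", est)            # close to true_params for small noise,
print("true:     ", true_params)    # but degrades as noise grows -- the
                                    # error sensitivity noted in the abstract
```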
366

Adaptive Rake Multiuser Receiver with Linearly Constrained Sliding Window RLS Algorithm for DS-CDMA Systems

Lee, Hsin-Pei 04 July 2003 (has links)
The direct sequence code division multiple access (DS-CDMA) cellular technique has been the focus of increased attention. In this thesis, we consider a DS-CDMA environment in which asynchronous narrowband interference from other systems suddenly joins the CDMA system; such suddenly joined narrowband interference can severely degrade the system. The main concern of this thesis is the cancellation of suddenly joined narrowband interference. Adaptive filtering algorithms based on a sliding-window criterion and a variable forgetting factor are known to be very attractive for rapidly changing environments. In this thesis, a new sliding window linearly constrained recursive least squares (SW LC-RLS) algorithm and a variable forgetting factor linearly constrained recursive least squares (VFF LC-RLS) algorithm, built on the modified minimum mean squared error (MMSE) structure [9], are devised for the RAKE receiver in a DS-CDMA system over multipath fading channels, where channel estimation is accomplished at the output of the adaptive filter. The proposed SW LC-RLS and VFF LC-RLS algorithms have the advantages of faster convergence and better tracking ability, and can be applied in environments where narrowband interference suddenly joins the system, to achieve the desired performance. Via computer simulation, we show that their performance, in terms of mean square error (MSE) and signal to interference plus noise ratio (SINR), is superior to the conventional LC-RLS and orthogonal decomposition-based LMS algorithms based on the MMSE structure [9].
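A sliding-window RLS update can be sketched as follows (without the linear constraint of the LC-RLS variant above): each step adds the newest sample with a rank-one update of the inverse correlation matrix and removes the oldest with a rank-one downdate, so the filter forgets old data abruptly and can track sudden changes such as a narrowband interferer switching on. The demo data are synthetic.

```python
import numpy as np

def sw_rls(X, d, window):
    n, m = X.shape
    P = 1e4 * np.eye(m)                 # inverse correlation matrix
    w = np.zeros(m)
    for i in range(n):
        x = X[i]
        g = P @ x / (1.0 + x @ P @ x)   # rank-one update with the newest sample
        w = w + g * (d[i] - x @ w)
        P = P - np.outer(g, x @ P)
        j = i - window                  # rank-one downdate of the sample leaving
        if j >= 0:                      # the window
            x_old = X[j]
            h = P @ x_old / (1.0 - x_old @ P @ x_old)
            w = w - h * (d[j] - x_old @ w)
            P = P + np.outer(h, x_old @ P)
    return w

# Demo: identify a 4-tap channel that changes abruptly halfway through.
rng = np.random.default_rng(3)
X = rng.standard_normal((400, 4))
w_true = np.array([1.0, -0.5, 0.25, 0.1])
d = X @ w_true + 0.01 * rng.standard_normal(400)
d[200:] = X[200:] @ (-w_true) + 0.01 * rng.standard_normal(200)  # sudden change
print(sw_rls(X, d, window=50))          # tracks -w_true after the change
```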
367

TOA Wireless Location Algorithm with NLOS Mitigation Based on LS-SVM in UWB Systems

Lin, Chien-hung 29 July 2008 (has links)
One of the major problems encountered in wireless location is the effect of non-line-of-sight (NLOS) propagation. When the direct path from the mobile station (MS) to the base stations (BSs) is blocked by obstacles or buildings, the signal arrival times are delayed, so the measurements include an error due to the excess path propagation. Using NLOS measurements for localization greatly degrades localization performance. In this thesis, a time-of-arrival (TOA) based location system with an NLOS mitigation algorithm is proposed. The proposed method uses a least squares support vector machine (LS-SVM), with optimal parameters selected by particle swarm optimization (PSO), to establish a regression model used to estimate propagation distances and reduce NLOS propagation errors. Using a weighted objective function, the distance estimates are combined with suitable weight factors derived from the differences between the estimated and measured distances. By exploiting the optimality of the weighted objective function, the method is capable of mitigating NLOS effects and reducing the propagation range errors. Computer simulation results in ultra-wideband (UWB) environments show that the proposed NLOS mitigation algorithm efficiently reduces the mean and variance of the NLOS measurements, and that the proposed method outperforms other methods in localization accuracy under different NLOS conditions.
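The LS-SVM regression at the core of the method reduces to a single linear system: unlike a standard SVM, all training points become support vectors and no quadratic program is needed. This sketch uses toy 1-D data in place of the (measured range, corrected range) training pairs, and fixed hyperparameters in place of the PSO-tuned values.

```python
import numpy as np

def rbf(A, B, sigma):
    """Gaussian (RBF) kernel matrix between row-sample arrays A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf(X, X, sigma)
    # LS-SVM KKT system: [[0, 1^T], [1, K + I/gamma]] @ [b, alpha] = [0, y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

# Toy example: learn a noisy nonlinear mapping.
rng = np.random.default_rng(4)
X = rng.uniform(0, 5, (60, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(60)
predict = lssvm_fit(X, y)
print(predict(np.array([[1.0], [2.5]])))   # approx sin(1.0), sin(2.5)
```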
368

Development and Application of Kinetic Meshless Methods for Euler Equations

C, Praveen 07 1900 (has links)
Meshless methods are a relatively new class of schemes for the numerical solution of partial differential equations. Their special characteristic is that they do not require a mesh but only need a distribution of points in the computational domain. The spatial derivatives appearing in the partial differential equations are approximated at any point using a local cloud of points called the "connectivity" (or stencil). A point distribution can be generated more easily than a grid since there are fewer constraints to satisfy. The present work uses two meshless methods: an existing scheme called the Least Squares Kinetic Upwind Method (LSKUM) and a new scheme called the Kinetic Meshless Method (KMM). LSKUM is a "kinetic" scheme which uses a "least squares" approximation for discretizing the derivatives occurring in the partial differential equations. The first part of the thesis is concerned with some theoretical properties and the application of LSKUM to 3-D point distributions. Using previously established results, we show that first-order LSKUM in 1-D is positivity preserving under a CFL-like condition. The 3-D LSKUM is applied to point distributions obtained from the FAME mesh. FAME, which stands for Feature Associated Mesh Embedding, is a composite overlapping grid system developed at QinetiQ (formerly DERA), UK, for store separation problems. The FAME mesh has a cell-based data structure, and this is first converted to a node-based data structure, which leads to a point distribution. For each point in this distribution we find a set of nearby nodes which forms the connectivity. The connectivity at each point (which is also the "full stencil" for that point) is split along each of the three coordinate directions, so that we need six split (or half, or one-sided) stencils at each point. The split stencils are used in LSKUM to calculate the split-flux derivatives arising in kinetic schemes, which gives the upwind character to LSKUM. The "quality" of each of these stencils affects the accuracy and stability of the numerical scheme. In this work we focus on developing numerical criteria to quantify the quality of a stencil for meshless methods like LSKUM. The first test is based on singular value decomposition of the over-determined problem, and the singular values are used to measure the ill-conditioning (generally caused by a flat stencil). If any of the split stencils is found to be ill-conditioned, we use the full stencil for calculating the corresponding split-flux derivative. A second test is based on an accuracy measurement: the idea is that a "good" stencil must give accurate estimates of derivatives, and vice versa. If the error in the computed derivatives is above some specified tolerance, the stencil is classified as unacceptable. In this case we either enhance the stencil (to remove disc-type degenerate structure) or switch to the full stencil. It is found that the full stencil almost always behaves well in terms of both tests. The use of these two tests, and the associated automatic modification of defective stencils, allows the solver to converge without any blow-up. The results obtained for a 3-D configuration compare favorably with wind tunnel measurements, and the framework developed here provides a rational basis for approaching the connectivity selection problem. The second part of the thesis deals with a new scheme called the Kinetic Meshless Method (KMM), which was developed as a consequence of the experience obtained with LSKUM and the FAME mesh.
As mentioned before, the full stencil is generally better behaved than the split stencils. Hence the new scheme is constructed so that it does not require split stencils but operates on a full stencil (which is like a centered stencil). In order to obtain an upwind bias we introduce mid-point states (between a point and its neighbour), and the least-squares fitting is performed using these mid-point states. The mid-point states are defined in an upwind-biased manner at the kinetic/Boltzmann level, and the moment-method strategy leads to an upwind scheme at the Euler level. On a standard 4-point Cartesian stencil this scheme reduces to a finite volume method with KFVS fluxes. We can also show the rotational invariance of the scheme, which is an important property of the governing equations themselves. The KMM is extended to higher-order accuracy using a reconstruction procedure similar to that of finite volume schemes, even though we do not have (or need) any cells in the present case. Numerical studies on a model 2-D problem show second-order accuracy. Some theoretical and practical advantages of using a kinetic formulation for deriving the scheme are recognized. Several 2-D inviscid flows are solved, demonstrating many important characteristics. The subsonic test cases show that the scheme produces less numerical entropy than LSKUM and is also better at preserving the symmetry of the flow. The test cases involving discontinuous flows show that the new scheme is capable of resolving shocks very sharply, especially with adaptation. The robustness of the scheme is also very good, as shown in the supersonic test cases.
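The least-squares derivative approximation underlying LSKUM and KMM, together with the singular-value stencil test described above, can be sketched in a few lines. The 2-D point cloud below is illustrative.

```python
import numpy as np

def ls_derivatives(p0, neighbours, f0, f_neighbours):
    """Estimate (df/dx, df/dy) at p0 from a 2-D cloud of neighbours."""
    dX = neighbours - p0                    # offsets (dx_i, dy_i) as rows
    df = f_neighbours - f0
    # Overdetermined Taylor system dX @ [fx, fy] = df, solved by least
    # squares; a nearly flat (degenerate) cloud shows up as a tiny singular
    # value, which is what the stencil-quality test detects.
    grad, res, rank, sv = np.linalg.lstsq(dX, df, rcond=None)
    return grad, sv

# Test on f(x, y) = 3x + 2y, whose exact gradient is (3, 2).
rng = np.random.default_rng(5)
p0 = np.array([0.5, 0.5])
cloud = p0 + 0.05 * rng.standard_normal((8, 2))
f = lambda p: 3 * p[..., 0] + 2 * p[..., 1]
grad, sv = ls_derivatives(p0, cloud, f(p0), f(cloud))
print(grad)                                  # ~ [3. 2.]
print(sv.min() / sv.max())                   # conditioning measure of the stencil
```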
369

Acoustic Emission in Composite Laminates - Numerical Simulations and Experimental Characterization

Johnson, Mikael January 2002 (has links)
No description available.
370

Semiparametric Estimation of Unimodal Distributions

Looper, Jason K. January 2003 (has links)
Thesis (M.S.)--University of South Florida, 2003. / ABSTRACT: One often wishes to understand the probability distribution of stochastic data from experiments or computer simulations. However, where no model is given, practitioners must resort to parametric or non-parametric methods in order to gain information about the underlying distribution. Others have initially used a nonparametric estimator to understand the underlying shape of a set of data and then later returned with a parametric method to locate the peaks; however, they were interested in estimating spectra, which may have multiple peaks, whereas in this work we are interested in approximating the peak position of a single-peak probability distribution. One method of analyzing a distribution of data is by fitting a curve to the data, or smoothing them. Polynomial regression and least-squares fitting are examples of smoothing methods. Initial understanding of the underlying distribution can be obscured depending on the degree of smoothing; problems such as under- and oversmoothing must be addressed in order to determine the shape of the underlying distribution. Furthermore, smoothing of skewed data can give a biased estimate of the peak position. We propose two new approaches for statistical mode estimation based on the assumption that the underlying distribution has only one peak. The first method imposes the global constraint of unimodality locally, by requiring negative curvature over some domain. The second method performs a search that assumes a position of the distribution's peak and requires positive slope to the left and negative slope to the right. Each approach entails a constrained least-squares fit to the raw cumulative probability distribution. We compare the relative efficiencies [12] of these two estimators in finding the peak location for artificially generated data from known families of distributions: Weibull, beta, and gamma. Within each family a parameter controls the skewness or kurtosis, quantifying the shapes of the distributions for comparison. We also compare our methods with other estimators such as the kernel-density estimator, adaptive histogram, and polynomial regression. By comparing the effectiveness of the estimators, we can determine which estimator best locates the peak position. We find that our estimators do not perform better than other known estimators, and that our estimators are biased. Overall, an adaptation of kernel estimation proved to be the most efficient. The results of this thesis will be submitted, in a different form, for publication by D.A. Rabson and J.K. Looper.
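Two of the baseline peak estimators discussed above (the kernel-density mode, which the thesis found most efficient, and the naive histogram-bin mode) can be sketched on skewed gamma data. The shape value and sample size are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde, gamma

rng = np.random.default_rng(6)
k = 3.0                                     # gamma shape; true mode = k - 1 = 2.0
data = gamma.rvs(k, size=500, random_state=rng)

# Mode of a Gaussian kernel density estimate, located on a fine grid.
grid = np.linspace(data.min(), data.max(), 1000)
kde_mode = grid[np.argmax(gaussian_kde(data)(grid))]

# Naive histogram mode: midpoint of the most populated bin.
counts, edges = np.histogram(data, bins=30)
hist_mode = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

print("true mode:", k - 1, "kde:", kde_mode, "histogram:", hist_mode)
```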
