  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Hua Type Integrals over Unitary Groups and over Projective Limits of

Yurii A. Neretin, neretin@main.mccme.rssi.ru 30 May 2000 (has links)
No description available.
2

Estimation with stable disturbances

Ghaffari, Novin 16 March 2015 (has links)
The family of stable distributions represents an important generalization of the Gaussian family; stable random variables obey a generalized central limit theorem where the assumption of finite variance is replaced with one of power law decay in the tails. Possessing heavy tails, asymmetry, and infinite variance, non-Gaussian stable distributions can be suitable for inference in settings featuring impulsive, possibly skewed noise. A general lack of analytical form for the densities and distributions of stable laws has prompted research into computational methods of estimation. This report introduces stable distributions through a discussion of their basic properties and definitions in chapter 1. Chapter 2 surveys applications, and chapter 3 discusses a number of procedures for inference, with particular attention to time series models in the ARMA setting. Further details and an application can be found in the appendices. / text
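A quick numerical illustration of the heavy tails mentioned above, using SciPy's stable-law sampler (the stability index alpha and skewness beta below are illustrative choices, not values from the report):

```python
import numpy as np
from scipy.stats import levy_stable, norm

rng = np.random.default_rng(0)
alpha, beta = 1.7, 0.5   # alpha < 2 => infinite variance; beta != 0 => skew
x = levy_stable.rvs(alpha, beta, size=100_000, random_state=rng)
g = norm.rvs(size=100_000, random_state=rng)

# Tail exceedance probabilities: the stable sample decays like a power law,
# so it keeps producing extreme values where the Gaussian has essentially none.
for t in (5, 10, 20):
    print(t, np.mean(np.abs(x) > t), np.mean(np.abs(g) > t))
```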
3

Performance improvement of MEMS accelerometers in vibration based diagnosis

Badri, Abdellatef E. O. January 2011 (has links)
Vibration measurement and analysis has been an accepted method for decades for meeting a number of objectives: machinery condition monitoring, dynamic qualification of designed structural components, prediction of faults and structural ageing problems, and several other structural dynamics studies and diagnoses. However, the requirement to measure vibration at a number of locations in structures, machines and/or equipment makes vibration measurement exorbitantly expensive if conventional piezoelectric accelerometers are used. Hence, there is a need for a cheaper and reliable alternative to conventional accelerometers. Micro-Electro-Mechanical Systems (MEMS) accelerometers are one such cheap alternative. However, a significant deviation in the performance of MEMS accelerometers has been observed in earlier research studies, and is also confirmed by the present study, when compared with a well-known conventional accelerometer. Therefore, two methods have been suggested to improve the performance of existing MEMS accelerometers: one for correction in the time domain and the other in the frequency domain. Both methods are based on the generation of a characteristic function (CF) for the MEMS accelerometer using a well-known reference accelerometer in laboratory tests. The procedures of both methods are discussed and validated through experimental examples. In addition, a Finite Element (FE) model of a typical MEMS accelerometer has been developed and modal analysis carried out to understand the dynamics of a capacitive MEMS accelerometer and to identify the sources of error. It has been observed that the moving fingers behave like cantilever beams while the fixed fingers show rigid-body motion. This cantilever-type motion appears to cause a non-parallel-plate effect in the capacitors formed between the moving and fixed fingers, which results in errors in the vibration measurement. Hence, design modifications to the finger shape have been suggested to remove the cantilever motion, and the results showed remarkable improvement. Moreover, the effect of using synchronous amplitude modulation and demodulation in the readout circuit has been studied. The experimental study showed that this circuit also introduces errors in the amplitude and phase of the output signal compared with the input signal. Thus, in new MEMS accelerometer designs, improvements in both the mechanical design and the electronic circuit are required.
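A minimal sketch of the frequency-domain idea described above, assuming the characteristic function acts as a per-frequency-bin amplitude/phase correction estimated from a laboratory test in which the MEMS and reference accelerometers measure the same vibration (the function names and the equal-length, synchronously sampled assumption are ours, not the thesis's):

```python
import numpy as np

def estimate_cf(ref, mems):
    # Correction ("characteristic") function per frequency bin, estimated from a
    # lab test where both sensors observe the same excitation. Both records are
    # assumed to be equal-length and synchronously sampled.
    return np.fft.rfft(ref) / np.fft.rfft(mems)

def correct_frequency_domain(mems_signal, cf):
    # Scale the MEMS spectrum by the correction function, then return to the time domain.
    return np.fft.irfft(np.fft.rfft(mems_signal) * cf, n=len(mems_signal))
```

In practice, bins where the MEMS spectrum is near zero would need regularisation, and the correction function would be interpolated onto the frequency grid of the field measurement.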
4

Performance Evaluation of Equal Gain Diversity Systems In Fading Channels

Viswanathan, Ramanathan 12 January 2004 (has links)
Next-generation wireless systems are being designed to provide ubiquitous broadband access to the information infrastructure. Diversity techniques play a vital role in supporting such high-speed connections over radio channels by mitigating the detrimental effects of multiuser interference and multipath fading. The equal gain combining (EGC) diversity receiver is of practical interest because of its reduced complexity relative to the optimum maximal ratio combining scheme while achieving near-optimal performance. Despite this, the literature on EGC receiver performance is meager, owing to the difficulty of deriving the probability density function of the diversity combiner output. The problem is further compounded when the diversity paths are correlated. Since spatial, pattern, or polarization diversity implementations at a mobile handset are usually limited to a small diversity order with closely spaced antenna elements (owing to cost and ergonomic constraints), any performance analysis must be revamped to account for the effects of branch correlation between the combined signals. This thesis presents a powerful characteristic function method for evaluating the performance of a two-branch EGC receiver in Nakagami-m channels with non-independent and non-identical fading statistics. The proposed framework facilitates efficient error probability analysis for a broad range of modulation/detection schemes in a unified manner. The thesis also examines the efficacy of an average diversity combiner in slotted direct-sequence spread-spectrum packet radio networks. A two-dimensional EGC diversity combining scheme is introduced, wherein a corrupted packet is retained and combined with its retransmission at the bit level to produce a more reliable packet. The mathematical analysis of the average diversity combiner presented in this thesis is sufficiently general to handle generalized fading channel models with independent fading statistics for a myriad of digital modulation schemes. / Master of Science
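The thesis's analysis is analytical (a characteristic function method that also covers correlated branches); as a point of reference only, a Monte Carlo sketch of the simpler independent, identically distributed case for coherent BPSK with two-branch EGC over Nakagami-m fading might look like this (all parameter values are illustrative):

```python
import numpy as np

def egc_ber_bpsk(m=1.5, omega=1.0, snr_db=10.0, branches=2, n_bits=200_000, seed=0):
    """Monte Carlo BER of coherent BPSK with L-branch equal gain combining
    over i.i.d. Nakagami-m fading (independent branches only)."""
    rng = np.random.default_rng(seed)
    es_n0 = 10 ** (snr_db / 10)                       # per-branch symbol SNR
    bits = rng.integers(0, 2, n_bits)
    s = 2.0 * bits - 1.0                              # BPSK symbols +/-1
    # Nakagami-m envelope = square root of a Gamma(m, omega/m) variate
    r = np.sqrt(rng.gamma(m, omega / m, size=(branches, n_bits)))
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * es_n0)), size=(branches, n_bits))
    y = np.sum(r * s + noise, axis=0)                 # co-phased, equal-gain sum
    return np.mean((y > 0) != (bits == 1))            # bit error rate

print(egc_ber_bpsk(snr_db=10.0))
```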
5

Pricing of European options using empirical characteristic functions

Binkowski, Karol Patryk January 2008 (has links)
Thesis (PhD)--Macquarie University, Division of Economic and Financial Studies, Dept. of Statistics, 2008. / Bibliography: p. 73-77. / Introduction -- Lévy processes used in option pricing -- Option pricing for Lévy processes -- Option pricing based on empirical characteristic functions -- Performance of the five models on historical data -- Conclusions -- References -- Appendix A. Proofs -- Appendix B. Supplements -- Appendix C. Matlab programs. / Pricing problems of financial derivatives are among the most important in quantitative finance. Since 1973, when a Nobel-prize-winning model was introduced by Black, Merton and Scholes, the Brownian motion (BM) process has gained huge attention from professionals. It is now known, however, that stock market log-returns do not follow the very popular BM process. Derivative pricing models based on more general Lévy processes tend to perform better. --Carr & Madan (1999) and Lewis (2001) (CML) developed a method for vanilla option valuation based on the characteristic function of asset log-returns, assuming that they follow a Lévy process. Assuming that at least part of the problem lies in adequately modelling the distribution of log-returns of the underlying price process, we instead take a nonparametric approach in the CML formula and replace the unknown characteristic function with its empirical version, the empirical characteristic function (ECF). We consider four modifications of this model based on the ECF. The first modification requires only historical log-returns of the underlying price process. The other three modifications additionally need a calibration based on historical option prices. We compare their performance on historical data of the DAX index and on ODAX options written on the index between the 1st of June 2006 and the 17th of May 2007. The resulting pricing errors show that one of our models performs, at least in the cases considered in the project, better than the Carr & Madan (1999) model based on calibration of a parametric Lévy model, the VG model. --Our study seems to confirm the necessity of using implied parameters, apart from adequate modelling of the probability distribution of the asset log-returns. It indicates that to precisely reproduce the behaviour of real option prices, other factors such as stochastic volatility need to be included in the option pricing model. Fortunately, the discrepancies between our model and real option prices are reduced by introducing the implied parameters, which seem to be easily modelled and forecasted using a mixture of regression and time series models. Such an approach is computationally less expensive than explicit modelling of the stochastic volatility as in the Heston (1993) model and its modifications. / Mode of access: World Wide Web. / x, 111 p. ill., charts
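A minimal sketch of the nonparametric ingredient, assuming daily log-returns are available as a NumPy array (the pricing step that plugs this estimate into a Carr-Madan/Lewis-type Fourier formula is not shown):

```python
import numpy as np

def ecf(log_returns, u):
    # Empirical characteristic function: the sample mean of exp(i*u*x), a
    # model-free estimate of the characteristic function of the log-returns.
    u = np.atleast_1d(u).astype(float)
    return np.exp(1j * np.outer(u, log_returns)).mean(axis=1)

# Example: compare the ECF of simulated Gaussian "returns" with the exact Gaussian CF.
rng = np.random.default_rng(1)
x = rng.normal(0.0005, 0.01, size=250)          # one year of illustrative daily returns
u = np.array([5.0, 20.0, 50.0])
print(ecf(x, u))
print(np.exp(1j * u * 0.0005 - 0.5 * (u * 0.01) ** 2))   # exact CF of N(0.0005, 0.01^2)
```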
6

Statistická charakteristická funkce a její využití pro zpracování signálu / Statistic Characteristic Function and its Usage for Digital Signal Processing

Mžourek, Zdeněk January 2014 (has links)
The aim of this thesis is to provide basic information about the characteristic function used in statistics and to compare its properties with the Fourier transform used in engineering applications. The first part of the thesis is theoretical: basic concepts, their properties, and their mutual relations are discussed. The second part is devoted to possible applications, for example normality testing of data or the use of the characteristic function in independent component analysis. The first chapter gives an introduction to probability theory in order to unify terminology; the concepts introduced there are then used to demonstrate interesting properties of the characteristic function. The second chapter describes the Fourier transform, the definition of the characteristic function, and their comparison. The second part of the text is devoted to applications: the empirical characteristic function is analysed as an estimate of the characteristic function of the examined data, and a simple normality test is described as an example application. The last part deals with more advanced applications of the characteristic function in methods such as independent component analysis.
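A toy illustration of the ECF-based normality testing mentioned above (a simple discrepancy measure at assumed evaluation points, not the specific test statistic developed in the thesis):

```python
import numpy as np

def ecf_normality_discrepancy(x, u=np.linspace(-2.0, 2.0, 41)):
    # Compare the empirical CF of the data with the CF of a normal distribution
    # fitted by the sample mean and variance; large values hint at non-normality.
    mu, sigma = x.mean(), x.std(ddof=1)
    emp = np.exp(1j * np.outer(u, x)).mean(axis=1)
    gauss = np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)
    return np.max(np.abs(emp - gauss))

rng = np.random.default_rng(0)
print(ecf_normality_discrepancy(rng.normal(size=2000)))        # small
print(ecf_normality_discrepancy(rng.exponential(size=2000)))   # noticeably larger
```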
7

Edgeworthův rozvoj / Edgeworth expansion

Dzurilla, Matúš January 2019 (has links)
This thesis focuses on the Edgeworth expansion for approximating the distribution of a parameter estimate. The aim is to introduce the Edgeworth expansion, its assumptions, and the terminology associated with it, and then to demonstrate the derivation of the first term of the expansion. Finally, this derivation is illustrated on examples and compared with other approximations (mainly the central limit theorem), showing the strong and weak points of the Edgeworth expansion.
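For reference, the standard one-term Edgeworth expansion for the distribution of the standardized sample mean (notation is ours; the thesis derives this term and its assumptions in detail):

```latex
% One-term Edgeworth expansion for the standardized mean of an i.i.d. sample
% X_1,\dots,X_n with mean \mu, variance \sigma^2 and skewness
% \lambda_3 = \mathbb{E}[(X-\mu)^3]/\sigma^3:
\[
  \mathbb{P}\!\left(\frac{\sqrt{n}\,(\bar{X}_n-\mu)}{\sigma}\le x\right)
  = \Phi(x) \;-\; \varphi(x)\,\frac{\lambda_3}{6\sqrt{n}}\,(x^2-1) \;+\; o\!\left(n^{-1/2}\right)
\]
% where \Phi and \varphi are the standard normal CDF and density. The correction
% vanishes for symmetric distributions and refines the plain CLT approximation.
```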
8

Calculating Distribution Function and Characteristic Function using Mathematica

Chen, Cheng-yu 07 July 2010 (has links)
This paper deals with applications of the symbolic computation capabilities of Mathematica 7.0 (Wolfram, 2008) in distribution theory. The purpose of the study is twofold. Firstly, we implement functions that extend Mathematica's capabilities to handle symbolic computation of the characteristic function of a linear combination of independent univariate random variables. These functions use pattern-matching code that enhances Mathematica's ability to simplify expressions involving products and sums of algebraic terms. Secondly, characteristic functions can be matched, via the pattern-matching features of Mathematica, to commonly used distributions, including six discrete and seven continuous distributions. Finally, several examples are presented, including calculating the limit of the characteristic function of a linear combination of independent random variables and using the coded functions to illustrate the central limit theorem, the law of large numbers, and properties of some distributions.
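A small numerical illustration of the identity underlying such symbolic manipulations: for independent random variables, the characteristic function of a sum is the product of the individual characteristic functions (a Python stand-in for the Mathematica workflow; the distributions chosen are arbitrary):

```python
import numpy as np

def ecf(samples, u):
    # Empirical characteristic function evaluated at the points u.
    return np.exp(1j * np.outer(u, samples)).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200_000)
y = rng.exponential(2.0, 200_000)
u = np.linspace(-2.0, 2.0, 9)

lhs = ecf(x + y, u)           # CF of the sum, estimated from samples
rhs = ecf(x, u) * ecf(y, u)   # product of the individual CFs (independence)
print(np.max(np.abs(lhs - rhs)))   # close to zero, up to Monte Carlo error
```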
9

Automatic Recognition of Artificial Objects in Side-scan Sonar Imagery

Li, Ying-Zhang 02 August 2011 (has links)
Abstract: The interpretation and identification of information in side-scan sonar imagery depend mainly on visual observation and personal experience. Recent studies have tended to increase identification efficiency by using numerical analysis methods, which reduce the errors caused by differences in observers' experience as well as by extended observation time. The region around the centre line of slant-range-corrected side-scan sonar imagery can degrade the ability of numerical methods to detect artificial objects. Theoretically, this problem can be solved by using a specific characteristic function to identify the existence of concrete reefs, and then filtering the noise in the centre-line area with a threshold value. This study was intended to develop a fully automatic sonar imagery processing system for the identification of cubic concrete and cross-type protective artificial reefs in Taiwan's offshore area. The procedures of the automatic sonar imagery processing system are as follows: (1) Image acquisition: 500 kHz with a slant range of 75 m. (2) Feature extraction: grey level co-occurrence matrix (i.e., entropy, homogeneity and mean). (3) Classification: unsupervised Bayesian classifier. (4) Object identification: by the characteristic feature (i.e., entropy). (5) Object status analysis: object circumference, area, centre of mass and quantity. This study used sonar images collected at the Chey-Ding artificial reef site in Kaohsiung City as a case study, aiming to verify the automatic sonar imagery processing system and to find the optimum window size. The image characteristic functions include one first-order parameter (mean) and two second-order parameters (entropy and homogeneity). Eight sonar images with 1-8 sets of cubic concrete and cross-type protective artificial reefs were used in this step. The identification efficiency of the system, in terms of producer's accuracy, is 79.41%. The results showed that 16-28 sets of artificial reefs were detected in this case, which is comparable with the actual amount of 17 sets. Based on this investigation, the optimum window size was concluded to be 12×12 pixels with a sliding step of 4 pixels. Imagery collected at the Fang-Liau artificial reef site of Pingtung County was then tested. For applicability, the original imagery (2048×2800 pixels) was divided into 8 consecutive smaller image frames (2048×350 pixels). The influence of using a two-fold classification procedure and a centre-line filtering method to reduce the noise caused by slant range correction was discussed; the results showed that the centre-line filtering method is applicable. The object status analysis indicated that 156-236 sets of reefs exist. Automatic determination of the target using the entropy characteristic function is feasible: a value larger than 1.45 represents positive identification of concrete artificial reefs, a value smaller than 1.35 can be classified as a muddy-sand seabed type, and a value between 1.35 and 1.45 indicates a transition zone where objects of smaller dimensions may exist. To achieve automatic operation, the existence of concrete reefs is first identified using the specific characteristic function; based on this result, the suture-line (centre-line) filtering method is then used to filter the noise from the image information, so that all procedures run automatically without human intervention. Keywords: side-scan sonar; characteristic function; grey level co-occurrence matrix; Bayesian classification; entropy; homogeneity; mean
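A minimal sketch of the feature-extraction step on one sliding window, assuming an 8-bit sonar image and scikit-image's GLCM utilities (function names per recent scikit-image versions); the window size and stride follow the abstract, and entropy is computed directly from the normalised GLCM since it is not a built-in graycoprops property:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(window, levels=32):
    # Quantise the 8-bit window to 'levels' grey levels and build a normalised GLCM.
    q = (window.astype(float) / 255.0 * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))        # second-order: entropy
    homogeneity = graycoprops(glcm, "homogeneity")[0, 0]   # second-order: homogeneity
    mean = float(window.mean())                            # first-order: mean grey level
    return entropy, homogeneity, mean

def sliding_windows(image, size=12, step=4):
    # 12x12 windows with a 4-pixel stride, as reported optimal in the abstract.
    for r in range(0, image.shape[0] - size + 1, step):
        for c in range(0, image.shape[1] - size + 1, step):
            yield r, c, image[r:r + size, c:c + size]
```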
