1

Some aspects of signal processing in heavy tailed noise

Brcic, Ramon Francis January 2002 (has links)
This thesis addresses some problems that arise in signal processing when the noise is impulsive and follows a heavy-tailed distribution. After reviewing several of the better-known heavy-tailed distributions, the common problem of deciding which of these best models the observations is considered. To this end, a test is proposed for the symmetric alpha-stable distribution. The test threshold is found using both asymptotic theory and parametric bootstrap resampling. In doing so, some modifications are proposed for Koutrouvelis' estimator of the symmetric alpha-stable distribution's parameters that improve performance. In electrical systems, impulsive noise is generated externally to the receiver while thermal Gaussian noise is generated internally by the receiver electronics; the resultant noise is an additive combination of these two independent sources. A characteristic-function-domain estimator for the parameters of the resultant distribution is developed for the case when the impulsive noise is modeled by a symmetric alpha-stable distribution. Having concentrated on validation and parameter estimation for the noise model, some problems in signal detection and estimation are considered. Detection of the number of sources impinging on an array is an important first step in many array processing problems, for which the development of optimal methods can be complicated even in the Gaussian case. Here, a multiple hypothesis test for the equality of the eigenvalues of the sample array covariance is proposed. The nonparametric bootstrap is used to estimate the distributions of the test statistics, removing the assumption of Gaussianity and offering improved performance for heavy-tailed observations. Finally, some robust estimators are proposed for estimating parametric signals in additive noise. These are based on M-estimators but implicitly incorporate an estimate of the noise distribution, enabling the estimator to adapt to the unknown noise distribution. Two estimators are developed: one uses a nonparametric kernel density estimator, while the other models the score function of the noise distribution with a linear combination of basis functions.
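As a hedged illustration of the parametric-bootstrap thresholding idea described in this abstract (not the thesis's actual test statistic, nor its modified Koutrouvelis estimator), the sketch below fits SciPy's levy_stable model and bootstraps a threshold for a Kolmogorov-Smirnov-type distance; the function name and the choice of statistic are assumptions made for the example.

```python
import numpy as np
from scipy import stats

def stable_gof_bootstrap(x, n_boot=99, level=0.05, seed=0):
    """Parametric-bootstrap threshold for a goodness-of-fit test against a
    fitted alpha-stable model.  The Kolmogorov-Smirnov distance is a stand-in
    statistic; the thesis develops its own statistic and uses a modified
    Koutrouvelis characteristic-function estimator instead of MLE."""
    rng = np.random.default_rng(seed)
    alpha, beta, loc, scale = stats.levy_stable.fit(x)   # MLE; correct but slow
    cdf = lambda v: stats.levy_stable.cdf(v, alpha, beta, loc=loc, scale=scale)
    t_obs = stats.kstest(x, cdf).statistic

    boot = np.empty(n_boot)
    for b in range(n_boot):
        # Resample from the fitted model.  (A full parametric bootstrap would
        # re-estimate the parameters on each resample; kept fixed here for speed.)
        xb = stats.levy_stable.rvs(alpha, beta, loc=loc, scale=scale,
                                   size=len(x), random_state=rng)
        boot[b] = stats.kstest(xb, cdf).statistic
    threshold = np.quantile(boot, 1.0 - level)
    return t_obs, threshold   # reject the stable hypothesis if t_obs > threshold
```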
2

The Pareto stable distribution as a hypothesis for returns of stocks listed in the DAX

Höchstötter, Markus. January 2006 (has links)
Also published as: Karlsruhe, University, dissertation, 2006.
3

Evaluating performance in systems with heavy-tailed input: A quantile-based approach

Fiedler, Ulrich. January 2003 (has links)
Diss. no. 15233, Technical Sciences, SFIT Zurich. / Also available in the book trade: Aachen: Shaker. Includes a bibliography.
4

Bootstrap Unit Root Tests for Heavy-Tailed Observations

Parfionovas, Andrejus 01 May 2003 (has links)
We explore the application of the bootstrap unit root test to time series with heavy-tailed errors. The size and power of the tests are estimated for two different autoregressive (AR(1)) models using computer-simulated data. Real-data examples are also presented. Two different bootstrap methods and the subsampling approach are compared. Conclusions are drawn on the optimal bootstrap parameters, the range of applicability, and the performance of the tests.
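A minimal sketch of one possible residual-bootstrap unit root test for the no-constant AR(1) case is given below; the specific bootstrap schemes, the subsampling variant, and the test statistics compared in the thesis are not reproduced, and all function names here are illustrative.

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic for the no-constant AR(1) case:
    regress dy_t on y_{t-1} and return t(rho_hat) plus the residuals."""
    dy, ylag = np.diff(y), y[:-1]
    rho = np.dot(ylag, dy) / np.dot(ylag, ylag)
    resid = dy - rho * ylag
    s2 = np.dot(resid, resid) / (len(dy) - 1)
    se = np.sqrt(s2 / np.dot(ylag, ylag))
    return rho / se, resid

def bootstrap_unit_root_test(y, n_boot=999, seed=0):
    """Residual-bootstrap p-value for H0: unit root.  Resampling the residuals
    preserves their heavy-tailed shape without assuming Gaussian errors."""
    rng = np.random.default_rng(seed)
    t_obs, resid = df_tstat(y)
    resid = resid - resid.mean()
    count = 0
    for _ in range(n_boot):
        e = rng.choice(resid, size=len(y) - 1, replace=True)
        yb = np.concatenate(([y[0]], y[0] + np.cumsum(e)))   # random walk under H0
        t_b, _ = df_tstat(yb)
        count += t_b <= t_obs                                # left-tailed test
    return t_obs, (count + 1) / (n_boot + 1)
```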
5

Quantitative analysis of extreme risks in insurance and finance

Yuan, Zhongyi 01 May 2013 (has links)
In this thesis, we aim at a quantitative understanding of extreme risks. We use heavy-tailed distribution functions to model extreme risks, and use various tools, such as copulas and multivariate regular variation (MRV), to model dependence structures. We focus on modeling as well as quantitatively estimating certain measurements of extreme risks. We start with a credit risk management problem. More specifically, we consider a credit portfolio of multiple obligors subject to possible default. We propose a new structural model for the loss given default, which takes into account the severity of default. Then we study the tail behavior of the loss given default under the assumption that the losses of the obligors jointly follow an MRV structure. This structure provides an ideal framework for modeling both heavy tails and asymptotic dependence. Using hidden regular variation (HRV), we also accommodate the asymptotically independent case. Multivariate models involving Archimedean copulas, mixtures and linear transforms are revisited. We then derive asymptotic estimates for the Value at Risk and Conditional Tail Expectation of the loss given default and compare them with the traditional empirical estimates. Next, we consider an investor who invests in multiple lines of business and study a capital allocation problem. A randomly weighted sum structure is proposed, which can capture both the heavy-tailedness of losses and the dependence among them, while at the same time separating the magnitudes from the dependence. To pursue as much generality as possible, we do not impose any requirement on the dependence structure of the random weights. We first study the tail behavior of the total loss and obtain asymptotic formulas under various sets of conditions. Then we derive asymptotic formulas for capital allocation and further refine them to be explicit for some cases. Finally, we conduct extreme risk analysis for an insurer who makes investments. We consider a discrete-time risk model in which the insurer is allowed to invest a proportion of its wealth in a risky stock and keep the rest in a risk-free bond. We assume that the claim amounts within individual periods follow an autoregressive process with heavy-tailed innovations and that the log-returns of the stock follow another autoregressive process, independent of the former one. We derive an asymptotic formula for the finite-time ruin probability and propose a hybrid method, combining simulation with asymptotics, to compute this ruin probability more efficiently. As an application, we consider a portfolio optimization problem in which we determine the proportion invested in the risky stock that maximizes the expected terminal wealth subject to a constraint on the ruin probability.
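The sketch below illustrates only the simplest ingredients mentioned here: empirical Value at Risk and Conditional Tail Expectation for a univariate Pareto-type loss, checked against the closed-form Pareto tail quantities. The thesis's asymptotic estimates under MRV dependence structures are not reproduced; the tail index and confidence level are arbitrary choices for the example.

```python
import numpy as np

def var_cte(losses, q=0.99):
    """Empirical Value at Risk and Conditional Tail Expectation at level q."""
    var = np.quantile(losses, q)
    cte = losses[losses >= var].mean()
    return var, cte

rng = np.random.default_rng(1)
alpha = 2.5                                       # tail index; smaller alpha = heavier tail
losses = rng.pareto(alpha, size=100_000) + 1.0    # classical Pareto(alpha) with minimum 1
var99, cte99 = var_cte(losses, 0.99)

# Closed-form benchmark for a pure Pareto tail with survival x**(-alpha), x >= 1:
# VaR_q = (1 - q)**(-1/alpha) and CTE_q = VaR_q * alpha / (alpha - 1) for alpha > 1.
var_exact = (1 - 0.99) ** (-1 / alpha)
cte_exact = var_exact * alpha / (alpha - 1)
print("empirical:", var99, cte99)
print("exact    :", var_exact, cte_exact)
```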
6

Effective task assignment strategies for distributed systems under highly variable workloads

Broberg, James Andrew, james@broberg.com.au January 2007 (has links)
Heavy-tailed workload distributions are commonly experienced in many areas of distributed computing. Such workloads are highly variable, where a small number of very large tasks make up a large proportion of the workload, making the load very hard to distribute effectively. Traditional task assignment policies are ineffective under these conditions as they were formulated based on the assumption of an exponentially distributed workload. Size-based task assignment policies have been proposed to handle heavy-tailed workloads, but their applications are limited by their static nature and assumption of prior knowledge of a task's service requirement. This thesis analyses existing approaches to load distribution under heavy-tailed workloads, and presents a new generalised task assignment policy that significantly improves performance for many distributed applications by intelligently addressing the negative effects on performance that highly variable workloads cause. Many problems associated with the modelling and optimisation of systems under highly variable workloads are then addressed by a novel technique that approximates these workloads with simpler mathematical representations, without losing any of their pertinent original properties. Finally, we obtain advanced queueing metrics (such as the variance of key measurements like waiting time and slowdown, which are difficult to obtain analytically) through rigorous simulation.
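As a hedged illustration of why size-based assignment helps under heavy-tailed workloads (a generic, static SITA-style split, not the generalised adaptive policy proposed in the thesis), the sketch below dispatches Pareto-sized tasks to two FCFS hosts either at random or by a size cutoff chosen to balance total work; all numerical settings are assumed for the example.

```python
import numpy as np

def mean_wait(sizes, arrivals, assign):
    """Mean waiting time over two FCFS hosts; assign(size) -> host index."""
    free_at = [0.0, 0.0]                 # time at which each host becomes idle
    total_wait = 0.0
    for t, s in zip(arrivals, sizes):
        h = assign(s)
        start = max(t, free_at[h])
        total_wait += start - t
        free_at[h] = start + s
    return total_wait / len(sizes)

rng = np.random.default_rng(2)
n, load = 100_000, 0.7
sizes = rng.pareto(1.5, n) + 1.0         # heavy-tailed (Pareto) task sizes
arrivals = np.cumsum(rng.exponential(sizes.mean() / (2 * load), n))  # Poisson arrivals

# Size cutoff chosen so each host receives roughly half of the total work
# (the SITA-E idea); note this is static and assumes task sizes are known.
order = np.argsort(sizes)
cum_work = np.cumsum(sizes[order])
cutoff = sizes[order][np.searchsorted(cum_work, cum_work[-1] / 2)]

print("random dispatch :", mean_wait(sizes, arrivals, lambda s: rng.integers(2)))
print("size-based split:", mean_wait(sizes, arrivals, lambda s: int(s > cutoff)))
```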
7

Understanding Churn in Decentralized Peer-to-Peer Networks

Yao, Zhongmei August 2009 (has links)
This dissertation presents a novel modeling framework for understanding the dynamics of peer-to-peer (P2P) networks under churn (i.e., random user arrival/departure) and designing systems more resilient against node failure. The proposed models are applicable to general distributed systems under a variety of conditions on graph construction and user lifetimes. The foundation of this work is a new churn model that describes user arrival and departure as a superposition of many periodic (renewal) processes. It not only allows general (non-exponential) user lifetime distributions, but also captures heterogeneous behavior of peers. We utilize this model to analyze link dynamics and the ability of the system to stay connected under churn. Our results offer exact computation of user-isolation and graph-partitioning probabilities for any monotone lifetime distribution, including heavy-tailed cases found in real systems. We also propose an age-proportional random-walk algorithm for creating links in unstructured P2P networks that achieves zero isolation probability as system size becomes infinite. We additionally obtain many insightful results on the transient distribution of in-degree, edge arrival process, system size, and lifetimes of live users as simple functions of the aggregate lifetime distribution. The second half of this work studies churn in structured P2P networks that are usually built upon distributed hash tables (DHTs). Users in DHTs maintain two types of neighbor sets: routing tables and successor/leaf sets. The former tables determine link lifetimes and routing performance of the system, while the latter are built for ensuring DHT consistency and connectivity. Our first result in this area proves that robustness of DHTs is mainly determined by the zone size of selected neighbors, which leads us to propose a min-zone algorithm that significantly reduces link churn in DHTs. Our second result uses the Chen-Stein method to understand concurrent failures among strongly dependent successor sets of many DHTs and finds an optimal stabilization strategy for keeping Chord connected under churn.
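The sketch below is a much-simplified Monte Carlo stand-in for the isolation analysis described above: a user keeps k neighbor slots, neighbors depart after heavy-tailed lifetimes and are replaced after an exponential search delay, and the user is isolated if every slot is down at once. The lifetime and repair distributions, and the neglect of residual-lifetime effects, are simplifying assumptions, not the dissertation's exact model.

```python
import numpy as np

def isolation_probability(k=4, n_users=20_000, alpha=1.8, repair_mean=0.2, seed=3):
    """Monte Carlo estimate of the probability that a user with k neighbor
    slots is isolated (all k neighbors simultaneously down) during its own
    lifetime, with Pareto lifetimes and exponential replacement delays."""
    rng = np.random.default_rng(seed)
    isolated = 0
    for _ in range(n_users):
        life = rng.pareto(alpha) + 1.0          # this user's own lifetime
        events = []                             # (time, +1 slot goes down / -1 slot repaired)
        for _ in range(k):
            t = 0.0
            while t < life:
                t += rng.pareto(alpha) + 1.0    # neighbor stays alive until it departs
                if t >= life:
                    break
                down_until = t + rng.exponential(repair_mean)
                events.append((t, +1))
                events.append((min(down_until, life), -1))
                t = down_until
        events.sort()
        down = 0
        for _, step in events:
            down += step
            if down == k:                       # every neighbor slot is down at once
                isolated += 1
                break
    return isolated / n_users

print(isolation_probability())
```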
8

Fitting financial time series data to heavy tailed distribution

Huang, Liu-Yuen 23 June 2002 (has links)
Financial data, such as the daily or monthly maximum log return of a stock price, usually possess heavy-tail and skewness properties. In this thesis, we consider stock price data of computer hardware and money center banks. Heavy-tailed distributions, including the Pearson type IV, Pearson type VII and stable distributions, were fitted to the daily log returns of the data sets, and their goodness of fit was compared. For the monthly maximum log return, nonlinear threshold time series models were fitted with heavy-tailed innovation distributions. In addition, the value at risk and volatility of the data sets are derived from the fitted distributions.
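A hedged sketch of this kind of fit comparison is shown below on simulated returns. SciPy has no Pearson type IV distribution and its stable MLE is slow, so a Student-t (whose location-scale family coincides with Pearson type VII) and a normal benchmark stand in; the data and distribution choices are illustrative only.

```python
import numpy as np
from scipy import stats

# Simulated daily log returns as a stand-in for the stock data used in the
# thesis; with real prices you would use np.diff(np.log(prices)).
rng = np.random.default_rng(4)
log_ret = 0.01 * stats.t.rvs(df=3, size=2500, random_state=rng)

candidates = {
    "normal": stats.norm,
    "Student-t (Pearson VII family)": stats.t,
}
for name, dist in candidates.items():
    params = dist.fit(log_ret)
    ks = stats.kstest(log_ret, dist.cdf, args=params).statistic   # goodness of fit
    q01 = dist.ppf(0.01, *params)                                 # 1% quantile (99% one-day VaR)
    print(f"{name}: KS = {ks:.4f}, 1% quantile = {q01:.4f}")
```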
9

A Comparison Study on Natural and Head/tail Breaks Involving Digital Elevation Models

Lin, Yue January 2013 (has links)
The most widely used classification method for statistical mapping is Jenks's natural breaks. However, it has been found that natural breaks is not good at classifying data that have a scaling property. The scaling property is ubiquitous in many societal and natural phenomena: there are far more small things than large ones. For example, there are far more short streets than long ones, far more small street blocks than big ones, and far more small cities than large ones. Head/tail breaks is a new classification scheme designed for values that exhibit this scaling property. In Digital Elevation Models (DEMs), there are far more lower elevation points than higher elevation points. This study applies both head/tail breaks and natural breaks to values from five resolutions of DEMs. The aim is to examine the advantages and disadvantages of the head/tail breaks classification scheme compared with natural breaks. One of the five DEM resolutions is used as an example to illustrate the principle behind head/tail breaks in the case study. The results of head/tail breaks for the five resolutions differ slightly from each other in the number of classes or level of detail. The similar results of the comparisons support the previous finding that head/tail breaks outperforms natural breaks in reflecting the hierarchy of the data, although the number of classes could be reduced for better statistical mapping; otherwise the top classes, which contain very few values, would be nearly invisible in the map. A main conclusion of this study is that the head/tail breaks classification scheme is better than natural breaks at presenting the hierarchy or scaling of elevation data, with the top classes gathered into one. Another conclusion is that as the resolution gets higher, the scaling property becomes more striking.
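Head/tail breaks itself is simple enough to state in a few lines; the sketch below follows the usual formulation (split at the mean, keep the above-mean head, and recurse while the head remains a clear minority), with the 40% head-ratio limit and the example data as assumed defaults rather than values taken from this study.

```python
import numpy as np

def head_tail_breaks(values, head_ratio_limit=0.4, max_classes=10):
    """Head/tail breaks: split at the mean, keep the 'head' (values above the
    mean), and recurse on it while the head stays a clear minority."""
    values = np.asarray(values, dtype=float)
    breaks = []
    while len(values) > 1 and len(breaks) < max_classes:
        m = values.mean()
        head = values[values > m]
        if len(head) == 0 or len(head) / len(values) > head_ratio_limit:
            break                    # head is no longer far smaller than the tail
        breaks.append(m)             # the mean becomes a class boundary
        values = head
    return breaks

# Heavy-tailed "elevation-like" values: far more low values than high ones,
# so several nested head classes emerge.
rng = np.random.default_rng(5)
print(head_tail_breaks(rng.pareto(1.2, 10_000) + 1.0))
```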
10

An Analysis of Quantile Measures of Kurtosis: Center and Tails

Kotz, Samuel, Seier, Edith 01 June 2009 (has links)
The consequences of substituting the denominator Q3(p) - Q1(p) by Q2 - Q1(p) in Groeneveld's class of quantile measures of kurtosis (γ2(p)) for symmetric distributions are explored using the symmetric influence function. The relationship between the measure γ2(p) and the alternative class of kurtosis measures κ2(p) is derived, together with the relationship between their influence functions. The Laplace, Logistic, symmetric Two-sided Power, Tukey and Beta distributions are considered in the examples in order to discuss the results obtained pertaining to unimodal, heavy-tailed, bounded-domain and U-shaped distributions.
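Groeneveld's γ2(p) class itself is not reproduced here; as a related, well-known quantile-based alternative, the sketch below computes Moors' octile kurtosis for a few symmetric distributions, which conveys the same idea of measuring kurtosis from quantiles rather than moments.

```python
import numpy as np

def moors_kurtosis(x):
    """Moors' octile-based kurtosis: ((E7 - E5) + (E3 - E1)) / (E6 - E2),
    a quantile measure related to, but not the same as, the gamma2(p)
    class discussed in the article."""
    e = np.quantile(x, [i / 8 for i in range(1, 8)])   # octiles E1..E7
    return ((e[6] - e[4]) + (e[2] - e[0])) / (e[5] - e[1])

rng = np.random.default_rng(6)
samples = {
    "normal": rng.normal(size=100_000),
    "Laplace (heavier tails)": rng.laplace(size=100_000),
    "uniform (light tails)": rng.uniform(-1, 1, size=100_000),
}
for name, sample in samples.items():
    print(f"{name}: Moors kurtosis = {moors_kurtosis(sample):.3f}")
```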
