501

GARCH models based on Brownian Inverse Gaussian innovation processes / Gideon Griebenow

Griebenow, Gideon January 2006 (has links)
In classic GARCH models for financial returns the innovations are usually assumed to be normally distributed. However, it is generally accepted that a non-normal innovation distribution is needed in order to account for the heavier tails often encountered in financial returns. Since the structure of the normal inverse Gaussian (NIG) distribution makes it an attractive alternative innovation distribution for this purpose, we extend the normal GARCH model by assuming that the innovations are NIG-distributed. We use the normal variance mixture interpretation of the NIG distribution to show that a NIG innovation may be interpreted as a normal innovation coupled with a multiplicative random impact factor adjustment of the ordinary GARCH volatility. We relate this new volatility estimate to realised volatility and suggest that the random impact factors are due to a news noise process influencing the underlying returns process. This GARCH model with NIG-distributed innovations leads to more accurate parameter estimates than the normal GARCH model. In order to obtain even more accurate parameter estimates, and since we expect an information gain if we use more data, we further extend the model to cater for high, low and close data, as well as full intraday data, instead of only daily returns. This is achieved by introducing the Brownian inverse Gaussian (BIG) process, which follows naturally from the unit inverse Gaussian distribution and standard Brownian motion. Fitting these models to empirical data, we find that the accuracy of the model fit increases as we move from the models assuming normally distributed innovations and allowing for only daily data to those assuming underlying BIG processes and allowing for full intraday data. However, we do encounter one problematic result, namely that there is empirical evidence of time dependence in the random impact factors. This means that the news noise processes, which we assumed to be independent over time, are indeed time dependent, as can actually be expected. In order to cater for this time dependence, we extend the model still further by allowing for autocorrelation in the random impact factors. The increased complexity that this extension introduces means that we can no longer rely on standard Maximum Likelihood methods, but have to turn to Simulated Maximum Likelihood methods, in conjunction with Efficient Importance Sampling and the Control Variate variance reduction technique, in order to obtain an approximation to the likelihood function and the parameter estimates. We find that this time dependent model assuming an underlying BIG process and catering for full intraday data fits generated data and empirical data very well, as long as enough intraday data is available. / Thesis (Ph.D. (Risk Analysis))--North-West University, Potchefstroom Campus, 2006.
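As an illustration of the variance-mixture construction described in this abstract, the sketch below simulates a GARCH(1,1) process whose innovation at each step is a standard normal draw scaled by the square root of an inverse Gaussian "random impact factor" with unit mean. It is a minimal, symmetric-case sketch with illustrative parameter values (omega, alpha, beta, delta), not the thesis's estimation procedure; the heavier-than-normal tails show up as sample kurtosis well above 3.

    import numpy as np

    # Minimal sketch: simulate a GARCH(1,1) process whose innovations follow a
    # (symmetric) normal inverse Gaussian law, built via the normal variance
    # mixture view: z_t = sqrt(W_t) * eps_t, where W_t is an inverse Gaussian
    # "random impact factor" with mean 1 and eps_t is standard normal.
    # omega, alpha, beta and the IG shape parameter delta are illustrative values.
    rng = np.random.default_rng(0)

    def simulate_nig_garch(n, omega=1e-6, alpha=0.08, beta=0.90, delta=2.0):
        r = np.empty(n)
        sigma2 = omega / (1.0 - alpha - beta)    # start at the unconditional variance
        for t in range(n):
            w = rng.wald(mean=1.0, scale=delta)  # inverse Gaussian impact factor, E[w] = 1
            eps = rng.standard_normal()
            z = np.sqrt(w) * eps                 # variance-mixture innovation
            r[t] = np.sqrt(sigma2) * z
            sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
        return r

    returns = simulate_nig_garch(5000)
    kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2
    print("sample kurtosis (normal = 3):", kurt)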
502

Bayesian Analysis of Spatial Point Patterns

Leininger, Thomas Jeffrey January 2014 (has links)
We explore the posterior inference available for Bayesian spatial point process models. In the literature, discussion of such models is usually focused on model fitting and rejecting complete spatial randomness, with model diagnostics and posterior inference often left as an afterthought. Posterior predictive point patterns are shown to be useful in performing model diagnostics and model selection, as well as providing a wide array of posterior model summaries. We prescribe Bayesian residuals and methods for cross-validation and model selection for Poisson processes, log-Gaussian Cox processes, Gibbs processes, and cluster processes. These novel approaches are demonstrated using existing datasets and simulation studies. / Dissertation
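As a minimal illustration of a posterior predictive check for a point process, the sketch below uses the simplest case: a homogeneous Poisson process on the unit square with a conjugate Gamma prior on the intensity. The observed pattern, prior hyperparameters, and summary statistic are all stand-ins; the richer models discussed in the abstract (log-Gaussian Cox, Gibbs, cluster processes) require MCMC rather than this closed-form posterior.

    import numpy as np

    # Posterior predictive check for a homogeneous Poisson process on the unit
    # square, with a conjugate Gamma(a, b) prior on the intensity lambda.
    # The "observed" pattern and prior values are illustrative stand-ins.
    rng = np.random.default_rng(1)

    observed = rng.uniform(size=(120, 2))         # stand-in for an observed point pattern
    n_obs, area = observed.shape[0], 1.0
    a, b = 1.0, 0.01                              # assumed Gamma prior hyperparameters

    # Under this conjugate model the posterior for lambda is Gamma(a + n_obs, b + area).
    lam_draws = rng.gamma(shape=a + n_obs, scale=1.0 / (b + area), size=2000)

    # Posterior predictive replicate patterns, summarized here by their point counts.
    rep_counts = rng.poisson(lam_draws * area)
    p_value = np.mean(rep_counts >= n_obs)        # simple posterior predictive p-value
    print(f"posterior predictive p-value for the count statistic: {p_value:.2f}")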
503

Multivariate Spatial Process Gradients with Environmental Applications

Terres, Maria Antonia January 2014 (has links)
Previous papers have elaborated formal gradient analysis for spatial processes, focusing on the distribution theory for directional derivatives associated with a response variable assumed to follow a Gaussian process model. In the current work, these ideas are extended to additionally accommodate one or more continuous covariate(s) whose directional derivatives are of interest and to relate the behavior of the directional derivatives of the response surface to those of the covariate surface(s). It is of interest to assess whether, in some sense, the gradients of the response follow those of the explanatory variable(s), thereby gaining insight into the local relationships between the variables. The joint Gaussian structure of the spatial random effects and associated directional derivatives allows for explicit distribution theory and, hence, kriging across the spatial region using multivariate normal theory. The gradient analysis is illustrated for bivariate and multivariate spatial models, non-Gaussian responses such as presence-absence and point patterns, and outlined for several additional spatial modeling frameworks that commonly arise in the literature. Working within a hierarchical modeling framework, posterior samples enable all gradient analyses to occur as post model fitting procedures. / Dissertation
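For reference, a compact statement of the standard distribution theory for Gaussian process directional derivatives that this line of work builds on (notation assumed: K is the stationary covariance function and u a unit direction):

    % Directional derivative of a mean-square differentiable stationary Gaussian
    % process Y with covariance function K; u is a unit direction (notation assumed).
    D_{\mathbf{u}} Y(\mathbf{s})
      = \lim_{h \to 0} \frac{Y(\mathbf{s} + h\mathbf{u}) - Y(\mathbf{s})}{h}
      = \mathbf{u}^{\top} \nabla Y(\mathbf{s}),
    \qquad
    \operatorname{Var}\!\bigl(D_{\mathbf{u}} Y(\mathbf{s})\bigr)
      = -\,\mathbf{u}^{\top} \bigl(\nabla \nabla^{\top} K\bigr)(\mathbf{0})\, \mathbf{u}.

Because the process and its gradient are jointly Gaussian, response and covariate gradients can be kriged together with ordinary multivariate normal calculations, which is what makes the post-fitting gradient analysis described above tractable.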
504

Using Gaussian Processes for the Calibration and Exploration of Complex Computer Models

Coleman-Smith, Christopher January 2014 (has links)
Cutting edge research problems require the use of complicated and computationally expensive computer models. I will present a practical overview of the design and analysis of computer experiments in high-energy nuclear physics and astrophysics. The aim of these experiments is to infer credible ranges for certain fundamental parameters of the underlying physical processes through the analysis of model output and experimental data.

To be truly useful, computer models must be calibrated against experimental data. Gaining an understanding of the response of expensive models across the full range of inputs can be a slow and painful process. Gaussian process emulators can be an efficient and informative surrogate for expensive computer models and prove to be an ideal mechanism for exploring the response of these models to variations in their inputs.

A sensitivity analysis can be performed on these model emulators to characterize and quantify the relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. Sensitivity analysis allows us to identify which model parameters can be most efficiently constrained by the given observational data set.

In this thesis I describe a range of techniques for the calibration and exploration of the complex and expensive computer models so common in modern physics research. These statistical methods are illustrated with examples drawn from the fields of high-energy nuclear physics and galaxy formation. / Dissertation
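As a hedged sketch of the emulation idea (not the thesis's own implementation), the following fits a Gaussian process emulator to a handful of runs of a placeholder simulator using scikit-learn and then predicts the response, with uncertainty, at untried inputs; the simulator, design size, and kernel settings are illustrative assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import ConstantKernel, RBF

    # Minimal emulator sketch: treat expensive_model as a stand-in for a costly
    # simulator, train a GP on a small design, then predict with uncertainty at
    # untried inputs. All names and settings are illustrative.
    rng = np.random.default_rng(2)

    def expensive_model(x):                       # placeholder for the real simulator
        return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

    X_design = rng.uniform(size=(30, 2))          # 30 training runs in [0, 1]^2
    y_design = expensive_model(X_design)

    kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3, 0.3])
    emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    emulator.fit(X_design, y_design)

    X_new = rng.uniform(size=(5, 2))
    mean, std = emulator.predict(X_new, return_std=True)
    print(np.c_[mean, std])                       # emulator mean and predictive std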
505

New tools for unsupervised learning

Xiao, Ying 12 January 2015 (has links)
In an unsupervised learning problem, one is given an unlabelled dataset and hopes to find some hidden structure; the prototypical example is clustering similar data. Such problems often arise in machine learning and statistics, but also in signal processing, theoretical computer science, and any number of quantitative scientific fields. The distinguishing feature of unsupervised learning is that there are no privileged variables or labels which are particularly informative, and thus the greatest challenge is often to differentiate between what is relevant or irrelevant in any particular dataset or problem. In the course of this thesis, we study a number of problems which span the breadth of unsupervised learning. We make progress in Gaussian mixtures, independent component analysis (where we solve the open problem of underdetermined ICA), and we formulate and solve a feature selection/dimension reduction model. Throughout, our goal is to give finite sample complexity bounds for our algorithms -- these are essentially the strongest type of quantitative bound that one can prove for such algorithms. Some of our algorithmic techniques turn out to be very efficient in practice as well. Our major technical tool is tensor spectral decomposition: tensors are generalisations of matrices, and often allow access to the "fine structure" of data. Thus, they are often the right tools for unravelling the hidden structure in an unsupervised learning setting. However, naive generalisations of matrix algorithms to tensors run into NP-hardness results almost immediately, and thus to solve our problems, we are obliged to develop two new tensor decompositions (with robust analyses) from scratch. Both of these decompositions are polynomial time, and can be viewed as efficient generalisations of PCA extended to tensors.
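For context, the sketch below implements the classical tensor power method with deflation for a symmetric, orthogonally decomposable third-order tensor. This is only the standard building block of tensor spectral decomposition, not the new, more robust decompositions developed in the thesis, which go beyond the orthogonal setting.

    import numpy as np

    # Classical tensor power method with deflation for a symmetric 3-tensor that is
    # a sum of scaled triple outer products of orthonormal vectors. Illustrative only.
    rng = np.random.default_rng(3)

    def tensor_power_method(T, n_components, n_iter=100):
        T = T.copy()
        lams, vecs = [], []
        for _ in range(n_components):
            u = rng.standard_normal(T.shape[0])
            u /= np.linalg.norm(u)
            for _ in range(n_iter):
                u = np.einsum('ijk,j,k->i', T, u, u)     # contract T with u twice
                u /= np.linalg.norm(u)
            lam = np.einsum('ijk,i,j,k->', T, u, u, u)
            lams.append(lam)
            vecs.append(u)
            T -= lam * np.einsum('i,j,k->ijk', u, u, u)  # deflate the found component
        return np.array(lams), np.array(vecs)

    # Build a small test tensor from two orthogonal components and recover them.
    a = np.array([1.0, 0.0, 0.0]); b = np.array([0.0, 1.0, 0.0])
    T = 3.0 * np.einsum('i,j,k->ijk', a, a, a) + 1.5 * np.einsum('i,j,k->ijk', b, b, b)
    print(tensor_power_method(T, 2))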
506

Cough Detection and Forecasting for Radiation Treatment of Lung Cancer

Qiu, Zigang Jimmy 06 April 2010 (has links)
In radiation therapy, a treatment plan is designed to make the delivery of radiation to a target more accurate, effective, and less damaging to surrounding healthy tissues. In lung sites, the tumor is affected by the patient’s respiratory motion. Despite tumor motion, current practice still uses a static delivery plan. Unexpected changes due to coughs and sneezes are not taken into account and, as a result, the tumor is not treated accurately and healthy tissues are damaged. In this thesis we detail a framework that uses an accelerometer device to detect and forecast coughs. The accelerometer measurements are modeled as an ARMA process to make forecasts. We draw from studies in cough physiology and use amplitudes and durations of the forecasted breathing cycles as features to estimate parameters of Gaussian mixture models for the cough and normal breathing classes. The system was tested on 10 volunteers, where each data set consisted of one 3-5 minute accelerometer measurement to train the system and two 1-3 minute measurements for testing.
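A minimal sketch of the forecasting step, assuming statsmodels as the fitting tool and a synthetic breathing-like signal in place of real accelerometer data (the ARMA order and sampling rate are illustrative, and the GMM classification stage is omitted):

    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    # Fit an ARMA model to a stand-in accelerometer trace and forecast the next
    # samples of the breathing signal. The synthetic signal and the order (2, 0, 2)
    # are placeholders; cough/normal-breathing classification is not shown here.
    rng = np.random.default_rng(4)
    t = np.arange(2000) / 50.0                           # stand-in 50 Hz sampling rate
    signal = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)

    model = ARIMA(signal, order=(2, 0, 2))               # ARMA(2,2) = ARIMA with d = 0
    fit = model.fit()
    forecast = fit.forecast(steps=100)                   # next two seconds of samples
    print(forecast[:5])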
508

Stochastic Characterization And Mathematical Analysis Of Feedforward Linearizers

Coskun, Arslan Hakan 01 January 2003 (has links) (PDF)
Feedforward is known to be one of the best methods for power amplifier linearization due to its superior linearization performance and broadband stable operation. However, feedforward systems have relatively poor power efficiency and are complicated by the presence of two nonlinear amplifiers and the requirements of amplitude, phase and delay matching within two different loops. In this thesis, a stochastic characterization of a simple feedforward system based on autocorrelation analysis is presented for Code Division Multiple Access (CDMA) applications, taking amplitude and delay mismatches into consideration. It is assumed that the input signal can be represented as Gaussian noise, that the main and error amplifiers can be modeled with third-order AM/AM nonlinearities, and that there is no phase mismatch within the loops. Closed-form expressions are thereby obtained that relate the main-channel and distorted adjacent-channel power at any point in the feedforward circuitry to the system parameters. Consequently, a handy mathematical tool is obtained for rapidly specifying the circuit parameters for optimum linearity performance and efficiency. The developed analytical model has been verified by Radio Frequency (RF) and system simulations. An alternative approach to modeling feedforward systems for arbitrary signals is also considered and verified with system simulations.
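To give a flavour of the autocorrelation algebra involved, the following is the standard Isserlis-theorem result for a memoryless third-order AM/AM model driven by zero-mean Gaussian noise; the coefficients a_1 and a_3 are assumed notation, not taken from the thesis. The term proportional to R_x(tau) carries the main-channel power, while the R_x^3(tau) term produces the spectral regrowth that spills into adjacent channels.

    % Output autocorrelation of a memoryless third-order AM/AM model
    % y(t) = a_1 x(t) + a_3 x^3(t) driven by zero-mean Gaussian noise x(t)
    % with autocorrelation R_x(\tau); a_1, a_3 are assumed notation.
    R_y(\tau)
      = \bigl(a_1^2 + 6 a_1 a_3 R_x(0) + 9 a_3^2 R_x(0)^2\bigr) R_x(\tau)
      + 6 a_3^2 R_x^3(\tau).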
509

Two-dimensional Finite Volume Weighted Essentially Non-oscillatory Euler Schemes With Uniform And Non-uniform Grid Coefficients

Elfarra, Monier Ali 01 February 2005 (has links) (PDF)
In this thesis, Finite Volume Weighted Essentially Non-Oscillatory (FV-WENO) codes for the one- and two-dimensional discretised Euler equations are developed. The construction and application of the FV-WENO scheme and codes are described. The effects of the grid coefficients, as well as of the Gaussian quadrature, on the solution are also tested and discussed. WENO schemes are high-order accurate schemes designed for problems with piecewise smooth solutions containing discontinuities. The key idea lies at the high approximation level, where a convex combination of all the candidate stencils is used with certain weights. Those weights serve to suppress the stencils that contain a discontinuity. WENO schemes have been quite successful in applications, especially for problems containing both shocks and complicated smooth solution structures. The applications tested in this thesis are a diverging nozzle, shock-vortex interaction, supersonic channel flow, flow over a bump, and a supersonic staggered wedge cascade. The numerical solutions for the diverging nozzle and the supersonic channel flow are compared with analytical solutions. The results for the shock-vortex interaction are compared with Roe-scheme results. The results for the bump flow and the supersonic staggered cascade are compared with results from the literature.
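For reference, the convex combination of candidate stencils mentioned above is realized through the standard nonlinear WENO weights, shown here in the generic Jiang-Shu form (d_k are the linear weights, beta_k the smoothness indicators, and epsilon a small constant; this notation is assumed rather than taken from the thesis):

    % Standard WENO nonlinear weights: the reconstruction is a convex combination
    % of the candidate-stencil reconstructions p_k, with weights that collapse
    % toward zero on stencils containing a discontinuity (large beta_k).
    p(x) = \sum_{k} \omega_k \, p_k(x),
    \qquad
    \omega_k = \frac{\alpha_k}{\sum_{l} \alpha_l},
    \qquad
    \alpha_k = \frac{d_k}{(\varepsilon + \beta_k)^2}.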
510

Unitary Integrations for Unified MIMO Capacity and Performance Analysis

Ghaderipoor, Alireza 11 1900 (has links)
Integrations over the unitary group are required in many applications, including the joint eigenvalue distributions of Wishart matrices. In this thesis, a universal integration framework based on character expansions is proposed for any unitary integral with general rectangular complex matrices in the integrand. The proposed method is applied to solve, in their general form, several well-known unitary integrals that had not previously been solved in full generality, such as the generalized Harish-Chandra-Itzykson-Zuber integral. These integrals have applications in quantum chromodynamics and color-flavor transformations in physics. The unitary integral results are used to obtain new expressions for the joint eigenvalue distributions of the semi-correlated and fully correlated central Wishart matrices, as well as the i.i.d. and uncorrelated noncentral Wishart matrices, in a unified approach. Compared to previous expressions in the literature, these new expressions are much easier to compute and to apply in further analysis. In addition, the joint eigenvalue distribution of the fully correlated case is a new result in random matrix theory. The new distribution results are employed to obtain the individual eigenvalue densities of Wishart matrices, as well as the capacity of multiple-input multiple-output (MIMO) wireless channels. The joint eigenvalue distribution of the i.i.d. case is used to obtain the largest eigenvalue density and the bit error rate (BER) of optimal beamforming in finite-series expressions. When complete channel state information is not available at the transmitter, a codebook of beamformers is used by the transmitter and the receiver. In this thesis, a codebook design method using the genetic algorithm is proposed, which reduces the design complexity and achieves codebooks with large minimum distance. Exploiting the specific structure of these beamformers, an order-and-bound algorithm is proposed to reduce the beamformer selection complexity at the receiver side. By employing a geometrical approach, an approximate BER for limited-feedback beamforming is derived in finite-series expressions.
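For context, the standard link between the Wishart eigenvalues and MIMO capacity that these joint eigenvalue distributions feed into is the following (notation assumed: H is the N_r x N_t channel matrix, rho the SNR, with equal power allocation across transmit antennas):

    % Ergodic MIMO capacity in terms of the eigenvalues lambda_i of the Wishart
    % matrix W = H H^H formed from the channel matrix H (notation assumed).
    C = \mathbb{E}\!\left[ \log_2 \det\!\left( \mathbf{I} + \frac{\rho}{N_t}\, \mathbf{H}\mathbf{H}^{H} \right) \right]
      = \mathbb{E}\!\left[ \sum_{i} \log_2\!\left( 1 + \frac{\rho}{N_t}\, \lambda_i \right) \right].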
