11

A heteroscedastic volatility model with Fama and French risk factors for portfolio returns in Japan / En heteroskedastisk volatilitetsmodell med Fama och Frenchriskfaktorer för portföljavkastning i Japan

Wallin, Edvin, Chapman, Timothy January 2021 (has links)
This thesis has used the Fama and French five-factor model (FF5M) and proposed an alternative model. The proposed model is named the Fama and French five-factor heteroscedastic student's model (FF5HSM). The model utilises an ARMA model for the returns, with the FF5M factors incorporated, and a GARCH(1,1) model for the volatility. The FF5HSM uses returns data from the FF5M's portfolio construction for the Japanese stock market and the five risk factors. The portfolios capture different levels of market capitalisation, and the factors capture market risk. The ARMA modelling is used to address the autocorrelation present in the data. To deal with the heteroscedasticity in daily stock returns, a GARCH(1,1) model has been used; this order of GARCH model is regarded as reasonable in the academic literature for this type of data. Another finding in earlier research is that asset returns do not follow the normality assumption that a regular regression model makes. Therefore, the skewed Student's t-distribution has been assumed for the error terms. The results indicate that the FF5HSM has a better in-sample fit than the FF5M, addressing the heteroscedasticity and autocorrelation in the data and reducing them depending on the portfolio. Regarding forecasting, both the FF5HSM and the FF5M are accurate models, depending on the portfolio to which the model is applied.
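The volatility component described in this abstract can be sketched as a plain GARCH(1,1) conditional-variance recursion. The Python fragment below is a minimal illustration only: the parameter values and the simulated heavy-tailed residuals are assumptions for demonstration, not estimates from the thesis.

```python
import numpy as np

def garch_variance(resid, omega, alpha, beta):
    """Conditional variance h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1}."""
    h = np.empty_like(resid)
    h[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(1, len(resid)):
        h[t] = omega + alpha * resid[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(0)
e = rng.standard_t(df=5, size=1000) * 0.01   # illustrative heavy-tailed residuals
h = garch_variance(e, omega=1e-6, alpha=0.05, beta=0.90)
print(h.min() > 0)  # True: positive parameters keep every variance positive
```

In a full estimation exercise the parameters would be fitted jointly with the ARMA mean equation by maximising a skewed Student's t likelihood, as the thesis describes.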
12

Likelihood inference for multiple step-stress models from a generalized Birnbaum-Saunders distribution under time constraint

Alam, Farouq 11 1900 (has links)
Researchers conduct life testing on objects of interest in an attempt to determine their life distribution as a means of studying their reliability (or survivability). Determining the life distribution of the objects under study helps manufacturers to identify potential faults and to improve quality. Researchers sometimes conduct accelerated life tests (ALTs) to ensure that failure among the tested units occurs earlier than it would under normal operating (or environmental) conditions. Moreover, such experiments allow the experimenters to examine the effects of high levels of one or more stress factors on the lifetimes of experimental units. Examples of stress factors include, but are not limited to, cycling rate, dosage, humidity, load, pressure, temperature, vibration, and voltage. A special class of ALTs is step-stress accelerated life testing. In this type of experiment, the study sample is tested at initial stress levels for a given period of time. Afterwards, the levels of the stress factors are increased at prefixed points of time called stress-change times. In practice, time and resources are limited; thus, any experiment is expected to be constrained by a deadline called the termination time. Hence, the observed information may be subject to Type-I censoring. This study discusses maximum likelihood inferential methods for the parameters of multiple step-stress models from a generalized Birnbaum-Saunders distribution under time constraint, alongside other inference-related problems. Two general inference frameworks are studied, namely the observed likelihood (OL) framework and the expectation-maximization (EM) framework. The latter framework is considered because Type-I censored data may be obtained. In the first framework, the scoring algorithm is used to obtain the maximum likelihood estimators (MLEs) of the model parameters.
In the second framework, EM-based algorithms are utilized to determine the required MLEs. Obtaining observed information matrices under both frameworks is also discussed. Accordingly, asymptotic and bootstrap-based interval estimators for the model parameters are derived. Model discrimination within the considered generalized Birnbaum-Saunders family is carried out by the likelihood ratio test as well as by information-based criteria. The discussed step-stress models are illustrated by analyzing three real-life datasets. Accordingly, establishing optimal multiple step-stress test plans based on cost considerations and three optimality criteria is discussed. Since the maximum likelihood estimators are obtained by numerical optimization that involves maximizing objective functions, the optimization methods used, and their software implementations in R, are discussed. Because computational aspects are in focus in this study, the benefits of parallel computing in R, as a high-performance computational approach, are briefly addressed. Numerical examples and Monte Carlo simulations are used to illustrate and evaluate the methods presented in this thesis. / Thesis / Doctor of Science (PhD)
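To make the step-stress setup concrete, the following sketch uses a simple two-level step-stress model under the cumulative-exposure assumption with exponential lifetimes, a deliberate simplification of the generalized Birnbaum-Saunders model studied in the thesis. All data values, and the names `tau` (stress-change time) and `T` (Type-I termination time), are illustrative assumptions.

```python
import numpy as np

def neg_loglik(theta1, theta2, fails, n_cens, tau, T):
    """Negative log-likelihood: theta1, theta2 are mean lifetimes at the
    two stress levels; censored units contribute only survival to T."""
    ll = 0.0
    for t in fails:
        if t <= tau:   # failure observed at the first stress level
            ll += -np.log(theta1) - t / theta1
        else:          # second level: cumulative exposure carries over tau/theta1
            ll += -np.log(theta2) - tau / theta1 - (t - tau) / theta2
    ll += n_cens * (-tau / theta1 - (T - tau) / theta2)  # Type-I censored units
    return -ll

fails, n_cens, tau, T = [0.5, 1.2, 2.0, 4.5, 6.0], 2, 3.0, 8.0
n1 = sum(t <= tau for t in fails)                    # failures before tau
n2 = len(fails) - n1                                 # failures after tau
U1 = sum(min(t, tau) for t in fails) + n_cens * tau  # total exposure, level 1
U2 = sum(max(t - tau, 0.0) for t in fails) + n_cens * (T - tau)
th1, th2 = U1 / n1, U2 / n2                          # closed-form exponential MLEs
print(neg_loglik(th1, th2, fails, n_cens, tau, T)
      < neg_loglik(2 * th1, th2, fails, n_cens, tau, T))  # True: MLE maximises
```

Under a Birnbaum-Saunders lifetime model the MLEs no longer have closed forms, which is what motivates the scoring and EM-based algorithms discussed in the thesis.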
13

Revisiting the CAPM and the Fama-French Multi-Factor Models: Modeling Volatility Dynamics in Financial Markets

Michaelides, Michael 25 April 2017 (has links)
The primary objective of this dissertation is to revisit the CAPM and the Fama-French multi-factor models with a view to evaluating the validity of the probabilistic assumptions imposed (directly or indirectly) on the particular data used. By thoroughly testing the assumptions underlying these models, several departures are found and the original linear regression models are respecified. The respecification results in a family of heterogeneous Student's t models, which are shown to account for all the statistical regularities in the data. This family of models provides an appropriate basis for revisiting the empirical adequacy of the CAPM and the Fama-French multi-factor models, as well as of other models, such as alternative asset pricing models and risk evaluation models. Along the lines of providing a sound basis for reliable inference, the respecified models can serve as a coherent basis for selecting the relevant factors from the set of possible ones. The latter contributes to enhancing the substantive adequacy of the CAPM and the multi-factor models. / Ph. D.
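The case for respecifying Gaussian regressions with Student's t errors can be motivated by a toy likelihood comparison: on heavy-tailed data, a Student's t density dominates the best-fitting normal. The sketch below uses simulated data and illustrative parameter choices, not the dissertation's respecified models.

```python
import numpy as np
from math import lgamma, log, pi

def t_loglik(x, nu, mu, sigma):
    """Log-likelihood of a location-scale Student's t sample."""
    z = (x - mu) / sigma
    const = (lgamma((nu + 1) / 2) - lgamma(nu / 2)
             - 0.5 * log(nu * pi) - log(sigma))
    return float(np.sum(const - (nu + 1) / 2 * np.log1p(z ** 2 / nu)))

def norm_loglik(x, mu, sigma):
    """Log-likelihood of a Gaussian sample."""
    z = (x - mu) / sigma
    return float(np.sum(-0.5 * log(2 * pi) - log(sigma) - 0.5 * z ** 2))

rng = np.random.default_rng(42)
x = rng.standard_t(df=3, size=5000)        # heavy-tailed stand-in for returns
mu, sd = x.mean(), x.std()                 # Gaussian MLE fit
scale = sd * np.sqrt((3 - 2) / 3)          # moment-matched t scale for nu = 3
print(t_loglik(x, 3, mu, scale) > norm_loglik(x, mu, sd))
```

In practice the degrees-of-freedom parameter would itself be estimated, and misspecification testing (as in the dissertation) would guide the choice of error distribution rather than an assumed `nu`.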
14

Who Spoke What And Where? A Latent Variable Framework For Acoustic Scene Analysis

Sundar, Harshavardhan 26 March 2016 (has links) (PDF)
Speech is by far the most natural form of communication between human beings. It is intuitive, expressive, and contains information at several cognitive levels. We, as humans, are perceptive to several of these cognitive levels of information: in addition to the content of what is being spoken, we can gather information pertaining to the identity of the speaker, the speaker's gender, emotion, location, language, and so on. This makes speech-based human-machine interaction (HMI) both desirable and challenging, for the same set of reasons. For HMI to be natural for humans, it is imperative that a machine understands the information present in speech, at least at the level of speaker identity, language, location in space, and a summary of what is being spoken. Although one can draw parallels between human-human interaction and HMI, the two differ in their purpose. We, as humans, interact with a machine mostly in the context of getting a task done more efficiently than is possible without the machine. Thus, typically in HMI, controlling the machine in a specific manner is the primary goal. In this context, it can be argued that HMI with a limited vocabulary containing specific commands would suffice for a more efficient use of the machine. In this thesis, we address the problem of "Who spoke what and where?", in the context of a machine understanding the information pertaining to the identities of the speakers, their locations in space, and the keywords they spoke, thus considering three levels of information: speaker identity (who), location (where), and keywords (what). This could be addressed with the help of multiple sensors, such as microphones, video cameras, proximity sensors, and motion detectors, combining all these modalities; however, we explore the use of microphones alone. In practical scenarios there are often times when multiple people are talking simultaneously.
Thus, the goal of this thesis is to detect all the speakers, their keywords, and their locations in mixture signals containing speech from simultaneous speakers. Addressing this problem of "Who spoke what and where?" using only microphone signals forms a part of the acoustic scene analysis (ASA) of speech-based acoustic events. We divide the problem into two sub-problems: "Who spoke what?" and "Who spoke where?". Each of these problems is cast in a generic latent variable (LV) framework to capture information in speech at different levels. We associate an LV with each of these levels and model the relationship between the levels using conditional dependency. The sub-problem of "Who spoke what?" is addressed using a single-channel microphone signal, by modelling the mixture signal in terms of the LV mass functions of speaker identity, the conditional mass function of the keyword spoken given the speaker identity, and a speaker-specific keyword model. The LV mass functions are estimated in a maximum likelihood (ML) framework using the expectation-maximization (EM) algorithm, with Student's t mixture models (tMMs) as speaker-specific keyword models. Motivated by HMI in a home environment, we have created our own database. In mixture signals containing two speakers uttering keywords simultaneously, the proposed framework achieves an accuracy of 82% in detecting both the speakers and their respective keywords. The other sub-problem, "Who spoke where?", is addressed in two stages. In the first stage, the enclosure is discretized into sectors. The speakers and the sectors in which they are located are detected in an approach similar to the one employed for "Who spoke what?", using signals collected from a uniform circular array (UCA). However, in place of speaker-specific keyword models, we use tMM-based speaker models trained on clean speech, along with a simple delay-and-sum beamformer (DSB).
In the second stage, the speakers are localized within the active sectors using a novel region-constrained localization technique based on time difference of arrival (TDOA). Since the problem being addressed is a multi-label classification task, we use the average Hamming score (accuracy) as the performance metric. Although the proposed approach yields an accuracy of 100% in an anechoic setting for detecting both the speakers and their corresponding sectors in two-speaker mixture signals, the performance degrades to 67% in a reverberant setting with a 60 dB reverberation time (RT60) of 300 ms. To improve the performance under reverberation, prior knowledge of the locations of the multiple sources is derived using a novel technique based on geometrical insights into TDOA estimation. With this prior knowledge, the accuracy of the proposed approach improves to 91%. It is worthwhile to note that the accuracies are computed for mixture signals containing more than 90% overlap between competing speakers. The proposed LV framework offers a convenient methodology for representing information at broad levels. In this thesis, we have shown its use with three different levels. It can be extended to several such levels, to be applicable to a generic analysis of an acoustic scene consisting of broad levels of events. Not all levels are dependent on each other, and hence the LV dependencies can be reduced by independence assumptions, which leads to solving several smaller sub-problems, as we have shown above. The LV framework is also attractive for incorporating prior knowledge about the acoustic setting, which is combined with the evidence from the data to derive information about the presence of an acoustic event. The performance of the framework depends on the choice of stochastic models, which model the likelihood function of the data given the presence of acoustic events.
However, it provides a means to compare and contrast different stochastic models for representing the likelihood function.
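The TDOA-based localization stage described in this abstract rests on the classic idea that the lag of the cross-correlation peak between two microphone signals estimates the relative delay. A minimal sketch of that idea, assuming an idealized noiseless single-source recording (not the thesis's region-constrained technique):

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(512)                     # white-noise source signal
delay = 7                                        # true inter-microphone delay, samples
x1 = s                                           # signal at microphone 1
x2 = np.concatenate([np.zeros(delay), s[:-delay]])  # delayed copy at microphone 2

# Full cross-correlation; the lag of the peak estimates the TDOA.
corr = np.correlate(x2, x1, mode="full")
lags = np.arange(-len(x1) + 1, len(x1))
tdoa = lags[np.argmax(corr)]
print(tdoa)  # 7
```

Real acoustic scenes add noise, reverberation, and competing speakers, which smear the correlation peak; that degradation is precisely why the thesis derives prior knowledge of source locations to recover accuracy under reverberation.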
