51

Marketing Expenditures and IPO Underpricing Puzzle: Evidence from China A-Share Stock Market

Li, Pei-shan 25 June 2009 (has links)
Recently, there has been considerable interest in explaining the underpricing of initial public offerings (IPOs). This study uses both OLS and quantile regression models to examine whether pre-listing marketing expenditures reduce IPO underpricing, using data on China A-share IPOs. Our OLS results show that a firm's marketing expenditures significantly reduce IPO underpricing, consistent with the finding of Luo (2008), who investigates the US IPO market. The quantile regression results show that pre-listing marketing expenditures are significantly associated with lower underpricing for low-underpricing stocks, but have no significant effect for median- and high-underpricing stocks. We infer that, for low-underpricing stocks, pre-listing marketing expenditures raise the firm's transparency and thereby lower the risk premium investors require.
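As a rough illustration of the estimation strategy described above (a sketch on synthetic data, not the thesis's code or dataset — the variable names are assumptions), the snippet below fits an OLS model and several quantile regressions of underpricing on a marketing-expenditure proxy:

```python
# Sketch: OLS vs. quantile regression of IPO underpricing on marketing
# expenditure, in the spirit of the study above. Data are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
marketing = rng.exponential(1.0, n)            # pre-listing marketing proxy
underpricing = 0.4 - 0.05 * marketing + 0.3 * rng.standard_normal(n)

X = sm.add_constant(marketing)
print(sm.OLS(underpricing, X).fit().params)     # mean effect

for tau in (0.1, 0.5, 0.9):                     # effect across the distribution
    fit = sm.QuantReg(underpricing, X).fit(q=tau)
    print(tau, fit.params[1])                   # slope at quantile tau
```

The quantile slopes show how the marketing effect can be significant in the lower tail of underpricing yet weak elsewhere, which is the pattern the abstract reports.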
52

Endogenous credit risk model: the recovery rate, the probability of default, and the cyclicality

Lee, Yi-mei 20 June 2009 (has links)
Several studies investigate which credit risk models have the best predictive power for different industries. Structural models use information on a firm's structural variables, such as asset value and asset volatility, to determine the time of default, but they suffer from drawbacks that are the main reasons behind their relatively poor empirical performance: they require estimates of the firm's asset value, which is unobservable. Moody's KMV model is the best known and most widely used among them, but it ignores the recovery rate as well as differences in financial structure and industry. Reduced-form models fundamentally differ from typical structural models in the degree of predictability of default: they use market data and assume that the probability of default is exogenously generated. However, the Basel Committee on Banking Supervision has argued that risk is endogenous. The purpose of this paper is to use quantile and threshold regression to introduce a new approach, based on Moody's KMV model, Lu and Kuo (2005), and Altman, Brady, Resti, and Sironi (2005), to evaluating an endogenous probability of default and an endogenous recovery rate.
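For context, here is a minimal sketch of the Merton-style distance-to-default calculation at the core of the KMV approach mentioned above; the inputs are illustrative assumptions, and the iterative asset-value estimation of the commercial model is omitted:

```python
# Sketch: Merton-model distance-to-default (DD) and the implied default
# probability. Inputs are illustrative assumptions.
from math import log, sqrt
from statistics import NormalDist

def distance_to_default(V, D, mu, sigma, T=1.0):
    """V: asset value, D: default point (debt), mu: asset drift,
    sigma: asset volatility, T: horizon in years."""
    return (log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * sqrt(T))

dd = distance_to_default(V=120.0, D=80.0, mu=0.06, sigma=0.25)
pd = NormalDist().cdf(-dd)   # probability of default under normality
print(f"DD = {dd:.3f}, PD = {pd:.4%}")
```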
53

Intra-hour wind power variability assessment using the conditional range metric: quantification, forecasting and applications

Boutsika, Thekla 09 September 2013 (has links)
The research presented herein concentrates on the quantification, assessment and forecasting of intra-hour wind power variability. Wind power is intrinsically variable and, due to the increase in wind power penetration levels, the level of intra-hour wind power variability is expected to increase as well. Existing metrics used in wind integration studies fail to efficiently capture intra-hour wind power variation. As a result, this can lead to an underestimation of intra-hour wind power variability with adverse effects on power systems, especially their reliability and economics. One major research focus in this dissertation is to develop a novel variability metric which can effectively quantify intra-hour wind power variability. The proposed metric, termed the conditional range metric (CRM), quantifies wind power variability using the range of wind power output over a time period. The metric is termed conditional because the range of wind power output is conditioned on the time interval length k and on the average wind power production l_j over the given time interval. Using statistical analysis and optimization approaches, a computational algorithm to obtain a unique p-th quantile of the conditional range metric is given, turning the proposed conditional range metric into a probabilistic intra-hour wind power variability metric. The probabilistic conditional range metric CRM_{k,l_j,p} assists power system operators and wind farm owners in decision making under uncertainty, since decisions involving wind power variability can be made based on the willingness to accept a certain level of risk α = 1 - p. An extensive performance analysis of the conditional range metric on real-world wind power and wind speed data reveals how certain variables affect intra-hour wind power variability. Wind power variability over a time frame is found to increase with increasing time frame size and decreasing wind farm size, and is highest at mid-production wind power levels. Moreover, wind turbines connected to the grid through converters exhibit lower wind power variability than simple induction generators of the same size, while wind power variability is also found to decrease slightly with increasing wind turbine size. These results can lead to improvements to existing wind power management techniques or definitions of new ones. Moreover, the comparison of the conditional range metric to the commonly used step-change statistics reveals that, on average, the conditional range metric can accommodate intra-hour wind power variations for an additional 15% of hours within a given year, significantly benefiting power system reliability. The other major research focus in this dissertation is on providing intra-hour wind power variability forecasts. Wind power variability forecasts use p-th CRM quantile estimates to construct probabilistic intervals within which future wind power output will lie, conditioned on the forecasted average wind power production. One static and two time-adaptive methods are used to obtain p-th CRM quantile estimates. All methods produce quantile estimates of acceptable reliability, with average expected deviations from nominal proportions close to 1%. Wind power variability forecasts can serve as joint-chance constraints in stochastic optimization problems, which opens the door to numerous applications of the conditional range metric.
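To make the metric concrete, the following is a simplified empirical version of CRM_{k,l_j,p} (an assumed simplification on synthetic data; the dissertation's binning and quantile algorithm differ): compute the range of power over each window of length k, bin windows by their average production level, and take the p-th quantile of ranges within each bin.

```python
# Sketch: empirical conditional range metric CRM_{k,l_j,p} on synthetic data.
import numpy as np

def crm_quantile(power, k, n_bins=5, p=0.95):
    """p-th quantile of the windowed range, conditioned on the window mean."""
    windows = np.lib.stride_tricks.sliding_window_view(power, k)
    ranges = windows.max(axis=1) - windows.min(axis=1)
    means = windows.mean(axis=1)
    edges = np.linspace(means.min(), means.max(), n_bins + 1)
    bins = np.clip(np.digitize(means, edges) - 1, 0, n_bins - 1)
    return {j: np.quantile(ranges[bins == j], p)
            for j in range(n_bins) if np.any(bins == j)}

rng = np.random.default_rng(1)
power = np.clip(np.cumsum(rng.normal(0, 0.5, 10_000)) % 100, 0, 100)
print(crm_quantile(power, k=60))  # 60-sample (e.g., one-hour) windows
```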
A practical example application uses the conditional range metric to estimate the size of an energy storage system (ESS). Using a probabilistic forecast of wind power hourly averages and historical data on intra-hour wind power variability, the proposed methodology estimates the size of an ESS which minimizes deviations from the forecasted hourly average. The methodology is evaluated using real-world wind power data. When the estimated ESS capacities are compared to the ESS capacities obtained from the actual data, they exhibit coverage rates which are very close to the nominal ones, with an average absolute deviation of less than 1.5%.
54

A collection of Bayesian models of stochastic failure processes

Kirschenmann, Thomas Harold 06 November 2013 (has links)
Risk managers currently seek new advances in statistical methodology to better forecast and quantify uncertainty. This thesis comprises a collection of new Bayesian models and computational methods which collectively aim to better estimate parameters and predict observables when data arise from stochastic failure processes. Such data commonly arise in reliability theory and survival analysis, where they are used to predict failure times of mechanical devices, compare medical treatments, and ultimately make well-informed risk management decisions. The collection of models proposed in this thesis advances the quality of those forecasts by providing computational modeling methodology to aid quantitatively minded decision makers. Through these models, a reliability expert has the ability: to model how future decisions affect the process; to impose prior beliefs on hazard rate shapes; to efficiently estimate parameters with MCMC methods; to incorporate exogenous information in the form of covariate data using Cox proportional hazards models; and to utilize nonparametric priors for enhanced model flexibility. Managers are often forced to make decisions that affect the underlying distribution of a stochastic process, and they regularly make these choices while lacking a mathematical model for how the process itself may depend significantly on their decisions. The first model proposed in this thesis provides a method to capture this decision dependency, which is used to construct an optimal future decision policy that exploits the interactions of the sequences of decisions. The model and method in this thesis are the first to directly estimate decision dependency in a stochastic process with the flexibility and power of the Bayesian formulation. The model parameters are estimated using an efficient Markov chain Monte Carlo technique, leading to predictive probability densities for the stochastic process. Using the posterior distributions of the random parameters in the model, a stochastic optimization program is solved to determine the sequence of decisions that minimizes a cost-based objective function over a finite time horizon. The method is tested with artificial data and then used to model maintenance and failure time data from a condenser system at the South Texas Project Nuclear Operating Company (STPNOC). The second and third models proposed in this thesis offer a new way for survival analysts and reliability engineers to utilize their prior beliefs regarding the shape of hazard rate functions. Two generalizations of the Weibull model have become popular recently: the exponentiated Weibull and the modified Weibull densities. The popularity of these models is largely due to the flexible hazard rate functions they can induce, such as bathtub-shaped, increasing, decreasing, and unimodal hazard rates. These models are more complex than the standard Weibull, and without a Bayesian approach one faces difficulties estimating the parameters with traditional frequentist techniques. This thesis develops stylized families of prior distributions that allow engineers to model their beliefs based on the context. Both models are first tested on artificial data and then compared when modeling a low-pressure switch for a containment door at the STPNOC in Bay City, TX. Additionally, survival analysis is performed with these models using a well-known collection of censored data on leukemia treatments.
Two additional models are developed using the exponentiated and modified Weibull hazard functions as baseline distributions in Cox proportional hazards models, allowing survival analysts to incorporate additional covariate information. Finally, two nonparametric methods for estimating survival functions are compared using both simulated and real data from cancer treatment research. The quantile pyramid process is compared to Polya tree priors and is shown to have a distinct advantage, owing to the need to choose a distribution on which to center a Polya tree. The Polya tree and the quantile pyramid have effectively the same accuracy when the Polya tree has a very well-informed choice of centering distribution. That is rarely the case, however, and one must conclude that the quantile pyramid process is at least as effective as Polya tree priors for modeling unknown situations.
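To illustrate the hazard-shape flexibility mentioned above, here is a sketch under the standard exponentiated-Weibull parameterization (assumed, not taken from the thesis), with CDF F(t) = (1 - exp(-(t/λ)^k))^a; different (k, a) pairs yield bathtub-shaped or unimodal hazards:

```python
# Sketch: hazard function of the exponentiated Weibull distribution,
# h(t) = f(t) / (1 - F(t)). Parameter values are illustrative assumptions.
import numpy as np

def ew_hazard(t, k, lam, a):
    z = (t / lam) ** k
    F = (1.0 - np.exp(-z)) ** a
    f = a * (1.0 - np.exp(-z)) ** (a - 1) * np.exp(-z) * k * t ** (k - 1) / lam ** k
    return f / (1.0 - F)

t = np.linspace(0.05, 3.0, 5)
print(ew_hazard(t, k=2.0, lam=1.0, a=0.2))  # k>1, ka<1: bathtub-shaped hazard
print(ew_hazard(t, k=0.6, lam=1.0, a=4.0))  # k<1, ka>1: unimodal hazard
```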
55

Three Essays on Labor Market Outcomes

Prakash, Anila January 2015 (has links)
The three chapters in this dissertation look at different aspects of the labor market and its players. The first chapter estimates the impact of using the internet for job search on job match quality. Using both the semi-parametric Meyer (1990) model and the non-parametric Hausman-Woutersen (2014) hazard model, the paper finds that the exit rate from employment is at least 28% lower when the internet is used as a job search tool. The second chapter looks at the effect of past unemployment on future wages. It is believed that employers may use past unemployment as a signal of low productivity, so workers with a history of unemployment may receive lower wages. The paper uses the Machado and Mata (2005) quantile decomposition technique to decompose the wage difference into differences due to characteristics and differences due to rewards. Results indicate that workers with an unemployment spell of more than three months receive at least 12% lower wages and that more than 40% of this wage difference can be attributed to the lower rewards received by the previously unemployed. The last chapter focuses on human capital formation and looks at some of the reasons behind the low levels of schooling in India. Using the Indian Household Development Survey (2005), the paper finds that income continues to be an important factor behind the low level of primary school enrollment. On average, poor students have at least 3% lower enrollment rates when compared to similarly skilled non-poor students.
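As a hedged sketch of the Machado-Mata style counterfactual used in the second chapter (synthetic data; the variable names are assumptions, not the dissertation's), one fits quantile regressions at random quantiles on one group and prices the other group's characteristics with those coefficients:

```python
# Sketch: Machado-Mata (2005) counterfactual wage distribution.
# Fit tau-th quantile regressions on group A, evaluate at covariates
# drawn from group B; the gap to B's actual wages splits into a
# characteristics part and a rewards part. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
X_a = rng.normal(size=(n, 2))
y_a = 1 + X_a @ np.array([0.5, 0.2]) + rng.normal(size=n)
X_b = rng.normal(loc=0.3, size=(n, 2))          # different characteristics

def machado_mata(y, X, X_other, m=200):
    draws = []
    for tau in rng.uniform(0.02, 0.98, m):      # random quantiles
        beta = sm.QuantReg(y, sm.add_constant(X)).fit(q=tau).params
        x = X_other[rng.integers(len(X_other))] # one covariate draw
        draws.append(beta[0] + x @ beta[1:])    # counterfactual wage
    return np.array(draws)

cf = machado_mata(y_a, X_a, X_b)
print(np.quantile(cf, [0.1, 0.5, 0.9]))         # counterfactual deciles
```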
56

Sequential Analysis of Quantiles and Probability Distributions by Replicated Simulations

Eickhoff, Mirko January 2007 (has links)
Discrete event simulation is well known to be a powerful approach to investigate the behaviour of complex dynamic stochastic systems, especially when the system is analytically intractable. The estimation of mean values has traditionally been the main goal of simulation output analysis, even though it provides limited information about the analysed system's performance. Because of its complexity, quantile analysis is not as frequently applied, despite its ability to provide much deeper insights into the system of interest. A set of quantiles can be used to approximate a cumulative distribution function, providing fuller information about a given performance characteristic of the simulated system. This thesis exploits the distributed computing power of multiple computers, proposing new methods for sequential and automated analysis of quantile-based performance measures of such dynamic systems. These new methods estimate steady-state quantiles from replicated simulations run on clusters of workstations acting as simulation engines. A general contribution to the problem of the length of the initial transient is made by considering steady state in terms of the underlying probability distribution. Our research focuses on sequential and automated methods to guarantee a satisfactory level of confidence in the final results. The correctness of the proposed methods has been exhaustively studied by means of sequential coverage analysis. Quantile estimates are used to investigate underlying probability distributions. We demonstrate that synchronous replications greatly assist this kind of analysis.
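A minimal sketch of sequential quantile estimation from replicated simulations, in the spirit of the methods described above (the stand-in simulator, order-statistics confidence interval, and stopping rule are assumed simplifications, not the thesis's procedure):

```python
# Sketch: sequential quantile estimation across independent replications.
# Pool observations from R replications, estimate the q-quantile, and
# stop when a nonparametric CI (order-statistics based) is narrow enough.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
q, conf, rel_tol = 0.9, 0.95, 0.02
R = 8                                       # parallel replications
pools = [[] for _ in range(R)]

while True:
    for r in range(R):                      # one more batch per replication
        pools[r].extend(rng.exponential(1.0, 100))  # stand-in simulator
    data = np.sort(np.concatenate(pools))
    n = len(data)
    z = norm.ppf(0.5 + conf / 2)
    half = z * np.sqrt(q * (1 - q) * n)     # CI half-width in rank units
    lo = data[max(int(q * n - half), 0)]
    hi = data[min(int(q * n + half), n - 1)]
    est = data[int(q * n)]
    if (hi - lo) / est < rel_tol:           # relative-precision stopping rule
        print(f"n={n}, q{q} estimate={est:.4f}, CI=({lo:.4f}, {hi:.4f})")
        break
```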
57

A Study of Credit Cardholder Behavior and Risk Estimation

陳淑君 Unknown Date (has links)
According to statistics from the Banking Bureau of Taiwan's Financial Supervisory Commission, the number of credit cards in circulation in Taiwan had reached 44,611 thousand by the end of February 2005, nearly 30 times the number at the end of 1992. Circulation grew strongly for years: the growth rate reached 62.1% at the end of 1992 and stayed above 30% for most of the following decade, at 48.7% in 1996, the growth stage of the product life cycle. In the past two years, however, growth slowed to 16.7% in 2004 and roughly 1% in 2005, indicating that the credit card market is moving from the growth stage toward maturity. To seize the initiative and remain profitable in this highly competitive market, banks should pursue process innovation: strengthening credit risk management to cut costs, raising the quality and added value of the credit card product, and further consolidating their existing cardholder base. This study provides a concrete model that banks can use to predict cardholder default or card cancellation. The study uses customer records from the data warehouse of a domestic bank as of the end of March 2004, covering roughly 1.28 million valid customers. First, the card-cancellation profile is examined along demographic variables, the cardholder's relationship with the issuer, card usage, consumption behavior, and payment status. A logistic model is then built to predict the probability of cancellation, and a quantile regression model is used to analyze high- and low-cancellation-rate customers separately. The key findings are:
1. Age and use of revolving credit are negatively related to the cancellation rate at every quantile, and the reduction grows larger at higher quantiles.
2. Monthly credit limit, number of transactions within six months, and number of cash advances are negatively related to the cancellation rate at every quantile, but the reduction becomes smaller at higher quantiles.
3. Marital status and the number of valid cards are positively related to the cancellation rate at every quantile, and the increase grows larger at higher quantiles.
Banks can set credit policy according to these findings. For the monthly credit limit, for example, raising the limit of a high-cancellation-rate customer lowers that customer's cancellation rate by less than it would for a low-cancellation-rate customer; banks can therefore focus on retaining low-cancellation-rate customers by raising their limits, increasing those customers' loyalty to the bank's card. Other cardholding and spending behavior can also be taken into account to make credit policy more complete while meeting existing customers' needs.
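A hedged sketch of the study's first stage, a logistic model of cancellation probability (the column names and simulated data are assumptions, not the bank's warehouse fields):

```python
# Sketch: logistic model of card-cancellation probability.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000
age = rng.integers(20, 70, n)
revolving = rng.integers(0, 2, n)            # uses revolving credit
txn_6m = rng.poisson(12, n)                  # transactions in six months
logit = 1.5 - 0.03 * age - 0.8 * revolving - 0.05 * txn_6m
cancel = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([age, revolving, txn_6m]))
fit = sm.Logit(cancel.astype(float), X).fit(disp=0)
print(fit.params)    # negative signs mirror findings 1 and 2 above
```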
58

Effects of motherhood and marriage on the gender wage differential in Brazil in 2014

Souza, Paola Faria Lucas de January 2016 (has links)
SOUZA, Paola Faria Lucas de. Efeitos da maternidade e do casamento sobre o diferencial de salários entre gêneros no Brasil para o ano de 2014. 2016. 113 f. Tese (Doutorado) - Universidade Federal do Ceará, Faculdade de Economia, Administração, Atuária e Contabilidade, Programa de Pós-Graduação em Economia, Fortaleza (CE), 2016. / This thesis relates motherhood and marriage to wages. The database used is the 2014 PNAD. There are three chapters. The first analyzes the role of motherhood in women's wage differentials. The main contribution is to place Brazil in the motherhood-pay-gap literature, emphasizing that the estimated effects of maternity should exclude differences in productive characteristics between mothers and non-mothers. Four regression specifications and the Oaxaca-Blinder decomposition were used, both with correction for selection bias and with adaptations for complex survey data. Main results: i) wage penalties for mothers; ii) the wage penalty increases with the number of children; iii) the penalty stabilizes from the third child onward; iv) the wage difference due to maternity is similar to that found between genders; v) motherhood itself is a determinant of lower wages; vi) the effect of maternity on wages is smaller for mothers in typically male activities. The second chapter analyzes the motherhood pay gap over the whole wage distribution, both conditional and unconditional. Conditional effects are analyzed with quantile regressions, while the unconditional analysis uses the decomposition of Melly (2006). The innovations are the quantile-level study, selection-bias correction in both techniques following an adaptation of Buchinsky (2001), and the use of Melly's (2006) decomposition to measure the motherhood pay gap. Main results: i) the maternity penalty is greatest in the highest conditional quantiles; ii) a larger number of children brings a higher wage penalty in all conditional deciles; iii) the wage gap due to maternity increases with the income level; iv) most of the wage gap is not explained by differences in attributes between groups. The third chapter argues that marriage generates a wage premium for men and a wage penalty for women, which can widen gender pay differentials, and shows this behavior across the wage distribution. The methodologies used were the decomposition of the Theil-T index, unconditional quantile regressions (via RIF), and the decompositions of Oaxaca-Blinder (1973) and Firpo, Fortin and Lemieux (2009).
Main results: i) there is a marriage wage premium for men; ii) there is a marriage wage penalty for women; iii) the division of domestic labor does not justify higher earnings for men; iv) there are indications of differential wage treatment of single men and married women in the labor market; v) the marriage wage penalty for women is higher for those with higher incomes; vi) the marriage wage premium is higher for men with higher incomes; vii) being single is the worst marital status in wage terms for men; viii) being married is the worst marital status in wage terms for women.
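As a hedged illustration of the Oaxaca-Blinder decomposition used throughout the thesis (a minimal two-fold mean decomposition on synthetic data; the variables are assumptions, not the PNAD fields):

```python
# Sketch: two-fold Oaxaca-Blinder decomposition of a mean wage gap into
# an "explained" (characteristics) and "unexplained" (rewards) part,
# using the male coefficients as the reference structure.
import numpy as np

rng = np.random.default_rng(5)
n = 2000
X_m = np.column_stack([np.ones(n), rng.normal(12, 2, n)])   # men: educ
X_f = np.column_stack([np.ones(n), rng.normal(12.5, 2, n)]) # women: educ
y_m = X_m @ np.array([1.0, 0.10]) + rng.normal(0, 0.4, n)
y_f = X_f @ np.array([0.8, 0.08]) + rng.normal(0, 0.4, n)

b_m, *_ = np.linalg.lstsq(X_m, y_m, rcond=None)   # OLS per group
b_f, *_ = np.linalg.lstsq(X_f, y_f, rcond=None)

gap = y_m.mean() - y_f.mean()
explained = (X_m.mean(0) - X_f.mean(0)) @ b_m     # characteristics part
unexplained = X_f.mean(0) @ (b_m - b_f)           # rewards part
print(f"gap={gap:.3f} explained={explained:.3f} unexplained={unexplained:.3f}")
```

The identity ȳ_m - ȳ_f = (X̄_m - X̄_f)'β_m + X̄_f'(β_m - β_f) is exact under OLS with an intercept, which is why `explained + unexplained` reproduces the gap.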
59

Essays in nonparametric econometrics and infinite dimensional mathematical statistics

Horta, Eduardo de Oliveira January 2015 (has links)
The present Thesis is composed of four research papers in two distinct areas. In Horta, Guerre, and Fernandes (2015), which constitutes Chapter 2 of this Thesis, we propose a smoothed estimator in the framework of the linear quantile regression model of Koenker and Bassett (1978). A uniform Bahadur-Kiefer representation is provided, with an asymptotic rate which dominates the standard quantile regression estimator. Next, we prove that the bias introduced by smoothing is negligible in the sense that the bias term is first-order equivalent to the true parameter. A precise rate of convergence, which is controlled uniformly by choice of bandwidth, is provided. We then study second-order properties of the smoothed estimator, in terms of its asymptotic mean squared error, and show that it improves on the usual estimator when an optimal bandwidth is used. As corollaries to the above, one obtains that the proposed estimator is √n-consistent and asymptotically normal. Next, we provide a consistent estimator of the asymptotic covariance matrix which does not depend on ancillary estimation of nuisance parameters, and from which asymptotic confidence intervals are straightforwardly computable. The quality of the method is then illustrated through a simulation study. The research papers Horta and Ziegelmann (2015a;b;c) are all related in the sense that they stem from an initial impetus of generalizing the results in Bathia et al. (2010). In Horta and Ziegelmann (2015a), Chapter 3 of this Thesis, we address the question of existence of certain stochastic processes, which we call conjugate processes, driven by a second, measure-valued stochastic process. We investigate primitive conditions ensuring existence and, through the concepts of coherence and compatibility, obtain an affirmative answer to the former question. Relying on the notions of random measure (Kallenberg (1973)) and disintegration (Chang and Pollard (1997), Pollard (2002)), we provide a general approach for construction of conjugate processes. The theory allows for a rich set of examples, and includes a class of regime-switching models. In Horta and Ziegelmann (2015b), Chapter 4 of the present Thesis, we introduce, in relation with the construction in Horta and Ziegelmann (2015a), the concept of a weakly conjugate process: a continuous-time, real-valued stochastic process driven by a sequence of random distribution functions, the connection between the two being given by a compatibility condition which says that distributional aspects of the former process are divisible into countably many cycles during which it has precisely the latter as marginal distributions. We then show that the methodology of Bathia et al. (2010) can be applied to study the dependence structure of weakly conjugate processes, and therewith provide √n-consistency results for the natural estimators appearing in the theory. Additionally, we illustrate the methodology through an application to financial data. Specifically, our method permits us to translate the dynamic character of the distribution of an asset returns process into the dynamics of a latent scalar process, which in turn allows us to generate forecasts of quantities associated to distributional aspects of the returns process. In Horta and Ziegelmann (2015c), Chapter 5 of this Thesis, we obtain √n-consistency results regarding estimation of the spectral representation of the zero-lag autocovariance operator of stationary Hilbertian time series, in a setting with imperfect measurements. This is a generalization of the method developed in Bathia et al. (2010). The generalization relies on the important property that centered random elements of strong second order in a separable Hilbert space lie almost surely in the closed linear span of the associated covariance operator. We provide a straightforward proof of this fact.
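To give a flavor of the Chapter 5 object (a sketch on synthetic discretized curves, under assumed definitions, not the thesis's estimator): estimate the zero-lag covariance operator of a Hilbertian time series on a grid and examine its spectral decomposition, whose effective rank reveals the latent dimension.

```python
# Sketch: eigendecomposition of the empirical covariance operator of
# functional (curve-valued) time series data. Synthetic two-factor curves.
import numpy as np

rng = np.random.default_rng(9)
T, G = 300, 50                          # T curves observed on G grid points
grid = np.linspace(0, 1, G)
scores = rng.normal(size=(T, 2))        # two latent factors
basis = np.vstack([np.sin(np.pi * grid), np.cos(np.pi * grid)])
curves = scores @ basis + 0.05 * rng.normal(size=(T, G))  # noisy curves

centered = curves - curves.mean(0)
cov = centered.T @ centered / T         # discretized covariance operator
eigvals, eigvecs = np.linalg.eigh(cov)
print(eigvals[-4:])                     # a sharp drop past the true rank 2
```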
60

Maintaining Stream Data Distribution Over Sliding Window

Chen, Jian January 2018 (has links)
In modern applications, analyzing order statistics over the most recent portion of high-volume, high-velocity stream data is a major challenge. Some online quantile algorithms can maintain a sketch of the data in a sliding window and answer quantile or rank queries very quickly, but most of them use the GK algorithm as a subroutine, which is not known to be mergeable. In this paper, we propose another algorithm to maintain a sketch that supports order statistics over sliding windows. For fixed-size windows, existing algorithms cannot maintain correctness while the sliding window is updated. Our algorithm not only maintains correctness but also achieves performance similar to that of the optimal algorithm: while preserving correctness, its insert and query times are close to the best known results, whereas the alternatives cannot preserve correctness. In addition to the fixed-size window algorithm, we also provide a time-based window algorithm in which the window size varies over time. Last but not least, we provide a window aggregation algorithm which can help extend our algorithm to distributed systems.
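A minimal sketch of the problem setting (not the thesis's algorithm): exact quantiles over a fixed-size sliding window via a sorted container. This is correct but uses O(w) space, which is precisely what streaming sketches such as GK-based methods improve on at the cost of bounded approximation error.

```python
# Sketch: exact quantile queries over a fixed-size sliding window using
# a sorted list plus a FIFO queue for evictions.
from bisect import insort, bisect_left
from collections import deque

class SlidingWindowQuantile:
    def __init__(self, window_size):
        self.w = window_size
        self.fifo = deque()     # arrival order, for evictions
        self.sorted = []        # same elements, kept sorted

    def insert(self, x):
        self.fifo.append(x)
        insort(self.sorted, x)
        if len(self.fifo) > self.w:             # evict the oldest item
            old = self.fifo.popleft()
            del self.sorted[bisect_left(self.sorted, old)]

    def quantile(self, q):
        return self.sorted[int(q * (len(self.sorted) - 1))]

sw = SlidingWindowQuantile(window_size=1000)
for i in range(10_000):
    sw.insert((i * 37) % 101)   # deterministic stand-in stream
print(sw.quantile(0.5), sw.quantile(0.99))
```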
