161

An Investigation of Distribution Functions

Su, Nan-cheng 24 June 2008
The study of properties of probability distributions has always been a persistent theme of statistics and of applied probability. This thesis investigates distribution functions under two topics: (i) characterization of distributions based on record values and order statistics, and (ii) properties of the skew-t distribution. Within the extensive characterization literature there are several results involving properties of record values and order statistics. Although many well-known results have already been developed, it is still of great interest to find new characterizations of distributions based on record values and order statistics. In the first part, we provide the conditional distribution of any record value given the maximum order statistic and study characterizations of distributions based on record values and the maximum order statistic. We also give some characterizations of the mean value function within the class of order statistics point processes, using certain relations between the conditional moments of the jump times or current lives. These results can be applied to characterize the uniform distribution using the sequence of order statistics, and the exponential distribution using the sequence of record values, respectively. Azzalini (1985, 1986) introduced the skew-normal distribution, which includes the normal distribution as a special case, shares several of its properties, and yet is skewed. This class of distributions is useful in studying robustness and in modeling skewness, and since then many authors have proposed skew-symmetric distributions. In the second part, the so-called generalized skew-t distribution is defined and studied. Examples of distributions in this class, generated by the ratio of two independent skew-symmetric random variables, are given. We also investigate properties of the skew-symmetric distribution.
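Azzalini's skew-normal, mentioned in the abstract, has density f(x) = 2φ(x)Φ(αx), where φ and Φ are the standard normal pdf and cdf. A minimal numerical sketch (not taken from the thesis; the shape value α = 3 is an arbitrary illustration):

```python
import math

def skew_normal_pdf(x, alpha):
    """Azzalini (1985) skew-normal density: f(x) = 2 * phi(x) * Phi(alpha * x)."""
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(alpha * x / math.sqrt(2)))  # standard normal cdf
    return 2 * phi * Phi

# alpha = 0 recovers the standard normal; larger alpha skews to the right.
# Midpoint-rule check that the density integrates to one over [-8, 8]:
total = sum(skew_normal_pdf(-8 + 0.01 * (i + 0.5), 3.0) for i in range(1600)) * 0.01
```

Setting α = 0 gives back the standard normal density exactly, which is the sense in which the class "includes the normal distribution".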
162

Four Essays on Building Conditional Correlation GARCH Models.

Nakatani, Tomoaki January 2010
This thesis consists of four research papers. The main focus is on building multivariate Conditional Correlation (CC-) GARCH models. In particular, emphasis lies on an extension of CC-GARCH models that allows for interactions or causality in conditional variances. In the first three chapters, misspecification testing and parameter restrictions in these models are discussed. In the final chapter, a computer package for building major variants of the CC-GARCH models is presented. The first chapter contains a brief introduction to the CC-GARCH models as well as a summary of each research paper. The second chapter proposes a misspecification test for the conditional variance part of the Extended Constant CC-GARCH model. The test is designed for testing the hypothesis of no interactions in the conditional variances. If the null hypothesis is true, then the conditional variances may be described by the standard CCC-GARCH model. The test is built on the Lagrange Multiplier (LM) principle, which only requires the estimation of the null model. Although the test is derived under the assumption of constant conditional correlation, the simulation experiments suggest that the test is also applicable to building CC-GARCH models with changing conditional correlations. There is no asymptotic theory available for these models, which is why simulation of the test statistic in this situation has been necessary. The third chapter provides yet another misspecification test for the conditional variance component of the CC-GARCH models, whose parameters are often estimated in two steps. The estimator obtained through these two steps is a two-stage quasi-maximum likelihood estimator (2SQMLE). Taking advantage of the asymptotic results for 2SQMLE, the test considered in this chapter is formulated using the LM principle, which requires only the estimation of univariate GARCH models.
It is also shown that the test statistic may be computed by using an auxiliary regression. A robust version of the new test is available through another auxiliary regression. All of this amounts to a substantial simplification in computations compared with the test proposed in the second chapter. The simulation experiments show that, under both Gaussian and leptokurtic innovations, as well as under changing conditional correlations, the new test has reasonable size and power properties. When modelling the conditional variance, it is necessary to keep the sequence of conditional covariance matrices positive definite almost surely for any time horizon. In the fourth chapter it is demonstrated that under certain conditions some of the parameters of the model can take negative values while the conditional covariance matrix remains positive definite almost surely. It is also shown that even in the simplest first-order vector GARCH representation, the relevant parameter space can contain negative values for some parameters, which is not possible in the univariate model. This finding makes it possible to incorporate negative volatility spillovers into the CC-GARCH framework. Many new GARCH models and misspecification testing procedures have recently been proposed in the literature. When it comes to applying these models or tests, however, there do not seem to exist many options for the users to choose from other than creating their own computer programmes. This is especially the case when one wants to apply a multivariate GARCH model. The last chapter of the thesis offers a remedy to this situation by providing a workable environment for building CC-GARCH models. The package is open source, freely available on the Internet, and designed for use in the open source statistical environment R.
With this package one can estimate major variants of CC-GARCH models as well as simulate data from CC-GARCH data generating processes with multivariate normal or Student's t innovations. In addition, the package is equipped with the necessary functions for conducting diagnostic tests such as those discussed in the third chapter of this thesis. / Diss. Stockholm: Handelshögskolan, 2010. (Summary together with 4 papers.)
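The interaction (spillover) idea behind the extended CC-GARCH models can be sketched as a vector variance recursion h_t = a0 + A ε²_{t−1} + B h_{t−1}, where non-zero off-diagonal entries of A and B transmit shocks and volatility across series. A toy bivariate simulation with made-up parameter values (illustrative only; these are not estimates from the thesis, and this does not use the R package it describes):

```python
import math
import random

# Illustrative parameters for the variance part of a bivariate extended
# CCC-GARCH model; off-diagonal entries of A and B are the spillover terms.
a0 = [0.05, 0.05]
A = [[0.10, 0.04],
     [0.03, 0.12]]
B = [[0.80, 0.02],
     [0.01, 0.82]]

random.seed(1)
h = [0.5, 0.5]    # conditional variances
eps = [0.0, 0.0]  # lagged shocks
for _ in range(1000):
    # variance recursion: h_t = a0 + A * eps_{t-1}^2 + B * h_{t-1}
    h = [a0[i] + sum(A[i][j] * eps[j] ** 2 + B[i][j] * h[j] for j in range(2))
         for i in range(2)]
    eps = [math.sqrt(h[i]) * random.gauss(0, 1) for i in range(2)]
```

Setting the off-diagonal entries of A and B to zero gives back the standard CCC-GARCH null model that the misspecification tests of the second and third chapters are built around.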
163

A abordagem de martingais para o estudo de ocorrência de palavras em ensaios independentes / The martingale approach in the study of words occurrence in independent experiments

Masitéli, Vanessa 07 April 2017
Let {Xn} be a sequence of i.i.d. random variables taking values in a countable alphabet. Given a finite collection of words, we observe this sequence until the moment T at which one of these words first appears as a run. In this work we apply the martingale approach introduced by Li (1980) and Gerber and Li (1981) to study the waiting time until one of the words occurs for the first time, the expected value of T, and the probability that a given word is the first to appear.
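The martingale ("fair casino") argument of Li (1980) yields a closed form for E[T] for a single word: sum, over every length k at which the word's suffix equals its own prefix, the reciprocal probability of that prefix. A small sketch under the i.i.d. assumption of the abstract (the coin-flip example is ours, not the dissertation's):

```python
def expected_waiting_time(word, prob):
    """E[T] until `word` first appears in i.i.d. draws with letter
    probabilities `prob`, via Li's martingale argument: sum 1/P(prefix_k)
    over every k where the length-k suffix equals the length-k prefix."""
    n = len(word)
    total = 0.0
    for k in range(1, n + 1):
        if word[n - k:] == word[:k]:  # self-overlap of length k
            p = 1.0
            for c in word[:k]:
                p *= prob[c]
            total += 1.0 / p
    return total

p = {"H": 0.5, "T": 0.5}
# "HH" overlaps itself at k = 1 and k = 2, so E[T] = 2 + 4 = 6,
# while "HT" only matches at k = 2, giving E[T] = 4.
```

The self-overlap structure is why "HH" takes longer on average than "HT" even though both have probability 1/4 in two flips.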
164

Aspects théoriques et pratiques dans l'estimation non paramétrique de la densité conditionnelle pour des données fonctionnelles / Theoretical and practical aspects in non parametric estimation of the conditional density with functional data

Madani, Fethi 11 May 2012
In this thesis, we consider the problem of nonparametric estimation of the conditional density when the response variable is real and the regressor takes values in a functional space. In the first part, we use the double-kernel method as the estimation method, focusing on the choice of the smoothing parameters. We construct a data-driven method for selecting bandwidths optimally and automatically. As main results, we study the asymptotic optimality of this selection method in the case where the observations are independent and identically distributed (i.i.d.). Our selection rule is based on classical cross-validation ideas and covers both global and local choices. The performance of our approach is also illustrated by simulation results on finite samples, where we compare the two types of bandwidth choice (local and global). In the second part, we adopt a functional version of the local linear method, in the same topological context, to estimate some functional parameters. Under general conditions, we establish the almost-complete convergence (with rates) of the proposed estimator in both cases (the i.i.d. case and the α-mixing case). As an application, we use the conditional density estimator to estimate the conditional mode and to derive some asymptotic properties of the constructed estimator. We then establish the quadratic error of this estimator by giving its exact asymptotic expansion (the leading bias and variance terms). Finally, the applicability of our results is verified and validated on (1) simulated data and (2) real data.
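The double-kernel estimator has the form f̂(y|x) = Σᵢ K((x−Xᵢ)/h) g⁻¹K((y−Yᵢ)/g) / Σᵢ K((x−Xᵢ)/h). A scalar-regressor sketch for intuition (the thesis works with a functional regressor through a semi-metric and selects h and g by cross-validation; here both bandwidths are fixed by hand and the simulated data are ours):

```python
import math
import random

def gauss_kernel(u):
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)

def cond_density(x, y, X, Y, h, g):
    """Double-kernel estimate of f(y|x): a joint smoother divided by
    the marginal smoother in x."""
    w = [gauss_kernel((x - Xi) / h) for Xi in X]
    num = sum(wi * gauss_kernel((y - Yi) / g) for wi, Yi in zip(w, Y)) / g
    den = sum(w)
    return num / den if den > 0 else 0.0

random.seed(0)
X = [random.uniform(0, 1) for _ in range(500)]
Y = [2 * x + random.gauss(0, 0.3) for x in X]  # truth: f(.|x) = N(2x, 0.09)
# Estimate f(y = 1 | x = 0.5); the true N(1, 0.09) density at its mode
# is about 1.33, and smoothing biases the estimate somewhat below that.
est = cond_density(0.5, 1.0, X, Y, h=0.1, g=0.15)
```

Cross-validation, as in the thesis, would choose h and g to minimize an estimate of the integrated squared error instead of fixing them a priori.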
165

Prediction of Stock Return Volatility Using Internet Data

Juchelka, Tomáš January 2017
The thesis investigates the relationship between the daily stock return volatility of Dow Jones Industrial Average stocks and data obtained from Twitter, the social media network. The Twitter data set contains numbers of tweets categorized according to their polarity, i.e. the positive, negative and neutral sentiment of tweets. We construct two classes of models, GARCH and ARFIMA, and for each of them we examine a basic setting and a setting with additional Twitter variables. Our goal is to compare which of them predicts the one-day-ahead volatility most precisely. Besides, we comment on the effects of Twitter volume variables on future stock volatility. The analysis has revealed that the best performing model, given the length and structure of our data set, is the ARFIMA model augmented with Twitter volume residuals. In the context of the thesis, Twitter volume residuals represent unexpected activity on the social media network and are obtained as residuals from a Twitter volume autoregression. The plain ARFIMA model was second best and the plain volume-augmented ARFIMA was third. This means that all three ARFIMA models outperformed all three GARCH models in our research. Regarding the Twitter estimation parameters, we found that the higher the activity, the higher tomorrow's stock...
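The long-memory ingredient of ARFIMA is the fractional differencing filter (1 − L)^d, whose series coefficients follow a simple one-term recursion. A short sketch (illustrative only; the thesis estimates d from the volatility series rather than fixing it):

```python
def frac_diff_weights(d, n):
    """First n coefficients of (1 - L)^d, the fractional-differencing
    filter behind ARFIMA long memory: w_0 = 1, w_k = w_{k-1}*(k-1-d)/k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# d = 1 recovers ordinary first differencing: weights 1, -1, 0, 0, ...
# For 0 < d < 1/2 the weights decay slowly, giving long memory.
```

For fractional d the weights never cut off exactly, which is what lets ARFIMA capture the slowly decaying autocorrelation typical of realized volatility.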
166

[en] SIMULATION AND STOCHASTIC OPTIMIZATION FOR ENERGY CONTRACTING OF LARGE CONSUMERS / [pt] SIMULAÇÃO E OTIMIZAÇÃO ESTOCÁSTICA PARA CONTRATAÇÃO DE ENERGIA ELÉTRICA DE GRANDES CONSUMIDORES

EIDY MARIANNE MATIAS BITTENCOURT 09 November 2016
[en] The energy contracting in Brazil for large consumers is done according to the voltage level and considering two environments: the Regulated Environment and the Free Environment. Large consumers are those with an installed load equal to or greater than 3 MW, supplied at any voltage level, and their energy contract can be chosen from either of these two environments. A major challenge for these consumers is to determine the best contracting alternative. To address this problem, it must be taken into account that the energy consumption and the required power demand are unknown at the time of contracting and must therefore be estimated. This dissertation proposes to tackle this problem with a methodology based on the simulation of future scenarios of maximum power demand and total consumed energy, and on stochastic optimization over these simulated scenarios in order to define the best contract. Given the stochastic nature of the problem, CVaR (Conditional Value at Risk) was used as the risk measure in the optimization problem. To illustrate, contracting results were obtained for a large real consumer considering the Green Tariff group A4 in the Regulated Environment and a quantity contract in the Free Environment.
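For a discrete set of equally likely scenarios, the CVaR risk measure used in the dissertation is simply the average loss over the worst (1 − α) fraction of scenarios. A minimal sketch (the scenario values are made up for illustration):

```python
def cvar(losses, alpha=0.95):
    """Conditional Value-at-Risk: mean loss over the worst (1 - alpha)
    fraction of equally likely loss scenarios."""
    tail = sorted(losses)[int(len(losses) * alpha):]  # worst scenarios
    return sum(tail) / len(tail)

losses = [float(c) for c in range(1, 101)]  # 100 equally likely cost scenarios
# the worst 5% of {1, ..., 100} are {96, ..., 100}, whose mean is 98
```

In the contracting problem, `losses` would be the simulated total cost of a candidate contract across the demand/consumption scenarios, and the optimizer trades expected cost against this tail measure.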
167

[en] ALGORITHMS FOR TABLE STRUCTURE RECOGNITION / [pt] ALGORITMOS PARA RECONHECIMENTO DE ESTRUTURAS DE TABELAS

YOSVENI ESCALONA ESCALONA 26 June 2020
[en] Tables are widely adopted to organize and publish data. For example, the Web has an enormous number of tables, published in HTML, embedded in PDF documents, or simply available for download from Web pages. However, tables are not always easy to interpret because of the variety of features and formats used. Indeed, a large number of methods and tools have been developed to interpret tables. This dissertation presents the implementation of an algorithm, based on Conditional Random Fields (CRFs), to classify the rows of a table as header rows, data rows or metadata rows. The implementation is complemented by two algorithms for table recognition in spreadsheet documents, based respectively on rules and on region detection. Finally, the dissertation describes the results and the benefits obtained by applying the implemented algorithms to HTML tables obtained from the Web and to spreadsheet tables downloaded from the Web site of the Brazilian National Petroleum Agency.
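For intuition, the row-labeling task can be caricatured by a rule-based baseline over simple cell features; the dissertation's CRF instead learns such features jointly over the whole row sequence. A toy sketch (the rules, feature choices and sample rows are ours, not the dissertation's):

```python
def classify_row(cells):
    """Toy baseline (not the CRF of the dissertation): label a table row
    as 'header', 'data' or 'metadata' from simple cell features."""
    non_empty = [c for c in cells if c.strip()]
    if len(non_empty) <= 1:
        return "metadata"  # e.g. a title or footnote spanning the table
    numeric = sum(c.replace(".", "", 1).isdigit() for c in non_empty)
    if numeric == 0:
        return "header"    # all-text rows tend to be headers
    return "data"          # rows mixing text and numbers tend to be data
```

A CRF improves on this by conditioning each row's label on its neighbours (headers come before data; metadata clusters at the top and bottom) rather than classifying rows in isolation.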
168

Can development initiatives reduce the recruitment of adolescents to organised crime groups? Perspectives of the recipients of the Prospera Conditional Cash Transfer Programme in Mexico

Breckin, Edmund F.J. January 2022
This thesis explores the role of development policy as an alternative to the traditional public-security-focused strategies for tackling organised crime violence in Latin America and the Caribbean. To do so, it builds bridges between the academic literatures of criminology and development. It examines public experiences of insecurity in Mexico and the social impacts of a development initiative, the Conditional Cash Transfer (CCT) programme, in two municipalities in Mexico. The thesis poses questions about the impact of development initiatives on organised crime violence from the perspectives of those living in areas affected by violence. CCT programmes seek to address poverty in the short and long term, and research has begun to explore the potential of these programmes to diminish violence and crime, almost exclusively from a quantitative research approach, whereas this study adopts a qualitative design. The research is based on data gathered through interviews, observations, and focus groups to examine the perspectives and experiences of current and former CCT recipients, CCT administrators, public security officials, members of the public, NGO leads, and ex-gang-affiliated individuals. The micro-level qualitative methodology adopted in this research contrasts with the almost exclusively macro-level, econometric evaluations that have dominated CCT and organised crime research. The findings demonstrated that respondents perceived CCTs as significant in reducing the propensity of young men to participate in organised crime violence in their localities. The perspectives of participants in this study provided enough evidence to challenge the common narrative that 'prevention doesn't work' and suggest that, according to respondents, each of the areas targeted by the study has potential for a reduction of organised crime rooted in development initiatives.
169

Modeling volatility for the Swedish stock market

Vega Ezpeleta, Emilio January 2016
This thesis investigates whether adding an exogenous variable (implied volatility) to the variance equation improves the performance of the GARCH(1,1) and EGARCH(1,1) models for the OMXS30 index. These models are also compared with the implied volatility itself as a forecasting/modeling method. To evaluate the models, the realized variance is used as an unbiased estimator of the conditional variance. The findings suggest that adding implied volatility to the variance equation improves the overall performance.
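Adding implied volatility to the GARCH(1,1) variance equation gives a recursion of the form h_t = ω + α ε²_{t−1} + β h_{t−1} + γ·IV_{t−1}. A one-step sketch with made-up parameter values (not estimates from the thesis):

```python
# GARCH(1,1)-X variance recursion with implied volatility as exogenous term.
# Parameter values below are purely illustrative.
omega, alpha, beta, gamma = 0.02, 0.08, 0.85, 0.05

def next_variance(h_prev, eps_prev, iv_prev):
    """h_t = omega + alpha * eps_{t-1}^2 + beta * h_{t-1} + gamma * IV_{t-1}"""
    return omega + alpha * eps_prev ** 2 + beta * h_prev + gamma * iv_prev

h = next_variance(h_prev=1.0, eps_prev=0.5, iv_prev=1.2)
```

With γ = 0 this collapses to the plain GARCH(1,1) benchmark, so the thesis question amounts to testing whether γ is significantly positive and whether it helps out of sample.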
170

Study of the Higgs boson decay H → ZZ(∗) → 4ℓ and inner detector performance studies with the ATLAS experiment

Selbach, Karoline Elfriede January 2014
The Higgs mechanism is the last piece of the Standard Model (SM) to be discovered and is responsible for giving mass to the electroweak W± and Z bosons. Experimental evidence for the Higgs boson is therefore important and is currently explored at the Large Hadron Collider (LHC) at CERN. The ATLAS experiment (A Toroidal LHC ApparatuS) analyses a wide range of physics processes from collisions produced by the LHC at a centre-of-mass energy of 7-8 TeV and a peak luminosity of 7.73 × 10³³ cm⁻²s⁻¹. This thesis concentrates on the discovery and mass measurement of the Higgs boson. The analysis using the H → ZZ(∗) → 4ℓ channel is presented, where ℓ denotes electrons or muons. Statistical methods with non-parametric models are successfully cross-checked with parametric models. The per-event errors studied to improve the mass determination decrease the total mass uncertainty by 9%. The other main focus is the performance of the initial, and possible upgraded, layouts of the ATLAS inner detector. The silicon cluster size, channel occupancy and track separation in jets are analysed for a detailed understanding of the inner detector. The inner detector is exposed to high particle fluxes and is crucial for tracking and vertexing. The simulation of the detector performance is improved by adjusting the cross-talk of adjacent hit pixels and the Lorentz angle in the digitisation. To prepare the ATLAS detector for upgrade conditions, the performance is studied with pile-up of up to 200. Several possible layout configurations were considered before converging on the baseline used for the Letter of Intent. This includes increased granularity in the Pixel and SCT detectors and additional silicon detector layers. This layout was validated to meet the design target of an occupancy below 1% throughout the whole inner detector. The H → ZZ(∗) → 4ℓ analysis benefits from the excellent momentum resolution, particularly for leptons down to pT = 6 GeV.
The current inner detector is designed to provide momentum measurements of low-pT charged tracks with a resolution of σ(pT)/pT = 0.05% × pT ⊕ 1% over the range |η| < 2.5. The discovery of a new particle in July 2012, compatible with the Standard Model Higgs boson, included the 3.6σ excess of events observed in the H → ZZ(∗) → 4ℓ channel at 125 GeV. The per-event error was studied using a narrow mass range concentrated around the signal peak (110 GeV < mH < 150 GeV). The error on the four-lepton invariant mass is derived and its probability density function (pdf) is multiplied by the conditional pdf of the four-lepton invariant mass given the error. Applying a systematics model dependent on the true mass of the discovered particle, a new fitting machinery was developed to exploit additional statistical methods for the mass measurement, resulting in a discovery with 6.6σ at mH = 124.3 +0.6/−0.5 (stat) +0.5/−0.3 (syst) GeV and μ = 1.7 ± 0.5 using the full 2011 and 2012 datasets.
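The four-lepton invariant mass at the centre of the analysis is computed from the summed four-momenta, m² = (ΣE)² − |Σp|². A sketch with purely illustrative, momentum-balanced massless leptons chosen so the mass comes out at 125 GeV (these are not ATLAS data):

```python
import math

def invariant_mass(leptons):
    """m = sqrt((sum E)^2 - |sum p|^2) for four-vectors (E, px, py, pz) in GeV."""
    E = sum(l[0] for l in leptons)
    px = sum(l[1] for l in leptons)
    py = sum(l[2] for l in leptons)
    pz = sum(l[3] for l in leptons)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# Two back-to-back massless lepton pairs; total momentum cancels,
# so the invariant mass is just the total energy.
leps = [(31.25, 31.25, 0.0, 0.0), (31.25, -31.25, 0.0, 0.0),
        (31.25, 0.0, 31.25, 0.0), (31.25, 0.0, -31.25, 0.0)]
```

The per-event error discussed in the abstract is the propagated uncertainty on this quantity from the individual lepton momentum resolutions, which is what the per-event pdf machinery exploits.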
