131 |
A relação entre receitas e despesas nos Municípios Brasileiros: uma análise sob as Técnicas de Bootstrap / The relationship between revenue and expenditure in Brazilian Municipalities: an analysis from the Bootstrap Techniques. Rafael Carneiro da Costa 26 May 2010 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / Trabalhos recentes mostraram que a teoria assintótica traz resultados equivocados nos
testes de causalidade quando o Método de Momentos Generalizados (MGM) é
utilizado. Este estudo reexamina a relação dinâmica entre receitas próprias, despesas
correntes e transferências correntes para os governos municipais brasileiros no período
de 2000 a 2008. A estimação do modelo de dados em painel dinâmico é feita através do
MGM, mas os testes de especificação utilizam valores críticos gerados por bootstrap
para fornecer melhor aproximação à distribuição da estatística de teste. Uma defasagem
de dois anos é encontrada na equação de despesas, mas nenhuma dinâmica é observada
nas equações de receitas próprias e de transferências, sugerindo a hipótese de que
receitas passadas afetam despesas correntes. / Recent work has shown that asymptotic theory provides misleading results in
causality tests when the Generalized Method of Moments (GMM) is used. This study
re-examines the dynamic relationship between own revenues, current expenditures and
current grants to municipal governments in Brazil in the period 2000 to 2008. The
dynamic panel data model estimation is done by GMM, but the specification tests use
bootstrap critical values to provide a better approximation to the distribution of the test
statistic. A lag of two years is found in the expenditure equation, but no dynamics are
observed in the own-revenue and transfer equations, suggesting the hypothesis that
past revenues affect current expenditures.
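The bootstrap procedure sketched in this abstract, generating critical values empirically instead of relying on asymptotic ones, can be illustrated in a few lines. This is a generic unit-resampling panel bootstrap of a toy chi-square-like statistic under the null, not the thesis's actual GMM specification test; all names and parameters are illustrative.

```python
import numpy as np

def bootstrap_critical_value(statistic, data, alpha=0.05, n_boot=999, seed=0):
    """Empirical (1 - alpha) critical value of `statistic`, obtained by
    resampling cross-sectional units (rows) with replacement."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample units with replacement
        stats[b] = statistic(data[idx])
    return np.quantile(stats, 1.0 - alpha)

# Toy panel under the null: 200 "municipalities" observed for 9 "years".
rng = np.random.default_rng(1)
panel = rng.normal(size=(200, 9))
unit_means = panel.mean(axis=1)            # collapse the time dimension
mu0 = unit_means.mean()                    # recenter so the null holds in resamples

# Scaled squared mean: approximately chi-square(1) under the null, so the
# bootstrap critical value should land near the asymptotic 3.84.
t_stat = lambda x: x.shape[0] * (x.mean() - mu0) ** 2 / x.var(ddof=1)
crit = bootstrap_critical_value(t_stat, unit_means)
```

In the thesis's setting the resampled statistic would be the GMM specification test itself, but the recipe, resample, recompute, take an empirical quantile, is the same.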
|
132 |
Associações entre rating de crédito e estrutura de capitais de empresas listadas na América Latina. Silva, Dany Rogers 19 October 2012 (has links)
A credit rating of low (or high) risk enables a reduction (or an increase) in the spread paid by the issuer when credit securities are issued, as well as when bank loans and financing are obtained. The rating is therefore a relevant aspect of a company's capital structure decisions, above all because of its potential influence on the company's debt levels. However, despite the importance attributed to ratings by market players and the empirical evidence of their effect on capital structure, the few existing studies on the association between impending credit rating reclassifications and a firm's capital structure decisions have not addressed Latin American markets. Studies are uncommon in Latin American markets that examine whether companies internally assess the imminence of a rating reclassification and, on that basis, alter the composition of their capital structure so as to avoid a downgrade, or even to encourage an upgrade, of their credit risk classification. Accordingly, the purpose of this research is to analyze the impact of impending credit rating reclassifications on the capital structure decisions of listed companies in Latin America. To verify the existence of this association, data were used for all non-financial listed companies in Latin America holding ratings issued by the three major international rating agencies (i.e. Standard & Poor's, Moody's and Fitch) in January 2010. The research thus covered all such listed companies across six Latin American countries over the period 2001-2010.
The main empirical results suggest that: (i) credit rating reclassifications have no informational content for the capital structure decisions of listed companies in Latin America, that is, no association was observed between impending rating reclassifications and decisions on the composition of their capital structures; (ii) among the companies considered, those at the worst risk levels and facing an imminent rating reclassification tended to use more debt than the other companies analyzed. / Um rating de crédito de baixo (ou alto) risco possibilita uma redução (ou elevação) do spread pago pelo emissor na ocasião da emissão de títulos de crédito, bem como na captação de financiamentos e empréstimos bancários. Assim, o rating apresenta-se como um aspecto relevante nas decisões de estrutura de capitais de uma empresa, sobretudo pela possibilidade de influenciar nos seus níveis de dívidas. Todavia, apesar da importância atribuída pelos agentes de mercado e a existência de indícios empíricos do efeito do rating sobre a estrutura de capitais de uma empresa, os poucos estudos já realizados acerca das associações entre as tendências de reclassificações dos ratings de crédito e as decisões de estrutura de capitais de uma firma não têm abordado os mercados latino-americanos. Não são comuns nos mercados da América Latina estudos analisando se as empresas avaliam internamente a iminência de uma reclassificação do seu rating e, a partir disso, alteram a sua composição de estrutura de capitais de modo a evitar que ocorra um downgrade, ou mesmo para estimular a ocorrência de um upgrade, em sua classificação de risco de crédito. Nesse sentido, o objetivo desta pesquisa é analisar o impacto das tendências de reclassificações do rating de crédito sobre as decisões de estrutura de capitais de empresas listadas da América Latina.
Para verificar a existência dessa associação foram empregados dados pertencentes a todas as empresas não-financeiras listadas da América Latina, possuidoras de ratings emitidos pelas três principais agências de ratings internacionais (i.e. Standard & Poor's, Moody's e Fitch) em janeiro de 2010. Desse modo, fizeram parte da pesquisa todas as empresas listadas em seis diferentes países latino-americanos, no período 2001-2010. Os principais resultados empíricos obtidos sugerem que: (i) as reclassificações dos ratings de crédito não possuem conteúdo informacional para as decisões de estrutura de capitais das empresas listadas da América Latina, ou seja, não foi observada associação entre as tendências de reclassificações dos ratings de crédito e as decisões sobre composição das estruturas de capitais das empresas listadas da América Latina; (ii) entre as empresas consideradas na pesquisa, aquelas que se encontravam em níveis piores de riscos e na iminência de reclassificações do rating de crédito, tenderam a utilizar mais dívidas do que as outras empresas analisadas na pesquisa.
|
133 |
A tensor perspective on weighted automata, low-rank regression and algebraic mixtures. Rabusseau, Guillaume 20 October 2016 (has links)
Ce manuscrit regroupe différents travaux explorant les interactions entre les tenseurs et l'apprentissage automatique. Le premier chapitre est consacré à l'extension des modèles de séries reconnaissables de chaînes et d'arbres aux graphes. Nous y montrons que les modèles d'automates pondérés de chaînes et d'arbres peuvent être interprétés d'une manière simple et unifiée à l'aide de réseaux de tenseurs, et que cette interprétation s'étend naturellement aux graphes ; nous étudions certaines propriétés de ce modèle et présentons des résultats préliminaires sur leur apprentissage. Le second chapitre porte sur la minimisation approximée d'automates pondérés d'arbres et propose une approche théoriquement fondée à la problématique suivante : étant donné un automate pondéré d'arbres à n états, comment trouver un automate à m<n états calculant une fonction proche de l'originale. Le troisième chapitre traite de la régression de faible rang pour sorties à structure tensorielle. Nous y proposons un algorithme d'apprentissage rapide et efficace pour traiter un problème de régression dans lequel les sorties sont des tenseurs. Nous montrons que l'algorithme proposé est un algorithme d'approximation pour ce problème NP-difficile et nous donnons une analyse théorique de ses propriétés statistiques et de généralisation. Enfin, le quatrième chapitre introduit le modèle de mélanges algébriques de distributions. Ce modèle considère des combinaisons affines de distributions (où les coefficients somment à un mais ne sont pas nécessairement positifs). Nous proposons une approche pour l'apprentissage de mélanges algébriques qui étend la méthode tensorielle des moments introduite récemment. / This thesis tackles several problems exploring connections between tensors and machine learning. In the first chapter, we propose an extension of the classical notion of recognizable function on strings and trees to graphs.
We first show that the computations of weighted automata on strings and trees can be interpreted in a natural and unifying way using tensor networks, which naturally leads us to define a computational model on graphs: graph weighted models; we then study fundamental properties of this model and present preliminary learning results. The second chapter tackles a model reduction problem for weighted tree automata. We propose a principled approach to the following problem: given a weighted tree automaton with n states, how can we find an automaton with m<n states that is a good approximation of the original one? In the third chapter, we consider a problem of low rank regression for tensor structured outputs. We design a fast and efficient algorithm to address a regression task where the outputs are tensors. We show that this algorithm generalizes the reduced rank regression method and that it offers good approximation, statistical and generalization guarantees. Lastly in the fourth chapter, we introduce the algebraic mixture model. This model considers affine combinations of probability distributions (where the weights sum to one but may be negative). We extend the recently proposed tensor method of moments to algebraic mixtures, which allows us in particular to design a learning algorithm for algebraic mixtures of spherical Gaussian distributions.
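The chapter's claim that its algorithm generalizes reduced rank regression suggests a useful baseline to keep in mind. Below is a minimal sketch of classical reduced rank regression for matrix (rather than tensor) outputs: solve ordinary least squares, then project onto the leading right singular vectors of the fitted values. The names, dimensions and noise level are illustrative, not the thesis's algorithm.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """argmin_W ||Y - X W||_F subject to rank(W) <= rank: ordinary least
    squares followed by projection onto the leading right singular
    vectors of the fitted values (the classical closed-form solution)."""
    W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    _, _, Vt = np.linalg.svd(X @ W_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                # rank-constrained projector
    return W_ols @ P

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
W_true = rng.normal(size=(10, 2)) @ rng.normal(size=(2, 12))   # rank-2 truth
Y = X @ W_true + 0.01 * rng.normal(size=(300, 12))
W_hat = reduced_rank_regression(X, Y, rank=2)
```

The tensor-output algorithm of the chapter would replace the matrix rank constraint with a multilinear-rank constraint, but the low-rank-projection idea is the same.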
|
134 |
Fast Solvers for Integral-Equation based Electromagnetic Simulations. Das, Arkaprovo January 2016 (has links) (PDF)
With the rapid increase in available compute power and memory, and bolstered by the advent of efficient formulations and algorithms, the role of 3D full-wave computational methods for accurate modelling of complex electromagnetic (EM) structures has gained in significance. The range of problems includes Radar Cross Section (RCS) computation, analysis and design of antennas and passive microwave circuits, bio-medical non-invasive detection and therapeutics, energy harvesting etc. Further, with the rapid advances in technology trends like System-in-Package (SiP) and System-on-Chip (SoC), the fidelity of chip-to-chip communication and package-board electrical performance parameters like signal integrity (SI), power integrity (PI), electromagnetic interference (EMI) are becoming increasingly critical. Rising pin-counts to satisfy functionality requirements and decreasing layer-counts to maintain cost-effectiveness necessitates 3D full wave electromagnetic solution for accurate system modelling.
Method of Moments (MoM) is one such widely used computational technique to solve a 3D electromagnetic problem with full-wave accuracy. Because fewer mesh elements are needed to discretize the geometry, MoM has the advantage of a smaller matrix size. However, due to Green's function interactions, the MoM matrix is dense, and its solution presents a time and memory challenge. The thesis focuses on the formulation and development of novel techniques that aid fast MoM-based electromagnetic solutions.
With the recent paradigm shift in computer hardware architectures transitioning from single-core microprocessors to multi-core systems, it is of prime importance to parallelize the serial electromagnetic formulations in order to leverage maximum computational benefits. Therefore, the thesis explores the possibilities to expedite an electromagnetic simulation by scalable parallelization of near-linear complexity algorithms like Fast Multipole Method (FMM) on a multi-core platform.
Secondly, even with the best parallelization strategies in place and near-linear complexity algorithms in use, the solution time of a complex EM problem can still be exceedingly large due to over-meshing of the geometry to achieve a desired level of accuracy. Hence, the thesis focuses on judicious placement of mesh elements on the geometry to capture the physics of the problem without compromising accuracy, a technique called Adaptive Mesh Refinement. This reduces the number of solution variables, or degrees of freedom, in the system and hence the solution time.
For multi-scale structures, as encountered in chip-package-board systems, the MoM formulation breaks down for parts of the geometry with dimensions much smaller than the operating wavelength. This phenomenon is known as low-frequency breakdown or low-frequency instability. It results in an ill-conditioned MoM system matrix and hence a higher iteration count to converge when solved within an iterative solver framework, which in turn increases the solution time of the simulation. The thesis thus proposes novel formulations that improve the spectral properties of the system matrix for real-world complex conductor and dielectric structures, yielding well-conditioned systems. This reduces the iteration count required for convergence considerably and thus results in faster solutions.
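The effect motivating these conditioning-focused formulations, that iterative solvers need more iterations on ill-conditioned systems, is easy to demonstrate on a toy symmetric positive definite system. Real MoM matrices are dense, complex and non-symmetric and are typically solved with Krylov methods such as GMRES, so the conjugate gradient sketch below only illustrates the conditioning/iteration-count link, not the thesis's solver.

```python
import numpy as np

def cg_iterations(A, b, tol=1e-8, max_iter=10_000):
    """Plain conjugate gradients on the SPD system A x = b.
    Returns (solution, iterations needed to reach the residual tolerance)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x, k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

rng = np.random.default_rng(0)
n = 200
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))         # random orthogonal basis
well = Q @ np.diag(np.linspace(1, 10, n)) @ Q.T      # condition number 10
ill = Q @ np.diag(np.logspace(0, 6, n)) @ Q.T        # condition number 1e6
b = rng.normal(size=n)
x_well, k_well = cg_iterations(well, b)
x_ill, k_ill = cg_iterations(ill, b)
```

The well-conditioned system converges in a few dozen iterations, while the ill-conditioned one needs far more, which is exactly why improving the spectral properties of the system matrix pays off.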
Finally, minor changes in geometrical design layouts can adversely affect the time-to-market of a product. This is because the intermediate design variants, in spite of their similarities, are treated as separate entities and have to follow the conventional model-mesh-solve workflow for their analysis. This is a missed opportunity, especially for design-variant problems with near-identical characteristics, where information from the previous design variant could be used to expedite the simulation of the present design iteration. A similar problem occurs in the broadband simulation of an electromagnetic structure: the solution at a particular frequency can be expedited manifold if matrix information from a neighbouring frequency is used, provided the electrical characteristics remain nearly similar. The thesis introduces methods to re-use the subspace or eigenspace information of a matrix from a previous design or frequency to solve the next incremental problem faster.
|
135 |
Anatomy of smooth integers. Mehdizadeh, Marzieh 07 1900 (has links)
Dans le premier chapitre de cette thèse, nous passons en revue les outils de la théorie analytique
des nombres qui seront utiles pour la suite. Nous faisons aussi un survol des entiers
y-friables, c’est-à-dire des entiers dont chaque facteur premier est plus petit ou égal à y.
Au deuxième chapitre, nous présenterons des problèmes classiques de la théorie des nombres
probabiliste et donnerons un bref historique d’une classe de fonctions arithmétiques sur un
espace probabilisé.
Le problème d’Erdős sur la table de multiplication demande quel est le nombre d’entiers
distincts apparaissant dans la table de multiplication N × N. L’ordre de grandeur de cette
quantité a été déterminé par Kevin Ford (2008). Dans le chapitre 3 de cette thèse, nous
étudions le nombre d’entrées y-friables de la table de multiplication N × N. Plus concrètement,
nous nous concentrons sur le changement du comportement de la fonction A(x, y)
par rapport au domaine de y, où A(x, y) est une fonction qui compte le nombre d’entiers
y-friables distincts et inférieurs à x qui peuvent être représentés comme le produit de deux
entiers y-friables inférieurs à √x.
Dans le quatrième chapitre, nous prouvons un théorème d’Erdős-Kac modifié pour l’ensemble
des entiers y-friables. Si ω(n) est le nombre de facteurs premiers distincts de n, nous prouvons
que la distribution de ω(n) est gaussienne pour un certain domaine de y en utilisant la
méthode des moments. / The object of the first chapter of this thesis is to review the materials and tools in analytic
number theory which are used in the following chapters. We also give a survey of developments
concerning the number of y-smooth integers, which are integers free of prime factors
greater than y.
In the second chapter, we give a brief history of a class of arithmetical functions
on a probability space and discuss some well-known problems in probabilistic number
theory.
We present two results in analytic and probabilistic number theory.
The Erdős multiplication table problem asks for the number of distinct integers appearing
in the N × N multiplication table. The order of magnitude of this quantity was determined
by Kevin Ford (2008). In chapter 3 of this thesis, we study the number of y-smooth entries
of the N × N multiplication table. More concretely, we focus on the change of behaviour of the
function A(x, y) in different ranges of y, where A(x, y) is a function that counts the number
of distinct y-smooth integers less than x which can be represented as the product of two
y-smooth integers less than √x.
In Chapter 4, we prove an Erdős-Kac type theorem for the set of y-smooth integers. If
ω(n) is the number of distinct prime factors of n, we prove that the distribution of ω(n) is
Gaussian for a certain range of y using the method of moments.
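The objects studied in these chapters are easy to experiment with numerically. The sketch below (with x and y chosen small purely for illustration) sieves smallest prime factors, collects the y-smooth integers up to x, and tabulates ω(n) over them.

```python
def smallest_prime_factors(limit):
    """Sieve: spf[n] = smallest prime factor of n, for 2 <= n <= limit."""
    spf = list(range(limit + 1))
    for p in range(2, int(limit ** 0.5) + 1):
        if spf[p] == p:                        # p is prime
            for m in range(p * p, limit + 1, p):
                if spf[m] == m:
                    spf[m] = p
    return spf

def factor_stats(n, spf):
    """Return (omega(n), largest prime factor of n) from the spf table."""
    distinct, largest = 0, 1
    while n > 1:
        p = spf[n]
        distinct += 1
        largest = max(largest, p)
        while n % p == 0:
            n //= p
    return distinct, largest

x, y = 10_000, 20
spf = smallest_prime_factors(x)
smooth = [n for n in range(2, x + 1) if factor_stats(n, spf)[1] <= y]
omegas = [factor_stats(n, spf)[0] for n in smooth]
mean_omega = sum(omegas) / len(omegas)
```

With these toy parameters the largest possible value of ω(n) is 5, attained at 2·3·5·7·11 = 2310, since the product of six distinct primes below 20 already exceeds x; the Gaussian behaviour proved in Chapter 4 only emerges for much larger ranges.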
|
136 |
CFO Turnover, Firm’s Debt-Equity Choice and Information Environment. Talukdar, Muhammad Bakhtear U 29 June 2016 (has links)
The CEO and CFO are the two key executives of a firm. They work cohesively to ensure the growth of the firm. After the adoption of the Sarbanes Oxley Act (SOX) in 2002, the importance of CFOs has increased due to their personal legal obligation in certifying the accuracy of financial statements. Only a few papers such as Mian (2001), Fee and Hadlock (2004), and Geiger and North (2006) focus on CFOs in the pre-SOX era. However, a vacuum exists in research focusing exclusively on CFOs in the post-SOX era. The purpose of this dissertation is to delve into a comprehensive investigation of the CFOs. More specifically, I answer three questions: a) does the CEO change lead to the CFO change? b) does the CFO appointment type affect the firm’s debt-equity choice? and c) does the CFO appointment affect the firm’s information environment?
I use Shumway’s (2001) dynamic hazard model to answer question ‘a’. For question ‘b’, I use instrumental variable (IV) regression under various estimation techniques to control for endogeneity. For part ‘c’, I use a cross-sectional difference-in-difference (DiD) methodology, pairing treatment firms with control firms chosen by propensity score matching (PSM).
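The propensity-score pairing step behind the difference-in-difference design can be sketched from scratch on simulated data: fit a logistic model of treatment on covariates by Newton's method, then match each treated firm to the control firm with the nearest score. The covariates, sample and matching rule below are illustrative, not the dissertation's actual specification.

```python
import numpy as np

def propensity_match(X, treated):
    """Logistic propensity scores (Newton's method, no ML library), then
    nearest-neighbour matching of each treated unit to a control,
    with replacement."""
    Z = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(Z.shape[1])
    for _ in range(25):                        # Newton-Raphson for the logistic MLE
        p = 1.0 / (1.0 + np.exp(-Z @ beta))
        H = Z.T @ (Z * (p * (1 - p))[:, None]) + 1e-8 * np.eye(Z.shape[1])
        beta += np.linalg.solve(H, Z.T @ (treated - p))
    scores = 1.0 / (1.0 + np.exp(-Z @ beta))
    controls = np.flatnonzero(treated == 0)
    pairs = [(i, controls[np.argmin(np.abs(scores[controls] - scores[i]))])
             for i in np.flatnonzero(treated == 1)]
    return pairs, scores

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # firm covariates (illustrative)
treated = (rng.random(500) < 1 / (1 + np.exp(-X[:, 0]))).astype(float)
pairs, scores = propensity_match(X, treated)
```

After matching, the covariate imbalance between treated and control firms shrinks substantially, which is the property the DiD comparison relies on.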
I find that there is about a 70% probability of CFO replacement after a CEO replacement. Both replacements are affected by the prior year’s poor performance. In addition, as custodian of the firm’s financial reporting, the CFO is replaced proactively when an earnings restatement is probable. I find that firms with internal CFO hires issue more equity in the year of appointment than firms with external hires. A promoted CFO significantly improves the firm’s overall governance, which helps the firm obtain external financing from equity issues. However, I find that CFO turnover does not significantly affect the firm’s information environment. To ensure that this finding is not driven by mixing good and distressed firms in one sample, I separated the distressed firms, re-ran my models, and the finding still holds.
This dissertation fills a gap in the literature with regard to CFOs and their post-SOX relationship with the firm.
|
137 |
Finance and Growth Nexus: CEE & Central Asia and Beyond. Enkhbold, Buuruljin January 2016 (has links)
Abstract (English): This thesis investigates the effect of financial development on economic growth using both a global sample and regional samples focusing on Central and Eastern Europe (CEE) and Central Asia over the period 1960-2013. The results of fixed-effects panel and system GMM estimators suggest that the effect of private credit on growth was neutral until 2007 and turns negative when the period is extended to 2013. The negative effect of private credit on growth is largest for CEE and Central Asia, particularly for non-EU countries in the region. Stock market capitalisation and the lending-deposit spread have consistent effects regardless of the choice of time frame, which implies that economies benefit from larger stock markets and a lower lending-deposit spread. Keywords: financial development, credit, stock market, spread, growth, CEE and Central Asia, generalized method of moments (GMM)
|
138 |
Pricing and Modeling Heavy Tailed Reinsurance Treaties - A Pricing Application to Risk XL Contracts / Prissättning och modellering av långsvansade återförsäkringsavtal - En prissättningstillämpning på Risk XL kontrakt. Abdullah Mohamad, Ormia, Westin, Anna January 2023 (has links)
Estimating the risk of a loss occurring for policyholders is a difficult task in the insurance industry. An even more difficult task is pricing that risk for reinsurance companies, which insure the primary insurers. Insurance bought by an insurance company, the cedent, from another insurance company, the reinsurer, is called treaty reinsurance. This type of reinsurance is the main focus of this thesis. A very common risk to insure is the risk of fire in municipal and commercial properties, which is the risk priced in this thesis. This thesis evaluates Länsförsäkringar AB's current pricing model, which calculates the risk premium for Risk XL contracts. The goal of this thesis is to find areas of improvement for tail risk pricing. The risk premium is commonly calculated with one of three types of pricing models: experience rating, exposure rating and frequency-severity rating. This thesis focuses on frequency-severity pricing, a model that assumes independence between the frequency and the severity of losses and therefore splits the two into separate models. It is a very common model for pricing Risk XL contracts. The risk premium is calculated with the help of loss data from two insurance companies, one Norwegian and one Finnish. The main focus of this thesis is to price the risk with the help of extreme value theory, mainly with the method of moments to model the frequency of losses and the peaks over threshold model to model their severity. To model the estimated frequency of losses using the method of moments, two distributions are compared: the Poisson and the negative binomial distribution. Several distributions can be used to model the severity of losses. To evaluate which distribution is optimal, two goodness-of-fit tests are applied: the Kolmogorov-Smirnov and the Anderson-Darling test.
The peaks over threshold model can be used with the Pareto distribution. With the help of the Hill estimator we calculate a threshold u, which governs the tail of the Pareto curve. To estimate the remaining parameters of the generalized Pareto distribution, the maximum likelihood and least squares methods are used. Lastly, the bootstrap method is used to estimate the uncertainty in the price calculated from the estimated parameters. From this, empirical percentiles are computed and set as guidelines for where the risk premium should lie in order for both data sets to be considered fairly priced. / Att uppskatta risken för att en skada ska inträffa för försäkringstagarna är en svår uppgift i försäkringsbranschen. En ännu svårare uppgift är att prissätta risken för återförsäkringsbolag som försäkrar direktförsäkrarna. Den försäkring som köps av direktförsäkrarna, cedenten, från återförsäkrarna kallas treaty-återförsäkring. Denna typ av återförsäkring är den som behandlas i denna avhandling. En vanlig risk att prissätta är brandrisken för kommunala och industriella byggnader, vilket är risken som prissätts i denna avhandling. Denna avhandling utvärderar Länsförsäkringar AB:s nuvarande prissättning som beräknar riskpremien för Risk XL-kontrakt. Målet med denna avhandling är att hitta förbättringsområden för långsvansad affär. Riskpremien kan beräknas med hjälp av tre vanliga typer av prissättningsmodeller: experience rating, exposure rating och frequency-severity rating. Denna avhandling fokuserar endast på frequency-severity rating, en modell som antar att frekvensen av skador och storleken på dem är oberoende, och som därmed delar upp dem i separata modeller.
Detta är en väldigt vanlig modell som används vid prissättning av Risk XL-kontrakt. Riskpremien beräknas med hjälp av skadedata från två försäkringsbolag, ett norskt och ett finskt. Det huvudsakliga fokuset i denna avhandling är att prissätta risken med hjälp av extremvärdesteori, huvudsakligen med hjälp av momentmetoden för att modellera frekvensen av skador och peaks over threshold-metoden för att modellera storleken på skadorna. För att kunna modellera den förväntade frekvensen av skador med hjälp av momentmetoden jämförs två fördelningar, Poissonfördelningen och den negativa binomialfördelningen. Det finns ett antal fördelningar som kan användas för att modellera storleken på skadorna. För att kunna avgöra vilken fördelning som är bäst att använda har två olika goodness-of-fit-test applicerats, Kolmogorov-Smirnov- och Anderson-Darling-testet. Peaks over threshold-modellen är en modell som kan användas med Paretofördelningen. Med hjälp av Hillestimatorn beräknas en tröskel u som reglerar Paretokurvans utseende. För att beräkna de resterande parametrarna i den generaliserade Paretofördelningen används maximum likelihood- och minstakvadratmetoden. Slutligen används bootstrapmetoden för att skatta osäkerheten i riskpremien som satts med hjälp av de skattade parametrarna. Utifrån den metoden skapas percentiler som blir en riktlinje för var riskpremien bör ligga för att båda dataseten ska kunna anses vara rättvist prissatta.
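The severity side of the model described above can be sketched numerically: simulate Pareto-tailed losses, choose a threshold u, fit the generalized Pareto distribution to the excesses by the method of moments, and compute a Hill estimate of the tail index. The moment formulas below assume a finite variance (shape ξ < 1/2); the simulated portfolio and threshold choice are illustrative, not Länsförsäkringar's data.

```python
import numpy as np

def hill_estimator(losses, k):
    """Hill estimate of the tail index alpha from the k largest observations."""
    x = np.sort(losses)[::-1]
    return 1.0 / np.mean(np.log(x[:k] / x[k]))

def gpd_moment_fit(excesses):
    """Method-of-moments estimates (shape xi, scale beta) of the generalized
    Pareto distribution, valid when the variance is finite (xi < 1/2):
    mean = beta/(1-xi), var = beta^2 / ((1-xi)^2 (1-2 xi))."""
    m, v = excesses.mean(), excesses.var(ddof=1)
    xi = 0.5 * (1.0 - m ** 2 / v)
    beta = 0.5 * m * (1.0 + m ** 2 / v)
    return xi, beta

rng = np.random.default_rng(0)
alpha = 2.5                                   # Pareto tail index, xi = 1/alpha = 0.4
losses = (1.0 / rng.random(50_000)) ** (1.0 / alpha)   # inverse-CDF sampling
u = np.quantile(losses, 0.95)                 # threshold: 95th percentile
excesses = losses[losses > u] - u             # peaks over the threshold
xi_hat, beta_hat = gpd_moment_fit(excesses)
tail_index = hill_estimator(losses, k=2500)   # should sit near alpha
```

In practice maximum likelihood is usually preferred for the GPD parameters at heavier tails, since the sample variance used by the moment fit becomes unstable as ξ approaches 1/2.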
|
139 |
POST-EMPLACEMENT LEACHING BEHAVIORS OF NANO ZERO VALENT IRON MODIFIED WITH CARBOXYMETHYLCELLULOSE UNDER SIMULATED AQUIFER CONDITIONS. Williams, Leslie Lavinia January 2013 (has links)
No description available.
|
140 |
Econometrics on interactions-based models: methods and applications. Liu, Xiaodong 22 June 2007 (has links)
No description available.
|