  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

A Bayesian expected error reduction approach to Active Learning

Fredlund, Richard January 2011 (has links)
There has been growing recent interest in the field of active learning for binary classification. This thesis develops a Bayesian approach to active learning which aims to minimise the objective function on which the learner is evaluated, namely the expected misclassification cost. We call this the expected cost reduction approach to active learning. In this form of active learning, queries are selected by performing a 'lookahead' to evaluate the associated expected misclassification cost.

Firstly, we introduce the concept of a query density to explicitly model how new data are sampled. An expected cost reduction framework for active learning is then developed which allows the learner to sample data according to arbitrary query densities. The model makes no assumption of independence between queries; instead it updates model parameters on the basis of both which observations were made and how they were sampled. This approach is demonstrated on the probabilistic high-low game, a non-separable extension of the high-low game presented by Seung et al. (1993). The results indicate that the Bayes expected cost reduction approach performs significantly better than passive learning even when there is considerable overlap between the class distributions, covering 30% of input space. For the probabilistic high-low game, however, narrow queries appear to consistently outperform wide queries. We therefore conclude the first part of the thesis by investigating whether this is always the case, demonstrating examples where sampling broadly is preferable to a single-input query.

Secondly, we explore the Bayesian expected cost reduction approach to active learning within the pool-based setting, where learning is limited to a finite pool of unlabelled observations from which the learner may select observations to be queried for class labels. Our implementation of this approach uses Gaussian process classification with the expectation propagation approximation to make the necessary inferences. It is demonstrated on six benchmark data sets and again shows superior performance to passive learning.
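The thesis's own lookahead operates over query densities with Gaussian process classifiers; as a much simpler illustration of the core idea, the sketch below performs one-step expected-cost-reduction on a noiseless high-low-style game. The hypothesis set, uniform prior, and 0/1 cost are invented for this example and are not the thesis's model.

```python
# Toy sketch of expected-cost-reduction active learning (assumptions: noiseless
# labels, uniform prior over integer thresholds, 0/1 misclassification cost).
THRESHOLDS = list(range(11))          # hypotheses: label is 1 iff x >= t
INPUTS = list(range(11))              # possible query points

def likelihood(t, x, y):
    """P(y | x, t) for the noiseless high-low game."""
    return 1.0 if (1 if x >= t else 0) == y else 0.0

def posterior(prior, x, y):
    """Bayes update of the prior after observing label y at query x."""
    post = {t: p * likelihood(t, x, y) for t, p in prior.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()} if z > 0 else prior

def expected_cost(prior):
    """Expected misclassification cost of the Bayes classifier under `prior`,
    with uniformly distributed inputs and 0/1 cost."""
    cost = 0.0
    for x in INPUTS:
        p1 = sum(p for t, p in prior.items() if x >= t)   # P(y=1 | x)
        cost += min(p1, 1 - p1) / len(INPUTS)             # Bayes-optimal guess
    return cost

def best_query(prior):
    """One-step lookahead: pick the query minimising expected post-query cost."""
    def lookahead(x):
        p1 = sum(p for t, p in prior.items() if x >= t)
        c = 0.0
        if p1 > 0:
            c += p1 * expected_cost(posterior(prior, x, 1))
        if p1 < 1:
            c += (1 - p1) * expected_cost(posterior(prior, x, 0))
        return c
    return min(INPUTS, key=lookahead)

uniform = {t: 1 / len(THRESHOLDS) for t in THRESHOLDS}
print(best_query(uniform))  # a mid-range query that roughly bisects the hypotheses
```

As expected for this noiseless game, the lookahead favours narrow, bisecting queries — the behaviour the abstract reports for the probabilistic high-low game.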
32

Korporátní akvizice a očekávané akciové výnosy: Meta-analýza / Corporate Acquisitions and Expected Stock Returns: A Meta-Analysis

Parreau, Thibault January 2019 (has links)
This thesis investigates the puzzling relationship between corporate acquisitions and expected stock returns by reviewing numerous studies on the topic with state-of-the-art meta-analysis tools. Such an analysis is needed because many papers have examined this relationship with varying results. We therefore collected 421 estimates from 20 papers and ran multiple regressions to test for the presence of publication bias. Throughout this analysis we indeed found evidence supporting the existence of publication bias. Furthermore, we applied Bayesian Model Averaging to reduce model uncertainty and to find out why the abnormal-return estimates vary so greatly across studies. Our results suggest that one of the most important drivers is the standard-error term, which in turn indicates that publication bias is largely responsible for the heterogeneity among our estimates. Our analysis fails to demonstrate any positive effect of M&A activity on a firm's post-acquisition performance. We suggest that other motives are under-represented in the underlying theory used to assess M&A outcomes. Keywords: Mergers and Acquisitions, Stock Returns, Abnormal Returns, Meta-Analysis, Publication Bias
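The funnel-asymmetry test underlying such publication-bias checks can be sketched with a FAT-PET regression, estimate_i = b0 + b1·SE_i: a significant b1 signals publication bias, while b0 estimates the bias-corrected effect. The data below are synthetic stand-ins (the thesis uses 421 collected estimates), and the simple OLS here omits the thesis's Bayesian Model Averaging step.

```python
import random

# Synthetic meta-analysis data: true effect ~0, but noisier studies (larger
# standard errors) report inflated estimates, mimicking publication bias.
random.seed(42)
ses = [random.uniform(0.01, 0.5) for _ in range(200)]            # standard errors
effects = [1.5 * se + random.gauss(0, se) for se in ses]         # biased estimates

def fat_pet(estimates, std_errors):
    """FAT-PET regression: estimate_i = b0 + b1 * SE_i + e_i.
    b1 != 0 is the funnel-asymmetry test (publication bias);
    b0 is the precision-effect estimate of the corrected effect."""
    n = len(estimates)
    mx = sum(std_errors) / n
    my = sum(estimates) / n
    sxx = sum((x - mx) ** 2 for x in std_errors)
    sxy = sum((x - mx) * (y - my) for x, y in zip(std_errors, estimates))
    b1 = sxy / sxx
    b0 = my - b1 * mx
    return b0, b1

b0, b1 = fat_pet(effects, ses)
print(f"corrected effect b0 = {b0:.3f}, bias term b1 = {b1:.3f}")
```

With these synthetic data the regression recovers a corrected effect near zero and a clearly non-zero bias term, the same qualitative pattern the thesis reports.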
33

Prêmios realizados e esperados no Brasil / Realized and expected premium in Brazil

França, Michael Tulio Ramos de 27 November 2015 (has links)
Given that investment in the stock market involves uncertainty, we should expect its average return to be relatively higher than that of a risk-free investment, to compensate investors for the additional risk they incur. However, we find no such evidence when we analyze the behavior of the Brazilian stock market: considering the average realized returns of the past twenty years, the historical equity premium was relatively low. So the question naturally arises whether such an estimate is a reasonable value from which to infer the future behavior of the stock market. To answer it, our methodology consists of three stages. First, we review the literature on premium estimation techniques and select approaches based on recent articles, citations, and data availability; we also propose some estimators of our own. We then present the results of the selected methodologies for recent years and find that the estimates show some degree of heterogeneity. Second, we test the performance of the estimated empirical models using out-of-sample forecasting tests. The results show that some models outperform the historical premium. We thus find evidence that the historical premium is only one source of information for inferring the expected premium and, taken alone, does not constitute a reasonable inference procedure. Since each model embodies an empirical strategy for inferring the premium, every one of them should carry information about the future premium. 
Consequently, a recent strand of the literature argues that the optimal strategy may be to aggregate information across individual models. To this end, the last step of our methodology was to combine information from the models that outperformed the historical premium and to verify whether this procedure improved their predictive performance. As a result, we find that this approach improves and stabilizes the premium forecast.
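The combination step can be illustrated as follows. The premium series and "model" forecasts below are fabricated for the sketch, and a simple average stands in for whatever combination scheme the thesis adopts; the benchmark is the expanding historical mean, as in the out-of-sample tests described above.

```python
# Toy sketch: combining individual premium forecasts and comparing
# out-of-sample squared error against the historical-mean benchmark.
# All numbers below are synthetic stand-ins, not the thesis data.

realized = [0.04, -0.02, 0.07, 0.01, 0.05, -0.01, 0.06, 0.02]  # realized premia
model_a  = [0.03,  0.00, 0.05, 0.02, 0.04,  0.00, 0.05, 0.03]  # model forecasts
model_b  = [0.05, -0.03, 0.08, 0.00, 0.06, -0.02, 0.07, 0.01]

def sse(forecasts, actual):
    """Sum of squared forecast errors."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actual))

# Historical-mean benchmark: the forecast for period t is the mean premium
# observed up to t-1 (zero before any data are seen).
hist = [sum(realized[:t]) / t if t else 0.0 for t in range(len(realized))]

# Equal-weight combination of the two model forecasts.
combined = [(a + b) / 2 for a, b in zip(model_a, model_b)]

print("benchmark SSE:", round(sse(hist, realized), 5))
print("combined  SSE:", round(sse(combined, realized), 5))
```

Averaging diversifies the individual models' errors, which is why combination tends to stabilize the forecast relative to any single model or the historical mean.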
34

THINKING POKER THROUGH GAME THEORY

Palafox, Damian 01 June 2016 (has links)
Poker is a complex game to analyze. In this project we will use the mathematics of game theory to solve some simplified variations of the game. Probability is the building block of game theory, so we must understand a few of its concepts, such as distributions, expected value, variance, and enumeration methods. We will solve and analyze games through game theory by using different decision methods, decision trees, and the process of domination and simplification. Poker models, with and without cards, will be provided to illustrate optimal strategies. Extensions to those models will be presented, and we will show that optimal strategies still exist. Finally, we will close this paper with an original extension that can serve as a starting point for creating further extensions and different games to explore.
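As a minimal illustration of the kind of mixed-strategy solutions such game-theoretic analyses produce, the sketch below solves a 2x2 zero-sum game with no saddle point by the indifference principle. The payoff matrix is illustrative, not one of the poker models from the thesis.

```python
from fractions import Fraction as F

# Illustrative 2x2 zero-sum game (row player's payoffs); it has no saddle
# point, so the optimal strategies are fully mixed.
A = [[F(2), F(-1)],
     [F(-1), F(1)]]

def solve_2x2(A):
    """Mixed strategies that make each opponent indifferent, plus the game
    value.  Assumes no saddle point (fully mixed equilibrium)."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # row player's weight on row 0
    q = (d - b) / denom          # column player's weight on column 0
    value = (a * d - b * c) / denom
    return p, q, value

p, q, v = solve_2x2(A)
print(p, q, v)  # 2/5 2/5 1/5
```

Using exact fractions makes the indifference property easy to verify: against the column mix q, both of the row player's pure strategies earn exactly the game value.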
35

Credit Risk Valuation: A Study of Taiwan Electronic Companies with the KMV Model's EDF

Wang, Wan-jung 23 July 2007 (has links)
Abstract: Since 1980, with the liberalization of trading markets and rapid technological development, financial markets have grown quickly. In particular, as derivative financial products were brought to market, financial organizations' operations and trading styles became more diversified, adding new risks and uncertainty. More complicated credit risk patterns have severely challenged the traditional tools market participants use to measure financial risk, as well as existing risk structures and credit culture. During the 1990s, financial crises and fraud cases occurred repeatedly in international financial markets, so financial risk management became a subject of concern for financial organizations, governments, and public investors; among financial risks, credit risk is always the focus. In particular, the Basel Committee on Banking Supervision (a committee of the Bank for International Settlements, BIS) published "The New Basel Capital Accord" (Basel II), which not only emphasizes the importance of credit risk but also allows financial organizations to develop an Internal Ratings-Based (IRB) approach to evaluate and calculate appropriate risk capital. The development of credit risk evaluation models has therefore drawn attention from academia, government, and industry. Since Merton (1974) applied the option-pricing model to evaluate the credit risk of an enterprise, the approach has attracted much attention from Western academic and business circles. Merton's model is the theoretical foundation of structural models, and the well-known KMV model used in practice is an extension of it. Merton's model is not only based on a rigorous and comprehensive theory but also uses market information, the stock price, as a key variable in evaluating credit risk. 
This allows credit risk to be monitored in real time at a much higher frequency, an advantage that has made the model widely applied in academia and industry. The premises of this research are: (1) Credit risk has geographical and cultural character; although credit risk evaluation models are imported from abroad, they must be modified locally and need support from local theory and case studies. (2) Structural models are based on forward-looking analysis and embody market-based information content. (3) After careful consideration of the domestic capital market, the electronics industry is the mainstream of the domestic stock market and Taiwan's most internationally competitive industry, and it is highly sensitive in terms of profit, business cycle, and risk. I therefore chose listed electronics companies as the research target, over the period 2004 to 2006, and evaluated credit risk with the KMV model, a mainstream structural model developed by Moody's (USA). For variable definitions and selection I referred to the "Small and Medium Enterprise Credit Guarantee Fund Main Guarantee Business Default Probability and Credit Risk Valuation Research Report" by C. J. Kuo (2006). Through a series of empirical investigations of the electronics industry as a whole and of individual firms, using the EDF (Expected Default Frequency) of the KMV model, I find that the KMV model's EDF captures credit risk effectively and provides predictive ability in advance. 
This paper offers its results as a reference for risk managers and to help investors and regulators discern the depth of risk an enterprise carries, and thereby decide on investment strategy and the level of risk management. Its contribution is to minimize the costs of credit checking and enterprise capital while maximizing managerial efficiency and profitability.
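The Merton-style distance-to-default at the heart of the KMV model can be sketched as below. The input values are illustrative (the thesis follows Kuo (2006) for variable definitions, which are not reproduced here), and the final Gaussian step is only a theoretical stand-in: the commercial KMV model maps DD to EDF through an empirical default database, so real EDFs differ from the Gaussian figure.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def distance_to_default(V, D, mu, sigma, T=1.0):
    """Merton-style distance to default: how many standard deviations the
    expected log asset value V lies above the default point D at horizon T."""
    return (math.log(V / D) + (mu - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))

def edf_gaussian(dd):
    """Theoretical default probability under the Gaussian assumption
    (a stand-in for KMV's proprietary empirical DD-to-EDF mapping)."""
    return norm_cdf(-dd)

# Illustrative firm: assets 150, default point 100, asset drift 8%,
# asset volatility 25%, one-year horizon.
dd = distance_to_default(V=150.0, D=100.0, mu=0.08, sigma=0.25, T=1.0)
print(round(dd, 4), round(edf_gaussian(dd), 6))
```

A larger DD means the firm's asset value sits further above its default point, so the implied default probability falls, which is why EDF serves as a real-time, market-based credit risk signal.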
36

Idiosyncratic Risk and Expected Returns in REITs

Imazeki, Toyokazu 26 April 2012 (has links)
The Modern Portfolio Theory (MPT) argues that all unsystematic risk can be diversified away, and thus there should be no relationship between idiosyncratic risk and return. Ooi, Wang and Webb (2009) employ the Fama-French (1993) three-factor model (FF3) to estimate the level of nonsystematic return volatility in REITs as a proxy for idiosyncratic risk. They find a significant positive relationship between expected returns and conditionally estimated idiosyncratic risk, contrary to the MPT. In this research, I examine other potential sources of systematic risk in REITs which may explain the seeming violation of the MPT found by Ooi et al. (2009). I re-estimate the proportion of idiosyncratic risk in REITs with Carhart's (1997) momentum factor, which the finance literature commonly adds to the FF3 to control for the persistence of stock returns as a supplemental risk. Next, I conduct cross-sectional regressions and test the significance of the relationship between idiosyncratic risk and expected returns. I further analyze the role of property sector on idiosyncratic risk as well as on its relationship with expected returns. I draw three conclusions. First, momentum has a relatively minor effect on idiosyncratic risk, consistent with the finance literature. Second, the effect of momentum is not strong enough to cause a significant change in the relationship between idiosyncratic risk and expected returns. Third, a REIT portfolio diversified across property sectors neutralizes the relationship between idiosyncratic risk and expected returns, though the contribution of each property sector is not statistically significant.
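The notion of idiosyncratic risk as residual volatility from a factor regression can be sketched as below. A single simulated market factor stands in for the FF3-plus-momentum specification used in the thesis, and all data are synthetic.

```python
import math
import random

# Toy sketch: idiosyncratic risk measured as the volatility of residuals from
# a factor regression.  One hypothetical market factor replaces the full
# FF3 (+ momentum) model; the "REIT" return series is simulated with a known
# idiosyncratic volatility of 0.02 so the estimate can be checked.
random.seed(7)
market = [random.gauss(0.0, 0.04) for _ in range(250)]                # factor returns
reit = [0.001 + 0.8 * m + random.gauss(0.0, 0.02) for m in market]   # asset returns

def idiosyncratic_vol(returns, factor):
    """Std. dev. of OLS residuals from returns_t = a + b * factor_t + e_t."""
    n = len(returns)
    mf = sum(factor) / n
    mr = sum(returns) / n
    b = (sum((f - mf) * (r - mr) for f, r in zip(factor, returns))
         / sum((f - mf) ** 2 for f in factor))
    a = mr - b * mf
    resid = [r - (a + b * f) for r, f in zip(returns, factor)]
    return math.sqrt(sum(e ** 2 for e in resid) / (n - 2))  # dof-adjusted

print(round(idiosyncratic_vol(reit, market), 4))  # close to the injected 0.02
```

Adding further factors (size, value, momentum) would shrink this residual volatility only to the extent that those factors actually explain the asset's returns, which is why the thesis finds momentum has a minor effect.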
37

Research on Banks' Credit Risk Premium and Default Rate

Chung, Kwang 25 June 2005 (has links)
none
38

none

Chiang, Hui-Chun 12 February 2007 (has links)
none
39

Company accounts receivable risk control and build on default account early warning model

Lee, Hui-Ping 04 July 2007 (has links)
The economic behavior arising from commercial credit is key to a company's future, and a customer's performance determines the probability of bad debt from accounts receivable. To prevent a business unit from deteriorating through weak cash flow from accounts receivable, which can lead to financial crisis or failure, I examine the problem of business credit failure and aim to build an early-warning system to avoid it. The KMV model is applied to companies in the printed circuit board (PCB) industry listed on the stock exchange from 2004 to 2006. The verification results show an average distance to default (DD) of about 3.4982 and an average expected default frequency (EDF) of 0.0084. In addition, capitalization size and financial-ratio analysis are used to evaluate the internal credit-line system in a clinical way, upgrading credit risk management and risk-judgment measurement to reduce losses.
40

Parallelisierung Ersatzmodell-gestützter Optimierungsverfahren / Parallelization of Surrogate-Model-Assisted Optimization Methods

Schmidt, Hansjörg 05 March 2009 (has links) (PDF)
Numerical simulations play an ever greater role in the development of new products. They make it possible to test a new product relatively cheaply before an expensive prototype has to be built, which creates the desire to automate parts of the design process. But even with the most modern algorithms and computers, some of these simulations are very time-consuming, i.e. in the range of minutes to hours. Examples from the automotive sector are chain-drive simulations, flow simulations, and crash simulations; mathematically, these amount to solving differential-algebraic equations and partial differential equations. The goals of the partially automated design process are functional capability and further properties, such as performance or cost, that are as close to optimal as possible. This thesis considers optimization problems in which evaluating the objective function requires a numerical simulation. To solve such problems in acceptable time, one needs optimization methods that can find good approximations of the global optimum with few function evaluations. This thesis considers surrogate-model-assisted optimization methods that use a Kriging approximation. These methods meet the above requirements but are only partially parallelizable. The thesis is structured as follows. The fundamentals of optimization needed for this work are presented in the second chapter. The third chapter deals with the theory of the Kriging approximation. The use of a surrogate model for optimization and the parallelization of the resulting methods are the topic of the fourth chapter. In the fifth chapter the presented methods are verified numerically and recommendations for their application are given. 
The sixth chapter gives an overview of chain-drive design and the use of the presented algorithms. The final chapter summarizes the goals achieved and makes suggestions for further improvements and research topics.
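A key ingredient of such Kriging-assisted optimizers is an acquisition function like expected improvement (EI), which trades off the surrogate's predicted mean against its uncertainty; batch variants of this criterion are one route to selecting several expensive simulations in parallel. The sketch below evaluates EI at a few hypothetical candidate points; the posterior means and standard deviations are invented, not produced by an actual Kriging fit.

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_best):
    """EI for minimization: E[max(f_best - Y, 0)] with Y ~ N(mu, sigma^2).
    Candidates with a low predicted mean OR high model uncertainty score
    well, which is how Kriging-based optimizers balance exploitation and
    exploration with few expensive simulations."""
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

# Hypothetical Kriging posterior (mean, std) at three candidate points,
# with best observed objective value 1.0:
candidates = {"x1": (0.9, 0.05), "x2": (1.2, 0.6), "x3": (1.0, 0.0)}
best = max(candidates, key=lambda k: expected_improvement(*candidates[k], 1.0))
print(best)
```

Note that the very uncertain candidate x2 can beat the slightly-better-on-average x1: EI rewards exploration, and a fully certain point at the incumbent value (x3) scores zero.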
