1.
台灣地區新上市/上櫃公司資訊結構與股價行為之研究 / A Study on the Effect of Information Structure on Valuation of Initial Public Offerings. 邵靄如. Unknown Date.
This study first develops a differential-information model to describe both cross-sectional comparisons and longitudinal changes in IPO share prices. For the cross-sectional comparison, the study applies Bayes' theorem: once a firm preparing to list releases its historical information, investors revise their prior beliefs into posterior beliefs and then use the limited posterior information to forecast the next period's return. Because the quantity and quality of historical information differ across issuers, the β coefficients of the forecast returns also differ: IPOs with better information structures carry lower estimation risk and smaller β coefficients, while IPOs with poorer information structures carry higher estimation risk and larger β coefficients. To attract investors to IPOs with poorer information structures, whose required returns are correspondingly higher, the offering price must be set low enough to leave room for investor profit. Other things being equal, IPOs with poorer information structures should therefore post better initial aftermarket returns than IPOs with better information structures.
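The Bayesian-updating argument above can be sketched numerically. The toy model below is not from the thesis: it assumes a standard normal-normal update in which the "information structure" is simply the number of disclosed historical returns, so a shorter history leaves a wider posterior (higher estimation risk). All function names and parameter values are illustrative.

```python
import numpy as np

def posterior_return_belief(prior_mean, prior_var, observations, noise_var):
    """Normal-normal Bayesian update of an investor's belief about an
    IPO's mean return. More historical observations (a richer information
    structure) shrink the posterior variance, i.e. the estimation risk."""
    n = len(observations)
    post_precision = 1.0 / prior_var + n / noise_var
    post_var = 1.0 / post_precision
    post_mean = post_var * (prior_mean / prior_var + np.sum(observations) / noise_var)
    return post_mean, post_var

rng = np.random.default_rng(0)
true_mean, noise_var = 0.05, 0.04

# Firm A discloses a long return history; firm B discloses a short one.
rich_history = rng.normal(true_mean, np.sqrt(noise_var), size=40)
poor_history = rng.normal(true_mean, np.sqrt(noise_var), size=4)

_, var_rich = posterior_return_belief(0.0, 1.0, rich_history, noise_var)
_, var_poor = posterior_return_belief(0.0, 1.0, poor_history, noise_var)

assert var_rich < var_poor  # better information structure, lower estimation risk
```

Under these assumptions the posterior variance depends only on the number of observations, which is exactly the sense in which a poorer information structure leaves investors bearing more estimation risk.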
Second, to model longitudinal differences in IPO price behavior, the study divides the market's IPOs into those with better and those with poorer information structures. Under the assumption that the quantity of available information grows with time since issuance, the model shows that as time t after listing passes, the differential-information effect between securities diminishes and the β coefficients of newly issued securities decline. It follows that, for IPOs as a whole, aftermarket returns should be lower than the initial post-listing returns earned while estimation risk is still relatively high.
The empirical work proceeds on three levels. The first level concerns the cross-sectional price behavior of IPOs, beginning with the initial price performance of IPOs in different issuing markets. Different markets impose different requirements on the counseling period and financial condition of firms preparing to list; in general, the exchange market's issuing requirements are stricter than the over-the-counter (OTC) market's, so exchange-listed IPOs should in theory have better information structures than OTC IPOs. The empirical results show that exchange-listed IPOs, with their better information structures, earn lower initial returns than OTC IPOs, supporting the hypothesis of differential information across issuing markets. Next, drawing on prior literature and case interviews, the study uses insider ownership, firm size, firm age, underwriter reputation, auditor reputation, and whether the firm switched issuing markets to rank information structures within a single market. The differential-information effect is significant within the exchange market; within the OTC market, however, only IPOs listed during bear markets show a differential-information effect in initial returns.
The second level of the empirical work tests whether longitudinal IPO price changes also show a differential-information effect. Comparing issuing markets first, the results indicate that because the exchange market has the better information structure, pre-listing information asymmetry between investors and issuers is less severe there; consequently, once the honeymoon period ends and prices gradually revert toward fundamental value, the price correction one year after listing is smaller for exchange-listed IPOs than for OTC IPOs with relatively poor information structures. The longitudinal differential-information effect across issuing markets is therefore supported. Within the exchange market, IPOs with better information structures also see far smaller aftermarket price corrections than those with poorer structures. In the OTC market, IPOs with better information structures do show smaller corrections than those with poorer structures, but the difference is not statistically significant, so the longitudinal differential-information effect within the OTC market is not supported.
The third level of the empirical work examines the sources of IPO mispricing signals. Within the exchange market, IPOs brought to market by less reputable underwriters and those with lower pre-listing earnings per share are more prone to mispricing; within the OTC market, IPOs brought to market by less reputable underwriters or belonging to traditional industries are more prone to mispricing. / The objective of this study is twofold. First, the paper develops a model to examine, cross-sectionally and dynamically, the effects of differential information on various initial public offerings (IPOs). Second, the paper examines the initial return and aftermarket performance of IPOs, particularly the security-valuation effects of structural differences in available information. There is a diversity of information among issuing firms at the time of their offering, particularly under certain trading systems and market conditions.
Through the development of a Bayesian model, we support the effect of structural differences in information across IPOs. Empirically, we find that under hot-market conditions, under the over-the-counter (OTC) trading system, and for firms with poor levels of available information, the market values of issuing firms are more likely to be overestimated in the immediate aftermarket. We also find overestimation of market values to be more likely for IPOs with smaller earnings per share (EPS) and those marketed by less prestigious underwriters under the Taiwan Stock Exchange (TSE) trading system, and for IPOs outside the high-tech sector and those marketed by less prestigious underwriters under the OTC trading system.
2.
Spike-Based Bayesian-Hebbian Learning in Cortical and Subcortical Microcircuits. Tully, Philip. January 2017.
Cortical and subcortical microcircuits are continuously modified throughout life. Despite these ongoing changes, the networks stubbornly maintain their functions, even though destabilizing synaptic and nonsynaptic mechanisms should ostensibly propel them toward runaway excitation or quiescence. What dynamical phenomena act together to balance such learning with information processing? What types of activity patterns do they underpin, and how do these patterns relate to our perceptual experiences? What enables learning and memory to operate despite such massive and constant neural reorganization? Progress toward answering many of these questions can be made through large-scale neuronal simulations. In this thesis, a Hebbian learning rule for spiking neurons inspired by statistical inference is introduced. The spike-based version of the Bayesian Confidence Propagation Neural Network (BCPNN) learning rule involves changes in both synaptic strengths and intrinsic neuronal currents. The model is motivated by molecular cascades whose functional outcomes are mapped onto biological mechanisms such as Hebbian and homeostatic plasticity, neuromodulation, and intrinsic excitability. Temporally interacting memory traces enable spike-timing dependence, a stable learning regime that remains competitive, postsynaptic activity regulation, spike-based reinforcement learning, and intrinsically graded persistent firing levels. The thesis demonstrates how multiple interacting plasticity mechanisms can coordinate reinforcement, auto-associative, and hetero-associative learning within large-scale, spiking, plastic neuronal networks. Spiking neural networks can represent information in the form of probability distributions, and a biophysical realization of Bayesian computation can help reconcile disparate experimental observations.
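The core of a BCPNN-style rule is to estimate pre-, post-, and co-activation probabilities with low-pass-filtered activity traces and map them onto a weight w_ij = log(p_ij / (p_i p_j)) and an intrinsic bias b_j = log(p_j). The sketch below is a minimal rate-based caricature of that idea, not the thesis's spike-based formulation; the trace time constant, the regularizing epsilon, and the activity streams are all illustrative assumptions.

```python
import numpy as np

def bcpnn_traces(pre, post, dt=1.0, tau=100.0, eps=0.01):
    """Estimate activation probabilities p_i, p_j, p_ij with exponential
    low-pass filters of pre/post activity, then map them to a BCPNN-style
    weight w = log(p_ij / (p_i * p_j)) and intrinsic bias b = log(p_j).
    eps regularizes the logs so silent units stay finite."""
    decay = np.exp(-dt / tau)
    p_i = p_j = p_ij = eps
    for x, y in zip(pre, post):
        p_i = decay * p_i + (1 - decay) * x
        p_j = decay * p_j + (1 - decay) * y
        p_ij = decay * p_ij + (1 - decay) * x * y
    w = np.log((p_ij + eps**2) / ((p_i + eps) * (p_j + eps)))
    b = np.log(p_j + eps)
    return w, b

rng = np.random.default_rng(1)
pre = rng.random(1000) < 0.3        # presynaptic activity stream
post_corr = pre                     # postsynaptic unit fires with pre
post_anti = ~pre                    # postsynaptic unit avoids pre

w_corr, _ = bcpnn_traces(pre.astype(float), post_corr.astype(float))
w_anti, _ = bcpnn_traces(pre.astype(float), post_anti.astype(float))
assert w_corr > 0 > w_anti  # correlated pairs potentiate, anti-correlated depress
```

The log-ratio form gives the rule its Hebbian/anti-Hebbian character: weights are positive when units co-activate above chance and negative when they co-activate below chance, while the bias tracks each unit's own activation probability.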
3.
Making Models with Bayes. Olid, Pilar. 01 December 2017.
Bayesian statistics is an important approach to modern statistical analyses. It allows us to use our prior knowledge of the unknown parameters to construct a model for our data set. The foundation of Bayesian analysis is Bayes' Rule, which in its proportional form indicates that the posterior is proportional to the prior times the likelihood. We will demonstrate how we can apply Bayesian statistical techniques to fit a linear regression model and a hierarchical linear regression model to a data set. We will show how to apply different distributions to Bayesian analyses and how the use of a prior affects the model. We will also make a comparison between the Bayesian approach and the traditional frequentist approach to data analyses.
4.
Bayesovské a neuronové sítě / Bayesian and Neural Networks. Hložek, Bohuslav. January 2017.
This paper introduces Bayesian neural networks founded on Occam's razor. The first part summarizes the basics of neural networks and Bayes' rule, explains the principles of Occam's razor and of Bayesian neural networks, and introduces a real use case: predicting landslides. The second part shows how to construct a Bayesian neural network in Python, presents such an application, and demonstrates the typical behaviour of Bayesian neural networks on example data.
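The paper's own Python construction is not reproduced here. As a hedged illustration of what "a Bayesian neural network in Python" can mean in miniature, the sketch below fixes a small random-feature network and performs exact Bayesian inference only on its output weights, a common tractable approximation that yields closed-form predictive uncertainty; every size and hyperparameter is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, W1, b1):
    """Hidden layer of a tiny one-input network: fixed random tanh units."""
    return np.tanh(x[:, None] * W1 + b1)

H, noise_var, prior_var = 30, 0.05, 5.0
W1, b1 = rng.normal(0.0, 2.0, H), rng.normal(0.0, 1.0, H)

# Noisy observations of sin(x) serve as example data.
x_train = rng.uniform(-2, 2, 40)
y_train = np.sin(x_train) + rng.normal(0.0, np.sqrt(noise_var), 40)

# Gaussian posterior over the output-layer weights (conjugate update).
Phi = features(x_train, W1, b1)
S = np.linalg.inv(np.eye(H) / prior_var + Phi.T @ Phi / noise_var)
m = S @ Phi.T @ y_train / noise_var

# Predictive mean and one-sigma uncertainty at new inputs.
x_test = np.array([0.0, 5.0])
Phi_t = features(x_test, W1, b1)
pred_mean = Phi_t @ m
pred_std = np.sqrt(np.einsum('ij,jk,ik->i', Phi_t, S, Phi_t) + noise_var)
```

Unlike a point-estimate network, this model returns a distribution at each test input, which is the behaviour (calibrated confidence alongside predictions) that motivates Bayesian neural networks in applications such as landslide prediction.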