111

模糊無母數統計檢定及其在高齡化社會調查之應用 / The Fuzzy Nonparametric Statistical Test and Its Application on the Survey of an Aging Society

趙淑倫 Unknown Date (has links)
在逐漸高齡化的社會中,關注老人的生活議題並加以分析益顯重要。在研究老人問題時,由於研究對象均曾經歷不同的時空背景與人生閱歷,各個體間存在的差異極大;不同族群的老人對其所慣用語彙的理解與表達亦不盡相同。故若利用傳統的統計分析研究結果,強迫人們採用二元邏輯的方式思考與解釋,可能會導致偏差或錯誤的結論。且傳統的統計檢定方法,往往假定取樣的樣本滿足某個分配,因而導致過多的解釋,影響決策品質。 因此,為避免因誤解老人而造成虛耗社會成本,使有限的社會資源得以充分運用,本文於研究老人身心特質與個人期待時,嘗試以模糊理論的軟計算,提出反模糊化轉換。並應用中位數檢定及變異數檢定,建立當統計參數為模糊數或模糊區間時之小樣本無母數模糊統計檢定方法模型。由實證例子分析結果顯示,我們提出的檢定方法,能有效分析模糊樣本的問題,並進而期望能對老人議題的分析和決策有所貢獻,及將此方法運用於其它模糊性議題之研究。 / In a gradually aging society, it is important to pay attention to and analyze elderly people's life issues. When a study about elderly people is undertaken, the subjects are very heterogeneous given their diverse life experiences, and various subgroups of subjects understand and express a given vocabulary quite differently. Therefore, analyzing study results with conventional statistical methods, which force thinking in a binary-logic way, may lead to biased or erroneous conclusions. Furthermore, conventional statistical tests, which usually assume a certain distribution for the samples, may lead to exaggerated explanations that are detrimental to the quality of a decision. So, in order to avoid wasting social resources through misunderstanding of the elderly, and to make the most of our limited social resources, when investigating elderly people's personal characteristics and expectations we applied the soft computing of fuzzy theory, proposed a counter-fuzzy (defuzzification) transformation, and, by using the median test and the variance test, established a nonparametric fuzzy statistical test model for small samples whose parameters are fuzzy numbers or fuzzy intervals. The analyses of real-world examples demonstrate that the proposed tests can effectively analyze problems involving fuzzy samples; we hope they contribute to the analysis of and decision making on elderly people's issues, and that the method can be applied to the investigation of other fuzzy issues. Key words: counter-fuzzy transformation, fuzzy statistical analysis, median test, variance test, aging society.
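To give a concrete sense of the kind of procedure this abstract describes, here is a minimal Python sketch (hypothetical interval data; centroid defuzzification and a sign-based median test are generic stand-ins, not necessarily the exact transformation or test proposed in the thesis): fuzzy interval responses are first defuzzified and then tested against a hypothesized median with a small-sample nonparametric test.

```python
import numpy as np
from scipy import stats

# Hypothetical fuzzy interval responses: each respondent reports a
# (lower, upper) range for "hours of social activity per week".
responses = np.array([(2, 5), (1, 3), (4, 8), (0, 2), (3, 6),
                      (5, 9), (1, 4), (2, 6), (0, 3), (4, 7)], dtype=float)

# Centroid defuzzification of an interval: its midpoint.
crisp = responses.mean(axis=1)

# Small-sample nonparametric median (sign) test: under H0 the population
# median equals 3, so the count of observations above 3 is Binomial(n, 0.5).
hypothesized_median = 3.0
above = int(np.sum(crisp > hypothesized_median))
n = int(np.sum(crisp != hypothesized_median))  # drop ties with the median
p_value = stats.binomtest(above, n, p=0.5).pvalue

print(f"defuzzified values: {crisp}")
print(f"sign test p-value for H0: median = {hypothesized_median}: {p_value:.3f}")
```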
112

模糊統計在數學教師教學評鑑調查之應用 / Application of Fuzzy Statistics in the Teaching Evaluation of Mathematics Teachers

林青昊 Unknown Date (has links)
十二年國教及中小學教師評鑑即將上路,教學方向的調整與教師能力的提升在不久的將來將列為重要的教師績效指標之一。教師應如何轉型,及如何提升學生的上課狀況,都可透過教學問卷的回饋來作參考;教學問卷可直接且快速的反應學生想法並成為師生溝通的交流管道,使教師反省自我教學方式及技巧,進而改善;因此,在使用問卷時,若利用傳統的統計分析方式來研究結果,強迫學生採用二元邏輯的方式思考與解釋問卷結果,將可能會導致偏差或錯誤的結論。本論文應用模糊理論的概念,以模糊問卷為工具,利用模糊德菲法探討學生喜歡的數學老師類型,再提出新反模糊化值,並藉由模糊威克生等級和檢定及變異數檢定方法,分析學生滿意度是否會因性別、年紀、成績、背景而有所不同,最後討論學校老師及校外老師間的滿意度是否有差別。由實證例子分析結果顯示,我們提出的檢定方法,能有效分析模糊樣本的問題,進而期望能對教學問卷的分析和決策有所貢獻,並將此方法運用於其它模糊性議題之研究。 / The twelve-year compulsory education program and the evaluation of primary and secondary school teachers are about to be brought into practice, and the adjustment of teaching direction and the improvement of teacher capability will become key indicators of teacher performance in the near future. How teachers should adapt, and how they can improve students' engagement in class, can be informed by feedback from teaching questionnaires. Such questionnaires quickly and directly reflect students' thoughts and serve as a communication channel between teachers and students, helping teachers examine and improve how and what they teach. However, if conventional statistical analysis is applied to the questionnaires, forcing students to think and respond in binary logic, the results may be biased or erroneous. Based on the fuzzy Delphi method, this study applies the concepts of fuzzy theory and uses fuzzy questionnaires to analyze what kinds of mathematics teachers students like. We then propose a new counter-fuzzy (defuzzified) value and use the fuzzy Wilcoxon rank-sum test and a variance test to examine whether students' satisfaction differs by gender, age, grade, or family background; finally, we discuss whether satisfaction differs between school teachers and teachers outside the school. The empirical results demonstrate that the proposed tests, built on fuzzy statistical analysis, can effectively analyze fuzzy sample data. We believe they can support analysis and decision making on teaching evaluation, and the method can also be applied to other studies of fuzzy issues.
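As a rough illustration of the rank-sum and variance comparisons mentioned in this abstract, the sketch below (hypothetical satisfaction scores; the standard Wilcoxon rank-sum and Levene tests on defuzzified values, not the thesis's fuzzy versions) compares two groups of student responses.

```python
import numpy as np
from scipy import stats

# Hypothetical defuzzified satisfaction scores (0-10) from fuzzy questionnaires,
# split by whether the mathematics teacher is a school teacher or an outside teacher.
school_teacher = np.array([7.2, 6.5, 8.1, 5.9, 7.8, 6.9, 7.4])
outside_teacher = np.array([6.1, 5.4, 6.8, 7.0, 5.2, 6.3])

# Wilcoxon rank-sum (Mann-Whitney U) test: do the two satisfaction
# distributions differ in location?
stat, p_value = stats.mannwhitneyu(school_teacher, outside_teacher,
                                   alternative="two-sided")
print(f"U statistic = {stat:.1f}, p-value = {p_value:.3f}")

# Levene's test as a rough stand-in for the variance comparison the study
# pairs with the rank-sum test (robust to non-normality).
w, p_var = stats.levene(school_teacher, outside_teacher)
print(f"Levene W = {w:.2f}, p-value = {p_var:.3f}")
```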
113

員工分紅制度對台灣上市櫃電子業經營績效關聯性之研究 / A Study of the Relationship between Employee Bonus Plans and the Operating Performance of Listed and OTC Electronics Companies in Taiwan

盧明煇 Unknown Date (has links)
本研究以2000年至2004年台灣上市櫃的623家電子業為研究對象,探討員工分紅制度對企業經營績效的影響。本研究採用兩階段法,第一階段,採用DEA併用單變量統計之變異數分析法(ANOVA)及無母數分析法(Wilcoxon兩樣本檢定;K-W多樣本檢定),來驗證電子業實行員工分紅對企業經營績效的影響。第二階段,DEA併用Tobit迴歸模型,比較第一階段單變量統計檢定的研究結果。研究結果發現: (1)電子產業內有發放員工分紅者的企業經營績效顯著較低,同時電子業發放前一年度員工分紅者對當年度的經營績效為負向顯著相關。 (2)電子產業內發放員工現金紅利對企業經營績效的影響顯著高於股票紅利者,同時電子業發放前一年度員工股票紅利者對當年度的企業經營績效為負向顯著相關。 (3)電子產業內員工分紅佔公司市值比例高者對企業經營績效的影響劣於員工分紅佔公司市值比例低者,且在增加其他控制變數後,電子業發放前一年度員工分紅佔公司市值比例高者對當年度的企業經營績效為負向顯著相關。 (4)電子產業內員工分紅佔薪資比例高者對企業經營績效的影響優於員工分紅佔薪資比例低者,且在增加其他控制變數後,電子業發放前一年度員工分紅佔薪資比例高者對當年度的企業經營績效為正向顯著相關。 / This study examines 623 listed and OTC electronics companies in Taiwan from 2000 to 2004 and investigates how employee bonus plans affect operating performance. A two-stage approach is adopted. In the first stage, DEA is combined with univariate statistics, namely analysis of variance (ANOVA) and nonparametric methods (the Wilcoxon two-sample test and the Kruskal-Wallis multi-sample test), to examine the effect of employee bonuses on operating performance. In the second stage, DEA is combined with a Tobit regression model and the results are compared with those of the first-stage univariate tests. The findings are: (1) electronics firms that distribute employee bonuses show significantly lower operating performance, and the previous year's employee bonus is significantly negatively related to the current year's performance; (2) the effect of cash bonuses on operating performance is significantly greater than that of stock bonuses, and the previous year's stock bonus is significantly negatively related to the current year's performance; (3) firms whose employee bonuses are a high proportion of market value perform worse than those with a low proportion, and after adding other control variables the previous year's high bonus-to-market-value ratio is significantly negatively related to the current year's performance; (4) firms whose employee bonuses are a high proportion of salaries perform better than those with a low proportion, and after adding other control variables the previous year's high bonus-to-salary ratio is significantly positively related to the current year's performance.
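The second-stage comparisons in this abstract lend themselves to a short sketch. The code below uses hypothetical DEA efficiency scores (real DEA requires solving one linear program per firm, which is omitted here) and applies the ANOVA and Kruskal-Wallis tests named above; the group sizes and distributions are invented for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical first-stage DEA efficiency scores (0-1) for electronics firms.
bonus_firms = rng.beta(4, 2, size=60) * 0.90      # firms that pay employee bonuses
no_bonus_firms = rng.beta(5, 2, size=40) * 0.95   # firms that do not

# Univariate comparison of the two groups: one-way ANOVA on efficiency scores.
f_stat, p_anova = stats.f_oneway(bonus_firms, no_bonus_firms)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Kruskal-Wallis test across more than two groups,
# e.g. bonus-to-market-value terciles.
tercile_scores = [rng.beta(a, 2, size=30) for a in (3.5, 4.0, 4.5)]
h_stat, p_kw = stats.kruskal(*tercile_scores)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.3f}")
```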
114

臺灣社會保險所得重分配效果於不同城鄉間之影響 / The Income Redistribution Effect of Social Insurance across Urban and Rural Areas in Taiwan

簡雅惠 Unknown Date (has links)
社會安全制度,以社會保險及公共救助為主體,兩者之中尤以社會保險為骨幹,社會保險通常扮演著重要的角色。當中一項重要的功能即為所得(財富)重分配功能,亦即政府借助社會保險之力,達成安定經濟社會與改善國民所得分配不均,以達公平之目標。 本文在實證方法上採用「吉尼係數法」與「變異係數法」來計算社會保險的所得重分配效果。利用民國八十五年至民國九十一年行政院主計處「中華民國臺灣地區家庭收支調查報告」之調查資料,探討臺灣地區所得分配不均度上升的原因是否來自於城鄉差異,其次是社會保險政策對於平衡城鄉差距是否有助益。 為了衡量社會保險的所得重分配效果是否會因城鄉發展程度之不同而有所差異,將臺灣地區內之城市分為都市、城鎮及鄉村三級,其分層標準係依照行政院主計處「中華民國臺灣地區家庭收支調查報告」之標準分類。本研究以城鄉別與社會保險為研究主軸,探討臺灣社會保險的所得重分配效果是否在不同城鄉間會有所影響。 綜合研究結果及分析,對於民國八十五至九十一年社會保險實施的所得重分配效果所得到的結論為:1.臺灣地區自民國八十五年後無論是區分層級或整體所得分配效果上的吉尼係數均有逐漸縮小的趨勢,代表政府對於平均所得分配之努力是有所成效的。2.在吉尼係數法下,除了「都市層」外,社會保險實施後「城鎮層」、「鄉村層」與整體所得分配效果的吉尼係數值均高於較社會保險實施前,顯示社會保險政策在平衡城鄉所得差異上的力量似乎薄弱了些。3.在變異係數法下,無論是分層效果或是整體效果實施社會保險後整體的所得分配平均化力量均減弱,故社會保險政策在平均所得分配的效果上似乎沒有達到預期的成效。4.綜合上述兩種方法,除了吉尼係數法下的「都市層」有達成社會保險的所得重分配效果外,吉尼係數法與變異係數法的其他層級和整體效果分析均顯示出實施社會保險未達成所得重分配的效果。 / Social insurance and public assistance are the two main components of the social security system, with social insurance as its backbone. One of its important functions is income (wealth) redistribution: the government redistributes income through social insurance to reduce inequality in the national income distribution, thereby pursuing equity and stabilizing the economy and society. This article uses the 1996-2002 data of the "Report on the Survey of Family Income and Expenditure in Taiwan Area" compiled by the Directorate-General of Budget, Accounting and Statistics, Executive Yuan, R.O.C. (Taiwan), and calculates the income redistribution effect of social insurance with the Gini coefficient method and the coefficient of variation method. The article addresses two issues: whether the rise in income inequality in Taiwan comes from the urban-rural difference, and whether social insurance policy helps to balance the urban-rural gap. To assess whether the income redistribution effect of social insurance differs across levels of urbanization, we divide the areas in Taiwan into three strata, cities, towns, and villages, according to the standard classification of the survey, and use the urban-rural dimension and social insurance as the two axes of the study. Regarding the effect of social insurance on income redistribution from 1996 to 2002, our findings are as follows. First, whether measured by stratum or overall, the Gini coefficient gradually declined from 1996 to 2002, suggesting that the government's efforts to equalize income distribution have been effective. Second, under the Gini coefficient method, except for the city stratum, the town stratum, the village stratum, and the overall distribution all had higher Gini coefficients after social insurance than before, suggesting that the power of social insurance to balance urban-rural income differences is rather weak. Third, under the coefficient of variation method, both the stratified and the overall equalizing effects weakened after social insurance, so the policy did not achieve the expected effect. Fourth, combining the two methods, apart from the city stratum under the Gini coefficient method, the analyses of the other strata and of the overall effect show that social insurance did not achieve the intended income redistribution.
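For readers unfamiliar with the two inequality measures named in this abstract, here is a minimal sketch (hypothetical household incomes and a stylized flat-rebate insurance scheme, not the survey data or the actual premium/benefit rules) computing the Gini coefficient and the coefficient of variation before and after a transfer.

```python
import numpy as np

def gini(income):
    """Gini coefficient computed from the cumulative shares of sorted incomes."""
    income = np.sort(np.asarray(income, dtype=float))
    n = income.size
    cum = np.cumsum(income)
    # Equivalent to the mean-absolute-difference definition of the Gini index.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def coef_variation(income):
    """Coefficient of variation: standard deviation over mean."""
    income = np.asarray(income, dtype=float)
    return income.std() / income.mean()

# Hypothetical household disposable incomes (thousand NTD per year).
pre_transfer = np.array([220, 310, 450, 520, 640, 800, 980, 1200, 1600, 2400])

# Stylized social insurance: a 5% contribution on income, redistributed equally.
premium = 0.05 * pre_transfer
post_transfer = pre_transfer - premium + premium.mean()

print(f"Gini before: {gini(pre_transfer):.3f}, after: {gini(post_transfer):.3f}")
print(f"CV   before: {coef_variation(pre_transfer):.3f}, "
      f"after: {coef_variation(post_transfer):.3f}")
```

Comparing the before/after values of either index is exactly the kind of stratum-by-stratum comparison the study reports.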
115

偏常態因子信用組合下之效率估計值模擬 / Efficient Simulation in Credit Portfolio with Skew Normal Factor

林永忠, Lin, Yung Chung Unknown Date (has links)
在因子模型下,損失分配函數的估算取決於混合型聯合違約分配。蒙地卡羅是一個經常使用的計算工具。然而,一般蒙地卡羅模擬是一個不具有效率的方法,特別是在稀有事件與複雜的債務違約模型的情形下,因此,找尋可以增進效率的方法變成了一件迫切的事。 對於這樣的問題,重點採樣法似乎是一個可以採用且吸引人的方法。透過改變抽樣的機率測度,重點採樣法使估計量變得更有效率,尤其是針對相對複雜的模型。因此,我們將應用重點採樣法來估計偏常態關聯結構模型的尾部機率。這篇論文包含兩個部分。Ⅰ:應用指數扭轉法---一個經常使用且為較佳的重點採樣技巧---於條件機率。然而,這樣的程序無法確保所得的估計量有足夠的變異縮減。此結果指出,對於因子在選擇重點採樣上,我們需要更進一步的考慮。Ⅱ:進一步應用重點採樣法於因子;在這樣的問題上,已經有相當多的方法在文獻中被提出。在這些文獻中,重點採樣的方法可大略區分成兩種策略。第一種策略主要在選擇一個最好的位移。最佳的位移值可透過操作不同的估計法來求得,這樣的策略出現在Glasserman等(1999)或Glasserman與Li (2005)。 第二種策略則如同在Capriotti (2008)中的一樣,則是考慮擁有許多參數的因子密度函數作為重點採樣的候選分配。透過解出非線性優化問題,就可確立一個未受限於位移的重點採樣分配。不過,這樣的方法在尋找最佳的參數當中,很容易引起另一個效率上的問題。為了要讓此法有效率,就必須在使用此法前,對參數的穩健估計上,投入更多的工作,這將造成問題更行複雜。 本文中,我們說明了另一種簡單且具有彈性的策略。這裡,我們所提的演算法不受限在如同Gaussian模型下決定最佳位移的作法,也不受限於因子分配函數參數的估計。透過Chiang, Yueh與Hsieh (2007)文章中的主要概念,我們提供了重點採樣密度函數一個合理的推估並且找出了一個不同於使用隨機近似的演算法來加速模擬的進行。 / Under a factor model, computation of the loss density function relies on estimates of some mixture of the joint default probability and the joint survival probability. Monte Carlo simulation is among the most widely used computational tools in such estimation. Nevertheless, plain Monte Carlo simulation is inefficient, particularly for rare events and for the complex dependence between the defaults of multiple obligors, so a method that increases the efficiency of estimation is needed. Importance sampling (IS) is an attractive way to address this problem: by changing the probability measure, IS makes an estimator more efficient, especially for complicated models. We therefore consider IS for estimating the tail probability of a skew normal copula model. This paper consists of two parts. First, we apply the exponential twist, a common and effective IS technique, to the conditional default probabilities; however, this procedure does not always guarantee enough variance reduction, which indicates that the choice of the IS density for the factors needs further consideration. Second, we therefore apply importance sampling to the factors as well. A variety of approaches to this problem has recently been proposed in the literature (Capriotti 2008; Glasserman et al. 1999; Glasserman and Li 2005), and the better choices of IS density can be roughly classified into two strategies. The first strategy chooses an optimal shift, with the optimal drift determined by different approximation methods, as in Glasserman et al. (1999) or Glasserman and Li (2005). The second strategy, as in Capriotti (2008), considers a family of factor densities indexed by a set of real parameters; by formulating the choice as a nonlinear optimization problem, an IS density that is not limited to a drift is obtained. The search for the optimal parameters, however, incurs another efficiency problem: to keep the method efficient, robust parameter estimation requires extra work in a preliminary Monte Carlo run, which makes the method more complicated. In this paper, we describe an alternative strategy that is straightforward and flexible enough to be applied in a Monte Carlo setting. Our algorithm is limited neither to the determination of an optimal drift as in the Gaussian copula model nor to the estimation of the parameters of a factor density. Exploiting a concept similar to that developed for basket default swap valuation in Chiang, Yueh, and Hsieh (2007), we provide a reasonable guess of the optimal sampling density and establish a way, different from stochastic approximation, to speed up the simulation.
Finally, we provide theoretical support for the single-factor model and take the approach a step further to the multifactor case, obtaining a rough but fast approximation that executes entirely within Monte Carlo in general situations. We support our approach with several portfolio examples; the numerical results show that the algorithm is considerably more efficient than plain Monte Carlo simulation.
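A minimal single-factor sketch of the exponential-twisting step described in the first part of this abstract is given below. It uses a Gaussian factor rather than the skew normal factor studied in the thesis, and all parameters (loading, default probability, threshold) are hypothetical: conditional on the factor, the default indicators are twisted so that the tail event {L >= x} is hit more often, and each sample is reweighted by the likelihood ratio exp(-theta*L + psi(theta)).

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)

m, rho, p_bar = 100, 0.3, 0.02       # obligors, factor loading, unconditional PD
exposure = np.ones(m)                # unit exposures
threshold = 10.0                     # tail event: loss >= 10
n_paths = 5_000

def tail_prob_is():
    losses, weights = np.empty(n_paths), np.empty(n_paths)
    for k in range(n_paths):
        z = rng.standard_normal()    # common factor
        # Conditional default probability given the factor (one-factor Gaussian copula).
        p_cond = stats.norm.cdf((stats.norm.ppf(p_bar) - np.sqrt(rho) * z)
                                / np.sqrt(1 - rho))
        p_z = np.full(m, p_cond)

        # Exponential twist: pick theta so the twisted conditional mean loss
        # equals the threshold (no twist if the mean already exceeds it).
        def mean_shift(theta):
            q = p_z * np.exp(theta * exposure) / (1 + p_z * (np.exp(theta * exposure) - 1))
            return np.dot(q, exposure) - threshold

        theta = optimize.brentq(mean_shift, 0.0, 50.0) if mean_shift(0.0) < 0 else 0.0
        q_z = p_z * np.exp(theta * exposure) / (1 + p_z * (np.exp(theta * exposure) - 1))

        defaults = rng.random(m) < q_z
        loss = np.dot(defaults, exposure)
        # Likelihood ratio exp(-theta*L + psi(theta)), psi the conditional CGF.
        psi = np.sum(np.log(1 + p_z * (np.exp(theta * exposure) - 1)))
        losses[k], weights[k] = loss, np.exp(-theta * loss + psi)

    hit = losses >= threshold
    return np.mean(hit * weights)

print(f"IS estimate of P(L >= {threshold:.0f}): {tail_prob_is():.2e}")
```

The second part of the abstract, twisting the factor itself, would additionally shift (or reshape) the sampling density of z and multiply the weight by the corresponding factor likelihood ratio.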
116

利用混合模型估計風險值的探討 / A Study of Value-at-Risk Estimation Using Mixture Models

阮建豐 Unknown Date (has links)
風險值大多是在假設資產報酬為常態分配下計算而得的,但是這個假設與實際的資產報酬分配不一致,因為很多研究者都發現實際的資產報酬分配都有厚尾的現象,也就是極端事件的發生機率遠比常態假設要來的高,因此利用常態假設來計算風險值對於真實損失的衡量不是很恰當。 針對這個問題,本論文以歷史模擬法、變異數-共變異數法、混合常態模型來模擬報酬率的分配,並依給定的信賴水準估算出風險值,其中混合常態模型的參數是利用準貝式最大概似估計法及EM演算法來估計;然後利用三種風險值的評量方法:回溯測試、前向測試與二項檢定,來評判三種估算風險值方法的優劣。 經由實證結果發現: 1.報酬率分配在左尾臨界機率1%有較明顯厚尾的現象。 2.利用混合常態分配來模擬報酬率分配會比另外兩種方法更能準確的捕捉到左尾臨界機率1%的厚尾。 3.混合常態模型的峰態係數值接近於真實報酬率分配的峰態係數值,因此我們可以確認混合常態模型可以捕捉高峰的現象。 關鍵字:風險值、厚尾、歷史模擬法、變異數-共變異數法、混合常態模型、準貝式最大概似估計法、EM演算法、回溯測試、前向測試、高峰 / Value at Risk (VaR) is usually calculated under the assumption that the underlying asset return is normally distributed, but this assumption is often inconsistent with the actual return distribution: many researchers have found that actual return distributions have fat tails, i.e. extreme events occur far more often than the normal assumption implies, so a VaR computed under normality measures the true loss poorly. To address this problem, this paper models the return distribution with three methods, the historical simulation method, the variance-covariance method, and a mixture normal model, and estimates VaR at a given confidence level; the parameters of the mixture normal model are estimated with the quasi-Bayesian maximum likelihood method and the EM algorithm. Three evaluation procedures, the back test, the forward test, and the binomial test, are then used to compare the three VaR methods. The empirical results show: 1. At the 1% left-tail critical probability, the return distribution exhibits a clear fat-tail phenomenon. 2. The mixture normal distribution captures the 1% left tail more accurately than the other two methods. 3. The kurtosis of the mixture normal model is close to the kurtosis of the actual return distribution, confirming that the mixture normal model can capture the leptokurtosis phenomenon. Key words: Value at Risk, VaR, fat tail, historical simulation method, variance-covariance method, mixture normal distribution, quasi-Bayesian MLE, EM algorithm, back test, forward test, leptokurtosis
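A rough sketch of the mixture-normal VaR idea described above is shown below. It uses simulated returns and a basic EM loop for a two-component mixture (not the quasi-Bayesian variant used in the thesis), and compares the resulting 1% VaR with the plain-normal VaR.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated daily returns with a fat left tail: mostly calm, occasionally turbulent.
returns = np.concatenate([rng.normal(0.0005, 0.01, 950),
                          rng.normal(-0.002, 0.03, 50)])

# EM for a two-component normal mixture.
w, mu, sigma = np.array([0.5, 0.5]), np.array([0.0, 0.0]), np.array([0.01, 0.03])
for _ in range(200):
    # E-step: responsibility of each component for each observation.
    dens = np.array([wk * stats.norm.pdf(returns, mk, sk)
                     for wk, mk, sk in zip(w, mu, sigma)])
    resp = dens / dens.sum(axis=0)
    # M-step: re-estimate weights, means, and standard deviations.
    nk = resp.sum(axis=1)
    w = nk / returns.size
    mu = (resp @ returns) / nk
    sigma = np.sqrt((resp * (returns - mu[:, None]) ** 2).sum(axis=1) / nk)

# 1% VaR of the fitted mixture: bisection on its CDF for the 1% quantile.
def mixture_cdf(x):
    return sum(wk * stats.norm.cdf(x, mk, sk) for wk, mk, sk in zip(w, mu, sigma))

lo, hi = returns.mean() - 10 * returns.std(), returns.mean()
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mixture_cdf(mid) < 0.01 else (lo, mid)
var_mixture = -lo

var_normal = -stats.norm.ppf(0.01, returns.mean(), returns.std())
print(f"1% VaR, normal assumption : {var_normal:.4f}")
print(f"1% VaR, mixture of normals: {var_mixture:.4f}")
```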
117

混合線性模型推測問題之研究 / A Study of Prediction Problems in Mixed Linear Models

洪可音 Unknown Date (has links)
當線性模型中包含隨機效果項時,若將之視為固定效果或直接忽略,往往會造成嚴重的推測偏差,故應以混合線性模型為架構。若模式中只包含一個隨機效果項,則模式中有兩個變異數成份,若包含k個隨機效果項,則模式中有k+1個變異數成份。本論文主要在介紹至少兩個變異數成份時固定效果及隨機效果線性組合的最佳線性不偏推測量(BLUP),及其推測區間之推導與建立。然而BLUP實為變異數比率的函數,若變異數比率未知,而以最大概似法(Maximum Likelihood Method)或殘差最大概似法(Residual Maximum Likelihood Method)估計出變異數比率,再代入BLUP中,則得到的是經驗最佳線性不偏推測量(EBLUP)。至於推測區間則與EBLUP的均方誤有關,本論文先介紹如何求算其漸近不偏估計量m_a,再介紹EBLUP之推測誤差除以m_a^(1/2)後,其自由度的估算方法,據以建構推測區間。 / When random effects are contained in a linear model, treating them as fixed effects or ignoring them may result in serious prediction bias; a mixed linear model should be used instead. If there is one random effect, the model has two variance components, while a model with k random effects has k+1 variance components. This study primarily presents the derivation of the best linear unbiased predictor (BLUP) of a linear combination of the fixed and random effects, and the construction of its prediction interval, when the model contains at least two variance components. The BLUP, however, is a function of the variance ratios. If the variance ratios are unknown, we can replace them by their maximum likelihood or residual maximum likelihood estimates to obtain the empirical best linear unbiased predictor (EBLUP). Because the prediction interval is related to the mean squared error (MSE) of the EBLUP, the study first introduces how to obtain an approximately unbiased estimator of this MSE, m_a, and then how to evaluate the degrees of freedom of the ratio of the EBLUP prediction error to m_a^(1/2), in order to establish the prediction interval.
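A small sketch of the BLUP idea for the simplest case, a one-way random effects model, is given below (hypothetical data). It solves Henderson's mixed model equations with the variance ratio treated as known, so it illustrates the BLUP rather than the EBLUP discussed in the abstract, where the ratio would be replaced by an ML or REML estimate.

```python
import numpy as np

rng = np.random.default_rng(3)

# One-way random effects model: y_ij = mu + u_i + e_ij,
# u_i ~ N(0, sigma_u^2), e_ij ~ N(0, sigma_e^2).
groups, per_group = 6, 4
sigma_u, sigma_e = 2.0, 1.0
u_true = rng.normal(0, sigma_u, groups)
y = np.concatenate([10.0 + ui + rng.normal(0, sigma_e, per_group) for ui in u_true])

X = np.ones((y.size, 1))                               # fixed effect: overall mean
Z = np.kron(np.eye(groups), np.ones((per_group, 1)))   # random-effect design matrix

# Henderson's mixed model equations, with known ratio lambda = sigma_e^2 / sigma_u^2.
lam = sigma_e ** 2 / sigma_u ** 2
lhs = np.block([[X.T @ X, X.T @ Z],
                [Z.T @ X, Z.T @ Z + lam * np.eye(groups)]])
rhs = np.concatenate([X.T @ y, Z.T @ y])
sol = np.linalg.solve(lhs, rhs)
beta_hat, u_blup = sol[0], sol[1:]

# BLUP of a mixed linear combination, e.g. the mean of group 0: mu + u_0.
print(f"estimated overall mean: {beta_hat:.3f}")
print(f"BLUP of group-0 effect: {u_blup[0]:.3f}  (true {u_true[0]:.3f})")
print(f"BLUP of group-0 mean  : {beta_hat + u_blup[0]:.3f}")
```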
118

異質性投資組合下的改良式重點取樣法 / Modified Importance Sampling for Heterogeneous Portfolio

許文銘 Unknown Date (has links)
衡量投資組合的稀有事件時,即使稀有事件違約的機率極低,但是卻隱含著高額資產違約時所帶來的重大損失,所以我們必須要精準地評估稀有事件的信用風險。本研究係在估計信用損失分配的尾端機率,模擬的模型包含同質模型與異質模型;然而蒙地卡羅法雖然在風險管理的計算上相當實用,但是估計機率極小的尾端機率時模擬不夠穩定,因此為增進模擬的效率,我們利用Glasserman and Li (Management Science, 51(11), 2005)提出的重點取樣法,以及根據Chiang et al. (Journal of Derivatives, 15(2), 2007)重點取樣法為基礎做延伸的改良式重點取樣法,兩種方法來對不同的投資組合做模擬,更是將改良式重點取樣法推廣至異質模型做討論,本文亦透過變異數縮減效果來衡量兩種方法的模擬效率。數值結果顯示,比起傳統的蒙地卡羅法,此兩種方法皆能達到變異數縮減,其中在同質模型下的改良式重點取樣法有很好的表現,模擬時間相當省時,而異質模型下的重點取樣法也具有良好的估計效率及模擬的穩定性。 / When measuring the rare-event credit risk of a portfolio, even though default probabilities are low, a large number of defaults can cause significant losses, so rare-event portfolio credit risk must be measured accurately. Our goal is to estimate the tail of the loss distribution, and the models we simulate include homogeneous and heterogeneous models. Monte Carlo simulation is a useful and widely used computational tool in risk management, but it is unstable when estimating small tail probabilities. Hence, to improve the efficiency of simulation, we use the importance sampling method proposed by Glasserman and Li (Management Science, 51(11), 2005) and a modified importance sampling method extended from the importance sampling of Chiang et al. (Journal of Derivatives, 15(2), 2007). We simulate different portfolios with these two methods and, furthermore, extend the modified importance sampling to the heterogeneous model. The efficiency of the two methods is measured by their variance reduction. Numerical results show that both methods outperform plain Monte Carlo and achieve variance reduction; in the homogeneous model the modified importance sampling is highly efficient and saves simulation time, while in the heterogeneous model importance sampling also delivers good estimation efficiency and stable simulation.
119

網路圖書市場與傳統圖書市場定價行為之研究 / A Study of Pricing Behavior in the Online and Traditional Book Markets

王亭享 Unknown Date (has links)
摘要 台灣地區近日網路購物的人數快速成長,因此,消費者對於網路商品的需求大增,是否能從網路商店購買到價廉物美的商品乃為消費者關心的焦點。且根據資策會的調查,書籍是網路購物者的最愛,基於探討網路商店定價行為的國外文獻大抵僅以美國為研究對象,偏重於簡單的統計與迴歸分析,欠缺較深入的統計方法如單因子變異數分析法及集群分析法,加上國內缺乏針對網路商店定價行為從事分析的實證文獻,於是本研究將以書籍為研究對象,蒐集國內傳統書店和網路書店銷售書籍的價格,綜合了迴歸分析、無母數統計、單因子變異數分析及集群分析四大方法從事書店定價行為的分析。本研究先以複迴歸分析探討不同書店之書籍售價的決定因素;再以無母數統計法比較不同購買數量及不同種類的書籍售價;然後,以單因子變異數分析和Tukey、Scheffe兩兩比較法剖析書籍的售價與書店的關聯性;最後,以策略群組分析法探索書店間經營策略的類似性及差異性。 結果顯示,網路書店彼此之間的價格競爭較實體書店之間激烈。書店的書籍銷售價格和資本額、員工數成正向關係,資本額越大、員工數越多則書籍銷售價格越高,書店歷史、商品種類則和書店書籍銷售價格有負向關係,書店歷史越久、商品種類越多,則書籍銷售價格越便宜。再者,比較不同購買數量的書籍售價後發現消費者只要購買兩本書籍以上,網路書店比實體書店便宜,而若消費者只購買一本書籍,在網路書店購買會較吃虧。還有,在工商企管、健康旅遊及文學類三類書籍的價格比較下,書籍售價排序由低到高為文學類、健康旅遊類、工商企管類。接下來,從書籍與書店的關聯性來看,誠品網路書店的書籍銷售價格最高,華文網和搜主義的書籍最便宜。此外,本研究發現,實體書店所開設的網路分店書籍的售價都較實體書店高,而且純網路書店的書籍銷售價格顯著較實體書店的網路分店便宜。 最後,在策略群組分析法中發現19家書店可分為6個群組:節省開銷,專於本業型 (政大書城、上達書局、聯經出版社、今日書局、搜主義網路書店、三民網路書店及誠品網路書店)、圖書館結合百貨公司型 (誠品書店)、大規模經營型 (金石堂書店、新學友書局、博客來網路書店及新絲路網路書店);第四群小百貨公司型 (建宏書局、三民書局、摩爾書店、金石堂網路書店及華文網路書店);第五群致力服務型 (何嘉仁書店);第六群國際級圖書館型 (紀伊國屋書店)。 / Abstract: The number of online shoppers in Taiwan has grown rapidly, and so has consumer demand for online goods; whether consumers can buy good products at low prices from online stores is therefore a focal concern. According to a survey by the Institute for Information Industry, books are online shoppers' favorite purchase. Existing foreign studies on online store pricing focus mainly on the United States and rely on simple statistics and regression analysis, lacking deeper statistical methods such as one-way analysis of variance and cluster analysis, and there is little domestic empirical work on online store pricing. This study therefore takes books as its subject, collects the prices of books sold by traditional and online bookstores in Taiwan, and analyzes bookstore pricing behavior with four methods: regression analysis, nonparametric statistics, one-way ANOVA, and cluster analysis. We first use multiple regression to examine the determinants of book prices across bookstores; we then use nonparametric statistics to compare prices for different purchase quantities and different book categories; next, we use one-way ANOVA with Tukey and Scheffe pairwise comparisons to analyze the relationship between book prices and bookstores; finally, we use strategic group analysis to explore the similarities and differences in the bookstores' business strategies. The results show that price competition is fiercer among online bookstores than among physical bookstores. Book prices are positively related to capital and the number of employees (the larger the capital and the more employees, the higher the prices) and negatively related to the bookstore's age and the variety of goods carried (the older the store and the more varied the goods, the cheaper the books). Comparing prices for different purchase quantities, online bookstores are cheaper than physical bookstores whenever consumers buy two or more books, whereas buying a single book online is less favorable. Comparing three book categories, prices from low to high are literature, health and travel, and business and management. Regarding the relationship between books and bookstores, 誠品網路書店 has the highest book prices, while 華文網 and 搜主義 have the cheapest. In addition, the online branches opened by physical bookstores price books higher than their physical stores, and pure online bookstores are significantly cheaper than the online branches of physical bookstores. Finally, the strategic group analysis classifies the 19 bookstores into six groups: a cost-saving, core-business group (政大書城, 上達書局, 聯經出版社, 今日書局, 搜主義網路書店, 三民網路書店, 誠品網路書店); a library-combined-with-department-store group (誠品書店); a large-scale operation group (金石堂書店, 新學友書局, 博客來網路書店, 新絲路網路書店); a fourth, small-department-store group (建宏書局, 三民書局, 摩爾書店, 金石堂網路書店, 華文網路書店); a fifth, service-oriented group (何嘉仁書店); and a sixth, international-library group (紀伊國屋書店).
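As a rough illustration of the strategic-group step described above, the sketch below clusters bookstores on standardized operating characteristics. The store names, feature values, and the choice of k-means with three clusters are hypothetical stand-ins, not the thesis's data or exact grouping procedure.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

np.random.seed(4)

# Hypothetical operating characteristics for a handful of bookstores:
# [capital (million NTD), employees, years in business, product categories]
stores = ["Store A", "Store B", "Store C", "Store D", "Store E", "Store F"]
features = np.array([
    [500, 120, 25, 8],
    [480, 110, 30, 7],
    [ 50,  10,  5, 3],
    [ 60,  12,  4, 3],
    [200,  45, 15, 6],
    [220,  50, 12, 5],
], dtype=float)

# Standardize each feature to unit variance so no single scale dominates the distance.
normalized = whiten(features)

# k-means clustering into 3 strategic groups.
centroids, labels = kmeans2(normalized, 3, minit="points")
for store, label in zip(stores, labels):
    print(f"{store}: strategic group {label}")
```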
120

以變異數比率法檢定指數選擇權之買賣權平價理論——馬可夫狀態轉換模型之應用 / Testing the Put-Call Parity of Index Options with the Variance Ratio Method: An Application of the Markov Regime-Switching Model

秦秀琪 Unknown Date (has links)
本研究目的在於探討Put-Call Parity(PCP)所隱含的買權、賣權與標的資產間的價格變動關係。藉由探討PCP偏差程度的動態行為,推論若PCP的偏差為隨機漫步過程,則無法達到長期穩定,隱含PCP的廣義關係無法成立;反之,若PCP的偏差具有回歸平均特性,表示長期會達到穩定狀態,則PCP的廣義關係成立。 在研究方法上本文以變異數比率法檢定指數選擇權的PCP偏差是否為隨機漫步過程,採用隱含利率和實際無風險利率的差代表PCP的偏差程度,利用馬可夫轉換模型描繪PCP偏差的動態行為,並使用Gibbs Sampling演算法說明參數的不確定性。 本文以S&P500和DAX為研究標的,並探討股利不確定性是否影響PCP廣義關係,得到下列結論: 1、 對於S&P 500指數選擇權而言,不論是以日資料或週資料估計VR,S&P 500的PCP偏差都無法提供回歸平均的證據,隱含S&P 500的PCP廣義關係無法成立。 2、 對於DAX指數選擇權而言,檢定日資料的結果發現,DAX之PCP偏差在長期時(40~50日)有明顯的回歸平均的證據;而在檢定週資料時,使用原始資料法在90%信心水準下,不論取任何lag都可拒絕虛無假設,使用標準化資料則無法提供明顯的回歸平均證據。 3、 比較S&P 500和DAX,檢定日資料與週資料的結果都發現,DAX的p-value都比S&P 500小,並且S&P 500的PCP偏差都無法提供回歸平均的證據,而DAX有明顯回歸平均現象,隱含在消除股利的不確定性後,指數選擇權PCP的廣義關係式成立之證據較強烈。 / This study examines the price relationship among calls, puts, and the underlying asset implied by put-call parity (PCP). By studying the dynamic behavior of the PCP deviation, we infer that if the deviation follows a random walk it cannot reach long-run stability, implying that the generalized PCP relationship does not hold; conversely, if the deviation is mean-reverting, it reaches a stable state in the long run and the generalized PCP relationship holds. Methodologically, we use the variance ratio method to test whether the PCP deviation of index options follows a random walk, taking the difference between the implied interest rate and the actual risk-free rate as the measure of the deviation; we describe the dynamics of the deviation with a Markov regime-switching model and use the Gibbs sampling algorithm to account for parameter uncertainty. Using the S&P 500 and the DAX as the underlying indices, and examining whether dividend uncertainty affects the generalized PCP relationship, we reach the following conclusions: 1. For S&P 500 index options, whether the variance ratio is estimated with daily or weekly data, the PCP deviation provides no evidence of mean reversion, implying that the generalized PCP relationship does not hold for the S&P 500. 2. For DAX index options, the daily-data tests show clear evidence of mean reversion over long horizons (40-50 days); with weekly data, the raw-data approach rejects the null hypothesis at the 90% confidence level for every lag, while the standardized data provide no clear evidence of mean reversion. 3. Comparing the S&P 500 and the DAX, in both the daily and the weekly tests the DAX p-values are smaller than those of the S&P 500; the S&P 500 deviation shows no mean reversion while the DAX clearly does, implying that once dividend uncertainty is removed, the evidence that the generalized PCP relationship of index options holds is stronger.
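A minimal sketch of the variance ratio statistic this study applies to the PCP deviation is given below (simulated deviation series with hypothetical parameters, not the option data used in the thesis). Under a random walk, VR(q) stays near 1 for every horizon q; mean reversion pushes it below 1 as q grows.

```python
import numpy as np

def variance_ratio(x, q):
    """VR(q) = Var(q-period increments) / (q * Var(1-period increments))."""
    x = np.asarray(x, dtype=float)
    diffs_1 = np.diff(x)
    diffs_q = x[q:] - x[:-q]
    return diffs_q.var(ddof=1) / (q * diffs_1.var(ddof=1))

rng = np.random.default_rng(5)
n = 2000

# Random-walk deviation: VR(q) should stay close to 1 at all horizons.
random_walk = np.cumsum(rng.normal(0, 1, n))

# Mean-reverting (AR(1)) deviation: VR(q) falls below 1 as q grows.
phi, mean_reverting = 0.8, np.zeros(n)
for t in range(1, n):
    mean_reverting[t] = phi * mean_reverting[t - 1] + rng.normal(0, 1)

for q in (2, 10, 40):
    print(f"q = {q:2d}: VR random walk = {variance_ratio(random_walk, q):.2f}, "
          f"VR mean-reverting = {variance_ratio(mean_reverting, q):.2f}")
```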
