51

用馬可夫鏈蒙地卡羅法估計隨機波動模型:台灣匯率市場的實證研究 / Estimating Stochastic Volatility Models via Markov Chain Monte Carlo: An Empirical Study of the Taiwan Exchange Rate Market

賴耀君, Lai,Simon Unknown Date (has links)
針對金融時序資料變異數不齊一的性質,隨機波動模型除了提供於ARCH族外的另一選擇;且由於其設定隱含波動本身亦為一個隨機波動函數,藉由設定隨時間改變且自我相關的條件變異數,使得隨機波動模型較ARCH族來得有彈性且符合實際。傳統上處理隨機波動模型的參數估計往往需要面對到複雜的多維積分,此問題可藉由貝氏分析裡的馬可夫鏈蒙地卡羅法解決。本文主要的探討標的,即在於利用馬可夫鏈蒙地卡羅法估計美元/新台幣匯率隨機波動模型參數。除原始模型之外,模型的擴充分為三部分:其一為隱含波動的二階自我回歸模型;其二則為藉由基本模型的修改,檢測匯率市場上的槓桿效果;最後,我們嘗試藉由加入scale mixture的方式以驗證金融時序資料中常見的厚尾分配。 / For financial time series with non-constant variance, the stochastic volatility (SV) model offers an alternative to the ARCH family; because its specification treats volatility itself as a random function, with time-varying and autocorrelated conditional variance, the SV model is more flexible and realistic than ARCH-type models. Traditional parameter estimation for SV models faces complicated high-dimensional integration, a problem that can be solved by the Markov chain Monte Carlo (MCMC) method of Bayesian analysis. This thesis applies MCMC to estimate the parameters of an SV model for the USD/NTD exchange rate. Beyond the basic model, three extensions are considered: a second-order autoregressive model for latent volatility; a modification of the basic model to test for leverage effects in the exchange rate market; and a scale-mixture extension to capture the heavy-tailed distributions commonly found in financial time series.
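The MCMC estimation described above can be illustrated with a minimal random-walk Metropolis sampler. This is a sketch, not the thesis's model or data: it targets only the persistence parameter of a simulated AR(1) log-volatility path under a flat prior, and all parameter values are invented for illustration.

```python
import math
import random

random.seed(0)

# --- simulate a latent AR(1) log-volatility path (a toy stand-in for the
#     stochastic volatility state equation; parameter values are made up) ---
mu, phi_true, sigma = -1.0, 0.9, 0.2
h = [mu]
for _ in range(499):
    h.append(mu + phi_true * (h[-1] - mu) + random.gauss(0.0, sigma))

def log_lik(phi):
    """Gaussian AR(1) log-likelihood in the persistence parameter phi."""
    if not -1.0 < phi < 1.0:
        return float("-inf")
    ll = 0.0
    for t in range(1, len(h)):
        resid = h[t] - mu - phi * (h[t - 1] - mu)
        ll -= 0.5 * (resid / sigma) ** 2
    return ll

# --- random-walk Metropolis under a flat prior on (-1, 1) ---
phi, ll_cur, draws = 0.5, log_lik(0.5), []
for i in range(4000):
    prop = phi + random.gauss(0.0, 0.05)
    ll_prop = log_lik(prop)
    if math.log(random.random()) < ll_prop - ll_cur:
        phi, ll_cur = prop, ll_prop
    if i >= 1000:                       # discard burn-in draws
        draws.append(phi)

phi_hat = sum(draws) / len(draws)       # posterior mean of the persistence
```

In the actual SV setting the log-volatility is latent, so each MCMC sweep would also sample the volatility path itself; the accept/reject mechanics, however, are the same as in this sketch.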
52

正負向未來思考、行為激發/抑制系統與憂鬱症狀間模式之初探 / The Exploratory Model of Positive and Negative Future Thinking, Behavioral Activation/Inhibition Systems, and Depressive Symptoms

胡肇勳 Unknown Date (has links)
本論文根據無望感的相關理論,以三種方式探討貝氏無望感量表內的正負向期待為一個概念的兩面,或是表徵兩種不同的概念。首先,本論文進行探索性與驗證性因素分析,以考驗一與二因素模式的模式適合度。再者,MacLeod與Byrne(1996)認為兩類未來思考對憂鬱症狀的影響是相互獨立運作,但可能有過於簡化的限制,故本論文提出四種的可能模式並加以檢驗。最後,根據Trew(2011)所提出的整合性模式,提出行為抑制與激發系統導致憂鬱症狀之兩種競爭模式,檢驗無望感或正負向未來思考在此模式中所扮演的中介角色,以及兩個系統之機制間有互動的可能性。主要的研究結果如下:(1)探索性與驗證性因素結果均支持貝氏無望感量表的二因素結構,並以正負向未來思考加以命名;(2)支持模式二的假設,負向未來思考為正向未來思考與憂鬱症狀間的部分中介變項,但正向未來思考並非是負向未來思考與憂鬱症狀間的部分中介變項;(3)競爭模式二具備較佳的模式適合度,支持Trew(2011)認為憂鬱症時須同時注意BAS與BIS各自不同影響途徑的觀點,亦彰顯正向未來思考的保護因子角色;(4)更重要的是,支持無望感量表中正負向期待內容應被視為兩種不同且各自存在的概念。最後並提出本論文研究限制與對憂鬱症的臨床理論與實務上之建議。 / This study investigated, in three ways, whether the positive and negative expectations assessed by the Beck Hopelessness Scale (BHS) are two sides of one construct or represent two distinct constructs. First, exploratory and confirmatory factor analyses were used to test the goodness of fit of one-factor and two-factor models. Second, MacLeod and Byrne (1996) argued that the two kinds of future thinking influence depressive symptoms independently, but this claim may be oversimplified; this study therefore proposed and tested four alternative models. Finally, based on Trew's (2011) integrated model, two competing models linking the behavioral activation system (BAS), the behavioral inhibition system (BIS), and depression were proposed to examine the mediating role of hopelessness or future thinking, as well as the possible interaction between the two systems. The main results were: (1) the two-factor structure of the BHS was supported by both exploratory and confirmatory factor analyses, with the factors labeled positive and negative future thinking; (2) Model 2 was supported: negative future thinking partially mediated the relation between positive future thinking and depressive symptoms, whereas positive future thinking did not mediate the relation between negative future thinking and depressive symptoms; (3) the second competing model showed the better goodness of fit, supporting Trew's (2011) view that the BAS and BIS influence the development of depression through distinct pathways, and highlighting the protective role of positive future thinking; (4) most importantly, the positive and negative expectations assessed by the BHS should be treated as two distinct, separately existing constructs. Finally, the limitations of this study and its implications for the theory and clinical treatment of depression are discussed.
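The partial-mediation result in (2) can be illustrated with a toy regression check in the Baron-Kenny spirit. All variable names and effect sizes below are hypothetical, not the thesis's data: on simulated data, the effect of the predictor on the outcome shrinks once the mediator is controlled for.

```python
import random

random.seed(1)

n = 2000
# hypothetical data: positive future thinking (X) lowers negative future
# thinking (M), which raises depressive symptoms (Y); X also has a direct
# negative effect on Y (all coefficients invented)
X = [random.gauss(0.0, 1.0) for _ in range(n)]
M = [-0.6 * x + random.gauss(0.0, 1.0) for x in X]
Y = [0.7 * m - 0.3 * x + random.gauss(0.0, 1.0) for x, m in zip(X, M)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

# total effect: slope of the simple regression of Y on X (about -0.72 here)
total = cov(X, Y) / cov(X, X)

# direct effect: coefficient of X in the two-predictor regression Y ~ X + M,
# solved from the 2x2 normal equations (about -0.3 here)
sxx, smm, sxm = cov(X, X), cov(M, M), cov(X, M)
sxy, smy = cov(X, Y), cov(M, Y)
direct = (smm * sxy - sxm * smy) / (sxx * smm - sxm ** 2)

# partial mediation: the direct effect is smaller in magnitude than the total
```

The thesis tests such models with structural equation modeling and fit indices; this sketch only shows the arithmetic behind the "effect shrinks when the mediator is added" signature of partial mediation.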
53

資訊檢索之學術智慧 / Research Intelligence Involving Information Retrieval

杜逸寧, Tu, Yi-Ning Unknown Date (has links)
偵測新興議題對於研究者而言是一個相當重要的問題,研究者如何在有限的時間和資源下探討同一領域內的新興議題將比解決已經成熟的議題帶來較大的貢獻和影響力。本研究將致力於協助研究者偵測新興且具有未來潛力的研究議題,並且從學術論文中探究對於研究者在做研究中有幫助的學術智慧。在搜尋可能具有研究潛力的議題時,我們假設具有研究潛力的議題將會由同一領域中較具有影響力的作者和刊物發表出,因此本研究使用貝式估計的方法去推估同一領域中相關的研究者和學術刊物對於該領域的影響力,進而藉由這些資訊可以找出未來具有潛力的新興候選議題。此外就我們所知的議題偵測文獻中對於認定一個議題是否已經趨於成熟或者是否新穎且具有研究的潛力仍然缺乏有效及普遍使用的衡量工具,因此本研究試圖去發展有效的衡量工具以評估議題就本身的發展生命週期是否仍然具有繼續投入的學術價值。 本研究從許多重要的資料庫中挑選了和資料探勘和資訊檢索相關的論文並且驗證這些在會議論文中所涵蓋的議題將會領導後續幾年期刊論文相似的議題。此外本研究也使用了一些已經存在的演算法並且結合這些演算法發展一個檢測的流程幫助研究者去偵測學術論文中的領導趨勢並發掘學術智慧。本研究使用貝式估計的方法試圖從已經發表的資訊和被引用的資訊來建構估計作者和刊物的影響力的事前機率與概似函數,並且計算出同一領域重要的作者和刊物的影響力,當這些作者和刊物的論文發表時將會相對的具有被觀察的價值,進而檢定這些新興候選議題是否會成為新興議題。而找出的重要研究議題雖然已經縮小探索的範圍,但是仍然有可能是發展成熟的議題使得具有影響力的作者和刊物都必須討論,因此需要評估議題未來潛力的指標或工具。然而目前文獻中對於評估議題成熟的方法僅著重在議題的出現頻率而忽視了議題的新穎度也是重要的指標,另一方面也有只為了找出新議題並沒有顧及這個議題是否具有未來的潛力。更重要的是單一的使用出現頻率的曲線只能在議題已經成熟之後才能確定這是一個重要的議題,使得這種方法成為落後的指標。 本研究試圖提出解決這些困境的指標進而發展成衡量新興議題潛力的方法。這些指標包含了新穎度指標、發表量指標和偵測點指標,藉由這些指標和曲線可以在新興議題的偵測中提供更多前導性的資訊幫助研究者去建構各自領域中新興議題的偵測標準。偵測點所代表的意義並非這個議題開始新興的正確日期,它代表了這個議題在自己發展的生命週期上最具有研究的潛力和價值的時間點,因此偵測點會根據後來的蓬勃發展而在時間上產生遞延的結果,這表示我們的指標可以偵測出議題生命力的延續。相對於傳統的次數分配曲線可以看出議題的崛起和衰退,本研究的發表量指標更能以生命週期的概念去看出議題在各個時間點的發展潛力。本研究希望從這些過程中所發現的學術智慧可以幫助研究者建構各自領域的議題偵測標準,節省大量人力與時間於探究新興議題。本研究所提出的新方法不僅可以解決影響因子這個指標的缺點,此外還可以使用作者和刊物的影響力去針對一個尚未累積任何索引次數的論文進行潛力偵測,解決Google 學術搜尋目前總是在論文已經被很多檢索之後才能確定論文重要性的缺點,學者總是希望能夠領先發現重要的議題或論文。然而,我們以議題為導向的檢索方法相信可以更確實的滿足研究者在搜尋議題或論文上的需求。 / This research seeks to identify emerging topics for researchers and to mine research intelligence from academic papers. It reveals the connection between the topics investigated in conference papers and in journal papers, which can spare researchers much of the time and effort of scanning the entire literature. To detect emerging research topics, the study uses a Bayesian estimation approach to estimate the impact that authors and publications have on a topic, and discovers candidate emerging topics from the combination of high-impact authors and publications. The research then develops measurement tools that assess the research potential of these candidate topics in order to single out the truly emerging ones. A large collection of data mining and information retrieval papers was selected from well-known databases, and the analysis showed that the topics covered by conference papers in one year often lead to similar topics in journal papers in subsequent years, and vice versa. The study also combines several existing algorithms into a new detection procedure that helps researchers identify leading trends and extract academic intelligence from conferences and journals. The Bayesian estimation approach, together with citation analysis, is used to construct the prior distribution and likelihood function of the authors and publications associated with a topic; because papers published by high-impact authors and publications deserve more attention than others, researchers can then assess whether these candidate topics will become emerging ones. Although the recommended topics narrow the search space, some may already be so mature that every influential author and publication discusses them, so measurement tools for future potential are still needed. Current methods, however, focus only on the frequency of subjects and ignore their novelty, which is a critical indicator beyond frequency, or they look only for new topics without considering their future potential. More importantly, relying on the publication-frequency curve alone can confirm a topic's importance only after it has matured, making it a lagging indicator. This research tackles these inadequacies by proposing a set of new indices for emerging topic detection: the novelty index (NI) and the published volume index (PVI), which are then used to determine the detection point (DP) of an emerging topic. The detection point is not the exact time at which a topic starts to emerge; rather, it marks the point in the topic's life cycle at which it has the highest research potential and value, in both novelty and popularity. Unlike the absolute-frequency method, which identifies the emerging period itself, the PVI uses accumulated relative frequency to detect the most promising timing within the life cycle, and the intersection of the indices then decides whether a new topic is worth pursuing. Readers following the algorithms presented in this thesis can judge the novelty and life span of an emerging topic in their own field. The proposed methods alleviate the limitations of the ISI impact factor and, by using the impact of a paper's authors and publication venue, can gauge a paper's potential before it has accumulated any citations, addressing the limitation of Google Scholar, which can confirm a paper's importance only after it has been retrieved many times, whereas scholars always hope to discover important topics and papers ahead of the crowd. We suggest that this topic-oriented retrieval approach better serves researchers searching for valuable topics and papers.
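The thesis's exact formulas for NI, PVI, and DP are not reproduced in the abstract, so the sketch below only illustrates the general idea with invented definitions and made-up yearly counts: a cumulative-share index (PVI-like), a recency index (NI-like), and a detection point where both cross a threshold.

```python
# yearly publication counts for a hypothetical topic (invented data)
years = list(range(2000, 2010))
counts = [0, 1, 2, 5, 9, 15, 22, 25, 26, 27]

first_year = years[next(i for i, c in enumerate(counts) if c > 0)]
total = sum(counts)

# PVI-like index: cumulative share of all publications observed so far
pvi, acc = [], 0
for c in counts:
    acc += c
    pvi.append(acc / total)

# NI-like index: recency of the topic relative to its first appearance
ni = [1.0 / (y - first_year + 1) if y >= first_year else 0.0 for y in years]

# detection-point heuristic: the first year in which the topic has accumulated
# a meaningful share of publications while still being relatively novel
dp = next(y for y, p, v in zip(years, pvi, ni) if p >= 0.2 and v >= 0.2)
# → dp == 2005 for these invented counts
```

Note how a pure frequency curve would flag this topic only around 2007-2008, when counts plateau; combining cumulative share with a novelty term moves the flag earlier in the life cycle, which is the point the abstract makes about lagging indicators.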
54

在序列相關因子模型下探討動態模型化投資組合信用風險 / Dynamic modeling portfolio credit risk under serially dependent factor model

游智惇, Yu, Chih Tun Unknown Date (has links)
獨立因子模型廣泛的應用在信用風險領域,此模型可用來估計經濟資本與投資組合的損失率分配。然而獨立因子模型假設因子獨立地服從同分配,因而可能會得到估計不精確的違約機率與資產相關係數。因此我們在本論文中提出序列相關因子模型來改進獨立因子模型的缺失,同時可以捕捉違約率的動態行為與授信戶間相關性。我們也分別從古典與貝氏的角度下估計序列相關因子模型。首先,我們在序列相關因子模型下利用貝氏的方法應用馬可夫鍊蒙地卡羅技巧估計違約機率與資產相關係數,使用標準普爾違約資料進行外樣本資料預測,能夠證明序列相關因子模型是比獨立因子模型合理。第二,蒙地卡羅期望最大法與蒙地卡羅最大概似法這兩種估計方法也使用在本篇論文。從模擬結果發現,若違約資料具有較大的序列相關與資產相關特性,蒙地卡羅最大概似法能夠配適的比蒙地卡羅期望最大法好。 / The independent factor model has been widely used in the credit risk field to estimate economic capital allocations and the loss rate distribution of a credit portfolio. However, this model assumes an independent and identically distributed common factor, which may produce inaccurate estimates of default probabilities and asset correlation. In this thesis, we propose a serially dependent factor model (SDFM) to remedy this shortcoming; the model captures both the dynamic behavior of default risk and the dependence among individual obligors. We address the estimation of the SDFM from both frequentist and Bayesian points of view. First, we consider the Bayesian approach, applying Markov chain Monte Carlo (MCMC) techniques to estimate the default probability and asset correlation under the SDFM. Out-of-sample forecasts for S&P default data provide strong evidence that the SDFM is more reliable than the independent factor model. Second, we use two frequentist estimation methods to estimate the default probability and asset correlation under the SDFM: a Monte Carlo Expectation Maximization (MCEM) method combined with a Gibbs sampler and an acceptance method, and a Monte Carlo maximum likelihood (MCML) method with importance sampling techniques. Simulation results show that when the default data exhibit strong serial correlation and asset correlation, MCML fits better than MCEM.
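The motivation for a serially dependent factor can be shown by simulation. The sketch below uses the standard one-factor Vasicek-type conditional default probability rather than the thesis's exact SDFM specification, with invented parameter values: an AR(1) common factor produces serially correlated default rates, while an i.i.d. factor does not.

```python
import math
import random
from statistics import NormalDist

random.seed(2)
nd = NormalDist()

# unconditional PD, asset correlation, factor AR(1) coefficient (all invented)
pd_uncond, rho, phi = 0.02, 0.2, 0.8
c = nd.inv_cdf(pd_uncond)          # default threshold in the latent-variable model

def default_rates(serial):
    """Yearly conditional default rates under a one-factor Vasicek-type model."""
    rates, z = [], random.gauss(0.0, 1.0)
    for _ in range(400):
        if serial:   # serially dependent factor: stationary AR(1)
            z = phi * z + math.sqrt(1.0 - phi ** 2) * random.gauss(0.0, 1.0)
        else:        # independent factor: a fresh draw each period
            z = random.gauss(0.0, 1.0)
        # conditional default probability given the common factor
        rates.append(nd.cdf((c - math.sqrt(rho) * z) / math.sqrt(1.0 - rho)))
    return rates

def lag1_corr(x):
    """Lag-one autocorrelation of a series."""
    m = sum(x) / len(x)
    num = sum((a - m) * (b - m) for a, b in zip(x, x[1:]))
    return num / sum((a - m) ** 2 for a in x)

r_serial = lag1_corr(default_rates(True))    # clearly positive
r_iid = lag1_corr(default_rates(False))      # close to zero
```

An independent-factor model fitted to the serially correlated series would misread the persistence as extra cross-sectional correlation, which is the estimation bias the abstract describes.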
55

排列檢定法應用於空間資料之比較 / Permutation test on spatial comparison

王信忠, Wang, Hsin-Chung Unknown Date (has links)
本論文主要是探討在二維度空間上二母體分佈是否一致。我們利用排列 (permutation)檢定方法來做比較, 並藉由費雪(Fisher)正確檢定方法的想法而提出重標記 (relabel)排列檢定方法或稱為費雪排列檢定法。 我們透過可交換性的特質證明它是正確 (exact) 的並且比 Syrjala (1996)所建議的排列檢定方法有更高的檢定力 (power)。 本論文另提出二個空間模型: spatial multinomial-relative-log-normal 模型 與 spatial Poisson-relative-log-normal 模型 來配適一般在漁業中常有的右斜長尾次數分佈並包含很多0 的空間資料。另外一般物種可能因天性或自然環境因素像食物、溫度等影響而有群聚行為發生, 這二個模型亦可描述出空間資料的群聚現象以做適當的推論。 / This thesis proposes the relabel (Fisher's) permutation test, inspired by Fisher's exact test, to compare the distributions of two (fishery) data sets located on a two-dimensional lattice. We show that the permutation test given by Syrjala (1996) is not exact, while our relabel permutation test is exact and, in addition, more powerful. This thesis also studies two spatial models: the spatial multinomial-relative-log-normal model and the spatial Poisson-relative-log-normal model. Both models not only exhibit the skewness with a long right-hand tail and the high proportion of zero catches that usually appear in fishery data, but are also able to describe various types of aggregative behavior.
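A generic Monte Carlo permutation test conveys the basic idea. This sketch uses the cell-wise label-swap scheme in the spirit of Syrjala's test, not the thesis's relabel procedure, and the grid counts are made up: under the null that both surveys share one spatial distribution, the two survey labels are exchangeable within each cell.

```python
import random

random.seed(3)

# hypothetical catch counts for two surveys on a 3x3 lattice (row-major order)
grid_a = [5, 8, 2, 7, 9, 4, 1, 3, 6]
grid_b = [4, 9, 3, 6, 8, 5, 2, 2, 7]

def statistic(a, b):
    """Squared difference between the two surveys' cell-wise shares."""
    ta, tb = sum(a), sum(b)
    return sum((x / ta - y / tb) ** 2 for x, y in zip(a, b))

obs = statistic(grid_a, grid_b)

# Monte Carlo permutation: swap the pair of counts in each cell with
# probability 1/2, and count how often the statistic is at least as extreme
hits, n_perm = 0, 2000
for _ in range(n_perm):
    pa, pb = [], []
    for x, y in zip(grid_a, grid_b):
        if random.random() < 0.5:
            x, y = y, x
        pa.append(x)
        pb.append(y)
    if statistic(pa, pb) >= obs:
        hits += 1

p_value = (hits + 1) / (n_perm + 1)   # add-one correction keeps p_value > 0
```

An exact version would enumerate all 2^9 swap patterns instead of sampling them; the thesis's contribution is a relabeling scheme whose null distribution is exact by the exchangeability argument.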
56

空間相關存活資料之貝氏半參數比例勝算模式 / Bayesian semiparametric proportional odds models for spatially correlated survival data

張凱嵐, Chang, Kai lan Unknown Date (has links)
近來地理資訊系統(GIS)之資料庫受到不同領域的統計學家廣泛的研究,以期建立及分析可描述空間聚集效應及變異之模型,而描述空間相關存活資料之統計模式為公共衛生及流行病學上新興的研究議題。本文擬建立多維度半參數的貝氏階層模型,並結合空間及非空間隨機效應以描述存活資料中的空間變異。此模式將利用多變量條件自回歸(MCAR)模型以檢驗在不同地理區域中是否存有空間聚集效應。而基準風險函數之生成為分析貝氏半參數階層模型的重要步驟,本研究將利用混合Polya樹之方式生成基準風險函數。美國國家癌症研究院之「流行病監測及最終結果」(Surveillance Epidemiology and End Results, SEER)資料庫為目前美國最完整的癌症病人長期追蹤資料,包含癌症病人存活狀況、多重癌症史、居住地區及其他分析所需之個人資料。本文將自此資料庫擷取美國愛荷華州之癌症病人資料為例作實證分析,並以貝氏統計分析中常用之模型比較標準如條件預測指標(CPO)、平均對數擬邊際概似函數值(ALMPL)、離差訊息準則(DIC)分別測試其可靠度。 / Databases in geographic information systems (GIS) have drawn attention from statisticians in many fields who seek to develop and analyze models accounting for spatial clustering and variation, and modeling spatially correlated survival data is an emerging topic in public health and epidemiologic studies. In this article, we develop Bayesian multivariate semiparametric hierarchical models that incorporate both spatially correlated and uncorrelated frailties to address spatial variation in survival patterns, and we use the multivariate conditionally autoregressive (MCAR) model to detect whether spatial clusters exist across different areas. The baseline hazard function is modeled semiparametrically using mixtures of finite Polya trees. The SEER (Surveillance, Epidemiology, and End Results) database of the National Cancer Institute (NCI) provides comprehensive cancer data, including patients' survival status, regional information, and other demographic information. We implement our Bayesian hierarchical spatial models on Iowa cancer data extracted from the SEER database, and illustrate how to compute the conditional predictive ordinate (CPO), the average log-marginal pseudo-likelihood (ALMPL), and the deviance information criterion (DIC), which are Bayesian criteria for model checking and comparison among competing models.
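The CPO computation mentioned at the end follows a standard Monte Carlo identity: CPO_i is the harmonic mean of observation i's likelihood over posterior draws, and summing the log CPOs gives the LPML (averaging gives the ALMPL). A toy normal-model version, with fake data and fake "posterior draws" standing in for real MCMC output:

```python
import math
import random

random.seed(4)

# toy data from a normal model, plus fake posterior draws of (mu, sigma)
data = [random.gauss(1.0, 1.0) for _ in range(30)]
draws = [(random.gauss(1.0, 0.1), abs(random.gauss(1.0, 0.05)))
         for _ in range(500)]

def lik(y, mu, sigma):
    """Normal density of one observation given one posterior draw."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# CPO_i = harmonic mean of the i-th observation's likelihood over the draws;
# LPML = sum of log(CPO_i); larger values indicate better predictive fit
lpml = 0.0
for y in data:
    inv = [1.0 / lik(y, m, s) for m, s in draws]
    lpml += math.log(len(inv) / sum(inv))

almpl = lpml / len(data)   # average log-marginal pseudo-likelihood
```

In a real survival application the per-draw likelihood term would be the censored-data likelihood of each subject, but the harmonic-mean bookkeeping is identical.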
57

含存活分率之貝氏迴歸模式 / Bayesian Regression Models with a Survival Fraction

李涵君 Unknown Date (has links)
當母體中有部份對象因被治癒或免疫而不會失敗時,需考慮這群對象所佔的比率,即存活分率。本文主要在探討如何以貝氏方法對含存活分率之治癒率模式進行分析,並特別針對兩種含存活分率的迴歸模式,分別是Weibull迴歸模式以及對數邏輯斯迴歸模式,導出概似函數與各參數之完全條件後驗分配及其性質。由於聯合後驗分配相當複雜,各參數之邊際後驗分配之解析形式很難表達出。所以,我們採用了馬可夫鏈蒙地卡羅方法(MCMC)中的Gibbs抽樣法及Metropolis法,模擬產生參數值,以進行貝氏分析。實證部份,我們分析了黑色素皮膚癌的資料,這是由美國Eastern Cooperative Oncology Group所進行的第三階段臨床試驗研究。有關模式選取的部份,我們先分別求出各對象在每個模式之下的條件預測指標(CPO),再據以算出各模式的對數擬邊際概似函數值(LPML),以比較各模式之適合性。 / When part of the population never fails because its members have been cured or are immune, we must account for the proportion of this group, the so-called survival fraction. This thesis discusses how to analyze cure rate models containing a survival fraction with Bayesian methods, focusing on two such regression models: one based on the Weibull regression model and the other on the log-logistic regression model. We derive the likelihood functions and the full conditional posterior distributions of the parameters under both models. Since the joint posterior distributions are complicated and the marginal posterior distributions have no closed form, we use Gibbs sampling and Metropolis sampling, two Markov chain Monte Carlo (MCMC) methods, to simulate parameter values for the Bayesian analysis. We illustrate the analysis with data from a phase III melanoma clinical trial conducted by the Eastern Cooperative Oncology Group. For model selection, we compute the conditional predictive ordinate (CPO) for each subject under each model, and then compare the models' goodness of fit via the log pseudo-marginal likelihood (LPML).
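The Weibull mixture cure likelihood can be sketched directly: cured subjects never fail, so a censored observation mixes the cure probability with Weibull survival, while an observed failure contributes density only through the susceptible fraction. Data and parameter values below are invented, and the thesis's MCMC machinery is replaced by a crude grid search over the cure fraction.

```python
import math

def log_lik(times, events, pi, k, lam):
    """Log-likelihood of a Weibull mixture cure model.

    pi  : cure fraction (survival fraction)
    k   : Weibull shape
    lam : Weibull scale
    """
    ll = 0.0
    for t, d in zip(times, events):
        sw = math.exp(-((t / lam) ** k))               # Weibull survival
        if d:   # observed failure: only susceptible subjects contribute density
            fw = (k / lam) * (t / lam) ** (k - 1) * sw
            ll += math.log((1.0 - pi) * fw)
        else:   # censored: either cured, or susceptible and still surviving
            ll += math.log(pi + (1.0 - pi) * sw)
    return ll

# invented data: four observed failures and four subjects censored at t = 5
times = [0.5, 1.2, 2.0, 3.5, 5.0, 5.0, 5.0, 5.0]
events = [1, 1, 1, 1, 0, 0, 0, 0]

# crude grid search over the cure fraction with shape and scale held fixed
best_pi = max((p / 20 for p in range(1, 20)),
              key=lambda p: log_lik(times, events, p, 1.2, 2.5))
```

In the thesis this likelihood is combined with priors and sampled via Gibbs and Metropolis steps; the grid search here only shows that the likelihood is informative about the survival fraction.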
