  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
91

資訊檢索之學術智慧 / Research Intelligence Involving Information Retrieval

杜逸寧, Tu, Yi-Ning Unknown Date (has links)
偵測新興議題對於研究者而言是一個相當重要的問題,研究者如何在有限的時間和資源下探討同一領域內的新興議題將比解決已經成熟的議題帶來較大的貢獻和影響力。本研究將致力於協助研究者偵測新興且具有未來潛力的研究議題,並且從學術論文中探究對於研究者在做研究中有幫助的學術智慧。在搜尋可能具有研究潛力的議題時,我們假設具有研究潛力的議題將會由同一領域中較具有影響力的作者和刊物發表出,因此本研究使用貝式估計的方法去推估同一領域中相關的研究者和學術刊物對於該領域的影響力,進而藉由這些資訊可以找出未來具有潛力的新興候選議題。此外就我們所知的議題偵測文獻中對於認定一個議題是否已經趨於成熟或者是否新穎且具有研究的潛力仍然缺乏有效及普遍使用的衡量工具,因此本研究試圖去發展有效的衡量工具以評估議題就本身的發展生命週期是否仍然具有繼續投入的學術價值。 本研究從許多重要的資料庫中挑選了和資料探勘和資訊檢索相關的論文並且驗證這些在會議論文中所涵蓋的議題將會領導後續幾年期刊論文相似的議題。此外本研究也使用了一些已經存在的演算法並且結合這些演算法發展一個檢測的流程幫助研究者去偵測學術論文中的領導趨勢並發掘學術智慧。本研究使用貝式估計的方法試圖從已經發表的資訊和被引用的資訊來建構估計作者和刊物的影響力的事前機率與概似函數,並且計算出同一領域重要的作者和刊物的影響力,當這些作者和刊物的論文發表時將會相對的具有被觀察的價值,進而檢定這些新興候選議題是否會成為新興議題。而找出的重要研究議題雖然已經縮小探索的範圍,但是仍然有可能是發展成熟的議題使得具有影響力的作者和刊物都必須討論,因此需要評估議題未來潛力的指標或工具。然而目前文獻中對於評估議題成熟的方法僅著重在議題的出現頻率而忽視了議題的新穎度也是重要的指標,另一方面也有只為了找出新議題並沒有顧及這個議題是否具有未來的潛力。更重要的是單一的使用出現頻率的曲線只能在議題已經成熟之後才能確定這是一個重要的議題,使得這種方法成為落後的指標。 本研究試圖提出解決這些困境的指標進而發展成衡量新興議題潛力的方法。這些指標包含了新穎度指標、發表量指標和偵測點指標,藉由這些指標和曲線可以在新興議題的偵測中提供更多前導性的資訊幫助研究者去建構各自領域中新興議題的偵測標準。偵測點所代表的意義並非這個議題開始新興的正確日期,它代表了這個議題在自己發展的生命週期上最具有研究的潛力和價值的時間點,因此偵測點會根據後來的蓬勃發展而在時間上產生遞延的結果,這表示我們的指標可以偵測出議題生命力的延續。相對於傳統的次數分配曲線可以看出議題的崛起和衰退,本研究的發表量指標更能以生命週期的概念去看出議題在各個時間點的發展潛力。本研究希望從這些過程中所發現的學術智慧可以幫助研究者建構各自領域的議題偵測標準,節省大量人力與時間於探究新興議題。本研究所提出的新方法不僅可以解決影響因子這個指標的缺點,此外還可以使用作者和刊物的影響力去針對一個尚未累積任何索引次數的論文進行潛力偵測,解決Google 學術搜尋目前總是在論文已經被很多檢索之後才能確定論文重要性的缺點,學者總是希望能夠領先發現重要的議題或論文。然而,我們以議題為導向的檢索方法相信可以更確實的滿足研究者在搜尋議題或論文上的需求。 / This research seeks to identify emerging topics for researchers and to extract research intelligence from academic papers. It aims to reveal the connection between the topics investigated by conference papers and those in journal papers, which can save researchers much of the time and effort of scanning all academic papers. To detect emerging research topics, the study uses a Bayesian estimation approach to estimate the impact that authors and publications may have on a topic, and discovers candidate emerging topics from the combination of influential authors and publications.
Finally, the research also develops measurement tools that assess the research potential of these topics in order to find the emerging ones. A large set of papers in data mining and information retrieval was selected from well-known databases, and it was shown that the topics covered by conference papers in one year often lead to similar topics in journal papers in subsequent years, and vice versa. The study also combines several existing algorithms into a new detection procedure that helps researchers detect leading trends and extract academic intelligence from conferences and journals. Bayesian estimation and citation analysis are used to construct the prior distribution and the likelihood function of the authors and publications associated with a topic. Because topics published by influential authors and publications attract more attention and are more valuable than others, researchers can then assess the potential of these candidate emerging topics. Although the recommended topics narrow the search space, they may still be so popular that all of the influential authors and publications discuss them, so measurement tools or indices are needed. Current methods, however, focus only on the frequency of subjects and ignore their novelty, which is critical and beyond the reach of frequency studies, or they focus on one aspect without considering the potential of the topics. Moreover, methods that rely solely on the published-frequency curve can confirm a topic's importance only after it has matured, making such a curve a lagging indicator. This research tackles these inadequacies by proposing a set of new indices for emerging-topic detection: the novelty index (NI) and the published volume index (PVI). These indices are then used to determine the detection point (DP) of emerging topics.
The detection point (DP) is not the exact time at which a topic starts to emerge; rather, it marks the point in the topic's life cycle at which it has the highest research potential, in both novelty and hotness. Unlike absolute-frequency methods, which identify the exact emerging period of a topic, the PVI uses accumulated relative frequency to detect the point of greatest research potential within the life cycle. The intersection of the index curves at the detection point then decides the worthiness of a new topic. Readers following the algorithms presented in this thesis will be able to judge the novelty and life span of an emerging topic in their own field. The proposed methods mitigate the limitations of the ISI impact factor. In addition, by using the impact of a topic's authors and publications to measure the impact of a paper before it has accumulated any citations, they overcome a limitation of Google Scholar, which can confirm a paper's importance only after it has been cited many times. We suggest that the topic-oriented perspective of our methods can genuinely help researchers find valuable topics and papers.
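The NI/PVI/DP machinery is only described verbally in this abstract. As a rough numerical illustration — the publication counts, the novelty proxy `1 - PVI`, and the intersection rule below are our own assumptions, not the thesis's exact definitions — a cumulative-relative-frequency PVI and its detection point can be sketched as:

```python
import numpy as np

# Yearly publication counts for one hypothetical topic.
counts = np.array([1, 2, 4, 9, 15, 22, 18, 12])

# Published Volume Index as cumulative relative frequency: the share of
# the topic's total output published up to each year.
pvi = np.cumsum(counts) / counts.sum()

# A simple novelty proxy: high early in the life cycle, decaying as the
# topic accumulates publications (our assumption, for illustration only).
ni = 1.0 - pvi

# Detection point: the year where the two curves intersect, i.e. the
# topic is still novel but has gathered enough volume to matter.
dp = int(np.argmin(np.abs(ni - pvi)))
print(dp, pvi[dp])
```

With these counts the curves cross in year index 4, when roughly 37% of the topic's eventual volume has appeared; a lagging raw-frequency curve would flag the topic only at its year-5 peak.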
92

在序列相關因子模型下探討動態模型化投資組合信用風險 / Dynamic modeling portfolio credit risk under serially dependent factor model

游智惇, Yu, Chih Tun Unknown Date (has links)
獨立因子模型廣泛的應用在信用風險領域,此模型可用來估計經濟資本與投資組合的損失率分配。然而獨立因子模型假設因子獨立地服從同分配,因而可能會得到估計不精確的違約機率與資產相關係數。因此我們在本論文中提出序列相關因子模型來改進獨立因子模型的缺失,同時可以捕捉違約率的動態行為與授信戶間相關性。我們也分別從古典與貝氏的角度下估計序列相關因子模型。首先,我們在序列相關因子模型下利用貝氏的方法應用馬可夫鍊蒙地卡羅技巧估計違約機率與資產相關係數,使用標準普爾違約資料進行外樣本資料預測,能夠證明序列相關因子模型是比獨立因子模型合理。第二,蒙地卡羅期望最大法與蒙地卡羅最大概似法這兩種估計方法也使用在本篇論文。從模擬結果發現,若違約資料具有較大的序列相關與資產相關特性,蒙地卡羅最大概似法能夠配適的比蒙地卡羅期望最大法好。 / The independent factor model has been widely used in the credit risk field to estimate economic capital allocations and the loss rate distribution of a credit portfolio. However, this model assumes an independent and identically distributed common factor, which may produce inaccurate estimates of default probabilities and asset correlation. In this thesis, we propose a serially dependent factor model (SDFM) to remedy this deficiency. The model captures both the dynamic behavior of default risk and the dependence among individual obligors. We also address the estimation of the SDFM from both frequentist and Bayesian points of view. First, we consider a Bayesian approach, applying Markov chain Monte Carlo (MCMC) techniques to estimate the default probability and asset correlation under the SDFM. Out-of-sample forecasts for S&P default data provide strong evidence that the SDFM is more reliable than the independent factor model. Second, we use two frequentist estimation methods to estimate the default probability and asset correlation under the SDFM: the Monte Carlo expectation maximization (MCEM) method, combined with a Gibbs sampler and an acceptance method, and the Monte Carlo maximum likelihood (MCML) method with importance sampling techniques.
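The abstract does not give the model equations. The following sketch simulates a one-factor default model whose common factor follows an AR(1) process, which is one plausible reading of a "serially dependent factor model"; all parameter values are our assumptions, not the thesis's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-factor credit model with a serially dependent (AR(1)) common factor.
T, n = 20, 1000          # years, obligors per year
rho_asset = 0.2          # asset correlation
phi = 0.7                # AR(1) coefficient of the common factor
thresh = -2.0537         # default threshold ~ Phi^{-1}(0.02), i.e. 2% PD

# Simulate the common factor with stationary unit variance.
f = np.empty(T)
f[0] = rng.standard_normal()
for t in range(1, T):
    f[t] = phi * f[t - 1] + np.sqrt(1 - phi**2) * rng.standard_normal()

# Each obligor's latent asset value loads on the year's common factor;
# a default occurs when the latent value falls below the threshold.
defaults = np.empty(T)
for t in range(T):
    eps = rng.standard_normal(n)
    x = np.sqrt(rho_asset) * f[t] + np.sqrt(1 - rho_asset) * eps
    defaults[t] = np.mean(x < thresh)

print(defaults.mean())   # yearly default rates average near the 2% PD
```

Because `phi > 0`, good and bad years cluster in runs, which is exactly the serial dependence in default rates that an i.i.d.-factor model cannot reproduce.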
93

近世儒学における<日用>の思想化 -日常の営みと学問をめぐる言説空間の生成と展開-

李, 芝映 24 March 2014 (has links)
京都大学 / 0048 / 新制・課程博士 / 博士(教育学) / 甲第18019号 / 教博第160号 / 新制||教||146(附属図書館) / 30877 / 京都大学大学院教育学研究科教育科学専攻 / (主査)教授 駒込 武, 教授 西平 直, 准教授 山名 淳, 教授 辻本 雅史(京都大学名誉教授) / 学位規則第4条第1項該当 / Doctor of Philosophy (Education) / Kyoto University / DGAM
94

利用混合模型估計風險值的探討

阮建豐 Unknown Date (has links)
風險值大多是在假設資產報酬為常態分配下計算而得的,但是這個假設與實際的資產報酬分配不一致,因為很多研究者都發現實際的資產報酬分配都有厚尾的現象,也就是極端事件的發生機率遠比常態假設要來的高,因此利用常態假設來計算風險值對於真實損失的衡量不是很恰當。 針對這個問題,本論文以歷史模擬法、變異數-共變異數法、混合常態模型來模擬報酬率的分配,並依給定的信賴水準估算出風險值,其中混合常態模型的參數是利用準貝式最大概似估計法及EM演算法來估計;然後利用三種風險值的評量方法:回溯測試、前向測試與二項檢定,來評判三種估算風險值方法的優劣。 經由實證結果發現: 1.報酬率分配在左尾臨界機率1%有較明顯厚尾的現象。 2.利用混合常態分配來模擬報酬率分配會比另外兩種方法更能準確的捕捉到左尾臨界機率1%的厚尾。 3.混合常態模型的峰態係數值接近於真實報酬率分配的峰態係數值,因此我們可以確認混合常態模型可以捕捉高峰的現象。 關鍵字:風險值、厚尾、歷史模擬法、變異數-共變異數法、混合常態模型、準貝式最大概似估計法、EM演算法、回溯測試、前向測試、高峰 / Initially, Value at Risk (VaR) is calculated under the assumption that the underlying asset return is normally distributed, but this assumption is often inconsistent with the actual distribution of asset returns. Many researchers have found that actual return distributions have fat tails, i.e. extreme events occur far more often than the normal assumption implies, so VaR computed under normality measures true losses poorly. This paper discusses three methods for simulating the return distribution and estimating VaR at a given confidence level: the historical simulation method, the variance-covariance method, and the mixture normal model. The parameters of the mixture normal model are estimated with both the EM algorithm and quasi-Bayesian maximum likelihood estimation. Finally, three VaR evaluation methods, the back test, the forward test, and the binomial test, are used to compare the loss coverage of the three VaR approaches. We find the following results: 1. At the 1% left-tail critical probability, the return distribution shows a significant fat-tail character. 2. The mixture normal distribution captures the 1% left-tail fatness more precisely than the other two methods. 3. The kurtosis of the fitted mixture normal model is close to the actual kurtosis, which means the mixture normal distribution can capture the leptokurtosis phenomenon. Key words: Value at Risk, VaR, fat tail, historical simulation method, variance-covariance method, mixture normal distribution, quasi-Bayesian MLE, EM algorithm, back test, forward test, leptokurtosis
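As a sketch of the mixture-normal approach on simulated returns — plain EM only, with assumed regime parameters; the thesis's quasi-Bayesian MLE is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated daily returns from a fat-tailed process: a two-component
# normal mixture (90% calm regime, 10% turbulent regime; values assumed).
n = 5000
calm = rng.normal(0.0005, 0.01, size=n)
wild = rng.normal(-0.001, 0.03, size=n)
r = np.where(rng.random(n) < 0.9, calm, wild)

# EM algorithm for a two-component normal mixture.
w = np.array([0.5, 0.5])
mu = np.array([0.0, 0.0])
sd = np.array([0.01, 0.02])
for _ in range(200):
    # E-step: posterior responsibility of each component for each point.
    dens = w * np.exp(-0.5 * ((r[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted moment updates.
    nk = resp.sum(axis=0)
    w = nk / n
    mu = (resp * r[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (r[:, None] - mu) ** 2).sum(axis=0) / nk)

# 1% VaR by simulating from the fitted mixture.
m = 100_000
comp = rng.random(m) < w[0]
sim = np.where(comp, rng.normal(mu[0], sd[0], m), rng.normal(mu[1], sd[1], m))
var_99 = -np.quantile(sim, 0.01)
print(w.round(3), var_99)
```

The fitted 1% VaR is driven almost entirely by the low-weight, high-volatility component, which is precisely the left-tail fatness a single normal fit understates.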
95

漲跌停板限制下之股票報酬機率分配

葉宜欣, Yeh, Yi-Shian Unknown Date (has links)
股票市場的報酬率相對於金融市場是非常重要的,因為其背後的真實機率分配對各種資產定價及選擇權的評價模型都有決定性的影響。本文考慮台灣股票市場具有漲跌停板的限制來驗證實證中股票報酬機率分配的「厚尾」的現象,希望透過我們的研究能對財務理論在國內金融市場的應用有更進一步的了解。我們選定了常態分配、對數常態分配及一般化第二種貝它分配 (GB2)來當作是台灣股票報酬率的真實機率分配,以動差法比較再以概似比檢定法(LR test)選出一表現最好的機率分配。由選取的25支國內股票中發現一般化第二種貝它分配 (GB2)可以解釋偏態和峰態對報酬率的影響並且也是概似比檢定法所選出的最適報酬率分配,由此可知一般化第二種貝它分配 (GB2)較為適合作為台灣股票報酬的真實機率分配。 / Stock-market returns are of great importance to financial markets, because their true probability distribution decisively affects asset-pricing and option-valuation models. This thesis examines the empirical fat-tail phenomenon of stock-return distributions under the daily price-limit rules of the Taiwan stock market, in the hope of furthering the application of financial theory to the domestic market. We consider the normal distribution, the log-normal distribution, and the generalized beta distribution of the second kind (GB2) as candidates for the true distribution of Taiwan stock returns, compare them by the method of moments, and select the best-performing one with the likelihood-ratio (LR) test. For the 25 domestic stocks examined, the GB2 distribution accounts for the effects of skewness and kurtosis on returns and is also the distribution chosen by the LR test, so the GB2 is the more suitable model for the true distribution of Taiwan stock returns.
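The GB2 fit itself is beyond a short sketch, but the fat-tail diagnosis this record describes — comparing a normal fit against a fat-tailed alternative by kurtosis and log-likelihood — can be illustrated as follows. We substitute a Student-t for the GB2 purely for brevity, and the returns are simulated, not Taiwan market data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Fat-tailed simulated "returns" (Student-t, df=4) standing in for
# limit-constrained stock returns.
r = stats.t.rvs(df=4, scale=0.015, size=3000, random_state=rng)

# Sample excess kurtosis: > 0 signals tails fatter than the normal's.
print(stats.kurtosis(r))

# Maximum-likelihood fits and their log-likelihoods; a higher value
# means the candidate distribution explains the data better.
mu, sd = stats.norm.fit(r)
ll_norm = stats.norm.logpdf(r, mu, sd).sum()
df_, loc, scale = stats.t.fit(r)
ll_t = stats.t.logpdf(r, df_, loc, scale).sum()
print(ll_norm, ll_t)   # the fat-tailed model should fit better
```

The same comparison logic, with GB2 log-likelihoods in place of the Student-t's, underlies the LR-test selection reported in the abstract.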
96

排列檢定法應用於空間資料之比較 / Permutation test on spatial comparison

王信忠, Wang, Hsin-Chung Unknown Date (has links)
本論文主要是探討在二維度空間上二母體分佈是否一致。我們利用排列 (permutation)檢定方法來做比較, 並藉由費雪(Fisher)正確檢定方法的想法而提出重標記 (relabel)排列檢定方法或稱為費雪排列檢定法。 我們透過可交換性的特質證明它是正確 (exact) 的並且比 Syrjala (1996)所建議的排列檢定方法有更高的檢定力 (power)。 本論文另提出二個空間模型: spatial multinomial-relative-log-normal 模型 與 spatial Poisson-relative-log-normal 模型 來配適一般在漁業中常有的右斜長尾次數分佈並包含很多0 的空間資料。另外一般物種可能因天性或自然環境因素像食物、溫度等影響而有群聚行為發生, 這二個模型亦可描述出空間資料的群聚現象以做適當的推論。 / This thesis proposes the relabel (Fisher's) permutation test, inspired by Fisher's exact test, to compare the distributions of two (fishery) data sets located on a two-dimensional lattice. We show that the permutation test given by Syrjala (1996) is not exact, whereas our relabel permutation test is exact and, additionally, more powerful. This thesis also studies two spatial models: the spatial multinomial-relative-log-normal model and the spatial Poisson-relative-log-normal model. Both models not only exhibit the skewness with a long right-hand tail and the high proportion of zero catches that usually appear in fishery data, but can also describe various types of aggregative behavior.
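A minimal sketch of a relabelling permutation test on gridded counts; the per-cell swap scheme and the squared-difference statistic below are our simplifications, and the thesis's exact relabelling scheme and Syrjala-type statistic may differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Catch counts from two surveys on the same 5x5 grid (hypothetical data;
# both drawn from the same distribution, so the null is true here).
a = rng.poisson(5.0, size=(5, 5))
b = rng.poisson(5.0, size=(5, 5))

def statistic(x, y):
    # Squared difference between the two normalized spatial densities.
    px, py = x / x.sum(), y / y.sum()
    return ((px - py) ** 2).sum()

obs = statistic(a, b)

# Relabel permutation test: within each cell, swap the two surveys'
# labels at random -- valid when the pair of surveys is exchangeable.
n_perm, count = 2000, 0
for _ in range(n_perm):
    swap = rng.random(a.shape) < 0.5
    pa = np.where(swap, b, a)
    pb = np.where(swap, a, b)
    if statistic(pa, pb) >= obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(p_value)
```

Because every permuted data set is generated by relabelling alone, the test's size is controlled exactly under exchangeability, which is the property the thesis proves for its Fisher-style variant.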
97

空間相關存活資料之貝氏半參數比例勝算模式 / Bayesian semiparametric proportional odds models for spatially correlated survival data

張凱嵐, Chang, Kai lan Unknown Date (has links)
近來地理資訊系統(GIS)之資料庫受到不同領域的統計學家廣泛的研究,以期建立及分析可描述空間聚集效應及變異之模型,而描述空間相關存活資料之統計模式為公共衛生及流行病學上新興的研究議題。本文擬建立多維度半參數的貝氏階層模型,並結合空間及非空間隨機效應以描述存活資料中的空間變異。此模式將利用多變量條件自回歸(MCAR)模型以檢驗在不同地理區域中是否存有空間聚集效應。而基準風險函數之生成為分析貝氏半參數階層模型的重要步驟,本研究將利用混合Polya樹之方式生成基準風險函數。美國國家癌症研究院之「流行病監測及最終結果」(Surveillance Epidemiology and End Results, SEER)資料庫為目前美國最完整的癌症病人長期追蹤資料,包含癌症病人存活狀況、多重癌症史、居住地區及其他分析所需之個人資料。本文將自此資料庫擷取美國愛荷華州之癌症病人資料為例作實證分析,並以貝氏統計分析中常用之模型比較標準如條件預測指標(CPO)、平均對數擬邊際概似函數值(ALMPL)、離差訊息準則(DIC)分別測試其可靠度。 / Databases from Geographic Information Systems (GIS) have gained attention among statisticians in different fields for developing and analyzing models that account for spatial clustering and variation, and there is an emerging interest in modeling spatially correlated survival data in public health and epidemiologic studies. In this article, we develop Bayesian multivariate semiparametric hierarchical models that incorporate both spatially correlated and uncorrelated frailties to answer the question of spatial variation in survival patterns, and we use a multivariate conditionally autoregressive (MCAR) model to detect whether spatial clustering exists across different areas. The baseline hazard function is modeled semiparametrically using mixtures of finite Polya trees. The SEER (Surveillance Epidemiology and End Results) database of the National Cancer Institute (NCI) provides comprehensive cancer data, including patients' survival times, regional information, and other demographic information. We implement our Bayesian hierarchical spatial models on Iowa cancer data extracted from the SEER database, and illustrate how to compute the conditional predictive ordinate (CPO), the average log-marginal pseudo-likelihood (ALMPL), and the deviance information criterion (DIC), which are Bayesian criteria for model checking and comparison among competing models.
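The CPO and LPML criteria mentioned here have a simple Monte Carlo form: CPO_i is the harmonic mean, over posterior draws, of observation i's likelihood, and the LPML is the sum of log CPOs. A toy sketch with fabricated posterior draws follows — a plain normal model stands in for the spatial survival model, purely to show the computation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy data and fake "posterior" draws of the mean (in practice the
# draws come from the MCMC run of the actual hierarchical model).
y = rng.normal(1.0, 1.0, size=50)
post_mu = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=4000)

# Likelihood of each observation under each posterior draw:
# shape (draws, observations).
lik = stats.norm.pdf(y[None, :], loc=post_mu[:, None], scale=1.0)

# CPO_i = harmonic mean of y_i's likelihoods across draws;
# LPML = sum of log CPOs (larger = better predictive fit).
cpo = 1.0 / np.mean(1.0 / lik, axis=0)
lpml = np.sum(np.log(cpo))
print(lpml)
```

Comparing LPML (or its per-observation average, the ALMPL) across competing models is exactly the comparison the abstract describes; the same draws also feed DIC.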
98

含存活分率之貝氏迴歸模式

李涵君 Unknown Date (has links)
當母體中有部份對象因被治癒或免疫而不會失敗時,需考慮這群對象所佔的比率,即存活分率。本文主要在探討如何以貝氏方法對含存活分率之治癒率模式進行分析,並特別針對兩種含存活分率的迴歸模式,分別是Weibull迴歸模式以及對數邏輯斯迴歸模式,導出概似函數與各參數之完全條件後驗分配及其性質。由於聯合後驗分配相當複雜,各參數之邊際後驗分配之解析形式很難表達出。所以,我們採用了馬可夫鏈蒙地卡羅方法(MCMC)中的Gibbs抽樣法及Metropolis法,模擬產生參數值,以進行貝氏分析。實證部份,我們分析了黑色素皮膚癌的資料,這是由美國Eastern Cooperative Oncology Group所進行的第三階段臨床試驗研究。有關模式選取的部份,我們先分別求出各對象在每個模式之下的條件預測指標(CPO),再據以算出各模式的對數擬邊際概似函數值(LPML),以比較各模式之適合性。 / When part of a population never fails because its subjects have been cured or are immune, we need to consider the fraction of this group in the whole population, the so-called survival fraction. This article discusses how to analyze cure rate models containing a survival fraction with Bayesian methods. Two such regression models are considered: one based on the Weibull regression model and the other on the log-logistic regression model. We derive the likelihood functions and the full conditional posterior distributions of the parameters under both models. Since the joint posterior distributions are complicated and the marginal posterior distributions have no closed form, we use the Gibbs and Metropolis samplers of the Markov chain Monte Carlo (MCMC) method to simulate parameter values for the Bayesian analysis. As an illustration, we analyze data from a phase III melanoma clinical trial conducted by the Eastern Cooperative Oncology Group. For model selection, we first compute the conditional predictive ordinate (CPO) for every subject under each model, and then the log pseudo-marginal likelihood (LPML) of each model, whose values are compared to judge goodness of fit.
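A minimal random-walk Metropolis sketch for a Weibull cure rate model on simulated data: the population survival function S(t) = pi + (1 - pi) * S_Weibull(t) is the standard cure rate formulation with survival fraction pi, but all parameter values, priors (flat on the unconstrained scale), and tuning choices below are our assumptions, not the thesis's.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated survival data with a cured fraction: cured subjects never
# fail and are censored at the study end t_end.
n, pi_true, k_true, lam_true, t_end = 400, 0.3, 1.5, 2.0, 8.0
cured = rng.random(n) < pi_true
t_fail = lam_true * rng.weibull(k_true, size=n)
time = np.where(cured, t_end, np.minimum(t_fail, t_end))
event = (~cured) & (t_fail < t_end)

def log_lik(theta):
    # theta on the unconstrained scale: (logit pi, log lam, log k).
    pi = 1.0 / (1.0 + np.exp(-theta[0]))
    lam, k = np.exp(theta[1]), np.exp(theta[2])
    s = np.exp(-(time / lam) ** k)                  # Weibull survival
    f = (k / lam) * (time / lam) ** (k - 1) * s     # Weibull density
    ll_event = np.log((1 - pi) * f)                 # observed failures
    ll_cens = np.log(pi + (1 - pi) * s)             # censored subjects
    return np.sum(np.where(event, ll_event, ll_cens))

# Random-walk Metropolis with flat priors on the unconstrained scale.
theta = np.zeros(3)
ll = log_lik(theta)
draws = []
for i in range(6000):
    prop = theta + 0.05 * rng.standard_normal(3)
    ll_prop = log_lik(prop)
    if np.log(rng.random()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    if i >= 1000:                                   # drop burn-in
        draws.append(theta[0])
pi_post = 1.0 / (1.0 + np.exp(-np.mean(draws)))
print(pi_post)   # posterior mean of the cure fraction, near pi_true
```

In the thesis's full models the parameters also carry regression covariates and some blocks are updated by Gibbs steps; this sketch keeps only the Metropolis core to show how the survival-fraction likelihood enters the sampler.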
