41

高精密送り系の開発に関する研究 / A Study on the Development of High-Precision Feed Drive Systems

藤田, 智哉 26 March 2012 (has links)
Kyoto University (京都大学) / 0048 / New-system doctoral program / Doctor of Engineering / 甲第16842号 / 工博第3563号 / 新制||工||1539(附属図書館) / 29517 / Department of Micro Engineering, Graduate School of Engineering, Kyoto University / Examining committee: Prof. Atsushi Matsubara (chair), Prof. Hiroshi Matsuhisa, Prof. Shinji Nishiwaki / Qualified under Article 4, Paragraph 1 of the Degree Regulations
42

Damage Identification of Structures by Minimum Constitutive Relation Error and Sparse Regularization / 構成則誤差最小化とスパース正則化を用いた構造物の損傷同定解析

Guo, Jia 24 September 2019 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Engineering / 甲第22062号 / 工博第4643号 / 新制||工||1724(附属図書館) / Department of Architecture and Architectural Engineering, Graduate School of Engineering, Kyoto University / Examining committee: Prof. Izuru Takewaki (chair), Prof. Makoto Ohsaki, Prof. Yoshiki Ikeda / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
43

曲線配適於磁振造影之應用 / Applications of Curve Fitting in Magnetic Resonance Imaging

簡仲徽 Unknown Date (has links)
Because it offers good spatial resolution and contrast without exposing the patient to radiation or invasive harm, Magnetic Resonance Imaging (MRI) is a diagnostic aid that physicians use routinely. In particular, the contrast-agent concentration versus time curves obtained when MRI is used to measure cerebral blood flow are a powerful tool for diagnosing brain lesions. To date, however, there is no accurate and fast method for fitting the parameters of these concentration-time curves, so this thesis approaches the problem statistically and applies several fitting methods to find estimates as close as possible to the original observations. Four methods are used: regression analysis, Whittaker graduation, nonlinear-function parametric graduation, and kernel graduation. The multiplicative error term conventionally used in the medical literature is replaced with an additive one; pseudo data are then generated by computer simulation under error terms of different sizes, and the four methods are used to fit the parameters and estimate the underlying function. Combining the results on simulated and real data, the most robust method is Whittaker graduation. The real data fitted in this thesis appear to carry a large error term, which is why nonlinear-function parametric graduation fails to produce good estimates.
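
The abstract singles out Whittaker graduation as the most robust of the four fitting methods. For orientation only, here is a minimal numpy sketch of unweighted Whittaker (Whittaker-Henderson) graduation; the penalty weight `lam`, the difference order, and the name `concentration_curve` are illustrative assumptions, not values or code from the thesis.

```python
import numpy as np

def whittaker_graduation(y, lam=100.0, order=2):
    """Smooth a noisy curve y by minimizing
    sum_i (y_i - f_i)^2 + lam * sum_i (order-th difference of f at i)^2."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)            # (n - order) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# e.g. smoothed = whittaker_graduation(concentration_curve, lam=50.0)
```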
44

衡量臺灣證券市場上槓桿及反向指數股票型基金之績效 / Evaluating the Performance of Leveraged and Inverse Exchange-Traded Funds in Taiwanese Stock Market

彭思涵 Unknown Date (has links)
Using the first nine leveraged and inverse exchange-traded funds (LETFs) listed on the Taiwan Stock Exchange as its sample, this thesis evaluates their performance with the methodology of Charupat and Miu (2014). The traditional approach to ETF performance evaluation simply regresses the fund's net asset value (NAV) returns on the underlying index's cumulative returns; applied to LETFs, it fails to separate out important factors that affect NAV returns, such as the compounding and financing effects, and therefore tends to produce seriously biased or hard-to-interpret regression results. This is the first study of the performance of LETFs listed in Taiwan. By incorporating the compounding and financing effects into the regression model, it disentangles the factors that drive LETF NAV returns more precisely and compares the management performance of the funds. The empirical results confirm that the financing effect exists, support most of the theoretical properties of the compounding and financing effects, and show how these effects, together with management factors, drive the funds' tracking errors. Most importantly, the three LETFs tracking the SSE 180 index replicate their target leverage multiples more accurately, while the three LETFs tracking the FTSE TWSE Taiwan 50 index show the best management performance among all LETFs examined.
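
For context, the "traditional" evaluation the abstract criticizes can be sketched as a simple OLS regression of NAV returns on index returns. The snippet below uses synthetic data and statsmodels purely for illustration; it is not the Charupat and Miu (2014) decomposition, which additionally separates out the compounding and financing effects.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical daily data: idx_ret = underlying index returns, nav_ret = LETF NAV returns
rng = np.random.default_rng(0)
idx_ret = rng.normal(0.0, 0.01, 250)
nav_ret = 2.0 * idx_ret + rng.normal(0.0, 0.001, 250)     # a toy 2x leveraged fund

# Regress NAV returns on index returns and read the slope as the realized leverage;
# compounding and financing effects stay in the error term, which is what biases
# this simple regression when it is applied to LETFs.
fit = sm.OLS(nav_ret, sm.add_constant(idx_ret)).fit()
print(fit.params)   # [alpha, realized leverage multiple]
```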
45

迴歸模型中自我相關誤差之貝氏分析 / Bayesian Analysis of Regression Models with Autocorrelated Errors

蔡淑女, Cai, Shu-Ru Unknown Date (has links)
This thesis applies Bayesian analysis to regression models whose error terms are autocorrelated. It comprises one volume of roughly 32,000 characters, organized into six chapters and twelve sections, as follows. Chapter 1, Introduction: the regression model, the meaning of autocorrelated errors, and the theoretical framework of Bayesian analysis. Chapter 2, Simple regression with first-order autocorrelated errors: the model is analyzed with both the classical sampling-theory approach and the Bayesian approach, and the results are compared. Chapter 3, Multiple regression: Bayesian analysis of multiple regression models with autocorrelated errors. Chapter 4, Prior distributions and other assumptions. Chapter 5, An analysis of a regression model of private consumption and personal disposable income in Taiwan. Chapter 6, Conclusions.
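
As a point of reference for the model class treated in Chapters 2 and 3, a regression model with first-order autocorrelated errors is conventionally written as below; the notation is illustrative and not taken from the thesis.

```latex
y_t = \mathbf{x}_t^{\top}\boldsymbol{\beta} + u_t, \qquad
u_t = \rho\, u_{t-1} + \varepsilon_t, \qquad
\varepsilon_t \overset{iid}{\sim} N(0,\sigma^2), \quad |\rho| < 1 .
```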
46

迴歸係數脊估計式的研究 / A Study of Ridge Estimators of Regression Coefficients

蘇淑妙, Su, Shu-Miao Unknown Date (has links)
The traditional estimator of regression coefficients is the ordinary least squares (OLS) estimator, which is strongly affected by collinearity among the explanatory variables. As collinearity grows, the relationship between the explanatory variables and the response becomes increasingly unstable and the variance of the OLS estimator inflates, so the model loses the explanatory power it should have. Hoerl and Kennard therefore proposed the ridge estimator in 1970 to remedy the sensitivity of OLS to collinearity; its key difference from OLS is that it introduces a constant greater than zero. Chapter 1 is the introduction. Chapter 2 explains the causes and consequences of collinearity and develops the theoretical basis and geometric interpretation of the ridge estimator, including its bias, expectation, variance, and mean squared error, and its relationship to the OLS estimator. Chapter 3 compares the ridge estimator with other estimators, including the OLS estimator, the principal components estimator, and the shrunken estimator, using mean squared error as the criterion. Chapter 4 discusses various methods for choosing the value of k and their simulation results, including the ridge trace, the Hoerl-Kennard iterative procedure, the direct ridge estimator, and the McDonald-Galarneau method. Chapter 5 synthesizes the results of the preceding chapters and, from the author's own viewpoint, compares the methods for choosing k discussed in Chapter 4.
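
As a rough sketch of the estimator the abstract describes, the ridge estimator adds a constant k > 0 to X'X before solving. The Python below is illustrative only and does not cover the k-selection rules (ridge trace, Hoerl-Kennard iteration, direct ridge estimator, McDonald-Galarneau) compared in Chapter 4.

```python
import numpy as np

def ridge_estimator(X, y, k):
    """Ridge estimator of regression coefficients: OLS with a constant k > 0
    added to the diagonal of X'X, which tames collinearity-inflated variance."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

# k = 0 reproduces ordinary least squares; increasing k trades a little bias
# for a smaller variance, the trade-off analyzed in Chapter 2.
```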
47

評估不同模型在樣本外的預測能力 / 利用支向機來做預測的結合 / Evaluating the Out-of-Sample Forecasting Ability of Different Models: Combining Forecasts with Support Vector Machines

蔡欣民, Tsai Shin-Ming Unknown Date (has links)
Will tomorrow's stock price rise or fall? Will it rain tomorrow? Which numbers will come up in the next lottery draw? Everyone would like to know the future in advance, but how should we actually make forecasts? This thesis compares the forecasting performance of different time series models and tests whether combining forecasts improves accuracy. Research on time series models has flourished in recent years, so the thesis briefly introduces the linear AR model, the nonlinear TAR model, and the nonlinear STAR model, together with how each is used for out-of-sample forecasting. It also explains how forecast combination is carried out; the aim of combining forecasts is to let the strengths of one model offset the weaknesses of another. In addition to the traditional regression-based approach and the time-varying-coefficients approach, the thesis proposes two non-regression combination methods: fitness weights and the support vector machine (SVM). The main focus is the SVM, because regression-based combination can suffer from collinearity while the SVM does not. The empirical results show that, among the time series models, the nonlinear models forecast worse than the simple linear AR model in most situations. For forecast combination, the SVM performs about as well as the regression approach, and both are more robust than the time-varying-coefficients approach; when the base models' forecasts are collinear or the sample is small, however, the SVM outperforms the regression approach. Finally, the forecasting performance of time series models can change drastically with the nature of the data, so the safer strategy of forecast combination may be worth considering, since its range of forecast errors is smaller than that of the individual time series models.
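
To make the two combination ideas concrete, the sketch below combines several base-model forecasts once by linear regression and once by a support vector machine, using scikit-learn on synthetic data; the synthetic forecasts, the RBF kernel, and the variable names are illustrative assumptions, not the thesis's actual AR/TAR/STAR forecasts or tuning.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

# Hypothetical setup: each column of F is an out-of-sample forecast series from a
# base model (e.g. AR, TAR, STAR); y holds the realized values over the same window.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
F = np.column_stack([y + rng.normal(scale=s, size=200) for s in (0.5, 0.8, 1.0)])

reg_combo = LinearRegression().fit(F, y)   # regression-based combination weights
svm_combo = SVR(kernel="rbf").fit(F, y)    # SVM combination, not affected by collinearity

pred_reg = reg_combo.predict(F)            # combined forecast from the regression weights
pred_svm = svm_combo.predict(F)            # combined forecast from the SVM
```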
48

類神經網路在汽車保險費率擬訂的應用 / Artificial Neural Network Applied to Automobile Insurance Ratemaking

陳志昌, Chen, Chi-Chang Season Unknown Date (has links)
Since 1999, the take-up rate of automobile material damage insurance (AMDI) in Taiwan has declined while its loss ratio has risen year after year, in sharp contrast to the steadily falling loss ratio of compulsory third-party liability insurance. In principle, charging premiums according to individual risk, attracting insureds who accept the price and surcharging high risks, can raise the take-up rate while keeping losses within a reasonable range. Against this background, the thesis uses AMDI data from a domestic property insurer for policy years 1999 to 2002 to study the relation between past premiums and future claims and, subject to an unbiasedness requirement, looks for ways to reduce the variance of prediction errors. Because the relation between loss experience (input) and estimated future claims (output) resembles the kind of mapping a human brain performs, the analysis is carried out with both the minimum bias procedure and an artificial neural network. The results show that cross-subsidization exists in AMDI. The new rates produced by minimum bias estimation alleviate the imbalance between premiums and losses, but the neural-network classification rating applies larger surcharges and discounts to the high-risk and low-risk policyholders who warrant them, so it separates risk groups more effectively, reduces cross-subsidization between them, and shows smaller error variance across policy years.
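
As a hedged illustration of the neural-network side of the comparison, the sketch below fits a small multilayer perceptron to synthetic rating factors and claim costs with scikit-learn; the factor names, data-generating rule, and network size are invented for illustration and are not the insurer's data, the thesis's network, or the minimum bias procedure itself.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic rating factors: driver age, vehicle age, engine size (all illustrative).
rng = np.random.default_rng(1)
X = rng.uniform([18, 0, 1000], [75, 15, 3500], size=(500, 3))
claims = 2000 + 30 * (60 - X[:, 0]).clip(min=0) + 50 * X[:, 1] + rng.normal(0, 300, 500)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
model.fit(X, claims)
expected_claim = model.predict(X[:5])   # per-policy expected cost drives surcharge/discount
```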
49

獨立與非獨立性資料之多重比較 / Multiple Comparisons for Independent and Dependent Data

李昀叡 Unknown Date (has links)
Analysis of variance (ANOVA) can test whether there are differences among several treatments, but it only tells us that some difference exists; it cannot identify which treatments differ, so multiple comparison procedures are needed. This thesis focuses on multiple comparisons for correlated discrete data and uses simulation to compare frequently used procedures, both pairwise and many-to-one, with type I error and power as the performance measures. Independent continuous, correlated continuous, independent discrete, and correlated discrete data are examined in turn, and adjusted procedures are proposed for the correlated cases. Taking type I error and power together, when the mean differences among treatments are small, Shaffer's first procedure (1986) and Procedure 4 of Bergmann and Hommel (1988) perform best for pairwise comparisons and the Hochberg (1988) test performs best for many-to-one comparisons; when the differences are large, the Bonferroni procedure is best for pairwise comparisons and the Hochberg (1988) and Simes (1986) procedures are best for many-to-one comparisons.
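
For a concrete sense of two of the procedures compared, the sketch below applies the Bonferroni and Simes-Hochberg adjustments to a set of illustrative p-values with statsmodels; the p-values are made up and the snippet does not reproduce the thesis's simulation design.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Illustrative p-values from, say, all pairwise treatment comparisons.
pvals = np.array([0.001, 0.012, 0.018, 0.030, 0.041, 0.210])

reject_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
reject_hoch, p_hoch, _, _ = multipletests(pvals, alpha=0.05, method="simes-hochberg")

# Bonferroni controls the family-wise error rate conservatively; the Simes-Hochberg
# step-up procedure rejects at least as many hypotheses at the same alpha.
print(reject_bonf, reject_hoch)
```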
50

應用資料採礦技術於資料庫加值中的誤差指標及模型準則 / ERROR INDEX AND MODEL CRITERIA FOR VALUE-ADDED DATABASE IN DATA MINING

包寶茹 Unknown Date (has links)
Using data to help an enterprise make sound decisions is a long-standing idea. In traditional statistics, the database at hand is analyzed directly, but data mining often runs into the bottleneck of having too little data, which limits the value of the database. If a survey sample is used to estimate the relationship, within that sample, between the columns missing from the target database and the columns it already has, the missing columns can be imputed back into the target database, enlarging it; this is database value-adding, and future analyses that need those columns can then proceed by sampling, effectively lowering the enterprise's costs. The goal of this thesis is to integrate existing statistical theory and methods to find an error index and model criteria that show whether the added columns are credible. Because imputing columns into the target database introduces error, and the size of that error governs whether the added columns are feasible and believable, the study does not address the choice of sampling design; it works under simple random sampling and compares predicted with actual values before and after the value-adding step. Appropriate indices are developed separately for the two types of target column: categorical columns are judged with a similarity-based index, while continuous columns are treated within a distance and correlation framework, with RMSE as the criterion for continuous data and an F-value for discrete data. The empirical results show that database value-adding is feasible: the similarity between the values predicted by the fitted models and the original observations exceeds ninety percent, indicating that the added columns are credible. Keywords: data mining, database value-added, database, error index, model criteria, similarity
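
Below is a minimal sketch of the two kinds of error index the abstract describes, assuming RMSE for a continuous value-added column and a simple match-rate similarity for a categorical one; the function names and variable names are illustrative, and the 90% figure in the comment only echoes the abstract's empirical finding.

```python
import numpy as np

def rmse(observed, predicted):
    """Error index for a continuous value-added column."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((observed - predicted) ** 2))

def similarity(observed, predicted):
    """Share of matching categories for a categorical value-added column."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return np.mean(observed == predicted)

# e.g. similarity(holdout_labels, imputed_labels) >= 0.9 would suggest a credible column.
```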
