固定給付制退休金之最佳控管:隨機模擬方法之應用 / Optimal Control of Defined-Benefit Pension Funds: An Application of Stochastic Simulation. 張乃懿, Chang, Nai Yi. Unknown Date.
本研究中以隨機模擬的方法應用於退休金最佳控制理論中,並將下跌風險(Downside Risks)加入二次最佳化函數中作為最適化準則,再以英國與美加地區不同提撥率模型做為研究對象,觀察不同情境下之結果。Haberman(1994)首先提出以最適化方法應用於固定給付制退休金基金上,並具體建立二次最適化準則,以提撥與資產的變異作為控制因子。Chang(2003)以下跌風險的觀念,指出退休金基金經營時管理人常較注意提撥過多與資產不足風險,若經營時考慮下跌風險,則會產生與原來考量不同之結果。本文以Chang(2003)之研究為基礎,將其建議之最佳化函數做為考量下跌風險之依據,並提出改良英國與美加地區之提撥率模型,採模擬的方式進行最佳化,探討其對不同提撥率模型之影響。研究結果發現若以隨機模擬作為最佳控制方法,在不同人口假設及精算模型下,會產生相同之結果,且發現下跌風險對於不同提撥率模型有不同之影響,其中建議的英式模型有效降低風險,而美式提撥率模型對於提撥率比例與資產負債比例在最佳化下有較理想之結果。最重要的,退休金基金管理人可利用隨機模擬的方式進行最佳化控制,以提供決策之參考依據。 / This study applies stochastic simulation to the optimal control of pension funds, adding downside risks to the quadratic objective function as the optimization criterion, and examines UK-style and North American contribution-rate models under different scenarios. Haberman (1994) first applied optimization methods to defined-benefit pension funds, formulating a quadratic criterion with the variances of contributions and assets as control factors. Chang (2003) introduced the notion of downside risk, observing that fund managers pay more attention to excessive contributions and insufficient assets, and that incorporating downside risk changes the conclusions. Building on Chang (2003), this study takes the suggested objective function as the basis for downside risk, proposes improved UK and North American contribution-rate models, and carries out the optimization by simulation to examine its effect on the different models. The results show that stochastic simulation as an optimal control method yields the same results under different demographic assumptions and actuarial models, and that downside risk affects the contribution-rate models differently: the proposed UK-style model effectively reduces risk, while the US-style model yields more desirable optimized contribution rates and funding ratios. Most importantly, pension fund managers can use stochastic simulation to perform optimal control as a reference for decision making.
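As a rough illustration of the approach (and not the thesis's actual dynamics), the following sketch simulates a hypothetical fund under a constant contribution rate and grid-searches the rate minimizing a downside-risk quadratic loss; the return process, benefit outgo, and target levels are all invented for the example:

```python
import random

def downside_loss(c_rate, n_paths=500, n_years=20, seed=1,
                  target_fund=1.0, target_contrib=0.15):
    """Monte Carlo estimate of a downside-risk quadratic loss: only
    contributions above target and fund levels below target are penalized."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        fund, loss = 1.0, 0.0
        for _ in range(n_years):
            ret = rng.gauss(0.05, 0.10)                # hypothetical asset return
            fund = fund * (1.0 + ret) + c_rate - 0.08  # hypothetical benefit outgo
            loss += max(c_rate - target_contrib, 0.0) ** 2 \
                  + max(target_fund - fund, 0.0) ** 2
        total += loss / n_years
    return total / n_paths

# grid-search the contribution rate minimizing the simulated downside loss
rates = [0.05 + 0.01 * i for i in range(16)]           # 5% .. 20%
best_rate = min(rates, key=downside_loss)
```

Because the same random seed is reused for every candidate rate, the comparison across rates is a common-random-numbers one, which keeps the grid search stable.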
無證兒童少年身分取得與權益保障-以無證移工在臺所生子女為例 / Acquisition of Legal Status and Protection of Rights for Undocumented Children and Youth: The Case of Children Born in Taiwan to Undocumented Migrant Workers. 李孟珊. Unknown Date.
隨著入臺工作的移工人數逐年增加,逃逸隱匿、逾期居留的無證移工在臺生育子女的現象屢見不鮮,這些兒童、少年的國籍與身分認定,連繫於父或母的國籍身分與婚姻狀況,或因父母其中一人行方不明導致身分確認困難,從而產生各種錯綜複雜的身分態樣。身分未定的空窗期導致在臺生活的無證兒童、少年(以下簡稱無證兒少)面臨權益空白的困境。
本研究從國籍制度、移民政策與出生通報機制開始探討,針對無證移工在臺所生子女,以「生父為國人,生母為外國籍」、「生父不詳,生母為外國籍行方不明」、「隨外國籍父母在臺隱匿者」三種態樣,分別描述無證兒少身分取得及現況處遇方式;在權益保障方面,以姓名權與身分國籍權、教育權與健康權等相關法制為出發點,評述兒少權益之取得,仍受限於身分認定與基本權益無法脫鉤的規範中。此外,在婚生推定制度的架構下,國人生父認領無證兒少的程序,須由外國籍無證移工生母提具原屬國開立且經駐外館處驗證之婚姻狀況證明方得辦理,惟提證困難導致認領流程延宕,因此,本研究亦分析婚生推定制度對於無證兒少身分取得的影響。
透過文獻分析與深度訪談的方式,深入探究無證兒少面臨的脆弱處境,他們可能遭遺棄、被販賣、或跟著父母在臺四處逃逸、躲藏,成為社會底層的黑戶。本研究嘗試以「兒童、少年」的觀點,以及兒童權利公約規範的最佳利益為前提,就無證兒少的國籍身分與權益困境,提出短期與中長期建議,更期待藉由本研究,讓這些與我們生活在同一片土地的無證兒少受到更多關懷與重視。 / With the number of migrant workers entering Taiwan increasing year by year, children born in Taiwan to undocumented migrant workers who have absconded or overstayed are no longer rare. The nationality and legal status of these children hinge on a parent's nationality and marital status, or become difficult to establish when a parent's whereabouts are unknown, producing a complex variety of status situations; during the resulting limbo, undocumented children and youth living in Taiwan face a vacuum of rights. / This study begins from the nationality regime, immigration policy, and the birth-notification mechanism, and describes status acquisition and current handling for three situations: a national father with a foreign mother; an unknown father with a missing foreign mother; and children hiding in Taiwan with foreign parents. On rights protection, it starts from the legal framework on the rights to a name and nationality, to education, and to health, and argues that these rights remain bound to status determination and cannot be decoupled from it. Moreover, under the presumption-of-legitimacy regime, acknowledgment of an undocumented child by a national father requires the undocumented foreign mother to produce a marital-status certificate issued by her country of origin and authenticated by an overseas mission; the difficulty of obtaining such proof delays the acknowledgment process, so the study also analyzes the impact of the presumption of legitimacy on status acquisition. / Through document analysis and in-depth interviews, the study probes the vulnerability of undocumented children, who may be abandoned, trafficked, or living on the run and in hiding with their parents at the bottom of society. Taking the child's perspective and the best-interests principle of the Convention on the Rights of the Child as premises, it offers short-term and medium-to-long-term recommendations on the nationality and rights predicament of undocumented children, in the hope that these children, who live on the same land as we do, will receive more care and attention.
多變量模擬輸出之統計分析 / Statistical Analysis of Multivariate Simulation Output. 許淑卿, XU, SHU-GING. Unknown Date.
本論文共一冊,分八章八節。
本論文所擬探討之對象為多變量統計分配函數模擬(Simulation)之最佳停止法則問題(Optimal Stopping Rule Problem),此類問題之目的在於如何利用盡量小的樣本數之觀察值來求得未知母數(Unknown Parameter)的信賴區間(域)(Confidence Interval / Confidence Region),而此信賴區間(域)之寬度(Width)及包含機率(Coverage Probability)均已事先指定。
以往研究對象多僅限於單變量統計分配函數,而多變量統計分配函數模擬之最佳停止法則問題,仍尚在研究階段,因此本論文之重點乃在於探討如何求得滿足最佳停止法則之最小樣本數。在此以多變量常態分配函數為重心,並進而嘗試推廣至其他多變量統計分配函數。 / This thesis (one volume, in eight chapters) studies the optimal stopping rule problem for simulation from multivariate statistical distributions: how to use as few observations as possible to obtain a confidence interval (or region) for an unknown parameter whose width and coverage probability are both prescribed in advance. Earlier work was mostly confined to univariate distributions, while the multivariate case is still under investigation; the thesis therefore focuses on finding the smallest sample size satisfying the optimal stopping rule, centering on the multivariate normal distribution and then attempting to extend the results to other multivariate distributions.
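For the univariate special case, the flavor of such stopping rules can be sketched with a Chow-Robbins-type procedure, a classical rule in this literature used here only as an illustration (the thesis treats the harder multivariate version): keep sampling until the estimated confidence interval for the mean reaches the prescribed half-width.

```python
import random
import statistics

def sequential_sample_size(sample, half_width=0.2, z=1.96, n0=10):
    """Chow-Robbins-type rule: sample until the estimated confidence
    interval for the mean has the prescribed half-width.  `sample()`
    draws one observation from the unknown distribution."""
    xs = [sample() for _ in range(n0)]      # pilot sample
    while True:
        s2 = statistics.variance(xs)        # unbiased sample variance
        # stop when n >= (z / d)^2 * s^2, i.e. z * s / sqrt(n) <= d
        if len(xs) >= (z / half_width) ** 2 * s2:
            return len(xs), statistics.mean(xs)
        xs.append(sample())

rng = random.Random(42)
n, mean = sequential_sample_size(lambda: rng.gauss(0.0, 1.0))
```

For a standard normal population and half-width 0.2 at 95% confidence, the rule stops near (1.96/0.2)² ≈ 96 observations.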
模糊資料之軟統計分析及檢定 / Soft Statistical Analysis and Tests for Fuzzy Data. 張建瑋, Chang, Chien-Wei. Unknown Date.
本文將模糊理論的觀念,應用在估計、檢定及時間數列分析上。研究重點包括離散型及連續型模糊樣本的定義與度量,模糊參數的最佳估計,模糊排序方法應用於無母數檢定,模糊相似度的定義、性質,以及如何將其應用於辨識不同時間數列間的落差l期相似程度等。我們首先將常見的模糊資料分為離散型及連續型,並針對不同類型的資料,給定對應的模糊平均數、模糊變異數等模糊參數的概念與一些重要性質。接著我們提出幾種估計方法,針對不同的模糊參數進行最佳估計並提出可行的評判準則。進一步地,我們將模糊排序方法應用於無母數檢定推論。最後我們提出模糊相似度的定義與度量。經由系統性的模擬與分析,我們建立兩時間數列間模糊相似度演算法則。實證分析方面,我們利用提出的方法對台灣的股價加權指數、個股股價進行估計及檢定;同時,針對台灣歷年GDP、民間消費、毛投資間的相似性進行偵測,以驗證我們提出的模糊參數估計、模糊無母數檢定及模糊相似度演算法的效率性與實用性。 / In this paper, we apply fuzzy theory to estimation, nonparametric tests, and time series analysis. We focus on four questions: how to define and measure discrete and continuous fuzzy data; how to find optimal estimators for fuzzy parameters; how to apply fuzzy ranking methods in nonparametric tests when the data are vague; and how to define and compute the degree of fuzzy similarity between two time series. First, fuzzy data are classified as discrete or continuous, and we give definitions and properties of the fuzzy mean and fuzzy variance for each type. Next, we propose estimation methods together with feasible evaluation criteria. Moreover, we apply fuzzy ranking methods to nonparametric tests such as the sign test, the Wilcoxon signed-rank test, and the Wilcoxon rank-sum test. Finally, we give definitions and an algorithm for computing the degree of fuzzy similarity between two time series. Simulated and empirical examples illustrate the techniques: we estimate and test Taiwan's weighted stock index and individual stock prices, and detect similarity among Taiwan's GDP, private consumption, and gross investment, verifying the efficiency and practicality of the proposed fuzzy estimation, fuzzy nonparametric tests, and fuzzy similarity algorithm. The results show that fuzzy statistics with soft computing are more realistic and reasonable for social science research.
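One common convention for continuous (interval-valued) fuzzy data, shown here purely as an illustration and not necessarily with the thesis's exact definitions, averages interval endpoints separately for the fuzzy mean and measures similarity between intervals by their degree of overlap:

```python
def interval_mean(samples):
    """Fuzzy (interval-valued) sample mean: average the lower and upper
    endpoints separately -- a common convention for continuous fuzzy data."""
    lo = sum(a for a, _ in samples) / len(samples)
    hi = sum(b for _, b in samples) / len(samples)
    return lo, hi

def interval_similarity(x, y):
    """Degree of overlap between two intervals, in [0, 1]:
    length of the intersection divided by length of the union."""
    inter = max(0.0, min(x[1], y[1]) - max(x[0], y[0]))
    union = max(x[1], y[1]) - min(x[0], y[0])
    return inter / union if union > 0 else 1.0

# e.g. three days of (low, high) price quotes for one stock
prices = [(98, 103), (100, 106), (96, 101)]
m = interval_mean(prices)   # -> (98.0, 103.33...)
```

Lag-l similarity between two interval-valued series can then be built by averaging `interval_similarity` over aligned pairs after shifting one series by l periods.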
多處理廠環境下逆物流最適訂單接受量與處理量之研究 / Optimal Order Acceptance and Processing Quantities for Reverse Logistics with Multiple Processing Facilities. 李惠卿, Lee, Huei Ching. Unknown Date.
逆物流(reverse logistics)代表了將使用過的產品從消費者手上收回、並將此資源重新在市場上再利用的一連串物流活動。其配送成本往往比正向物流高,對於回送之產品,在運送、儲存、處理、管理方面亦無規律通路,較正向供應鏈增加許多的複雜性和不確定性,企業往往選擇將逆向物流之活動外包給專業物流服務商。 / 對逆向物流服務商來說,既以營利為目標,便有營運範疇內法規、利潤、運輸成本、營運成本之考量。過去逆向物流方面之研究主題,多以逆向供應鏈上的廠址設置為主,本研究針對同時具有多個處理廠的逆物流服務供應商進行探討,建立適合的營運模式,考慮多時期、多個逆物流處理廠、多種型態的退回商品,建立一數量決策模式,以逆物流服務商的最大營運利潤為目標,探討逆物流之下的最適合再生物料接受訂單數量、以及個別逆物流處理中心之最適合當期處理量,並考慮退回商品回收量與處理產出比率之不確定性對處理廠中再生物料實際產量的影響。對於模式當中的不確定因子,本研究建構以情境為基礎的穩健最佳化之模式求得穩健解。 / Reverse logistics covers a series of activities: collecting returned products from consumers, recycling, reusing, and reducing the amount of material used. Implementing reverse logistics is complicated and costs more than forward logistics, and there is no established channel for the transportation, storage, processing, and management of returns. To reduce cost and focus on their core business, firms often outsource these processes to third-party reverse logistics providers. / Previous studies have mostly focused on facility location-allocation and on designing the infrastructure of reverse distribution channels. From a provider's perspective, this research considers the operating profit of a reverse logistics service provider with multiple collection sites and refurbishing facilities. It presents a multi-period, multi-facility, multi-product decision model that maximizes net profit by choosing the quantity of returned products to process at each refurbishing facility and the quantity of recovered-material orders to accept from industry, where the actual output of recovered material is affected by uncertainty in both collection volume and processing yield. A scenario-based robust optimization formulation is used to obtain robust solutions under these uncertain factors.
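A toy one-facility, one-product version of the scenario-based robust idea might look as follows; the price, cost, and scenario figures are invented, and the downside-penalty objective is one standard (Mulvey-style) robustness criterion, not necessarily the thesis's exact formulation:

```python
def profit(q_process, returned, price=10.0, proc_cost=4.0, disposal=1.0):
    """Profit for a planned processing quantity under one scenario: the
    facility can only process what is actually returned; excess returns
    incur a disposal cost."""
    processed = min(q_process, returned)
    return processed * (price - proc_cost) - max(returned - processed, 0) * disposal

# (probability, returned units) -- uncertain collection-volume scenarios
scenarios = [(0.3, 80), (0.5, 100), (0.2, 130)]

def robust_value(q, risk_weight=2.0):
    """Scenario-based robust objective: expected profit minus a penalty on
    downside deviation across scenarios."""
    exp = sum(p * profit(q, r) for p, r in scenarios)
    penalty = sum(p * max(exp - profit(q, r), 0.0) for p, r in scenarios)
    return exp - risk_weight * penalty

best_q = max(range(0, 151, 10), key=robust_value)
```

Raising `risk_weight` pushes the chosen quantity toward plans whose profit varies less across scenarios, which is the sense in which the solution is "robust".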
最佳風險分散投資組合在台灣股票市場之應用—以元大台灣卓越50基金為例 / Application of the Most Diversified Portfolio in the Taiwan Stock Market: The Yuanta/P-shares Taiwan Top 50 ETF. 陳慶安, Chen, Ching An. Unknown Date.
本研究利用元大台灣50 ETF作為樣本資料,檢測2006年至2016年實證期間風險基礎指數和市值加權指數所分別建構的投資組合,其績效表現、風險表現、分散性表現的優劣性;其中Choueifaty, Froidure, and Reynier (2011) 所建構的最佳風險分散投資組合 (most diversified portfolio) 為近年來新興的風險基礎指數投資組合,我們將證實其在獲得良好分散性的同時,亦能如其他風險基礎指數投資組合般,獲得超越追蹤市值加權指數之投資組合的績效。
本研究以夏普比率、信息比率、阿爾法作為衡量績效的指標;以標準差、貝他作為風險衡量的指標;另以Choueifaty and Coignard (2008) 提出的分散性比率作為分散性衡量的指標。實證結果顯示,在整體實證期間,最佳風險分散投資組合在績效、風險、分散性的指標上皆有超越市值加權指數投資組合的能力;而以年為單位的個別期間,其績效衡量上大致優於市值加權指數投資組合,風險和分散性衡量上則均優於市值加權指數投資組合的表現。但論其整體表現,並非本研究所提出的風險基礎指數投資組合中最佳者,因此投資人在選擇該類投資組合策略時,建議從該投資組合過去表現中判斷,選擇符合自己投資習慣者為之。 / This article examines the performance, risk, and diversification of portfolios built from risk-based indexes and the cap-weighted index during 2006-2016. We introduce the most diversified portfolio (MDP) proposed by Choueifaty, Froidure, and Reynier (2011) and find that, in line with the goal of other risk-based portfolios of improving the risk-return profile of the cap-weighted portfolio, the MDP outperforms the cap-weighted portfolio in performance, risk, and diversification.
We use the Sharpe ratio, information ratio, and alpha as performance indicators, standard deviation and beta as risk indicators, and the diversification ratio (DR) proposed by Choueifaty and Coignard (2008) as the diversification indicator. The results show that the MDP outperforms the cap-weighted portfolio on all three dimensions over the full empirical period. In individual years, it is generally superior in performance and consistently superior in risk and diversification. Nevertheless, the MDP is not the best of the risk-based portfolios examined here, so we suggest that investors choose among such strategies by studying their past performance, risk, and diversification, selecting the one that fits their investment preferences.
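The diversification ratio of Choueifaty and Coignard (2008), the weighted-average volatility divided by portfolio volatility, is straightforward to compute; the volatilities and correlation below are hypothetical:

```python
import math

def diversification_ratio(w, vols, corr):
    """DR(w) = (w . sigma) / sqrt(w' Sigma w), where Sigma is built from
    the individual volatilities and a correlation matrix."""
    n = len(w)
    avg_vol = sum(w[i] * vols[i] for i in range(n))
    var = sum(w[i] * w[j] * corr[i][j] * vols[i] * vols[j]
              for i in range(n) for j in range(n))
    return avg_vol / math.sqrt(var)

vols = [0.20, 0.30]                  # hypothetical annualized volatilities
corr = [[1.0, 0.4], [0.4, 1.0]]      # hypothetical correlation matrix
dr = diversification_ratio([0.5, 0.5], vols, corr)
```

DR equals 1 for a single-asset portfolio and exceeds 1 whenever imperfectly correlated assets are combined; the MDP is the weight vector that maximizes this ratio.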
線性羅吉斯迴歸模型的最佳D型逐次設計 / The D-Optimal Sequential Design for the Linear Logistic Regression Model. 藍旭傑, Lan, Shiuh Jay. Unknown Date.
假設二元反應曲線為簡單線性羅吉斯迴歸模型(Simple Linear Logistic Regression Model),在樣本數為偶數的前題下,所謂的最佳D型設計(D-Optimal Design)是直接將半數的樣本點配置在第17.6個百分位數,而另一半則配置在第82.4個百分位數。很遺憾的是,這兩個位置在參數未知的情況下是無法決定的,因此逐次實驗設計法(Sequential Experimental Designs)在應用上就有其必要性。在大樣本的情況下,本文所探討的逐次實驗設計法在理論上具有良好的漸近最佳D型性質(Asymptotic D-Optimality)。尤其重要的是,這些特性並不會因為起始階段的配置不盡理想而消失,影響的只是收斂的快慢而已。但是在實際應用上,這些大樣本的理想性質卻不是我們關注的焦點。實驗步驟收斂速度的快慢,在小樣本的考慮下有決定性的重要性。基於這樣的考量,本文將提出三種起始階段設計的方法並透過模擬比較它們之間的優劣性。 / The D-optimal design for the simple logistic regression model is well known to be a two-point design: one half of the design points are allocated at the 17.6th percentile of the response curve, and the other half at the 82.4th percentile. Since the locations of the two design points depend on the unknown parameters, they cannot be obtained in advance; to resolve this dilemma, a sequential design is necessary in practice. The sequential designs discussed here have good large-sample properties (asymptotic D-optimality) that do not disappear even when the initial stage is poorly chosen; only the speed of convergence is affected. Under small sample sizes, however, the convergence speed depends heavily on the initial stage. We therefore propose three initial-stage designs and compare them through simulations written in C++.
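The two design points follow directly from inverting the logistic curve at the 17.6th and 82.4th percentiles; a minimal sketch, with parameter values that are purely hypothetical:

```python
import math

def logit(p):
    """Inverse of the standard logistic function."""
    return math.log(p / (1.0 - p))

def d_optimal_points(alpha, beta):
    """Two-point D-optimal design for the simple logistic model
    P(x) = 1 / (1 + exp(-(alpha + beta * x))): half the observations go
    where the response equals 0.176, half where it equals 0.824."""
    return tuple((logit(p) - alpha) / beta for p in (0.176, 0.824))

lo, hi = d_optimal_points(alpha=0.0, beta=1.0)
```

With known parameters the points are fixed; a sequential design replaces `alpha` and `beta` with their current estimates at each stage, which is exactly why the initial-stage estimates matter.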
無趨勢PBIB設計的建構和最佳化性質 / Construction and Optimality of Trend-Free Versions of PBIB Designs. 黃建中, Hwang, Chien Chung. Unknown Date.
實驗設計中,我們假設在區塊中存在一趨勢效應(trend effect)。此趨勢效應影響觀察值,也影響我們對區塊效應(block effect)和處理效應(treatment effect)的估計。此種設計模式不同於一般的區塊設計模式,因此須將趨勢效應加入設計模式中。
Bradley and Yeh (1980)研究和討論此種趨勢效應在區塊設計模式中之影響,並定義出無趨勢設計(trend-free design)。所謂無趨勢設計,乃是在區塊設計模式中,趨勢效應被抵消而不影響處理效應之分析。Bradley and Yeh (1983)推導了一個線性無趨勢設計存在的必要條件是 r(k+1)≡0 (mod 2),其中k為區塊大小,r為處理出現的次數。
Bradley and Yeh進一步預測任一滿足 r(k+1)≡0 (mod 2) 的區塊設計,經過在區塊中處理位置調整後,可變為一個線性無趨勢設計。本篇論文的主要目的乃是在探討給定一GD設計(group-divisible designs),檢驗和推導此預測是否為真。 / Yeh and Bradley conjectured that every binary connected block design with blocks of size k and a constant replication number r for each treatment can be converted to a linear trend-free design by permuting the positions of treatments within blocks if and only if r(k+1)≡0 (mod 2). Chai and Majumdar (1993) proved that any BIB design satisfying r(k+1)≡0 (mod 2) can be converted to a linear trend-free design. In this thesis, we examine whether this conjecture holds for group-divisible (GD) designs.
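The linear trend-free condition can be checked mechanically: with within-block plot positions centered at (k+1)/2, each treatment's position coefficients must sum to zero so that the linear trend contrast is orthogonal to every treatment. A minimal sketch (the toy design below is illustrative only, not a GD design from the thesis):

```python
def is_linear_trend_free(blocks):
    """Check the linear trend-free condition: for each treatment, the sum
    of centered plot positions (1..k minus (k+1)/2) over its occurrences
    must be zero, so the linear trend cancels out of treatment contrasts."""
    totals = {}
    for block in blocks:
        k = len(block)
        for pos, trt in enumerate(block, start=1):
            totals[trt] = totals.get(trt, 0.0) + (pos - (k + 1) / 2)
    return all(abs(v) < 1e-9 for v in totals.values())

# toy design with r = 2, k = 2, so r(k+1) = 6 is even: the second block
# reverses the plot order of the first, cancelling the linear trend
design = [[1, 2], [2, 1]]
```

Permuting treatments within blocks, as in the conjecture, is precisely a search for an arrangement that makes this check pass.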
簡單線性迴歸模式中解釋變數具測量誤差下控制問題之研究 / A Study of the Control Problem in Simple Linear Regression with Measurement Error in the Explanatory Variable. 張文哲. Unknown Date.
在解釋變數含測量誤差的簡單線性迴歸模式中,欲使第t+1期之產出Y達到某一目標值Y<sup>*</sup>,則必需控制第t+1期投入變數Z,若參數α,β為已知時,可以將其設定為θ=(Y<sup>*</sup>-α)/β。但當參數α,β為未知時,我們利用LSCE控制法則的設定方法,得到第t+1期設定的控制值Z<sub>t+1</sub>,而且在機率為1下,Z<sub>t+1</sub>收斂至θ=(Y<sup>*</sup>-α)/β。而貝氏最佳控制法則部份則是由第t+1期的預測期望損失,找出使其為最小的Z值即是所應設定的第t+1期控制值Z<sub>t+1</sub>,並利用模擬結果來說明。 / In a simple linear regression model whose explanatory variable is measured with error, driving the period t+1 output Y to a target value Y<sup>*</sup> requires controlling the period t+1 input Z. When the parameters α and β are known, the input can be set to θ=(Y<sup>*</sup>-α)/β. When they are unknown, the LSCE control rule yields the period t+1 control value Z<sub>t+1</sub>, which converges to θ=(Y<sup>*</sup>-α)/β with probability 1. Under the Bayesian optimal control rule, the period t+1 control value Z<sub>t+1</sub> is instead the value of Z minimizing the period t+1 predictive expected loss; simulation results are used for illustration.
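A minimal simulation of the certainty-equivalence idea behind an LSCE-type rule (this sketch omits the measurement error in the explanatory variable for simplicity, and all parameter values are hypothetical): at each period, re-estimate (α, β) by least squares on the data so far and set the next input to (Y<sup>*</sup> − α̂)/β̂.

```python
import random

def lsce_control(y_target, n_periods=200, alpha=2.0, beta=0.5, seed=7):
    """Least-squares certainty-equivalence control: re-estimate (alpha, beta)
    by simple linear regression each period, then set Z_{t+1} to
    (Y* - a_hat) / b_hat.  The true parameters are unknown to the controller."""
    rng = random.Random(seed)
    zs = [0.0, 4.0, 8.0]                       # pilot inputs to start estimation
    ys = [alpha + beta * z + rng.gauss(0.0, 0.2) for z in zs]
    for _ in range(n_periods):
        n = len(zs)
        zbar, ybar = sum(zs) / n, sum(ys) / n
        sxy = sum((z - zbar) * (y - ybar) for z, y in zip(zs, ys))
        sxx = sum((z - zbar) ** 2 for z in zs)
        b = sxy / sxx
        a = ybar - b * zbar
        z_next = (y_target - a) / b            # certainty-equivalence setting
        zs.append(z_next)
        ys.append(alpha + beta * z_next + rng.gauss(0.0, 0.2))
    return zs[-1]

z_final = lsce_control(y_target=5.0)   # true theta = (5 - 2) / 0.5 = 6
```

Consistent with the convergence-with-probability-1 result cited above, the control settings cluster around θ as data accumulate.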
粒子群最佳化演算法於估測基礎矩陣之應用 / Particle Swarm Optimization Algorithms for Fundamental Matrix Estimation. 劉恭良, Liu, Kung Liang. Unknown Date.
基礎矩陣在影像處理是非常重要的參數,舉凡不同影像間對應點之計算、座標系統轉換、乃至重建物體三維模型等問題,都有賴於基礎矩陣之精確與否。本論文中,我們提出一個機制,透過粒子群最佳化的觀念來求取基礎矩陣,我們的方法不但能提高基礎矩陣的精確度,同時能降低計算成本。
我們從多視角影像出發,以SIFT取得大量對應點資料後,從中選取8點進行粒子群最佳化。取樣時,我們透過分群與隨機挑選以避免選取共平面之點。然後利用最小平方中值法來估算初始評估值,並遵循粒子群最佳化演算法,以最小疊代次數為收斂準則,計算出最佳之基礎矩陣。
實作中我們以不同的物體模型為標的,以粒子群最佳化與最小平方中值法兩者結果比較。實驗結果顯示,疊代次數相同的實驗,粒子群最佳化演算法估測基礎矩陣所需的時間,約為最小平方中值法來估測所需時間的八分之一,同時粒子群最佳化演算法估測出來的基礎矩陣之平均誤差值也優於最小平方中值法所估測出來的結果。 / Fundamental matrix is a very important parameter in image processing. In corresponding point determination, coordinate system conversion, as well as three-dimensional model reconstruction, etc., fundamental matrix always plays an important role. Hence, obtaining an accurate fundamental matrix becomes one of the most important issues in image processing.
In this paper, we present a mechanism that uses the concept of Particle Swarm Optimization (PSO) to find fundamental matrix. Our approach not only can improve the accuracy of the fundamental matrix but also can reduce computation costs.
After using Scale-Invariant Feature Transform (SIFT) to get a large number of corresponding points from the multi-view images, we choose a set of eight corresponding points, based on the image resolutions, grouping principles, together with random sampling, as our initial starting points for PSO. Least Median of Squares (LMedS) is used in estimating the initial fitness value as well as the minimal number of iterations in PSO. The fundamental matrix can then be computed using the PSO algorithm.
We use different objects to illustrate our mechanism and compare the results obtained using PSO and LMedS. The experimental results show that, with the same number of iterations, the fundamental matrix computed by the PSO method has a smaller average error than that computed by the LMedS method, while the PSO method takes only about one-eighth of the time required by the LMedS method.
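A generic PSO minimizer captures the optimization engine described above; the thesis applies it to a reprojection-error fitness over candidate fundamental matrices, while the sketch below (all hyperparameter values are illustrative defaults, not the thesis's settings) demonstrates it on a simple quadratic stand-in:

```python
import random

def pso_minimize(f, dim, n_particles=30, n_iters=100, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: velocity update with inertia w and
    cognitive/social weights c1, c2; returns the best position and value."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# demo on a convex quadratic standing in for the reprojection-error fitness
best, best_val = pso_minimize(lambda x: sum((xi - 1.0) ** 2 for xi in x), dim=3)
```

For fundamental-matrix estimation, `f` would score a candidate 8-point sample (or matrix parameterization) by its median or mean epipolar residual over the SIFT correspondences.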