251
時間數列分析中控制設計之研究 / A Study of Control Design in Time Series Analysis 李朝元, Li, Zhao-Yuan Unknown Date (has links)
本文旨在探討控制設計,而誤差項採用自我迴歸移動平均隨機模式,損失函數分為平方誤差與一般函數兩種。全文一冊,共分五章,約三萬餘字。內容如下:
第一章 導論:說明控制設計之目的,理想控制設計的條件,及本文的結構。
第二章 自我迴歸移動平均隨機模式:說明模式的理論基礎,性質,應用及模式的建立。
第三章 動態系統隨機模式:說明模式的性質,建立,及應用。
第四章 控制設計:分為前饋控制、回饋控制,及一般損失函數的控制。
第五章 結論:說明本文所採用方法的利弊。 / This thesis investigates control design in which the error term follows an autoregressive moving average (ARMA) stochastic model, with two kinds of loss functions: squared error and a general loss function. The thesis, in one volume of about thirty thousand words, consists of five chapters. Chapter 1, Introduction, explains the purpose of control design, the conditions for an ideal control design, and the structure of the thesis. Chapter 2, ARMA stochastic models, covers the theoretical foundations, properties, applications, and building of the model. Chapter 3, stochastic models of dynamic systems, covers the model's properties, building, and applications. Chapter 4, control design, covers feedforward control, feedback control, and control under a general loss function. Chapter 5, Conclusion, discusses the advantages and disadvantages of the methods adopted.
252
運用知識模組化與再用發展平台經濟性創新理論之研究-以軟體元件與矽智財為例 / Economies of Platform Innovation Theory through Knowledge Modularization and Reuse: The Cases of Software Components and Silicon Intellectual Properties (SIPs) 吳明機, Wu, Ming Ji Unknown Date (has links)
本論文主要在探索作為產業組織核心之「公司」,將其知識以公開或特定之標準或程序加以模組化(modularization)後,進行公司內部與外部以產品開發為主之知識分工(division of knowledge)與再用(reuse)活動,因而衍生的組織與管理問題,以及公司間知識移轉與學習問題。並希望藉由產業實證,發展以「知識模組化與再用」為基礎之技術創新理論。
研究過程採取紮根理論,針對了軟體產業四家公司與半導體設計業四家公司,分別就其採取軟體元件與矽智財之模組化創新現象進行深入訪談研究,進行編碼過程,將觀念類別抽象化為「績效與競爭力」、「研發知識模組技術力」、「知識模組再用力」、「知識模組平台演進力」、「組織政策與文化」、「產業基礎模組主導者之引導力」、「市場異質性」、「知識模組交易/交換成熟度」及「產業中介組織推動力」等九項。
根據實證發現,知識模組創新公司企業常規為(1)採取知識模組再用平台為核心之產品/服務創新模式;(2)以平台為考量之組織構型設計;(3)建立四項公司內部重要能力—包括研發知識模組技術力、知識模組再用力、知識模組再用平台演進力、及組織政策與文化。至於影響產業知識模組交換/交易之因素,則為(1)開放之平台知識模組來源;(2)營造利於知識模組再用之供需脈絡;(3)妥善運用產業網絡。
有關理論之建構,本研究選擇「平台經濟性」作為核心類別,並以「知識模組動態組合價值性」作為演化準則,經由主軸編碼與選擇編碼等程序,發展出九項命題,藉以建構「平台經濟性創新(economies of platforms innovation)」理論。根據該理論,本研究指出知識模組創新公司,可依據能力審視、能力構築、能力持續等三階段,建構其動態核心能力。
本研究最後針對產業與政府等實務界,提出綜合性建議如下:
一、對產業界之建議
應注意與學習辨識所處產業是否正進入後產業化階段之分合(dis-integration)過程所產生之知識分工趨勢,並參考本研究所提出之「平台經濟性創新理論」,研擬以「平台經濟性」為基礎之知識模組化創新策略。同時,應積極運用知識模組供需脈絡與產業網絡之力量。
二、對政府產業政策之建議
針對協助個別企業提升內部能力方面,可加強輔導企業發展以知識模組再用平台為基礎之研發計畫,並且建立標竿案例與最佳實務,以提供企業導入「平台經濟性創新策略」之參考。同時,針對有主導潛力之知識模組創新企業,協助其深化發展產業主流平台。
此外,與國際相較,台灣知識型企業之規模仍屬偏小,政府輔導機制可加強推動國際級產業基礎模組主導者與國內業者結盟、輔導建立夥伴廠商體系(e.g.旗艦計畫)、輔導建立知識模組交易/交換機制、協助釐清知識模組之智慧財產權爭議、以及積極參與國際標準制訂,並快速擴散相關資訊與技術供產業參考等。 / Knowledge modularization is a popular phenomenon in knowledge-based industries. This study explores issues related to companies, which use open or specific standards/procedures to encapsulate their knowledge into modules, and then use such modules to pursue internal and/or external division of knowledge and knowledge reuse activities, for the purpose of developing products. The said issues include organization and management issues, as well as knowledge transfer and learning. Through empirical field investigations, this study aims to develop a new technological innovation theory based on knowledge modularization and reuse.
This study adopted Grounded Theory, together with case studies, as the main methodology to guide the research process. Eight companies were selected as case studies: four companies from the software industry and four design houses from the semiconductor industry. We interviewed these companies to discuss in depth the modularization innovation concerning software components in the software industry and silicon intellectual properties (SIPs) in the semiconductor industry. The collected data are differentiated into nine conceptual categories: (1) performance and competitiveness, (2) technology capabilities for developing knowledge modules, (3) capabilities for reusing knowledge modules, (4) evolution of knowledge module platforms, (5) organization policy and culture, (6) leadership in terms of basic industry modules, (7) market heterogeneity, (8) maturity of knowledge module transactions/exchanges, and (9) promotion by intermediary industry organizations.
According to the study's findings, knowledge module innovation companies usually adopt the following routines: (1) use knowledge module reuse platforms as the core of product/service innovation models; (2) design organization structures around platforms; and (3) establish four important internal capabilities, namely (i) technology capabilities for developing knowledge modules, (ii) capabilities for reusing knowledge modules, (iii) capabilities for evolving knowledge module reuse platforms, and (iv) organization policy and culture. As for factors affecting industry knowledge module exchanges/transactions, these include (1) open sources of platform knowledge modules, (2) a supply-demand context conducive to knowledge module reuse, and (3) good use of industry networks.
Regarding theory construction, this study selects the "economies of platforms" as the core category, with the "dynamic combination value of knowledge modules" as the criterion of evolution. Through axial and selective coding, nine propositions are developed to support and construct the theory of "economies of platform innovation". According to this theory, the study finds that knowledge module innovation companies can build their dynamic core capabilities through three phases: capability positioning, capability building, and capability sustaining.
The study also proposes several suggestions for the industry and government:
1. Suggestions for the industry:
Companies should closely watch and learn to recognize whether the industry in which they operate is entering a dis-integration process, characteristic of the post-industrialization stage, that leads to a division of knowledge. If so, they can refer to the theory of "economies of platform innovation" to formulate knowledge module innovation strategies based on the economies of platforms. Meanwhile, they should actively utilize the power of knowledge module supply-demand contexts and industrial networks.
2. Suggestions for the government's industry policies:
For the purpose of helping individual firms raise their capabilities, the government could improve R&D assistance programs focused on the establishment of knowledge module reuse platforms. The government can also establish benchmarks or best practice cases as references for companies who would like to adopt innovation strategies for economies of platforms. Furthermore, knowledge module innovation companies with the potential to become industry leaders can be further assisted in developing mainstream industry platforms.
In addition, compared with international companies, knowledge-based companies in Taiwan are still relatively small. The government can therefore strengthen its efforts to promote alliances between international industry-platform leaders and Taiwanese companies, help Taiwanese companies establish strategic partner networks (e.g., flagship programs), assist companies in establishing transaction/exchange mechanisms for knowledge modules, clarify issues related to the intellectual property of knowledge modules, participate in international standards bodies, and rapidly disseminate up-to-date market and technology information to the industry.
253
位移與混合型離散過程對波動度模型之解析與實證 / Displaced and Mixture Diffusions for Analytically-Tractable Smile Models 林豪勵, Lin, Hao Li Unknown Date (has links)
Brigo與Mercurio提出了三種新的資產價格過程,分別是位移CEV過程、位移對數常態過程與混合對數常態過程。在這三種過程中,資產價格的波動度不再是一個固定的常數,而是時間與資產價格的明確函數。而由這三種過程所推導出來的歐式選擇權評價公式,將會導致隱含波動度曲線呈現傾斜曲線或是微笑曲線,且提供了參數讓我們能夠配適市場的波動度結構。本文利用台指買權來實證Brigo與Mercurio所提出的三種歐式選擇權評價公式,我們發現校準結果以混合對數常態過程優於位移CEV過程,而位移CEV過程則稍優於位移對數常態過程。因此,在實務校準時,我們建議以混合對數常態過程為台指買權的評價模型,以達到較佳的校準結果。 / Brigo and Mercurio proposed three new types of asset-price dynamics: the shifted-CEV process, the shifted-lognormal process, and the mixture-of-lognormals process. In these three processes, the volatility of the asset price is no longer a constant but a deterministic function of time and asset price. The European option pricing formulas derived from these three processes produce skews and smiles in the implied volatility curve, and they provide several parameters for fitting the market volatility structure. This thesis applies TAIEX call options to verify the three pricing formulas proposed by Brigo and Mercurio. We find that the calibration result of the mixture-of-lognormals process is better than that of the shifted-CEV process, which in turn is slightly better than that of the shifted-lognormal process. Therefore, for practical calibration, we recommend the mixture-of-lognormals process as the pricing model for TAIEX call options to obtain a better calibration.
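The mixture-of-lognormals formula is the most tractable of the three: since each mixture component evolves as a lognormal with the risk-neutral drift, the call price is simply the probability-weighted average of Black-Scholes prices across the component volatilities. A minimal Python sketch (the weights and volatilities below are illustrative, not the thesis's calibrated values):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s, k, t, r, sigma):
    # Black-Scholes price of a European call
    d1 = (log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

def mixture_lognormal_call(s, k, t, r, weights, sigmas):
    # under a mixture of lognormal densities, the option value is the
    # probability-weighted average of Black-Scholes prices
    return sum(w * bs_call(s, k, t, r, v) for w, v in zip(weights, sigmas))
```

Down-weighting a high-volatility component fattens the tails of the risk-neutral density, which is what generates the smile, while each component remains a plain Black-Scholes price.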
254
運用使用者輸入欄位屬性偵測防禦資料隱碼攻擊 / Preventing SQL Injection Attacks Using the Field Attributes of User Input 賴淑美, Lai, Shu Mei Unknown Date (has links)
在網路的應用蓬勃發展與上網使用人口不斷遞增的情況之下,透過網路提供客戶服務及從事商業行為已經是趨勢與熱潮,而伴隨而來的風險也逐步顯現。在一個無國界的網路世界,威脅來自四面八方,隨著科技進步,攻擊手法也隨之加速且廣泛。網頁攻擊防範作法的演進似乎也只能一直追隨著攻擊手法而不斷改進。但最根本的方法應為回歸原始的程式設計,網頁欄位輸入資料的檢核。確實做好欄位內容檢核並遵守網頁安全設計原則,嚴謹的資料庫存取授權才能安心杜絕不斷變化的攻擊。但因既有系統對於輸入欄位內容,並無確切根據應輸入的欄位長度及屬性或是特殊表示式進行檢核,以致造成類似Injection Flaws[1]及部分XSS(Cross Site Scripting)[2]攻擊的形成。
面對不斷變化的網站攻擊,大都以系統原始碼重覆修改、透過滲透測試服務檢視漏洞及購買偵測防禦設備防堵威脅。因原始碼重覆修改工作繁重,滲透測試也不能經常施行,購買偵測防禦設備也相當昂貴。
本研究回歸網頁資料輸入檢核,根據輸入資料的長度及屬性或是特殊的表示式進行檢核,若能堅守此項原則應可抵禦大部分的攻擊。但因既有系統程式龐大,若要重新檢視所有輸入欄位屬性及進行修改恐為曠日費時。本文中研究以側錄分析、資料庫SCHEMA的結合及方便的欄位屬性定義等功能,自動化的處理流程,快速產生輸入欄位的檢核依據。再以網站動態欄位檢核的方式,於網站接收使用者需求,且應用程式尚未處理前攔截網頁輸入資料,根據事先明確定義的網站欄位屬性及長度進行資料檢核,如此既有系統即無須修改,能在最低的成本下達到有效防禦的目的。 / With the rapid development of network applications and the growing population of internet users, providing customer services and doing business through the internet has become a prevalent trend.
However, risk comes with this trend. In a borderless online world, threats come from all directions, and as information technology advances, attack techniques evolve quickly and spread widely. Defensive measures seem able only to chase and adapt to these attack techniques. The most fundamental approach, however, is to return to the original program design: validating the data entered into web form fields. Thoroughly checking field contents, adhering to secure web design principles, and strictly controlling database access authorization are what reliably block ever-changing attacks. Since most existing systems do not validate input fields against the expected length, data type, and format, such weaknesses give rise to attacks like Injection Flaws [1] and some XSS (Cross-Site Scripting) [2] attacks.
To cope with constantly changing website attacks, most organizations repeatedly modify system source code, inspect vulnerabilities through penetration testing services, and purchase intrusion prevention system (IPS) equipment to block threats. However, repeatedly modifying source code is labor-intensive, penetration tests cannot be performed frequently, and IPS equipment is expensive.
The fundamental method of this research is to validate the input data of each field based on its length, data type, and format. The premise is that enforcing this original design principle should prevent most website attacks. Unfortunately, legacy systems are large and numerous, and reviewing and modifying every input field would be time-consuming. This research therefore combines network traffic capture and analysis, the database schema, and convenient field-attribute definitions into an automated process that rapidly generates validation rules for input fields. A dynamic field-validation mechanism then intercepts web input after the site receives a user request but before the application begins to process it. Because input data is checked against the predefined field types and lengths, existing systems need not be modified, and effective defense is achieved at minimal cost.
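The field-attribute validation described above can be sketched as an allow-list check: each field carries a predefined maximum length and format pattern (derivable from the database schema), and any request value violating them is rejected before the application logic runs. A minimal Python sketch with hypothetical field rules (the thesis's actual mechanism intercepts requests at the web tier; this only illustrates the checking step):

```python
import re

# hypothetical field-attribute table derived from the database schema:
# each field carries a maximum length and an allow-list format pattern
FIELD_RULES = {
    "username": {"max_len": 20, "pattern": r"[A-Za-z0-9_]+"},
    "age":      {"max_len": 3,  "pattern": r"\d+"},
    "email":    {"max_len": 64, "pattern": r"[^@\s]+@[^@\s]+\.[^@\s]+"},
}

def validate_field(name, value):
    """Reject any input that violates the predefined length/format rules."""
    rule = FIELD_RULES.get(name)
    if rule is None:
        return False  # unknown fields are rejected outright
    if len(value) > rule["max_len"]:
        return False
    return re.fullmatch(rule["pattern"], value) is not None
```

A payload such as `' OR '1'='1` fails the `username` pattern because quotes and spaces are outside the allow-list, so the injection never reaches the database layer.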
255
貨幣政策操作目標之選擇與法則: 政策透明度及央行行為對小型開放經濟體之影響 / Monetary Policy Rules and Operation Targets: The Effects of Central Bank Policy Transparency and Central Bank Behavior 蔡岳昆, Tsai, Yueh Kun Unknown Date (has links)
中央銀行政策透明度影響總體經濟的議題在近日漸受重視。以美國為例,2008年房貸嚴重違約,高順位債權受到波及,使多數金融業產生營運危機,讓聯邦準備銀行 (Fed) 政策執行受到關注。晚近貨幣當局的政策透明度漸受重視。貨幣政策應如何選定才能使總體經濟達到較高的社會福利?Cukierman在2002年指出中央銀行的透明度低易造成較高的物價膨脹。本研究以動態一般性均衡模型 (dynamic stochastic general equilibrium),建構新凱因斯小型開放總體模型。模型內含一定程度的價格僵固,並且擁有前瞻預期 (forward looking) 及後顧預期 (backward looking) 兩種型態的廠商存在其中。再採用貝氏方法估計台灣在該模型所應採用的參數後,並嘗試對體系內多個部門投入衝擊,然後檢視央行的政策透明度對總體經濟的影響,同時驗證是否支持Cukierman的結論。本研究印證Cukierman的結論,發現央行在操作貨幣政策面臨兩難時,不應採取透明度低的政策法則,而應優先針對物價的不穩定做出因應對策。 / Central bank policy transparency has recently drawn increasing attention, and most countries' central banks have accepted the Bank for International Settlements' suggestion to adopt transparent monetary policy. Cukierman (2002) concluded that low central bank transparency leads to higher inflation. This thesis utilizes a dynamic stochastic general equilibrium model with New Keynesian features, following Gali and Monacelli (2005), to analyze the effects of transparent monetary policy and to distinguish the macroeconomic effects of transparent versus opaque monetary policy. The conclusions support that higher monetary policy transparency reduces social welfare loss and lowers the volatility of inflation and the output gap.
256
一階衝擊動態方程的週期邊界值問題 / PBVPs of First-Order Impulsive Dynamic Equations on Time Scales 梁益昌, Liang, Yi Chang Unknown Date (has links)
在這篇論文中,我們討論的是一階非線性衝擊動態方程的週期邊界值問題。利用Schaefer定理及Banach固定點定理,我們得到一些解的存在性結果。 / In this thesis, we are concerned with nonlinear first-order periodic boundary value problems of impulsive dynamic equations on time scales. By using Schaefer's theorem and Banach's fixed point theorem, we acquire some new existence results.
257
風險基礎資本,情境分析及動態模擬破產預測模型之比較 / Regulatory Solvency Prediction: Risk-Based Capital, Scenario Analysis and Stochastic Simulation 宋瑞琳, Sung, Jui-Lin Unknown Date (has links)
保險公司清償能力一直是保險監理的重心,在所有現行的制度中風險基礎資本是最重要的,但此項制度仍有其缺點,因此其他動態分析模型被許多學者所提出,如涉險值及情境分析。雖然這些動態分析模型被學者所偏好,但監理機關仍須對這些模型的精確程度加以了解,這也是本篇論文所要研究的目的。
基於此,本篇論文以模擬方式及經濟模型加以分析風險基礎資本、情境分析及涉險值等方法的破產預測的相對精確性。其中風險基礎資本完全採用現有NAIC的年報資料,情境分析及涉險值則採用我們所建立的模型,基於此也可以確認現有監理制度是否有缺失。
我們的結果發現風險基礎資本的預測能力很低,動態模型-情境分析及涉險值皆優於風險基礎資本,且在不同動態模型中涉險值的預測能力較好。因此可知被學者所偏好的動態分析模型應是未來保險監理的方向,希望藉由本篇提供監理機關一個參考的依據。 / Solvency prediction of insurers has been the focus of insurance regulation. Among solvency regulation systems, risk-based capital (RBC) is the most important, but RBC still has some drawbacks. Thus, dynamic financial analysis tools, namely scenario analysis and Value at Risk, have been developed as regulatory tools. Although scholars prefer dynamic financial analysis, regulators still need to verify its accuracy. That is the purpose of our paper.
Therefore, we use simulation results and an econometric model to analyze the relative effectiveness of RBC, scenario analysis, and Value at Risk (VaR). The RBC figures are taken from NAIC annual statements, while the scenario analysis and VaR measures come from our simulation model.
Our results show that RBC has very low explanatory power, that both dynamic financial analysis methods outperform RBC, and that VaR outperforms scenario analysis. Thus, we conclude that VaR is the direction for property-casualty insurance regulators to pursue.
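The comparison above rests on simulating insurer surplus and reading off solvency measures from the simulated distribution. A stylized one-period sketch in Python (the normal surplus model and its parameters are illustrative, not the paper's calibrated simulation):

```python
import random

def simulate_surplus(n_paths, capital, mu, sigma, seed=0):
    # one-period end-of-year surplus: initial capital plus a normally
    # distributed operating result (an illustrative model, not the NAIC's)
    rng = random.Random(seed)
    return [capital + rng.gauss(mu, sigma) for _ in range(n_paths)]

def insolvency_probability(surpluses):
    # fraction of simulated paths ending with negative surplus
    return sum(1 for s in surpluses if s < 0) / len(surpluses)

def value_at_risk(surpluses, alpha=0.99):
    # capital shortfall at the alpha quantile of the surplus distribution
    ordered = sorted(surpluses)
    return -ordered[int((1 - alpha) * len(ordered))]
```

A scenario analysis would evaluate the same surplus equation at a handful of prescribed adverse parameter sets instead of reading a quantile off the full simulated distribution, which is why VaR can exploit more of the distributional information.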
258
穩定性與多重性-以二部門體系動態調整方式為例 / Stability and Indeterminacy: The Dynamic Adjustment of a Two-Sector Economy 連科雄, Lian, Ke-Shaw Unknown Date (has links)
本篇論文試圖藉由比較一個產業生產技術為固定規模報酬的經濟體系,如何因外部因素的影響而改變其動態調整方式。在此考慮的外部因素有資本移動的開放與否、生產要素的外部性、及政府對要素報酬的課稅。考慮各種因素後,所得出的結論為在生產函數為Cobb-Douglas型式且產業生產技術為固定規模報酬的情況下:
1.多重均衡路徑在資本帳封閉時期唯有效用函數為特例時才能使其出現,但在資本自由移動時期對於所有的效用函數型態皆會成立。
2.其他條件保持不變之下,單獨存在生產要素外部性或是對要素所得課稅皆可使體系存在多重均衡路徑。
3.其他條件保持不變之下,若生產要素外部性與要素所得稅皆同時存在時,可使體系存在唯一的穩定馬鞍路徑。 / This thesis compares how an economy whose industries exhibit constant-returns-to-scale production technology changes its dynamic adjustment path in response to external factors. The external factors considered are the openness of capital mobility, externalities of production factors, and government taxation of factor income. Given Cobb-Douglas production functions and constant returns to scale, the conclusions are: (1) under a closed capital account, multiple equilibrium paths arise only for special cases of the utility function, but under free capital mobility they arise for all forms of the utility function; (2) other things being equal, either factor externalities alone or factor income taxation alone can generate multiple equilibrium paths; (3) other things being equal, when factor externalities and factor income taxes are present simultaneously, the system can possess a unique stable saddle path.
259
Java網頁程式安全弱點驗證之測試案例產生工具 / Test Case Generation for Verifying Security Vulnerabilities in Java Web Applications 黃于育, Huang, Yu Yu Unknown Date (has links)
近年來隨著網路的發達,網頁應用程式也跟著快速且普遍化地發展。網頁應用程式快速盛行卻忽略程式設計時的安全性考量,進而成為網路駭客的攻擊目標。因此,網頁應用程式的安全議題日益重要。目前已有許多網頁應用程式安全弱點的相關研究,以程式分析的技術找出弱點,主要分成靜態分析與動態分析兩大類。但無論是使用靜態或是動態的分析方法,仍有其不完美的地方。其中靜態分析結果完備但會產生過多弱點誤報;動態分析結果準確率高但會因為測試案例的不完備而造成弱點的漏報。因此,本論文研究結合了動靜態分析,利用靜態分析方法發展一套測試案例產生工具;再結合動態分析方法隨著測試案例的執行來追蹤測試資料並作弱點的驗證,以達到沒有弱點漏報的產生以及改善弱點誤報的目標。
本論文研究的重點集中在以靜態分析技術產生涵蓋目標程式中所有可執行路徑的測試案例。我們應用測試案例產生常見的符號化執行技巧,利用程式的路徑限制蒐集與解決來達成測試案例產生。實作上我們利用跨程序性路徑分析找出目標程式中所有潛在弱點的路徑,再以反向路徑限制蒐集將限制資訊完整蒐集;最後交給限制分析器解限制並產生測試案例。接著利用剖面導向程式語言AspectJ的程式插碼技術實現動態的汙染資料流分析,配合產生的測試案例執行程式觸發動態的汙染資料流分析並產生可信賴的弱點分析結果。 / Due to the rapid development of the internet in recent years, web applications have become popular and ubiquitous. However, developers may neglect security issues while designing a program, so web applications become targets of attackers. Hence, the issue of web application vulnerabilities has become crucial. There have been many research results on web application security vulnerabilities, and many of them exploit program analysis techniques to detect vulnerabilities. These approaches can basically be categorized into static analysis and dynamic analysis, but both still have problems to be improved. Specifically, static analysis offers high coverage of vulnerabilities but produces too many false positives, while dynamic analysis produces highly confident results but may yield false negatives without complete test cases.
In this thesis, we integrate static and dynamic analysis to eliminate false negatives and reduce false positives. We develop a test case generation tool based on static analysis, and a program execution tool that dynamically tracks the execution of the target program with the generated test data to detect its vulnerabilities. Our test case generation tool first employs both intra- and inter-procedural analysis to cover all vulnerable paths in a program, and then applies the symbolic execution technique to collect all path constraints. A constraint solver solves these collected constraints to finally generate the test cases. The execution tool utilizes the instrumentation mechanism provided by the aspect-oriented programming language AspectJ to implement a dynamic taint analysis that tracks the flow of tainted data derived from the generated test cases. As a result, all vulnerable program paths can be detected by our tools.
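The dynamic taint analysis follows a simple discipline: values derived from user input are marked tainted, taint propagates through string operations such as query concatenation, and a security-sensitive sink refuses tainted data. The thesis implements this with AspectJ instrumentation in Java; the core idea can be sketched in a few lines of Python (a toy model, not the actual tool):

```python
class Tainted(str):
    """A string marked as coming from an untrusted source.

    Concatenation involving a Tainted value yields a Tainted result,
    so taint propagates through typical query-building code.
    """
    def __add__(self, other):
        return Tainted(str.__add__(self, other))

    def __radd__(self, other):
        # called when a plain str is on the left, because Python gives
        # the subclass's reflected method priority
        return Tainted(str.__add__(str(other), self))

def execute_query(sql):
    # sink check: refuse to run SQL assembled from tainted input
    if isinstance(sql, Tainted):
        raise ValueError("tainted data reached SQL sink")
    return "executed"
```

In the real tool the marking, propagation, and sink checks are woven into the target program by AspectJ advice rather than a string subclass, but the flow being tracked is the same.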
260
附最低保證變額年金保險最適資產配置及準備金之研究 / A study of optimal asset allocation and reserve for variable annuities insurance with guaranteed minimum benefit陳尚韋 Unknown Date (has links)
附最低保證投資型保險商品的特色在於無論投資者的投資績效好壞,保險金額皆享有一最低投資保證,過去關於此類商品的研究皆假設標的資產為單一資產,或依固定比例之投資組合,並沒有考慮到投資人自行配置投資組合的效果,但大部分市售商品中,投資人可以自行配置投資標,此情況之下,保險公司如何衡量適當的保證成本即為一相當重要之課題。
本研究假設投資人風險偏好服從冪次效用函數,並假設與保單所連結之投資標的有兩種資產,一為具有高風險高報酬的資產,另一為具有低風險低報酬之資產,在每個保單年度之初,投資人可以選擇配置在兩種資產之比例,我們運用黃迪揚(2009)所提出的動態規劃數值解之方法,計算出在考慮投資人自行配置資產之下,保證成本將會比固定比例之投資高出12個百分點。
此外,為了瞭解在不同資產報酬率的模型之下,保證成本是否會有不一樣的結論,除了對數常態模型之外,我們假設高風險資產與低風險資產服從ARIMA-GARCH(Autoregressive Integrated Moving Average-Generalized Autoregressive Conditional Heteroscedastic)模型,並得到較高的保證成本。 / The main characteristic of variable annuities (VA) with minimum benefits is that the benefit is guaranteed. Previous studies assume a specific underlying asset return process when computing the guaranteed cost of a VA, but they do not consider the portfolio choice opportunities of policyholders. However, it is common for policyholders to rebalance their portfolios in many types of VA products. It is therefore important for insurance companies to apply an appropriate method to measure the guaranteed cost.
In this research, we assume that there are two assets in the policyholder's portfolio: one with high risk and high return, the other with low risk and low return. The policyholder's utility function is assumed to follow a power utility. We consider the effect of asset allocation on the guaranteed cost for a VA with guaranteed minimum withdrawal benefits, finding that the guaranteed cost increases by 12 percentage points compared with a fixed-proportion portfolio.
The model effect of the asset return process is also examined by considering two different processes: the lognormal model and the ARIMA-GARCH (Autoregressive Integrated Moving Average-Generalized Autoregressive Conditional Heteroscedastic) model. The dynamic programming problem is solved numerically using the approach proposed by Huang (2009). We conclude that the guaranteed cost under the ARIMA-GARCH model is greater than that under the lognormal model.
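The fatter tails behind the higher ARIMA-GARCH guarantee cost come from the GARCH variance recursion, in which each period's conditional variance feeds on the previous shock. A minimal GARCH(1,1) return simulator in Python (a simplified stand-in for the thesis's fitted ARIMA-GARCH model; the parameters are illustrative):

```python
import random

def simulate_garch_returns(n, omega, alpha, beta, mu=0.0, seed=1):
    # GARCH(1,1): sigma2_t = omega + alpha * e_{t-1}**2 + beta * sigma2_{t-1}
    rng = random.Random(seed)
    sigma2 = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    returns = []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0) * sigma2 ** 0.5
        returns.append(mu + e)
        sigma2 = omega + alpha * e * e + beta * sigma2
    return returns
```

Feeding such paths, instead of i.i.d. lognormal returns, into the same guarantee-cost valuation is what produces the heavier-tailed account values and hence the larger guaranteed cost reported above.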