981

數位化原住民農耕知識之策略─以尖石泰雅族部落為例 / The Strategy For Digitization of Indigenous Knowledge of Farming ─ A Case Study of the Atayal Communities in Jianshih Village

張孟瑄 Unknown Date (has links)
原住民農耕知識乃長期適應自然環境,並與自然達成平衡關係之農耕方式,因此藉由探討原住民農耕知識,得提供高山農業政策上的建議。惟偏向質性的原住民知識,需萃取、轉化為科學形式,方能利用。是以,本研究以地理資訊系統為基礎、利用土地適宜性分析與羅吉斯迴歸為方法,設計一套適用於數位化原住民農耕知識的策略。此策略為一個迭代的循環,包含蒐集、轉換、分析與檢視的程序。首先,以量化方式蒐集原住民農耕知識,次將知識轉換成空間資料的形式,再透過分析將知識轉化成有意義的資訊,並以視覺化方式展示分析成果,而分析成果可用以檢視知識蒐集的完整性、檢驗知識轉換後的正確性,進而以為基礎,針對興趣點再度蒐集知識,如此反覆操作上述程序。以土地適宜性分析為核心之策略,可用以探究原住民農耕區位選取知識;以羅吉斯迴歸為主軸之策略,則以個別農耕地為基礎,驗證農耕行為與邊坡穩定性之關聯。本研究以尖石泰雅族部落為研究區域。研究發現此數位化原住民農耕知識策略是可行的,得以有效地達到原住民農耕知識蒐集、分析及展示的目標。數位化後的原住民農耕知識具體而明確,可作為相關政策之參考。 / Indigenous knowledge of farming consists of empirical rules built up through a long-term, mutually beneficial interaction between people and their natural environment. Examining this knowledge can therefore yield useful suggestions for policy on slope-land cultivation. Indigenous knowledge, however, tends to be qualitative rather than quantitative, so it must be extracted and translated into a scientific format before it can be used. This study therefore builds on GIS, using land-use suitability analysis and logistic regression, to design a strategy for digitizing indigenous knowledge of farming. The strategy is an iterative cycle of extraction, translation, analysis, and review. First, indigenous farming knowledge is collected quantitatively; second, it is translated into spatial data layers; next, the fragmentary data are turned into meaningful information through analysis and visualized on maps; finally, the results are reviewed for completeness and reliability, and further knowledge is collected at points of interest, repeating the cycle. Land-use suitability analysis was used as the core of one strategy to explore indigenous knowledge of farming-site selection, while logistic regression was used, at the level of individual plots, to examine the correlation between farming practice and slope stability. The Atayal communities in Jianshih village served as the study area. The study finds that the digitization strategy is feasible and effectively meets the goals of collecting, analyzing, and visualizing indigenous farming knowledge. The digitized knowledge is concrete and specific, and can serve as a reference for related policy.
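The logistic-regression half of the strategy described above can be illustrated with a small, self-contained sketch. Everything below is invented for illustration (the thesis's actual covariates and estimates are not given here): farmed/unfarmed plots are simulated from slope and elevation, a logistic model is fitted by plain gradient ascent, and the fitted model then scores site suitability.

```python
import math, random

def fit_logistic(X, y, lr=0.3, iters=2000):
    """Multivariate logistic regression fitted by plain gradient ascent."""
    n, p = len(X), len(X[0])
    beta = [0.0] * (p + 1)
    for _ in range(iters):
        grad = [0.0] * (p + 1)
        for xi, yi in zip(X, y):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
            err = yi - 1.0 / (1.0 + math.exp(-z))
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

random.seed(7)
# Simulated plot survey: farmed sites favour gentle slopes and lower
# elevations (all coefficients here are invented for illustration).
X, y = [], []
for _ in range(400):
    slope = random.uniform(0, 40)            # degrees
    elev = random.uniform(500, 2000)         # metres
    z = 3.0 - 1.5 * (slope / 10) - 1.0 * (elev / 1000)
    y.append(1 if random.random() < 1 / (1 + math.exp(-z)) else 0)
    X.append([slope / 10, elev / 1000])      # rescaled for stable fitting

beta = fit_logistic(X, y)

def suitability(slope, elev):
    """Predicted probability that a site is chosen for farming."""
    z = beta[0] + beta[1] * (slope / 10) + beta[2] * (elev / 1000)
    return 1 / (1 + math.exp(-z))

print(suitability(5, 900), suitability(30, 1800))
```

In practice the covariate values would come from GIS raster layers for each plot rather than random draws.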
982

A simulation study of the effect of therapeutic horseback riding : a logistic regression approach

Pauw, Jeanette 11 1900 (has links)
Therapeutic horseback riding (THR) uses the horse as a therapeutic apparatus in physical and psychological therapy. This dissertation suggests a more appropriate technique for measuring the effect of THR. A survey of the statistical methods used to determine the effect of THR was undertaken. Although researchers observed clinically meaningful change in several of the studies, this was not supported by statistical tests. A logistic regression approach is proposed as a solution to many of the problems experienced by THR researchers. Since large THR-related data sets are not available, data were simulated. Logistic regression and t-tests were used to analyse the same simulated data sets, and the results were compared. The advantages of the logistic regression approach are discussed. This statistical technique can be applied in any field where the therapeutic value of an intervention has to be proven scientifically. / Mathematical Sciences / M. Sc. (Statistics)
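For a binary "clinically improved" outcome with a single treatment indicator, logistic regression reduces to estimating the log odds ratio of the 2x2 table, which makes the approach easy to sketch. The improvement probabilities below are invented, not from the dissertation:

```python
import math, random

random.seed(1)

# Simulated THR-style trial with a binary "clinically improved" outcome.
# Improvement probabilities (0.7 riding vs 0.4 control) are assumed values.
n = 200
group = [1] * n + [0] * n
improved = [1 if random.random() < (0.7 if g else 0.4) else 0 for g in group]

# With one binary covariate, the logistic-regression slope is exactly the
# log odds ratio of the 2x2 table, so it can be computed directly.
a = sum(1 for g, y in zip(group, improved) if g and y)          # riding, improved
b = sum(1 for g, y in zip(group, improved) if g and not y)      # riding, not improved
c = sum(1 for g, y in zip(group, improved) if not g and y)      # control, improved
d = sum(1 for g, y in zip(group, improved) if not g and not y)  # control, not improved

log_or = math.log((a * d) / (b * c))            # logistic slope estimate
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # its standard error
print(f"log odds ratio {log_or:.2f}, 95% CI half-width {1.96 * se:.2f}")
```

The odds-ratio scale is what makes the logistic approach attractive for binary clinical outcomes where a t-test on raw scores is ill-suited.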
983

綠色品質風險管控模型之研究 / Green Quality Risk Management Model

王昭珷, Wang,Chao Pin Unknown Date (has links)
本研究旨在利用風險管控的方式,來協助電子製造業建立一套可有效的維持產品的綠色品質並降低產品的綠色風險的綠色品質風險管控模型,使得企業不致因產品在出貨後,被檢測出違反RoHS指令而使企業被罰以巨額款項並損失商譽。 回顧1997年12月聯合國氣候變化框架公約(UNFCCC)參加國第三次會議在日本京都舉行,並簽定了[京都議定書]之後,各國陸續制定出其各自的環保法令,其中又以歐盟於2003年2月通過並於2006年7月1日起實施限制鉛,鎘,汞,六價鉻,多溴聯苯,多溴聯苯醚等六項有害物質的RoHS指令的影響範圍最大且最為直接的影響到我國的產業,從而引發起了本研究的動機。 本研究透過與訪談個案的合作,實際從分析個案的產品研發生產的作業中,由影響RoHS的角度從作業一直剖析到管控內容,進而找到會影響RoHS品質不良的16個風險因子,並透過建立的監控系統來進行風險因子的資料採樣,最後經由羅吉斯迴歸模型,建立出一套風險計算模型,以連接RoHS風險因子的監控系統而成為一套綠色品質風險管控模型。 / The objective of this research is to help electronic manufacturers establish a Green Quality Risk Management Model that can effectively maintain the green quality of products and reduce their green-quality risk, so that companies can avoid the heavy fines and goodwill impairment caused by RoHS violations detected after shipment. After the parties to the UNFCCC held their third meeting in Kyoto, Japan, and adopted the Kyoto Protocol in December 1997, countries created their own environmental regulations in succession. Among these, the RoHS directive, which restricts the use of lead, mercury, cadmium, hexavalent chromium (Cr6+), polybrominated biphenyls (PBB), and polybrominated diphenyl ethers (PBDE), was adopted by the European Union in February 2003 and took effect in July 2006; it has had the most pervasive and direct impact on Taiwanese industry, which motivated this research. Through cooperation with the interviewed case company, this research analyzes its product R&D and production operations from the RoHS perspective, from primary operations down to floor control, identifies sixteen risk factors affecting RoHS quality, and samples data on these factors through an established monitoring system. Finally, a risk computation model was built using logistic regression and connected to the RoHS risk-factor monitoring system to form a green quality risk management model.
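The final risk-computation step can be sketched as a logistic score over monitored risk factors. Only three of the sixteen factors are shown, and both the factor names and the coefficients are invented for illustration:

```python
import math

# Hypothetical coefficients for a few RoHS risk factors
# (values are illustrative, not estimates from the thesis).
coef = {"intercept": -4.0,
        "supplier_changes": 0.8,        # supplier changes this quarter
        "inspection_fail_rate": 3.0,    # incoming-inspection failure rate
        "process_deviation": 1.2}       # logged process deviations

def green_risk(x):
    """Probability that a lot violates RoHS, per the logistic risk model."""
    z = coef["intercept"] + sum(coef[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

low = green_risk({"supplier_changes": 0, "inspection_fail_rate": 0.01,
                  "process_deviation": 0})
high = green_risk({"supplier_changes": 3, "inspection_fail_rate": 0.20,
                   "process_deviation": 2})
print(low, high)
```

In the model described by the thesis, the factor values would be fed continuously from the monitoring system rather than entered by hand.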
984

自變數有誤差的邏輯式迴歸模型:估計、實驗設計及序貫分析 / Logistic regression models when covariates are measured with errors: Estimation, design and sequential method

簡至毅, Chien, Chih Yi Unknown Date (has links)
本文主要在探討自變數存在有測量誤差時,邏輯式迴歸模型的估計問題,並設計實驗使得測量誤差能滿足遞減假設,進一步應用序貫分析方法,在給定水準下,建立一個信賴範圍。 當自變數存在有測量誤差時,通常會得到有偏誤的估計量,進而在做決策時會得到與無測量誤差所做出的決策不同。在本文中提出了一個遞減的測量誤差,使得滿足這樣的假設,可以證明估計量的強收斂,並證明與無測量誤差所得到的估計量相同的近似分配。相較於先前的假設,特別是證明大樣本的性質,新增加的樣本會有更小的測量誤差是更加合理的假設。我們同時設計了一個實驗來滿足所提出遞減誤差的條件,並利用序貫設計得到一個更省時也節省成本的處理方法。 一般的case-control實驗,自變數也會出現測量誤差,我們也證明了斜率估計量的強收斂與近似分配的性質,並提出一個二階段抽樣方法,計算出所需的樣本數及建立信賴區間。 / In this thesis, we focus on the estimation of unknown parameters, experimental designs, and sequential methods in both prospective and retrospective logistic regression models when covariates are measured with error. Imprecise measurement of exposure happens very often in practice, for example in retrospective epidemiological studies, due to either the difficulty or the cost of measuring. It is known that imprecisely measured variables can result in biased coefficient estimates in a regression model and may therefore lead to incorrect inference; this is an important issue when the effects of the variables are of primary interest. For a prospective logistic regression model, we derive asymptotic results for the estimators of the regression parameters when covariates are mismeasured. If the measurement error satisfies certain assumptions, we show that the estimators are strongly consistent, asymptotically unbiased, and asymptotically normally distributed, with the same asymptotic distribution as in the error-free case. Contrary to the traditional assumption on measurement error, which is mainly used for proving large-sample properties, we assume that the measurement error decays gradually at a certain rate as new observations are added to the model. This assumption can be fulfilled when the usual replicate-observation method is used to dilute the magnitude of the measurement errors, and is therefore also more useful from a practical viewpoint. Moreover, independence of the measurement error and the covariate is not required in our theorems.
An experimental design with measurement error satisfying the required decay rate is introduced. In addition, this assumption allows us to apply sequential sampling, which is popular in clinical trials, to such a measurement-error logistic regression model; the sequential method clearly cannot be applied under the assumption, common in the literature, that measurement errors stay the same as the sample size increases. A sequential estimation procedure based on MLEs and such moment conditions is therefore proposed and shown to be asymptotically consistent and efficient. Case-control studies are broadly used in clinical trials and epidemiological studies, and it can be shown that the odds ratio of exposure variables can be consistently estimated under logistic models (see Prentice and Pyke (1979)). A two-stage case-control sampling scheme is employed to construct a confidence region for the slope coefficient beta, with the necessary sample size calculated for a given predetermined level. Furthermore, we consider measurement error in the covariates of a case-control retrospective logistic regression model, and derive asymptotic results for the maximum likelihood estimators (MLEs) of the regression coefficients under moment conditions on the measurement errors: the MLEs are strongly consistent, asymptotically unbiased, and asymptotically normally distributed. Simulation results for the proposed two-stage procedures are reported, together with numerical studies and real data that verify the theoretical results under different measurement-error scenarios.
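The attenuation problem and the decaying-error assumption can be illustrated numerically. The simulation below is a sketch, not the thesis's design: a one-covariate logistic model is fitted (i) without measurement error, (ii) with classical constant-variance error, and (iii) with error whose standard deviation shrinks as observations accumulate.

```python
import math, random

def fit_slope(x, y, iters=1500, lr=0.5):
    """One-covariate logistic regression fitted by plain gradient ascent."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            mu = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - mu
            g1 += (yi - mu) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

random.seed(3)
n, true_beta = 800, 1.0
x = [random.gauss(0, 1) for _ in range(n)]
y = [1 if random.random() < 1 / (1 + math.exp(-true_beta * xi)) else 0 for xi in x]

_, b_clean = fit_slope(x, y)                 # no measurement error

# Classical error with constant variance attenuates the slope toward zero ...
w = [xi + random.gauss(0, 1) for xi in x]
_, b_noisy = fit_slope(w, y)

# ... whereas error whose variance decays as observations accumulate
# (emulating the thesis's assumption with sd = 1/sqrt(i+1))
# leaves the estimate close to the error-free one.
v = [xi + random.gauss(0, 1 / math.sqrt(i + 1)) for i, xi in enumerate(x)]
_, b_decay = fit_slope(v, y)

print(b_clean, b_noisy, b_decay)
```

The decay rate used here is only one convenient choice; the thesis's moment conditions admit other rates.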
985

New statistical methods to assess the effect of time-dependent exposures in case-control studies

Cao, Zhirong 12 1900 (has links)
Contexte. Les études cas-témoins sont très fréquemment utilisées par les épidémiologistes pour évaluer l’impact de certaines expositions sur une maladie particulière. Ces expositions peuvent être représentées par plusieurs variables dépendant du temps, et de nouvelles méthodes sont nécessaires pour estimer de manière précise leurs effets. En effet, la régression logistique qui est la méthode conventionnelle pour analyser les données cas-témoins ne tient pas directement compte des changements de valeurs des covariables au cours du temps. Par opposition, les méthodes d’analyse des données de survie telles que le modèle de Cox à risques instantanés proportionnels peuvent directement incorporer des covariables dépendant du temps représentant les histoires individuelles d’exposition. Cependant, cela nécessite de manipuler les ensembles de sujets à risque avec précaution à cause du sur-échantillonnage des cas, en comparaison avec les témoins, dans les études cas-témoins. Comme montré dans une étude de simulation précédente, la définition optimale des ensembles de sujets à risque pour l’analyse des données cas-témoins reste encore à être élucidée, et à être étudiée dans le cas des variables dépendant du temps. Objectif: L’objectif général est de proposer et d’étudier de nouvelles versions du modèle de Cox pour estimer l’impact d’expositions variant dans le temps dans les études cas-témoins, et de les appliquer à des données réelles cas-témoins sur le cancer du poumon et le tabac. Méthodes. J’ai identifié de nouvelles définitions d’ensemble de sujets à risque, potentiellement optimales (le Weighted Cox model and le Simple weighted Cox model), dans lesquelles différentes pondérations ont été affectées aux cas et aux témoins, afin de refléter les proportions de cas et de non cas dans la population source. Les propriétés des estimateurs des effets d’exposition ont été étudiées par simulation. 
Différents aspects d’exposition ont été générés (intensité, durée, valeur cumulée d’exposition). Les données cas-témoins générées ont été ensuite analysées avec différentes versions du modèle de Cox, incluant les définitions anciennes et nouvelles des ensembles de sujets à risque, ainsi qu’avec la régression logistique conventionnelle, à des fins de comparaison. Les différents modèles de régression ont ensuite été appliqués sur des données réelles cas-témoins sur le cancer du poumon. Les estimations des effets de différentes variables de tabac, obtenues avec les différentes méthodes, ont été comparées entre elles, et comparées aux résultats des simulations. Résultats. Les résultats des simulations montrent que les estimations des nouveaux modèles de Cox pondérés proposés, surtout celles du Weighted Cox model, sont bien moins biaisées que les estimations des modèles de Cox existants qui incluent ou excluent simplement les futurs cas de chaque ensemble de sujets à risque. De plus, les estimations du Weighted Cox model étaient légèrement, mais systématiquement, moins biaisées que celles de la régression logistique. L’application aux données réelles montre de plus grandes différences entre les estimations de la régression logistique et des modèles de Cox pondérés, pour quelques variables de tabac dépendant du temps. Conclusions. Les résultats suggèrent que le nouveau modèle de Cox pondéré proposé pourrait être une alternative intéressante au modèle de régression logistique, pour estimer les effets d’expositions dépendant du temps dans les études cas-témoins. / Background: Case-control studies are very often used by epidemiologists to assess the impact of specific exposure(s) on a particular disease. These exposures may be represented by several time-dependent covariates, and new methods are needed to accurately estimate their effects.
Indeed, conventional logistic regression, which is the standard method to analyze case-control data, does not directly account for changes in covariate values over time. By contrast, survival analytic methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individuals' entire exposure histories. However, it requires some careful manipulation of risk sets because of the over-sampling of cases, compared to controls, in case-control studies. As shown in a preliminary simulation study, the optimal definition of risk sets for the analysis of case-control data remains unclear and has to be investigated in the case of time-dependent variables. Objective: The overall objective is to propose and to investigate new versions of the Cox model for assessing the impact of time-dependent exposures in case-control studies, and to apply them to a real case-control dataset on lung cancer and smoking. Methods: I identified some potential new risk-set definitions (the weighted Cox model and the simple weighted Cox model), in which different weights were given to cases and controls, in order to reflect the proportions of cases and non-cases in the source population. The properties of the estimates of the exposure effects that result from these new risk-set definitions were investigated through a simulation study. Various aspects of exposure were generated (intensity, duration, cumulative exposure value). The simulated case-control data were then analysed using different versions of Cox models corresponding to existing and new definitions of risk sets, as well as with standard logistic regression, for comparison purposes. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of different smoking variables, obtained with the different methods, were compared to each other, as well as to the simulation results.
Results: The simulation results show that the estimates from the newly proposed weighted Cox models, especially those from the weighted Cox model, are much less biased than the estimates from the existing Cox models that simply include or exclude future cases. In addition, the weighted Cox model was slightly, but systematically, less biased than logistic regression. The real-life application shows some greater discrepancies between the estimates of the proposed Cox models and logistic regression, for some smoking time-dependent covariates. Conclusions: The results suggest that the newly proposed weighted Cox models could be an interesting alternative to logistic regression for estimating the effects of time-dependent exposures in case-control studies.
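The weighting idea behind these models can be sketched with inverse sampling fractions: each sampled case or control gets a weight equal to the number of population subjects it represents, so that weighted risk sets reproduce the population's case/non-case mix. The population and sample sizes below are invented for illustration.

```python
# Hypothetical source population and case-control sample (illustrative sizes).
N_pop, cases_pop = 100_000, 2_000      # 2% of the population are cases
n_cases, n_controls = 500, 500         # 1:1 case-control sample

# Inverse-sampling-fraction weights: each sampled subject stands in for
# all the population subjects it represents.
w_case = cases_pop / n_cases                      # 4.0
w_control = (N_pop - cases_pop) / n_controls      # 196.0

# The weighted sample reproduces the population case proportion,
# which is what the weighted risk sets are meant to achieve.
weighted_case_share = (n_cases * w_case) / (n_cases * w_case
                                            + n_controls * w_control)
print(weighted_case_share)  # 0.02, the population incidence
```

These weights would then enter the Cox partial likelihood; the exact weighting schemes of the two proposed models differ in detail from this simple sketch.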
986

Étude des déterminants démographiques de l’hypotrophie fœtale au Québec

Fortin, Émilie 04 1900 (has links)
Cette recherche vise à décrire l’association entre certaines variables démographiques telles que l’âge de la mère, le sexe, le rang de naissance et le statut socio-économique – représenté par l’indice de Pampalon – et l’hypotrophie fœtale au Québec. L’échantillon est constitué de 127 216 naissances simples et non prématurées ayant eu lieu au Québec entre le 1er juillet 2000 et le 30 juin 2002. Des régressions logistiques portant sur le risque d’avoir souffert d’un retard de croissance intra-utérine ont été effectuées pour l’ensemble du Québec ainsi que pour la région socio-sanitaire (RSS) de Montréal. Les résultats révèlent que les enfants de premier rang et les enfants dont la mère était âgée de moins de 25 ans ou de 35 ans et plus lors de l’accouchement ont un risque plus élevé de souffrir d’hypotrophie fœtale et ce dans l’ensemble du Québec et dans la RSS de Montréal. De plus, les résultats démontrent que le risque augmente plus la mère est défavorisée. Puisque l’indice de Pampalon est un proxy écologique calculé pour chaque aire de diffusion, les intervenants en santé publique peuvent désormais cibler géographiquement les femmes les plus à risque et adapter leurs programmes de prévention en conséquence. Ainsi, le nombre de cas d’hypotrophie fœtale, voire même la mortalité infantile, pourraient être réduits. / This study describes the association between demographic variables such as the mother’s age, the child’s gender and birth order, and socio-economic status – assessed here by the Pampalon Index – and intrauterine growth restriction (IUGR) in the province of Quebec. The analyses are based on a sample of 127,216 singleton, non-premature births that occurred in the province of Quebec between July 1st, 2000 and June 30th, 2002. Logistic regressions on the risk of having suffered from IUGR were produced for the entire province of Quebec and for the health region of Montreal.
In the province of Quebec and in the health region of Montreal, the results reveal that the risk of IUGR is higher for first-born infants, and for infants whose mother was under 25 years of age or aged 35 years and older. Moreover, the risk of IUGR increases with poverty. Since the Pampalon Index is calculated for each dissemination area, public health interventions can now target the most vulnerable women and reduce the number of IUGR cases or even infant mortality.
987

潛在移轉分析法與中位數法在長期追蹤資料分組的差異比較 / On classification of longitudinal data ─ comparison between Latent Transition Analysis and the method using Median as a cutpoint

李坤瑋, Lee, Kun Wei Unknown Date (has links)
當資料屬於類別型的長期追蹤資料(Longitudinal categorical data)時,除了可以透過廣義估計方程式(General estimate equation, GEE)來求解模型參數估計值外,潛在移轉分析(Latent transition analysis, LTA)法也是一種可行的資料分析方法。若資料的期數不多,也可以選擇將資料適度分群後使用羅吉斯迴歸分析(Logistic regression)法。當探討的反應變數為二元(Binary)型態,且觀察對象於每一期提供多個測量變數值的情況之下,廣義估計方程式與羅吉斯迴歸分析法的使用,文獻上常見先將所有的測量變數值加總後,以「中位數」作為分類的切割點。不同於以上兩種方法,潛在移轉分析法則是直接使用原始資料來取得觀察對象的潛在狀態相關訊息,因此與前二者的作法不同,可能導致後續的各項分析結果有所差異存在。 為了能夠了解造成中位數分類法與移轉分析法差異的可能因素,我們架構在潛在移轉分析法的模型下,以不同的參數設定來進行電腦模擬,比較各參數條件下的兩分類方法差異。結果發現各潛在狀態下的測量變數反應機率形式、第一期潛在狀態的組成比例等皆會對兩分類方法是否具有相同分類有所影響。另外,透過分析「青少年媒體使用與健康生活調查」的實際資料得知,潛在移轉分析會將大部分的觀察對象歸屬於「網路成癮」,而中位數分類法則是將大部分的觀察對象歸屬於「無網路成癮」。此外,可以注意到「沮喪」、「線上情色每星期平均使用天數」、及「父母相處狀況」這幾個控制變數與各分組結果的關聯性,於上述三種資料分析方法中有所不同。 / Several methods can be used to analyze longitudinal categorical data; among them, Latent Transition Analysis (LTA) and generalized linear models estimated by Generalized Estimating Equations (GEE) are probably the most popular. In addition, if the number of periods is small, logistic regression can also be applied after a suitable grouping of the data. When each subject provides more than one manifest response variable per period, LTA can classify the subjects directly in terms of the original manifest variables and proceed with the necessary analyses. The GEE method and logistic regression, by contrast, lack this flexibility and require the manifest variables to be transformed into a single categorical response first; one common way to form a binary response is to sum all manifest variables and take the median as a cut-point. In this study, we explore through simulations the differences between the classification obtained directly from LTA and that obtained using the median as a cut-point. An empirical study is also provided to illustrate the classification differences, and the differences in the subsequent analyses, using LTA, the GEE method, and the logistic regression approach.
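The median-split rule that LTA is compared against is simple to state in code. The item counts and scores below are invented: sum each subject's manifest items and cut at the sample median of the totals.

```python
import random, statistics

random.seed(5)
# Invented data: 8 manifest internet-use items per subject, each scored 0-4.
subjects = [[random.randint(0, 4) for _ in range(8)] for _ in range(200)]

# Median-split rule used before GEE / logistic regression: sum the items,
# then classify subjects strictly above the median of the totals.
totals = [sum(items) for items in subjects]
cut = statistics.median(totals)
flagged = [1 if t > cut else 0 for t in totals]

print(sum(flagged), "of", len(flagged), "subjects above the median cut-point")
```

LTA, by contrast, would model the eight items jointly and assign each subject a latent state, which is why the two classifications can diverge.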
988

台灣縣市長選舉預測模型之研究:一個基礎模型的建立及其應用 / Election Forecasting: The Construction and Applications of a Logistic Model of County Magistrate Elections in Taiwan

范凌嘉, Fan, Ling-Jia Unknown Date (has links)
本研究以1997年台灣縣市長選舉為標的,彙整政治學有關投票行為的相關理論,包含社會學研究途徑、社會心理學研究途徑與理性抉擇途徑的研究成果,整合該年度之總體與個體資料而設計出「特質調整模型」。特質調整模型是透過兩階段的操作模式進行預測,首先以基礎模型反應全國一致的因素,使之適用於台灣所有縣市,這些因素包括政黨認同、候選人取向與社會人口學變項。但由於各縣市狀況仍有不同,因此再進一步用延伸模型來考量各縣市的特殊選舉因素。延伸模型在基礎模型的規模上,以描述性統計來觀察選區情形後,再加入各地特質於模型之中,使其預測結果能反映各地特殊狀況。在延伸模型中,考量的因素包括議題取向、環境系絡因素、策略性投票、在位者表現、派系取向與賄選問題等。 在特質調整模型中,本研究嘗試以對數迴歸模型對各地區進行模擬計算,並用機率論的方式呈現每一位受訪者的投票可能,以反應政治學理論中的不確定性。研究結果發現基礎模型確能相當地反應出台灣各縣市的選舉狀況,描繪各地的一般狀況,而延伸模型又能更精確地貼近各地的選舉結果,反映各地的特殊選情。在資料完整的狀況下,最後各縣市的預測誤差均不超過抽樣誤差。 / This research focuses on the 1997 Taiwanese county magistrate elections. Drawing on voting-behavior theories and combining aggregate and individual-level data, it designs a forecasting model named the "Joined Idiosyncrasies Adjusted Model" (JIA Model). The model operates in two stages. First, a basic model captures nationwide factors that apply to every county. Second, extended models adjust the output of the basic model to reflect the particular situation of each county. The model uses logistic regression to compute candidates' support and presents the final forecast as probabilities. The model makes county magistrate elections more predictable, and where the data are complete, the forecast errors are smaller than the sampling errors.
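The two-stage idea can be sketched as follows; all coefficients, variables, and respondents are invented for illustration. A nationwide logistic model gives each respondent a vote probability, a county-specific term adjusts it, and the forecast vote share is the mean of the probabilities, preserving the uncertainty rather than making hard 0/1 calls.

```python
import math

# Stage 1 (basic model): nationwide logistic coefficients -- illustrative values.
base = {"intercept": -0.2, "party_id": 1.5, "cand_eval": 0.8}
# Stage 2 (extended model): county-specific adjustment, e.g. a local-faction term.
county_adj = {"faction": 0.6}

def vote_prob(r):
    """Probability that respondent r votes for the focal candidate."""
    z = (base["intercept"] + base["party_id"] * r["party_id"]
         + base["cand_eval"] * r["cand_eval"]
         + county_adj["faction"] * r.get("faction", 0))
    return 1 / (1 + math.exp(-z))

# Each respondent contributes a probability, so the forecast vote share
# is the mean predicted probability over the county sample.
respondents = [
    {"party_id": 1, "cand_eval": 0.5, "faction": 1},
    {"party_id": 0, "cand_eval": -0.3},
    {"party_id": 1, "cand_eval": 0.1},
    {"party_id": 0, "cand_eval": 0.2, "faction": 1},
]
forecast = sum(vote_prob(r) for r in respondents) / len(respondents)
print(f"forecast vote share: {forecast:.1%}")
```

In the thesis the county-specific terms also cover issues, context, strategic voting, incumbent performance, and vote buying, not just factions.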
989

遺漏值存在時羅吉斯迴歸模式分析之研究 / Logistic Regression Analysis with Missing Value

劉昌明, Liu, Chang Ming Unknown Date (has links)
990

房屋貸款保證保險違約風險與保險費率關聯性之研究 / The study on relationship between the default risk of the mortgage insurance and premium rate

李展豪 Unknown Date (has links)
房屋貸款保證保險制度可移轉部分違約風險予保險公司。然而,保險公司與金融機構在共同承擔風險之際,因房貸保證保險制度之施行,於提高貸款成數後,產生違約風險提高之矛盾現象;而估計保險之預期損失時,以目前尚無此制度下之違約數據估計損失額,將有錯估之可能。 本研究以二元邏吉斯特迴歸模型(Binary Logistic Regression Model)與存活分析(Survival Analysis)估計違約行為,並比較各模型間資料適合度及預測能力,進而單獨分析變數-貸款成數對違約率之邊際機率影響。以探討房貸保證保險施行後,因其對借款者信用增強而提高之貸款成數,所增加之違約風險。並評估金融機構因提高貸款成數後可能之違約風險變動,據以推估違約率數據,並根據房貸保證保險費率結構模型,計算可能之預期損失額,估算變動的保險費率。 實證結果發現,貸款成數與違約風險呈現顯著正相關,貸款成數增加,邊際影響呈遞增情形,違約率隨之遞增,而違約預期損失額亦同時上升。保險公司因預期損失額增加,為維持保費收入得以支付預期損失,其保險費率將明顯提升。故實施房屋貸款保證保險,因借款者信用增強而提高之貸款成數,將增加違約機率並對保險費率產生直接變動。 / Mortgage insurance transfers part of the default risk to insurance companies. However, when the implementation of mortgage insurance raises loan-to-value (LTV) ratios, the default risk rises with them, and estimating the expected loss from default data collected before such a system existed is likely to misjudge the loss. This study constructs a binary logistic regression model and a survival analysis model to estimate mortgage default behavior, compares the models' goodness of fit and predictive power, and then analyzes the marginal effect of the LTV ratio on the default rate, in order to assess the additional default risk created when mortgage insurance enhances borrowers' credit and raises LTV ratios. Using the estimated default rates, we apply a mortgage insurance rate structural model to calculate the expected loss and the resulting change in premium rates. The empirical results show that the LTV ratio has a significant positive effect on borrower default: as the LTV ratio increases, the marginal effect grows, the default rate rises, and the expected default loss rises with it. Because the expected loss increases, insurance companies must raise premiums to cover it, so the premium rate rises markedly. Hence, under mortgage insurance, the higher LTV ratios made possible by borrowers' enhanced credit will increase the probability of default and directly change premium rates.
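The link from LTV ratio to default probability, marginal effect, and premium rate can be sketched with an assumed one-covariate logistic model (the coefficients and the 30% loss-given-default below are invented, not the thesis's estimates). For a logistic model, the marginal effect of LTV on the default probability is beta*p*(1-p), which increases with LTV while p < 0.5, matching the progressively increasing marginal effect reported in the abstract.

```python
import math

# Illustrative logistic default model with only the LTV ratio as covariate;
# coefficient values are assumptions for the sketch.
b0, b_ltv = -6.0, 5.5

def p_default(ltv):
    """Probability of default at a given loan-to-value ratio."""
    return 1 / (1 + math.exp(-(b0 + b_ltv * ltv)))

def marginal_effect(ltv):
    """dP(default)/dLTV for a logistic model: beta * p * (1 - p)."""
    p = p_default(ltv)
    return b_ltv * p * (1 - p)

# Pure-premium rate: expected loss = PD x LGD (loss given default, assumed 30%).
LGD = 0.30
for ltv in (0.7, 0.8, 0.9):
    print(f"LTV {ltv:.0%}: PD={p_default(ltv):.3f}, "
          f"premium rate={LGD * p_default(ltv):.4f}")
```

Raising the assumed LTV from 70% to 90% roughly doubles the default probability here, and the premium rate scales with it, which is the qualitative pattern the abstract describes.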
