41 |
Regressão logística com erro de medida: comparação de métodos de estimação / Logistic regression model with measurement error: a comparison of estimation methods
Rodrigues, Agatha Sacramento, 27 June 2013 (has links)
Neste trabalho estudamos o modelo de regressão logística com erro de medida nas covariáveis. Abordamos as metodologias de estimação de máxima pseudoverossimilhança pelo algoritmo EM-Monte Carlo, calibração da regressão, SIMEX e naïve (ingênuo), método este que ignora o erro de medida. Comparamos os métodos em relação à estimação, através do viés e da raiz do erro quadrático médio, e em relação à predição de novas observações, através das medidas de desempenho sensibilidade, especificidade, verdadeiro preditivo positivo, verdadeiro preditivo negativo, acurácia e estatística de Kolmogorov-Smirnov. Os estudos de simulação evidenciam o melhor desempenho do método de máxima pseudoverossimilhança na estimação. Para as medidas de desempenho na predição não há diferença entre os métodos de estimação. Por fim, utilizamos nossos resultados em dois conjuntos de dados reais de diferentes áreas: área médica, cujo objetivo está na estimação da razão de chances, e área financeira, cujo intuito é a predição de novas observações. / We study the logistic regression model when explanatory variables are measured with error. Four estimation methods are compared: maximum pseudo-likelihood obtained through a Monte Carlo expectation-maximization type algorithm, regression calibration, SIMEX, and the naïve method, which ignores the measurement error. These methods are compared through simulation. From the estimation point of view, we compare the methods by evaluating their biases and root mean square errors. The predictive quality of the methods is evaluated based on sensitivity, specificity, positive and negative predictive values, accuracy and the Kolmogorov-Smirnov statistic. The simulation studies show that maximum pseudo-likelihood performs best when the objective is to estimate the parameters, while there is no difference among the methods for predictive purposes. The results are illustrated in two real data sets from different application areas: a medical study, where the goal is estimation of an odds ratio, and a financial application, where the goal is the prediction of new observations.
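As an editorial illustration of the comparison described above, the following sketch contrasts the naïve and SIMEX slope estimates on simulated data; the parameter values, error variance and extrapolation grid are assumptions for illustration, not the settings used in the thesis.

```python
# Sketch: naive vs. SIMEX estimation of a logistic slope when the covariate
# is observed with additive error (illustrative values, not the thesis setup).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, beta0, beta1, sigma_u = 2000, -0.5, 1.0, 0.7

x = rng.normal(size=n)                       # true covariate
y = rng.binomial(1, 1 / (1 + np.exp(-(beta0 + beta1 * x))))
w = x + rng.normal(scale=sigma_u, size=n)    # covariate measured with error

def logit_slope(cov):
    return sm.Logit(y, sm.add_constant(cov)).fit(disp=0).params[1]

naive = logit_slope(w)                       # attenuated towards zero

# SIMEX: add extra error of variance lambda*sigma_u^2, then extrapolate to lambda = -1
lambdas, B = np.array([0.0, 0.5, 1.0, 1.5, 2.0]), 50
means = [np.mean([logit_slope(w + rng.normal(scale=np.sqrt(l) * sigma_u, size=n))
                  for _ in range(B)]) for l in lambdas]
simex = np.polyval(np.polyfit(lambdas, means, 2), -1.0)

print(f"true {beta1:.2f}  naive {naive:.2f}  SIMEX {simex:.2f}")
```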
|
43 |
線性羅吉斯迴歸模型的最佳D型逐次設計 / The D-optimal sequential design for linear logistic regression model
藍旭傑, Lan, Shiuh Jay, Unknown Date (has links)
假設二元反應曲線為簡單線性羅吉斯迴歸模型(Simple Linear Logistic Regression Model)，在樣本數為偶數的前題下，所謂的最佳D型設計(D-Optimal Design)是直接將半數的樣本點配置在第17.6個百分位數，而另一半則配置在第82.4個百分位數。很遺憾的是，這兩個位置在參數未知的情況下是無法決定的，因此逐次實驗設計法(Sequential Experimental Designs)在應用上就有其必要性。在大樣本的情況下，本文所探討的逐次實驗設計法在理論上具有良好的漸近最佳D型性質(Asymptotic D-Optimality)。尤其重要的是，這些特性並不會因為起始階段的配置不盡理想而消失，影響的只是收斂的快慢而已。但是在實際應用上，這些大樣本的理想性質卻不是我們關注的焦點。實驗步驟收斂速度的快慢，在小樣本的考慮下有決定性的重要性。基於這樣的考量，本文將提出三種起始階段設計的方法並透過模擬比較它們之間的優劣性。 / The D-optimal design is well known to be a two-point design for the simple linear logistic regression model: one half of the design points are allocated at the 17.6th percentile and the other half at the 82.4th percentile. Since the locations of the two design points depend on the unknown parameters, the actual locations cannot be obtained in advance. To resolve this dilemma, a sequential design is necessary in practice. The sequential designs discussed in this context have good asymptotic properties that do not disappear even when the initial stage is not chosen well, provided the sample size is large; however, the speed of convergence of the sequential designs is governed by the initial stage, which is decisive under small sample sizes. Based on this, three initial-stage designs are proposed in this study and compared through simulations implemented in C++.
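For readers unfamiliar with the two-point design mentioned above, the short sketch below computes the two D-optimal support points once (alpha, beta) are treated as known; the specific parameter values are illustrative assumptions.

```python
# Sketch: the two D-optimal design points for a simple logistic model
# p(x) = 1/(1 + exp(-(alpha + beta*x))) sit where p equals 0.176 and 0.824,
# i.e. where the logit equals +/- 1.5434 (alpha, beta below are assumed).
import numpy as np

alpha, beta = -2.0, 0.8          # parameters treated as known, for illustration
c = np.log(0.824 / 0.176)        # about 1.5434 on the logit scale

x_low = (-c - alpha) / beta      # 17.6th-percentile point
x_high = (c - alpha) / beta      # 82.4th-percentile point
print(x_low, x_high)             # allocate half of the runs at each point
```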
|
44 |
Revision Moment for the Retail Decision-Making System
Juszczuk, Agnieszka Beata; Tkacheva, Evgeniya, January 2010 (has links)
In this work we address the problems of loan-origination decision-making systems. In accordance with the basic principles of the loan origination process, we consider the main rules for estimating a client's parameters, a change-point problem for given data, and a disorder-moment detection problem for real-time observations. In the first part of the work the main principles of parameter estimation are given, and the change-point problem is considered for a given sample in discrete and continuous time using the maximum likelihood method. In the second part of the work the disorder-moment detection problem for real-time observations is treated as a disorder problem for a non-homogeneous Poisson process. The corresponding optimal stopping problem is reduced to a free-boundary problem with a complete analytical solution for the case when the intensity of defaults increases. Thereafter a scheme for real-time detection of the disorder moment is given.
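A small sketch of a maximum-likelihood change-point estimate in the spirit of the first part of the work described above; the Poisson rates, sample size and true change point are illustrative assumptions, and the discrete-time setup is a simplification of the disorder problem treated in the thesis.

```python
# Sketch: MLE of a single change point in the rate of a sequence of counts,
# a discrete-time stand-in for the disorder problem (all settings assumed).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n, tau_true = 200, 120
x = np.concatenate([rng.poisson(1.0, tau_true), rng.poisson(3.0, n - tau_true)])

def loglik(tau):
    # profile out the two rates by their MLEs on each side of the candidate split
    lam1, lam2 = x[:tau].mean(), x[tau:].mean()
    return poisson.logpmf(x[:tau], lam1).sum() + poisson.logpmf(x[tau:], lam2).sum()

taus = np.arange(10, n - 10)                 # avoid degenerate splits
tau_hat = taus[np.argmax([loglik(t) for t in taus])]
print("estimated change point:", tau_hat)
```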
|
45 |
Modeling framework for socioeconomic analysis of managed lanes
Khoeini, Sara, 08 June 2015 (has links)
Managed lanes are a form of congestion pricing that uses occupancy and toll payment requirements to utilize capacity more efficiently. How socio-spatial characteristics impact users' travel behavior toward managed lanes is the main research question of this study. This research is a case study of the conversion of a High Occupancy Vehicle (HOV) lane to a High Occupancy Toll (HOT) lane implemented on I-85 in Atlanta in 2011. To minimize the cost and maximize the size of the collected data, an innovative and cost-effective modeling framework for socioeconomic analysis of managed lanes has been developed. Instead of surveys, this research is based on the observation of one and a half million license plates, matched to household locations, collected over a two-year study period. Purchased marketing data, which include detailed household socioeconomic characteristics, supplemented the household corridor usage information derived from license plate observations. Generalized linear models have been used to link users' travel behavior to socioeconomic attributes, and GIS raster analysis methods have been utilized to visualize and quantify the impact of the HOV-to-HOT conversion on the corridor commutershed. At the local level, this study provides a comprehensive socio-spatial analysis of the Atlanta I-85 HOV-to-HOT conversion. At a more general level, it enhances managed-lane travel demand models with respect to user characteristics and introduces a comprehensive modeling framework for the socioeconomic analysis of managed lanes. The methods developed through this research will inform future Traffic and Revenue Studies and help to better predict the socio-spatial characteristics of the target market.
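A minimal sketch of the kind of generalized linear model described above, linking simulated corridor usage to household attributes; the variable names (income, hh_size, trips) and the Poisson specification are hypothetical and not taken from the study.

```python
# Sketch: a GLM of monthly managed-lane trips on household attributes
# (variable names and data are hypothetical placeholders).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "income": rng.normal(70, 20, n),       # household income in $1000s (assumed)
    "hh_size": rng.integers(1, 6, n),      # household size (assumed)
})
mu = np.exp(-1.0 + 0.02 * df["income"] - 0.1 * df["hh_size"])
df["trips"] = rng.poisson(mu)              # simulated monthly HOT-lane trips

fit = smf.glm("trips ~ income + hh_size", data=df,
              family=sm.families.Poisson()).fit()
print(fit.params)
```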
|
46 |
房屋抵押貸款之資訊不對稱問題 -以台北市和新北市為例 / The asymmetric information problems in mortgage lending: the evidence from Taipei City and New Taipei City
林耀宗, Lin, Yao Tsung, Unknown Date (has links)
2007年美國爆發次級房貸違約潮造成了其經濟、房市和股市的不景氣,也波及到持有美國房貸證券化商品的各國,使其承受重大的損失,因此房屋抵押貸款違約的影響因素和金融資產證券化機制對貸款違約風險的影響又再度成為不動產與金融市場上之重要議題。而以往針對美國次貸危機的研究多指出道德風險是造成此次危機的原因之一,但是較缺乏實證研究的支持。
有鑑於此,本研究以我國的台北市和新北市的房屋抵押貸款市場作為研究對象,探討逆選擇和道德風險這兩個資訊不對稱的問題對貸款違約率的影響。研究結果顯示「貸款成數高、貸款利率高、搭配信貸和設定二胎的貸款比較容易違約」,證實逆選擇和道德風險問題確實存在於房屋抵押貸款市場,而且會增加貸款違約的機率。為了降低違約機率,從降低資訊不對稱的角度來看,本研究建議:一、建立全國房貸資料庫;二、將信貸的金額納入房貸的貸款成數中考慮,以降低款人的道德風險。
再者，本研究認為造成次貸危機的根本原因是不當政策導致的保證機制浮濫，以及高風險的房貸證券化商品的氾濫。為了避免我國發生類似次貸危機的事件，從減少資訊不對稱的角度切入，本研究建議我國的金融資產證券化機制應該：一、將道德風險內部化，消除創始機構自利的動機以減少道德風險；二、使用外部信用增強的方式，以確實發揮分散證券風險的作用。 / The 2007 subprime mortgage crisis severely struck the stability of worldwide financial markets. Some studies indicate that moral hazard problems were among the main factors causing the crisis, but few support the existence of information asymmetry problems in the mortgage market with empirical evidence.
First, using mortgage samples from Taipei City and New Taipei City, this study examines whether the two information asymmetry problems, adverse selection and moral hazard, exist in the mortgage market, and conducts an empirical analysis of their impact on mortgage default. The results show that mortgage default is significantly influenced by the loan-to-value (LTV) ratio, contract interest rates, the existence of second liens and credit loans, and the borrower's job, indicating that adverse selection and moral hazard do exist in the mortgage market. Second, based on the empirical results, this study proposes suggestions for mortgage lending and financial asset securitization that reduce adverse selection and moral hazard problems and strengthen the regulatory environment and market stability. It is expected that these results can be applied to avoid the occurrence of a similar crisis in Taiwan.
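A brief sketch of the kind of default model tested above, with odds ratios read off as exponentiated coefficients; the covariates and simulated data are placeholders, not the Taipei City / New Taipei City sample.

```python
# Sketch: logistic default model with LTV, contract rate, second lien and
# bundled credit loan as covariates; odds ratios = exp(coefficients).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 3000
df = pd.DataFrame({
    "ltv": rng.uniform(0.4, 0.95, n),            # loan-to-value ratio (simulated)
    "rate": rng.uniform(0.015, 0.05, n),         # contract interest rate
    "second_lien": rng.integers(0, 2, n),        # 1 if a second lien exists
    "credit_loan": rng.integers(0, 2, n),        # 1 if bundled with a credit loan
})
logit = (-6 + 4 * df["ltv"] + 40 * df["rate"]
         + 0.5 * df["second_lien"] + 0.4 * df["credit_loan"])
df["default"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = smf.logit("default ~ ltv + rate + second_lien + credit_loan", data=df).fit(disp=0)
print(np.exp(fit.params))   # odds ratios: values > 1 mean higher default odds
```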
|
47 |
Proposta metodológica para identificar fatores contribuintes de acidentes viários por meio de geotecnologias / Methodological proposal to identify contributing factors of road accidents through geotechnologies
Batistão, Mariana Dias Chaves [UNESP], 02 February 2018 (has links)
Essa pesquisa apresenta um estudo acerca dos fatores contribuintes de acidentes rodoviários com o objetivo de fornecer evidências para analisar o comportamento dos fatores contribuintes envolvidos nesses acidentes, mais especificamente nos trechos críticos. Deseja-se identificar a relação dos fatores com o grau de severidade de um acidente (danos materiais, sem vítimas fatais e com vítimas fatais) e o impacto de cada classe de fator na ocorrência de um acidente. A intenção é embasar uma análise geoespacial levando em consideração técnicas estatísticas e cartográficas e contribuir para melhorar a qualidade das informações sobre segurança viária no país e seu atual cenário crítico. A estrutura metodológica da pesquisa consiste em três etapas principais: (I) Identificação e determinação de segmentos de trechos críticos, (II) Mapeamento dos fatores contribuintes “via” para o acidente e (III) Investigação e estudo dos fatores contribuintes. Quatro trechos de rodovias do oeste do estado de São Paulo foram escolhidos como área de estudo. Na etapa I propôs-se um método de interpolação espacial de escolha de segmentos de trechos críticos levando a premissa existência da dependência geográfica dos acidentes em consideração. No total, foram identificados oito segmentos de trechos críticos na área de estudo. A etapa II concentrou-se no mapeamento dos fatores contribuintes desses segmentos de trechos críticos. Essa etapa trouxe o caráter tecnológico à pesquisa por fazer uso da integração de geotecnologias e a contribuição das Ciências Cartográficas para os estudos de segurança viária, por gerar informação a partir do mapeamento da localização dos fatores contribuintes. Das quatro classes de fatores (humano, ambiente, veículo e via) as características da via foram escolhidas para serem mapeadas, tendo-se deparado com a ausência de qualquer dado dessa classe de fatores tanto no banco de dados dos acidentes como no boletim de ocorrências. A relação com as outras três classes de fatores foi tratada na etapa III da pesquisa, cujos resultados proporcionaram montar o ranking dos seis fatores contribuintes da via mais frequentes nos segmentos de trechos críticos. Adicionalmente, foram construídos três modelos de regressão logística ordinal para investigar o impacto de cada uma das quatro classes de fatores no grau de severidade do acidente (três graus de severidade). Para isso, o grau foi tratado como variável dependente dos modelos. Quatro variáveis independentes (fatores contribuintes) foram consideradas significativas e escolhidas para compor os modelos: consumo de drogas (da classe de fator contribuinte humano), estado dos pneus (da classe de fator veículo), vegetação (da classe de fator via) e sinalização (da classe de fator via). Por fim, os modelos puderam ser analisados a partir da razão de chances (odds ratio) para complementar as informações e sintetizar os resultados como contribuições da pesquisa. / This research presents a study of the contributing factors of road accidents in order to provide evidence for analysing the behaviour of the factors involved in these accidents, more specifically in critical sections. The intention is to identify the relationship between those factors and the severity degree of an accident (property damage only, no fatalities, and fatalities) and the impact of each factor class on the occurrence of an accident.
The aim is to ground the analysis in geospatial methods, taking statistical and cartographic techniques into account, and to contribute to improving the quality of road safety information in a country whose current situation is critical. The methodological structure of this thesis consists of three main steps: (I) identification and determination of critical section segments, (II) mapping of the "road" contributing factors for each accident, and (III) investigation and study of the contributing factors. Four sections of highways in the west of São Paulo state were chosen as the study area. In Step I, a spatial interpolation method was proposed to choose critical section segments, premised on the geographical dependence of the accidents considered. In total, eight critical section segments were identified in the study area. Step II focused on mapping the contributing factors of these segments. This step brought a technological character to the research by integrating geotechnologies and applying the Cartographic Sciences to road safety, generating information on the location of the contributing factors from mapping. Of the four factor classes (human, environment, vehicle and road), the road characteristics were chosen to be mapped, since no data from this class was found in either the accident database or the occurrence report. The relationship with the other three factor classes was the subject of Step III, whose results provided a ranking of the six most frequent road contributing factors in the critical section segments. In addition, three ordinal logistic regression models were constructed to investigate the impact of each of the four factor classes on accident severity (three severity degrees), with the severity degree treated as the dependent variable of the models. Four significant independent variables (contributing factors) were chosen to compose the models: drug consumption (from the human factor class), tire condition (vehicle factor class), vegetation (road factor class) and signaling (road factor class). Lastly, the models were analysed through odds ratios to complement the information and synthesize the results as contributions of the research.
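A short sketch of a proportional-odds (ordinal logistic) model of accident severity in the spirit of the models described above; the binary predictors and simulated data are illustrative stand-ins for the thesis variables, and odds ratios are obtained by exponentiating the coefficients.

```python
# Sketch: ordinal logistic regression of accident severity on four binary
# contributing-factor indicators (all data and cut-offs are assumed).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(11)
n = 1500
X = pd.DataFrame({
    "drug_use": rng.integers(0, 2, n),       # human factor (simulated)
    "worn_tires": rng.integers(0, 2, n),     # vehicle factor
    "vegetation": rng.integers(0, 2, n),     # road factor
    "poor_signaling": rng.integers(0, 2, n), # road factor
})
latent = X @ np.array([1.2, 0.8, 0.5, 0.7]) + rng.logistic(size=n)
severity = pd.cut(latent, bins=[-np.inf, 1.0, 2.5, np.inf],
                  labels=["damage_only", "injury", "fatal"], ordered=True)

res = OrderedModel(severity, X, distr="logit").fit(method="bfgs", disp=0)
print(np.exp(res.params[:4]))   # odds ratios for the four contributing factors
```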
|
48 |
自變數有誤差的邏輯式迴歸模型：估計、實驗設計及序貫分析 / Logistic regression models when covariates are measured with errors: Estimation, design and sequential method
簡至毅, Chien, Chih Yi, Unknown Date (has links)
本文主要在探討自變數存在有測量誤差時,邏輯式迴歸模型的估計問題,並設計實驗使得測量誤差能滿足遞減假設,進一步應用序貫分析方法,在給定水準下,建立一個信賴範圍。
當自變數存在有測量誤差時,通常會得到有偏誤的估計量,進而在做決策時會得到與無測量誤差所做出的決策不同。在本文中提出了一個遞減的測量誤差,使得滿足這樣的假設,可以證明估計量的強收斂,並證明與無測量誤差所得到的估計量相同的近似分配。相較於先前的假設,特別是證明大樣本的性質,新增加的樣本會有更小的測量誤差是更加合理的假設。我們同時設計了一個實驗來滿足所提出遞減誤差的條件,並利用序貫設計得到一個更省時也節省成本的處理方法。
一般的case-control實驗，自變數也會出現測量誤差，我們也證明了斜率估計量的強收斂與近似分配的性質，並提出一個二階段抽樣方法，計算出所需的樣本數及建立信賴區間。 / In this thesis, we focus on the estimation of unknown parameters, experimental designs and sequential methods in both prospective and retrospective logistic regression models when covariates are measured with error. Imprecise measurement of an exposure happens very often in practice, for example in retrospective epidemiological studies, and may be due either to the difficulty or to the cost of measuring. It is known that imprecisely measured variables can result in biased coefficient estimates in a regression model and may therefore lead to incorrect inference. Thus, this is an important issue when the effects of the variables are of primary interest.
When considering a prospective logistic regression model, we derive asymptotic results for the estimators of the regression parameters when there are mismeasured covariates. If the measurement error satisfies certain assumptions, we show that the estimators are strongly consistent, asymptotically unbiased and asymptotically normally distributed. Contrary to the traditional assumption on measurement error, which is mainly used for proving large-sample properties, we assume that the measurement error decays gradually at a certain rate as each new observation is added to the model. This kind of assumption can be fulfilled when the usual replicate-observation method is used to dilute the magnitude of the measurement errors and is therefore also more useful from a practical viewpoint. Moreover, independence of the measurement error and the covariate is not required in our theorems. An experimental design with measurement error satisfying the required decay rate is introduced. In addition, this assumption allows us to apply sequential sampling, which is popular in clinical trials, to such a measurement-error logistic regression model. Clearly, the sequential method cannot be applied under the assumption, common in most of the literature, that the measurement errors decay uniformly as the sample size increases. Therefore, a sequential estimation procedure based on MLEs and such moment conditions is proposed and shown to be asymptotically consistent and efficient.
Case-control studies are broadly used in clinical trials and epidemiological studies. It can be shown that the odds ratio can be consistently estimated with some exposure variables based on logistic models (see Prentice and Pyke (1979)). A two-stage case-control sampling scheme is employed to build a confidence region for the slope coefficient beta, and the necessary sample size is calculated for a given pre-determined level. Furthermore, we consider measurement error in the covariates of a case-control retrospective logistic regression model and derive asymptotic results for the maximum likelihood estimators (MLEs) of the regression coefficients under some moment conditions on the measurement errors. Under such moment conditions, the MLEs can be shown to be strongly consistent, asymptotically unbiased and asymptotically normally distributed. Simulation results for the proposed two-stage procedures are obtained, and numerical studies and real data are used to verify the theoretical results under different measurement-error scenarios.
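A minimal sketch of a sequential scheme in the spirit described above: each subject's error-prone covariate is averaged over several replicates so the measurement error shrinks as the sample grows, and sampling stops once the slope's confidence interval is narrower than a target width. All numerical settings, the replicate rule and the stopping rule are assumptions for illustration.

```python
# Sketch: sequential logistic estimation with replicate-averaged surrogates
# and a CI-width stopping rule (all settings assumed, not the thesis design).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
beta0, beta1, sigma_u, target_width = -0.3, 0.9, 0.8, 0.5
y_list, w_list = [], []

n = 0
while True:
    n += 1
    x = rng.normal()
    reps = max(2, int(np.sqrt(n)))           # more replicates as the sample grows
    w = x + rng.normal(scale=sigma_u, size=reps).mean()   # averaged surrogate
    y = rng.binomial(1, 1 / (1 + np.exp(-(beta0 + beta1 * x))))
    w_list.append(w)
    y_list.append(y)
    if n < 50 or n % 10 != 0:                # burn-in, then check the rule every 10 obs
        continue
    fit = sm.Logit(np.array(y_list), sm.add_constant(np.array(w_list))).fit(disp=0)
    lo, hi = fit.conf_int()[1]               # CI for the slope
    if hi - lo < target_width:
        break

print("stopped at n =", n, "slope estimate:", round(fit.params[1], 3))
```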
|
49 |
房屋貸款保證保險違約風險與保險費率關聯性之研究 / The study on relationship between the default risk of the mortgage insurance and premium rate
李展豪, Unknown Date (has links)
房屋貸款保證保險制度可移轉部分違約風險予保險公司。然而,保險公司與金融機構在共同承擔風險之際,因房貸保證保險制度之施行,於提高貸款成數後,產生違約風險提高之矛盾現象;而估計保險之預期損失時,以目前尚無此制度下之違約數據估計損失額,將有錯估之可能。
本研究以二元邏吉斯特迴歸模型(Binary Logistic Regression Model)與存活分析(Survival Analysis)估計違約行為,並比較各模型間資料適合度及預測能力,進而單獨分析變數-貸款成數對違約率之邊際機率影響。以探討房貸保證保險施行後,因其對借款者信用增強而提高之貸款成數,所增加之違約風險。並評估金融機構因提高貸款成數後可能之違約風險變動,據以推估違約率數據,並根據房貸保證保險費率結構模型,計算可能之預期損失額,估算變動的保險費率。
實證結果發現，貸款成數與違約風險呈現顯著正相關，貸款成數增加，邊際影響呈遞增情形，違約率隨之遞增，而違約預期損失額亦同時上升。保險公司因預期損失額增加，為維持保費收入得以支付預期損失，其保險費率將明顯提升。故實施房屋貸款保證保險，因借款者信用增強而提高之貸款成數，將增加違約機率並對保險費率產生直接變動。 / A mortgage insurance system can transfer part of the default risk to insurance companies. However, because the system allows a higher loan-to-value ratio, its implementation also raises the default risk; moreover, estimating the expected insurance loss from default data generated before such a system exists may lead to misestimation.
This study constructs a binary logistic regression model and a survival analysis model to estimate mortgage default behavior, compares the models' goodness of fit and predictive power, and analyzes the marginal effect of the loan-to-value ratio on the default rate, in order to assess how financial institutions' default risk changes when the loan-to-value ratio rises. Based on the estimated default rates, a mortgage insurance rate structural model is used to calculate the expected loss and the resulting change in premium rates.
The empirical results show that the loan-to-value ratio has a significant positive effect on borrowers' default: as the ratio increases, the marginal effect grows, and the default rate and the expected default loss rise with it. Because the expected loss increases, insurance companies must raise premiums to cover it, so the premium rate rises markedly. Therefore, when mortgage insurance is implemented and the credit enhancement it provides allows borrowers a higher loan-to-value ratio, both the probability of default and the premium rate increase.
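A small sketch of the last step described above: push an assumed fitted default model through rising loan-to-value ratios, read off the higher default probabilities, and reprice the expected loss as a premium rate. The coefficients, loss given default and loading are assumed values, not the thesis' estimates.

```python
# Sketch: from default probability to premium rate as LTV rises
# (all numbers are illustrative assumptions).
import numpy as np

b0, b_ltv = -5.0, 4.5              # assumed logistic coefficients
lgd = 0.35                         # assumed loss given default
loading = 1.2                      # expense/profit loading on the pure premium

def default_prob(ltv):
    return 1 / (1 + np.exp(-(b0 + b_ltv * ltv)))

for ltv in (0.70, 0.80, 0.90):
    pd_ = default_prob(ltv)
    premium_rate = loading * pd_ * lgd          # pure premium = PD x LGD, plus loading
    print(f"LTV {ltv:.2f}: default prob {pd_:.3f}, premium rate {premium_rate:.3%}")
```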
|
50 |
BASEL II 與銀行企業金融授信實務之申請進件模型 / Basel II and an application scoring model for corporate banking credit
陳靖芸, Chen, Chin-Yun, Unknown Date (has links)
授信業務是銀行主要獲利來源之一,隨著國際化趨勢以及政府積極推動經濟自由,國內金融環境丕變,金融機構之授信業務競爭日漸激烈,加上近年來國內經濟成長趨緩,又於千禧年爆發本土性金融風暴,集團企業財務危機猶如骨牌效應ㄧ樁接ㄧ樁,原因在於大企業過度信用擴張,過高槓桿操作,導致負債比率上升,面臨償債困難;還有銀行對企業放款之授信審核常有大企業不會倒閉之迷思。故如何找出企業財務危機出現之徵兆,及早防範於未然,將是本研究在建立企業授信之申請進件模型的重點之ㄧ。
此外,2002年新修定的巴塞爾資本協定主在落實銀行風險管理,國際清算銀行決定於2006年正式實行新巴塞爾協定,我國修正的「銀行資本適足性管理辦法」自民國九十五年十二月三十一日起實施,故本國銀行需要依據本身的商品特色、市場區隔、客戶性質、以及經營方式與理念等因素,去建制一套適合自己的內部風險評估系統。故本研究第二個重點即在於依據我國現有法令,做出一個符合信用風險基礎內部評等法要求之申請進件模型。
本研究使用某銀行有財務報表之企業授信戶,利用財報中的財務比率變數建立模型。先使用主成分分析將所有變數分為七大類,分別是企業之財務構面、經營能力、獲利能力、償債能力、長期資本指標、流動性、以及現金流量,再進行羅吉斯迴歸模型分析。 / Business loan is one of the main profits in the bank. But increasing business competition causes the loan process in the bank is not very serious, the bankers allow enterprise to expand his credit or has higher debt ratio, that would cause financial crises. The first point in this study is to find the symptom when enterprise has financial crises.
The second point is that under the framework of New Basel Capital Accord〈Basel II〉, we try to build an application model that committed the domestic requirements. The bank should develop the fundamental internal rating-based approach that accords with its strategy、market segmentation、and customers type.
This research paper uses financial variables〈ex. liquid ratio、debt ratio、ROA、ROE、… 〉to build enterprise application model. We use the principle component analysis to separate different factors which affect loan process: financial facet、ability to pay、profitability、management ability、long-term index、liquidity、and cash flow. Then, we show the result about these factors in the logistic regression model.
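A compact sketch of the two-stage idea described above: compress correlated financial ratios with principal components and feed the components to a logistic regression. The number of ratios, the simulated data and keeping seven components are illustrative assumptions.

```python
# Sketch: PCA on financial ratios followed by a logistic regression on the
# retained components (synthetic data, seven components assumed).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n, p = 800, 20                     # 800 corporate borrowers, 20 financial ratios
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, :3].sum(axis=1) - 1))))  # synthetic default flag

model = make_pipeline(StandardScaler(), PCA(n_components=7), LogisticRegression())
model.fit(X, y)
print("explained variance of 7 components:",
      model.named_steps["pca"].explained_variance_ratio_.sum().round(2))
```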
|