991 |
Predicting Community-based Methadone Maintenance Treatment (MMT) Outcome. Stones, George, 07 January 2013 (has links)
This was a retrospective study of a community-based methadone maintenance treatment (MMT) program in Toronto. Participants (N = 170) were federally sentenced adult male offenders admitted to this voluntary program between 1997 and 2009 while subject to community supervision following incarceration. The primary investigation examined correlates of treatment responsivity, with principal outcome measures including MMT clients’ rates of: (i) illicit drug use; and (ii) completion of conditional (parole) or statutory release (SR). For a subset (n = 74), recidivism rates were examined after a 9-year interval. Findings included strong convergent evidence from logistic regression and ROC analyses that an empirically and theoretically derived set of five variables was a stable and highly significant (p < .001) predictor of release outcome. Using five factors related to risk (work/school status, security level of releasing institution, total PCL-R score, history of institutional drug use, and days at risk), release outcome was predicted with an overall classification accuracy of 88%, with high specificity (86%) and sensitivity (89%). The logistic regression model generated an R² of .55 and the accompanying AUC was .89, both substantial. Work/school status had an extremely large positive association with successful completion of community supervision, accounting for more than half of the total variance explained by the five-factor model and increasing the estimated odds of successful release outcome by more than 15-fold. Also, when in the MMT program, clients' risk-taking behaviour was significantly moderated, with low overall base rates of illicit drug use, yet the rate of parole/SR revocation (71%) was high. The 9-year follow-up showed a high overall mortality rate (15%). Revocation of release while in the MMT program was associated with a significantly higher rate of recidivism, and more violent recidivism, at follow-up. Results are discussed within the context of: (a) Andrews and Bonta's psychology of criminal conduct; (b) the incompatibility of a harm reduction treatment model with an abstinence-based parole decision-making model; (c) changing drug use profiles among MMT clients; (d) a strength-based approach to correctional intervention focusing on educational and vocational retraining initiatives; and (e) creation of a user-friendly case-based screening algorithm for predicting release outcome for new releases.
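A minimal sketch, in Python with simulated data, of the kind of five-predictor logistic regression and ROC analysis described above; the column names and data are illustrative assumptions, not the study's actual variables or results.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score, confusion_matrix

    # One row per client: five hypothetical risk-related predictors and a binary
    # release outcome (1 = successful completion, 0 = revocation).
    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "work_school":    rng.integers(0, 2, 170),
        "security_level": rng.integers(1, 4, 170),
        "pclr_total":     rng.normal(15, 7, 170),
        "inst_drug_use":  rng.integers(0, 2, 170),
        "days_at_risk":   rng.integers(30, 1000, 170),
        "success":        rng.integers(0, 2, 170),
    })

    X, y = df.drop(columns="success"), df["success"]
    model = LogisticRegression(max_iter=1000).fit(X, y)

    probs = model.predict_proba(X)[:, 1]
    preds = (probs >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y, preds).ravel()
    print("accuracy:", (preds == y).mean())
    print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
    print("AUC:", roc_auc_score(y, probs))
    print("odds ratios:", np.exp(model.coef_).round(2))  # per-predictor odds ratios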
|
992 |
BASEL II 與銀行企業金融授信實務之申請進件模型 / Basel II and an Application Model for Banks' Corporate Lending Practice. Chen, Chin-Yun (陳靖芸), Unknown Date (has links)
Lending is one of a bank's main sources of profit. With the trend toward internationalization and the government's active promotion of economic liberalization, the domestic financial environment has changed dramatically and competition among financial institutions' lending businesses has grown increasingly fierce. In addition, domestic economic growth has slowed in recent years, and a home-grown financial crisis broke out around the millennium, with conglomerates' financial crises following one after another like falling dominoes. The causes lay in large enterprises' excessive credit expansion and over-leveraging, which drove up debt ratios and created difficulties in servicing debt, as well as in the persistent myth, in banks' corporate credit reviews, that large enterprises cannot fail. How to identify the early warning signs of corporate financial distress and guard against it in advance is therefore one focus of this study in building an application model for corporate lending.
In addition, the New Basel Capital Accord, revised in 2002, focuses on implementing bank risk management, and the Bank for International Settlements decided to bring the new accord formally into effect in 2006. Taiwan's amended "Regulations Governing the Capital Adequacy of Banks" took effect on 31 December 2006, so domestic banks need to build internal risk assessment systems suited to their own product characteristics, market segmentation, customer profiles, and business models and philosophies. The second focus of this study is therefore to construct, in accordance with current domestic regulations, an application model that meets the requirements of the foundation internal ratings-based approach to credit risk.
This study uses corporate borrowers of a bank for which financial statements are available, and builds the model from financial-ratio variables drawn from those statements. Principal component analysis is first used to group the variables into seven categories: financial structure, operating ability, profitability, solvency, long-term capital indicators, liquidity, and cash flow. A logistic regression model is then fitted. / Corporate lending is one of a bank's main sources of profit, but increasing competition has made banks' loan reviews less rigorous: bankers allow firms to over-expand their credit or to carry high debt ratios, which can lead to financial crises. The first aim of this study is to identify the warning signs that appear when a firm is heading into financial distress.
The second aim is to build, under the framework of the New Basel Capital Accord (Basel II), an application model that satisfies the domestic regulatory requirements. A bank should develop a foundation internal ratings-based approach that is consistent with its strategy, market segmentation, and customer types.
This study uses financial variables (e.g. liquidity ratio, debt ratio, ROA, ROE, ...) to build a corporate application model. Principal component analysis is used to separate the factors that affect the lending decision: financial structure, solvency, profitability, operating ability, long-term capital indicators, liquidity, and cash flow. The effects of these factors are then estimated in a logistic regression model.
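A minimal sketch, in Python with synthetic data, of the two-stage approach described above: principal component analysis on the financial ratios, followed by logistic regression on the extracted components. The column names, sample size and seven-component choice are illustrative assumptions rather than the actual specification in the thesis.

    import numpy as np
    import pandas as pd
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Synthetic financial ratios (e.g. debt ratio, ROA, ROE) for 500 corporate borrowers.
    ratios = pd.DataFrame(rng.normal(size=(500, 20)),
                          columns=[f"ratio_{i}" for i in range(20)])
    distress = rng.integers(0, 2, 500)  # 1 = borrower later in financial distress

    # Seven components mirror the seven factor groups named in the abstract.
    scoring_model = Pipeline([
        ("scale", StandardScaler()),
        ("pca", PCA(n_components=7)),
        ("logit", LogisticRegression(max_iter=1000)),
    ])
    scoring_model.fit(ratios, distress)
    print("estimated distress probability for the first applicant:",
          scoring_model.predict_proba(ratios.iloc[[0]])[0, 1])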
|
993 |
Går det att prediktera konkurs i svenska aktiebolag? : En kvantitativ studie om hur finansiella nyckeltal kan användas vid konkursprediktion / Is it possible to predict bankruptcy in Swedish limited companies? : A quantitative study regarding the usefulness of financial ratios as bankruptcy predictors. Persson, Daniel, Ahlström, Johannes, January 2015 (has links)
Från 1900-talets början har banker och låneinstitut använt nyckeltal som hjälpmedel vid bedömning och kvantifiering av kreditrisk. För dagens investerare är den ekonomiska miljön mer komplicerad än för bara 40 år sedan då teknologin och datoriseringen öppnade upp världens marknader mot varandra. Bedömning av kreditrisk idag kräver effektiv analys av kvantitativa data och modeller som med god träffsäkerhet kan förutse risker. Under 1900-talets andra hälft skedde en snabb utveckling av de verktyg som används för konkursprediktion, från enkla univariata modeller till komplexa data mining-modeller med tusentals observationer. Denna studie undersöker om det är möjligt att prediktera att svenska företag kommer att gå i konkurs och vilka variabler som innehåller relevant information för detta. Metoderna som används är diskriminantanalys, logistisk regression och överlevnadsanalys på 50 aktiva och 50 företag försatta i konkurs. Resultaten visar på en träffsäkerhet mellan 67,5 % och 75 % beroende på vald statistisk metod. Oavsett vald statistisk metod är det möjligt att klassificera företag som konkursmässiga två år innan konkursens inträffande med hjälp av finansiella nyckeltal av typerna lönsamhetsmått och solvensmått. Samhällskostnader reduceras av bättre konkursprediktion med hjälp av finansiella nyckeltal vilka bidrar till ökad förmåga för företag att tillämpa ekonomistyrning med relevanta nyckeltal i form av lager, balanserad vinst, nettoresultat och rörelseresultat. / From the early 1900s, banks and lending institutions have used financial ratios as an aid in the assessment and quantification of credit risk. For today's investors the economic environment is far more complicated than 40 years ago, when technology and computerization opened up the world's markets. Credit risk assessment today requires effective analysis of quantitative data and models that can predict risks with good accuracy. During the second half of the 20th century there was a rapid development of the tools used for bankruptcy prediction, moving from simple univariate models to complex data mining models with thousands of observations. This study investigates whether it is possible to predict bankruptcy in Swedish limited companies and which variables contain information relevant for this purpose. The methods used in the study are discriminant analysis, logistic regression and survival analysis on 50 active and 50 failed companies. The results indicate an accuracy of between 67.5% and 75%, depending on the choice of statistical method. Regardless of the statistical method used, it is possible to classify companies as bankrupt two years before the bankruptcy occurs using financial ratios that measure profitability and solvency. Societal costs are reduced by better bankruptcy prediction using financial ratios, which contribute to increasing the ability of companies to apply financial management with relevant key ratios in the form of inventory, retained earnings, net income and operating income.
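A hedged sketch, on synthetic data, of the kind of model comparison reported above: discriminant analysis and logistic regression fitted to a balanced sample of failed and surviving firms. The survival-analysis arm is omitted, and the ratios are invented for illustration.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(100, 4))   # e.g. profitability and solvency ratios two years pre-event
    y = np.repeat([0, 1], 50)       # 50 active firms, 50 firms that went bankrupt

    for name, clf in [("discriminant analysis", LinearDiscriminantAnalysis()),
                      ("logistic regression", LogisticRegression(max_iter=1000))]:
        acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
        print(f"{name}: {acc:.1%} cross-validated classification accuracy")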
|
994 |
FLK50-Score zur Vorhersage des Lungenkrebsrisikos bis 50-jähriger Probanden. Eine methodische Arbeit auf Basis einer Familienstudie / FLK50 score to predict the lung cancer risk in probands up to 50 years of age. A methodological paper based on a family study. Gerlach, Gundula, 08 February 2012 (has links)
No description available.
|
995 |
Policy options to reduce deforestation in the Bolivian lowlands based on spatial modeling of land use change / Handlungsoptionen zur Entwaldungsreduktion im bolivianischen Tiefland auf der Grundlage räumlicher Modellierung von Landnutzungsänderungen. Müller, Robert, 29 January 2012 (has links)
No description available.
|
996 |
Étude des déterminants démographiques de l’hypotrophie fœtale au Québec / Study of the demographic determinants of fetal growth restriction in Quebec. Fortin, Émilie, 04 1900 (has links)
Cette recherche vise à décrire l’association entre certaines variables démographiques telles que l’âge de la mère, le sexe, le rang de naissance et le statut socio-économique – représenté par l’indice de Pampalon – et l’hypotrophie fœtale au Québec. L’échantillon est constitué de 127 216 naissances simples et non prématurées ayant eu lieu au Québec entre le 1er juillet 2000 et le 30 juin 2002. Des régressions logistiques portant sur le risque d’avoir souffert d’un retard de croissance intra-utérine ont été effectuées pour l’ensemble du Québec ainsi que pour la région socio-sanitaire (RSS) de Montréal.
Les résultats révèlent que les enfants de premier rang et les enfants dont la mère était âgée de moins de 25 ans ou de 35 ans et plus lors de l’accouchement ont un risque plus élevé de souffrir d’hypotrophie fœtale et ce dans l’ensemble du Québec et dans la RSS de Montréal. De plus, les résultats démontrent que le risque augmente plus la mère est défavorisée. Puisque l’indice de Pampalon est un proxy écologique calculé pour chaque aire de diffusion, les intervenants en santé publique peuvent désormais cibler géographiquement les femmes les plus à risque et adapter leurs programmes de prévention en conséquence. Ainsi, le nombre de cas d’hypotrophie fœtale, voire même la mortalité infantile, pourraient être réduits. / This study describes the association of demographic variables such as the mother’s age, the child’s gender and birth order, and socio-economic status – which can now be assessed by the Pampalon Index – with intrauterine growth restriction (IUGR) in the province of Quebec. The analyses are based on a sample of 127,216 singleton, term births that occurred in the province of Quebec between July 1st, 2000 and June 30th, 2002. Logistic regressions on the risk of having suffered from IUGR were produced for the entire province of Quebec and for the health region of Montreal.
In the province of Quebec and in the health region of Montreal, the results reveal that the risk of IUGR is higher for first-born infants, and for infants whose mother was under 25 years of age or aged 35 years and older. Moreover, the risk of IUGR increases with poverty. Since the Pampalon Index is calculated for each dissemination area, public health interventions can now target the most vulnerable women and reduce the number of IUGR cases or even infant mortality.
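An illustrative sketch, on simulated data, of the kind of logistic regression described above; the variable coding (maternal age groups, a first-born indicator, and a deprivation quintile standing in for the Pampalon index) is an assumption for illustration, not the study's actual coding.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 5000
    births = pd.DataFrame({
        "iugr": rng.integers(0, 2, n),                        # 1 = growth-restricted birth
        "age_group": rng.choice(["<25", "25-34", "35+"], n),  # maternal age at delivery
        "first_born": rng.integers(0, 2, n),
        "deprivation_quintile": rng.integers(1, 6, n),        # proxy for the Pampalon index
    })
    fit = smf.logit("iugr ~ C(age_group, Treatment('25-34')) + first_born"
                    " + C(deprivation_quintile)", data=births).fit(disp=0)
    print(np.exp(fit.params).round(2))  # odds ratios relative to the reference categories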
|
997 |
Antimicrobial Resistance of Human Campylobacter jejuni Infections from Saskatchewan. Otto, Simon James Garfield, 29 April 2011 (has links)
Saskatchewan is the only province in Canada to have routinely tested the antimicrobial susceptibility of all provincially reported human cases of campylobacteriosis. From 1999 to 2006, 1378 human Campylobacter species infections were tested for susceptibility at the Saskatchewan Disease Control Laboratory using the Canadian Integrated Program for Antimicrobial Resistance Surveillance panel and minimum inhibitory concentration (MIC) breakpoints. Of these, 1200 were C. jejuni and 129 were C. coli, with the remainder made up of C. lari, C. laridis, C. upsaliensis and undifferentiated Campylobacter species. Campylobacter coli had significantly higher prevalences of ciprofloxacin resistance (CIPr), erythromycin resistance (ERYr), combined CIPr-ERYr resistance and multidrug resistance (to three or more drug classes) than C. jejuni. Logistic regression models indicated that CIPr in C. jejuni decreased from 1999 to 2004 and subsequently increased in 2005 and 2006. The risk of CIPr was significantly increased in the winter months (January to March) compared to other seasons. A comparison of logistic regression and Cox proportional hazards survival models found that the latter were better able to detect significant temporal trends in CIPr and tetracycline resistance by directly modeling MICs, but that these trends were more difficult to interpret. Scan statistics detected significant spatial clusters of CIPr C. jejuni infections in urban centers (Saskatoon and Regina) and temporal clusters in the winter months; the space-time permutation model did not detect any space-time clusters. Bernoulli scan tests were computationally the fastest for cluster detection, compared to ordinal MIC and multinomial antibiogram models. eBURST analysis of antibiogram patterns showed a marked distinction between case and non-case isolates from the scan statistic clusters. Multilevel logistic regression models detected significant individual and regional contextual risk factors for infection with CIPr C. jejuni. Patients who were infected in the winter, were between 40 and 45 years of age, lived in urban regions, and lived in regions of moderately high poultry density had higher risks of a resistant infection. These results advance the epidemiologic knowledge of CIPr C. jejuni in Saskatchewan and provide novel analytical methods for antimicrobial resistance surveillance data in Canada. / Saskatchewan Disease Control Laboratory (Saskatchewan Ministry of Health); Laboratory for Foodborne Zoonoses (Public Health Agency of Canada); Centre for Foodborne, Environmental and Zoonotic Infectious Diseases (Public Health Agency of Canada); Ontario Veterinary College Blake Graham Fellowship
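As an illustration of the temporal logistic regression described above, the following sketch (on simulated data, with invented variable coding) models the probability of ciprofloxacin resistance as a function of isolation year and season.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n = 1200
    isolates = pd.DataFrame({
        "resistant": rng.integers(0, 2, n),                        # 1 = ciprofloxacin-resistant
        "year": rng.integers(1999, 2007, n),
        "season": rng.choice(["winter", "spring", "summer", "fall"], n),
    })
    fit = smf.logit("resistant ~ C(year) + C(season, Treatment('summer'))",
                    data=isolates).fit(disp=0)
    print(np.exp(fit.params).round(2))  # odds ratios by isolation year and season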
|
998 |
New statistical methods to assess the effect of time-dependent exposures in case-control studies. Cao, Zhirong, 12 1900 (has links)
Contexte. Les études cas-témoins sont très fréquemment utilisées par les épidémiologistes pour évaluer l’impact de certaines expositions sur une maladie particulière. Ces expositions peuvent être représentées par plusieurs variables dépendant du temps, et de nouvelles méthodes sont nécessaires pour estimer de manière précise leurs effets. En effet, la régression logistique qui est la méthode conventionnelle pour analyser les données cas-témoins ne tient pas directement compte des changements de valeurs des covariables au cours du temps. Par opposition, les méthodes d’analyse des données de survie telles que le modèle de Cox à risques instantanés proportionnels peuvent directement incorporer des covariables dépendant du temps représentant les histoires individuelles d’exposition. Cependant, cela nécessite de manipuler les ensembles de sujets à risque avec précaution à cause du sur-échantillonnage des cas, en comparaison avec les témoins, dans les études cas-témoins. Comme montré dans une étude de simulation précédente, la définition optimale des ensembles de sujets à risque pour l’analyse des données cas-témoins reste encore à être élucidée, et à être étudiée dans le cas des variables dépendant du temps.
Objectif: L’objectif général est de proposer et d’étudier de nouvelles versions du modèle de Cox pour estimer l’impact d’expositions variant dans le temps dans les études cas-témoins, et de les appliquer à des données réelles cas-témoins sur le cancer du poumon et le tabac.
Méthodes. J’ai identifié de nouvelles définitions d’ensemble de sujets à risque, potentiellement optimales (le Weighted Cox model and le Simple weighted Cox model), dans lesquelles différentes pondérations ont été affectées aux cas et aux témoins, afin de refléter les proportions de cas et de non cas dans la population source. Les propriétés des estimateurs des effets d’exposition ont été étudiées par simulation. Différents aspects d’exposition ont été générés (intensité, durée, valeur cumulée d’exposition). Les données cas-témoins générées ont été ensuite analysées avec différentes versions du modèle de Cox, incluant les définitions anciennes et nouvelles des ensembles de sujets à risque, ainsi qu’avec la régression logistique conventionnelle, à des fins de comparaison. Les différents modèles de régression ont ensuite été appliqués sur des données réelles cas-témoins sur le cancer du poumon. Les estimations des effets de différentes variables de tabac, obtenues avec les différentes méthodes, ont été comparées entre elles, et comparées aux résultats des simulations.
Résultats. Les résultats des simulations montrent que les estimations des nouveaux modèles de Cox pondérés proposés, surtout celles du Weighted Cox model, sont bien moins biaisées que les estimations des modèles de Cox existants qui incluent ou excluent simplement les futurs cas de chaque ensemble de sujets à risque. De plus, les estimations du Weighted Cox model étaient légèrement, mais systématiquement, moins biaisées que celles de la régression logistique. L’application aux données réelles montre de plus grandes différences entre les estimations de la régression logistique et des modèles de Cox pondérés, pour quelques variables de tabac dépendant du temps.
Conclusions. Les résultats suggèrent que le nouveau modèle de Cox pondéré proposé pourrait être une alternative intéressante au modèle de régression logistique, pour estimer les effets d’expositions dépendant du temps dans les études cas-témoins. / Background: Case-control studies are very often used by epidemiologists to assess the impact of specific exposure(s) on a particular disease. These exposures may be represented by several time-dependent covariates and new methods are needed to accurately estimate their effects. Indeed, conventional logistic regression, which is the standard method to analyze case-control data, does not directly account for changes in covariate values over time. By contrast, survival analytic methods such as the Cox proportional hazards model can directly incorporate time-dependent covariates representing individuals' entire exposure histories. However, this requires some careful manipulation of risk sets because of the over-sampling of cases, compared to controls, in case-control studies. As shown in a preliminary simulation study, the optimal definition of risk sets for the analysis of case-control data remains unclear and has to be investigated in the case of time-dependent variables.
Objective: The overall objective is to propose and to investigate new versions of the Cox model for assessing the impact of time-dependent exposures in case-control studies, and to apply them to a real case-control dataset on lung cancer and smoking.
Methods: I identified some potential new risk set definitions (the weighted Cox model and the simple weighted Cox model), in which different weights were given to cases and controls in order to reflect the proportions of cases and non-cases in the source population. The properties of the estimates of the exposure effects that result from these new risk set definitions were investigated through a simulation study. Various aspects of exposure were generated (intensity, duration, cumulative exposure value). The simulated case-control data were then analysed using different versions of the Cox model corresponding to existing and new definitions of risk sets, as well as with standard logistic regression, for comparison purposes. The different regression models were then applied to real case-control data on lung cancer. The estimates of the effects of different smoking variables, obtained with the different methods, were compared to each other, as well as to the simulation results.
Results: The simulation results show that the estimates from the newly proposed weighted Cox models, especially those from the weighted Cox model, are much less biased than the estimates from the existing Cox models that simply include or exclude future cases. In addition, the weighted Cox model was slightly, but systematically, less biased than logistic regression. The real-life application shows greater discrepancies between the estimates of the proposed Cox models and logistic regression for some time-dependent smoking covariates.
Conclusions: The results suggest that the new proposed weighted Cox models could be an interesting alternative to logistic regression for estimating the effects of time-dependent exposures in case-control studies.
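A much-simplified sketch of the weighting idea described above: cases and controls receive weights reflecting an assumed proportion of cases and non-cases in the source population, and a weighted Cox model is then fitted. The data, the weights, the choice of a time-fixed cumulative-exposure covariate, and the use of the Python lifelines library are all illustrative assumptions; this is not the thesis' actual estimator.

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(5)
    n = 1000
    subjects = pd.DataFrame({
        "time": rng.uniform(1, 10, n),                  # follow-up time (e.g. years at risk)
        "case": rng.integers(0, 2, n),                  # 1 = case, 0 = control
        "cumulative_exposure": rng.gamma(2.0, 5.0, n),  # e.g. pack-years of smoking
    })
    # Cases are over-sampled in a case-control design, so they are down-weighted toward the
    # assumed disease fraction in the source population; controls are weighted up.
    population_case_fraction = 0.01
    subjects["weight"] = np.where(subjects["case"] == 1,
                                  population_case_fraction,
                                  1 - population_case_fraction)

    cph = CoxPHFitter()
    cph.fit(subjects, duration_col="time", event_col="case",
            weights_col="weight", robust=True)
    print(cph.summary[["coef", "exp(coef)"]])  # log-hazard ratio and hazard ratio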
|
999 |
Analyse par apprentissage automatique des réponses fMRI du cortex auditif à des modulations spectro-temporelles / Machine learning analysis of auditory cortex fMRI responses to spectro-temporal modulations. Bouchard, Lysiane, 12 1900 (has links)
L'application de classifieurs linéaires à l'analyse des données d'imagerie cérébrale (fMRI) a mené à plusieurs percées intéressantes au cours des dernières années. Ces classifieurs combinent linéairement les réponses des voxels pour détecter et catégoriser différents états du cerveau. Ils sont plus agnostiques que les méthodes d'analyses conventionnelles qui traitent systématiquement les patterns faibles et distribués comme du bruit. Dans le présent projet, nous utilisons ces classifieurs pour valider une hypothèse portant sur l'encodage des sons dans le cerveau humain. Plus précisément, nous cherchons à localiser des neurones, dans le cortex auditif primaire, qui détecteraient les modulations spectrales et temporelles présentes dans les sons. Nous utilisons les enregistrements fMRI de sujets soumis à 49 modulations spectro-temporelles différentes. L'analyse fMRI au moyen de classifieurs linéaires n'est pas standard, jusqu'à maintenant, dans ce domaine. De plus, à long terme, nous avons aussi pour objectif le développement de nouveaux algorithmes d'apprentissage automatique spécialisés pour les données fMRI. Pour ces raisons, une bonne partie des expériences vise surtout à étudier le comportement des classifieurs. Nous nous intéressons principalement à 3 classifieurs linéaires standards, soient l'algorithme machine à vecteurs de support (linéaire), l'algorithme régression logistique (régularisée) et le modèle bayésien gaussien naïf (variances partagées). / The application of linear machine learning classifiers to the analysis of brain imaging data (fMRI) has led to several interesting breakthroughs in recent years. These classifiers combine the responses of the voxels to detect and categorize different brain states. They allow a more agnostic analysis than conventional fMRI analysis, which systematically treats weak and distributed patterns as unwanted noise. In this project, we use such classifiers to validate a hypothesis concerning the encoding of sounds in the human brain. More precisely, we attempt to locate neurons tuned to spectral and temporal modulations in sound. We use fMRI recordings of brain responses of subjects listening to 49 different spectro-temporal modulations. The analysis of fMRI data through linear classifiers is not yet a standard procedure in this field. Thus, an important long-term objective of this project is the development of new machine learning algorithms specialized for neuroimaging data. For these reasons, an important part of the experiments is dedicated to studying the behaviour of the classifiers. We are mainly interested in three standard linear classifiers, namely the support vector machine algorithm (linear), the logistic regression algorithm (regularized) and the naïve Bayesian Gaussian model (shared variances).
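A hedged sketch of the decoding setup described above, using simulated voxel responses and the three standard linear classifiers named in the abstract; scikit-learn is used here as a stand-in for whatever software the thesis used, and its GaussianNB estimates per-class variances, so it only approximates the shared-variance Gaussian model mentioned.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    X = rng.normal(size=(98, 2000))   # trials x voxels (synthetic fMRI responses)
    y = rng.integers(0, 2, 98)        # e.g. low vs high spectral modulation rate

    classifiers = {
        "linear SVM": LinearSVC(),
        "regularized logistic regression": LogisticRegression(max_iter=1000),
        "Gaussian naive Bayes": GaussianNB(),
    }
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=7).mean()
        print(f"{name}: {acc:.2f} cross-validated accuracy")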
|