1 |
Will Mortality Rate of HIV-Infected Patients Decrease After Starting Antiretroviral Therapy (ART)?
Bahakeem, Shaher, 07 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Background: Many authors have indicated that the mortality risk of HIV-infected patients is higher immediately following the start of antiretroviral therapy (ART), even though mortality is expected to decrease after ART initiation. This changing pattern potentially complicates accurate statistical estimation of patient survival and, more generally, effective monitoring of the evolution of the worldwide epidemic.
Method: In this thesis, we determine whether the mortality of HIV-infected patients increases or decreases after the initiation of ART using flexible survival modeling techniques. To achieve this objective, the study fits and estimates survival time with different covariates using flexible and semi-parametric statistical models. A combination of the Weibull distribution with splines is compared to the usual Weibull, exponential, and gamma parametric models, and to the semi-parametric Cox model. These models are compared to identify the best-fitting one, which is then used to improve modeling of survival time and to explore the pattern of change in mortality rates for a cohort of HIV-infected patients recruited into a care and treatment program in Uganda.
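As an illustration of this kind of model comparison, the following Python sketch ranks standard parametric baselines by AIC using the `lifelines` package. The data are simulated, not the Uganda cohort; a generalized gamma fitter stands in for the gamma model named above, and the spline-based flexible model would be fit with a dedicated spline-hazard fitter (e.g., Royston-Parmar models in R's `flexsurv`), which is not shown here.

```python
# Hedged sketch: ranking baseline parametric survival models by AIC.
# Data are simulated; sample size and time scale are illustrative only.
import numpy as np
from lifelines import ExponentialFitter, WeibullFitter, GeneralizedGammaFitter

rng = np.random.default_rng(42)
n = 500
# A decreasing-hazard Weibull mimics high early mortality after ART initiation.
event_time = rng.weibull(0.7, size=n) * 24.0      # months from ART start
censor_time = rng.uniform(6.0, 36.0, size=n)      # administrative censoring
durations = np.minimum(event_time, censor_time)
observed = (event_time <= censor_time).astype(int)

fitters = {
    "exponential": ExponentialFitter(),
    "Weibull": WeibullFitter(),
    "generalized gamma": GeneralizedGammaFitter(),
}
for name, fitter in fitters.items():
    fitter.fit(durations, event_observed=observed)
    print(f"{name:>18}: AIC = {fitter.AIC_:.1f}")
```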
Results: The analysis shows that, according to the AIC criterion, the flexible spline-based Weibull survival model fits better than the usual parametric and semi-parametric models.
Conclusion: The mortality of HIV-infected patients is high immediately after the initiation of ART and decreases rapidly thereafter.
|
2 |
Comparison of proportional hazards and accelerated failure time models
Qi, Jiezhi, 30 March 2009
The field of survival analysis has experienced tremendous growth during the latter half of the 20th century. The methodological developments of survival analysis that have had the most profound impact are the Kaplan-Meier method for estimating the survival function, the log-rank test for comparing the equality of two or more survival distributions, and the Cox proportional hazards (PH) model for examining covariate effects on the hazard function. The accelerated failure time (AFT) model has been proposed but is seldom used. In this thesis, we present the basic concepts, nonparametric methods (the Kaplan-Meier method and the log-rank test), semiparametric methods (the Cox PH model and the Cox model with time-dependent covariates), and parametric methods (the parametric PH model and the AFT model) for analyzing survival data.
We apply these methods to a randomized placebo-controlled trial to prevent tuberculosis (TB) in Ugandan adults infected with human immunodeficiency virus (HIV). The objective of the analysis is to determine whether TB preventive therapies affect the rate of AIDS progression and survival in HIV-infected adults. Our conclusion is that TB preventive therapies appear to have no effect on AIDS progression, death, or the combined event of AIDS progression and death. The major goal of this paper is to use this real data set to argue for the AFT model as an alternative to the PH model in the analysis of some survival data. We critique the PH model and assess its lack of fit. To overcome the violation of proportional hazards, we use the Cox model with time-dependent covariates, the piecewise exponential model, and the accelerated failure time model. After comparing all the models and assessing goodness-of-fit, we find that the log-logistic AFT model fits this data set better. The AFT model is a more valuable and realistic alternative to the PH model in some situations: it provides predicted hazard functions, predicted survival functions, median survival times, and time ratios, and its results can be easily interpreted as effects on the expected median duration of illness for a patient in a clinical setting. We suggest that the PH model may not be appropriate in some situations and that the AFT model can provide a more appropriate description of the data.
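The model contrast described above can be sketched with standard tooling. The following Python sketch uses the `lifelines` package on simulated trial-like data with a single binary treatment covariate (not the Ugandan trial data): it fits a Cox PH model, runs lifelines' proportional-hazards diagnostics, and compares log-logistic and Weibull AFT fits by AIC.

```python
# Hedged sketch: contrasting a Cox PH fit with log-logistic and Weibull AFT fits
# on simulated data where treatment acts multiplicatively on time (an AFT effect).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, LogLogisticAFTFitter, WeibullAFTFitter

rng = np.random.default_rng(1)
n = 800
treatment = rng.integers(0, 2, size=n)
u = rng.uniform(size=n)
alpha = 20.0 * np.exp(0.3 * treatment)            # scale ("time ratio" effect)
event_time = alpha * (u / (1 - u)) ** (1 / 1.5)   # log-logistic times, shape 1.5
censor_time = rng.uniform(5.0, 60.0, size=n)
df = pd.DataFrame({
    "time": np.minimum(event_time, censor_time),
    "event": (event_time <= censor_time).astype(int),
    "treatment": treatment,
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.check_assumptions(df, p_value_threshold=0.05)   # flags PH violations, if any

for Model in (LogLogisticAFTFitter, WeibullAFTFitter):
    aft = Model().fit(df, duration_col="time", event_col="event")
    print(Model.__name__, "AIC =", round(aft.AIC_, 1))
```

In this simulated setting the AFT effect is generated directly on the time scale, so the time-ratio interpretation discussed in the abstract falls out of the fitted AFT coefficients.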
|
3 |
Análise dos modelos não paramétricos de avaliação de eficiência e a performance dos bancos comerciais brasileiros [Analysis of non-parametric efficiency-evaluation models and the performance of Brazilian commercial banks]
Silva, Tarcio Lopes da, January 2006
The main purpose of this work is to describe and analyse, from a theoretical standpoint, the principal non-parametric efficiency-evaluation methods and to empirically compare the efficiency scores these methods generate on a common data set. To that end, theoretical simulations were run to compare the traditional DEA and FDH estimators, including their inference and bias-correction procedures, as well as the newer order-m and quantile estimators. Empirically, we used a sample of 184 Brazilian commercial banks spanning June 1995 to June 2004. The results show that the different assumptions the DEA and FDH estimators impose on the production set noticeably affect the efficiency scores of several banks. Nevertheless, using more than one estimator proved to be an effective way of identifying the worst-performing units. The available procedures for bias correction and inference, however, proved deficient, especially for firms located along the estimated frontier. On the other hand, the importance of the new order-m and quantile estimators became evident: several observations in the sample were identified as extreme values that distort the estimated efficiency scores of other observations, and these estimators, being more robust to extreme values and outliers, produced more trustworthy results. Finally, we investigated whether capital control, business segment, and bank size affect efficiency, and examined the sector's performance over the analysis period.
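As a concrete illustration of one of the estimators discussed, the sketch below computes input-oriented FDH efficiency scores on synthetic input/output data for hypothetical banks. The DEA, order-m, and quantile estimators used in the dissertation additionally require a linear-programming solver or Monte Carlo resampling and are not shown here.

```python
# Hedged sketch: input-oriented FDH efficiency scores (Deprins-Simar-Tulkens style).
# Inputs and outputs below are synthetic and purely illustrative.
import numpy as np

def fdh_input_efficiency(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """X: (n, p) inputs, Y: (n, q) outputs. Returns scores in (0, 1]; 1 = FDH-efficient."""
    n = X.shape[0]
    scores = np.ones(n)
    for i in range(n):
        dominates = np.all(Y >= Y[i], axis=1)          # units producing at least as much
        ratios = np.max(X[dominates] / X[i], axis=1)   # proportional input contraction
        scores[i] = ratios.min()
    return scores

rng = np.random.default_rng(0)
n_banks = 30
X = rng.uniform(1.0, 10.0, size=(n_banks, 2))          # e.g. labour and funding costs
Y = (X ** 0.8).sum(axis=1, keepdims=True) * rng.uniform(0.5, 1.0, size=(n_banks, 1))
print(np.round(fdh_input_efficiency(X, Y), 3))
```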
|
4 |
[en] GRADUATION METHODS UNDER PARAMETRIC AND NON-PARAMETRIC MODELS FOR SELECT AND ULTIMATE TABLES / [pt] METODOLOGIAS DE CONSTRUÇÃO DE TÁBUAS BIOMÉTRICAS SELETAS E FINAIS A PARTIR DE MODELOS PARAMÉTRICOS E NÃO-PARAMÉTRICOS
FABIO GARRIDO LEAL MARTINS, 04 March 2008
[en] This study surveys the main methods of life-table construction: from the graduation techniques traditionally used when large amounts of data are available, to a method designed specifically for sparse data. It also discusses approaches to building select life tables, in particular tables of mortality among the disabled. The population of statutory civil servants of the direct administration of the city of Rio de Janeiro is used to graduate life tables for active and disabled lives, while the population of INSS urban disability retirees provides the basis for the select disability mortality table.
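As an illustration of parametric graduation of the kind surveyed in the thesis, the sketch below fits a Gompertz-Makeham law, mu(x) = A + B*c^x, to crude central death rates. The crude rates are simulated rather than taken from the Rio de Janeiro or INSS data, and the Gompertz-Makeham form is chosen for concreteness; it is not necessarily the law adopted in the thesis.

```python
# Hedged sketch: parametric graduation of simulated crude mortality rates
# with a Gompertz-Makeham law fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def makeham(x, A, B, c):
    return A + B * c ** x

rng = np.random.default_rng(3)
ages = np.arange(30, 90)
exposure = rng.integers(2_000, 20_000, size=ages.size)   # central exposed-to-risk
true_mu = makeham(ages, 5e-4, 3e-5, 1.10)
deaths = rng.poisson(true_mu * exposure)
crude = deaths / exposure                                 # crude central death rates

params, _ = curve_fit(makeham, ages, crude, p0=(1e-4, 1e-5, 1.1), maxfev=10_000)
# In practice one would weight by exposure (or maximise a Poisson likelihood)
# rather than use unweighted least squares.
graduated = makeham(ages, *params)
print("fitted (A, B, c):", np.round(params, 6))
```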
|
5 |
Data Driven Visual Recognition
Aghazadeh, Omid, January 2014
This thesis is mostly about supervised visual recognition problems. Based on a general definition of categories, the contents are divided into two parts: one which models categories and one which is not category-based. We are interested in data-driven solutions for both kinds of problems. In the category-free part, we study novelty detection in temporal and spatial domains as a category-free recognition problem. Using data-driven models, we demonstrate that, based on a few reference exemplars, our methods are able to detect novelties in the ego-motions of people and changes in the static environments surrounding them. In the category-level part, we study object recognition. We consider both object category classification and localization, and propose scalable data-driven approaches for both problems. A mixture of parametric classifiers, initialized with a sophisticated clustering of the training data, is demonstrated to adapt to the data better than various baselines such as the same model initialized with less subtly designed procedures. A nonparametric large-margin classifier is introduced and demonstrated to have a multitude of advantages in comparison to its competitors: better training and testing time costs, the ability to make use of indefinite/invariant and deformable similarity measures, and adaptive complexity are the main features of the proposed model. We also propose a rather realistic model of recognition problems, which quantifies the interplay between representations, classifiers, and recognition performances. Based on data-describing measures which are aggregates of pairwise similarities of the training data, our model characterizes and describes the distributions of training exemplars. The measures are shown to capture many aspects of the difficulty of categorization problems and correlate significantly with the observed recognition performances. Utilizing these measures, the model predicts the performance of particular classifiers on distributions similar to the training data. These predictions, when compared to the test performance of the classifiers on the test sets, are reasonably accurate. We discuss various aspects of visual recognition problems: what is the interplay between representations and classification tasks, how can different models better adapt to the training data, etc. We describe and analyze the aforementioned methods that are designed to tackle different visual recognition problems, but share one common characteristic: being data-driven.
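As a simplified, low-dimensional stand-in for the cluster-initialized mixture of parametric classifiers described above, the sketch below clusters the positive examples and trains one linear classifier per component, scoring a sample by the maximum over components. It uses scikit-learn on synthetic 2-D data; the thesis works with image features and far more elaborate clustering and initialization.

```python
# Hedged sketch: a mixture of linear classifiers whose components are initialised
# by clustering the positive examples. Data and component count are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Positives form three "sub-categories"; negatives are background clutter.
X_pos, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)
X_neg = rng.normal(scale=6.0, size=(600, 2))
X = np.vstack([X_pos, X_neg])
y = np.r_[np.ones(len(X_pos)), np.zeros(len(X_neg))]

components = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_pos)
classifiers = []
for k in range(3):
    Xk = np.vstack([X_pos[components == k], X_neg])   # one component vs all negatives
    yk = np.r_[np.ones((components == k).sum()), np.zeros(len(X_neg))]
    classifiers.append(LogisticRegression(max_iter=1000).fit(Xk, yk))

# Mixture score = max over component scores; threshold at 0 for a hard decision.
scores = np.max([clf.decision_function(X) for clf in classifiers], axis=0)
print(f"training accuracy of the 3-component mixture: {np.mean((scores > 0) == y):.3f}")
```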
|
6 |
Sensor Validation Using Linear Parametric Models, Artificial Neural Networks and CUSUM / Sensorvalidering medelst linjära konfektionsmodeller, artificiella neurala nätverk och CUSUM
Norman, Gustaf, January 2015
Siemens gas turbines are monitored and controlled by a large number of sensors and actuators. Process information is stored in a database and used for offline calculations and analyses. Before storing the sensor readings, a compression algorithm checks the signal and skips values that do not represent a significant change; compression ratios of 90% are not unusual. Since data from the database are used for analyses, and decisions are made based on the results of those analyses, it is important to have a system for validating the data in the database: decisions made on false information can result in large economic losses. When this project was initiated, no sensor validation system was available. In this thesis the uncertainties in measurement chains are characterized, methods for fault detection are investigated, and the most promising methods are put to the test. Linear relationships between redundant sensors are derived, and the residuals form an influence structure that allows the faulty sensor to be isolated. Where redundant sensors are not available, a gas turbine model is used to establish the input-output relationships so that estimates of the sensor outputs can be formed. Linear parametric models and an ANN (artificial neural network) are developed to produce these estimates. Two techniques for the linear parametric models are evaluated: prediction and simulation. The residuals are also evaluated in two ways: directly against a threshold, and with the CUSUM (cumulative sum) algorithm. The results show that sensor validation using compressed data is feasible, and that faults as small as 1% of the measuring range can be detected in many cases.
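As an illustration of the residual-based change detection mentioned above, the sketch below implements a generic two-sided CUSUM test on a simulated sensor residual (measured minus model-predicted value). The drift and threshold values, the noise level, and the injected fault are illustrative, not the tuning used in the thesis.

```python
# Hedged sketch: two-sided CUSUM change detection on a simulated sensor residual.
import numpy as np

def cusum(residual: np.ndarray, drift: float, threshold: float):
    """Return the index of the first alarm, or None if no alarm is raised."""
    g_pos = g_neg = 0.0
    for t, r in enumerate(residual):
        g_pos = max(0.0, g_pos + r - drift)   # accumulates positive drifts
        g_neg = max(0.0, g_neg - r - drift)   # accumulates negative drifts
        if g_pos > threshold or g_neg > threshold:
            return t
    return None

rng = np.random.default_rng(7)
residual = rng.normal(0.0, 0.2, size=1000)
residual[600:] += 0.5          # a 0.5-unit sensor bias appears at sample 600
print("alarm at sample:", cusum(residual, drift=0.25, threshold=3.0))
```

Compared with direct thresholding of each residual sample, the accumulated statistic trades a short detection delay for much better sensitivity to small, persistent faults.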
|
7 |
Association Between Tobacco Related Diagnoses and Alzheimer Disease: A Population Study
Almalki, Amwaj Ghazi, 05 1900
Indiana University-Purdue University Indianapolis (IUPUI) / Background: Tobacco use is associated with an increased risk of developing Alzheimer's disease (AD); 14% of AD incidence is associated with various types of tobacco exposure. Additional real-world evidence is warranted to clarify the association between tobacco use and AD in age- and gender-specific subpopulations.
Method: In this thesis, the relationships between tobacco-related diagnoses and diagnoses of AD in gender- and age-specific subgroups were investigated using health information exchange data. The non-parametric Kaplan-Meier method was used to estimate the incidence of AD, and the log-rank test was used to compare incidence between individuals with and without tobacco-related diagnoses. In addition, semi-parametric Cox models were used to examine the association between tobacco-related diagnoses and AD diagnoses while adjusting for covariates.
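The analysis pipeline described above can be sketched with standard survival tooling. The following Python sketch uses the `lifelines` package on a simulated cohort with hypothetical variable names (not the health information exchange data): it estimates a Kaplan-Meier curve for the exposed group, compares exposure groups with a log-rank test, and fits a covariate-adjusted Cox model.

```python
# Hedged sketch: Kaplan-Meier incidence, log-rank comparison, and adjusted Cox model
# on a simulated cohort; effect sizes and follow-up are illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(11)
n = 2000
tobacco = rng.integers(0, 2, size=n)
age = rng.uniform(60, 90, size=n)
# Simulated years to AD diagnosis, with higher hazard for exposure and older age.
event_time = rng.exponential(scale=15.0 / np.exp(0.25 * tobacco + 0.03 * (age - 60)))
censor_time = rng.uniform(1.0, 12.0, size=n)
df = pd.DataFrame({
    "time": np.minimum(event_time, censor_time),
    "event": (event_time <= censor_time).astype(int),
    "tobacco": tobacco,
    "age": age,
})

exposed, unexposed = df[df.tobacco == 1], df[df.tobacco == 0]
km = KaplanMeierFitter().fit(exposed["time"], exposed["event"], label="tobacco dx")
print("5-year AD-free probability (exposed):", round(float(km.predict(5.0)), 3))

lr = logrank_test(exposed["time"], unexposed["time"],
                  event_observed_A=exposed["event"], event_observed_B=unexposed["event"])
print("log-rank p-value:", round(lr.p_value, 4))

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios for tobacco and age
```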
Results: A tobacco-related diagnosis was associated with an increased risk of developing AD, compared with no tobacco-related diagnosis, among individuals aged 60-74 years (female hazard ratio [HR] = 1.26, 95% confidence interval [CI]: 1.07-1.48, p-value = 0.005; male HR = 1.33, 95% CI: 1.10-1.62, p-value = 0.004). A tobacco-related diagnosis was associated with a decreased risk of developing AD, compared with no tobacco-related diagnosis, among individuals aged 75-100 years (female HR = 0.79, 95% CI: 0.70-0.89, p-value = 0.001; male HR = 0.90, 95% CI: 0.82-0.99, p-value = 0.023).
Conclusion: Tobacco-related diagnoses were associated with an increased risk of developing AD among older adults aged 60-74 years, and with a decreased risk of developing AD among older adults aged 75-100 years.
|