331

Ocenenie Volkswagen Group / Valuation of Volkswagen Group

Šusták, Tomáš January 2010
The objective of the thesis is to determine the intrinsic value of Volkswagen Group's equity. The starting point of the analysis is the segregation of the consolidated financial statements into a financial and a production division, which are valued separately. The production division is valued using both enterprise discounted cash flow and discounted economic profit analysis. An equity cash flow valuation is used to derive the value of the financial division. The results of the income-approach valuation are then compared with a market-multiples valuation.
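The two-step logic described here, discounting explicit forecasts and then adding a terminal value, can be sketched as follows; the cash flows, WACC and growth rate below are illustrative placeholders, not figures from the thesis.

```python
def enterprise_dcf(fcf, wacc, terminal_growth):
    """PV of explicit free cash flows plus a Gordon-growth terminal value."""
    pv_explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
    terminal = fcf[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    return pv_explicit + terminal / (1 + wacc) ** len(fcf)

# Illustrative inputs only (EUR millions), not the thesis's forecasts:
value = enterprise_dcf([5000, 5200, 5400], wacc=0.09, terminal_growth=0.02)
```

Equity value would then follow by subtracting net debt and adding the separately valued financial division.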
332

Bayesian approaches for the analysis of sequential parallel comparison design in clinical trials

Yao, Baiyun 07 November 2018
Placebo response, an apparent improvement in the clinical condition of patients randomly assigned to the placebo treatment, is a major issue in clinical trials on psychiatric and pain disorders. Properly addressing the placebo response is critical to an accurate assessment of the efficacy of a therapeutic agent. The Sequential Parallel Comparison Design (SPCD) is one approach for addressing the placebo response. An SPCD trial runs in two stages, re-randomizing placebo patients in the second stage; the analysis pools the data from both stages. In this thesis, we propose a Bayesian approach for analyzing SPCD data. Our primary proposed model overcomes some of the limitations of existing methods and offers greater flexibility in performing the analysis. We find that our model is on par with, or under certain conditions better than, existing methods at preserving the type I error rate and minimizing mean squared error. We further develop our model in two ways. First, through prior specification we provide three approaches to modeling the relationship between the treatment effects from the two stages, as opposed to arbitrarily specifying the relationship as was done in previous studies. Under proper specification these approaches have greater statistical power than the initial analysis and give accurate estimates of this relationship. Second, we revise the model to treat the placebo response as a continuous rather than a binary characteristic. The binary classification, which groups patients into “placebo responders” or “placebo non-responders”, can lead to misclassification, which can adversely impact the estimate of the treatment effect. As an alternative, we propose to view the placebo response in each patient as an unknown continuous characteristic, which is estimated and then used to measure the contribution (or weight) of each patient to the treatment effect.
Building upon this idea, we propose two different models which weight the contribution of placebo patients to the estimated second stage treatment effect. We show that this method is more robust against the potential misclassification of responders than previous methods. We demonstrate our methodology using data from the ADAPT-A SPCD trial.
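For context, the classical (non-Bayesian) SPCD analysis that this thesis builds on pools the two stage effects with a fixed weight; a minimal sketch, with an arbitrary weight and made-up effect sizes:

```python
def spcd_pooled_effect(delta1, delta2, w=0.6):
    """Weighted average of the stage-1 and stage-2 treatment effects.
    delta1: drug-vs-placebo effect in stage 1 (all patients);
    delta2: effect in stage 2 among re-randomized placebo non-responders;
    w: pooling weight, chosen here arbitrarily for illustration."""
    return w * delta1 + (1 - w) * delta2

effect = spcd_pooled_effect(delta1=2.0, delta2=1.0)  # 0.6*2.0 + 0.4*1.0 = 1.6
```

The thesis's Bayesian models replace this fixed pooling with prior-driven modeling of the relationship between the two stage effects.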
333

Development of Written Complexity and Accuracy in an Intermediate to Advanced German L2 Setting Using Weighted Clause Ratio

Gemini Fox (6634193) 11 June 2019
The primary focus of this study is to determine how clausal complexity and accuracy develop over three academic years of intermediate- to advanced-level German. The study aims to shed light on learner writing development during advanced stages of language acquisition, particularly after study abroad, by tracking the writing complexity and accuracy of multiple students longitudinally. It introduces the Weighted Clause Ratio (Foster & Wigglesworth, 2016) and situates it with respect to Skill Acquisition Theory (DeKeyser, 2007), the Interaction Hypothesis (Swain, 1985), and the Limited Attentional Capacity Theory (Skehan, 1998). It also discusses the impact of study abroad on the language-learning process, task complexity, and the language-learning plateau. Following a review of terminology, I discuss how the Weighted Clause Ratio is used to measure clausal accuracy and complexity. The data are analyzed in intervals throughout the three academic years, comparing each year with the others. Results indicate that accuracy increases markedly over the final two years compared with the change in the first two years, confirming the effect of study abroad on learners' written accuracy; complexity improved on some measures over the course of the study but varied on others. I conclude the thesis by discussing the implications of these findings for our understanding of writing complexity and accuracy and the long-term effects of study abroad.
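A hedged sketch of how a weighted clause ratio can be computed: each clause receives a weight reflecting its accuracy, and the ratio is the mean weight. The specific weights below are illustrative assumptions, not necessarily Foster & Wigglesworth's (2016) published scheme.

```python
# Per-clause accuracy weights: illustrative assumption, not the published scheme.
WEIGHTS = {"error_free": 1.0, "minor_errors": 0.8, "serious_errors": 0.5}

def weighted_clause_ratio(clauses):
    """clauses: one accuracy label per clause in the writing sample."""
    return sum(WEIGHTS[c] for c in clauses) / len(clauses)

wcr = weighted_clause_ratio(
    ["error_free", "minor_errors", "error_free", "serious_errors"]
)  # (1.0 + 0.8 + 1.0 + 0.5) / 4 = 0.825
```

A ratio of 1.0 would mean every clause is error-free, so rising values over the three years would track the accuracy gains the study reports.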
334

Comparação entre diferentes sequências de ressonância magnética na detecção de calcificações em pacientes portadores de neurocisticercose / Comparison between different magnetic resonance sequences in the detection of calcifications in patients with neurocysticercosis

Porto, Gislaine Cristina Lopes Machado 06 April 2018
Background: Neurocysticercosis (NCC) is the leading preventable cause of acquired epilepsy worldwide. Besides being the most common parasitic disease of the CNS, NCC is an important public health problem, especially in developing countries. Neuroimaging studies are crucial in the diagnosis and therapeutic planning of NCC. Although magnetic resonance imaging (MRI) provides more numerous and more detailed information about the disease, computed tomography (CT) is still the most sensitive method for detecting intracranial calcification, the most common radiological finding of NCC. Purpose: To compare the performance of susceptibility-weighted MRI sequences in identifying intracranial calcifications in patients with NCC. 
Methods: A prospective, single-center study in which 57 subjects underwent CT and MRI of the brain. All individuals came from the Infectious Diseases Outpatient Clinic of the Department of Neurology of the Hospital das Clínicas - Faculdade de Medicina da Universidade de São Paulo (HC-FMUSP), with an established diagnosis of NCC. The MRI protocol included a conventional 2D gradient echo sequence (2D-GRE) and two relatively new susceptibility-weighted sequences: susceptibility-weighted imaging (SWI) and principles of echo shifting with a train of observations (PRESTO). CT was considered the reference standard. Two neuroradiologists, blinded to clinical data and other radiological findings, independently analyzed the 2D-GRE, SWI and PRESTO sequences for the presence, number and location of intracranial calcifications attributed to NCC. 
Results: A total of 739 NCC-related calcified lesions were identified by CT in 50 of the 57 subjects included in the study. The mean number of calcified lesions per patient was 12.9 (± 19.8). The mean number of lesions found by the susceptibility-weighted sequences, obtained by averaging the observers' results, was 10.8 (± 17.5) for PRESTO, 10.6 (± 17.3) for SWI and 8.3 (± 13.6) for 2D-GRE. There was no statistically significant difference between PRESTO and SWI (p = 0.359), and both were superior to 2D-GRE (p < 0.05). Agreement was weak to moderate, probably due to the high number of false-positive lesions found (490), of which 53.9% represented NCC-related lesions in non-calcified stages. The sensitivity and specificity of the sequences in correctly identifying individuals with calcified-stage NCC were, respectively, 85% and 100% for 2D-GRE, 90% and 100% for SWI, and 93% and 100% for PRESTO. 
Conclusion: The SWI, PRESTO and 2D-GRE sequences show good sensitivity for identifying calcified lesions in patients with NCC. SWI and PRESTO performed better than 2D-GRE. All sequences studied are suitable for identifying individuals with NCC in the calcified stage. Susceptibility-weighted MRI sequences may help in understanding the natural history, pathophysiology and imaging findings of NCC.
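The reported per-sequence sensitivity and specificity follow the standard definitions; a minimal sketch in which only the cohort sizes (50 patients with calcified NCC, 7 without) match the study, while the reader counts are hypothetical:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 reader-vs-reference table."""
    return tp / (tp + fn), tn / (tn + fp)

# 50 patients calcified on CT, 7 not; hypothetical reader counts
# (45 detected, 5 missed, no false-positive patients):
sens, spec = sens_spec(tp=45, fn=5, tn=7, fp=0)  # 0.90, 1.0
```

With these counts the sketch reproduces a 90%/100% operating point like the one reported for SWI.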
335

Analyse des sensibilités des modèles internes de crédit pour l'étude de la variabilité des RWA / Sensitivity analysis of credit models for the assessment of the RWA variability

Sestier, Michael 04 October 2017
In the aftermath of the 2007-2009 crisis, several studies led by the Basel Committee showed a large dispersion of risk-weighted assets (RWA) among banks, a significant part of which stems from internal-model assumptions. Consequently, new regulations aiming to strike a balance between risk sensitivity, simplicity and comparability have been developed. These notably include constraints on models and parameters for the internal assessment of credit RWA for both the banking book and the trading book. In this context, this thesis mainly analyzes the relevance of such constraints for reducing RWA variability. It makes extensive use of sensitivity-analysis methods, particularly those based on the Hoeffding decomposition. The regulatory treatment of the credit parameters (default correlations, probabilities of default (PD) and loss given default (LGD)) forms the backbone of the developments. The findings suggest mixed results of the reforms. On the one hand, the constraints on correlations for the trading book have a low impact on RWA variability. On the other hand, the constraints on PD and LGD parameters, which have a greater impact on reducing RWA variability, should be considered with more caution. The studies finally provide evidence that variability is amplified by the regulatory risk measure and the multiple sources of calibration data.
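A minimal sketch of the kind of Hoeffding-based sensitivity measure used here: the first-order index S_i = Var(E[Y|X_i]) / Var(Y), estimated by a pick-freeze Monte Carlo scheme. The model below is a toy additive function, not a credit-RWA model.

```python
import random

def first_order_index(model, i, dim, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of S_i = Var(E[Y|X_i]) / Var(Y)
    for independent U(0,1) inputs."""
    rng = random.Random(seed)
    ys, yis = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(dim)]
        xi = [rng.random() for _ in range(dim)]
        xi[i] = x[i]  # freeze coordinate i, resample the others
        ys.append(model(x))
        yis.append(model(xi))
    mean = sum(ys) / n
    cov = sum((a - mean) * (b - mean) for a, b in zip(ys, yis)) / n
    var = sum((a - mean) ** 2 for a in ys) / n
    return cov / var

# Toy additive model Y = X0 + 2*X1: the exact first-order index of X1 is 4/5.
s1 = first_order_index(lambda x: x[0] + 2 * x[1], i=1, dim=2)
```

In the thesis the inputs would be model parameters such as correlations, PD and LGD, and Y a bank's RWA figure.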
336

Aplicação do modelo da soma-ponderada-de-gases-cinza a sistemas com superfícies não cinzas / Application of the weighted-sum-of-gray-gases model to systems with non-gray surfaces

Fonseca, Roberta Juliana Collet da January 2017
Thermal radiation is the main heat transfer mechanism in phenomena involving participating media at high temperatures, such as combustion processes. The strongly irregular dependence of the absorption coefficient on the wavenumber makes it challenging to study situations in which radiation is only part of a more complex problem. The accuracy of the radiation calculation depends on solving the radiative transfer equation (RTE) by line-by-line (LBL) integration, which is often impracticable because of the computational effort required to account for the hundreds of thousands or millions of spectral lines of the absorption coefficient. Alternatively, spectral models such as the weighted-sum-of-gray-gases (WSGG) model have been used effectively in place of LBL integration. In this dissertation, the WSGG model is applied to solve radiative heat transfer in a one-dimensional system formed by two infinite parallel flat plates and filled with a homogeneous mixture of carbon dioxide and water vapor, considering different temperature profiles. Unlike most studies in the literature that employ the same geometry but with black walls, the present work considers gray and non-gray surfaces. The central objective is therefore to evaluate the error incurred by assuming black boundaries when the surfaces do not behave that way. The results for the WSGG model applied to non-gray, gray and black surfaces are compared with the line-by-line solution for non-gray walls. Analysis of the deviations between the weighted-sum-of-gray-gases solutions and the LBL integration shows that assuming black walls, in cases where the surfaces should be treated as non-gray, can lead to errors of up to 50% in the heat flux and the radiative source term.
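The WSGG closure itself reduces to a weighted sum of gray-gas emissivities; a sketch with made-up coefficients, not a fitted correlation from the literature:

```python
import math

def wsgg_emissivity(weights, kappas, pL):
    """Total emissivity as a weighted sum of gray-gas contributions.
    weights: temperature-dependent factors a_i (summing to <= 1);
    kappas: pressure-based absorption coefficients [1/(atm m)];
    pL: partial-pressure path length [atm m]."""
    return sum(a * (1.0 - math.exp(-k * pL)) for a, k in zip(weights, kappas))

# Made-up placeholder coefficients for illustration:
eps = wsgg_emissivity([0.3, 0.2, 0.1], [0.5, 5.0, 50.0], pL=1.0)
```

The RTE is then solved once per gray gas, and the partial intensities are combined with the same weights; the remaining weight corresponds to transparent spectral windows.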
337

Équations de Stokes et d'Oseen en domaine extérieur avec diverses conditions aux limites. / Stokes and Oseen equations in an exterior domain with different boundary conditions.

Meslameni, Mohamed 01 March 2013
In this work, we study the steady-state linearized Navier-Stokes equations, namely the Stokes equations and the Oseen equations, posed in infinite domains such as exterior domains in dimension three and the whole space. The aim is to establish the existence of generalized and strong solutions in a general, not necessarily Hilbertian, framework; the case of very weak solutions is also treated. We consider not only classical Dirichlet boundary conditions but also non-standard boundary conditions imposed on some components of the velocity field, the vorticity, or even the pressure. Since the domain is unbounded, the classical Sobolev spaces are not adequate for such a geometry; a specific functional framework is needed that also accounts for the behaviour of functions at infinity. For a sound mathematical analysis, we therefore work in weighted Sobolev spaces, which in particular give better control of the behaviour of the solution at infinity.
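As a hedged illustration of the functional framework, a typical weighted Sobolev space used for exterior problems attaches a polynomial weight to the function but not to its gradient (this is a sketch of the standard definition, omitting the logarithmic factors required in critical cases):

```latex
W^{1,p}_{0}(\Omega) \;=\; \left\{\, u \in \mathcal{D}'(\Omega) \;:\;
  \frac{u}{\left(1+|x|^{2}\right)^{1/2}} \in L^{p}(\Omega), \quad
  \nabla u \in L^{p}(\Omega) \,\right\}
```

The weight forces decay at infinity on the function itself, which is what makes existence and uniqueness statements meaningful on unbounded domains.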
338

Geração de novas correlações da soma-ponderada-de-gases-cinza para H2O e CO2 em alta pressão / Generation of new weighted-sum-of-gray-gases correlations for H2O and CO2 at high pressure

Coelho, Felipe Ramos January 2017
Thermal radiation is often a very important heat transfer mechanism in high-pressure combustion processes, owing to the presence of participating media and the high temperatures involved. Solving thermal radiation in participating media is a difficult problem because of the integro-differential nature of the governing equation and the highly irregular spectral dependence of the radiative properties. Currently, the most accurate method for the spectral integration is the line-by-line (LBL) method, which has a very high computational cost. To avoid this drawback, the spectral problem is usually solved with spectral models, which simplify the radiative transfer equation (RTE). One such model is the weighted-sum-of-gray-gases (WSGG) model, which replaces the highly irregular spectral behavior of the absorption coefficient with bands of uniform absorption coefficients; despite its simplicity, it has shown good performance in many applications. Recently, however, some authors obtained poor results when applying the WSGG model to high-pressure combustion problems. This thesis develops a WSGG model for both CO2 and H2O under high-pressure conditions. To validate the model, the total emittance is calculated using the WSGG coefficients and compared with the LBL solution obtained from the HITEMP 2010 spectral database. The emittance values from the two methods agree closely, even at high pressures, for both CO2 and H2O, showing that the WSGG method is applicable to high-pressure conditions. The model was also validated by calculating the radiative heat flux and source term and comparing them with the LBL method. H2O gave better results at low pressures, while CO2 gave better results at higher pressures. The effect of total pressure on the LBL solution was larger for H2O, which may be one reason why the deviations were larger in the high-pressure cases.
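The coefficient-generation step, fitting WSGG coefficients so that the model emittance matches reference data, can be sketched as a small least-squares problem. The "reference" data below are synthetic, generated from known coefficients rather than from HITEMP, and the crude coordinate-descent refinement stands in for a proper nonlinear least-squares solver.

```python
import math

def model_emissivity(params, pL):
    """Two-gray-gas WSGG emittance: a1, a2 weights; k1, k2 coefficients."""
    a1, a2, k1, k2 = params
    return a1 * (1 - math.exp(-k1 * pL)) + a2 * (1 - math.exp(-k2 * pL))

def sse(params, data):
    """Sum of squared errors against reference (pL, emittance) pairs."""
    return sum((model_emissivity(params, pL) - eps) ** 2 for pL, eps in data)

# Synthetic "reference" emittances generated from known coefficients:
true_params = (0.4, 0.2, 1.0, 10.0)
data = [(pL, model_emissivity(true_params, pL)) for pL in (0.1, 0.5, 1.0, 2.0)]

# Crude coordinate descent from a perturbed starting guess:
params = [0.35, 0.25, 1.5, 8.0]
for _ in range(200):
    for j in range(len(params)):
        for step in (0.01, -0.01):
            trial = params[:]
            trial[j] += step
            if sse(trial, data) < sse(params, data):
                params = trial
```

In practice one would fit against LBL-computed emittances over a grid of temperatures, pressures and path lengths, with the weights a_i made polynomial functions of temperature.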
339

Modélisation des données d'enquêtes cas-cohorte par imputation multiple : application en épidémiologie cardio-vasculaire / Modeling of case-cohort data by multiple imputation : application to cardio-vascular epidemiology

Marti Soler, Helena 04 May 2012
The weighted estimators generally used for analyzing case-cohort studies are not fully efficient. Case-cohort surveys, however, are a special type of incomplete data in which the observation process is controlled by the study organizers, so methods for analyzing Missing At Random (MAR) data can be appropriate, in particular multiple imputation, which uses all the available information and approximates the partial maximum likelihood estimator. This approach is based on generating several plausible complete data sets that take into account the uncertainty about the missing values. It makes it easy to adapt any statistical tool available for cohort data, for instance estimators of the predictive ability of a model or of an additional variable, which raise specific problems with case-cohort data. We show that the imputation model must be estimated on all completely observed subjects (cases and non-cases), including the case indicator among the explanatory variables. We validated this approach with several sets of simulations: 1) completely simulated data, where the true parameter values were known; 2) case-cohort data simulated from the PRIME cohort, without any phase-1 variable (observed on all subjects) strongly predictive of the phase-2 variable (incompletely observed); 3) case-cohort data simulated from the NWTS cohort, where a phase-1 variable strongly predictive of the phase-2 variable was available. These simulations showed that multiple imputation generally provided unbiased estimates of the risk ratios. 
For the phase-1 variables, the estimates were almost as precise as those provided by the full cohort, slightly more precise than the calibrated estimator of Breslow et al., and clearly more precise than the classical weighted estimators. For the phase-2 variables, the multiple imputation estimator was generally unbiased, with precision better than the classical weighted estimators and similar to the calibrated estimator. The simulations based on the NWTS cohort data gave less satisfactory results for the effects involving the phase-2 variable: the multiple imputation estimators were slightly biased and less precise than the weighted estimators. This can be explained by the interaction terms involving the phase-2 variable in the analysis model, which require estimating separate imputation models in different strata of the cohort, some of which include too few cases for the asymptotic conditions to hold. We recommend using multiple imputation to obtain more precise estimates of the risk ratios, while checking that they are similar to those provided by the weighted analyses. Our simulations also showed that multiple imputation provided estimates of the predictive value of a model (Harrell's C) or of an additional variable (difference of C indices, NRI or IDI) similar to those obtained from the full cohort.
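The pooling step of multiple imputation (Rubin's rules) combines the M per-imputation estimates into one estimate and one variance; a minimal sketch with illustrative numbers, not results from the PRIME or NWTS analyses:

```python
def rubin_pool(estimates, variances):
    """Pool M point estimates and their within-imputation variances."""
    m = len(estimates)
    qbar = sum(estimates) / m                              # pooled estimate
    ubar = sum(variances) / m                              # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation variance
    return qbar, ubar + (1 + 1 / m) * b                    # total variance

# Five hypothetical log-risk-ratio estimates, each with variance 0.01:
est, var = rubin_pool([0.50, 0.55, 0.45, 0.52, 0.48], [0.01] * 5)
```

The between-imputation term is what propagates the uncertainty about the unobserved phase-2 values into the final confidence interval.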
340

A New Approach to Statistical Efficiency of Weighted Least Squares Fitting Algorithms for Reparameterization of Nonlinear Regression Models

Zheng, Shimin, Gupta, A. K. 01 April 2012
We study nonlinear least-squares problems that can be transformed into linear problems by a change of variables. We derive a general formula for the statistically optimal weights and prove that the resulting linear regression gives an optimal estimate (satisfying an analogue of the Cramér–Rao lower bound) in the limit of small noise.
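The classical special case, weights inversely proportional to the noise variances in a linear fit, can be sketched as follows; this illustrates the baseline that the paper's optimal-weight formula generalizes to reparameterized nonlinear models.

```python
def wls_line(xs, ys, sigmas):
    """Weighted least-squares fit of y ~ a + b*x with weights 1/sigma**2."""
    w = [1.0 / s ** 2 for s in sigmas]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw   # weighted mean of x
    my = sum(wi * y for wi, y in zip(w, ys)) / sw   # weighted mean of y
    b = (sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys))
         / sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)))
    return my - b * mx, b

# Noise-free sanity check: points on y = 1 + 2x are recovered exactly.
a, b = wls_line([0, 1, 2, 3], [1, 3, 5, 7], [1.0, 1.0, 1.0, 1.0])
```

In the transformed-nonlinear setting the noise variances of the new variables depend on the original data, which is why the optimal weights must be derived rather than assumed.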
