  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

L'évaluation du risque de récidive chez les agresseurs sexuels adultes / Assessing recidivism risk in adult sex offenders

Parent, Geneviève January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
62

Cost and Accuracy Comparisons in Medical Testing Using Sequential Testing Strategies

Ahmed, Anwar 14 May 2010 (has links)
Sequential testing is routinely evaluated for accuracy, but often not for cost. This research described and compared three sequential testing strategies: believe the negative (BN), believe the positive (BP), and believe the extreme (BE), the latter being a less-examined strategy. All three strategies combine the results of two medical tests to diagnose a disease or medical condition. The strategies were described in terms of accuracy (using the maximum receiver operating characteristic curve, or MROC) and cost of testing (defined as the proportion of subjects who need two tests to diagnose disease), with the goal of minimizing the number of tests per subject while maintaining test accuracy. It was shown that the cost of the test sequence can be reduced without sacrificing accuracy beyond an acceptable range by setting an acceptable tolerance (q) on maximum test sensitivity. This research introduced a newly developed ROC curve reflecting this reduced sensitivity and cost of testing, called the Minimum Cost Maximum Receiver Operating Characteristic (MCMROC) curve. Within these strategies, four parameters that could influence the performance of the combined tests were examined: the area under the curve (AUC) of each individual test, the ratio of standard deviations (b) of the assumed underlying disease and non-disease populations, the correlation (rho) between the underlying populations, and disease prevalence. The following patterns were noted: under all parameter settings, the MROC curve of the BE strategy never performed worse than those of the BN and BP strategies, and BE most frequently had the lowest cost. The parameters tended to affect the MROC and MCMROC curves less than the cost curves, which were affected greatly. The AUC values and the ratio of standard deviations both had a greater effect on the cost, MROC, and MCMROC curves than prevalence and correlation.
The use of BMI and plasma glucose concentration to diagnose diabetes in Pima Indians was presented as a real-world application of these strategies. The BN and BE strategies were found to be the most consistently accurate and least expensive choices.
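To illustrate the BN and BP combination rules and the cost measure described above, a minimal simulation sketch follows. All settings here (normal score distributions, cutoffs of 0.5, correlation 0.3, equal prevalence) are hypothetical choices for illustration, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two correlated test scores; disease shifts both up by 1.
n = 100_000
cov = [[1.0, 0.3], [0.3, 1.0]]                 # rho = 0.3 between the tests
healthy = rng.multivariate_normal([0.0, 0.0], cov, n)
disease = rng.multivariate_normal([1.0, 1.0], cov, n)
c1 = c2 = 0.5                                  # cutoffs for test 1 and test 2

def believe_the_positive(x):
    """Stop after test 1 if it is positive; otherwise test 2 decides."""
    t1_pos = x[:, 0] > c1
    final = t1_pos | (x[:, 1] > c2)            # positive if either test positive
    return final, np.mean(~t1_pos)             # cost: fraction needing test 2

def believe_the_negative(x):
    """Stop after test 1 if it is negative; otherwise test 2 decides."""
    t1_pos = x[:, 0] > c1
    final = t1_pos & (x[:, 1] > c2)            # positive only if both positive
    return final, np.mean(t1_pos)

for name, strategy in [("BP", believe_the_positive), ("BN", believe_the_negative)]:
    pos_d, cost_d = strategy(disease)
    pos_h, cost_h = strategy(healthy)
    sens, spec = pos_d.mean(), 1 - pos_h.mean()
    cost = 0.5 * (cost_d + cost_h)             # assumes 50% prevalence
    print(f"{name}: sensitivity={sens:.3f}  specificity={spec:.3f}  cost={cost:.3f}")
```

BP raises sensitivity at the expense of specificity, and BN the reverse; BE would add a second pair of cutoffs on test 1 so that only intermediate scores trigger the second test.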
63

Contribution à l'évaluation de capacités pronostiques en présence de données censurées, de risques concurrents et de marqueurs longitudinaux : inférence et applications à la prédiction de la démence / Contribution to the evaluation of prognostic abilities in presence of censored data, competing risks and longitudinal markers : inference and applications to dementia prediction

Blanche, Paul 10 December 2013 (has links)
The objective of this work is to develop statistical methods to evaluate and compare the prognostic ability of different prognostic tools. Prognostic ability is measured mainly through the time-dependent ROC curve, and also through the Brier score, both for a prediction horizon t. Motivated by applications where the aim is to predict the risk of dementia from cohort data of elderly people, this work focuses on inference procedures in the presence of right censoring and competing risks: in elderly populations, death without dementia is a highly prevalent competing risk. To define consistent estimators of the prediction-ability measures, we use the inverse probability of censoring weighting (IPCW) approach. In the first work, we show that the IPCW approach provides consistent estimators of prediction ability from right-censored data, even when the censoring distribution depends on the prognostic tool under study. In the second work, we adapt the estimators to settings with competing risks. Asymptotic results are provided and used to derive confidence regions and tests for comparing different prognostic tools. Finally, a third work focuses on comparing dynamic prognostic tools, which use information from repeated marker measurements to predict future events. The prognostic-ability measures now depend both on the time s at which predictions are made and on the prediction horizon t. Curves of prognostic ability as a function of s are proposed for the evaluation of dynamic risk predictions, and inference procedures are developed to construct confidence regions and tests for comparing these curves. Applying the proposed methods to cohort data shows that prognostic tools based on cognitive tests, or on repeated measurements of cognitive tests, have good prognostic ability.
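A minimal sketch of the IPCW idea behind these estimators, for a cumulative/dynamic time-dependent AUC(t): cases and controls are reweighted by the inverse of a Kaplan-Meier estimate of the censoring survival function. This simplified version assumes censoring weights that do not depend on the marker, unlike the marker-dependent weights the work also considers.

```python
import numpy as np

def km_censoring_survival(time, event, t):
    """Kaplan-Meier estimate of the censoring survival G(t) = P(C > t).
    Censoring plays the role of the 'event' here, so the indicator is 1 - event."""
    order = np.argsort(time)
    time, cens = time[order], 1 - event[order]
    n = len(time)
    at_risk = n - np.arange(n)                 # subjects still at risk at each time
    surv = np.cumprod(1 - cens / at_risk)
    idx = np.searchsorted(time, t, side="right") - 1
    return 1.0 if idx < 0 else surv[idx]       # step function evaluated at t

def ipcw_auc(time, event, marker, t):
    """IPCW estimator of the cumulative/dynamic time-dependent AUC(t)."""
    case = (time <= t) & (event == 1)          # event before horizon t
    ctrl = time > t                            # still event-free at t
    w_case = np.array([1.0 / km_censoring_survival(time, event, ti)
                       for ti in time[case]])
    w_ctrl = 1.0 / km_censoring_survival(time, event, t)
    num = den = 0.0
    for mi, wi in zip(marker[case], w_case):
        conc = (mi > marker[ctrl]).sum() + 0.5 * (mi == marker[ctrl]).sum()
        num += wi * w_ctrl * conc              # weighted concordant pairs
        den += wi * w_ctrl * ctrl.sum()        # weighted total pairs
    return num / den
```

With no censoring all weights equal one and the estimator reduces to the usual empirical AUC among cases and controls at horizon t.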
64

Formação de indicadores para a psicopatologia do Luto / Training indicators for the psychopathology of mourning

Alves, Tania Maria 05 December 2014 (has links)
Background: Complicated grief is characterized by persistent yearning for the deceased and intense sorrow and emotional pain in response to the death of a loved one, causing significant distress. It is often under-recognized and undertreated. The Texas Revised Inventory of Grief (TRIG) is a questionnaire with demonstrated high validity and reliability in the assessment of grief. Our objective was to translate, adapt, and validate the TRIG for Brazilian Portuguese and to verify whether, in a bereaved population, it can distinguish between those with and those without complicated grief, as well as to identify which elements of the scale contribute to this. Methods: The work was carried out in two stages: a) cross-cultural adaptation of the questionnaire, and b) a cross-sectional study of reliability and validity. Setting and participants: 165 adult patients were recruited from a) the Grief Outpatient Clinic at the Department and Institute of Psychiatry, University of São Paulo, b) private practice at the same department, and c) co-workers who had lost a loved one. All patients were interviewed with the TRIG; according to clinical criteria, 69 of the 165 bereaved patients presented complicated grief.
Results: Cross-cultural adaptation: the TRIG was translated from American English, back-translated, and finally compared with the Brazilian Portuguese version by two bilingual psychiatrists. Reliability: Cronbach's alpha coefficients (internal consistency) of the TRIG scales were 0.735 (part I) and 0.896 (part II). Sensitivity, specificity, and cutoff points for identifying complicated and non-complicated grief were measured using the ROC curve. Using a total-score cutoff of 104 (parts I + II + III + psychographic variables), 71.3% of individuals with and without complicated grief were correctly classified. Construct validity was assessed by exploratory and confirmatory factor analysis. Furthermore, logistic regression showed that low education level, age of the deceased, age of the bereaved, loss of a son or daughter, and unexpected death were all risk factors for complicated grief. Our results also suggest that religion may influence complicated grief. Conclusions: The TRIG adapted for Brazilian Portuguese is as reliable and valid as the original version. In the evaluation of Brazilian bereaved individuals, it distinguished those with and without complicated grief. We suggest a cutoff value of 104 for complicated grief.
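The internal-consistency figures quoted above (0.735 and 0.896) are Cronbach's alpha coefficients. A minimal sketch of the standard formula, assuming item scores arranged as a subjects-by-items matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1).sum() # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly parallel items give alpha = 1; uncorrelated items push alpha toward 0.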
65

Acurácia diagnóstica da variação da pressão de pulso mensurada em artéria periférica para predição de diferentes aumentos do volume sistólico em resposta ao desafio volêmico em cães / Diagnostic accuracy of pulse pressure variation measured in a peripheral artery for predicting different increases in stroke volume in response to a fluid challenge in dogs

Dalmagro, Tábata Larissa. January 2019 (has links)
Advisor: Francisco José Teixeira-Neto / Objective: To determine the accuracy of pulse pressure variation (PPV) measured from a peripheral artery in predicting different percentage increases in stroke volume induced by a fluid challenge in dogs. Methods: Thirty-nine adult bitches (19.3 ± 3.6 kg) undergoing elective ovariohysterectomy were included. Anesthesia was maintained with isoflurane under volume-controlled ventilation (tidal volume 12 mL/kg; inspiratory pause during 40% of inspiratory time; inspiration:expiration ratio 1:1.5). Cardiac output was obtained by transpulmonary thermodilution (femoral artery catheter) and PPV was measured from a dorsal pedal artery catheter. Fluid responsiveness (FR) was evaluated with a fluid challenge of lactated Ringer's solution (LRS, 20 mL/kg over 15 minutes) administered once (n = 21) or twice (n = 18) before surgery. Receiver operating characteristic (ROC) curve analysis and the zone of diagnostic uncertainty (gray zone) of PPV cutoff thresholds were used to evaluate the ability of PPV to discriminate responders to the last fluid challenge, defined by different percentage increases in stroke volume index (SVI) measured by transpulmonary thermodilution (SVI > 10% to SVI > 25%, in 5% increments). Results: The numbers of responders to the last fluid challenge were 25 (SVI > 10%), 21 (SVI > 15%), 18 (SVI > 20%), and 14 (SVI > 25%). The area under the ROC curve (AUROC) of PPV was 0.897 (SVI > 10%), 0.968 (SVI > 15%), 0.923 (SVI > 20%), and 0.891 (SVI > 25%) (p < 0.0001 vs. AUROC = 0.5). Gray zones of PPV cutoff ... (Complete abstract: click electronic access below) / Mestre
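PPV is conventionally derived from the maximum and minimum beat-to-beat pulse pressures over one respiratory cycle. A minimal sketch under that convention (the per-beat input format is an assumption for illustration, not the monitoring setup used in the study):

```python
import numpy as np

def pulse_pressure_variation(pp_beats):
    """PPV (%) over one respiratory cycle, given per-beat pulse pressures
    (systolic minus diastolic pressure, mmHg) for the beats in that cycle:
    PPV = 100 * (PPmax - PPmin) / mean(PPmax, PPmin)."""
    pp = np.asarray(pp_beats, dtype=float)
    pp_max, pp_min = pp.max(), pp.min()
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)
```

For example, beat pulse pressures of 40, 36, 30, and 34 mmHg across a breath give a PPV of 100 × 10 / 35 ≈ 28.6%, well above typical fluid-responsiveness thresholds.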
66

Probabilidade de controle tumoral: modelos e estatísticas / Tumor Control Probability: Models and Statistics

Santos, Mairon Marques dos 28 November 2014 (has links)
Radiobiological modeling allows one to predict the efficacy of radiotherapy treatments, specifying protocols and strategies for treating patients with cancer. Many mathematical models have been proposed to evaluate the Tumor Control Probability (TCP). In this thesis we first present a study, carried out in collaboration with researchers at the University of Alberta, Canada, comparing the TCPs obtained by Monte Carlo simulations and from the Poissonian, Zaider-Minerbo (ZM), and Dawson-Hillen (DH) models. Results show that, for low-proliferation tumors, using the Poissonian model to indicate the treatment protocol is as effective as the Monte Carlo method or the more sophisticated models (ZM and DH). In the second part of the thesis, we propose a statistical test, based on Monte Carlo simulations of the DH TCP model, to determine the capacity to predict tumor eradication (cure). We obtain the ROC curve of the test from the probability distributions of the fraction of remaining tumor cells under the conditions of cure and non-cure. Results show that the method can also be applied to clinical data, suggesting that evaluating tumor size at the beginning of radiotherapy allows a short-term prognosis of the treatment.
In the third part of the thesis, we study the surviving fraction (SF) of tumor cells as a function of the radiation dose to which they are subjected. In the literature, this surviving fraction has been formulated through the Linear-Quadratic (LQ) model and, more recently, through Tsallis non-extensive statistics. We evaluate the behaviour of both formulations in terms of SF fits to experimental data from the literature (cells cultivated in vitro for several tumoral tissues), thereby extending previous studies. The SF parameters for both formulations are obtained, and the quality of the SF fits to experimental data is compared using the reduced chi-square. Results show that, in general, both formulations yield good fits of the SF curves. Furthermore, we use Tsallis non-extensive statistics to obtain the ZM TCP as a function of dose, expressing it analytically in terms of the Gamma function (for a dose profile typical of external-beam radiation) and the Hypergeometric function (for a dose profile typical of brachytherapy). Finally, the curves of the corresponding TCPs are plotted using experimental data and compared with the TCPs obtained from the LQ model.
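The Linear-Quadratic surviving fraction and the Poissonian TCP mentioned above can be sketched as follows; the parameter values in the usage note are illustrative only, not fits from the thesis.

```python
import numpy as np

def lq_surviving_fraction(dose, alpha, beta):
    """LQ model: fraction of cells surviving a single dose (Gy),
    SF(D) = exp(-alpha*D - beta*D^2)."""
    return np.exp(-alpha * dose - beta * dose**2)

def poisson_tcp(n_cells, dose_per_fraction, n_fractions, alpha, beta):
    """Poissonian TCP: probability that no clonogenic cell survives,
    TCP = exp(-N * SF_total), with SF compounded over identical fractions."""
    sf_total = lq_surviving_fraction(dose_per_fraction, alpha, beta) ** n_fractions
    return np.exp(-n_cells * sf_total)
```

With, say, 10^7 clonogens, alpha = 0.3 Gy^-1, beta = 0.03 Gy^-2, and 2 Gy fractions, TCP rises steadily as fractions are added, reproducing the familiar sigmoid dose-response.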
68

Monothermal Caloric Screening Test Performance: A Relative Operating Characteristic Curve Analysis

Murnane, Owen D., Akin, Faith W., Lynn, Susan G., Cyr, David G. 01 July 2009 (has links)
Objective: The objective of the present study was to evaluate the performance of the monothermal caloric screening test in a large sample of patients. Design: A retrospective analysis of the medical records of 1002 consecutive patients who had undergone vestibular assessment at the Mayo Clinic during the years 1989 and 1990 was conducted. Patients with incomplete alternate binaural bithermal (ABB) caloric testing, congenital or periodic alternating nystagmus, or bilateral vestibular loss were excluded from the study. Clinical decision theory analyses (relative operating characteristic curves) were used to determine the accuracy with which the monothermal warm (MWST) and monothermal cool (MCST) caloric screening tests predicted the results of the ABB caloric test. Cumulative distributions were constructed as a function of the cutoff points for monothermal interear difference (IED) to select the cutoff point associated with any combination of true-positive and false-positive rates. Results: Both MWST and MCST performed well above chance level. The test performance for the MWST was significantly better than that of the MCST for three of the four ABB gold standards. A 10% IED cutoff point for the MWST yielded a false-negative rate of either 1% (UW ≥25%) or 3% (UW ≥20%). The use of a 10% IED (UW ≥25%) for the MWST would have resulted in a 40% reduction (N = 294) in the number of ABB caloric tests performed on patients without a unilateral weakness. Conclusions: The results of this study indicated that the MWST decreases test time without sacrificing the sensitivity of the ABB caloric test.
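The IED is an asymmetry ratio between the two ears' caloric responses for a single temperature; the exact formula is not given in this abstract, so the Jongkees-style version below is an assumption for illustration.

```python
def interear_difference(right_response, left_response):
    """Assumed monothermal interear difference (%), Jongkees-style:
    asymmetry of the peak slow-phase eye velocities (deg/s) evoked in each
    ear by a single caloric temperature."""
    return 100.0 * abs(right_response - left_response) / (right_response + left_response)

def refer_for_bithermal(right_response, left_response, cutoff=10.0):
    """Screening decision: refer for the full ABB caloric test when the
    monothermal IED reaches the cutoff (10% in the study's best-performing rule)."""
    return interear_difference(right_response, left_response) >= cutoff
```

For example, peak responses of 30 and 20 deg/s give an IED of 20%, which would trigger referral under the 10% rule.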
69

Paramètres cliniques, électroencéphalograhiques et biologiques pour optimiser les critères diagnostiques de la narcolepsie / Clinical, electroencephalographic and biological parameters to optimise narcolepsy diagnostic criteria

Andlauer, Olivier 11 December 2014 (has links)
Narcolepsy is a rare disease, affecting about one person in 2,000. It is characterised by excessive daytime sleepiness, episodes of cataplexy, sleep paralysis, hypnagogic hallucinations, and sleep fragmentation. Narcolepsy without cataplexy is a heterogeneous subtype. The diagnosis can be established clinically, but most of the time a Multiple Sleep Latency Test (MSLT), preceded by nocturnal polysomnography (NPSG), is used. The cause of most cases of narcolepsy with cataplexy was discovered in the early 2000s: the destruction, probably autoimmune, of the hypocretin neurons of the hypothalamus. Hypocretin deficiency measured at lumbar puncture is now a gold standard for diagnosis, which offers the opportunity to optimise the current criteria and to test new diagnostic hypotheses against this reference. Few studies have focused specifically on narcolepsy without cataplexy and its diagnosis, so we sought to identify predictors of hypocretin deficiency in this condition. Moreover, in narcolepsy with cataplexy, a short REM sleep latency at NPSG had never been evaluated as a diagnostic criterion using hypocretin deficiency as the gold standard, so we assessed its diagnostic utility and optimal cut-off. To conduct this research, we initiated and contributed to the development of the ROC (Receiver Operating Characteristic) analysis software SoftROC. In narcolepsy without cataplexy, objective (NPSG and MSLT) rather than clinical parameters differed between patients with low and normal hypocretin levels. In narcolepsy with cataplexy, we established that a short (< 15 minutes) REM sleep latency at NPSG was a specific, but not sensitive, diagnostic test for narcolepsy with hypocretin deficiency. Our results contributed to the revision of the international classifications of sleep disorders.
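The "< 15 minutes" REM-latency rule above is a simple binary test; a sketch of how its sensitivity and specificity are computed from labelled data (the numbers in the usage note are invented):

```python
def binary_test_performance(values, is_deficient, cutoff):
    """Sensitivity and specificity of the rule 'value < cutoff => positive',
    e.g. NPSG REM sleep latency (minutes) < 15 as a marker of hypocretin
    deficiency. `is_deficient` holds the gold-standard labels."""
    tp = sum(1 for v, d in zip(values, is_deficient) if d and v < cutoff)
    fn = sum(1 for v, d in zip(values, is_deficient) if d and v >= cutoff)
    tn = sum(1 for v, d in zip(values, is_deficient) if not d and v >= cutoff)
    fp = sum(1 for v, d in zip(values, is_deficient) if not d and v < cutoff)
    return tp / (tp + fn), tn / (tn + fp)
```

With invented latencies of 5, 10, and 20 minutes for deficient patients and 30, 100, and 120 minutes for the others, the 15-minute cutoff is perfectly specific but misses one deficient patient, mirroring the "specific but not sensitive" pattern reported above.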
70

Multiple hypothesis testing and multiple outlier identification methods

Yin, Yaling 13 April 2010
Traditional multiple hypothesis testing procedures, such as that of Benjamini and Hochberg, fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this thesis it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses, as proposed by Black, gives a procedure with superior power.

Multiple hypothesis testing can also be applied to regression diagnostics. In this thesis, a Bayesian method is proposed to test multiple hypotheses, of which the i-th null hypothesis is that the i-th observation is not an outlier, versus the alternative that it is, for i=1,...,m. In the proposed Bayesian model, it is assumed that outliers have a mean shift, where the proportion of outliers and the mean shift respectively follow a Beta prior distribution and a normal prior distribution. It is proved in the thesis that for the proposed model, when there exists more than one outlier, the marginal distributions of the deletion residual of the i-th observation under both null and alternative hypotheses are doubly noncentral t distributions.
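The Benjamini-Hochberg step-up rule compared in the first part of this abstract is standard: sort the m p-values and reject the k smallest, where k is the largest index whose ordered p-value falls below k·alpha/m. A minimal sketch (an illustration of the textbook procedure, not code from the thesis):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure controlling FDR at level alpha.
    Returns a boolean array: True where the hypothesis is rejected."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                      # indices sorting p ascending
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds             # which ordered p-values pass
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))  # largest passing rank (0-based)
        reject[order[:k + 1]] = True           # reject the k+1 smallest p-values
    return reject
```

Storey's fixed-rejection-region approach instead fixes a p-value threshold and estimates the FDR achieved there; the thesis's power comparison sets the two procedures to the same realised FDR.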
The outlyingness of the i-th observation is measured by the marginal posterior probability that the i-th observation is an outlier given its deletion residual. An importance sampling method is proposed to calculate this probability. This method requires the computation of the density of the doubly noncentral F distribution, and this is approximated using Patnaik's approximation. An algorithm is proposed in this thesis to examine the accuracy of Patnaik's approximation. The comparison of this algorithm's output with Patnaik's approximation shows that the latter can save massive computation time without losing much accuracy.

The proposed Bayesian multiple outlier identification procedure is applied to some simulated data sets. Various simulation and prior parameters are used to study the sensitivity of the posteriors to the priors. The area under the ROC curve (AUC) is calculated for each combination of parameters. A factorial design analysis on AUC is carried out by choosing various simulation and prior parameters as factors. The resulting AUC values are high for various selected parameters, indicating that the proposed method can identify the majority of outliers within tolerable errors. The results of the factorial design show that the priors do not have much effect on the marginal posterior probability as long as the sample size is not too small.

In this thesis, the proposed Bayesian procedure is also applied to a real data set obtained by Kanduc et al. in 2008. The proteomes of thirty viruses examined by Kanduc et al. are found to share a high number of pentapeptide overlaps with the human proteome. In a linear regression analysis of the level of viral overlaps with the human proteome and the length of the viral proteome, it is reported by Kanduc et al. that among the thirty viruses, human T-lymphotropic virus 1, Rubella virus, and hepatitis C virus present relatively higher levels of overlaps with the human proteome than the predicted level of overlaps.
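The AUC used above to summarise detection performance has a well-known interpretation: it equals the probability that a randomly chosen true outlier receives a higher outlyingness score than a randomly chosen non-outlier (the Mann-Whitney statistic). A small sketch of that computation on illustrative scores (not the thesis's data):

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive (true outlier)
    outscores a negative (non-outlier); ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical posterior outlier probabilities for 3 true outliers vs 3 clean points.
auc = auc_mann_whitney([0.9, 0.8, 0.4], [0.5, 0.3, 0.2])
```

An AUC near 1 means the posterior probabilities rank essentially all true outliers above the clean observations, which is the pattern the factorial design reported across parameter settings.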
The results obtained using the proposed procedure indicate that the four viruses with extremely large proteomes (Human herpesvirus 4, Human herpesvirus 6, Variola virus, and Human herpesvirus 5) are more likely to be outliers than the three reported viruses. The results with the four extreme viruses deleted confirm the claim of Kanduc et al.
