  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Modélisation des bi-grappes et sélection des variables pour des données de grande dimension : application aux données d’expression génétique

Chekouo Tekougang, Thierry 08 1900 (has links)
Les simulations ont été implémentées en Java. / Le regroupement des données est une méthode classique pour analyser les matrices d'expression génétique. Lorsque le regroupement est appliqué sur les lignes (gènes), chaque colonne (conditions expérimentales) appartient à toutes les grappes obtenues. Cependant, il est souvent observé que des sous-groupes de gènes ne sont co-régulés (c'est-à-dire avec des expressions similaires) que sous un sous-groupe de conditions. Ainsi, les techniques de bi-regroupement ont été proposées pour révéler ces sous-matrices de gènes et de conditions. Un bi-regroupement est donc un regroupement simultané des lignes et des colonnes d'une matrice de données. La plupart des algorithmes de bi-regroupement proposés dans la littérature n'ont pas de fondement statistique. Il est cependant intéressant de porter attention aux modèles sous-jacents à ces algorithmes et de développer des modèles statistiques permettant d'obtenir des bi-grappes significatives. Dans cette thèse, nous faisons une revue de littérature des algorithmes qui semblent être les plus populaires. Nous groupons ces algorithmes en fonction du type d'homogénéité dans la bi-grappe et du type d'imbrication que l'on peut rencontrer, et nous mettons en lumière les modèles statistiques qui peuvent les justifier. Il s'avère que certaines techniques peuvent être justifiées dans un contexte bayésien. Nous développons une extension du modèle à carreaux (plaid) de bi-regroupement dans un cadre bayésien et nous proposons une mesure de la complexité du bi-regroupement. Le critère d'information de déviance (DIC) est utilisé pour choisir le nombre de bi-grappes. Les études sur les données d'expression génétique et les données simulées ont produit des résultats satisfaisants. À notre connaissance, les algorithmes de bi-regroupement existants supposent que les gènes et les conditions expérimentales sont des entités indépendantes. 
Ces algorithmes n'incorporent pas d'information biologique a priori que l'on peut avoir sur les gènes et les conditions. Nous introduisons un nouveau modèle bayésien à carreaux pour les données d'expression génétique qui intègre les connaissances biologiques et prend en compte les interactions par paires entre les gènes et entre les conditions à travers un champ de Gibbs. La dépendance entre ces entités est modélisée à partir de graphes relationnels, l'un pour les gènes et l'autre pour les conditions. Le graphe des gènes et celui des conditions sont construits par les k plus proches voisins et permettent de définir la distribution a priori des étiquettes comme des modèles auto-logistiques. Les similarités entre gènes se calculent en utilisant l'ontologie des gènes (GO). L'estimation est faite par une procédure hybride qui combine les méthodes MCMC avec une variante de l'algorithme de Wang-Landau. Les expériences sur les données simulées et réelles montrent la performance de notre approche. Il est à noter qu'il peut exister plusieurs variables de bruit dans les données de micro-puces, c'est-à-dire des variables qui ne sont pas capables de discriminer les groupes; ces variables peuvent masquer la vraie structure du regroupement. Nous proposons un modèle inspiré de celui à carreaux qui retrouve simultanément la vraie structure de regroupement et identifie les variables discriminantes. Il suppose une superposition additive des grappes, c'est-à-dire qu'une observation peut être expliquée par plus d'une grappe. Ce problème est traité en utilisant un vecteur latent binaire, et l'estimation est obtenue via l'algorithme EM de Monte Carlo. L'échantillonnage préférentiel (importance sampling) est utilisé pour réduire le coût computationnel de l'échantillonnage Monte Carlo à chaque étape de l'algorithme EM. Les exemples numériques démontrent l'utilité de nos méthodes en termes de sélection de variables et de regroupement. 
/ Clustering is a classical method used to analyse gene expression data. When it is applied to the rows (genes), each column (experimental condition) belongs to all of the resulting clusters. However, it is often observed that a subset of genes is co-regulated and co-expressed only under a subset of conditions, while behaving almost independently under the other conditions. For these reasons, biclustering techniques have been proposed to look for sub-matrices of a data matrix; biclustering is a simultaneous clustering of the rows and columns of a data matrix. Most of the biclustering algorithms proposed in the literature have no statistical foundation, so it is worthwhile to examine the models underlying these algorithms and to develop statistical models that yield significant biclusters. In this thesis, we review the biclustering algorithms that appear to be the most popular. We group these algorithms according to the type of homogeneity in the bicluster and the type of overlapping that may be encountered, and we shed light on the statistical models that can justify these algorithms. It turns out that some techniques can be justified in a Bayesian framework. We develop an extension of the plaid biclustering model in a Bayesian framework and we propose a measure of complexity for biclustering. The deviance information criterion (DIC) is used to select the number of biclusters. Studies on gene expression data and simulated data give satisfactory results. To our knowledge, existing biclustering algorithms assume that genes and experimental conditions are independent entities; they do not incorporate prior biological information that may be available on genes and conditions. We introduce a new Bayesian plaid model for gene expression data which integrates biological knowledge and takes into account the pairwise interactions between genes and between conditions via a Gibbs field. Dependence between these entities is modelled through relational graphs, one for genes and another for conditions. 
The gene graph and the condition graph are constructed by the k-nearest-neighbors method and allow the prior distribution of the labels to be defined as auto-logistic models. Gene similarities are calculated using the Gene Ontology (GO). To estimate the parameters, we adopt a hybrid procedure that mixes MCMC with a variant of the Wang-Landau algorithm. Experiments on simulated and real data show the performance of our approach. It should be noted that microarray data may contain several noise variables, that is, variables unable to discriminate between the groups; these variables may mask the true clustering structure. Inspired by the plaid model, we propose a model that simultaneously finds the true clustering structure and identifies the discriminating variables. It assumes an additive superposition of clusters, so that an observation can be explained by more than one cluster. The problem is addressed by using a binary latent vector, and the estimation is obtained via the Monte Carlo EM algorithm; importance sampling is used to reduce the computational cost of the Monte Carlo sampling at each step of the EM algorithm. Numerical examples demonstrate the usefulness of these methods in terms of variable selection and clustering.
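The additive plaid decomposition described in this abstract (an entry is a background effect plus contributions from every bicluster that contains both its row and its column) can be made concrete in a few lines. A minimal sketch, not code from the thesis; the function name and all numerical values are invented for a toy example:

```python
# Plaid model: y[i][j] = mu0 + sum over biclusters k of
#   (mu[k] + alpha[k][i] + beta[k][j]) * rho[k][i] * kappa[k][j],
# where rho[k][i] (resp. kappa[k][j]) indicates whether row i (column j)
# belongs to bicluster k. Overlap means one entry can receive
# contributions from several biclusters at once.

def plaid_value(i, j, mu0, mu, alpha, beta, rho, kappa):
    """Reconstruct one matrix entry as background plus bicluster layers."""
    total = mu0
    for k in range(len(mu)):
        if rho[k][i] and kappa[k][j]:
            total += mu[k] + alpha[k][i] + beta[k][j]
    return total

# Toy 3x3 expression matrix with two overlapping biclusters.
mu0 = 1.0                                      # background level
mu = [2.0, 0.5]                                # bicluster means
alpha = [[0.1, 0.2, 0.0], [0.0, -0.1, 0.3]]    # row (gene) effects
beta = [[0.0, 0.4, 0.0], [0.2, 0.0, 0.0]]      # column (condition) effects
rho = [[1, 1, 0], [0, 1, 1]]    # bicluster 0: rows 0-1; bicluster 1: rows 1-2
kappa = [[1, 1, 0], [1, 0, 0]]  # bicluster 0: cols 0-1; bicluster 1: col 0

# Entry (1, 0) lies in both biclusters, so both layers add to the background.
print(plaid_value(1, 0, mu0, mu, alpha, beta, rho, kappa))
```

In the Bayesian extension, the membership indicators rho and kappa become random labels whose prior (auto-logistic over the k-nearest-neighbor graphs) encodes the biological similarity information.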
73

Projection de la mortalité aux âges avancés au Canada : comparaison de trois modèles

Tang, Kim Oanh January 2009 (has links)
Mémoire numérisé par la Division de la gestion de documents et des archives de l'Université de Montréal
74

Patient and other factors influencing the prescribing of cardiovascular prevention therapy in the general practice setting with and without nurse assessment

Mohammed, Mohammed A., El Sayed, C., Marshall, T. January 2012 (has links)
No / Although guidelines indicate when patients are eligible for antihypertensives and statins, little is known about whether general practitioners (GPs) follow this guidance. OBJECTIVE: To determine the factors influencing GPs' decisions to prescribe cardiovascular prevention drugs. DESIGN OF STUDY: Secondary analysis of data collected on patients whose cardiovascular risk factors were measured as part of a controlled study comparing nurse-led risk assessment (four practices) with GP-led risk assessment (two practices). SETTING: Six general practices in the West Midlands, England. PATIENTS: Five hundred patients: 297 assessed by the project nurse, 203 assessed by their GP. MEASUREMENTS: Cardiovascular risk factor data and whether statins or antihypertensives were prescribed. Multivariable logistic regression models investigated the relationship between prescription of preventive treatments and cardiovascular risk factors. RESULTS: Among patients assessed by their GP, statin prescribing was significantly associated only with a total cholesterol concentration ≥7 mmol/L, and antihypertensive prescribing only with blood pressure ≥160/100 mm Hg. Patients prescribed an antihypertensive by their GP were five times more likely to be prescribed a statin. Among patients assessed by the project nurse, statin prescribing was significantly associated with age, sex, and all major cardiovascular risk factors, and antihypertensive prescribing with blood pressures ≥140/90 mm Hg and with 10-year cardiovascular risk. LIMITATIONS: Generalizability is limited, as this is a small analysis in the context of a specific cardiovascular prevention program. CONCLUSIONS: GP prescribing of preventive treatments appears to be largely determined by elevation of a single risk factor. When patients were assessed by the project nurse, prescribing was much more consistent with established guidelines.
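The multivariable logistic models used in this analysis relate the odds of prescribing to the measured risk factors. A minimal sketch of how such a fitted model scores one patient; the coefficient values and variable names below are hypothetical, chosen only to mirror the finding that a single threshold factor (cholesterol ≥7 mmol/L) dominates GP decisions:

```python
import math

def prescribe_probability(intercept, coefs, patient):
    """Logistic model: P(prescribed) = 1 / (1 + exp(-(b0 + sum_i b_i * x_i)))."""
    z = intercept + sum(b * patient[name] for name, b in coefs.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical log-odds coefficients: a large weight on the single
# threshold factor, small weights elsewhere.
coefs = {"chol_ge_7": 2.5, "age_decades": 0.1, "male": 0.05}
patient = {"chol_ge_7": 1, "age_decades": 6.5, "male": 1}
print(round(prescribe_probability(-2.0, coefs, patient), 3))
```

With weights like these, crossing the cholesterol threshold moves the predicted probability far more than any demographic covariate, which is the qualitative pattern the GP-assessed arm showed.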
75

Determinação de incidência, preditores e escores de risco de complicações cardiovasculares e óbito total, em 30 dias e após 1 ano da cirurgia, em pacientes submetidos a cirurgias vasculares arteriais eletivas / Incidence, predictors, risk scores of cardiovascular complications, and total death rate within 30 days and 1 year after elective arterial surgery

Smeili, Luciana Andréa Avena 30 April 2015 (has links)
Introdução: Estima-se que ocorram 2,5 milhões de mortes por ano relacionadas a cirurgias não cardíacas e cinco vezes este valor para morbidade, com limitações funcionais e redução na sobrevida em longo prazo. Pacientes que deverão ser submetidos à cirurgia vascular são considerados de risco aumentado para eventos adversos cardiovasculares no pós-operatório. Há, ainda, muitas dúvidas sobre como fazer uma avaliação pré-operatória mais acurada desses pacientes. Objetivo: Em pacientes submetidos à cirurgia vascular arterial eletiva, avaliar a incidência e preditores de complicações cardiovasculares e/ou óbito total, e calcular a performance dos modelos de estratificação de risco mais utilizados. Métodos: Em pacientes adultos, consecutivos, operados em hospital terciário, determinou-se a incidência de complicações cardiovasculares e óbitos, em 30 dias e em um ano. Comparações univariadas e regressão logística avaliaram os fatores de risco associados com os desfechos e a curva ROC (receiver operating characteristic) examinou a capacidade discriminatória do Índice de Risco Cardíaco Revisado (RCRI) e do Índice de Risco Cardíaco do Grupo de Cirurgia Vascular da New England (VSG-CRI). Resultados: Um total de 141 pacientes (idade média 66 anos, 65% homens) realizou cirurgia de: carótida 15 (10,6%), membros inferiores 65 (46,1%), aorta abdominal 56 (39,7%) e outras 5 (3,5%). Complicações cardiovasculares e óbito ocorreram, respectivamente, em 28 (19,9%) e em 20 (14,2%), em até 30 dias, e em 20 (16,8%) e 10 (8,4%), de 30 dias a um ano. Complicações combinadas ocorreram em 39 (27,7%) pacientes em até 30 dias e em 21 (17,6%) de 30 dias a um ano da cirurgia. Para eventos em até 30 dias, os preditores de risco encontrados foram: idade, obesidade, acidente vascular cerebral, capacidade funcional ruim, cintilografia com hipocaptação transitória, cirurgia aberta, cirurgia de aorta e troponina alterada. 
Os escores Índice de Risco Cardíaco Revisado (RCRI) e Índice de Risco Cardíaco do Grupo de Estudo Vascular da New England (VSG-CRI) obtiveram AUC (area under the curve) de 0,635 e 0,639 para complicações cardiovasculares precoces e 0,562 e 0,610 para óbito em 30 dias, respectivamente. Com base nas variáveis preditoras aqui encontradas, testou-se um novo escore pré-operatório que obteve AUC de 0,747, para complicações cardiovasculares precoces, e um escore intraoperatório que apresentou AUC de 0,840, para óbito em até 30 dias. Para eventos tardios (de 30 dias a 1 ano), os preditores encontrados foram: capacidade funcional ruim, pressão arterial sistólica, cintilografia com hipocaptação transitória, ASA (American Society of Anesthesiologists Physical Status) classe > II, RCRI (AUC 0,726) e troponina alterada. Conclusões: Nesse grupo pequeno e selecionado de pacientes de elevada complexidade clínica, submetidos à cirurgia vascular arterial, a incidência de eventos adversos foi elevada. Para complicações em até 30 dias, mostramos que os índices de avaliação de risco mais utilizados até o momento (RCRI e VSG-CRI) não apresentaram boa performance em nossa amostra. A capacidade preditiva de um escore mais amplo pré-operatório, e uma análise de risco em dois tempos (no pré-operatório e no pós-operatório imediato), como a que simulamos, poderá ser mais efetiva em estimar o risco de complicações. / Introduction: Approximately 2.5 million deaths per year are related to non-cardiac surgeries, while morbidity, represented by functional impairment and a decline in long-term survival, accounts for five times this value. Patients who require vascular surgery are considered at increased risk for adverse cardiovascular events in the postoperative period. However, the method for obtaining a more accurate preoperative evaluation of these patients has not yet been determined. 
Objective: In patients undergoing elective arterial vascular surgery, the incidence and predictors of cardiovascular complications and/or total death were determined and the performance of the most widely used risk stratification models was assessed. Methods: The incidence of cardiovascular complications and death within 30 days and 1 year after vascular surgery was determined in consecutive adult patients operated on in a tertiary hospital. Univariate comparison and logistic regression analysis were used to evaluate risk factors associated with the outcomes, and the receiver operating characteristic (ROC) curve determined the discriminatory capacity of the Revised Cardiac Risk Index (RCRI) and the Cardiac Risk Index of the New England Vascular Surgery Group (VSG-CRI). Results: In all, 141 patients (mean age, 66 years; 65% men) underwent vascular surgery, namely for the carotid arteries (15 [10.6%]), lower limbs (65 [46.1%]), abdominal aorta (56 [39.7%]), and others (5 [3.5%]). Cardiovascular complications and death occurred in 28 (19.9%) and 20 (14.2%) patients, respectively, within 30 days after surgery, and in 20 (16.8%) and 10 (8.4%) patients, respectively, between 30 days and 1 year after the surgical procedure. Combined complications occurred in 39 patients (27.7%) within 30 days and in 21 patients (17.6%) between 30 days and 1 year after surgery. The risk predictors for cardiovascular events that occurred within 30 days were age, obesity, stroke, poor functional capacity, transitory myocardial hypocaptation on scintigraphy, open surgery, aortic surgery, and abnormal troponin levels. The RCRI and VSG-CRI showed an area under the curve (AUC) of 0.635 and 0.639 for early cardiovascular complications, and of 0.562 and 0.610 for death within 30 days, respectively. 
Based on the predictors found in this study, a new preoperative score was tested, which achieved an AUC of 0.747 for early cardiovascular complications, together with an intraoperative score that achieved an AUC of 0.840 for death within 30 days. For late events (between 30 days and 1 year), the predictors were poor functional capacity, systolic blood pressure, transitory myocardial hypocaptation on scintigraphy, American Society of Anesthesiologists (ASA) Physical Status class > II, RCRI (AUC = 0.726), and abnormal troponin levels. Conclusions: In this small, selected group of patients with increased clinical complexity who underwent arterial surgery, the incidence of adverse events was high. In our series, we found that the RCRI and VSG-CRI do not reasonably predict the risk of cardiovascular complications. The predictive capacity of a broader preoperative score, together with a two-stage risk analysis (preoperative and early postoperative), such as that simulated in this study, may be more effective in estimating the risk of complications.
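The AUC values quoted for the RCRI and VSG-CRI measure how often a risk score ranks a patient who had an event above one who did not. A small self-contained sketch of that computation in its Mann-Whitney formulation; the score values below are invented for illustration:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a random positive
    outscores a random negative; ties count one half (Mann-Whitney U / mn)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Invented risk-score values for patients with and without a complication.
complicated = [3, 4, 4, 5]
uncomplicated = [1, 2, 3, 3, 4]
print(auc(complicated, uncomplicated))  # -> 0.85
```

An AUC of 0.5 is no better than chance, which is why values of 0.56 to 0.64 for the established indices motivated testing the broader score that reached an AUC of 0.747.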
77

Assessment of cerebral venous return by a novel plethysmography method

Zamboni, P., Menegatti, E., Conforti, P., Shepherd, Simon J., Tessari, M., Beggs, Clive B. January 2012 (has links)
No / BACKGROUND: Magnetic resonance imaging and echo color Doppler (ECD) techniques do not accurately assess the cerebral venous return. This has generated considerable scientific controversy linked with the diagnosis of a vascular syndrome known as chronic cerebrospinal venous insufficiency (CCSVI), characterized by restricted venous outflow from the brain. The purpose of this study was to assess the cerebral venous return in relation to change in position by means of a novel cervical plethysmography method. METHODS: This was a single-center, cross-sectional, blinded case-control study conducted at the Vascular Diseases Center, University of Ferrara, Italy. The study involved 40 healthy controls (HCs; 18 women and 22 men) with a mean age of 41.5 ± 14.4 years, and 44 patients with multiple sclerosis (MS; 25 women and 19 men) with a mean age of 41.0 ± 12.1 years. All participants were previously scanned using ECD sonography and further subdivided into HC (CCSVI-negative at ECD) and CCSVI groups. Subjects blindly underwent cervical plethysmography, being tipped from the upright (90°) to the supine (0°) position in a chair. Once the blood volume stabilized, they were returned to the upright position, allowing blood to drain from the neck. We measured venous volume (VV), filling time (FT), filling gradient (FG) required to achieve 90% of VV, residual volume (RV), emptying time (ET), and emptying gradient (EG) required to achieve 90% of the emptying volume (EV), where EV = VV - RV, also analyzing the considered parameters by receiver operating characteristic (ROC) curves and principal component analysis. RESULTS: The rate at which venous blood discharged in the vertical position (EG) was significantly faster in the controls (2.73 ± 1.63 mL/second) than in the patients with CCSVI (1.73 ± 0.94 mL/second; P = .001). 
In addition, the following parameters were highly significantly different between controls and patients with CCSVI, respectively: FT 5.81 ± 1.99 seconds vs 4.45 ± 2.16 seconds (P = .003); FG 0.92 ± 0.45 mL/second vs 1.50 ± 0.85 mL/second (P < .001); RV 0.54 ± 1.31 mL vs 1.37 ± 1.34 mL (P = .005); and ET 1.84 ± 0.54 seconds vs 2.66 ± 0.95 seconds (P < .001). Mathematical analysis demonstrated a higher variability of the dynamic process of cerebral venous return in CCSVI. Finally, ROC analysis demonstrated good sensitivity of the proposed test, with percent concordant 83.8, discordant 16.0, and tied 0.2 (C = 0.839). CONCLUSIONS: The cerebral venous return characteristics of the patients with CCSVI were markedly different from those of the controls. In addition, our results suggest that cervical plethysmography has great potential as an inexpensive screening device and as a postoperative monitoring tool.
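Under one reading of the definitions in the Methods (EV = VV - RV, with ET and EG referring to draining 90% of EV), the derived quantities follow directly from the measured ones. A sketch of that arithmetic with illustrative inputs chosen near the reported healthy-control means; the formula EG = 0.9 * EV / ET is an assumption of this sketch, not the study's published computation:

```python
def venous_parameters(vv, rv, et):
    """Derive emptying volume EV = VV - RV and an emptying gradient
    EG = 0.9 * EV / ET (mean rate to drain 90% of EV) from cervical
    plethysmography quantities. The EG formula is an assumed reading
    of the study's definitions, not its published computation."""
    ev = vv - rv
    eg = 0.9 * ev / et
    return ev, eg

# Illustrative values close to the reported healthy-control means.
ev, eg = venous_parameters(vv=6.0, rv=0.54, et=1.84)
print(round(eg, 2))
```

With these inputs the sketch yields an EG in the same range as the control-group mean (2.73 mL/second), consistent with the slower drainage reported for the CCSVI group, whose larger RV and longer ET both reduce EG.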
