381 |
Difusió i impacte de l'estandardització de la gestió de serveis de tecnologies de la informació amb ISO 20000. Cots, Santi, 21 April 2015 (has links)
La present tesi aborda l’estudi de la difusió a escala global, així com l’impacte que està tenint la norma ISO 20000 a les organitzacions que la utilitzen. Aquesta norma pren una forma similar a la dels reconeguts estàndards de gestió de la qualitat, adaptant la seva aplicació als requisits específics dels serveis, especialment pel cas dels de les tecnologies de la informació.
Aquesta conjunció entre la gestió estandarditzada i la gestió de serveis TI, així com la fiabilitat que atorga el tractar-se d’un estàndard certificable, ha permès estudiar-lo utilitzant metodologies i referències validades per estudis previs en l’àmbit de gestió estandarditzada i la qualitat.
Les conclusions permeten no només avaluar la difusió fins al moment, sinó anticipar-ne l’evolució futura, comparant-la amb la d’altres estàndards. També s’han analitzat qüestions de l’impacte com les motivacions i beneficis de les organitzacions per la implantació i la certificació d’aquesta mena de sistemes de gestió, que poden permetre anticipar-los a les organitzacions interessades en implementar aquesta mena d’estàndards. / As the management of information technology services is a field in clear expansion, and as the knowledge society becomes increasingly dependent on such services, the need to manage these services with proper quality becomes more and more evident.
This thesis has a twofold aim: (i) to study the global diffusion of the ISO 20000 standard, and (ii) to assess its impact on the organizations that use it. The standard follows a pattern similar to that of the well-known quality management standards, adapting its application to the specific requirements of services, particularly information technology services.
This conjunction of standardized management and IT service management, together with the reliability conferred by its being a certifiable standard, has made it possible to study it using methodologies and references validated by previous studies in the field of standardized management and quality.
The findings allow us not only to assess the diffusion so far, but also to anticipate its future evolution by comparing it with that of other standards. We have also analyzed impact-related issues such as organizations' motivations for, and benefits from, implementing and certifying this kind of management system, which may help organizations interested in adopting such standards to anticipate these outcomes.
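As an illustration of the kind of analysis such diffusion forecasting typically relies on (not taken from the thesis itself): certification counts for management-system standards are commonly fitted with a logistic growth curve, and the fitted curve is then extrapolated and compared across standards. A minimal sketch, with entirely made-up yearly figures:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, k, r, t0):
        # cumulative certifications: saturation k, growth rate r, inflection year t0
        return k / (1.0 + np.exp(-r * (t - t0)))

    years = np.array([2006, 2008, 2010, 2012, 2014], dtype=float)  # hypothetical
    certs = np.array([100.0, 1000.0, 3500.0, 6500.0, 8500.0])      # hypothetical counts

    (k, r, t0), _ = curve_fit(logistic, years, certs, p0=[10000.0, 0.5, 2011.0])
    print(f"saturation={k:.0f}  growth rate={r:.2f}  inflection year={t0:.1f}")
    print("forecast for 2020:", round(float(logistic(2020.0, k, r, t0))))

The same fit applied to several standards gives comparable saturation levels and growth rates, which is one way of putting the "comparison with other standards" mentioned above on a quantitative footing.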
|
382 |
Exploring the dynamics and dark halos of elliptical galaxies at large radii. Forestell, Amy Dove, 23 October 2009 (has links)
Dark matter is now accepted as an integral part of our universe, and galaxy dynamics have long provided the most convincing observational evidence for dark matter. Spiral galaxies have traditionally been used for these studies because of their simpler kinematics; however, elliptical galaxies need to be understood as well. In this dissertation I present deep long-slit spectroscopy from the University of Texas’ Hobby-Eberly Telescope for a sample of elliptical galaxies. For a subsample of galaxies I fit axisymmetric orbit-superposition models with a range of dark halo density profiles. I find that all three galaxies modeled require a significant dark halo to explain their motions. However, the shape of the dark halo is not the expected NFW profile, but rather a profile with a flat central slope. I also discuss the galaxy masses, anisotropies, and stellar mass-to-light ratios.
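For reference, the NFW profile mentioned here and the flatter alternative can both be written in one common generalized parameterization (the dissertation's exact functional form may differ):

    \rho(r) = \frac{\rho_s}{(r/r_s)^{\gamma}\,\left(1 + r/r_s\right)^{3-\gamma}}

with scale density \rho_s and scale radius r_s; \gamma = 1 recovers the cuspy NFW profile (\rho \propto r^{-1} at small radii), while \gamma \approx 0 gives the flat central slope that the modeling described above favors.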
|
383 |
Etude des Environnements Circumstellaires en Imagerie à Haut Contraste et à Haute Résolution Angulaire. Chauvin, Gael, 04 November 2003 (has links) (PDF)
In the context of the search for low-mass companions, planets and brown dwarfs, and for dust disks around bright stars, the first part of my work is devoted to studying the detection performance of instruments dedicated to high-contrast, high angular resolution imaging. I focused in particular on the instruments currently equipping large ground-based telescopes, which combine an adaptive optics system with an infrared camera coupled to a stellar coronagraph. I had the opportunity to take part in the integration and test phases of the adaptive optics instrument NAOS. It is currently installed on the UT4 telescope of ESO's Very Large Telescope in Chile. I then developed a contrast model in order to identify and study the behavior of the various limitations in an adaptive optics image, as a function of the chosen observational configuration, the detector operating modes, the characteristics of the instrument used, and the image quality related to atmospheric conditions. This analysis was decisive for the second part of my work, devoted to the coronagraphic-imaging search for brown dwarf or planetary companions and for circumstellar disks. Two categories of stars proved particularly well suited to this type of study: members of young, nearby associations, whose evolutionary status favors the detection of low-mass objects, and stars with a planet detected by radial velocity measurements. I present, on the one hand, the results I obtained concerning the detection of several probable low-mass companions in the young associations of the Beta Pictoris group, MBM12 and Tucana-Horologium, as well as an unprecedented statistical study of the fraction of stellar and brown dwarf companions among these stars. I describe, on the other hand, the results obtained from systematic deep-imaging surveys of stars hosting planets. They concern the discovery of faint objects previously unknown in the environment of these stars, and the detection capabilities achieved thanks to high-contrast, high angular resolution imaging.
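As a point of reference (a standard way of quoting such detection limits, not necessarily the exact formulation of the contrast model developed in the thesis), the sensitivity of a coronagraphic adaptive-optics image is usually expressed as a 5-sigma contrast at angular separation r:

    \Delta m_{5\sigma}(r) = -2.5\,\log_{10}\!\left(\frac{5\,\sigma_{\mathrm{res}}(r)}{F_{\star}}\right)

where \sigma_{\mathrm{res}}(r) is the residual noise (speckles, photon and detector noise) per resolution element at separation r, and F_{\star} is the stellar flux measured in the same aperture; the observational configuration, detector mode, instrument characteristics and atmospheric conditions listed above all enter through \sigma_{\mathrm{res}}(r).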
|
384 |
Long term capacity planning with products' renewal. Yilmaz, Görken, 30 April 2014 (has links)
Long Term Capacity Planning (LTCP) consists of deciding the type and amount of capacity of production systems for multiple periods in a long term planning horizon. It involves decisions related to strategic planning, such as buying or selling of production technology, outsourcing, and making tactical decisions regarding capacity level and configuration. Making these kinds of decisions correctly is highly important for three reasons. Firstly, they usually involve a high investment; secondly, once a decision like this is taken, it cannot be changed easily (i.e. they are highly irreversible); thirdly, they affect the performance of the entire system and the decisions that will be possible at a tactical level. If capacity is insufficient, there will be lost demand (in the present and possibly in the future); if the system is oversized, there will be unused resources, which may represent an economic loss. Long term decisions are typically solved with non-formalized procedures, such as generating and comparing solutions, which do not guarantee an optimal solution. In addition, the characteristics of the long term capacity planning problem make the problem very difficult to solve, especially in cases in which products have a short life cycle. One of the most relevant characteristics is the uncertainty inherent to strategic problems. In this case, uncertainty affects parameters such as demand, product life cycle, available production technology and the economic parameters involved (e.g. prices, costs, bank interests, etc.). Selection of production technology depends on the products being offered by the company, along with factors such as costs and productivity. When a product is renewed, the production technology may not be capable of producing it; or, if it can, the productivity and/or the quality may be poor. Furthermore, renewing a product will affect its demand (cannibalization), as well as the demand and value of the old products. Hence, it is very important to accurately decide the correct time for product renewal. This thesis aims to design a model for solving a long term capacity planning problem with the following main characteristics: (1) short-life-cycle products and their renewal, with demand interactions (complementary and competitive products) considered; (2) different capacity options (such as acquisition, renewal, updating, outsourcing and reducing); and (3) tactical decisions (including the integration of strategic and tactical decisions).
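As an illustration of the kind of formulation this problem statement leads to (a deliberately simplified sketch; the thesis's model additionally covers renewal timing, outsourcing, updating and demand interactions), the core capacity-acquisition decisions over periods t, technologies j and products p can be written as a mixed-integer program:

    \min_{x,u,l}\;\sum_{t}\Big[\sum_{j}\Big(c^{\mathrm{acq}}_{j}\,x_{jt}+c^{\mathrm{op}}_{j}\sum_{p}u_{jpt}\Big)+\sum_{p}c^{\mathrm{lost}}_{p}\,l_{pt}\Big]
    \text{s.t.}\quad \sum_{p}u_{jpt}\le K_{j}\sum_{\tau\le t}x_{j\tau},\qquad
    \sum_{j}\pi_{jp}\,u_{jpt}+l_{pt}\ge d_{pt},\qquad
    x_{jt}\in\mathbb{Z}_{\ge 0},\;\; u_{jpt},\,l_{pt}\ge 0

where x_{jt} is the number of units of technology j acquired in period t, K_j their capacity, u_{jpt} the capacity of j devoted to product p, \pi_{jp} the productivity of j on p, d_{pt} the demand and l_{pt} the lost demand. Uncertainty in d_{pt} and in the product life cycles is what turns this deterministic sketch into the much harder stochastic problem the thesis addresses.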
|
385 |
Calcul à une boucle avec plusieurs pattes externes dans les théories de jauge : la bibliothèque Golem95. Zidi, Mohamed Sadek, 06 September 2013 (has links) (PDF)
Precision calculations in gauge theories play a very important role in the study of Standard Model physics and beyond at particle super-colliders such as the LHC, Tevatron and ILC. It is therefore extremely important to provide stable, fast, efficient and highly automated tools for computing one-loop amplitudes. This thesis aims to develop the integral library Golem95. This library is a program written in Fortran95 that contains all the ingredients needed to compute a one-loop scalar or tensor integral with up to six external legs. Golem95 uses a traditional reduction method (Golem-type reduction), which reduces the form factors to a redundant set of basis integrals that can be scalar (no Feynman parameters in the numerator) or tensorial (with Feynman parameters in the numerator); this formalism avoids the numerical instability problems generated by spurious singularities due to vanishing Gram determinants. In addition, the library can be interfaced with automated calculation programs based on unitarity methods, such as GoSam. Earlier versions of Golem95 were designed for computing amplitudes without internal masses. The goal of this thesis work is to generalize the library to the most general configurations (complex masses included), and to provide a numerically stable calculation in the problematic regions by supplying a stable one-dimensional integral representation for each Golem95 basis integral.
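To make the notions of form factors and Gram determinants concrete (schematic, Passarino-Veltman-style notation rather than Golem95's own conventions): a rank-one triangle integral is decomposed onto the external momenta, and solving for the coefficients requires inverting the Gram matrix.

    \int\!\frac{d^{n}k}{i\pi^{n/2}}\;\frac{k^{\mu}}{k^{2}\,(k+p_{1})^{2}\,(k+p_{1}+p_{2})^{2}}
      \;=\; p_{1}^{\mu}\,C_{1}+p_{2}^{\mu}\,C_{2},
    \qquad G_{ij}=2\,p_{i}\cdot p_{j}

When \det G \to 0 at exceptional kinematic points, the solution for C_1, C_2 develops spurious 1/\det G singularities; keeping basis integrals with Feynman parameters in the numerator unreduced, as in the Golem approach described above, is what avoids this numerical instability.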
|
386 |
Méthode numérique d'estimation du mouvement des masses molles. Thouzé, Arsène, 10 1900 (has links)
L’analyse biomécanique du mouvement humain en utilisant des systèmes optoélectroniques et des marqueurs cutanés considère les segments du corps comme des corps rigides. Cependant, le mouvement des tissus mous par rapport à l'os, c’est à dire les muscles et le tissu adipeux, provoque le déplacement des marqueurs. Ce déplacement est le fait de deux composantes, une composante propre correspondant au mouvement aléatoire de chaque marqueur et une composante à l’unisson provoquant le déplacement commun des marqueurs cutanés lié au mouvement des masses sous-jacentes. Si nombre d’études visent à minimiser ces déplacements, des simulations ont montré que le mouvement des masses molles réduit la dynamique articulaire. Cette observation est faite uniquement par la simulation, car il n'existe pas de méthodes capables de dissocier la cinématique des masses molles de celle de l’os. L’objectif principal de cette thèse consiste à développer une méthode numérique capable de distinguer ces deux cinématiques.
Le premier objectif était d'évaluer une méthode d'optimisation locale pour estimer le mouvement des masses molles par rapport à l’humérus obtenu avec une tige intra-corticale vissée chez trois sujets. Les résultats montrent que l'optimisation locale sous-estime de 50% le déplacement des marqueurs et qu’elle conduit à un classement de marqueurs différents en fonction de leur déplacement. La limite de cette méthode vient du fait qu'elle ne tient pas compte de l’ensemble des composantes du mouvement des tissus mous, notamment la composante en unisson.
Le second objectif était de développer une méthode numérique qui considère toutes les composantes du mouvement des tissus mous. Plus précisément, cette méthode devait fournir une cinématique similaire et une plus grande estimation du déplacement des marqueurs par rapport aux méthodes classiques et dissocier ces composantes. Le membre inférieur est modélisé avec une chaine cinématique de 10 degrés de liberté reconstruite par optimisation globale en utilisant seulement les marqueurs placés sur le pelvis et la face médiale du tibia. L’estimation de la cinématique sans considérer les marqueurs placés sur la cuisse et le mollet permet d'éviter l’influence de leur déplacement sur la reconstruction du modèle cinématique. Cette méthode testée sur 13 sujets lors de sauts a obtenu jusqu’à 2,1 fois plus de déplacement des marqueurs en fonction de la méthode considérée en assurant des cinématiques similaires. Une approche vectorielle a montré que le déplacement des marqueurs est surtout dû à la composante à l’unisson. Une approche matricielle associant l’optimisation locale à la chaine cinématique a montré que les masses molles se déplacent principalement autour de l'axe longitudinal et le long de l'axe antéro-postérieur de l'os.
L'originalité de cette thèse est de dissocier numériquement la cinématique os de celle des masses molles et les composantes de ce mouvement. Les méthodes développées dans cette thèse augmentent les connaissances sur le mouvement des masses molles et permettent d’envisager l’étude de leur effet sur la dynamique articulaire. / Biomechanical analysis of human movement using optoelectronic system and skin markers considers body segments as rigid bodies. However the soft tissue motion relative to the bone, including muscles, fat mass, results in relative displacement of markers. This displacement is the results of two components, an own component which corresponds to a random motion of each marker and an in-unison component corresponding to the common movement of skin markers resulting from the movement of the underlying wobbling mass. While most studies aim to minimize these displacements, computer simulation models have shown that the movement of the soft tissue motion relative to the bones reduces the joint kinetics. This observation is only available using computer simulations because there are no methods able to distinguish the kinematics of wobbling mass of the bones kinematics. The main objective of this thesis is to develop a numerical method able to distinguish this different kinematics.
The first aim of this thesis was to assess a local optimisation method for estimating soft tissue motion, using intra-cortical pins screwed into the humerus of three subjects. The results show that local optimisation underestimates the marker displacements by 50%. It also leads to a different ranking of the markers in terms of displacement. The limit of local optimisation comes from the fact that it does not consider all the components of the soft tissue motion, especially the in-unison component.
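For readers unfamiliar with the term, local optimisation here is the per-segment least-squares fit of a rigid transformation to the marker cloud at each frame; a minimal sketch of the usual SVD (Procrustes/Kabsch-type) solution is given below, the post-fit residuals being the only part of the soft tissue motion such a method can see. Data and variable names are illustrative, not from the study.

    import numpy as np

    def rigid_fit(ref, cur):
        """Least-squares rotation R and translation t mapping reference marker
        positions (n x 3) onto current positions; SVD-based solution."""
        c_ref, c_cur = ref.mean(axis=0), cur.mean(axis=0)
        H = (ref - c_ref).T @ (cur - c_cur)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = c_cur - R @ c_ref
        return R, t

    ref = np.random.rand(8, 3)                   # marker cloud in a reference pose
    cur = ref + 0.005 * np.random.randn(8, 3)    # same cloud with soft-tissue noise
    R, t = rigid_fit(ref, cur)
    residual = cur - (ref @ R.T + t)             # soft-tissue displacement seen by the fit
    print(np.linalg.norm(residual, axis=1))      # per-marker displacement estimate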
The second aim of this thesis was to develop a numerical method that accounts for all the components of the soft tissue motion. More specifically, this method should provide joint kinematics similar to those of conventional approaches while estimating larger marker displacements, and should dissociate the two components. The lower limb is modelled using a 10-degree-of-freedom kinematic chain reconstructed by global optimisation, using only the markers placed on the pelvis and on the medial face of the shank. Estimating joint kinematics without the markers placed on the thigh and the calf avoids the influence of their displacement on the reconstruction of the kinematic model. This method was tested on 13 subjects performing hopping trials and yielded up to 2.1 times more marker displacement, depending on the method considered, while ensuring similar joint kinematics. A vector approach showed that marker displacement is mainly induced by the in-unison component. A matrix approach combining local optimisation with the kinematic chain showed that the wobbling mass moves mainly around the longitudinal axis and along the antero-posterior axis of the bone.
The originality of this thesis is to numerically dissociate the bone kinematics from the wobbling mass kinematics, and to separate the two components of the soft tissue motion. The methods developed in this thesis increase the knowledge of soft tissue motion and allow future studies to consider its effect in joint kinetics calculations.
|
387 |
Modélisation de la consommation électrique à partir de grandes masses de données pour la simulation des alternatives énergétiques du futur / Electricity demand modeling using large scale databases to simulate different prospective scenarios. Barbier, Thibaut, 22 December 2017 (has links)
L’évolution de la consommation électrique est un point clé pour les choix à venir, tant pour les moyens de production d’électricité, que pour le dimensionnement du réseau à toutes ses échelles. Aujourd’hui, ce sont majoritairement des modèles statistiques basés sur les consommations passées et des tendances démographiques ou économétriques qui permettent de prédire cette consommation. Dans le contexte de la transition énergétique, des changements importants sont en cours et à venir, et la consommation future ne sera certainement pas une continuation des tendances passées. Modéliser ces changements nécessite une modélisation fine de type bottom-up de chaque contributeur de la consommation électrique. Ce type de modèle présente des challenges de modélisation, car il nécessite un grand nombre de paramètres d’entrée qui peuvent difficilement être renseignés de façon réaliste à grande échelle. En même temps, les données et informations de tout type n’ont jamais été autant disponibles. Cela représente à la fois un atout pour la modélisation, mais aussi une difficulté importante notamment à cause de l’hétérogénéité des données. Dans ce contexte, cette thèse présente une démarche de construction d’un simulateur de consommation électrique bottom-up capable de simuler différentes alternatives énergétiques à l’échelle de la France. Un travail de recensement, de classification et d’association des bases de données pour expliquer la consommation électrique a d’abord été mené. Ensuite, le modèle de consommation électrique a été présenté ; il a été validé et calibré sur une grande quantité de mesures de consommation électrique des départs HTA fournie par Enedis. Ce modèle a enfin pu être utilisé pour simuler différentes alternatives énergétiques afin d’aider au dimensionnement du réseau de distribution. / The future trend of electricity demand is a key point for sizing both the electricity network and the power plants. In order to forecast future electricity demand, current models mostly use statistical approaches based on past demand measurements and on demographic and economic trends. In the current context of the energy transition, which comes along with important changes, future electricity demand is not expected to follow past trends. Modeling these changes requires a bottom-up model of each contributor to electricity demand. This kind of model is challenging because of the large number of input data required. At the same time, data and information of all kinds have never been so widely available. Such availability is both an asset for modeling and an important difficulty, notably because of data heterogeneity. In this context, this dissertation presents an approach to build a bottom-up load curve simulator able to simulate prospective scenarios at the scale of France. Firstly, an assessment, classification and matching of the large databases explaining electricity demand has been carried out. Then, the electricity demand model has been presented; it has been validated and calibrated on Enedis’ large volumes of electricity demand measurements from medium-voltage feeders. Finally, this model has been used to simulate several prospective scenarios in order to help size the electricity distribution network.
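As a toy illustration of the bottom-up principle described above (entirely made-up contributor classes, unit profiles and stock counts; not Enedis data and not the dissertation's model): the simulated load curve is the sum of per-contributor unit profiles scaled by the number of units, and a prospective scenario is simply a change of those inputs.

    import numpy as np

    # Toy bottom-up aggregation: total demand = sum over contributor types of
    # (number of units) x (unit load profile), over 24 hourly steps.
    hours = np.arange(24)
    profiles = {                                    # illustrative unit profiles, in kW
        "household_heating": 1.0 + 0.8 * np.cos((hours - 20) * np.pi / 12),
        "household_other":   0.3 + 0.2 * (hours > 17),
        "electric_vehicle":  2.0 * ((hours >= 19) & (hours < 23)),
    }
    stock = {"household_heating": 8e6, "household_other": 3e7, "electric_vehicle": 1e5}

    total_MW = sum(stock[k] * profiles[k] for k in profiles) / 1e3
    print(total_MW.round(0))     # hourly load curve for this toy scenario

    # A prospective scenario is then just a change of stock or profiles, e.g.:
    stock["electric_vehicle"] = 5e6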
|
388 |
Définition d'un modèle unifié pour la simulation physique adaptative avec changements topologiques / Definition of a unified model for adaptive physical simulation with topological changes. Fléchon, Elsa, 09 December 2014 (links)
Les travaux réalisés pendant mon doctorat répondent à la problématique de la simulation physique, en temps interactif, du comportement d'objets déformables soumis à des changements topologiques. Mes travaux ont abouti à la définition d'un nouveau modèle unifié couplant un modèle topologique complet et un modèle physique, pour la simulation physique d'objets déformables décomposés en éléments surfaciques comme volumiques, tout en réalisant pendant cette simulation des changements topologiques comme la découpe ou la subdivision locale d'un élément du maillage. Cette dernière opération a permis de proposer une méthode adaptative où les éléments du maillage sont raffinés selon un critère géométrique au cours de la simulation. Nous avons fait le choix des cartes combinatoires et plus particulièrement celui des complexes cellulaires linéaires, comme modèle topologique de notre modèle unifié. Ils ont l'avantage d'être génériques par rapport à la dimension de l'objet représenté mais également par rapport à la topologie des cellules en lesquelles l'objet est décomposé. Le système masses-ressort a, quant à lui, été choisi comme modèle physique de notre modèle unifié. L'avantage de ce dernier réside dans la simplicité de ses équations, son implémentation intuitive, son interactivité et sa facilité à gérer les changements topologiques. Enfin, la définition d'un modèle unifié nous a permis de proposer un modèle évitant la redondance d'informations et facilitant la mise à jour de ces dernières suite à des changements topologiques. / The work carried out during my PhD addresses the problem of physically simulating, in interactive time, the behavior of deformable objects subjected to topological changes. It resulted in the definition of a new unified model coupling a complete topological model and a physical model for the physical simulation of deformable objects decomposed into surface as well as volume elements, while performing topological changes during the simulation, such as cutting or the local subdivision of a mesh element. The latter operation allowed us to propose an adaptive method in which mesh elements are refined during the simulation according to a geometric criterion. As the topological model of our unified model, we chose combinatorial maps, and more particularly linear cellular complexes. They have the advantage of being generic with respect to the dimension of the represented object and to the topology of the cells into which the object is decomposed. The mass-spring system was chosen as the physical model of our unified model. Its advantages lie in the simplicity of its equations, its intuitive implementation, its interactivity and the ease with which it handles topological changes. Finally, the definition of a unified model allowed us to propose a model that avoids redundancy of information and facilitates its update after topological changes.
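Since the abstract motivates the mass-spring choice by the simplicity of its equations, a minimal sketch of the underlying update is given below (illustrative only, with a semi-implicit Euler step; the coupling with linear cellular complexes is not reproduced here). A cut or local subdivision then amounts to editing the particle and spring sets, which is exactly the bookkeeping the unified topological model keeps consistent.

    import numpy as np

    # Minimal 2D mass-spring step; not the thesis code.
    pos  = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])   # particle positions
    vel  = np.zeros_like(pos)
    mass = np.array([1.0, 1.0, 1.0])
    springs = [(0, 1, 1.0), (1, 2, 1.0)]                    # (i, j, rest length)
    k, dt, g = 50.0, 1e-3, np.array([0.0, -9.81])

    def step(pos, vel):
        forces = mass[:, None] * g                          # gravity
        for i, j, L0 in springs:
            d = pos[j] - pos[i]
            L = np.linalg.norm(d)
            f = k * (L - L0) * d / L                        # linear spring force
            forces[i] += f
            forces[j] -= f
        vel += dt * forces / mass[:, None]                  # semi-implicit Euler
        pos += dt * vel
        return pos, vel

    for _ in range(1000):
        pos, vel = step(pos, vel)
    print(pos)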
|
389 |
Análise de risco de obras subterrâneas em maciços rochosos fraturados / Risk analysis of underground structures in fractured rock masses. Napa García, Gian Franco, 11 June 2015 (has links)
Nesta tese o autor estabelece um método sistemático de quantificação de risco em obras subterrâneas em maciço rochoso fraturado utilizando de maneira eficiente conceitos de confiabilidade estrutural. O método é aplicado a um caso de estudo real da caverna da Usina Hidrelétrica Paulo Afonso IV, UHE-PAIV. Adicionalmente, um estudo de otimização de projeto com base em risco quantitativo também é apresentado para mostrar as potencialidades do método. A estimativa do risco foi realizada de acordo com as recomendações da Organização de Auxílio contra Desastres das Nações Unidas, UNDRO, onde o risco pode ser estimado como a convolução entre as funções de perigo, vulnerabilidade e perdas. Para a quantificação da confiabilidade foram utilizados os métodos de aproximação FORM e SORM com uso de acoplamento direto e de superfícies de resposta polinomial quadráticas. A simulação de Monte Carlo também foi utilizada para a quantificação da confiabilidade no estudo de caso da UHE-PAIV devido à ocorrência de múltiplos modos de falha simultâneos. Foram avaliadas as ameaças de convergência excessiva das paredes, colapso da frente de escavação e a queda de blocos. As funções de perigo foram estimadas em relação à intensidade da ameaça como razão de deslocamento da parede ou volume do bloco. No caso da convergência excessiva, um túnel circular profundo foi estudado com o intuito de comparar a qualidade de aproximação da técnica numérica (FLAC3D com acoplamento direto) em relação à solução exata. Erros inferiores a 0,1% foram encontrados na estimativa do índice de confiabilidade β. Para o caso da estabilidade de frente foram comparadas duas soluções da análise limite da plasticidade contra a solução obtida numericamente. Já no caso de queda de bloco, verificou-se que as recomendações de parcialização do sistema de classificação geomecânica Q incrementam consideravelmente a segurança da escavação, conduzindo a padrões da prática mais avançada, por exemplo, de um β de 2,04 para a escavação a seção plena até 4,43 para o vão recomendado. No estudo de caso, a segurança da caverna da UHE-PAIV foi estudada perante a queda de blocos utilizando o software Unwedge. A probabilidade de falha individual foi integrada no comprimento da caverna e o conceito de sistema foi utilizado para estimar a probabilidade de falha global. A caverna apresentou uma probabilidade de falha global de 3,11 a 3,22% e um risco de 7,22×10⁻³ × C e 7,29×10⁻³ × C, sendo C o custo de falha de um bloco de grandes dimensões. O bloco mais crítico apresentou um β de 3,63. No estudo de otimização foram utilizadas duas variáveis de projeto, a espessura do concreto projetado e o número de tirantes por metro quadrado. A configuração ótima foi encontrada como o par [t, nb] que minimiza a função de custo total. Também, um estudo de sensibilidade foi realizado para avaliar as influências de alguns parâmetros no projeto ótimo da escavação. Finalmente, os resultados obtidos sugerem que as análises quantitativas de risco, como base para a avaliação e gestão de risco, podem e devem ser consideradas como diretriz da prática da engenharia geotécnica, uma vez que estas análises conciliam os conceitos básicos de projeto como eficiência mecânica, segurança e viabilidade financeira. Assim, a quantificação de risco é plenamente possível. / In this thesis the author establishes a systematic method for quantifying the risk in underground structures in fractured rock masses using structural reliability concepts in an efficient way.
The method is applied to the case study of the underground cavern of the Paulo Afonso IV hydroelectric power station, UHE-PAIV. Additionally, an optimization study was conducted in order to show a potential application of the method. The estimation of the risk was done according to the recommendations of the United Nations Disaster Relief Organization, UNDRO, where risk can be estimated as the convolution between the hazard, vulnerability and losses functions. FORM and SORM were used as approximation methods for the reliability quantification, by means of direct coupling and quadratic polynomial response surfaces. A Monte Carlo simulation was also used to quantify the reliability of the UHE-PAIV cavern because of the presence of multiple failure modes in the numerical model. In this study, three types of threats were evaluated: excessive wall convergence, excavation face collapse and wedge block fall. Hazard functions were built relative to the threat intensities, such as wall convergence ratio or block size. In the case of excessive wall convergence, a deep circular tunnel was studied in order to compare the quality of the approximation of the reliability technique (FLAC3D with direct coupling) to the exact solution. Errors below 0.1% were found in the estimation of the reliability index β. The reliability of the face stability was evaluated using two limit analysis solutions against the numerical estimation. For the block stability, it was verified that the sequential excavation recommended by the Q system considerably increases the reliability of the excavation, bringing safety to modern standard levels, e.g. from a β equal to 2.04 for a full-section excavation to 4.43 for a partial excavation. In the case study of the UHE-PAIV, the reliability of the underground cavern was estimated using the commercial software Unwedge. The probability of failure of individual blocks was integrated along the length of the cavern and the concept of a structural system was used to estimate the global probability of failure. The cavern presented a probability of failure of 3.11% to 3.22% and a risk of 7.22×10⁻³ × C and 7.29×10⁻³ × C, where C is the cost of failure of a large block. The critical individual block showed a β equal to 3.63. The optimization was performed considering two design variables: liner thickness and the number of bolts per square meter. The optimal design was found as the pair [t, nb] that minimizes the total cost function. Also, a sensitivity analysis was conducted to understand the influence of some parameters on the optimal excavation design. In conclusion, the results obtained here suggest that quantitative risk analyses, as a basis for risk assessment and management, can and must be considered a guideline for the practice of geotechnical engineering, since these analyses reconcile the basic design concepts of mechanical efficiency, safety and financial feasibility. Thus, risk quantification is fully feasible.
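For reference, the two ingredients named above can be written compactly (standard definitions, written schematically rather than in the thesis's own notation): the UNDRO-style risk estimate and the FORM approximation of each probability of failure.

    R = \int H(i)\,V(i)\,L(i)\,\mathrm{d}i,
    \qquad
    P_{f}=P\big[g(\mathbf{X})\le 0\big]\;\approx\;\Phi(-\beta),
    \qquad
    \beta=\min_{\,g(\mathbf{u})=0}\lVert\mathbf{u}\rVert

where H, V and L are the hazard, vulnerability and loss functions of the threat intensity i, g is the limit-state function, u the random variables mapped to standard normal space, and Φ the standard normal CDF; SORM adds a curvature correction at the design point. Under this approximation the β values quoted above translate to failure probabilities of roughly 2×10⁻² for β = 2.04, 1.4×10⁻⁴ for β = 3.63 and about 5×10⁻⁶ for β = 4.43.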
|
390 |
Avaliação de massas cardíacas pela ecocardiografia com perfusão em tempo real / Evaluation of cardiac masses by real time perfusion imaging echocardiography. Uenishi, Eliza Kaori, 11 May 2011 (has links)
Introdução: As massas cardíacas (MC) podem ser tumores, trombos ou pseudotumores. A avaliação da vascularização poderá ser uma ferramenta adicional para o seu diagnóstico diferencial. Neste estudo, demonstrou-se o valor diagnóstico da ecocardiografia com perfusão na caracterização das MC por meio de análises qualitativas e quantitativas de perfusão. Métodos: Estudo prospectivo que envolveu 107 pacientes, classificados em quatro grupos: 33 trombos, 23 tumores malignos (TM), 24 tumores benignos (TB) e 6 pseudotumores; 21 pacientes foram excluídos por não terem diagnóstico definitivo confirmado. A avaliação de perfusão foi realizada pela ecocardiografia com perfusão em tempo real, utilizando contraste à base de microbolhas. Em um grupo selecionado de pacientes (32), o estudo foi complementado com dipiridamol para avaliação da reserva de fluxo da massa. A análise foi feita qualitativa e quantitativamente por dois observadores independentes. Na análise qualitativa, os parâmetros foram: intensidade da perfusão (escore 0 a 3), velocidade do repreenchimento microvascular (escore 0 a 2), padrão de perfusão central ou periférico (escore 0 a 2) e presença de áreas de necrose (escore 0 e 1). Os dois parâmetros de quantificação das massas foram: volume de sangue microvascular (A) e fluxo microvascular regional, que é o produto da velocidade de fluxo (β) e do volume (A). Resultados: Na análise qualitativa, o padrão mais frequente para o grupo trombos foi: sem perfusão (81,9%), sem velocidade de perfusão (81,9%) e sem área de necrose (93,4%); nos tumores, predominou perfusão discreta (62,3%), com velocidade lenta (64,2%) e áreas de necrose (30,2%). Na análise qualitativa, a variação intraobservador para escore de perfusão e de velocidade foi de 20%, para áreas de necrose de 25% e para padrão de perfusão foi de 45%. Na análise quantitativa, o grupo trombos apresentou valores de A e A×β significativamente menores quando comparados ao grupo de tumores: Trombos: A = 0,08 (0,01-0,22 dB); A×β = 0,03 (0,01-0,14 dB/s); TM: A = 2,78 (1,31-7,0 dB); A×β = 2,0 (0,99-5,58 dB/s); TB: A = 2,58 (1,24-4,55 dB); A×β = 1,18 (0,45-3,4 dB/s). Quando comparados apenas os grupos de tumores com o uso de dipiridamol, os TM apresentaram volume sanguíneo microvascular (A) maior: A = 4,18 (2,14-7,93 dB); A×β = 2,46 (1,42-4,59 dB/s); TB: A = 2,69 (1,11-4,26 dB); A×β = 1,55 (0,55-5,50 dB/s). Na análise com a curva ROC, o parâmetro volume sanguíneo microvascular (A) < 0,65 dB na ecocardiografia de perfusão, com e sem uso de dipiridamol, foi preditor para trombo (área sob a curva = 0,95), bem como o parâmetro fluxo sanguíneo microvascular (A×β) < 0,30 dB/s (área sob a curva = 0,94). Para distinguir entre TM e TB, o parâmetro volume sanguíneo microvascular (A) > 3,28 dB, com o uso de dipiridamol, foi preditor de TM (área sob a curva = 0,75). Conclusão: O estudo ecocardiográfico para avaliação da perfusão das MC mostrou que a análise qualitativa é um método diagnóstico rápido e reprodutível para diagnosticar trombos. Os tumores cardíacos apresentam volume microvascular e fluxo sanguíneo regional maiores se comparados com os trombos. O uso do dipiridamol foi útil na diferenciação entre os TM e TB. / Background: Cardiac masses (CM) can be tumors, thrombi or pseudotumors. Evaluation of their vascularization might be an additional tool to perform a differential diagnosis. In the present study we demonstrated the diagnostic value of perfusion echocardiography for CM characterization, by qualitative and quantitative analyses of perfusion.
Methods: We prospectively studied 107 patients, who were classified into 4 groups: 33 thrombi, 23 malignant tumors (MT), 24 benign tumors (BT) and 6 pseudotumors; 21 patients were excluded because no definitive diagnosis could be confirmed. Perfusion evaluation was performed by contrast echocardiography with real-time perfusion imaging using microbubbles. A group of patients (32) was selected for a complementary study using dipyridamole to evaluate mass flow reserve. Qualitative and quantitative analyses were performed by two independent observers. Parameters for qualitative analysis were perfusion intensity (0-3 score), microvascular refilling velocity (0-2 score), central or peripheral perfusion pattern (0-2 score), and presence of areas of necrosis (0 or 1 score). The two parameters for quantification of masses were microvascular blood volume (A) and regional microvascular flow, which is the product of blood flow velocity (β) and volume (A). Results: The most frequent pattern for the thrombi group in the qualitative analysis was absence of perfusion (81.9%), followed by no perfusion velocity (81.9%), and no areas of necrosis (93.4%), whilst among tumors there was predominance of discrete perfusion (62.3%), with slowed velocity (64.2%), and areas of necrosis (30.2%). In the qualitative analysis, intraobserver variability was 20% for the perfusion and velocity scores, 25% for the presence of areas of necrosis and 45% for the perfusion pattern. In the quantitative analysis, the thrombi group was shown to have A and A×β values significantly smaller than those of the tumor groups: thrombi: A = 0.08 (0.01-0.22 dB); A×β = 0.03 (0.01-0.14 dB/s); MT: A = 2.78 (1.31-7.0 dB); A×β = 2.0 (0.99-5.58 dB/s); BT: A = 2.58 (1.24-4.55 dB); A×β = 1.18 (0.45-3.4 dB/s). When only the tumor groups with the use of dipyridamole were compared, MT was shown to have a greater microvascular blood volume (A): A = 4.18 (2.14-7.93 dB); A×β = 2.46 (1.42-4.59 dB/s); BT: A = 2.69 (1.11-4.26 dB); A×β = 1.55 (0.55-5.50 dB/s). Analysis of the ROC curve showed that a microvascular blood volume A < 0.65 dB on perfusion echocardiography, both with and without dipyridamole, predicted thrombus (area under the curve = 0.95), as did a microvascular blood flow (A×β) < 0.30 dB/s (area under the curve = 0.94). In order to distinguish MT from BT, a microvascular blood volume (A) > 3.28 dB using dipyridamole was a predictor of MT (area under the curve = 0.75). Conclusion: The echocardiographic study to evaluate CM perfusion showed that qualitative analysis is a fast and reproducible diagnostic approach for diagnosing thrombi. Cardiac tumors show greater microvascular volume and regional blood flow when compared with thrombi. Quantitative dipyridamole stress mass perfusion was useful to differentiate MT from BT.
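For context, the quantitative parameters A and β above are usually obtained from the standard bubble destruction-replenishment model used in contrast echocardiography (the abstract itself does not spell out the fitted curve): after the microbubbles are destroyed, signal intensity refills as

    y(t) = A\,\bigl(1 - e^{-\beta t}\bigr)

where the plateau A (dB) reflects microvascular blood volume, the rate constant β (1/s) the microvascular flow velocity, and the product A·β the regional microvascular blood flow; the cut-offs quoted above (A < 0.65 dB and A·β < 0.30 dB/s for thrombus, A > 3.28 dB under dipyridamole for malignancy) are thresholds on these fitted parameters.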
|