91 |
Analyse de données d'IRM fonctionnelle rénale par quantification vectorielle / Analysis of renal dynamic contrast-enhanced sequences using vector quantization algorithms. Chevaillier, Béatrice, 09 March 2010.
Dynamic contrast-enhanced Magnetic Resonance Imaging (MRI) is a promising alternative to scintigraphy for renal function assessment, but its results must be evaluated on a large scale before clinical use. Exploiting the acquired sequences requires registration of the image series and segmentation of the internal renal structures. Our goal is to provide a reliable, easy-to-use tool that partially automates these two operations. Statistical registration methods based on mutual information are tested on real data. Segmentation of the cortex, medulla and cavities is performed by classifying renal voxels according to their time-intensity curves, using a two-step strategy. Classifiers are first built on the main renal slice (the slice containing the largest proportion of renal tissue) with two vector quantization algorithms, K-means and Growing Neural Gas with targeting. They are validated on simulated and then real data by computing discrepancy criteria (overlap, extra pixels, similarity index) between a manual reference segmentation and either the functional segmentations or a second manual segmentation; results are comparable for the two types of comparison. Voxels of the remaining slices are then sorted with the classifier that is optimal for the main slice, and generalization theory is used to bound the classification error incurred by this extension. Compared with manual segmentation, the proposed method offers substantial time savings, limited and easy operator intervention, good robustness owing to the use of the whole sequence, and good reproducibility.
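As a purely illustrative sketch of the voxel-classification idea (not the author's implementation), the time-intensity curves of renal voxels could be clustered with plain K-means as below; the array contents, the normalisation and the choice of three clusters (cortex, medulla, cavities) are assumptions.

```python
# Minimal sketch: cluster renal voxels by their time-intensity curves.
# Assumes `curves` is an (n_voxels, n_timepoints) array of intensities for
# voxels inside a renal mask; 3 clusters stand for cortex, medulla, cavities.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
curves = rng.random((500, 40))            # placeholder data (n_voxels x n_timepoints)

# Normalise each curve so clustering reflects curve shape rather than raw amplitude.
curves = (curves - curves.mean(axis=1, keepdims=True)) / (curves.std(axis=1, keepdims=True) + 1e-9)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(curves)
labels = kmeans.labels_                   # one functional label per renal voxel
print(np.bincount(labels))
```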
|
92 |
Mapeamento granulométrico do solo via imagens de satélite e atributos de relevo / Mapping topsoil texture by satellite image and relief. Fongaro, Caio Troula, 20 January 2016.
Planet Earth is vast, and its natural resources need to be mapped and understood to guide public policy; soil is one of these important resources. Knowledge of the soil requires pedological characterization and the mapping of its attributes, and adequate monitoring demands this knowledge at detailed scales, which in turn requires manpower, high financial cost and complex logistics and is therefore still difficult to achieve. It is thus necessary to invest in technologies that help obtain good-quality information quickly and at low cost. Focusing on the agricultural areas of the study region, the objectives of this work were: (i) to define a methodology that identifies bare-soil locations in satellite images, taking a multitemporal dataset into account; and (ii) to map topsoil particle-size (clay and sand) contents using the composite images from objective (i) together with relief attributes. The study area covers 14,614 km² in the region of Araraquara, São Paulo, Brazil. Within it, 952 georeferenced topsoil (0-20 cm) samples were collected along toposequences, so as to represent the variability of the region, and analysed for particle size in the laboratory. Multitemporal Landsat 5 TM images from September and October of the last 15 years were processed, atmospherically corrected and converted to reflectance, and the field samples were also measured with a laboratory vis-NIR-SWIR sensor (400-2500 nm); the laboratory spectra were used to validate the pixel spectra extracted from the satellite images at the sampling locations. After this validation, objects that were not soil were removed from every image, and the resulting multitemporal bare-soil images were overlaid in the R software to generate a single composite image containing only exposed soil. Relief attributes were additionally derived from a digital elevation model of the area. The results showed that the laboratory spectral curves were very similar to those from the satellite images and followed the expected textural variations. Principal-component analysis, the 3/4 and 5/7 band ratios, and band-by-band correlations between laboratory and satellite data (the strongest, with r = 0.87, for band TM7) confirmed that the composite image indeed showed exposed soil. A user working with a single image would find only about 1 to 4% of the area as bare soil, whereas the image-composition technique reached 43%; if the study area were entirely agricultural, up to 95% bare soil could be obtained. In a second step, the work shows, using the Cubist regression-tree model, that topsoil clay content can be quantified from either the satellite images or the relief attributes, with R² values of roughly 0.62 to 0.65, although the map generated from relief alone has poor visual quality and more noise. When image, relief and geomorphology data are integrated, the R² reaches 0.72 and the map has the best visual quality. The images could also substitute for geological information in the models. This work can considerably assist pedologists, farmers and environmental professionals in soil monitoring.
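For illustration only, a bare-soil composite of the kind described above could be assembled per pixel from a multitemporal stack; the NDVI threshold used to flag bare soil in this sketch is an assumption, not the thesis's actual masking rule.

```python
# Minimal sketch (not the thesis's exact rules): build a bare-soil composite
# from a multitemporal stack by keeping, per pixel, the mean reflectance of
# the dates on which the pixel looks like bare soil (low NDVI assumed here).
import numpy as np

def bare_soil_composite(red, nir, threshold=0.25):
    """red, nir: (n_dates, rows, cols) reflectance stacks."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    bare = ndvi < threshold                       # assumed bare-soil test
    red_masked = np.where(bare, red, np.nan)
    composite = np.nanmean(red_masked, axis=0)    # NaN where the pixel is never bare
    coverage = np.isfinite(composite).mean()      # fraction of the area recovered
    return composite, coverage

red = np.random.rand(15, 100, 100)                # placeholder 15-date stack
nir = np.random.rand(15, 100, 100)
comp, cov = bare_soil_composite(red, nir)
print(f"bare-soil coverage in composite: {cov:.1%}")
```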
|
93 |
Uncertainty quantification of engineering systems using the multilevel Monte Carlo method. Unwin, Helena Juliette Thomasin, January 2018.
This thesis examines the quantification of uncertainty in real-world engineering systems using the multilevel Monte Carlo method. The traditional Monte Carlo method is often infeasible for investigating the impact of uncertainty because it can be computationally prohibitive for complex systems. Therefore, the newer multilevel method is investigated and its cost is analysed in the finite element framework. The Monte Carlo and multilevel Monte Carlo methods are compared for two prototypical examples: structural vibrations and buoyancy-driven flows through porous media. In the first example, the impact of random mass density is quantified for structural vibration problems in several dimensions using the multilevel Monte Carlo method. Comparable eigenvalue and energy density approximations are found for the traditional and multilevel Monte Carlo methods, but for certain problems the expectation and variance of the quantities of interest can be computed over 100 times faster using the multilevel method. It is also tractable to use the multilevel method for three-dimensional structures, where the traditional Monte Carlo method is often prohibitively expensive. In the second example, the impact of uncertainty in buoyancy-driven flows through porous media is quantified using the multilevel Monte Carlo method. Again, comparable results are obtained from the two methods for diffusion-dominated flows, and the multilevel method is orders of magnitude cheaper. The finite element models for this investigation are formulated carefully to ensure that spurious numerical artefacts are not added to the solution, and are compared to an analytical model describing the long-term sequestration of CO2 in the presence of a background flow. Additional cost reductions are achieved by solving the individual independent samples in parallel using the new podS library, which schedules the Monte Carlo and multilevel Monte Carlo methods in parallel across different computer architectures for the two examples considered in this thesis. Nearly linear cost reductions are obtained as the number of processes is increased.
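For orientation, the multilevel idea rests on the telescoping identity E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], estimated level by level with independent, coupled samples. The toy quantity of interest below is an assumption standing in for the finite element solves used in the thesis.

```python
# Minimal sketch of a multilevel Monte Carlo estimator. The "model" (a
# discretised integral whose resolution doubles per level, perturbed by a
# random input) is hypothetical and only illustrates the level coupling.
import numpy as np

rng = np.random.default_rng(1)

def level_output(level, xi):
    """Toy level-l approximation of a quantity of interest for random input xi."""
    n = 2 ** (level + 2)                      # finer discretisation at higher levels
    x = np.linspace(0.0, 1.0, n)
    return (1.0 + xi) * np.mean(np.sin(np.pi * x))

def mlmc_estimate(levels, samples_per_level):
    total = 0.0
    for l, n_samples in zip(range(levels + 1), samples_per_level):
        xi = rng.normal(0.0, 0.1, size=n_samples)
        fine = np.array([level_output(l, x) for x in xi])
        if l == 0:
            total += fine.mean()
        else:
            # Same random inputs on the coarse level: the coupled correction term.
            coarse = np.array([level_output(l - 1, x) for x in xi])
            total += (fine - coarse).mean()
    return total

print(mlmc_estimate(levels=3, samples_per_level=[4000, 1000, 250, 60]))
```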
|
94 |
Characterization of Radioactivity in the Environment. Borrelli, Robert Angelo, 10 November 1999.
"Ionizing radiation is produced as the result of the decay of an unstable nucleus. The standard measure of radioactivity is quantified according to the rate of disintegration of the unstable nucleus. This method of quantification does not incorporate the total amount of ionizing radiation that is associated with each disintegration of the radionuclide. The ionizing radiation that is produced as a result of decay is specific to a given radionuclide. A radionuclide can be conceptualized as a source of ionizing radiation. Disintegration of the unstable nucleus will therefore result in the continual release of ionizing radiation throughout the fixed existence of the radionuclide. This thesis will present a reasonable and practical adjustment to the current mechanism regarding the quantification of radionuclides. This adjustment will provide a basis to which the specific decay attributes of radionuclides can be normalized. Such a normalization will allow for direct comparisons among important inventories of radionuclides. This adjustment will be used to formulate a characterization of common radionuclides that exist in the environment. Such a characterization can provide a control inventory of ionizing radiation to which more specific systems of radionuclides can be compared."
|
95 |
Asymptotic theory for Bayesian nonparametric procedures in inverse problems. Ray, Kolyan Michael, January 2015.
The main goal of this thesis is to investigate the frequentist asymptotic properties of nonparametric Bayesian procedures in inverse problems and the Gaussian white noise model. In the first part, we study the frequentist posterior contraction rate of nonparametric Bayesian procedures in linear inverse problems in both the mildly and severely ill-posed cases. This rate provides a quantitative measure of the quality of statistical estimation of the procedure. A theorem is proved in a general Hilbert space setting under approximation-theoretic assumptions on the prior. The result is applied to non-conjugate priors, notably sieve and wavelet series priors, as well as in the conjugate setting. In the mildly ill-posed setting, minimax optimal rates are obtained, with sieve priors being rate adaptive over Sobolev classes. In the severely ill-posed setting, oversmoothing the prior yields minimax rates. Previously established results in the conjugate setting are obtained using this method. Examples of applications include deconvolution, recovering the initial condition in the heat equation and the Radon transform. In the second part of this thesis, we investigate Bernstein--von Mises type results for adaptive nonparametric Bayesian procedures in both the Gaussian white noise model and the mildly ill-posed inverse setting. The Bernstein--von Mises theorem details the asymptotic behaviour of the posterior distribution and provides a frequentist justification for the Bayesian approach to uncertainty quantification. We establish weak Bernstein--von Mises theorems in both a Hilbert space and multiscale setting, which have applications in $L^2$ and $L^\infty$ respectively. This provides a theoretical justification for plug-in procedures, for example the use of certain credible sets for sufficiently smooth linear functionals. We use this general approach to construct optimal frequentist confidence sets using a Bayesian approach. We also provide simulations to numerically illustrate our approach and obtain a visual representation of the different geometries involved.
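For readers unfamiliar with the setting, a standard formulation of the mildly ill-posed linear inverse problem in Gaussian white noise and of posterior contraction (not quoted from the thesis) is sketched below.

```latex
% Linear inverse problem in Gaussian white noise (standard form):
% observe Y^{(n)}, where K is a known injective linear operator and f is unknown.
\[
  Y^{(n)} = K f + \frac{1}{\sqrt{n}}\,\mathbb{W}, \qquad f \in L^2 .
\]
% Posterior contraction at rate \varepsilon_n around the true parameter f_0:
\[
  \Pi\!\left( f : \| f - f_0 \|_{L^2} \ge M_n \varepsilon_n \;\middle|\; Y^{(n)} \right)
  \longrightarrow 0 \quad \text{in } P_{f_0}\text{-probability as } n \to \infty .
\]
```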
|
96 |
\"Quantificação de MDMA em amostras de ecstasy por cromatografia em fase gasosa (GC/NPD)\" / Quantification of MDMA in ecstasy samples by gas chromatography (GC/NPD)Lapachinske, Silvio Fernandes 31 March 2004 (has links)
Chemically, ecstasy is 3,4-methylenedioxymethamphetamine (MDMA), a synthetic compound with central-stimulant and hallucinogenic properties. Substances analogous to MDMA that have already been identified in ecstasy tablets include 3,4-methylenedioxyethylamphetamine (MDEA), 3,4-methylenedioxyamphetamine (MDA), methamphetamine and amphetamine; caffeine and ephedrines are the most commonly found adulterants. The aim of this work was to validate an analytical method for quantifying MDMA in ecstasy tablets and capsules by gas chromatography with a nitrogen/phosphorus detector (GC/NPD); MDMA analogues and adulterants were also identified. Samples from 25 different lots of tablets and capsules seized as ecstasy in São Paulo (SP) were analysed by the proposed method. Of these, 21 contained only MDMA (84%) and 1 contained MDMA together with caffeine (4%). The total MDMA content in these samples ranged from 30.9 to 92.7 mg, with an arithmetic mean of 63 mg.
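As a generic illustration of chromatographic quantification (the validated method's actual calibration parameters are not given in the abstract), a linear calibration curve could be fitted to standards and used to estimate a sample concentration; every number below is made up.

```python
# Generic sketch: quantify an analyte from GC peak areas with a linear
# calibration curve fitted by least squares. Standard concentrations, peak
# areas and the sample area are hypothetical placeholders.
import numpy as np

std_conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])        # µg/mL, hypothetical standards
std_area = np.array([1520., 3790., 7610., 15230., 30380.])   # detector peak areas

slope, intercept = np.polyfit(std_conc, std_area, 1)          # area = slope*conc + intercept
r = np.corrcoef(std_conc, std_area)[0, 1]

sample_area = 9600.0                                          # hypothetical sample extract
sample_conc = (sample_area - intercept) / slope
print(f"r = {r:.4f}, estimated concentration = {sample_conc:.1f} µg/mL")
```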
|
97 |
Correlative multiscale imaging and quantification of bone ingrowth in porous Ti implants. Geng, Hua, January 2017.
Additively manufactured porous titanium scaffolds have been extensively investigated for orthopaedic applications. The quantification of tissue response to biomaterial implants is primarily achieved by analysing two-dimensional (2D) stained histological sections. More recently, three-dimensional X-ray micro-computed tomography (μCT) has become increasingly applied. Although histology is the gold standard, μCT allows non-destructive quantification of 3D tissue structures with minimal sample preparation and high contrast. A methodology to correlate information from both histology and μCT of a single sample might provide greater insights than examining either result separately. However, this task is challenging because histology and μCT provide different types of information (stained tissue morphology vs. greyscale values dependent on the X-ray absorption of the material) and dimensionality (2D vs. 3D). A semi-automated methodology was developed to directly quantify tissue formation and efficacy within an additively manufactured titanium implant using histology and μCT. This methodology was then extended to correlatively integrate nanoscale elemental information from nano-secondary ion mass spectrometry (NanoSIMS). The correlative information was applied to investigate the impact of silver release on bone formation within a nano-silver-coated additively manufactured implant. The correlative imaging methodology allowed quantification of the significant volumetric shrinkage (~15%) that occurs during histology slice preparation. It also demonstrated the importance of the location of the histological sectioning of the tissue and implant, revealing that differences in bone ingrowth of up to 30% can be found along the entire length of the porous implant owing to preferential bone ingrowth from the periphery to the centre. The quality and quantity of newly formed bone were found to be comparable between the uncoated and nano-silver-coated Ti implants, suggesting that the layer of silver nanoparticles on the Ti implant does not negatively impact bone formation. Further, the newly formed bone at 2 weeks had a trabecular morphology, with bone at the interface of the Ti implant as well as at a distance from it. This indicates that both contact (bone apposition on the implant) and distance (bone ingrowth from the host bone) osteogenesis were present in both types of implants. Finally, nanoscale elemental mapping showed that silver was present primarily in the osseous tissue and was co-localised with sulphur, suggesting that silver sulphide may have formed.
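As a minimal sketch of one such quantification (not the thesis's pipeline), bone ingrowth could be expressed as the fraction of the scaffold's pore space occupied by bone in a labelled µCT volume; the label values and the random volume below are assumptions.

```python
# Minimal sketch: quantify bone ingrowth from a segmented micro-CT volume as
# the fraction of the implant's pore space occupied by bone. Real data would
# come from the registered, segmented µCT stack rather than random labels.
import numpy as np

BONE, PORE, TITANIUM = 1, 2, 3
rng = np.random.default_rng(2)
labels = rng.choice([BONE, PORE, TITANIUM], size=(64, 64, 64), p=[0.2, 0.5, 0.3])

pore_space = np.isin(labels, [BONE, PORE])        # everything inside the scaffold that is not metal
ingrowth = (labels == BONE).sum() / pore_space.sum()
print(f"bone ingrowth: {ingrowth:.1%} of available pore space")
```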
|
98 |
Quantification de la distorsion de l'image corporelle chez des adolescentes atteintes d'anorexie mentale restrictive : évaluation informatique (Q-DIC) et applications cliniques / Quantification of body image distortion in adolescent girls with restrictive anorexia nervosa: computer-based assessment (Q-DIC) and clinical applications. Roy, Mathieu, January 2005.
Master's thesis digitized by the Direction des bibliothèques de l'Université de Montréal.
|
99 |
Mapeamento granulométrico do solo via imagens de satélite e atributos de relevo / Mapping topsoil texture by satellite image and relief. Caio Troula Fongaro, 20 January 2016.
|
100 |
Representação e quantificação de redes vasculares a partir de imagens de angiografia tridimensional / Representation and quantification of blood vessels from three-dimensional angiographic images. Valverde, Miguel Angel Galarreta, 12 December 2017.
Magnetic resonance angiography and computed tomography angiography are widely used tools for vascular quantification and for the diagnosis of cardiovascular diseases, which are among the leading causes of death. Large-scale analysis of vessels in these images is difficult, however, both because of the natural variability of vessels in the human body and because of the large amount of available data. Moreover, existing quantification methods usually extract features directly from vessel skeletons, or even from the angiographic images themselves, so the images may need to be reanalysed repeatedly. To ease this analysis and to provide a tool to support diagnosis, this work presents a textual representation model for vascular networks and an automatic vascular quantification methodology built on that representation. The representation is obtained by segmenting volumetric MR and CT angiography images and then extracting the trajectories and diameters of the vascular networks. It is hybrid, combining graphs with a textual sequence of instructions, and allows not only the extraction of morphological features of the vascular network but also image compression and the reconstruction of images similar to the originals. Using the extracted features, comparative studies of vascular architectures were carried out on both synthetic and real images; structural differences between architectures could be found, and aneurysms could be characterized in an individual. In parallel, a method for identifying similarity between vascular segments was developed, which in turn enables the recognition and labelling of segments in a set of vascular networks. The proposed methodology should also assist in developing blood-vessel classification processes and tools for the automatic diagnosis of vascular diseases, and in improving techniques used in clinical practice.
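As a hypothetical sketch of what a hybrid graph-plus-text vessel representation might look like (the thesis's actual instruction grammar is not reproduced here), centerline segments with per-point radii can be stored on graph edges and serialised to a simple textual sequence.

```python
# Hypothetical sketch of a hybrid vessel representation: edges of a graph carry
# centerline points with radii, and the whole network can be written out as a
# plain textual instruction sequence. The SEGMENT/POINT format is an assumption.
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: int                    # id of the bifurcation/endpoint where the segment begins
    end: int                      # id of the node where it ends
    points: list = field(default_factory=list)   # [(x, y, z, radius), ...] along the centerline

def to_text(segments):
    lines = []
    for s in segments:
        lines.append(f"SEGMENT {s.start} {s.end}")
        lines.extend(f"POINT {x:.1f} {y:.1f} {z:.1f} R {r:.2f}" for x, y, z, r in s.points)
    return "\n".join(lines)

net = [Segment(0, 1, [(0, 0, 0, 2.0), (0, 0, 5, 1.8)]),
       Segment(1, 2, [(0, 0, 5, 1.2), (3, 0, 8, 1.0)])]
print(to_text(net))

# Morphological features, e.g. the mean radius of each segment, fall out directly:
for s in net:
    print(s.start, s.end, sum(p[3] for p in s.points) / len(s.points))
```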
|