71

Segmentation Methods for Medical Image Analysis: Blood vessels, multi-scale filtering and level set methods

Läthén, Gunnar January 2010 (has links)
Image segmentation is the problem of partitioning an image into meaningful parts, often consisting of an object and background. As an important part of many imaging applications, e.g. face recognition and tracking of moving cars and people, it is of general interest to design robust and fast segmentation algorithms. However, it is well accepted that there is no general method for solving all segmentation problems. Instead, the algorithms have to be highly adapted to the application in order to achieve good performance. In this thesis, we study segmentation methods for blood vessels in medical images. The need for accurate segmentation tools in medical applications is driven by the increased capacity of the imaging devices. Common modalities such as CT and MRI generate images which simply cannot be examined manually, due to high resolutions and a large number of image slices. Furthermore, it is very difficult to visualize complex structures in three-dimensional image volumes without cutting away large portions of, perhaps important, data. Tools such as segmentation can aid the medical staff in browsing through such large images by highlighting objects of particular importance. In addition, segmentation can output models of organs, tumors, and other structures for further analysis, quantification or simulation.

We have divided the segmentation of blood vessels into two parts. First, we model the vessels as a collection of lines and edges (linear structures) and use filtering techniques to detect such structures in an image. Second, the output from this filtering is used as input for segmentation tools. Our contributions mainly lie in the design of a multi-scale filtering and integration scheme for detecting vessels of varying widths, and in the modification of optimization schemes for finding better segmentations than traditional methods do. We validate our ideas on synthetic images mimicking typical blood vessel structures, and show proof-of-concept results on real medical images.
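The multi-scale filtering-and-integration idea can be sketched in a few lines: compute a ridge measure from scale-normalized Hessian eigenvalues at several Gaussian scales, then integrate by taking the pixelwise maximum so that vessels of different widths respond at their matching scale. This is a minimal generic sketch of Hessian-based line filtering, not the thesis's actual filter or integration scheme; the eigenvalue-based ridge measure, the scale set, and the maximum-over-scales rule are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def line_filter_2d(image, sigma):
    """Single-scale ridge measure from Hessian eigenvalues (bright lines on dark)."""
    image = np.asarray(image, dtype=float)
    # Scale-normalized second derivatives via Gaussian derivative filters
    Hxx = sigma**2 * gaussian_filter(image, sigma, order=(0, 2))
    Hyy = sigma**2 * gaussian_filter(image, sigma, order=(2, 0))
    Hxy = sigma**2 * gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form eigenvalues of the 2x2 Hessian at every pixel
    half_trace = (Hxx + Hyy) / 2.0
    disc = np.sqrt(((Hxx - Hyy) / 2.0) ** 2 + Hxy**2)
    lam_min = half_trace - disc
    # A bright line has a strongly negative curvature across its axis
    return np.maximum(-lam_min, 0.0)

def multiscale_response(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Integrate scales by a pixelwise maximum to catch vessels of varying width."""
    return np.max([line_filter_2d(image, s) for s in sigmas], axis=0)
```

The resulting response map would then serve as input to the segmentation stage, e.g. as a speed or force term driving a level set front.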
72

Multi-scale modelling of shell failure for periodic quasi-brittle materials

Mercatoris, Benoît C.N. 04 January 2010 (has links)
<p align="justify">In a context of restoration of historical masonry structures, it is crucial to properly estimate the residual strength and the potential structural failure modes in order to assess the safety of buildings. Due to its mesostructure and the quasi-brittle nature of its constituents, masonry presents preferential damage orientations, strongly localised failure modes and damage-induced anisotropy, which are complex to incorporate in structural computations. Furthermore, masonry structures are generally subjected to complex loading processes including both in-plane and out-of-plane loads which considerably influence the potential failure mechanisms. As a consequence, both the membrane and the flexural behaviours of masonry walls have to be taken into account for a proper estimation of the structural stability.</p> <p align="justify">Macrosopic models used in structural computations are based on phenomenological laws including a set of parameters which characterises the average behaviour of the material. These parameters need to be identified through experimental tests, which can become costly due to the complexity of the behaviour particularly when cracks appear. The existing macroscopic models are consequently restricted to particular assumptions. Other models based on a detailed mesoscopic description are used to estimate the strength of masonry and its behaviour with failure. This is motivated by the fact that the behaviour of each constituent is a priori easier to identify than the global structural response. These mesoscopic models can however rapidly become unaffordable in terms of computational cost for the case of large-scale three-dimensional structures.</p> <p align="justify">In order to keep the accuracy of the mesoscopic modelling with a more affordable computational effort for large-scale structures, a multi-scale framework using computational homogenisation is developed to extract the macroscopic constitutive material response from computations performed on a sample of the mesostructure, thereby allowing to bridge the gap between macroscopic and mesoscopic representations. Coarse graining methodologies for the failure of quasi-brittle heterogeneous materials have started to emerge for in-plane problems but remain largely unexplored for shell descriptions. The purpose of this study is to propose a new periodic homogenisation-based multi-scale approach for quasi-brittle thin shell failure.</p> <p align="justify">For the numerical treatment of damage localisation at the structural scale, an embedded strong discontinuity approach is used to represent the collective behaviour of fine-scale cracks using average cohesive zones including mixed cracking modes and presenting evolving orientation related to fine-scale damage evolutions.</p> <p align="justify">A first originality of this research work is the definition and analysis of a criterion based on the homogenisation of a fine-scale modelling to detect localisation in a shell description and determine its evolving orientation. Secondly, an enhanced continuous-discontinuous scale transition incorporating strong embedded discontinuities driven by the damaging mesostructure is proposed for the case of in-plane loaded structures. Finally, this continuous-discontinuous homogenisation scheme is extended to a shell description in order to model the localised behaviour of out-of-plane loaded structures. 
These multi-scale approaches for failure are applied on typical masonry wall tests and verified against three-dimensional full fine-scale computations in which all the bricks and the joints are discretised.</p>
73

Multiscale Feature-Preserving Smoothing of Images and Volumes on the GPU

Jibai, Nassim 24 May 2012 (has links) (PDF)
Two-dimensional images and three-dimensional volumes have become a staple ingredient of our artistic, cultural, and scientific appetite. Images capture and immortalize an instant, such as a natural scene, through a photographic camera. Moreover, they can capture details inside biological subjects through the use of CT (computed tomography) scans, X-rays, ultrasound, etc. Three-dimensional volumes of objects are also of high interest in medical imaging, engineering, and the analysis of cultural heritage. They are produced using tomographic reconstruction, a technique that combines a large series of 2D scans captured from multiple views. Typically, penetrative radiation is used to obtain each 2D scan: X-rays for CT scans, radio-frequency waves for MRI (magnetic resonance imaging), electron-positron annihilation for PET scans, etc. Unfortunately, their acquisition is affected by noise from several factors. Noise in two-dimensional images can be caused by low-light illumination, electronic defects, a low dose of radiation, and mispositioning of the tool or object. Noise in three-dimensional volumes also comes from a variety of sources: the limited number of views, lack of captor sensitivity, high contrasts, the reconstruction algorithms, etc. The constraint that data acquisition be noiseless is unrealistic, so it is desirable to reduce, or eliminate, noise at the earliest stage in the application. However, removing noise while preserving the sharp features of an image or volume remains a challenging task. We propose a multi-scale method to smooth 2D images and 3D tomographic data while preserving features at a specified scale. Our algorithm is controlled by a single user parameter: the minimum scale of features to be preserved. Any variation smaller than the specified scale is treated as noise and smoothed, while discontinuities such as corners, edges and detail at larger scales are preserved. We demonstrate that our smoothed data produce clean images and clean contour surfaces of volumes using standard surface-extraction algorithms, and we compare our results with those of previous approaches. Our method is inspired by anisotropic diffusion: we compute our diffusion tensors from the local continuous histograms of gradients around each pixel in the image.
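For intuition about the anisotropic-diffusion family this method builds on, the classical scalar Perona-Malik scheme below smooths small variations while its conductance term shuts diffusion down across strong edges. It is a minimal CPU sketch under conventional default parameters, not the thesis's GPU algorithm, which replaces the scalar conductance with diffusion tensors computed from local continuous histograms of gradients.

```python
import numpy as np

def perona_malik(image, n_iter=50, kappa=0.1, dt=0.2):
    """Classical scalar Perona-Malik diffusion (edge-stopping smoothing).

    kappa sets the gradient magnitude treated as an edge; assumes the image
    is scaled to [0, 1]. Boundaries wrap via np.roll, for brevity only.
    """
    u = np.asarray(image, dtype=float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # conductance: small at strong edges
    for _ in range(n_iter):
        # Differences to the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Explicit update: diffuse strongly in flat regions, weakly across edges
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```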
74

Numerical modelling based on the multiscale homogenization theory. Application in composite materials and structures

Badillo Almaraz, Hiram 16 April 2012 (has links)
A multi-domain homogenization method is proposed and developed in this thesis based on a two-scale technique. The method is capable of analyzing composite structures with several periodic distributions by partitioning the entire domain of the composite into substructures, making use of the classical homogenization theory following a first-order standard continuum mechanics formulation. The need to develop the multi-domain homogenization method arose because current homogenization methods are based on the assumption that the entire domain of the composite is represented by one periodic or quasi-periodic distribution. However, in some cases the structure or composite may be formed by more than one type of periodic domain distribution, making the existing homogenization techniques unsuitable for cases in which more than one recurrent configuration appears. The theoretical principles used in the multi-domain homogenization method were applied to assemble a computational tool based on two nested boundary value problems represented by a finite element code in two scales: a) a global scale, which treats the composite as a homogeneous material and deals with the boundary conditions, the applied loads and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) a local scale, which obtains the homogenized response of the representative volume element or unit cell, and deals with the geometry distribution and the material properties of the constituents. The method is based on the local periodicity hypothesis arising from the periodicity of the internal structure of the composite. The restrictions on the displacements and forces corresponding to the degrees of freedom of the domain's boundary derived from the periodicity were implemented numerically by means of the Lagrange multipliers method. The formulation includes a method to compute the homogenized non-linear tangent constitutive tensor once the threshold of non-linearity of any of the unit cells has been surpassed. The procedure is based on numerical differentiation using a perturbation technique: the tangent constitutive tensor is computed for each load increment and for each iteration of the analysis once the structure has entered the non-linear range. The perturbation method was applied at both the global and the local scale in order to analyze its performance at each scale. A simple average of the constitutive tensors of the elements of the cell was also explored for comparison purposes. The multi-domain homogenization method was parallelized in order to speed up the computation, given the huge computational cost that the nested incremental-iterative solution entails. The effect of softening in two-scale homogenization was investigated following a smeared crack approach. Mesh objectivity was discussed first within the classical one-scale FE formulation, and the concepts were then extrapolated to the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis was highlighted in the computation of the specific dissipated energy when strain-softening occurs. Various examples were presented to evaluate and explore the capabilities of the computational approach developed in this research.
Several aspects were studied, such as composite arrangements that include different types of materials, composites that present softening after the yield point is reached (e.g. damage and plasticity), and composites with zones that present high strain gradients. The examples were carried out on composites with one and with several periodic domains, using different unit cell configurations, and are compared to benchmark solutions obtained with the classical one-scale FE method.
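The perturbation-based tangent described above reduces, in sketch form, to finite differencing of the homogenized stress with respect to the macro strain. In the snippet below, `solve_rve` is a hypothetical placeholder for the nested unit-cell boundary value problem (it maps a macro strain vector in Voigt notation to the homogenized stress vector); the forward-difference scheme and step size are illustrative assumptions.

```python
import numpy as np

def homogenized_tangent(solve_rve, strain, eps=1e-8):
    """Numerical tangent C[i, j] = d(stress_i)/d(strain_j) by forward differences.

    solve_rve : callable mapping a macro strain vector (Voigt notation) to the
                homogenized stress vector -- a stand-in for the cell problem.
    """
    stress0 = solve_rve(strain)
    n = strain.size
    C = np.zeros((n, n))
    for j in range(n):
        perturbed = strain.copy()
        perturbed[j] += eps                    # perturb one strain component
        C[:, j] = (solve_rve(perturbed) - stress0) / eps
    return C
```

In the thesis this derivative is re-evaluated at every load increment and iteration once a cell enters the non-linear range, which is what makes the nested solution expensive enough to motivate parallelization.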
75

A Multi-scale Stochastic Filter Based Approach to Inverse Scattering for 3D Ultrasound Soft Tissue Characterization

Tsui, Patrick Pak Chuen January 2009 (has links)
The goal of this research is to achieve accurate characterization of multi-layered soft tissues in three dimensions using focused ultrasound. The characterization of the acoustic parameters of each tissue layer is formulated as recursive processes of forward- and inverse-scattering. Forward scattering deals with the modeling of focused ultrasound wave propagation in multi-layered tissues, and the computation of the focused wave amplitudes in the tissues based on the acoustic parameters of the tissue as generated by inverse scattering. The model for mapping the tissue acoustic parameters to focused waves is highly nonlinear and stochastic. In addition, solving (or inverting) the model to obtain tissue acoustic parameters is an ill-posed problem. Therefore, a nonlinear stochastic inverse scattering method is proposed such that no linearization and mathematical inversion of the model are required. Inverse scattering aims to estimate the tissue acoustic parameters based on the forward scattering model and ultrasound measurements of the tissues. A multi-scale stochastic filter (MSF) is proposed to perform inverse scattering. MSF generates a set of tissue acoustic parameters, which are then mapped into focused wave amplitudes in the multi-layered tissues by forward scattering. The tissue acoustic parameters are weighted by comparing their focused wave amplitudes to the actual ultrasound measurements. The weighted parameters are used to estimate a weighted Gaussian mixture as the posterior probability density function (PDF) of the parameters. This PDF is optimized to achieve minimum estimation error variance in the sense of the posterior Cramér-Rao bound. The optimized posterior PDF is used to produce minimum mean-square-error estimates of the tissue acoustic parameters. As a result, both the estimation error and uncertainty of the parameters are minimized. PDF optimization is formulated within a novel multi-scale PDF analysis framework. This framework is founded on the analogy between PDFs and analog (or digital) signals. PDFs and signals are similar in the sense that they represent characteristics of variables in their respective domains, except that there are constraints imposed on PDFs. Therefore, it is reasonable to consider a PDF as a signal that is subject to amplitude constraints, and as such apply signal processing techniques to analyze it. The multi-scale PDF analysis framework is proposed to recursively decompose an arbitrary PDF from its fine to coarse scales. The recursive decompositions are designed to ensure that requirements such as PDF constraints, zero phase shift and non-creation of artifacts are satisfied. The relationship between the PDFs at consecutive scales is derived in order for the PDF optimization process to recursively reconstruct the posterior PDF from its coarse to fine scales. At each scale, PDF reconstruction aims to reduce the variances of the posterior PDF Gaussian components, and as a result the confidence in the estimate is increased. The overall posterior PDF variance reduction is guided by the posterior Cramér-Rao bound. A series of experiments is conducted to investigate the performance of the proposed method on ultrasound multi-layered soft tissue characterization. Multi-layered tissue phantoms that emulate ocular components of the eye are fabricated as test subjects. Experimental results confirm that the proposed MSF inverse scattering approach is well suited for three-dimensional ultrasound tissue characterization.
In addition, performance comparisons between MSF and a state-of-the-art nonlinear stochastic filter are conducted. Results show that MSF is more accurate and less computationally intensive than the state-of-the-art filter.
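The core weighting step of such a filter can be sketched as importance weighting of parameter samples: each candidate parameter set is pushed through the forward-scattering model, scored against the measured amplitudes under a Gaussian noise assumption, and the normalized weights define the mixture used for the MMSE estimate. The `forward_model` callable and the isotropic noise model below are hypothetical placeholders; the multi-scale PDF decomposition and Cramér-Rao-guided optimization are not shown.

```python
import numpy as np

def weight_and_estimate(samples, forward_model, measurement, noise_std):
    """Score parameter samples against a measurement; return the MMSE estimate.

    samples       : (N, d) array of candidate tissue-parameter vectors
    forward_model : callable mapping a parameter vector to predicted wave
                    amplitudes (a stand-in for the forward-scattering model)
    """
    # Gaussian log-likelihood of the measurement under each sample's prediction
    sq_err = np.array([np.sum((forward_model(s) - measurement) ** 2)
                       for s in samples])
    logw = -0.5 * sq_err / noise_std**2
    w = np.exp(logw - logw.max())          # subtract max for numerical stability
    w /= w.sum()
    # Weighted mixture mean = minimum mean-square-error parameter estimate
    return (w[:, None] * samples).sum(axis=0), w
```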
77

Probabilistic complex phase representation objective function for multimodal image registration

Wong, Alexander 04 August 2010 (has links)
An interesting problem in computer vision is that of image registration, which plays an important role in many vision-based recognition and motion analysis applications. Of particular interest among data registration problems are multimodal image registration problems, where the image data sets are acquired using different imaging modalities. There are several important issues that make real-world multimodal registration a difficult problem to solve. First, images are often characterized by illumination and contrast non-uniformities. Such image non-uniformities result in local minima along the convergence plane that make it difficult for local optimization schemes to converge to the correct solution. Second, real-world images are often contaminated with signal noise, making the extraction of meaningful features for comparison purposes difficult to accomplish. Third, feature space differences make performing direct comparisons between the different data sets with a reasonable level of accuracy a challenging problem. Finally, solving the multimodal registration problem can be computationally expensive for large images. This thesis presents a probabilistic complex phase representation (PCPR) objective function for registering images acquired using different imaging modalities. A probabilistic multi-scale approach is introduced to create image representations based on local phase relationships extracted using complex wavelets. An objective function is introduced for assessing the alignment between the images based on a Geman-McClure error distribution model between the probabilistic complex phase representations of the images. Experimental results show that the proposed PCPR objective function can provide improved registration accuracies when compared to existing objective functions.
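The Geman-McClure model referred to above is a standard robust penalty: quadratic for small residuals and saturating for large ones, so gross mismatches between the phase representations cannot dominate the objective. The sketch below shows the penalty and a sum-over-pixels objective under that assumption; the construction of the probabilistic complex phase representations themselves (via complex wavelets) is not reproduced here.

```python
import numpy as np

def geman_mcclure(residual, sigma=1.0):
    """Geman-McClure penalty: ~r^2 near zero, saturating toward 1 for outliers."""
    r2 = residual**2
    return r2 / (r2 + sigma**2)

def registration_objective(phase_fixed, phase_moving, sigma=1.0):
    """Robust dissimilarity between two phase-representation images."""
    return geman_mcclure(phase_fixed - phase_moving, sigma).sum()
```

Because the penalty flattens for large residuals, its influence on the optimizer fades for outlier pixels, which is what lends robustness against non-uniformities and noise.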
78

Using sex pheromone and a multi-scale approach to predict the distribution of a rare saproxylic beetle

Musa, Najihah January 2012 (has links)
The European red click beetle, Elater ferrugineus L., is associated with wood mould in old hollow deciduous trees. As a result of severe habitat fragmentation caused by human disturbance, it is threatened throughout its distribution range. A new odour-based trapping method, which is very efficient in attracting males, was used in the present study to relate the occurrence of E. ferrugineus to the density of deciduous trees, using a recently completed regional survey recording more than 120,000 deciduous trees. Results showed that the occurrence of E. ferrugineus increased with an increasing amount of large hollow and large non-hollow trees in the surrounding landscape. Quercus robur was found to be an important substrate for E. ferrugineus, and two groups of tree species (Carpinus betulus, Fagus sylvatica and Ulmus glabra vs. Acer platanoides, Aesculus hippocastanum, Fraxinus excelsior and Tilia cordata) could complement Quercus in sustaining the beetle's population. E. ferrugineus responded to the density of Quercus at two different spatial scales, 327 m and 4658 m, suggesting that a multi-scale approach is important for studying the species. In conclusion, for conservation management, priority should be given to Quercus, and also to all deciduous trees in the genera listed above, regardless of tree quality. The response shown by E. ferrugineus to the amount of substrate at two different scales indicates that a multi-scale approach to studying this particular species is the better option, as a single-scale approach may result in poor decision support.
79

Spectral Integral Method and Spectral Element Method Domain Decomposition Method for Electromagnetic Field Analysis

Lin, Yun January 2011 (has links)
In this work, we propose a spectral integral method (SIM)-spectral element method (SEM)-finite element method (FEM) domain decomposition method (DDM) for solving inhomogeneous multi-scale problems. The proposed SIM-SEM-FEM domain decomposition algorithm can efficiently handle problems with multi-scale structures by using FEM to model electrically small sub-domains and SEM to model electrically large and smooth sub-domains. The SIM is utilized as an efficient boundary condition. This combination reduces the total number of elements used in solving multi-scale problems, so it is more efficient than the conventional FEM or the conventional FEM domain decomposition method. Another merit of the proposed method is that it is capable of handling arbitrary non-conforming elements: geometry modeling and mesh generation are completely independent for the different sub-domains, which makes both highly flexible. As a result, the proposed SIM-SEM-FEM DDM algorithm is well suited to solving inhomogeneous multi-scale problems.
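Domain decomposition itself can be illustrated on a toy problem. The sketch below runs a classical alternating overlapping Schwarz iteration for a 1D Poisson equation split into two overlapping subdomains, each solved with its neighbour's current trace as a Dirichlet condition. This is a generic textbook scheme for intuition only, not the SIM-SEM-FEM algorithm, which couples dissimilar discretizations through the SIM boundary condition and admits non-conforming meshes.

```python
import numpy as np

def schwarz_1d(f, n=101, overlap=10, n_iter=50):
    """Alternating overlapping Schwarz for -u'' = f on [0, 1], u(0) = u(1) = 0.

    f : array of length n sampling the right-hand side on a uniform grid.
    """
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    mid = n // 2
    a, b = mid - overlap, mid + overlap      # overlapping interface indices

    def solve_dirichlet(lo, hi):
        # Direct solve on grid points lo+1 .. hi-1 with current boundary values
        m = hi - lo - 1
        A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
        rhs = f[lo + 1:hi].astype(float).copy()
        rhs[0] += u[lo] / h**2
        rhs[-1] += u[hi] / h**2
        u[lo + 1:hi] = np.linalg.solve(A, rhs)

    for _ in range(n_iter):
        solve_dirichlet(0, b)        # left subdomain; right BC from the iterate
        solve_dirichlet(a, n - 1)    # right subdomain; left BC from the iterate
    return u

# Example: -u'' = 1 has the exact solution u(x) = x(1 - x)/2
u = schwarz_1d(np.ones(101))
```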
80

Homogenization and Bridging Multi-scale Methods for the Dynamic Analysis of Periodic Solids

Gonella, Stefano 03 May 2007 (has links)
This work investigates the application of homogenization techniques to the dynamic analysis of periodic solids, with emphasis on lattice structures. The presented analysis is conducted both through a Fourier-based technique and through an alternative approach involving Taylor series expansions directly performed in the spatial domain in conjunction with a finite element formulation of the lattice unit cell. The challenge of increasing the accuracy and the range of applicability of the existing homogenization methods is addressed with various techniques. Among them, a multi-cell homogenization is introduced to extend the region of good approximation of the methods to include the short wavelength limit. The continuous partial differential equations resulting from the homogenization process are also used to estimate equivalent mechanical properties of lattices with various internal configurations. In particular, a detailed investigation is conducted on the in-plane behavior of hexagonal and re-entrant honeycombs, for which both static properties and wave propagation characteristics are retrieved by applying the proposed techniques. The analysis of wave propagation in homogenized media is furthermore investigated by means of the bridging scales method to address the problem of modelling travelling waves in homogenized media with localized discontinuities. This multi-scale approach reduces the computational cost associated with a detailed finite element analysis conducted over the entire domain and yields considerable savings in CPU time. The combined use of homogenization and the bridging method is suggested as a powerful tool for fast and accurate wave simulation, and its potential for NDE (non-destructive evaluation) applications is discussed.
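The gap a multi-cell homogenization must close can be seen in a worked one-dimensional example: the exact Bloch dispersion of a monatomic spring-mass lattice versus its leading-order (continuum) Taylor approximation, which is accurate at long wavelengths but degrades toward the Brillouin zone edge. The lattice constants below are arbitrary illustrative values, not taken from the thesis.

```python
import numpy as np

# 1D monatomic spring-mass chain: exact Bloch dispersion vs. the
# long-wavelength (homogenized continuum) approximation.
K, m, a = 1.0, 1.0, 1.0                  # spring stiffness, mass, lattice spacing
k = np.linspace(1e-6, np.pi / a, 200)    # wavenumbers in the first Brillouin zone

omega_exact = 2.0 * np.sqrt(K / m) * np.abs(np.sin(k * a / 2.0))
omega_homog = a * np.sqrt(K / m) * k     # leading-order Taylor (continuum) limit

# The continuum branch overshoots near the zone edge (k = pi/a) -- the
# short-wavelength regime that multi-cell homogenization targets.
print(f"exact zone-edge frequency  : {omega_exact[-1]:.3f}")   # 2.000
print(f"continuum prediction there : {omega_homog[-1]:.3f}")   # 3.142
```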
