61

Efficient numerical methods for ultrasound elastography

Squires, Timothy Richard January 2012 (has links)
In this thesis, two algorithms are introduced for use in ultrasound elastography. Ultrasound elastography is a technique developed over the last 20 years by which anomalous regions in soft tissue are located and diagnosed without the need for biopsy. Because of this, the relatively cheap cost of ultrasound imaging and the high level of accuracy of the methods, ultrasound elastography has shown great potential for the diagnosis of cancer in soft tissues. The algorithms introduced in this thesis represent an advance in this field. The first algorithm is a two-step iteration procedure consisting of two minimization problems, displacement estimation and elastic parameter calculation, that together allow diagnosis of any anomalous regions within soft tissue. The algorithm improves on existing methods in several ways: a weighting factor is introduced at each point in the tissue, dependent on the confidence in the accuracy of the data at that point; an exponential substitution is made for the elasticity modulus; an adjoint method is used for efficient calculation of the gradient vector; and a total variation regularization technique is used. Most importantly, an adaptive mesh refinement strategy is introduced that allows highly efficient calculation of the elasticity distribution of the tissue using a number of degrees of freedom several orders of magnitude lower than methods that use a uniform mesh refinement strategy. Results are presented that show the algorithm is robust even in the presence of significant noise and that it can locate a tumour 4 mm in diameter within a 5 cm square region of tissue. The algorithm is also extended into three dimensions, and results are presented showing that it can calculate a three-dimensional elasticity distribution efficiently; this extension into 3D is a significant advance in the field. The second algorithm is a one-step algorithm that combines the two problems of elasticity distribution and displacement calculation into one. As in the two-step algorithm, a weighting factor, an exponential substitution for the elasticity parameter, an adjoint method for calculation of the gradient vector, total variation regularization and an adaptive mesh refinement strategy are incorporated. Results are presented showing that this original approach can locate tumours of varying sizes and shapes in the presence of varying levels of added artificial noise, and that it can determine the presence of a tumour in images taken from breast tissue in vivo.
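As an editorial illustration of the optimisation this abstract describes, the following schematic assembles its named ingredients (per-point weights, exponential substitution, total variation regularisation, adjoint gradients) into a standard inverse-problem form; the notation is assumed for illustration, not taken from the thesis.

```latex
% Schematic objective, with measured displacements u_i^m, model
% displacements u_i(E) from the forward elasticity PDE, per-point
% confidence weights w_i, the substitution E = e^p, and a TV penalty:
\min_{p}\; J(p) \;=\; \sum_i w_i \,\bigl| u_i^{m} - u_i(e^{p}) \bigr|^2
  \;+\; \alpha \int_\Omega \lvert \nabla p \rvert \, dx
% An adjoint solve supplies \nabla J at the cost of roughly one extra
% PDE solution per iteration, independent of the number of unknowns in p.
```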
62

Analysis of 3D echocardiography

Chykeyuk, Kiryl January 2014 (has links)
Heart disease is the major cause of death in the developed world. Due to its fast, portable, low-cost and harmless way of imaging the heart, echocardiography has become the most frequent tool for diagnosis of cardiac function in clinical routine. However, visual assessment of heart function from echocardiography is challenging, highly operator-dependent and subject to intra- and inter-observer errors. The development of automated methods for echocardiography analysis is therefore an important step towards accurate assessment of cardiac function. In this thesis we develop new ways to model echocardiography data using Bayesian machine learning methods and address three problems: (i) wall motion analysis in 2D stress echocardiography, (ii) segmentation of the myocardium in 3D echocardiography, and (iii) standard view extraction from 3D echocardiography. Firstly, we propose and compare four discriminative methods for feature extraction and wall motion classification in 2D stress echocardiography (images of the heart taken at rest and after exercise or pharmacological stress). The four methods are based on (i) Support Vector Machines, (ii) Relevance Vector Machines, (iii) the Lasso algorithm and Regularised Least Squares, and (iv) Elastic Net regularisation and Regularised Least Squares. Although all the methods are shown to have superior performance to the state of the art, one conclusion is that good segmentation of the myocardium in echocardiography is key for accurate assessment of cardiac wall motion. We investigate the application of one of the most promising current machine learning techniques, Random Decision Forests, to segment the myocardium from 3D echocardiograms. We demonstrate that more reliable and ultrasound-specific descriptors are needed in order to achieve the best results. Specifically, we introduce two sets of new features to improve the segmentation results: (i) LoCo and GloCo features, with a local and a global shape constraint on coupled endo- and epicardial boundaries, and (ii) FA features, which use the Feature Asymmetry measure to highlight step-like edges in echocardiographic images. We also reinforce traditional features, such as Haar and Rectangular features, by aligning the 3D echocardiograms. For this we develop a new registration technique based on aligning the centre lines of the left ventricles, and show that alignment boosts performance by approximately 15%. Finally, a novel approach to detect planes in 3D images using regression voting is proposed. To the best of our knowledge, we are the first to use a one-step regression approach for the task of plane detection in 3D images. We investigate its application to standard view extraction from 3D echocardiography to facilitate efficient clinical inspection of cardiac abnormalities and diseases. We further develop a new method, the Class-Specific Regression Forest, in which class label information is incorporated into the training phase to reinforce learning from classes semantically relevant to the problem. During testing, votes from irrelevant classes are excluded from voting to maximise the confidence of the output predictors. We demonstrate that the Class-Specific Regression Random Forest outperforms the classic Regression Random Forest and produces results comparable to manual annotations.
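As a rough sketch of the regression-voting idea for plane detection mentioned in this abstract (the thesis method, its features and its class-specific vote filtering are not reproduced here; the training data and the mode-seeking step below are illustrative placeholders):

```python
# Schematic regression voting for plane detection: each image patch casts a
# vote for the plane parameters (unit normal n and offset d), and the
# densest cluster of votes is taken as the detection. Placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 16))      # stand-in patch features
y_train = rng.normal(size=(500, 4))       # stand-in targets (nx, ny, nz, d)
forest = RandomForestRegressor(n_estimators=50).fit(X_train, y_train)

def detect_plane(patch_features):
    votes = forest.predict(patch_features)     # one (n, d) vote per patch
    centre = np.median(votes, axis=0)          # crude mode seeking:
    dist = np.linalg.norm(votes - centre, axis=1)
    keep = dist < np.percentile(dist, 25)      # keep the densest quartile
    n_d = votes[keep].mean(axis=0)
    n_d[:3] /= np.linalg.norm(n_d[:3])         # renormalise the normal
    return n_d

plane = detect_plane(rng.normal(size=(200, 16)))
```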
63

Iterative Local Model Selection for tracking and mapping

Segal, Aleksandr V. January 2014 (has links)
The past decade has seen great progress in research on large-scale mapping and perception in static environments. Real-world perception requires handling uncertain situations with multiple possible interpretations: e.g. changing appearances, dynamic objects, and varying motion models. These aspects of perception have largely been avoided through the use of heuristics and preprocessing. This thesis is motivated by the challenge of including discrete reasoning directly in the estimation process. We approach the problem by using Conditional Linear Gaussian Networks (CLGNs) as a generalization of least-squares estimation that allows the inclusion of discrete model selection variables. CLGNs are a powerful framework for modeling sparse multi-modal inference problems, but are difficult to solve efficiently. We propose the Iterative Local Model Selection (ILMS) algorithm as a general approximation strategy specifically geared towards the large-scale problems encountered in tracking and mapping. Chapter 4 introduces the ILMS algorithm and compares its performance to traditional approximate inference techniques for Switching Linear Dynamical Systems (SLDSs). These evaluations validate the characteristics of the algorithm that make it particularly attractive for applications in robot perception: chief among these are reliability of convergence, consistent performance, and a reasonable trade-off between accuracy and efficiency. In Chapter 5, we show how the data association problem in multi-target tracking can be formulated as an SLDS and effectively solved using ILMS. The SLDS formulation allows additional discrete variables to be introduced to model outliers and clutter in the scene. Evaluations on standard pedestrian tracking sequences demonstrate performance competitive with the state of the art. Chapter 6 applies the ILMS algorithm to robust pose graph estimation. A non-linear CLGN is constructed by introducing outlier indicator variables for all loop closures, and the standard Gauss-Newton optimization algorithm is modified to use ILMS as the inference algorithm between linearizations. Experiments demonstrate a large improvement over state-of-the-art robust techniques. The ILMS strategy presented in this thesis is simple and general, but still works surprisingly well; we argue that these properties are encouraging for wider applicability to problems in robot perception.
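A minimal sketch of the alternating structure such an algorithm might take, as suggested by the abstract (function names and the convergence test are placeholders, not the thesis API):

```python
# Sketch of Iterative Local Model Selection as coordinate descent:
# alternate between locally re-selecting each discrete model variable and
# re-solving the continuous least-squares problem it conditions.
def ilms(discrete_vars, models, solve_least_squares, local_energy,
         max_iters=50):
    assignment = {v: models[v][0] for v in discrete_vars}  # initial guess
    x = solve_least_squares(assignment)        # continuous estimate
    for _ in range(max_iters):
        changed = False
        for v in discrete_vars:
            # Choose the model minimising the local conditional energy
            # given the current continuous estimate x.
            best = min(models[v], key=lambda m: local_energy(v, m, x))
            if best != assignment[v]:
                assignment[v] = best
                changed = True
        x = solve_least_squares(assignment)    # re-solve / relinearise
        if not changed:                        # fixed point reached
            break
    return assignment, x
```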
64

Určování poloh robotů Trilobot / Determination of Trilobot Robots Positions

Loyka, Tomáš January 2007 (has links)
This master's thesis deals with machine vision and methods of image processing and analysis. The aim is to create an application that determines the relative positions of Trilobot robots in the laboratory.
65

Developing clinical measures of lung function in COPD patients using medical imaging and computational modelling

Doel, Thomas MacArthur Winter January 2012 (has links)
Chronic obstructive pulmonary disease (COPD) describes a range of lung conditions including emphysema, chronic bronchitis and small airways disease. While COPD is a major cause of death and debilitating illness, current clinical assessment methods are inadequate: they are a poor predictor of patient outcome and insensitive to mild disease. A new imaging technology, hyperpolarised xenon MRI, offers the hope of improved diagnostic techniques, based on regional measurements using functional imaging. There is a need for quantitative analysis techniques to assist in the interpretation of these images. The aim of this work is to develop these techniques as part of a clinical trial into hyperpolarised xenon MRI. In this thesis we develop a fully automated pipeline for deriving regional measurements of lung function, making use of the multiple imaging modalities available from the trial. The core of our pipeline is a novel method for automatically segmenting the pulmonary lobes from CT data. This method combines a Hessian-based filter for detecting pulmonary fissures with anatomical cues from segmented lungs, airways and pulmonary vessels. The pipeline also includes methods for segmenting the lungs from CT and MRI data, and the airways from CT data. We apply this lobar map to the xenon MRI data using a multi-modal image registration technique based on automatically segmented lung boundaries, using proton MRI as an intermediate stage. We demonstrate our pipeline by deriving lobar measurements of ventilated volumes and diffusion from hyperpolarised xenon MRI data. In future work, we will use the trial data to further validate the pipeline and investigate the potential of xenon MRI in the clinical assessment of COPD. We also demonstrate how our work can be extended to build personalised computational models of the lung, which can be used to gain insights into the mechanisms of lung disease.
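The Hessian-based fissure filter at the heart of the lobe segmentation can be sketched as below; pulmonary fissures appear in CT as thin bright sheets with one dominant Hessian eigenvalue. The plateness measure and its constants are illustrative assumptions, not the filter from the thesis.

```python
# Sketch of a scale-space Hessian 'plateness' filter for fissure detection.
import numpy as np
from scipy import ndimage

def fissureness(volume, sigma=1.0):
    vol = np.asarray(volume, dtype=float)
    # Hessian via Gaussian derivative filters (one entry per axis pair).
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            H[..., i, j] = ndimage.gaussian_filter(vol, sigma, order=order)
    # Eigenvalues sorted by magnitude: |l1| <= |l2| <= |l3|.
    eig = np.linalg.eigvalsh(H)
    idx = np.argsort(np.abs(eig), axis=-1)
    eig = np.take_along_axis(eig, idx, axis=-1)
    l2, l3 = eig[..., 1], eig[..., 2]
    # Plate-like: strong curvature across the sheet, weak within it.
    plate = np.abs(l3) * np.exp(-((l2 / (np.abs(l3) + 1e-12)) ** 2) / 0.25)
    return np.where(l3 < 0, plate, 0.0)   # bright sheets curve downwards
```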
66

\"Processamento e análise de imagens para medição de vícios de refração ocular\" / Image Processing and Analysis for Measuring Ocular Refraction Errors

Valerio Netto, Antonio 18 August 2003 (has links)
Este trabalho apresenta um sistema computacional que utiliza técnicas de Aprendizado de Máquina (AM) para auxiliar o diagnóstico oftalmológico. Trata-se de um sistema de medidas objetivas e automáticas dos principais vícios de refração ocular, astigmatismo, hipermetropia e miopia. O sistema funcional desenvolvido aplica técnicas convencionais de processamento a imagens do olho humano fornecidas por uma técnica de aquisição chamada Hartmann-Shack (HS), ou Shack-Hartmann (SH), com o objetivo de extrair e enquadrar a região de interesse e remover ruídos. Em seguida, vetores de características são extraídos dessas imagens pela técnica de transformada wavelet de Gabor e, posteriormente, analisados por técnicas de AM para diagnosticar os possíveis vícios refrativos presentes no globo ocular representado. Os resultados obtidos indicam a potencialidade dessa abordagem para a interpretação de imagens de HS de forma que, futuramente, outros problemas oculares possam ser detectados e medidos a partir dessas imagens. Além da implementação de uma nova abordagem para a medição dos vícios refrativos e da introdução de técnicas de AM na análise de imagens oftalmológicas, o trabalho contribui para a investigação da utilização de Máquinas de Vetores Suporte e Redes Neurais Artificiais em sistemas de Entendimento/Interpretação de Imagens (Image Understanding). O desenvolvimento deste sistema permite verificar criticamente a adequação e limitações dessas técnicas para a execução de tarefas no campo do Entendimento/Interpretação de Imagens em problemas reais. / This work presents a computational system that uses Machine Learning (ML) techniques to assist in ophthalmological diagnosis. The system developed produces objective and automatic measures of ocular refraction errors, namely astigmatism, hypermetropia and myopia, from functional images of the human eye acquired with a technique known as Hartmann-Shack (HS), or Shack-Hartmann (SH). Image processing techniques are applied to these images in order to remove noise and extract the regions of interest. The Gabor wavelet transform technique is applied to extract feature vectors from the images, which are then input to ML techniques that output a diagnosis of the refractive errors in the imaged eye globe. Results indicate that the proposed approach creates interesting possibilities for the interpretation of HS images, so that in the future other types of ocular diseases may be detected and measured from the same images. In addition to implementing a novel approach for measuring ocular refraction errors and introducing ML techniques for analyzing ophthalmological images, this work investigates the use of Artificial Neural Networks and Support Vector Machines (SVMs) for tasks in Image Understanding. The description of the process adopted for developing this system can help in critically verifying the suitability and limitations of such techniques for solving Image Understanding tasks in "real world" problems.
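A hedged sketch of the pipeline this abstract outlines, Gabor wavelet features fed to a Support Vector Machine; the filter bank parameters, the summary statistics and the placeholder data are assumptions for illustration only.

```python
# Gabor filter-bank features + SVM classification of Hartmann-Shack images.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(image, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)        # magnitude of Gabor response
            feats += [mag.mean(), mag.std()]  # coarse summary statistics
    return np.array(feats)

# Placeholder data standing in for labelled Hartmann-Shack images.
rng = np.random.default_rng(0)
images = rng.random((20, 32, 32))
labels = rng.integers(0, 3, 20)   # e.g. myopia / hypermetropia / astigmatism
X = np.array([gabor_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
```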
67

Photogrammetric techniques for characterisation of anisotropic mechanical properties of Ti-6Al-4V

Arthington, Matthew Reginald January 2010 (has links)
The principal aims of this research have been the development of photogrammetric techniques for the measurement of anisotropic deformation in uniaxially loaded cylindrical specimens. This has been achieved through the use of calibrated cameras and the application of edge detection and multiple view geometry. The techniques have been demonstrated at quasi-static strain rates, 10^-3 s^-1, using a screw-driven loading device, and at high strain rates, 10^3 s^-1, using Split Hopkinson Bars. The materials that have been measured using the technique are nearly isotropic steel, anisotropic cross-rolled Ti-6Al-4V and anisotropic clock-rolled commercially pure Zr. These techniques allow the surface shapes of specimens that deform elliptically to be completely tracked and measured in situ during loading. This has allowed the measurement of properties that could not have been recorded before, including true direct stress and the ratio of transverse strains in principal material directions, at quasi-static and elevated strain rates, in tension and compression. The techniques have been validated by measuring elliptical prisms of various aspect ratios and by independently measuring interrupted specimens using a coordinate measurement machine. A secondary aim of this research has been to improve the characterisation of the anisotropic mechanical properties of cross-rolled Ti-6Al-4V using the techniques developed. In particular, the uniaxial yield stresses, hardening properties and the associated anisotropic deformation behaviour along the principal material directions have all been recorded in detail not seen before. Significant findings include higher yield stresses in-plane than in the through-thickness direction in both tension and compression, and the near transverse-isotropy of the through-thickness direction for loading conditions other than quasi-static tension, where significant anisotropy was observed.
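The transverse strain ratio referred to above follows from standard true-strain definitions; with assumed notation for the tracked elliptical semi-axes a and b (initial values a_0, b_0):

```latex
\varepsilon_a = \ln\frac{a}{a_0}, \qquad
\varepsilon_b = \ln\frac{b}{b_0}, \qquad
R = \frac{\varepsilon_a}{\varepsilon_b}
% and true direct stress uses the current elliptical cross-sectional
% area A = \pi a b rather than the nominal initial area.
```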
69

Towards large area single crystalline two dimensional atomic crystals for nanotechnology applications

Wu, Yimin A. January 2012 (has links)
Nanomaterials have attracted great interest due to their unique physical properties and great potential in nanoscale device applications. Two-dimensional atomic crystals, which are of atomic thickness, and especially graphene, have recently triggered a gold rush because of their fascinating room-temperature mobility for future electronics. The crystal structure of a nanomaterial has a great influence on its physical properties. This thesis therefore focuses on developing low-cost methods to control the crystal structure of nanomaterials, namely quantum dots as semiconductors, boron nitride (BN) as an insulator and graphene as a semimetal, for applications in photonics, structural support and electronics. Firstly, Mn-doped ZnSe quantum dots have been synthesized using colloidal synthesis. Shape control of the Mn-doped ZnSe quantum dots, from branched to spherical, has been achieved by switching the injection temperature from the kinetic to the thermodynamic region, and the injection rate has been found to control the crystal phase from zinc blende to wurtzite. The structure-property relationship has been investigated: the spherical wurtzite Mn-doped ZnSe quantum dots have the highest quantum yield compared with dots of other shapes or crystal phases. The Mn-doped ZnSe quantum dots were then deposited onto BN sheets, micron-sized and fabricated by chemical exfoliation, for high-resolution imaging. This is the first demonstration of an ultrathin, carbon-free 2D atomic crystal used as a support for high-resolution imaging. Phase contrast images reveal moiré interference patterns between the nanocrystals and the BN substrate, which are used to determine the relative orientation of the nanocrystals with respect to the BN sheets and the interfering lattice planes using a newly developed equation method. Double diffraction is observed and has been analyzed using a vector method. As only micron-sized 2D atomic crystals, such as BN, can be fabricated by chemical exfoliation, chemical vapour deposition (CVD) is used as an alternative to fabricate large-area graphene. The mechanism and growth dynamics of graphene domains have been investigated using Cu-catalyzed atmospheric-pressure CVD. Rectangular few-layer graphene domains were synthesized for the first time; they grow only on Cu grains with (111) orientation, due to the interplay between the atomic structure of the Cu lattice and the graphene domains, whereas hexagonal graphene domains can form on nearly all non-(111) Cu surfaces. The few-layer hexagonal single-crystal graphene domains were aligned in their crystallographic orientation over the millimetre scale. To improve the alignment and reduce the number of graphene layers, a novel method is introduced in which the CVD reaction is performed above the melting point of copper (at 1090 °C), with molybdenum or tungsten used to prevent the molten copper from balling up through dewetting. By controlling the amount of hydrogen during growth, individual single-crystal monolayer domains over 200 µm in size are produced, as determined by electron diffraction mapping. Raman mapping confirms the monolayer nature of graphene grown by this method, and this graphene exhibits a linear dispersion relationship and no sign of doping. The large-scale alignment of monolayer hexagonal graphene domains with an epitaxial relationship on Cu is the key to obtaining wafer-sized single-crystal monolayer graphene films, paving the way for industrial-scale production of 2D single-crystal graphene.
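The moiré analysis mentioned here is consistent with the textbook fringe-spacing relation below (a standard result, not necessarily the newly developed equation method of the thesis); d_1 and d_2 are the two lattice spacings and theta their relative rotation:

```latex
D \;=\; \frac{d_1 d_2}{\sqrt{d_1^2 + d_2^2 - 2 d_1 d_2 \cos\theta}}
% Measuring the moiré period D between nanocrystal and BN lattice fringes
% then lets the relative orientation \theta be recovered.
```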
70

Left ventricle functional analysis in 2D+t contrast echocardiography within an atlas-based deformable template model framework

Casero Cañas, Ramón January 2008 (has links)
This biomedical engineering thesis explores the opportunities and challenges of 2D+t contrast echocardiography for left ventricle functional analysis, both clinically and within a computer vision atlas-based deformable template model framework. A database was created for the experiments in this thesis, with 21 studies of contrast Dobutamine Stress Echo in all 4 principal planes; it includes clinical variables, human expert hand-traced myocardial contours and visual scoring. First the problem is studied from a clinical perspective. Quantification of endocardial global and local function using standard measures shows expected values and agreement with human expert visual scoring, but the results are less reliable for myocardial thickening. Next, the problem of segmenting the endocardium with a computer is posed in a standard landmark and atlas-based deformable template model framework. The underlying assumption is that these models can emulate human experts in integrating previous knowledge about the anatomy and physiology with three sources of information from the image: texture, geometry and kinetics. Probabilistic atlases of contrast echocardiography are computed, while noting from histograms at selected anatomical locations that modelling texture with just mean intensity values may be too naive. Intensity analysis, together with the clinical results above, suggests that the lack of external boundary definition may preclude this imaging technique from appropriate measurement of myocardial thickening, while endocardial boundary definition is appropriate for evaluation of wall motion. Geometry is presented in a Principal Component Analysis (PCA) context, highlighting issues about Gaussianity and about the correlation and covariance matrices with respect to physiology, and analysing different measures of dimensionality. A popular extension of deformable models, Active Appearance Models (AAMs), is then studied in depth. Contrary to common wisdom, it is contended that using a PCA texture space instead of a fixed atlas is detrimental to segmentation, and that PCA models are not convenient for texture modelling. To integrate kinetics, a novel spatio-temporal model of cardiac contours is proposed. The new explicit model does not require frame interpolation, and it is compared to previous implicit models in terms of approximation error when the shape vector changes from frame to frame or remains constant throughout the cardiac cycle. Finally, the 2D+t atlas-based deformable model segmentation problem is formulated and solved with a gradient descent approach. Experiments using the similarity transformation suggest that segmentation of the whole cardiac volume outperforms segmentation of individual frames. A relatively new approach, the inverse compositional algorithm, is shown to decrease running times of the classic Lucas-Kanade algorithm by a factor of 20 to 25, to values within reach of real-time processing.
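The inverse compositional algorithm named in the final sentence is the standard Baker-Matthews reformulation of Lucas-Kanade; in outline, each iteration solves for an update warp on the template side and composes its inverse:

```latex
\Delta p \;=\; \arg\min_{\Delta p} \sum_{x}
  \bigl[ T\bigl(W(x;\Delta p)\bigr) - I\bigl(W(x;p)\bigr) \bigr]^2,
\qquad
W(x;p) \;\leftarrow\; W(x;p) \circ W(x;\Delta p)^{-1}
% The Jacobian and Hessian are computed once on the template T, which is
% what yields the reported 20-25x speed-up over the forward-additive form.
```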
