191 |
Statistical shape analysis of the proximal femur: development of a fully automatic segmentation system and its applications. Lindner, Claudia. January 2014.
Osteoarthritis (OA) is the most common form of human joint disease, causing significant pain and disability. Current treatment for hip OA is limited to pain management and joint replacement for end-stage disease. The development of methods for early diagnosis and new treatment options is urgently needed to minimise the impact of the disease. Studies of hip OA have shown that hip joint morphology correlates with susceptibility to hip OA and disease progression. Bone shape analyses play an important role in disease diagnosis, pre-operative planning, and treatment analysis as well as in epidemiological studies aimed at identifying risk factors for hip OA. Statistical Shape Models (SSMs) are being increasingly applied to imaging-based bone shape analyses as they provide a means of quantitatively describing the global shape of the bone. This is in contrast to conventional clinical and research practice, where the analysis of bone shape is reduced to a series of measurements of lengths and angles. This thesis describes the development of a novel fully automatic software system that segments the proximal femur from anteroposterior (AP) pelvic radiographs by densely placing 65 points along its contour. These annotations can then be used for the detailed morphometric analysis of proximal femur shape. The performance of the system was evaluated on a large dataset of 839 radiographs of mixed quality. Achieving a mean point-to-curve error of less than 0.9 mm for 99% of all 839 AP pelvic radiographs, this is the most accurate and robust automatic method for segmenting the proximal femur in two-dimensional radiographs yet published. The system was also applied to a number of morphometric analyses of the proximal femur, showing that SSM-based radiographic proximal femur shape differs significantly between males and females, and is highly symmetric between the left and right hip joints of an individual. In addition, the research described in this thesis demonstrates how the point annotations resulting from the system can be used for univariate and multivariate genetic association analyses, identifying three novel genetic variants that contribute to radiographic proximal femur shape while also showing an association with hip OA. The developed system will facilitate complex morphometric and genetic analyses of shape variation of the proximal femur across large datasets, paving the way for the development of new options to diagnose, treat and prevent hip OA.
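To make the SSM idea concrete, the following is a minimal sketch of how a point-based shape model can be built from landmark annotations such as the 65-point femur contours, using standard Procrustes alignment followed by PCA. It is a generic illustration under those assumptions, not the thesis' actual pipeline.

```python
import numpy as np

def align_to(reference, shape):
    """Procrustes-align one shape (N x 2 array of landmarks) to a reference:
    remove translation and scale, then find the best rotation."""
    ref = reference - reference.mean(axis=0)
    shp = shape - shape.mean(axis=0)
    ref = ref / np.linalg.norm(ref)
    shp = shp / np.linalg.norm(shp)
    u, _, vt = np.linalg.svd(shp.T @ ref)
    return shp @ (u @ vt)                    # optimal rotation (orthogonal Procrustes)

def build_ssm(shapes, n_modes=5):
    """Build a point-based statistical shape model from a list of (65, 2) landmark arrays."""
    aligned = np.array([align_to(shapes[0], s).ravel() for s in shapes])
    mean_shape = aligned.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(aligned, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_modes]
    # Any shape is then approximated as mean_shape + modes @ b; the coefficients b
    # quantify global shape and can feed morphometric or genetic association analyses.
    return mean_shape, eigvecs[:, order], eigvals[order]
```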
192 |
Marketingový význam body image pro generaci Y / Marketing importance of body image for generation Y. Barvínek, Jakub. January 2014.
This master's thesis deals with determining the meaning of body image for generation Y. Its goal is to identify how the young generation perceives body image and to compare this perception with the body image communicated via commercial messages targeting the Czech generation Y. The theoretical part of the thesis describes the evolution of the beauty ideal, the impact of media on the perception of the elements of body image, and the theoretical conception of the role of body image in market segmentation. The analytical part evaluates the communication using a content analysis of commercial messages and describes the attitude of generation Y towards the elements related to body image and the lifestyle of the target group. The attitude of the target group is described using an analysis of MML-TGI data. Based on the results of this thesis it is possible to assess the importance of the individual components of body image for the target group of young people and to interpret the described data for the purposes of market segmentation and marketing communication.
193 |
Automatic Tissue Segmentation of Volumetric CT Data of the Pelvic Region. Jeuthe, Julius. January 2017.
Automatic segmentation of human organs allows more accurate calculation of organ doses in radiation treatment planning, as it adds prior information about the material composition of imaged tissues. For instance, the separation of tissues into bone, adipose tissue and remaining soft tissues allows the use of tabulated material compositions of those tissues. This approximation is not perfect because of the variability of tissue composition among patients, but it is still better than no approximation at all. Another use for automated tissue segmentation is in model-based iterative reconstruction algorithms. An example of such an algorithm is DIRA, which is developed at Medical Radiation Physics and the Center for Medical Image Science and Visualization (CMIV) at Linköping University. DIRA uses dual-energy computed tomography (DECT) data to decompose patient tissues into two or three base components. So far DIRA has used the MK2014 algorithm, which segments the human pelvis into bones, adipose tissue, gluteus maximus muscles and the prostate. One problem was that MK2014 was limited to 2D and was not very robust. Aim: The aim of this thesis work was to extend MK2014 to 3D as well as to improve it. The task was structured into the following activities: selection of suitable segmentation algorithms, evaluation of their results, and combination of those into an automated segmentation algorithm. Of special interest was image registration using the Morphon. Methods: Several different algorithms were tested, for instance: Otsu's method followed by threshold segmentation; histogram matching followed by threshold segmentation, region growing and hole-filling; and affine phase-based registration and the Morphon. The best-performing algorithms were combined into the newly developed JJ2016. Results: For the segmentation of adipose tissue and the bones in the eight investigated data sets, the JJ2016 algorithm gave better results than MK2014. The better results of JJ2016 were achieved by (i) a new segmentation algorithm for adipose tissue, which was not affected by the amount of air surrounding the patient and segmented smaller regions of adipose tissue, and (ii) a new filling algorithm for connecting segments of compact bone. The JJ2016 algorithm also estimates a likely position for the prostate and the rectum by combining linear and non-linear phase-based registration for atlas-based segmentation. The estimated position (centre point) was in most cases close to the true position of the organs. Several deficiencies of the MK2014 algorithm were removed, but the improved version (MK2014v2) did not perform as well as JJ2016. Conclusions: JJ2016 performed well for all data sets. The JJ2016 algorithm is usable for the intended application, but is (without further improvements) too slow for interactive usage. Additionally, a validation of the algorithm for clinical use should be performed on a larger number of data sets, covering the variability of patients in shape and size.
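As a rough illustration of the threshold-based building blocks mentioned above (Otsu's method, threshold segmentation and hole filling), the following sketch separates bone and adipose tissue from a CT volume. The HU thresholds and structuring element are illustrative assumptions and do not reproduce MK2014 or JJ2016.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_bone_and_adipose(ct_volume):
    """Very simplified tissue split of a CT volume given in Hounsfield units (HU).
    All thresholds are illustrative assumptions, not values from MK2014/JJ2016."""
    body = ct_volume > -500                      # crude body mask: exclude surrounding air
    body = ndimage.binary_fill_holes(body)

    # Otsu's threshold on voxels inside the body separates bright bone from soft tissue.
    bone_thr = threshold_otsu(ct_volume[body])
    bone = (ct_volume > bone_thr) & body
    bone = ndimage.binary_closing(bone, structure=np.ones((3, 3, 3)))
    bone = ndimage.binary_fill_holes(bone)       # fill/connect compact bone, cf. the filling step

    # Adipose tissue falls in a characteristic negative HU range (roughly -150 to -30).
    adipose = (ct_volume > -150) & (ct_volume < -30) & body & ~bone
    return bone, adipose
```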
194 |
Human pose augmentation for facilitating Violence Detection in videos: a combination of the deep learning methods DensePose and VioNet. Calzavara, Ivan. January 2020.
In recent years deep learning, a critical technology in computer vision, has achieved remarkable milestones in many fields, such as image classification and object detection. In particular, it has also been introduced to address the problem of violence detection, which is a big challenge considering the complexity of establishing an exact definition of the phenomenon of violence. Thanks to the ever increasing development of new surveillance technologies, we nowadays have access to an enormous database of videos that can be analyzed to find abnormal behavior. However, when dealing with such a huge amount of data it is unrealistic to examine all of it manually. Deep learning techniques, instead, can automatically study, learn and perform classification operations. In the context of violence detection, with the extraction of visually harmful patterns, it is possible to design various descriptors to represent features that can identify them. This research tackles the task of generating new augmented datasets in order to simplify the identification step performed by a deep learning violence detection technique. The novelty of this work is the use of the DensePose model to enrich the images in a dataset by highlighting (i.e. identifying and segmenting) all the human beings present in them. With this approach we gained knowledge of how this algorithm performs on videos with a violent context and how the violence detection network benefits from this procedure. Performance has been evaluated in terms of segmentation accuracy and the efficiency of the violence detection network, as well as from the computational point of view. The results show that the context of the scene is the main factor determining whether DensePose segments human beings correctly, and that violent scenes do not seem to be the most suitable field for the application of this model, since the frequent overlap of bodies (a distinctive aspect of violence) acts as a disadvantage for the segmentation. For this reason, the violence detection network does not exploit its full potential. Finally, we found that such augmented datasets can speed up training by reducing the time needed for the weight-update phase, making this procedure a helpful add-on for implementations in different contexts where the identification of human beings still plays a major role.
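A hedged sketch of the augmentation idea follows: each frame is passed through a person-segmentation step and the resulting body mask is blended back onto the frame before it reaches the violence detection network. The function segment_people is a hypothetical placeholder for DensePose inference, not an actual API call; the video I/O uses standard OpenCV calls.

```python
import cv2

def segment_people(frame):
    """Hypothetical placeholder for the DensePose inference step: it should return a
    binary mask (H x W) marking pixels that belong to human bodies. Not a real API call."""
    raise NotImplementedError

def augment_video(path_in, path_out, alpha=0.5):
    """Overlay person masks on every frame so that human bodies are visually highlighted
    before the video is passed to the violence detection network."""
    cap = cv2.VideoCapture(path_in)
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = segment_people(frame)
        overlay = frame.copy()
        overlay[mask > 0] = (0, 255, 0)          # paint detected body pixels green
        blended = cv2.addWeighted(frame, 1 - alpha, overlay, alpha, 0)
        if writer is None:
            h, w = frame.shape[:2]
            writer = cv2.VideoWriter(path_out, cv2.VideoWriter_fourcc(*"mp4v"), 25, (w, h))
        writer.write(blended)
    cap.release()
    if writer is not None:
        writer.release()
```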
195 |
An analytic approach to tensor scale with efficient computational solution and applications to medical imaging. Xu, Ziyue. 01 May 2012.
Scale is a widely used notion in medical image analysis that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, a notion of local morphometric scale referred to as "tensor scale" was introduced using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In previous work, tensor scale was described using a 2-D algorithmic approach and a precise analytic definition was missing. Also, with the previous framework, 3-D application is not practical due to computational complexity. The overall aim of the Ph.D. research is to establish an analytic definition of tensor scale in n-dimensional (n-D) images, to develop an efficient computational solution for 2- and 3-D images, and to investigate its role in various medical imaging applications including image interpolation, filtering, and segmentation. Firstly, an analytic definition of tensor scale for n-D images consisting of objects formed by pseudo-Riemannian partitioning manifolds has been formulated. Tensor scale captures contextual structural information which is useful in local structure-adaptive anisotropic parameter control and in local structure description for object/image matching. Therefore, it is helpful in a wide range of medical imaging algorithms and applications. Secondly, an efficient computational solution of tensor scale for 2- and 3-D images has been developed. The algorithm combines the Euclidean distance transform and several novel differential geometric approaches. The accuracy of the algorithm has been verified on both geometric phantoms and real images by comparison with theoretical results generated using a brute-force method. Also, a matrix representation has been derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Thirdly, an inter-slice interpolation algorithm using the 2-D tensor scale information of adjacent slices has been developed to determine the interpolation line at each image location in a gray-level image. Experimental results have established the superiority of the tensor scale based interpolation method over existing interpolation algorithms. Fourthly, an anisotropic diffusion filtering algorithm based on tensor scale has been developed. The method uses tensor scale to design the conductance function for the diffusion process, so that diffusion along structures is encouraged and boundary sharpness is preserved. The performance has been tested on phantoms and medical images at various noise levels and the results were quantitatively compared with conventional gradient- and structure-tensor-based algorithms. The experimental results obtained are quite encouraging. Also, a tensor scale based n-linear interpolation method has been developed, where the weights of neighbors are locally tuned based on local structure size and orientation. The method has been applied to several phantom and real images and the performance has been evaluated in comparison with standard linear interpolation and windowed sinc interpolation methods. Experimental results have shown that the method helps to generate more precise structure boundaries without causing ringing artifacts. Finally, a new anisotropic constrained region growing method locally controlled by tensor scale has been developed for vessel segmentation; it encourages axial region growing while arresting cross-structure leaking.
The method has been successfully applied to several non-contrast pulmonary CT images. The accuracy of the new method has been evaluated using manual selection, and the results found are very promising.
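The diffusion filtering idea can be illustrated with a minimal Perona-Malik-style sketch in 2-D, where a conductance function suppresses diffusion across strong gradients. The tensor-scale-driven conductance of the thesis is not reproduced here; the gradient-based conductance below is only a stand-in.

```python
import numpy as np

def anisotropic_diffusion(image, n_iter=20, kappa=30.0, step=0.2):
    """Perona-Malik diffusion: smooth within structures while preserving boundaries.
    The conductance g = exp(-(|grad|/kappa)^2) is small across strong edges; in the
    thesis this role is played by a tensor-scale-based conductance instead."""
    def g(d):
        return np.exp(-(d / kappa) ** 2)

    img = image.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four nearest neighbours
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        img += step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return img
```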
196 |
Retinal Vessel Segmentation on Ultra Wide-field Fluorescein Angiography Images. Bondada, Harshith. January 2019.
No description available.
197 |
Face recognition with partial occlusions using weighing and image segmentation. Chanaiwa, Tapfuma. January 2020.
This dissertation studied the problem of face recognition when facial images have partial occlusions such as sunglasses and scarves. These partial occlusions lead to the loss of discriminatory information when trying to recognise a person's face using traditional face recognition techniques that do not take these shortcomings into account. This dissertation aimed to fill this gap in knowledge. Several papers in the literature put forward the theory that not all regions of the face contribute equally when discriminating between different subjects; some regions of the face, such as the eyes and nose, contribute more than others. While this may be true in theory, there was a need to study the problem comprehensively.
A weighting technique was introduced that took the different features of the face into account and assigned weights to them based on their distance from five points identified as the centres of the weighting technique. The five centres chosen were the left eye, the right eye, the centre of the brows, the nose and the mouth. These centres captured where the five dominant regions of the face are roughly located. The weighting technique was fused with an image segmentation process, which ultimately led to a hybrid approach to face recognition.
Five features of the face were identified and studied quantitatively with respect to how much they influence face recognition. These five features were the chin (C), eyes (E), forehead (F), mouth (M) and finally the nose (N). For the system to be robust and thorough, combinations of these five features were constructed to make 31 models that were used for both training and testing purposes. This meant that each of the five features had 16 models associated with it. For example, the chin (C) had the following models associated with it: C, CE, CF, CM, CN, CEF, CEM, CEN, CFM, CFN, CMN, CEFM, CEFN, CEMN, CFMN and CEFMN. These models were put into five groupings called Category 1 up to Category 5. A Category 3 model implied that only three of the five features were utilised for training the algorithm and testing. An example of a Category 3 model was the CFN model, which simulated partial occlusion of the mouth and the chin region. The face recognition algorithm was trained on all these different models in order to ascertain the efficiency and effectiveness of the proposed technique. The results were then compared with various methods from the literature. / Dissertation (MEng (Computer Engineering))--University of Pretoria, 2020. / Electrical, Electronic and Computer Engineering / MEng (Computer Engineering) / Unrestricted
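As a small illustration, the 31 models are simply all non-empty subsets of the five features, with Categories 1 to 5 corresponding to subset size; a sketch:

```python
from itertools import combinations

features = ["C", "E", "F", "M", "N"]              # chin, eyes, forehead, mouth, nose

models = {}
for size in range(1, len(features) + 1):          # Category 1 .. Category 5
    models[size] = ["".join(c) for c in combinations(features, size)]

total = sum(len(v) for v in models.values())      # 5 + 10 + 10 + 5 + 1 = 31 models
print(total, models[3])                           # e.g. 'CFN' simulates occlusion of mouth and chin
```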
198 |
Cliffordovy algebry v kolorimetrii a analýze obrazu / Clifford algebras in colour theory and image analysis. Tichý, Radek. January 2018.
This thesis deals with conformal geometric algebra (CGA) for colour image processing, particularly with colour segmentation. For this purpose it is not sufficient to work in the RGB colour space; it is more convenient to use the colour space called CIELAB. CIELAB is endowed with a Euclidean metric corresponding to human perception of colour differences. An algorithm for object detection via CGA based on colour differences is then presented. The final part of the thesis deals with least-squares fitting of a sphere to points using CGA. The sphere fit is then used to adjust colour differences in an image in order to improve the object detection algorithm.
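The least-squares sphere fit mentioned at the end can be illustrated without CGA machinery: a sphere x² + y² + z² + ax + by + cz + d = 0 is linear in (a, b, c, d), so it can be fitted directly. The sketch below is a generic linear-algebra formulation, not the CGA formulation used in the thesis.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit to an (N, 3) array of points (e.g. CIELAB coordinates).
    Solves x^2 + y^2 + z^2 + a*x + b*y + c*z + d = 0 for (a, b, c, d)."""
    A = np.hstack([points, np.ones((len(points), 1))])
    b = -(points ** 2).sum(axis=1)
    (a1, a2, a3, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = -0.5 * np.array([a1, a2, a3])       # a = -2*cx, b = -2*cy, c = -2*cz
    radius = np.sqrt((centre ** 2).sum() - d)    # d = |centre|^2 - r^2
    return centre, radius
```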
199 |
Segmentace obrazových dat pomocí grafových neuronových sítí / Image segmentation using graph neural networks. Boszorád, Matej. January 2020.
This diploma thesis describes and implements the design of a graph neural network used for 2D segmentation of neural structures. The first chapter briefly introduces the problem of segmentation. Segmentation techniques are divided according to the principles of the methods they use; each category is characterised and one representative method is described. The second chapter explains graph neural networks (GNNs). Here the thesis classifies graph neural networks in general and describes in more detail recurrent graph neural networks (RGNNs) and graph autoencoders, both of which can be used for image segmentation. The specific image segmentation solution is based on the message passing method in RGNNs, which can replace convolution masks in convolutional neural networks. RGNNs also allow a simpler multilayer perceptron topology. The second type of graph neural network characterised in the thesis is the graph autoencoder, which uses various methods for better encoding of graph vertices into Euclidean space. The last part of the thesis deals with the analysis of the problem, the proposal of a specific solution and the evaluation of results. The purpose of the practical part of the work was the implementation of a GNN for image data segmentation. An advantage of using neural networks is the ability to solve different types of segmentation simply by changing the training data. An RGNN with message passing and node2vec were used to implement the GNN for the segmentation problem. RGNN training was performed on graphics cards provided by the school and on Google Colaboratory. Learning the RGNN using node2vec was very memory intensive and therefore it was necessary to train on a processor with more than 12 GB of operating memory. As part of the RGNN optimization, learning was tested using various loss functions and by changing the topology and learning parameters. A tree structure method was developed to use node2vec to improve segmentation, but the results did not confirm an improvement for a small number of iterations. The best outcomes of the practical implementation were evaluated by comparing the tested data with the convolutional neural network U-Net. The results are comparable to the U-Net network, but further testing is needed to compare these neural networks. The result of the thesis is the use of RGNNs as a modern solution to the problem of image segmentation, providing a foundation for further research.
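As a bare-bones illustration of the message passing idea referred to above, the following sketch performs one update in which each node averages its neighbours' features and passes the result through a small linear layer; it is a generic example, not the RGNN architecture of the thesis.

```python
import numpy as np

def message_passing_step(node_features, adjacency, weight, bias):
    """One message-passing update: each node averages its neighbours' features,
    concatenates them with its own, and applies a linear layer followed by ReLU.
    node_features: (N, F), adjacency: (N, N) binary, weight: (2F, F_out), bias: (F_out,)."""
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = adjacency @ node_features / degree          # mean over neighbours
    combined = np.concatenate([node_features, messages], axis=1)
    return np.maximum(combined @ weight + bias, 0.0)       # ReLU
```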
200 |
Segmentace obrazových dat / Image data segmentation. Stodůlka, Stanislav. January 2012.
At the beginning of this master's thesis the reader is introduced to the image processing pipeline, and the following part describes and explains the most widely used image segmentation algorithms of today. Based on the watershed transform, a segmentation operator is created for the freely available program Rapid Miner, and the document describes how the development process proceeded. The last part of the thesis presents segmented images and discusses the pitfalls of the watershed transform method as implemented.
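For illustration, a minimal marker-based watershed segmentation using scikit-image is sketched below; it only demonstrates the transform the operator is built around, the actual Rapid Miner operator is not reproduced here, and the marker-extraction parameters are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import sobel
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_segmentation(image):
    """Marker-based watershed: flood an edge-strength 'landscape' from marker seeds."""
    elevation = sobel(image)                           # gradient magnitude as the landscape
    foreground = image > image.mean()                  # crude foreground mask (assumption)
    distance = ndimage.distance_transform_edt(foreground)
    peaks = peak_local_max(distance, min_distance=10)  # seeds inside foreground blobs
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(elevation, markers, mask=foreground)
```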