51 |
Experimental Validation of Mathematical Models to Include Biomechanics into Dose Accumulation Calculation in Radiotherapy / Niu, Jiafei, 15 February 2010 (has links)
Inaccurate dose calculation in radiotherapy can lead to errors in treatment delivery and in the evaluation of treatment efficacy. Respiration causes intra-fractional motion, leading to uncertainties in tumor targeting; this motion should therefore be included in dose calculation. The finite element method-based deformable registration platform MORFEUS is able to accurately quantify organ deformations, and its dose accumulation algorithm takes organ deformation and tumor movement into account. This study experimentally validated the dose accumulation algorithm by combining 3D gel dosimetry, a respiratory-motion-mimicking actuation mechanism, and finite element analysis. Results showed that, within the intrinsic measurement uncertainties of gel dosimetry and under normal conformal dose distribution conditions, more than 90% of the voxels in the MORFEUS-generated dose grids met a criterion analogous to the gamma test. The average (SD) distance between selected pairs of isodose surfaces on the gel and MORFEUS dose distributions was 0.12 (0.08) cm.
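The voxel-wise comparison described above can be sketched as a simplified pass-rate computation over two dose grids. This toy version uses only a dose-difference criterion evaluated within a small search neighbourhood; the actual gamma test (and the MORFEUS algorithm itself) is considerably more involved, so everything below is illustrative:

```python
import numpy as np

def pass_rate(measured, computed, dose_tol=0.03, search=1):
    """Fraction of voxels where the computed dose agrees with some
    measured voxel within `search` voxels, to within `dose_tol`
    (fraction of the maximum measured dose).  A crude stand-in for
    the gamma-style criterion used to compare dose grids."""
    max_dose = measured.max()
    passed = 0
    nx, ny = computed.shape
    for i in range(nx):
        for j in range(ny):
            lo_i, hi_i = max(0, i - search), min(nx, i + search + 1)
            lo_j, hi_j = max(0, j - search), min(ny, j + search + 1)
            window = measured[lo_i:hi_i, lo_j:hi_j]
            if np.min(np.abs(window - computed[i, j])) <= dose_tol * max_dose:
                passed += 1
    return passed / (nx * ny)

# Identical grids agree everywhere.
d = np.linspace(0, 2, 64).reshape(8, 8)
print(pass_rate(d, d))  # 1.0
```

A real implementation would also normalize spatial distance against a distance-to-agreement criterion, which is what makes the full gamma index a single dimensionless quantity.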
|
52 |
Nonrigid Registration of Dynamic Contrast-enhanced MRI Data using Motion Informed Intensity Corrections / Lausch, Anthony, 13 December 2011 (has links)
Effective early detection and monitoring of patient response to cancer therapy is important for improved patient outcomes, avoiding unnecessary procedures and their associated toxicities, as well as the development of new therapies. Dynamic contrast-enhanced magnetic resonance imaging shows promise as a way to evaluate tumour vasculature and assess the efficacy of new anti-angiogenic drugs. However, unavoidable patient motion can decrease the accuracy of subsequent analyses rendering the data unusable. Motion correction algorithms are challenging to develop for contrast-enhanced data since intensity changes due to contrast-enhancement and patient motion must somehow be differentiated from one another. A novel method is presented that employs a motion-informed intensity correction in order to facilitate the registration of contrast enhanced data. The intensity correction simulates the presence or absence of contrast agent in the image volumes to be registered in an attempt to emulate the level of contrast-enhancement present in a single reference image volume.
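The core idea, emulating the reference frame's enhancement level so that registration responds to motion rather than contrast uptake, can be illustrated with a deliberately crude sketch. The thesis estimates the correction from the data itself; here the enhancing region `mask` is assumed known and the correction is a simple linear rescale, so this is an illustration of the principle only:

```python
import numpy as np

def match_enhancement(moving, reference, mask):
    """Rescale the moving image inside an (assumed known) enhancing
    region so its mean intensity matches the reference frame.  After
    this correction, an intensity-based registration between the two
    volumes sees mainly motion rather than contrast-induced change.
    This linear rescale is illustrative, not the thesis's method."""
    corrected = np.asarray(moving, dtype=float).copy()
    ref = np.asarray(reference, dtype=float)
    m_mean = corrected[mask].mean()
    if m_mean != 0:
        corrected[mask] *= ref[mask].mean() / m_mean
    return corrected
```

In practice the enhancing region is exactly what is unknown, which is why the motion-informed estimation in the thesis is the hard part of the problem.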
|
53 |
Investigating tract-specific changes in white matter with diffusion tensor imaging / Arlinghaus, Lori Rose. January 2009 (has links)
Thesis (Ph. D. in Biomedical Engineering)--Vanderbilt University, May 2009. / Title from title screen. Includes bibliographical references.
|
54 |
Region based image matching for 3D object recognition / Liu, Tian. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2010. / Printout. Includes bibliographical references (leaves 43-46). Also available on the World Wide Web.
|
55 |
Multimodality image registration / Prasai, Persis. January 2006 (has links) (PDF)
Thesis (M.S.)--University of Alabama at Birmingham, 2006. / Description based on contents viewed June 26, 2007; title from title screen. Includes bibliographical references.
|
56 |
Panoramic e-learning videos for non-linear navigation / Schneider, Rosália Galiazzi, January 2013 (has links)
This thesis introduces a new interface for augmenting existing e-learning videos with panoramic frames and content-based non-linear navigation. In conventional e-learning videos, each frame is constrained to the subset of the lecture content captured by the camera or frame grabber at that moment. This makes it harder for users to quickly revisit and check previously shown subjects, which might be crucial for understanding subsequent concepts. Locating previously seen materials in pre-recorded videos requires one to perform visual inspection by sequentially navigating through time, which can be distracting and time-consuming. We augment e-learning videos to provide users direct access to all previously shown content through a simple pointing interface. This is achieved by automatically detecting relevant features in the videos as they play, and assigning them hyperlinks to a buffered version in a completely transparent way. The interface gradually builds panoramic video frames displaying all previously shown content. The user can then navigate through the video in a non-linear way by directly clicking over the content, as opposed to using a conventional time slider. As an additional feature, the final panorama can be exported as a set of annotated lecture notes. We demonstrate the effectiveness of our approach by successfully applying it to three representative styles of e-learning videos: Khan Academy, Coursera, and conventional lectures recorded with a camera. We show that we can achieve real-time performance for low-resolution videos (e.g., 320x240) on a single desktop PC. For higher-resolution videos, some pre-processing is required for feature detection (using SIFT). However, since the most expensive parts of our processing pipeline are highly parallel, we believe that real-time performance might soon be achievable even for full-HD resolution. The techniques described in this thesis provide more efficient ways of exploring the benefits of e-learning videos. As such, they have the potential to impact education by providing more customizable learning experiences for millions of e-learners around the world.
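The hyperlinked panorama rests on aligning each incoming frame with the content accumulated so far. The thesis uses SIFT feature matching for this; as an illustrative stand-in for the mostly translational motion of a scrolling lecture canvas, the shift between two frames can be estimated with phase correlation (an FFT-based technique, not the thesis's pipeline):

```python
import numpy as np

def phase_correlate(prev, curr):
    """Estimate the integer (dy, dx) translation mapping `prev` onto
    `curr` from the peak of the inverse FFT of the normalized
    cross-power spectrum.  Adequate for pure translation; real
    lecture footage needs the feature-based (SIFT) matching that the
    thesis describes, which also handles scale and perspective."""
    cross = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Given the per-frame shift, each frame can be pasted into a growing canvas, and a click on the canvas can be mapped back to the timestamp at which that region was first drawn, which is the essence of the non-linear navigation.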
|
59 |
Thermal Imaging As A Biometrics Approach To Facial Signature Authentication / Guzman Tamayo, Ana M, 07 November 2011 (has links)
This dissertation develops an image processing framework with unique feature extraction and similarity measurements for human face recognition in the mid-wave infrared portion of the electromagnetic spectrum. The goal is to design specialized algorithms that extract vasculature information, create a thermal facial signature, and identify the individual, in support of a biometrics system for human identification with a high degree of accuracy and reliability. The reliability stems from the minimal risk of alteration of the intrinsic physiological characteristics seen through thermal imaging. Thermal facial signature authentication is fully integrated and consolidates the main steps of feature extraction, registration, matching through similarity measures, and validation through principal component analysis.
Feature extraction was accomplished by first registering the images to a reference image using the functional MRI of the Brain's (FMRIB's) Linear Image Registration Tool (FLIRT), modified to suit thermal images. This was followed by segmentation of the facial region using an advanced localized contouring algorithm applied to anisotropically diffused thermal images. Thermal features were then extracted from the facial images by morphological operations such as opening and top-hat segmentation, yielding a thermal signature for each subject. Four thermal images taken over a period of six months were used to generate a thermal signature template for each subject, containing only the most prevalent and consistent features. Finally, a similarity measure technique was used to match images to the signature templates, and principal component analysis (PCA) was used to validate the results of the matching process.
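The top-hat step can be illustrated on a synthetic image: a white top-hat (the image minus its morphological opening) keeps only bright structures thinner than the structuring element, which is what suits it to pulling vessel-like patterns out of a smooth thermal background. The structuring-element size and threshold below are illustrative choices, not the dissertation's values:

```python
import numpy as np
from scipy import ndimage

def thermal_signature(img, size=5, thresh=0.5):
    """Binary 'signature' of thin bright structures: the white
    top-hat suppresses everything wider than `size` pixels, and a
    relative threshold keeps the strongest residual ridges."""
    tophat = ndimage.white_tophat(np.asarray(img, dtype=float), size=size)
    return tophat > thresh * tophat.max()

# A one-pixel-wide warm "vessel" on a smooth warm background
# survives the top-hat; the broad background does not.
img = np.full((32, 32), 30.0)     # background "temperature"
img += np.linspace(0, 2, 32)      # smooth thermal gradient
img[16, :] += 3.0                 # thin bright line (the "vessel")
sig = thermal_signature(img)
```

The dissertation's pipeline additionally applies anisotropic diffusion beforehand, which smooths the background while preserving the vessel edges that the top-hat then isolates.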
Thirteen subjects were used to test the developed technique on an in-house thermal imaging system. Matching using the similarity measures showed 88% accuracy for skeletonized feature signatures and 90% accuracy for anisotropically diffused feature signatures.
The highly accurate matching results, together with the generalized design process, demonstrate that the developed thermal infrared system can be applied to other thermal-imaging-based systems and related databases.
|
60 |
Předzpracování oftalmologických obrazů pro registraci / Preprocessing of ophthalmology images for image registration / Orešanská, Hana, January 2009 (has links)
The aim of this thesis was to investigate methods for processing ophthalmological images, covering image filtration, detection of important points in the image, and registration. Image adjustment and the subsequent registration are important for detecting certain diseases (e.g., glaucoma, which alters the nerve fibres leading from the retina). A computer program (Matlab, Graphical User Interface) was created for the image adjustments and registration, in which the user can try out the different methods described in the theoretical part of the thesis.
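As a small illustration of the filtration step such a preprocessing tool might offer (this sketch is in Python rather than the thesis's Matlab, and the parameters are arbitrary), a median filter removes the impulse-like noise that would otherwise mislead point detection and registration:

```python
import numpy as np
from scipy import ndimage

def despeckle(img, size=3):
    """Median filtering: each pixel is replaced by the median of its
    size x size neighbourhood, removing isolated outliers while
    preserving edges better than linear smoothing does."""
    return ndimage.median_filter(np.asarray(img, dtype=float), size=size)

# A single hot pixel on a flat retina-like background disappears.
img = np.ones((16, 16))
img[8, 8] = 100.0
clean = despeckle(img)
print(clean.max())  # 1.0
```

Edge preservation matters here: landmark detection on retinal images relies on vessel bifurcations, which a heavy linear blur would smear away.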
|