161. Deep Learning Based Deformable Image Registration of Pelvic Images / Bildregistrering av bäckenbilder baserade på djupinlärning

Cabrera Gil, Blanca January 2020
Deformable image registration is usually performed manually by clinicians, which is time-consuming and costly, or with optimization-based algorithms, which are not always well suited to registering images of different modalities. In this work, a deep learning-based method for MR-CT deformable image registration is presented. First, a neural network is optimized to register CT pelvic image pairs. The model is then trained on MR-CT image pairs to register CT images to their MR counterparts. To address the lack of ground-truth data, two approaches were used. For the CT-CT case, perfectly aligned image pairs were the starting point, and random deformations were generated to create ground-truth deformation fields. For the multi-modal case, synthetic CT images were generated from T2-weighted MR using a CycleGAN model, and synthetic deformations were applied to the MR images to generate ground-truth deformation fields. The synthetic deformations were created by combining a coarse and a fine deformation grid, yielding a field with deformations at different scales. Several models were trained on images of different resolutions, and their performance was benchmarked against an analytic algorithm used in an actual registration workflow. The CT-CT models were tested on image pairs created by applying synthetic deformation fields. The MR-CT models were tested on two sets of images: the first contained synthetic CT images and MR images deformed by synthetically generated deformation fields, and the second contained real MR-CT image pairs. Test performance was measured with the Dice coefficient. The CT-CT models obtained Dice scores above 0.82, even for models trained on lower-resolution images. Although all MR-CT models experienced a drop in performance, the largest decrease came from the analytic method used as the reference, on both synthetic and real test data, meaning that the deep learning models outperformed the state-of-the-art analytic benchmark. Even though the obtained Dice scores would need further improvement for clinical use, the results show great potential for deep learning-based multi- and mono-modal deformable image registration.
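
The Dice coefficient used as the evaluation metric above is a standard overlap score between two segmentations. A minimal NumPy sketch (illustrative only, not the thesis code; the toy masks are invented for the example):

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2|A and B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example with two overlapping rectangular "organ" masks (invented data)
fixed = np.zeros((64, 64), dtype=bool)
warped = np.zeros((64, 64), dtype=bool)
fixed[20:44, 20:44] = True
warped[24:48, 22:46] = True
print(f"Dice = {dice_coefficient(fixed, warped):.3f}")
```

A Dice score of 1.0 means perfect overlap of the propagated and reference structures, while 0.0 means no overlap at all.
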
162. Using FPGAs to perform embedded image registration

White, Brandyn A. 01 January 2009
Image registration is the process of relating the intensity values of one image to another image using their pixel content alone. An example use of this technique is to create panoramas from individual images taken from a rotating camera. A class of image registration algorithms, known as direct registration methods, uses intensity derivatives to iteratively estimate the parameters modeling the transformation between the images. Direct methods are known for their sub-pixel accurate results; however, their execution is computationally expensive, often preventing use in an embedded capacity such as small unmanned aerial vehicle or mobile phone applications. In this work, a high performance FPGA-based direct affine image registration core is presented. The proposed method combines two features: a fully pipelined architecture to compute the linear system of equations, and a Gaussian elimination module, implemented as a finite state machine, to solve the resulting linear system. The design is implemented on a Xilinx ML506 development board featuring a Virtex-5 SX50 FPGA, zero bus turn-around (ZBT) RAM, and VGA input. Experimentation is performed on both real and synthetic data. The registration core performs in excess of 80 frames per second on 640x480 images using one registration iteration.
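
For orientation, the "linear system plus Gaussian elimination" structure of a direct method can be sketched in a few lines of NumPy. This is a generic single-iteration affine estimator on the CPU, assuming small residual motion; it is not the thesis's pipelined FPGA design:

```python
import numpy as np

def affine_update(template: np.ndarray, image: np.ndarray) -> np.ndarray:
    """One direct-registration update of 6 affine parameters, assuming the
    images are roughly aligned and the remaining motion is small."""
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(image)                 # intensity derivatives
    error = (template - image).ravel()
    # Steepest-descent images for an affine warp, evaluated at the identity
    sd = np.stack([gx * xs, gy * xs, gx * ys, gy * ys, gx, gy],
                  axis=-1).reshape(-1, 6)
    A = sd.T @ sd                               # 6x6 linear system (normal equations)
    b = sd.T @ error
    return np.linalg.solve(A, b)                # solved here by LAPACK; by Gaussian elimination on the FPGA
```

The FPGA core described in the abstract computes the equivalent of A and b in a fully pipelined fashion and solves the 6x6 system with a finite-state-machine Gaussian elimination module.
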
163. A Unified Approach to GPU-Accelerated Aerial Video Enhancement Techniques

Cluff, Stephen Thayn 12 February 2009
Video from aerial surveillance can provide a rich source of data for analysts. From the time-critical perspective of wilderness search and rescue operations, information extracted from aerial videos can mean the difference between a successful and an unsuccessful search. When using low-cost, payload-limited mini-UAVs, as opposed to more expensive platforms, several challenges arise, including jittery video, narrow fields of view, low resolution, and limited time on screen for key features. These challenges make it difficult for analysts to extract key information in a timely manner. Traditional approaches may address some of these issues, but no existing system effectively addresses all of them in a unified and efficient manner. Building upon a hierarchical dense image correspondence technique, we create a unifying framework for reducing jitter, enhancing resolution, and expanding the field of view while lengthening the time that features remain on screen. It also provides for easy extraction of moving objects in the scene. Our method incorporates locally adaptive warps, which allow for robust image alignment even in the presence of parallax and without the aid of internal or external camera parameters. We accelerate the image registration process using commodity Graphics Processing Units (GPUs) to accomplish all of these tasks in near real time with no external telemetry data.
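
As a small illustration of why registration makes moving-object extraction easy, a hedged sketch assuming the two frames have already been aligned by some registration step (the threshold is an arbitrary placeholder, not a value from the thesis):

```python
import numpy as np
from scipy.ndimage import binary_opening

def moving_object_mask(frame_ref: np.ndarray, frame_aligned: np.ndarray,
                       threshold: float = 25.0) -> np.ndarray:
    """After registration, static background largely cancels in the frame
    difference; thresholding plus a small morphological opening leaves the
    independently moving objects."""
    diff = np.abs(frame_ref.astype(float) - frame_aligned.astype(float))
    return binary_opening(diff > threshold)
```
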
164. CUDA Accelerated 3D Non-rigid Diffeomorphic Registration / CUDA-accelererad icke-rigid diffeomorf registrering i 3D

Qu, An January 2017
Advances in magnetic resonance imaging (MRI) techniques enable visual guidance to identify the anatomical target of interest during image-guided intervention (IGI). Non-rigid image registration is one of the crucial techniques, aligning the target tissue with the preoperative MRI image volumes. As the demand for real-time interaction in IGI grows, the time spent on intraoperative registration becomes increasingly important. This work implements the 3D diffeomorphic demons algorithm on an Nvidia GeForce GTX 1070 GPU in C++, using the CUDA 8.0.61 programming environment, which reduces the average registration time to 5 s. We have also extensively evaluated the GPU-accelerated 3D diffeomorphic registration against both a CPU implementation and Matlab code, and the results show that the GPU implementation is considerably more efficient.
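
For orientation only, a rough NumPy/SciPy sketch of a single 2D demons update. The thesis works in 3D on the GPU and uses the diffeomorphic variant, which exponentiates the velocity update (scaling and squaring) rather than adding it directly; the smoothing parameters below are placeholders:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_step(fixed, moving, disp, sigma_fluid=1.0, sigma_diff=1.0):
    """One additive demons update in 2D: demons forces from the current
    residual, then Gaussian regularization of the displacement field.
    disp has shape (2, H, W): row and column displacements."""
    h, w = fixed.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    warped = map_coordinates(moving, [ys + disp[0], xs + disp[1]],
                             order=1, mode='nearest')
    gy, gx = np.gradient(fixed)
    diff = warped - fixed
    denom = gx**2 + gy**2 + diff**2
    denom[denom == 0] = 1.0
    update = np.stack([-diff * gy / denom, -diff * gx / denom])
    update = gaussian_filter(update, (0, sigma_fluid, sigma_fluid))     # fluid-like smoothing
    disp = gaussian_filter(disp + update, (0, sigma_diff, sigma_diff))  # diffusion-like smoothing
    return disp
```

Each iteration warps the moving image, computes the demons force from the intensity residual and the fixed-image gradient, and smooths the field; the per-voxel independence of these steps is what makes the algorithm well suited to CUDA acceleration.
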
165. Hybrid and Hierarchical Image Registration Techniques

Xu, Dongjiang 01 January 2004
A large number of image registration techniques have been developed for various types of sensors and applications, with the aim of improving accuracy, computational complexity, generality, and robustness. They can be broadly classified into two categories: intensity-based and feature-based methods. The primary drawback of intensity-based approaches is that they may fail unless the two images are misaligned by no more than a moderate difference in scale, rotation, and translation. In addition, intensity-based methods lack robustness in the presence of non-spatial distortions due to different imaging conditions between images. In this dissertation, image registration is formulated as a two-stage hybrid approach combining an initial matching and a final matching in a coarse-to-fine manner. In the proposed hybrid framework, the initial matching algorithm is applied at the coarsest scale of the images, where approximate transformation parameters can first be estimated. Subsequently, a robust gradient-based estimation algorithm is incorporated into the proposed hybrid approach using a multi-resolution scheme. Several novel and effective initial matching algorithms have been proposed for the first stage. The variations of the intensity characteristics between images may be large and non-uniform because of non-spatial distortions. Therefore, in order to effectively incorporate gradient-based robust estimation into the proposed framework, one fundamental question must be addressed: what is a good image representation to work with for gradient-based robust estimation under non-spatial distortions? With the initial matching algorithms applied at the highest level of decomposition, the proposed hybrid approach exhibits a superior range of convergence. The gradient-based algorithms in the second stage yield a robust solution that precisely registers images with sub-pixel accuracy. A hierarchical iterative search further enhances the convergence range and rate. The simulation results demonstrate that the proposed techniques provide significant benefits to the performance of image registration.
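
To make the coarse-to-fine idea concrete, a simplified sketch that estimates only a global integer-pixel translation through an image pyramid. The dissertation's framework estimates richer transformations with robust gradient-based estimation, so this is purely illustrative:

```python
import numpy as np
from scipy.ndimage import zoom

def shift_by_correlation(fixed: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Integer-pixel translation estimate via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    wrap = shift > np.array(corr.shape) / 2     # peaks past the midpoint wrap to negative shifts
    shift[wrap] -= np.array(corr.shape)[wrap]
    return shift

def coarse_to_fine_translation(fixed, moving, levels=3):
    """Estimate the shift at the coarsest pyramid level first, then refine
    the residual at each finer level."""
    total = np.zeros(2)
    for level in reversed(range(levels)):        # coarsest level first
        scale = 2 ** level
        f = zoom(fixed, 1.0 / scale, order=1)
        m = zoom(moving, 1.0 / scale, order=1)
        m = np.roll(m, tuple(np.round(total / scale).astype(int)), axis=(0, 1))
        total += shift_by_correlation(f, m) * scale  # accumulate in full-resolution pixels
    return total
```

The coarse level supplies an approximate estimate with a wide convergence range; finer levels only have to correct a small residual, which is the same division of labor as the initial-plus-final matching described above.
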
166. Sub-Pixel Registration in Computational Imaging and Applications to Enhancement of Maxillofacial CT Data

Balci, Murat 01 January 2006
In computational imaging, data acquired by sampling the same scene or object at different times or from different orientations result in images in different coordinate systems. Registration is a crucial step in order to be able to compare, integrate, and fuse data obtained from different measurements. Tomography is the method of imaging a single plane or slice of an object. A Computed Tomography (CT) scan, also known as a CAT scan (Computed Axial Tomography scan), is a form of helical tomography that traditionally produces a 2D image of the structures in a thin section of the body. It uses X-rays, which are ionizing radiation, and although the actual dose is typically low, repeated scans should be limited. In dentistry, and implant dentistry in particular, there is a need for 3D visualization of internal anatomy, which is mainly based on CT scanning technologies. The most important technological advancement that has dramatically enhanced the clinician's ability to diagnose, treat, and plan dental implants has been the CT scan. Advanced 3D modeling and visualization techniques permit highly refined and accurate assessment of the CT scan data. However, in addition to imperfections of the instrument and the imaging process, it is not uncommon to encounter other unwanted artifacts in the form of bright regions, flares, and erroneous pixels due to dental bridges, metal braces, etc. Currently, removing and cleaning up the data from acquisition backscattering imperfections and unwanted artifacts is performed manually, which is only as good as the experience level of the technician. The process is also error-prone, since the editing needs to be performed image by image. We address some of these issues by proposing novel registration methods and by using stone-cast models of the patient's dental imprint as reference ground-truth data. Stone-cast models were originally used by dentists to make complete or partial dentures. The CT scan of such a stone-cast model can be used to automatically guide the cleaning of the patient's CT scans from defects or unwanted artifacts, and also serves as an automatic segmentation system for the outliers of the CT scan data without the use of stone-cast models. The segmented data is subsequently used to clean the data from artifacts using a newly proposed 3D inpainting approach.
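
As one hedged illustration of how the bright metal artifacts mentioned above might be flagged for an inpainting step, assuming the CT slice is already calibrated in Hounsfield units; the threshold and dilation amount are invented placeholders, not values from the dissertation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def metal_artifact_mask(ct_slice_hu: np.ndarray, threshold_hu: float = 2500.0,
                        grow: int = 2) -> np.ndarray:
    """Flag implausibly bright voxels (bridges, braces) and grow the mask a
    little so the surrounding flare is also handed to the inpainting step."""
    mask = ct_slice_hu > threshold_hu
    return binary_dilation(mask, iterations=grow)
```
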
167. Design of a System for Target Localization and Tracking in Image-Guided Radiation Therapy

Peshko, Olesya January 2016
This thesis contributes to the topic of image-based feature localization and tracking in fluoroscopic (2D x-ray) image sequences. Such tracking is needed to automatically measure organ motion in cancer patients treated with radiation therapy. While the use of 3D cone-beam computed tomography (CBCT) images is standard clinical practice for verifying the agreement of the patient's position with the plan, it is done before the treatment procedure. Hence, measurement of the motion during the procedure could improve plan design and the accuracy of treatment delivery. Using an existing CBCT imaging system is one way of collecting fluoroscopic sequences for such analysis. Since x-ray images of soft tissues are typically characterized by low contrast and high noise, radio-opaque fiducial markers are often inserted in or around the target. This thesis describes techniques that comprise a complete system for automated detection and tracking of the markers in fluoroscopic image sequences. One of the cornerstone design ideas in this thesis is the use of the 3D CBCT image of the patient, from which the markers can be extracted more easily, to initialize the tracking in the fluoroscopic image sequences. To do this, a specific marker-based image registration framework was proposed. It includes multiple novel techniques, such as marker segmentation and modelling, a marker enhancement filter, and marker-specific template image generation approaches. Through extensive experiments on test data sets, these novel techniques were combined with appropriate state-of-the-art methods to produce a sleek, computationally efficient, fully automated system that achieved reliable marker localization and tracking. The accuracy of the system is sufficient for clinical implementation. The thesis demonstrates an application of the system to images of prostate cancer patients, and includes examples of statistical analysis of organ motion that can be used to improve treatment planning.

Doctor of Philosophy (PhD)

This thesis presents the development of a software system that analyzes sequences of 2D x-ray images to automatically measure organ motion in patients undergoing radiation therapy for cancer treatment. The knowledge of motion statistics obtained from this system creates opportunities for patient-specific treatment design that may lead to a better outcome. Automated processing of organ motion is challenging due to the low contrast and high noise levels in the x-ray images. To achieve reliable detection, the proposed system was designed to make use of 3D cone-beam computed tomography images of the patient, where the features (markers) are easier to identify. This required the development of a specific image registration framework for aligning the images, including a number of novel feature modelling and image processing techniques. The proposed motion tracking approach was implemented as a complete software system that was extensively validated on phantom and patient studies. It achieved a level of accuracy and reliability that is suitable for clinical implementation.
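
For orientation, a minimal marker-localization sketch based on plain normalized cross-correlation against a template image. The thesis builds marker-specific templates from the patient's CBCT and applies a dedicated marker enhancement filter first; none of that is reproduced here, and scikit-image is assumed to be available:

```python
import numpy as np
from skimage.feature import match_template

def locate_marker(fluoro_frame: np.ndarray, marker_template: np.ndarray):
    """Return the row/column of the best normalized-cross-correlation match
    of the marker template in one fluoroscopic frame, plus its NCC score."""
    ncc = match_template(fluoro_frame, marker_template, pad_input=True)
    peak = np.unravel_index(np.argmax(ncc), ncc.shape)
    return peak, float(ncc[peak])
```

Running this per frame, seeded near the previous frame's detection, gives a crude tracker; the thesis's contribution is in generating good templates and initial positions from the 3D CBCT so that such a search stays reliable despite low contrast and noise.
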
168. Automated Image Registration and Mosaicking for Multi-Sensor Images Acquired by a Miniature Unmanned Aerial Vehicle Platform

Orduyilmaz, Adnan 05 August 2006
Algorithms for automatic image registration and mosaicking are developed for a miniature Unmanned Aerial Vehicle (MINI-UAV) platform assembled by Air-O-Space International (AOSI) L.L.C. Three cameras onboard this MINI-UAV platform simultaneously acquire images in a single frame at green (550 nm), red (650 nm), and near-infrared (820 nm) wavelengths, but with translational and rotational misalignment. The area-based method is employed in the developed algorithms for control point detection, which is applicable when no prominent feature details are present in the image scenes. Because the three images to be registered have different spectral characteristics, region-of-interest determination and control point selection are the two key steps that ensure the quality of the control points. Affine transformation is adopted for the spatial transformation, followed by bilinear interpolation for image resampling. Mosaicking is conducted between adjacent frames after three-band co-registration. Pre-introducing the rotation makes the area-based method feasible when the rotational misalignment cannot be ignored. The algorithms are tested on three image sets collected at Stennis Space Center, Greenwood, and Oswalt in Mississippi. Manual evaluation confirms the effectiveness of the developed algorithms. The code is packaged as software that runs under Microsoft Windows on personal computers as a commercial-off-the-shelf (COTS) product, without requiring MATLAB or other special software support. Near real-time decision-making support is achievable with the final data once the software is installed in the ground control station. The final products are color-infrared (CIR) composite and normalized difference vegetation index (NDVI) images, which are used in agriculture, forestry, and environmental monitoring.
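
A small sketch of the two processing steps named above, band resampling with an affine transform plus bilinear interpolation, and NDVI computation, written in Python/NumPy rather than the MATLAB-free Windows package described in the abstract; the transform parameters are assumed to come from the control-point estimation step:

```python
import numpy as np
from scipy.ndimage import affine_transform

def resample_band(band: np.ndarray, matrix: np.ndarray, offset) -> np.ndarray:
    """Warp one spectral band with an estimated 2x2 affine matrix and offset,
    using bilinear interpolation (order=1) for resampling."""
    return affine_transform(band.astype(float), matrix, offset=offset,
                            order=1, mode='nearest')

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index from co-registered NIR and red bands."""
    nir, red = nir.astype(float), red.astype(float)
    out = np.zeros_like(nir)
    np.divide(nir - red, nir + red, out=out, where=(nir + red) != 0)
    return out
```

The point of the three-band co-registration is exactly that band-ratio products such as NDVI are only meaningful once the near-infrared and red bands are sampled on the same pixel grid.
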
169. Computer Vision Approaches for Mapping Gene Expression onto Lineage Trees

Lalit, Manan 06 December 2022
This project concerns studying the early development of living organisms, a period accompanied by dynamic morphogenetic events: the number of cells increases, cells change shape, and cell fate is specified during this time. Typically, in order to capture the dynamic morphological changes, one can employ a form of microscopy imaging such as Selective Plane Illumination Microscopy (SPIM), which offers single-cell resolution across time and hence allows observing the positions, velocities, and trajectories of most cells in a developing embryo. Unfortunately, the dynamic genetic activity that underlies these morphological changes and influences cellular fate decisions is captured only as static snapshots and often requires processing (sequencing or imaging) multiple distinct individuals. In order to set the stage for characterizing the factors that influence cellular fate, one must bring the data arising from the above-mentioned static snapshots of multiple individuals and the data arising from SPIM imaging of other distinct individuals, which characterizes the changes in morphology, into the same frame of reference. In this project, a computational pipeline is established which achieves this goal of mapping data from these various imaging modalities and specimens to a canonical frame of reference. The pipeline relies on three core building blocks: instance segmentation, tracking, and registration. In this dissertation, I introduce EmbedSeg, my solution for performing instance segmentation of 2D and 3D (volume) image data. Next, I introduce LineageTracer, my solution for tracking in time-lapse (2D+t, 3D+t) recordings. Finally, I introduce PlatyMatch, my solution for registration of volumes. Errors from the application of these building blocks accumulate, producing noisy estimates of gene expression for the digitized cells in the canonical frame of reference. These noisy estimates are processed to infer the underlying hidden state using a Hidden Markov Model (HMM) formulation. Lastly, wider dissemination of these methods requires an effective visualization strategy; a few details about the employed approach are also discussed in the dissertation. The pipeline was designed with imaging volume data in mind, but can easily be extended to incorporate other data modalities, if available, such as single-cell RNA sequencing (scRNA-Seq) (more details are provided in the Discussion chapter). The methods elucidated in this dissertation should provide a fertile playground for several future experiments and analyses. Some such potential experiments, as well as current weaknesses of the computational pipeline, are also discussed in the Discussion chapter.
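
The HMM step mentioned above amounts to decoding the most likely hidden expression state from noisy per-timepoint observations. A generic Viterbi sketch, not the dissertation's specific model or parameters:

```python
import numpy as np

def viterbi(log_emissions: np.ndarray, log_transition: np.ndarray,
            log_prior: np.ndarray) -> np.ndarray:
    """Most likely hidden-state path given T x K per-timepoint log emission
    scores, a K x K log transition matrix, and a length-K log prior."""
    T, K = log_emissions.shape
    score = log_prior + log_emissions[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_transition   # score of reaching each state from each predecessor
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)] + log_emissions[t]
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(score))
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path
```

Here the emission scores would come from the noisy, registration-derived expression estimates for one cell track, and the transition matrix encodes how plausibly the expression state changes between consecutive timepoints.
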
170. Cryo-Imaging Assessment of Imaging Agent Targeting to Dispersing and Metastatic Tumor Cells

Qutaish, Mohammed Q. 02 September 2014
No description available.
