551

Performance analysis of EM-MPM and K-means clustering in 3D ultrasound breast image segmentation

Yang, Huanyi 05 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Mammographic density is an important risk factor for breast cancer; detecting and screening at an early stage could help save lives. Analyzing breast density distribution requires a good segmentation algorithm. In this thesis, we compared two widely used segmentation algorithms, EM-MPM and K-means clustering, applying them to twenty cases of synthetic phantom ultrasound tomography (UST) and nine cases of clinical mammogram and UST images. On the synthetic phantoms, EM-MPM achieved better segmentation accuracy than K-means clustering: its results fit the ground truth closely, with a superior Tanimoto Coefficient and Parenchyma Percentage. EM-MPM uses a Bayesian prior that exploits the 3D structure of the volume and yields a better localized segmentation. It performs significantly better for highly dense tissue scattered within low-density tissue and for volumes with low contrast between high- and low-density tissues. On the clinical mammograms, EM-MPM again outperformed K-means clustering, identifying dense tissue more clearly and accurately. The superior EM-MPM results in this study point to promising applications in density-proportion measurement and cancer risk evaluation.
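For reference, the Tanimoto Coefficient used above as an accuracy measure is the intersection-over-union of the segmented mask and the ground-truth mask. A minimal NumPy sketch (the function name and binary-mask convention are illustrative, not from the thesis):

```python
import numpy as np

def tanimoto_coefficient(seg, truth):
    """Overlap between two binary masks: |A & B| / |A | B|; 1.0 = identical."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    union = np.logical_or(seg, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: trivially identical
    return np.logical_and(seg, truth).sum() / union
```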
552

Parallel acceleration of deadlock detection and avoidance algorithms on GPUs

Abell, Stephen W. 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Current mainstream computing systems have become increasingly complex. Most have Central Processing Units (CPUs) that invoke multiple threads for their computing tasks. A growing issue in these systems is resource contention, and with resource contention comes the risk of deadlock. Various software and hardware approaches implement deadlock detection/avoidance techniques; however, they lack either the speed or the problem-size capability needed for real-time systems. The research conducted for this thesis aims to resolve the issues in past approaches by converging the two platforms (software and hardware) by means of the Graphics Processing Unit (GPU). This thesis presents two GPU-based deadlock detection algorithms and one GPU-based deadlock avoidance algorithm: (i) GPU-OSDDA, a GPU-based single-unit resource deadlock detection algorithm; (ii) GPU-LMDDA, a GPU-based multi-unit resource deadlock detection algorithm; and (iii) GPU-PBA, a GPU-based deadlock avoidance algorithm. Both GPU-OSDDA and GPU-LMDDA represent resource-allocation status with a Resource Allocation Graph (RAG), encoded as integer-length bit-vectors. This approach has several advantages: (i) less memory is required for the algorithm matrices; (ii) 32 computations are performed per instruction (in most cases); and (iii) the algorithms can handle large numbers of processes and resources. The detection algorithms also require minimal interaction with the CPU, since matrix storage and computation reside on the GPU, giving them the character of an interactive service. As a result, both algorithms achieved speedups of up to more than two orders of magnitude over their serial CPU implementations (3.17-317.42x for GPU-OSDDA and 37.17-812.50x for GPU-LMDDA). Lastly, GPU-PBA is the first parallel deadlock avoidance algorithm implemented on the GPU. While it does not achieve a two-orders-of-magnitude speedup over its CPU implementation, it provides a platform for future GPU deadlock-avoidance research.
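To illustrate the bit-vector encoding the detection algorithms rely on (a CPU-side sketch only; the thesis's kernels run on the GPU, and all names here are hypothetical): packing each adjacency row of the RAG into 32-bit words lets a single bitwise operation touch 32 edges at once.

```python
import numpy as np

def pack_rows(adj):
    """Pack a boolean adjacency matrix into rows of uint32 bit-vectors."""
    n_rows, n_cols = adj.shape
    words = (n_cols + 31) // 32
    packed = np.zeros((n_rows, words), dtype=np.uint32)
    for r, c in zip(*np.nonzero(adj)):
        packed[r, c // 32] |= np.uint32(1) << np.uint32(c % 32)
    return packed

def successors_union(packed, frontier):
    """Union of successor sets for a set of nodes, 32 edges per OR."""
    out = np.zeros(packed.shape[1], dtype=np.uint32)
    for r in frontier:
        out |= packed[r]   # one word-wide OR covers 32 potential edges
    return out
```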
553

Video anatomy : spatial-temporal video profile

Cai, Hongyuan 31 July 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Massive numbers of videos are uploaded to video websites, creating demand for smooth video browsing, editing, retrieval, and summarization. Most videos employ several types of camera operation to expand the field of view, emphasize events, and create cinematic effects. To digest the heterogeneous videos found on websites and in databases, video clips are profiled into a 2D image scroll containing both spatial and temporal information for video preview. The video profile is visually continuous, compact, scalable, and indexed to each frame. This work analyzes camera kinematics, including zoom, translation, and rotation, and categorizes camera actions as combinations of these. An automatic video summarization framework is proposed and developed. After conventional video clip segmentation, followed by segmentation on smooth camera operations, the global flow field under all camera actions is investigated for profiling various types of video. A new algorithm extracts the major flow direction and a convergence factor from condensed images. This work then proposes a uniform scheme to segment video clips and sections, sample the video volume across the major flow, and compute the flow convergence factor, in order to obtain an intrinsic scene space less influenced by camera ego-motion. A motion-blur technique renders dynamic targets in the profile. The resulting video profile can be displayed in a video track to guide access to video frames, help video editing, and support applications such as surveillance, visual archiving of environments, video retrieval, and online video preview.
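As a rough illustration of the quantities the profiling step works with, one can estimate a global flow field between consecutive frames and reduce it to a dominant direction plus a convergence measure. This sketch uses OpenCV's Farneback optical flow, which is not the condensed-image algorithm the thesis designs; all names and parameters are illustrative.

```python
import cv2
import numpy as np

def global_flow_summary(prev_gray, next_gray):
    """Dominant flow direction and a convergence factor for one frame pair."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_vec = flow.reshape(-1, 2).mean(axis=0)      # major flow direction
    du_dx = np.gradient(flow[..., 0], axis=1)        # divergence of the field
    dv_dy = np.gradient(flow[..., 1], axis=0)
    convergence = -(du_dx + dv_dy).mean()            # > 0 when vectors point inward
    return mean_vec, convergence
```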
554

Active geometric model : multi-compartment model-based segmentation & registration

Mukherjee, Prateep 26 August 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / We present a novel variational and statistical approach to model-based segmentation. Our model generalizes the Chan-Vese model, proposed for concurrent segmentation of multiple objects embedded in the same image domain. We also propose a novel shape descriptor, the Multi-Compartment Distance Function (mcdf). Our segmentation framework is two-fold: first, several training samples distributed across various classes are registered onto a common frame of reference; then a variational method similar to Active Shape Models (ASMs) generates an average shape model, which is used to partition new images. The key advantages of this framework are: (i) landmark-free, automated shape training; and (ii) a strictly shape-constrained model for fitting test data. Our model naturally handles shapes of arbitrary dimension and topology (closed or open curves). We term it the Active Geometric Model, since it focuses on segmentation of geometric shapes. We demonstrate the power of the proposed framework in two important medical applications: morphology estimation of 3D motor neuron compartments and thickness estimation of Henle's fiber layer in the retina. We also compare the qualitative and quantitative performance of our method with several other state-of-the-art segmentation methods.
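For context, the classical two-phase Chan-Vese energy that the model generalizes is, for an image I over a domain, a contour C, and region means c1 and c2:

```latex
E(c_1, c_2, C) = \mu \,\mathrm{Length}(C) + \nu \,\mathrm{Area}(\mathrm{inside}(C))
  + \lambda_1 \int_{\mathrm{inside}(C)} |I(x) - c_1|^2 \, dx
  + \lambda_2 \int_{\mathrm{outside}(C)} |I(x) - c_2|^2 \, dx
```

Minimizing this drives the contour toward a partition whose inside and outside are each well approximated by a constant intensity; the multi-object setting replaces the single contour with one evolving shape per compartment.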
555

A high resolution 3D and color image acquisition system for long and shallow impressions in crime scenes

Egoda Gamage, Ruwan Janapriya January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In crime scene investigations it is necessary to capture images of impression evidence such as tire tracks or shoe impressions. Currently, such evidence is captured by taking two-dimensional (2D) color photographs or by making a physical cast of the impression to capture its three-dimensional (3D) structure. This project aims to build a digitizing device that scans the impression evidence and generates (i) a high-resolution 3D surface image and (ii) a co-registered 2D color image. The method is based on active structured lighting to extract the 3D shape of a surface. A prototype device was built that uses an assembly of two line laser lights and a high-definition video camera moved at a precisely controlled, constant speed along a mechanical actuator rail to scan the evidence. Prototype software was also developed to implement the image processing, calibration, and surface-depth calculations. The methods developed for extracting the digitized 3D surface shape and 2D color images include (i) a self-contained calibration method that eliminates the need for pre-calibration of the device; (ii) two colored line laser lights projected from different angles to eliminate problems due to occlusion; and (iii) extraction of a high-resolution color image of the impression evidence with minimal distortion. The system achieves sub-millimeter accuracy in the depth image, together with a high-resolution color image registered to it, and is particularly suitable for imaging long tire-track impressions without stitching multiple images.
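As background on the structured-light principle (a simplified sketch under stated assumptions, not the thesis's self-contained calibration pipeline): with a pinhole camera looking straight down from a reference height and a laser sheet meeting the surface at a known angle, a change in surface height shifts the imaged laser line sideways, and that shift can be inverted to recover depth. All names and the geometry below are illustrative.

```python
import numpy as np

def height_from_line_shift(pixel_shift, z_ref, focal_px, theta_rad):
    """Height above the reference plane from the laser line's pixel shift.

    Assumes a camera at height z_ref looking straight down (focal length
    focal_px in pixels) and a laser sheet inclined at theta_rad to the surface.
    """
    dx = pixel_shift * z_ref / focal_px   # pixel shift -> shift on the surface
    return dx * np.tan(theta_rad)         # lateral shift -> height change
```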
556

Morphometric analysis of hippocampal subfields : segmentation, quantification and surface modeling

Cong, Shan January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Object segmentation, quantification, and shape modeling are important areas in medical image processing. By combining these techniques, researchers can extract and represent details of structures of interest, which can serve as the basis for subsequent analyses such as feature classification, regression, and prediction. This thesis presents a new framework for building a three-dimensional (3D) hippocampal atlas model with subfield information mapped onto its surface, with which hippocampal surface registration can be performed and comparison and analysis can be facilitated and easily visualized. The framework combines three powerful tools for automatic subcortical segmentation and 3D surface modeling: FreeSurfer and FMRIB's Integrated Registration and Segmentation Tool (FIRST) are employed for hippocampal segmentation and quantification, while SPherical HARMonics (SPHARM) is employed for parametric surface modeling. The pipeline is shown to be effective in creating a hippocampal surface atlas from the Alzheimer's Disease Neuroimaging Initiative Grand Opportunity and phase 2 (ADNI GO/2) dataset. Intra-class Correlation Coefficients (ICCs) are calculated to evaluate the reliability of the extracted hippocampal subfields. The complex folding anatomy of the hippocampus poses many analytical challenges, especially since informative hippocampal subfields are usually ignored in detailed morphometric studies; as a result, current research is inadequate to accurately characterize hippocampal morphometry and identify structural changes related to different conditions. One contribution of this study is therefore to model the hippocampal surface with a parametric spherical harmonic model, a Fourier descriptor for a general 3D surface. The second contribution is to extend hippocampal studies by incorporating valuable subfield information: based on the subfield distributions, a surface atlas is created for both the left and right hippocampi. The third contribution is the calculation of Fourier coefficients in the parametric space; based on the coefficient values and a user-chosen degree, a pair of averaged hippocampal surface atlas models can be reconstructed. These contributions lay a solid foundation for more accurate, subfield-guided morphometric analysis of the hippocampus and have the potential to reveal subtle hippocampal structural damage associated with different conditions.
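To make the SPHARM step concrete: a genus-zero surface parameterized on the sphere is expanded in spherical harmonics, and truncating the expansion at a user-chosen degree reconstructs a smoothed shape, which is how an averaged atlas surface can be rebuilt from coefficients. A sketch, where the coefficient layout `coeffs[(l, m)]` (one 3-vector per harmonic) is an assumption:

```python
import numpy as np
from scipy.special import sph_harm

def reconstruct_surface(coeffs, azimuth, polar, max_degree):
    """Evaluate sum_{l<=L, |m|<=l} c_lm * Y_lm at the given sphere samples."""
    pts = np.zeros(azimuth.shape + (3,), dtype=complex)
    for l in range(max_degree + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(m, l, azimuth, polar)  # scipy arg order: m, l, az, polar
            pts += coeffs[(l, m)] * Y[..., np.newaxis]
    return pts.real  # (..., 3) reconstructed surface points
```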
557

A new adaptive trilateral filter for in-loop filtering

Kesireddy, Akitha January 2014 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / HEVC achieves significant coding-efficiency improvement over existing video coding standards by employing many new coding tools. The Deblocking Filter, Sample Adaptive Offset, and Adaptive Loop Filter were introduced for in-loop filtering during HEVC standardization. However, these filters operate in the spatial domain, despite the temporal correlation within video sequences. To reduce artifacts and better align object boundaries in video, a new in-loop filtering algorithm is proposed and implemented in the HM-11.0 reference software. The proposed algorithm yields an average bitrate reduction of about 0.7% and improves the PSNR of the decoded frame by 0.05%, 0.30%, and 0.35% in the luminance and the two chrominance components.
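For background (this is the standard bilateral filter on which trilateral variants build, not the proposed adaptive filter): each neighbor is weighted by both spatial closeness and intensity similarity, and a trilateral filter adds a third, locally adaptive term on top of this idea. A minimal sketch with illustrative parameters:

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Plain bilateral filter on a 2D grayscale image (illustrative only)."""
    img = img.astype(np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))   # domain weight
    pad = np.pad(img, radius, mode='reflect')
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-(patch - img[y, x])**2 / (2 * sigma_r**2))  # range weight
            w = spatial * rng
            out[y, x] = (w * patch).sum() / w.sum()
    return out
```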
558

A scalable approach to processing adaptive optics optical coherence tomography data from multiple sensors using multiple graphics processing units

Kriske, Jeffery Edward, Jr. 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Adaptive optics-optical coherence tomography (AO-OCT) is a non-invasive method of imaging the human retina in vivo. It can be used to visualize microscopic structures, making it incredibly useful for the early detection and diagnosis of retinal disease. The research group at Indiana University has a novel multi-camera AO-OCT system capable of 1 MHz acquisition rates. Until now, no method existed to process data from such a system quickly and accurately enough on a CPU or a GPU, nor one that scales efficiently and automatically to multiple GPUs; this is a barrier to using a MHz AO-OCT system in a clinical environment. A novel approach to processing AO-OCT data from this unique multi-camera optics system is tested on multiple graphics processing units (GPUs) in parallel with one-, two-, and four-camera combinations. The design and results demonstrate a scalable, reusable, extensible method of computing AO-OCT output. The approach can either deliver real-time results with an AO-OCT system acquiring at 1 MHz or be scaled to a higher-accuracy mode using a fast Fourier transform over 16,384 complex values.
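The core per-spectrum computation in Fourier-domain OCT is a windowed FFT that turns each acquired spectrum into a depth profile; the 16,384-point transform mentioned above corresponds to the high-accuracy mode. A minimal NumPy sketch (array shapes and the Hann window are assumptions; the thesis's pipeline runs these transforms batched across multiple GPUs):

```python
import numpy as np

def spectra_to_depth(spectra, n_fft=16384):
    """(batch, samples) real spectra -> (batch, n_fft // 2) depth intensities."""
    window = np.hanning(spectra.shape[1])          # taper spectral edges
    profiles = np.fft.fft(spectra * window, n=n_fft, axis=1)
    return np.abs(profiles[:, :n_fft // 2])        # drop the mirrored half
```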
559

Advancing profiling sensors with a wireless approach

Galvis, Alejandro 20 November 2013 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / In general, profiling sensors are low-cost, crude imagers that typically utilize a sparse detector array, whereas traditional cameras employ a dense focal-plane array. Profiling sensors are of particular interest in applications that require classifying a sensed object into broad categories such as human, animal, or vehicle, and they have many other applications in which reliable classification of the crude silhouette or profile produced by the sensor is of value. The notion of a profiling sensor was first realized by a near-infrared (N-IR), retro-reflective prototype consisting of a vertical column of sparse detectors. Alternative arrangements have been implemented in which a subset of the detectors is offset from the vertical column and placed at arbitrary locations along the anticipated path of the objects of interest. All prior work with N-IR, retro-reflective profiling sensors has used wired detectors. This thesis surveys that work and advances it with a wireless profiling-sensor prototype in which each detector is a wireless sensor node, and the aggregation of these nodes forms the profiling sensor's field of view. In this novel approach, a base station pre-processes the data collected from the sensor nodes, including data realignment, prior to classification by a back-propagation neural network. Such a wireless detector configuration expands deployment options for N-IR, retro-reflective profiling sensors.
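A hedged sketch of the base station's realignment step (node count, sampling rate, and stream format are all illustrative): each wireless node reports timestamped samples, and resampling every stream onto a shared clock reconstructs the column-aligned profile that a wired array would produce.

```python
import numpy as np

def realign(node_streams, t_start, t_end, rate_hz=100.0):
    """node_streams: list of (timestamps, values) pairs, one per detector node."""
    t = np.arange(t_start, t_end, 1.0 / rate_hz)    # shared time base
    rows = [np.interp(t, ts, vals) for ts, vals in node_streams]
    return np.vstack(rows)                          # (n_nodes, n_samples) profile
```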
560

Facial and keystroke biometric recognition for computer based assessments

Adetunji, Temitope Oluwafunmilayo 12 1900 (has links)
M. Tech. (Department of Information Technology, Faculty of Applied and Computer Sciences), Vaal University of Technology. / Computer based assessments have become one of the fastest growing sectors in both non-academic and academic establishments. Successful computer based assessments require security against impersonation and fraud, and many researchers have proposed the use of biometric technologies to overcome this issue. Biometric technologies are defined as computerised methods of authenticating an individual based on behavioural and physiological characteristics. Basic biometric computer based assessment systems are prone to security threats in the form of fraud and impersonation. To combat these problems, a keystroke dynamics technique and facial biometric recognition were introduced into the computer based assessment system to enhance its authentication ability. Keystroke dynamics were measured using latency and pressure, while the facial biometrics were measured using principal component analysis (PCA). Experiments were carried out quantitatively using MATLAB for simulation and Excel for data analysis. System performance was measured using the following evaluation schemes: False Acceptance Rate (FAR), False Rejection Rate (FRR), Equal Error Rate (EER), and Accuracy (AC), comparing the biometric computer based assessment system with and without keystroke and face recognition, as well as against other biometric computer based assessment techniques proposed in the literature. Successful implementation of the proposed technique would improve the reliability, efficiency, and effectiveness of computer based assessments and, if deployed, would improve authentication and security while reducing fraud and impersonation.
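For reference, the evaluation metrics named above can be computed from matcher scores as follows (a sketch; the score arrays and the higher-score-means-genuine convention are assumptions):

```python
import numpy as np

def far_frr_eer(genuine, impostor):
    """FAR/FRR across thresholds and the Equal Error Rate where they cross."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # accepted impostors
    frr = np.array([(genuine < t).mean() for t in thresholds])    # rejected genuines
    i = np.argmin(np.abs(far - frr))
    return far, frr, (far[i] + frr[i]) / 2.0       # EER: point where FAR == FRR
```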
