About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

High Performance DSP-Based Image Acquisition and Pattern Recognition System

Yen, Jui-Yu 09 July 2002 (has links)
We propose a DSP-based image acquisition and pattern recognition system, aimed mainly at vision-guided automatic drilling of the flexible printed circuit board (FPCB). The system comprises three subsystems: an image acquisition system, a pattern recognition system, and a PCI communication system. First, we capture the FPCB image with a CCD camera and perform pattern matching to locate the drill targets on it. After computation, the DSP transmits the target coordinates to the user-interface application on the host computer. Experimental results show that, using two image pre-processing steps, the complete system meets its original design goals.
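As a rough illustration of the pattern-matching step described above, the sketch below locates drill targets by normalized cross-correlation using OpenCV on a PC. The thesis implements this on a DSP; the file names, the two pre-processing choices, and the match threshold here are assumptions, not the thesis's own values.

```python
# Minimal sketch of template-based drill-target localization, using OpenCV
# on a PC rather than the thesis's DSP implementation. File names and the
# similarity threshold are illustrative assumptions.
import cv2
import numpy as np

board = cv2.imread("fpcb_image.png", cv2.IMREAD_GRAYSCALE)       # CCD frame of the FPCB
template = cv2.imread("drill_target.png", cv2.IMREAD_GRAYSCALE)  # pattern of the drill goal

# Two simple pre-processing steps (the thesis also uses two, unspecified here)
board = cv2.GaussianBlur(board, (5, 5), 0)   # suppress sensor noise
board = cv2.equalizeHist(board)              # normalize illumination

# Normalized cross-correlation pattern match
result = cv2.matchTemplate(board, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(result >= 0.8)             # assumed similarity threshold
h, w = template.shape
for x, y in zip(xs, ys):
    cx, cy = x + w // 2, y + h // 2          # goal coordinates to send to the host
    print(f"drill target at ({cx}, {cy})")
```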
2

An Investigation of Image Processing Techniques for Paint Defect Detection Using a Machine Vision System

Kamat, Ashish V. 01 January 2004 (has links)
Detection and inspection of metal surface corrosion in the ballast tanks of U.S. Navy ships has been a long-standing problem. The adverse climatic conditions to which ballast tanks are exposed and their uneven geometry make visual inspection of surface coatings difficult. Thousands of tanks are inspected yearly, at an average cost of approximately $8,000-$15,000 per tank. To aid the visual inspection process, this research develops a new technique to automate the visual task of metal surface inspection through image acquisition and post-processing. The best image processing results are achieved by enhancing the contrast between the paint defect and the background using a newly developed optically active additive (OAA) for paints. A thorough investigation of image processing algorithms has been carried out, and a background of imaging theory and experiments is presented in this work.
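The sketch below illustrates the contrast-based segmentation idea under stated assumptions: with the OAA, the intact coating images brightly, so defects can be isolated as dark regions by a global threshold. The file name, the choice of Otsu thresholding, and the minimum blob size are assumptions, not taken from the thesis.

```python
# Hedged sketch of contrast-based paint-defect segmentation: the OAA-enhanced
# coating is bright, so defects appear as dark connected regions.
import cv2

img = cv2.imread("ballast_tank_coating.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)   # remove speckle before thresholding

# Otsu's method picks the split between bright OAA coating and dark defects
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Keep only connected components large enough to be plausible defects
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
for i in range(1, n):                        # label 0 is the background
    area = stats[i, cv2.CC_STAT_AREA]
    if area >= 50:                           # assumed minimum defect area (px)
        x, y, w, h = stats[i, :4]
        print(f"defect candidate at ({x},{y}), {w}x{h}, area={area}")
```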
3

Scanning a Capillary Column with a Color Scanner and Detecting Significant Zones

Wojnar, Petr January 2010 (has links)
The master’s thesis focuses on creating an application for scanner control through the WIA interface. The application is used to scan the capillary during electrophoretic separation methods in analytical chemistry. Scanned images of the capillary can be processed by an online application that detects significant zones during the experiment; offline detection over a set of scanned images is also possible. These applications were created in the LabVIEW environment from National Instruments.
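The thesis implements scanner control and detection in LabVIEW through WIA; purely as a language-neutral illustration of the zone-detection step, the Python sketch below finds zones as intensity dips along a one-dimensional profile of the scanned capillary. The capillary row range and the peak-finding parameters are assumptions.

```python
# Sketch of zone detection on a scanned capillary image. Assumes the capillary
# runs horizontally and separated zones absorb light (appear as dark bands).
import numpy as np
from scipy.signal import find_peaks
import imageio.v3 as iio

scan = iio.imread("capillary_scan.png")      # RGB scan from the flatbed scanner
gray = scan.mean(axis=2)                     # collapse color channels

# Average across the capillary's width to get one intensity profile along it
row0, row1 = 100, 120                        # assumed rows covering the capillary
profile = gray[row0:row1, :].mean(axis=0)

# Zones appear as dips, so run find_peaks on the inverted profile
dips, _ = find_peaks(-profile, prominence=10, width=5)  # assumed tuning
for x in dips:
    print(f"zone centre at pixel column {x}")
```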
4

ATRICS - A New System for Image Acquisition in Dendrochronology

Levanič, Tom 12 1900 (has links)
We developed a new system for image acquisition in dendrochronology called ATRICS. The new system was compared with existing measurement methods. Images derived from the ATRICS program and processed in any of the available programs for automatic tree-ring recognition are of much higher detail than those from flatbed scanners, as optical magnification has many advantages over digital magnification (especially in areas with extremely narrow tree rings). The quality of stitching was tested using visual assessment - no blurred areas were detected between adjacent images and no tree rings were missing because of the stitching procedure. A test for distortion showed no differences between the original and captured square, indicating that the captured images are distortion free. Differences between manual and automatic measurement are statistically insignificant. The processing of very long cores also poses no problems.
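As a sketch of the stitching stage, the following uses OpenCV's high-level Stitcher in scan mode to merge overlapping high-magnification frames of a core into one image. This stands in for ATRICS's own stitching procedure, and the input folder is an assumption.

```python
# Rough sketch of merging overlapping core frames into a single image.
# OpenCV's Stitcher in SCANS mode (planar motion, no rotation model) stands
# in for the thesis's own stitching procedure.
import cv2
import glob

frames = [cv2.imread(p) for p in sorted(glob.glob("core_frames/*.png"))]  # assumed folder
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, merged = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("core_stitched.png", merged)
else:
    raise RuntimeError(f"stitching failed with status {status}")
```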
5

DSP Based Real-Time Human Face Recognition System

Tseng, Yu-Chan 04 July 2005 (has links)
This thesis describes the development of a DSP-based real-time human face recognition system. The system consists of three major subsystems: image acquisition, image preprocessing, and facial feature extraction. For the experiments, we used color face images with complex backgrounds and first simulated the system on a PC. Characteristic points and characteristic vectors are extracted from the face region located by a genetic algorithm, and the recognition subsystem then identifies the face; finally, the system is ported to the DSP. Experimental results show good recognition accuracy and efficiency.
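A minimal sketch of the genetic search mentioned above: candidate face boxes evolve toward higher fitness over generations. The fitness measure here (mean value of a synthetic skin mask inside the box) is only an assumed stand-in for the thesis's own criterion.

```python
# Minimal genetic-algorithm sketch: evolve candidate boxes (x, y, w, h) to
# maximize a face-likeness score. The synthetic skin mask is an assumption.
import random
import numpy as np

H, W = 240, 320
skin_mask = np.zeros((H, W))
skin_mask[60:180, 100:220] = 1.0             # synthetic "face" region

def fitness(box):
    x, y, w, h = box
    region = skin_mask[y:y + h, x:x + w]
    return region.mean() if region.size else 0.0

def random_box():
    w, h = random.randint(40, 160), random.randint(40, 160)
    return (random.randint(0, W - w), random.randint(0, H - h), w, h)

def mutate(box):
    x, y, w, h = box
    x = min(max(x + random.randint(-10, 10), 0), W - w)
    y = min(max(y + random.randint(-10, 10), 0), H - h)
    return (x, y, w, h)

pop = [random_box() for _ in range(50)]
for gen in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                         # keep the fittest candidates
    pop = elite + [mutate(random.choice(elite)) for _ in range(40)]
print("best face candidate box:", max(pop, key=fitness))
```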
6

An artificial intelligence system for oncological volumetric medical PET classification

Sharif, Mhd Saeed January 2013 (has links)
Positron emission tomography (PET) is an emerging medical imaging modality. Due to its high sensitivity and ability to model physiological function, it is effective in identifying active regions that may be associated with different types of tumour. Increasing numbers of patient scans have created an urgent need for new, efficient data analysis systems that aid clinicians in the diagnosis of disease, save a substantial amount of processing time, and automatically detect small lesions. In this research, an automated intelligent system for oncological PET volume analysis has been developed.

An experimental NEMA (National Electrical Manufacturers Association) IEC (International Electrotechnical Commission) body phantom data set, a Zubal anthropomorphic phantom data set with simulated tumours, a clinical data set from a patient with histologically proven non-small cell lung cancer, and clinical data sets from seven patients with laryngeal squamous cell carcinoma were utilised in this research. The initial stage of the developed system applies different thresholding approaches and transforms the processed volumes into the wavelet domain at different levels of decomposition using the Haar wavelet transform. A K-means approach is also deployed to classify the processed volume into a distinct number of classes, and the optimal number of classes for each data set is obtained automatically from the Bayesian information criterion.

The second stage of the system involves artificial intelligence approaches including a feedforward neural network, an adaptive neuro-fuzzy inference system, a self-organising map, and fuzzy C-means; the best neural network design for PET applications has been thoroughly investigated. All proposed classifiers were evaluated and tested on the experimental, simulated and clinical data sets. The final stage develops a new optimised committee machine for PET tumour classification. Objective and subjective evaluations of all system outputs show promising results for classifying patient lesions. Compared with all the investigated classifiers and the developed committee machines, the new approach achieves superior results: an accuracy of 99.95% for the clinical data set of the patient with histologically proven lung tumour, and an average accuracy of 98.11% for the clinical data sets of the seven patients with laryngeal tumours.
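The following sketch illustrates the class-number selection described above: K-means clusters voxel intensities, and the number of classes is chosen by the Bayesian information criterion. Scoring each partition with a Gaussian mixture's BIC is one common recipe and an assumption here, as is the synthetic 1-D stand-in for a PET volume.

```python
# Sketch: cluster voxel intensities with K-means and pick the class count by
# BIC, scored via a Gaussian mixture initialised from the K-means clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(m, 0.3, 500) for m in (1.0, 3.0, 6.0)])
X = voxels.reshape(-1, 1)                    # stand-in for flattened PET volume

best_k, best_bic = None, np.inf
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    means = np.array([[X[labels == c].mean()] for c in range(k)])
    gmm = GaussianMixture(n_components=k, means_init=means, random_state=0).fit(X)
    bic = gmm.bic(X)                         # lower BIC = better model
    if bic < best_bic:
        best_k, best_bic = k, bic
print(f"optimal number of classes by BIC: {best_k}")
```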
7

Software Design of a Single-Purpose Machine for Visual Inspection

Horák, Daniel January 2021 (has links)
This master’s thesis deals with the fundamentals of machine vision applications and their practical implementation. The research part surveys the basic options for image acquisition and image processing in different dimensions. The practical part describes the design of a dimension-control algorithm using a 3D camera; this algorithm is then implemented in a single-purpose machine for optical dimension control.
8

Characterization of Computed Tomography Radiomic Features using Texture Phantoms

Shafiq ul Hassan, Muhammad 05 April 2018 (has links)
Radiomics treats images as quantitative data and promises to improve cancer prediction in radiology and therapy response assessment in radiation oncology. However, a number of fundamental problems must be solved before radiomic features can be applied in the clinic. The first step in computed tomography (CT) radiomic analysis is the acquisition of images using selectable image acquisition and reconstruction parameters, and radiomic features have shown large variability as these parameters vary, so methods are needed to address the variability due to each CT parameter. To this end, texture phantoms provide a stable geometry and stable Hounsfield units (HU) with which to characterize radiomic features with respect to acquisition and reconstruction parameters. In this project, normalization methods were developed to address these variability issues in CT radiomics using texture phantoms.

In the first part of the project, variability in radiomic features due to voxel size was addressed. A voxel size resampling method is presented as a preprocessing step for imaging data acquired with variable voxel sizes; after resampling, the variability of 42 radiomic features was reduced significantly. A voxel size normalization is then presented for key radiomic features that have been identified as predictive biomarkers in diagnostic imaging or as useful for response assessment in radiation therapy, but that are intrinsically dependent on voxel size (which also implies dependence on lesion volume); after normalization, 10 of these features became robust as a function of voxel size. Normalization factors were also developed for the intrinsic dependence of texture features on the number of gray levels; after normalization, the variability due to gray levels in 17 texture features was reduced significantly.

In the second part of the project, the voxel size and gray level (GL) normalizations developed from the phantom studies were tested on actual lung cancer tumors. Eighteen patients with non-small cell lung cancer and varying tumor volumes were studied and compared with phantom scans acquired on 8 different CT scanners. Eight out of 10 features showed high (Rs > 0.9) Spearman rank correlations with voxel size before normalization and low (Rs < 0.5) correlations after it. Likewise, texture features were unstable (ICC < 0.6) before gray level normalization and highly stable (ICC > 0.9) after it. This showed that the voxel size and GL normalizations derived from the texture phantom also apply to lung cancer tumors, and it highlights the importance and utility of investigating the robustness of CT radiomic features using CT texture phantoms.

Another contribution of this work is the development of correction factors for the variability in radiomic features due to reconstruction kernels. Reconstruction kernels and tube current contribute to noise texture in CT, and most texture features are sensitive to the correlated noise texture that different kernels produce. In this work, the noise power spectrum (NPS) was measured on 5 CT scanners using a standard ACR phantom to quantify the correlated noise texture, and the variability in texture features across kernels was reduced by applying the NPS peak frequency and the region-of-interest (ROI) maximum intensity as correction factors.
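A hedged sketch of the voxel-size resampling step from the first part of the project described above: volumes acquired with different voxel sizes are interpolated onto one common grid before radiomic features are extracted. The 1 mm isotropic target spacing and the use of scipy's cubic interpolation are assumptions.

```python
# Sketch: resample a CT volume to an assumed common isotropic voxel size
# before radiomic feature extraction.
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing_mm, target_mm=(1.0, 1.0, 1.0)):
    """Resample a 3-D CT volume (z, y, x) to a uniform voxel size."""
    factors = [s / t for s, t in zip(spacing_mm, target_mm)]
    return zoom(volume, factors, order=3)    # cubic interpolation

ct = np.random.default_rng(0).normal(0, 1, (40, 128, 128))  # stand-in volume
resampled = resample_to_isotropic(ct, spacing_mm=(3.0, 0.98, 0.98))
print(ct.shape, "->", resampled.shape)
```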
Most texture features were radiation dose independent but strongly kernel dependent, as demonstrated by a significant shift in NPS peak frequency among kernels. Improvements in the robustness of 19 features ranged from 30% to 78% after correction. In conclusion, most texture features are sensitive to imaging parameters such as reconstruction kernel, reconstruction field of view (FOV), and slice thickness, and all reconstruction parameters contribute to the inherent noise in CT images. The problem can be partly solved by quantifying noise texture in CT radiomics using a texture phantom and an ACR phantom. Texture phantoms should be a prerequisite to patient studies, as they provide a stable geometry and HU distribution with which to characterize radiomic features and supply ground truths for multi-institutional validation studies.
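A simplified sketch of the NPS measurement mentioned above, under stated assumptions: detrended ROIs from a uniform phantom region are Fourier transformed, their squared magnitudes averaged, and the radially averaged 1-D NPS yields the peak frequency used to compare kernels. ROI handling, normalization, and pixel size are assumptions.

```python
# Sketch: estimate the NPS peak frequency from uniform-phantom ROIs.
import numpy as np

def nps_peak_frequency(rois, pixel_mm=0.5):
    """rois: list of square 2-D arrays cut from a uniform phantom region."""
    n = rois[0].shape[0]
    acc = np.zeros((n, n))
    for roi in rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2   # detrend, |FFT|^2
    nps2d = np.fft.fftshift(acc / len(rois)) * (pixel_mm ** 2) / (n * n)

    # Radially average the 2-D NPS down to a 1-D spectrum
    freqs = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_mm))
    fy, fx = np.meshgrid(freqs, freqs, indexing="ij")
    radius = np.hypot(fx, fy).ravel()
    bins = np.linspace(0, radius.max(), 40)
    idx = np.digitize(radius, bins)
    nps1d = []
    for i in range(1, len(bins) + 1):
        vals = nps2d.ravel()[idx == i]
        nps1d.append(vals.mean() if vals.size else 0.0)
    return bins[int(np.argmax(nps1d))]       # peak frequency (cycles/mm)

rois = [np.random.default_rng(i).normal(0, 10, (64, 64)) for i in range(16)]
print(f"NPS peak at {nps_peak_frequency(rois):.3f} cycles/mm")
```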
9

Data-guided statistical sparse measurements modeling for compressive sensing

Schwartz, Tal Shimon January 2013 (has links)
Digital image acquisition can be a time-consuming process in situations where high spatial resolution is required, so optimizing the acquisition mechanism is of high importance for many measurement applications. Acquiring the data through a dynamically chosen small subset of measurement locations addresses this problem. In such a case, the measured information is incomplete, which necessitates special reconstruction tools to recover the original data set. The reconstruction can be performed using the concept of sparse signal representation: recovering signals and images from their sub-Nyquist measurements forms the core idea of compressive sensing (CS). In this work, a CS-based data-guided statistical sparse measurements method is presented, implemented and evaluated; it significantly improves image reconstruction from sparse measurements. In the data-guided approach, the signal sampling distribution is optimized to improve reconstruction performance: it is based on the underlying data rather than the commonly used uniform random distribution. The optimal sampling probability pattern is obtained by a learning process, using two methods, direct and indirect. The direct method learns a nonparametric probability density function directly from the data set; the indirect method serves cases where a mapping between extracted features and the probability density function is required. The unified model is implemented for different representation domains, including the frequency and spatial domains. Experiments were performed for multiple applications, including optical coherence tomography, bridge structure vibration, robotic vision, 3D laser range measurements and fluorescence microscopy. Results show that the data-guided statistical sparse measurements method significantly outperforms conventional CS reconstruction: it achieves a much higher reconstruction signal-to-noise ratio at the same compression rate, or, alternatively, a similar signal-to-noise ratio with significantly fewer samples.
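As an illustration of the data-guided sampling idea, the sketch below draws measurement locations from a probability density estimated from representative data instead of a uniform random mask. Using the local gradient energy of a prior image as the density is an assumed stand-in for the thesis's learned nonparametric and feature-mapped densities.

```python
# Sketch: build a data-guided sampling mask by drawing locations from a PDF
# estimated from a prior image, rather than uniformly at random.
import numpy as np

rng = np.random.default_rng(0)
prior = rng.normal(0, 1, (128, 128)).cumsum(0).cumsum(1)  # smooth stand-in image

# Estimate a sampling PDF from the data: high-gradient areas get more samples
gy, gx = np.gradient(prior)
pdf = np.hypot(gx, gy) + 1e-6                # avoid zero probability anywhere
pdf /= pdf.sum()

# Draw 10% of the pixel locations according to the learned PDF
n_samples = prior.size // 10
flat_idx = rng.choice(prior.size, size=n_samples, replace=False, p=pdf.ravel())
mask = np.zeros(prior.shape, dtype=bool)
mask.ravel()[flat_idx] = True
print(f"sampling {mask.sum()} of {mask.size} locations, guided by the data")
```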