441. Investigation on Segmentation, Recognition and 3D Reconstruction of Objects Based on LiDAR Data or MRI / Tang, Shijun. 05 1900 (has links)
Segmentation, recognition and 3D reconstruction of objects are cutting-edge research topics with applications ranging from environmental, medical and geographical studies to intelligent transportation. This dissertation focuses on segmentation, recognition and 3D reconstruction of objects using LiDAR data and MRI, with three main contributions. (I) A feature extraction algorithm for sparse LiDAR data. A novel method is proposed for feature extraction from sparse LiDAR data; the algorithm and its underlying principles are described, and the choices and roles of its parameters are tested and discussed. By exploiting the correlation of neighboring points directly, the statistical distribution of normal vectors around each point is used to determine the category of that point. (II) Segmentation and 3D reconstruction of objects based on LiDAR/MRI. The proposed method layers the 3D LiDAR data, segments the different categories, and reconstructs 3D canopy surfaces of individual tree crowns and clusters of trees from the LiDAR points using a region-based active contour model. The method delineates 3D forest canopy naturally from the contours of raw LiDAR point clouds, and the model is suitable not only for idealized cone shapes but also for other 3D shapes and other kinds of data, such as MRI. (III) Novel algorithms for recognition of objects based on LiDAR/MRI. For sparse LiDAR data, the feature extraction algorithm is applied to classify buildings and trees. More importantly, novel algorithms based on level set methods are employed to recognize not only buildings and trees, and different tree species (e.g., oak trees and Douglas firs), but also the subthalamic nuclei (STNs). Using these level set algorithms, a 3D model of the STNs in the brain is reconstructed from the statistical data of previous anatomical atlas investigations used as a reference; a 3D rendering of the subthalamic nuclei and the skull derived directly from MR imaging is also used to determine the 3D coordinates of the STNs in the brain. In summary, novel methods and algorithms for segmentation, recognition and 3D reconstruction of objects are proposed, and the related experiments validate them, demonstrating their accuracy, efficiency and effectiveness. The result is a framework for segmentation, recognition and 3D reconstruction of objects that has been applied to many research areas.
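A minimal sketch of the normal-vector idea in contribution (I), assuming a point is labeled by how consistently the estimated normals in its neighborhood align (planar neighborhoods suggest buildings, scattered ones suggest vegetation). The function names, neighborhood size, and spread threshold are illustrative assumptions, not the dissertation's code:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normal(points):
    # Unit normal via PCA: the direction of least variance in the neighborhood
    centered = points - points.mean(axis=0)
    _, _, vh = np.linalg.svd(centered, full_matrices=False)
    return vh[-1]

def classify_points(cloud, k=12, spread_threshold=0.15):
    # Label each point by the statistical spread of normals around it
    tree = cKDTree(cloud)
    labels = []
    for point in cloud:
        _, idx = tree.query(point, k=k)
        normals = []
        for j in idx:
            _, sub = tree.query(cloud[j], k=k)
            normals.append(estimate_normal(cloud[sub]))
        normals = np.abs(np.array(normals))      # fold out the sign ambiguity
        spread = float(normals.var(axis=0).sum())
        labels.append("planar" if spread < spread_threshold else "scattered")
    return labels
```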
442. Detection of Ulcerative Colitis Severity and Enhancement of Informative Frame Filtering Using Texture Analysis in Colonoscopy Videos / Dahal, Ashok. 12 1900 (has links)
Several types of disorders affect the colon's ability to function properly, such as colorectal cancer, ulcerative colitis, diverticulitis, irritable bowel syndrome and colonic polyps. Automatic detection of these diseases would alert the endoscopist to possible sub-optimal inspection during the colonoscopy procedure and save time during post-procedure evaluation. However, existing systems detect only a few of these disorders, such as colonic polyps. In this dissertation, we address the automatic detection of another important disorder, ulcerative colitis. We propose a novel texture feature extraction technique to detect the severity of ulcerative colitis at the block, image, and video levels. We also enhance current informative frame filtering methods by detecting water and bubble frames with the proposed technique. Our feature extraction algorithm, based on the accumulation of pixel value differences, provides better accuracy at faster speed than existing methods, making it highly suitable for real-time systems. We further propose a hybrid approach in which our feature is combined with existing feature methods for even better accuracy, and we extend the block- and image-level detection method to video-level severity score calculation and shot segmentation. Finally, the proposed feature can detect water and bubble frames in colonoscopy videos with very high accuracy and significantly less processing time, even when clustering is used to reduce the training set size by a factor of ten.
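A minimal sketch of an accumulated pixel-value-difference texture feature of the kind described above, computed per block and averaged into an image-level score. The neighbor offsets, block size, and normalization are illustrative assumptions rather than the dissertation's exact formulation:

```python
import numpy as np

def apvd_feature(block):
    # Accumulate absolute intensity differences to horizontal, vertical
    # and diagonal neighbors, normalized by the number of comparisons
    block = block.astype(np.float64)
    diffs = [
        np.abs(block[:, 1:] - block[:, :-1]),     # horizontal neighbors
        np.abs(block[1:, :] - block[:-1, :]),     # vertical neighbors
        np.abs(block[1:, 1:] - block[:-1, :-1]),  # diagonal neighbors
    ]
    return sum(d.sum() for d in diffs) / sum(d.size for d in diffs)

def image_score(gray, block=32):
    # Image-level score as the mean of the block-level features
    h, w = gray.shape
    scores = [apvd_feature(gray[r:r + block, c:c + block])
              for r in range(0, h - block + 1, block)
              for c in range(0, w - block + 1, block)]
    return float(np.mean(scores))
```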
443. Using customised image processing for noise reduction to extract data from early 20th century African newspapers / Usher, Sarah. January 2017 (has links)
A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering, 2017 / The images from the African articles dataset presented challenges to the Optical Character Recognition (OCR) tool. Despite successful binarisation in the Image Processing step of the pipeline, noise remained in the foreground of the images. This noise caused the OCR tool to misinterpret the text and therefore needed to be removed from the foreground. The technique involved the application of the Maximally Stable Extremal Region (MSER) algorithm, borrowed from scene-text detection, together with supervised machine learning classifiers. The algorithm creates regions from the foreground elements; regions can be classified into noise and characters based on the characteristics of their shapes, so classifiers were trained to recognise each. The technique is useful for a researcher wanting to process and analyse the large dataset, who could semi-automate the foreground noise-removal process and obtain better quality OCR output for the Text Analysis step of the pipeline. Better OCR quality means fewer compromises are required at the Text Analysis step; such concessions can lead to false results when searching noisy text, so fewer of them mean simpler, less error-prone analysis and more trustworthy results. The technique was tested against specifically selected images from the dataset that exhibited noise. Training regions were selected and manually classified; after training and evaluating many classifiers, the highest-performing one was selected and used to categorise the regions of all images. New images were created by removing the noise regions from the originals. To discover whether the OCR output improved, a text comparison was conducted: OCR text was generated from both the original and processed images, and the two outputs for each image were compared for similarity against a test text, a manually created version of the expected OCR output per image. The similarity test for both original and processed images produced a score, and a change in that score indicated whether the technique had successfully removed noise. The results showed that blotches in the foreground could be removed and OCR output improved, while bleed-through and page-fold noise were not removable. For images affected by noise blotches, this technique can be applied, and fewer concessions will then be needed when processing the text generated from those images. / CK2018
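A minimal sketch of the MSER-plus-classifier pipeline described above: extract stable regions from a page image, describe each by simple shape statistics, and blank out the regions a trained classifier labels as noise. The feature set and classifier choice are assumptions for illustration, not the report's exact design:

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def region_features(region):
    # Shape descriptors for one MSER region (an Nx2 array of pixel coords)
    x, y, w, h = cv2.boundingRect(region)
    area = len(region)
    aspect = w / max(h, 1)
    extent = area / max(w * h, 1)   # fill ratio of the bounding box
    return [area, aspect, extent, w, h]

def remove_noise(gray, classifier):
    # Paint white every region the classifier labels as noise (label 0)
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    cleaned = gray.copy()
    for region in regions:
        if classifier.predict([region_features(region)])[0] == 0:
            cleaned[region[:, 1], region[:, 0]] = 255
    return cleaned

# Training would use manually labelled regions, e.g.:
# clf = RandomForestClassifier().fit(X_train, y_train)
```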
444. Advances in Piecewise Smooth Image Reconstruction / Juengling, Ralf. 17 March 2014 (links)
Advances and new insights into algorithms for piecewise smooth image reconstruction are presented. Such algorithms fit a piecewise smooth function to image data without prior knowledge of the number of regions or the location of region boundaries in the best fitting function. This is a difficult model selection problem since the number of parameters of possible solutions varies widely.
The approach followed in this work was proposed by Yvan Leclerc. It uses the Minimum Description Length principle to make the reconstruction problem well-posed: the best fitting function yields the shortest encoding of the image data. In order to derive a code length formula, the class of functions is restricted to piecewise polynomial. The resulting optimization problem may have many local minima, and a good initial approximation is required in order to find acceptable solutions. Good initial approximations may be generated at the cost of solving a sequence of related optimization problems, as prescribed by a continuation method.
Several problems with this approach are identified and addressed. First, success or failure of the continuation method is found to be sensitive to the choice of objective function parameters. Second, the optimization method used in prior work may fail to converge, and, third, it converges too slowly to be useful in many vision applications.
I address the first problem in three different ways. First, a revised continuation method is less sensitive to parameter choice. Second, I show how to move control over success or failure from the objective function parameters to the continuation method. Third, a new objective function is derived which includes one parameter instead of the two parameters used in prior work. Experimental results show that all measures improve robustness with respect to parameter choice.
In order to address the optimization-related problems I use a quasi-Newton line-search method. This method is guaranteed to converge and may converge at a faster rate than the relaxation method used in prior work. To realize a faster convergence rate, I introduce a new parameter whose role is to improve variable scaling and problem conditioning. Further runtime improvements result from using extrapolation in the continuation method. Experimental results show overall runtime improvements of an order of magnitude and more.
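A schematic sketch of a continuation loop of the kind described above, with a quasi-Newton (L-BFGS) inner solver and a warm start at each stage; the extrapolation variant mentioned above would replace the plain warm start with a linear extrapolation from the two most recent solutions. The objective `mdl_energy` is a placeholder for the MDL code-length functional, and the parameter schedule is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

def continuation_solve(mdl_energy, u0, sigmas=(8.0, 4.0, 2.0, 1.0, 0.5)):
    # Track a minimizer while the smoothing parameter sigma is decreased;
    # each stage is warm-started from the previous stage's solution.
    u = np.asarray(u0, dtype=float)   # flattened initial approximation
    for sigma in sigmas:
        result = minimize(mdl_energy, u, args=(sigma,),
                          method="L-BFGS-B", options={"maxiter": 200})
        u = result.x
    return u
```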
My reconstruction algorithm outperforms the well-known Canny edge detector on the Berkeley boundary detection task. This novel result demonstrates the merits of image reconstruction as a means of extracting information from an image.
445. A probabilistic model to learn, detect, localize and classify patterns in arbitrary images / Toews, Matthew. January 2008 (has links)
No description available.
446. Design of the electronics and optics needed to support charge-coupled devices : a project report ... / Zee, Kah Yep. 01 January 1989 (links) (PDF)
Over the last five years, charge-coupled devices (CCDs) have improved dramatically in sensitivity, manufacturability and, particularly, cost. This has enabled them to be used economically in many more industrial and commercial electronic imaging processes; they are found in products ranging from video cameras to satellite-based camera systems. This sparked my interest in these devices, and with a great deal of encouragement from Dr. Turpin, I decided to base my Master's thesis/project on a CCD. The project was mainly based on the design of the electronics and optics needed to support a CCD. The particular circuit design which I used differs from other designs which are available: many of those designs are microprocessor-based, which tends to limit the speed of operation of the imaging process, while other circuits employ specially coded memory chips to implement the required logic, but again the speed of operation is limited by the access times of the memory chips. The circuit employed in the project uses only logic gates and flip-flops, and is probably one of the fastest circuits available for the capture of single-frame images.
447. Measurement techniques to characterize bubble motion in swarms / Acuña Pérez, Claudio Abraham. January 2007 (links)
No description available.
448. An automatic system for converting digitized line drawings into highly compressed mathematical primitives / Sanford, Jerald Patrick. January 1985 (links)
The design of an efficient, low-cost system for automatically converting a hardcopy technical drawing into a highly compressed electronic representation is the motivation for this work. An improved method for extracting line and region information from a typical engineering drawing is presented. An efficient encoding method has also been proposed that takes advantage of the preprocessing done by the region and line extraction steps. Finally, a technique for creating a highly compressed mathematical representation (based on spline approximations) for the drawing is presented. / M.S.
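A small sketch of the spline-based representation step described above: a traced polyline from the line extraction stage is fit with a parametric B-spline, so the drawing is stored as a handful of knots and coefficients rather than raw pixels. The smoothing tolerance and function names are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def compress_polyline(points, smoothing=2.0):
    # Fit a parametric B-spline to an Nx2 array of traced line points;
    # the (knots, coefficients, degree) tuple is the compressed form.
    tck, _ = splprep([points[:, 0], points[:, 1]], s=smoothing)
    return tck

def reconstruct(tck, n=200):
    # Evaluate the spline at n parameter values to redraw the stored line
    x, y = splev(np.linspace(0.0, 1.0, n), tck)
    return np.column_stack([x, y])
```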
449. Efficient restoration of digital images with physical optics blurs / Costello, Thomas P. 01 July 2001 (links)
No description available.
450. Artificial intelligence machine vision grading system / Luwes, Nicolaas Johannes. January 1900 (links)
Thesis (M. Tech.) -- Central University of Technology, Free State, 2010