  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Summarization of very large spatial dataset

Liu, Qing, Computer Science & Engineering, Faculty of Engineering, UNSW January 2006 (has links)
Nowadays there are many applications, such as digital library information retrieval, business data analysis, CAD/CAM, multimedia applications with images and sound, real-time process control and scientific computation, whose data sets reach gigabytes, terabytes or even petabytes. Because the data distributions are too large to be stored exactly, maintaining compact yet accurate summarized information about the underlying data is of crucial importance. The summarization problem for Level 1 (disjoint and non-disjoint) topological relationships has been well studied over the past few years. However, spatial database users are often interested in a much richer set of spatial relations, such as contains. Little work has been done on summarization for Level 2 topological relationships, which include the contains, contained, overlap, equal and disjoint relations. We study the problem of constructing effective summaries of the underlying data distribution to answer window queries for Level 2 topological relationships. The cell-density based approach has been demonstrated to be effective for this problem, but the challenges are the accuracy of the results and the storage space required, which should be linearly proportional to the number of cells to be practical. In this thesis, we present several novel techniques for effectively constructing cell-density based spatial histograms. Based on the proposed framework, exact results can be obtained in constant time for aligned window queries. To minimize the storage space of the framework, an approximation algorithm with approximation ratio 19/12 is presented, while the general problem is shown to be NP-hard. Because the framework requires storage space only linearly proportional to the number of cells, it is practical for many popular real datasets. To conform to a limited storage space, effective histogram construction and query algorithms are proposed which provide approximate results of high accuracy.
The problem of non-aligned window queries is also investigated, and techniques based on unevenly partitioned space are developed to support them. Finally, we extend our techniques to 3D space. Our extensive experiments on both synthetic and real-world datasets demonstrate the efficiency of the algorithms developed in this thesis.
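The constant-time answering of aligned window queries over a cell grid can be illustrated, in its simplest form, with 2D prefix sums. This is only a generic sketch of the cell-density idea for plain object counts, not the thesis's Level 2 framework:

```python
import numpy as np

def build_prefix(cell_counts):
    # 2D prefix sums over the cell grid: P[i, j] = sum of counts in cells [0..i) x [0..j).
    P = np.zeros((cell_counts.shape[0] + 1, cell_counts.shape[1] + 1), dtype=np.int64)
    P[1:, 1:] = np.cumsum(np.cumsum(cell_counts, axis=0), axis=1)
    return P

def aligned_window_count(P, r0, c0, r1, c1):
    # Count over cells [r0..r1) x [c0..c1) in O(1) via inclusion-exclusion.
    return P[r1, c1] - P[r0, c1] - P[r1, c0] + P[r0, c0]
```

The precomputed table is linear in the number of cells, matching the storage constraint discussed above.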
2

Learning Decision Trees and Random Forests from Histogram Data : An application to component failure prediction for heavy duty trucks

Gurung, Ram Bahadur January 2017 (has links)
A large volume of data has become commonplace in many domains these days. Machine learning algorithms can be trained to look for useful hidden patterns in such data. Sometimes these big data need to be summarized to a manageable size, for example by using histograms. Traditionally, machine learning algorithms can be trained on data expressed as real numbers and/or categories, but not on a complex structure such as a histogram. Since machine learning algorithms that learn from histogram data have not been explored to any great extent, this thesis intends to explore this domain further. The thesis is limited to classification algorithms, in particular tree-based classifiers such as decision trees and random forests. Decision trees are among the simplest and most intuitive algorithms to train. A single decision tree might not be the best algorithm in terms of predictive performance, but it can be largely enhanced by considering an ensemble of many diverse trees, as in a random forest. This is why both algorithms were considered. The objective of this thesis is therefore to investigate how these algorithms can be adapted to learn better from histogram data. Our proposed approach uses multiple bins of a histogram simultaneously to split a node during the tree induction process. Treating bins simultaneously is expected to capture dependencies among them, which could be useful. Experimental evaluation of the proposed approaches was carried out by comparing them with the standard approach of growing a tree where a single bin is used to split a node. Accuracy and the area under the receiver operating characteristic (ROC) curve (AUC), along with the average time taken to train a model, were used for comparison. For experimental purposes, real-world data from a large fleet of heavy duty trucks were used to build a component-failure prediction model.
These data contain information about the operation of trucks over the years, where most operational features are summarized as histograms. Further experiments were performed on synthetically generated data. The results show that the proposed approach outperforms the standard approach in predictive performance and model compactness, but lags behind in training time. This thesis was motivated by a real-life problem encountered in the operation of heavy duty trucks in the automotive industry while building a data-driven failure-prediction model, so the details of collecting and cleansing the data, and the challenges encountered while preparing the data for training, are presented in detail.
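A minimal sketch of the multi-bin splitting idea: instead of thresholding a single bin, a node split is searched over linear combinations of pairs of bins (the weight grid and pair restriction here are illustrative assumptions, not the thesis's exact search procedure):

```python
import numpy as np

def gini(y):
    # Gini impurity of a label vector.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_pair_split(X, y):
    """Search splits of the form cos(a)*X[:, i] + sin(a)*X[:, j] <= t over a
    small grid of directions a, returning the lowest weighted Gini impurity
    and the chosen (i, j, a, t). X holds one histogram (bin vector) per row."""
    n, d = X.shape
    best = (np.inf, None)
    angles = np.linspace(0, np.pi, 8, endpoint=False)
    for i in range(d):
        for j in range(i + 1, d):
            for a in angles:
                proj = np.cos(a) * X[:, i] + np.sin(a) * X[:, j]
                for t in np.unique(proj)[:-1]:
                    left, right = y[proj <= t], y[proj > t]
                    score = (len(left) * gini(left) + len(right) * gini(right)) / n
                    if score < best[0]:
                        best = (score, (i, j, a, t))
    return best
```

A standard single-bin split corresponds to fixing a = 0, so this search space strictly contains it, which is one way dependencies between bins can be captured.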
3

Image histogram features for nano-scale particle detection and classification.

Pahalawatta, Kapila Kithsiri January 2015 (has links)
This research proposes a method to detect and classify the smoke particles of common household fires by analysing image histogram features of smoke particles illuminated by Rayleigh-scattered light. The research was motivated by the failure of commercially available photoelectric smoke detectors to detect smoke particles less than 100 nm in diameter, such as those in polyurethane (furniture) fires, and by the occurrence of false positives such as those caused by steam. Seven different types of particles (pinewood smoke, polyurethane smoke, steam, kerosene smoke, cotton wool smoke, cooking oil smoke and a test smoke) were selected and exposed to a continuous spectrum of light in a closed particle chamber. A significant improvement over common photoelectric smoke detectors was demonstrated by successfully detecting and classifying all test particles using colour histograms. As Rayleigh theory suggests, comparing the intensities of scattered light at different wavelengths is the best way to classify particles of different sizes. Existing histogram comparison methods based on histogram bin values failed to establish a relationship between the scattered intensities of individual red, green and blue laser beams and particle size, due to the uneven movement of particles inside the chamber. The current study proposes a new method to classify these nano-scale particles using a particle-density-independent intensity-histogram feature: the Maximum Value Index. When a Rayleigh scatterer (a particle whose diameter is less than one tenth of the incident wavelength) is exposed to light of different wavelengths, the intensity of the scattered light at each wavelength is unique to the particle size, and hence a single unique maximum value index can be detected in the image intensity histogram.
Each captured image in the video frame sequence was divided into its red, green and blue planes (single R, G, B channel arrays) and the particles were isolated using a modified frame-difference method. The mean and standard deviation of the Maximum Value Index of the intensity histograms over a predefined number of frames (N) were used to differentiate particle types. The proposed classification algorithm successfully classified all monotype particles with 100% accuracy when N ≥ 100. As expected, the classifier failed to distinguish wood smoke from the monotype particles, owing to the rapid variation of the maximum value index across consecutive images in the sequence: wood smoke is itself a complex mixture of many monotype particles such as water vapour and resin smoke. The results suggest that the proposed algorithm may make smoke detectors safer by detecting a wider range of fires while reducing false alarms such as those caused by steam.
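The Maximum Value Index feature itself is simple to state in code: the argmax of each colour plane's intensity histogram, summarized by its mean and standard deviation over a frame window. This is a minimal sketch of that feature, with the frame isolation step omitted:

```python
import numpy as np

def max_value_index(channel, levels=256):
    # Intensity histogram of one colour plane; the Maximum Value Index is the
    # most frequent intensity level (the argmax of the histogram).
    hist, _ = np.histogram(channel, bins=levels, range=(0, levels))
    return int(np.argmax(hist))

def mvi_signature(frames):
    """Mean and standard deviation of the per-frame Maximum Value Index for
    each of the R, G, B planes over N frames (frames: N x H x W x 3)."""
    mvis = np.array([[max_value_index(f[..., c]) for c in range(3)] for f in frames])
    return mvis.mean(axis=0), mvis.std(axis=0)
```

Because the argmax depends only on which intensity level dominates, not on how many particle pixels are present, the feature is insensitive to particle density, as the abstract requires.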
4

Sledování objektu ve videosekvenci pomocí integrálního histogramu / Object tracking in video sequence using the integral histogram

Přibyl, Jakub January 2020 (has links)
This thesis focuses on real-time object tracking, where the tracked object is defined by a bounding rectangle. It addresses image processing and the use of histograms for real-time tracking. The main contribution of the work is the extension of the provided program to track an object in real time with a changing bounding rectangle, whose size adapts as the object moves closer to or further from the camera. Furthermore, the detection behavior in different scenarios is analyzed, and various weight calculations were tested. The program is written in C++ using the OpenCV library.
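The integral histogram named in the title generalizes the integral image to one summed-area table per histogram bin, so the histogram of any axis-aligned rectangle can be read off in time proportional to the number of bins rather than the rectangle's area. A minimal sketch (in Python rather than the thesis's C++/OpenCV, with an assumed uniform quantization into 16 bins):

```python
import numpy as np

def integral_histogram(img, n_bins=16):
    """Per-bin integral images: H[b, i, j] = number of pixels of bin b in img[:i, :j]."""
    bins = (img.astype(np.int32) * n_bins) // 256          # quantise 0..255 into n_bins
    H = np.zeros((n_bins, img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    for b in range(n_bins):
        H[b, 1:, 1:] = np.cumsum(np.cumsum(bins == b, axis=0), axis=1)
    return H

def rect_histogram(H, r0, c0, r1, c1):
    # Histogram of img[r0:r1, c0:c1] by inclusion-exclusion on each bin plane.
    return H[:, r1, c1] - H[:, r0, c1] - H[:, r1, c0] + H[:, r0, c0]
```

This is what makes rescaling the bounding rectangle cheap: candidate rectangles of any size can be compared against the target histogram without rescanning their pixels.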
5

Reversible Watermarking Using Multi-Prediction values

Chen, Nan-Tung 20 July 2011 (has links)
Reversible watermarking techniques extract the watermark and recover the original image from the watermarked image without any distortion. They have been applied in sensitive fields such as medicine and the military. In this thesis, a novel watermarking algorithm using multi-prediction values is proposed. It exploits the correlation between the original pixel and its neighboring pixels to obtain twelve prediction candidates, and then selects one candidate as the prediction value according to the original pixel and a temporary prediction value. Because the algorithm uses the original pixel as one of the parameters when deciding the prediction value, the prediction values are obtained with great precision. The experimental results reveal that the proposed method outperforms that of Sachnev et al.: the variance of the prediction-error histogram obtained by the proposed method is about 44.2% less than that obtained by the algorithm of Sachnev et al., and the mean PSNR is greater by about 1.47 dB and 1.1 dB at watermark capacities of 0-0.04 bpp and 0.04-0.5 bpp, respectively. The proposed method is therefore especially appropriate for embedding watermarks at low or medium capacity. Keywords: reversible watermarking, watermarking, prediction, histogram shifting.
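The role of prediction here is to produce a sharply peaked prediction-error histogram, since histogram-shifting embeds bits around that peak and a lower-variance histogram means more capacity at the same distortion. A baseline illustration using a plain four-neighbour average predictor (not the thesis's twelve-candidate scheme):

```python
import numpy as np

def prediction_errors(img):
    """Prediction errors of interior pixels under a four-neighbour average
    predictor: error = pixel - floor(mean of up/down/left/right neighbours).
    Sharper error histograms leave more room for histogram-shifting embedding."""
    img = img.astype(np.int32)
    pred = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]) // 4
    return (img[1:-1, 1:-1] - pred).ravel()
```

The thesis's contribution can be read as replacing this single predictor with a choice among twelve candidates guided by the original pixel, which is what drives the reported 44.2% variance reduction.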
6

Image Retrieval By Local Contrast Patterns and Color Histogram

Bashar, M.K., Ohnishi, N. 12 1900 (has links)
No description available.
7

Texture anisotropy analysis of brain scans

Segovia-Martinez, Manuel January 2001 (has links)
Currently, the world population is aging, and people over 75 are one of the fastest-growing age groups. This is the group most affected by Alzheimer's disease. Reliable early diagnosis and tracking methods are essential to assist therapy and prevention. This research studies texture anisotropy in tomographic brain scans to diagnose and quantify the severity of Alzheimer's disease. A full methodology for studying computer tomography, magnetic resonance imaging and multispectral magnetic resonance imaging is presented in this thesis. Before applying any texture method to the tomographic brain images, a segmentation technique has to be used to extract the regions of interest; we propose the use of connected filters and iterative region merging to perform the segmentation. The gradient vector histogram is applied to study the texture anisotropy of computer tomography scans. Computer tomography scans present evidence of texture changes in demented subjects compared to normal subjects; however, the overlap between these groups is considerable, so texture anisotropy from computer tomography does not appear to add more useful information to the diagnosis of Alzheimer's disease than other clinical criteria. Another method for studying texture anisotropy is the grey-level dependence histogram, which is based on a 3D generalisation of the 2D co-occurrence matrices to arbitrary orientations. This texture technique is applied to magnetic resonance imaging scans, where features extracted from the grey matter component correlate strongly with mini-mental state examination scores. Finally, the Multispectral Grey-Level Dependence Histogram (MGLDH), the Absolute Difference Histogram (ADH) and spatial correlations are texture techniques designed to study multispectral images. These techniques are applied to multispectral magnetic resonance images. We evaluate the performance of the different multispectral texture methods and compare them with single-channel texture methods.
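The gradient-vector-histogram idea can be sketched as follows: accumulate gradient magnitudes into orientation bins, then score how far the orientation distribution departs from uniform. The anisotropy measure used here (max minus min of the normalised histogram) is an illustrative assumption, not necessarily the thesis's statistic:

```python
import numpy as np

def orientation_anisotropy(img, n_bins=18):
    """Gradient vector histogram sketch: weight each pixel's orientation bin by
    its gradient magnitude; return a simple anisotropy score in [0, 1]
    (0 for a flat image, approaching 1 when one orientation dominates)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # fold orientations into [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    if hist.sum() == 0:
        return 0.0                                  # no gradients: isotropic by convention
    p = hist / hist.sum()
    return float(p.max() - p.min())
```

On a scan region, a strongly directional texture (e.g. aligned structures) concentrates weight in few bins and scores high, while isotropic texture spreads weight evenly and scores low.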
8

Development of a color machine vision method for wood surface inspection

Kauppinen, H. (Hannu) 03 November 1999 (has links)
Abstract The purpose of this thesis is to present a case study of the development, implementation and performance analysis of a color-based visual surface inspection method for wood properties. The main contribution of the study is to answer the need for design strategies, performance characterization methods and case studies in the field of automated visual inspection, especially wood surface inspection. In real-time color-based inspection, the complexity of the methods is important. In this study, defect detection and recognition methods based on color histogram percentile features are proposed. The color histogram percentile features were found to recognize wood surface defects well at relatively low complexity. A common problem in visual inspection applications is the collection and labelling of training material, since human-made labellings can be erroneous. Furthermore, classifiers are relatively static once trained, offering little possibility of adjusting the classification. In this study, a self-organizing map (SOM) based approach to the classifier user interface in visual surface inspection problems is introduced. The approach eases the labelling of training material, simplifies retraining, provides an illustrative and intuitive user interface, and offers a convenient way of controlling classification. The study is illustrated with four experiments related to method development and analysis. In the first experiment, a simulator environment is used to determine the relationship between defect detection and recognition accuracy and grading accuracy. The second experiment considers the suitability of different color spaces for wood defect recognition under changing illumination; the RGB color space gives the best results compared to grey-level and other color spaces. The third experiment presents the experimental wood surface inspection setup implementing the method developed in this study.
Comparative performance analysis results are presented and the difficulties, mainly caused by segmentation of the defects, are discussed. The fourth experiment demonstrates the suitability of the method for parquet sorting and shows the potential of the non-segmenting approach.
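Histogram percentile features are cheap to compute, which is why they suit real-time inspection: each feature is just the intensity level at which the cumulative colour histogram crosses a given fraction. A minimal sketch (the particular percentile set is an illustrative assumption):

```python
import numpy as np

def percentile_features(channel, percentiles=(10, 30, 50, 70, 90)):
    """Colour histogram percentile features for one colour channel: for each
    requested percentile p, the lowest intensity level at which the cumulative
    histogram reaches the fraction p/100 of the pixels."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / hist.sum()
    return [int(np.searchsorted(cdf, p / 100.0)) for p in percentiles]
```

Computed per R, G and B channel over a region, the features form a short fixed-length vector that a classifier (here, the SOM-based interface) can consume directly.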
9

Sledování objektu ve videu / Object Tracking in Video

Sojma, Zdeněk January 2011 (has links)
This master's thesis describes the principles of the most widely used object tracking systems for video and then focuses mainly on the characterization and implementation of an interactive offline tracking system for generic color objects. The strength of the algorithm lies in highly accurate estimation of the object trajectory. The system creates the output trajectory from user-specified input data, which may be interactively modified and extended to improve accuracy. The algorithm is based on a detector that uses color-bin features and on the temporal coherence of object motion to generate multiple candidate object trajectories. The optimal output trajectory is then calculated by dynamic programming, whose parameters can also be interactively modified by the user. The system achieves 15-70 fps on 480x360 video. The thesis also describes the implementation of an application whose purpose is to evaluate the tracker's accuracy. The final results are discussed.
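Selecting an optimal trajectory from per-frame candidates by dynamic programming can be sketched in Viterbi style: one candidate per frame is chosen to maximise total detection score minus inter-frame motion cost. The additive cost model below is a generic assumption, not the thesis's exact formulation:

```python
def best_trajectory(candidates, motion_cost, detection_score):
    """Viterbi-style trajectory selection: candidates[t] lists the candidate
    detections in frame t; returns one index per frame maximising the sum of
    detection scores minus the motion costs between consecutive picks."""
    n = len(candidates)
    score = [[detection_score(c) for c in candidates[0]]]
    back = []
    for t in range(1, n):
        row, brow = [], []
        for c in candidates[t]:
            opts = [score[t - 1][j] - motion_cost(candidates[t - 1][j], c)
                    for j in range(len(candidates[t - 1]))]
            j = max(range(len(opts)), key=opts.__getitem__)
            row.append(opts[j] + detection_score(c))
            brow.append(j)
        score.append(row)
        back.append(brow)
    # Backtrack from the best final candidate.
    j = max(range(len(score[-1])), key=score[-1].__getitem__)
    path = [j]
    for brow in reversed(back):
        j = brow[j]
        path.append(j)
    return list(reversed(path))
```

Interactive corrections fit naturally into this scheme: pinning a frame to a user-chosen candidate simply restricts that frame's candidate list before the DP is rerun.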
10

Contrast enhancement in digital imaging using histogram equalization

Gomes, David Menotti 18 June 2008 (has links) (PDF)
Nowadays, devices ranging from complex surveillance monitoring systems to simple mobile phones are able to capture and process images. In certain applications the time needed to process an image is not as important as the quality of the result (e.g., medical imaging), but in other cases quality can be sacrificed in favour of speed. This thesis focuses on the latter case and proposes methodologies for fast image contrast enhancement. The proposed methods are based on histogram equalization (HE); some handle gray-level images and others handle color images. As far as HE methods for gray-level images are concerned, current methods tend to shift the mean brightness of the image to the middle of the gray-level range. This is undesirable for contrast enhancement in consumer electronics products, where preserving the input brightness is required to avoid generating non-existing artifacts in the output image. To overcome this drawback, bi-histogram equalization methods that preserve brightness while enhancing contrast have been proposed. Although these methods preserve the input brightness in the output image while significantly enhancing contrast, they may produce images that do not look as natural as the input. To overcome this drawback, we propose a technique called Multi-HE, which decomposes the input image into several sub-images and then applies the classical HE process to each of them. This methodology performs a less intensive contrast enhancement, so that the output image looks more natural. We propose two discrepancy functions for image decomposition, leading to two new Multi-HE methods. A cost function is also used to decide automatically into how many sub-images the input image will be decomposed.
Experimental results show that our methods are better at preserving brightness and producing natural-looking images than the other HE methods. To deal with contrast enhancement in color images, we introduce a generic fast hue-preserving histogram equalization method based on the RGB color space, along with two instances of the generic method. The first instance uses the R (red), G (green) and B (blue) 1D histograms to estimate an RGB 3D histogram to be equalized, whereas the second uses the RG, RB and GB 2D histograms. Histogram equalization is performed using shift hue-preserving transformations, avoiding the appearance of unrealistic colors. Our methods have linear time and space complexity with respect to the image dimension, and do not require conversions between color spaces to perform contrast enhancement. Objective assessments comparing our methods with others are performed using a contrast measure and color image quality measures, where quality is defined as a weighted function of the naturalness and colorfulness indexes. This is the first work to evaluate histogram equalization methods on a well-known database of 300 images (a dataset from the University of Berkeley) using measures such as naturalness and colorfulness. Experimental results show that the contrast of the images produced by our methods is on average 50% greater than that of the original images, while keeping the quality of the output close to the original.
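For reference, the classical gray-level HE process that Multi-HE applies to each sub-image maps every level through the normalised cumulative histogram. A minimal sketch of that baseline (not the Multi-HE decomposition itself):

```python
import numpy as np

def equalize(img):
    """Classical grey-level histogram equalisation for a uint8 image: map each
    level through the normalised cumulative histogram so the occupied levels
    spread over the full 0..255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist)
    cdf_min = cdf[np.nonzero(hist)[0][0]]          # cumulative count at darkest occupied level
    if cdf[-1] == cdf_min:
        return img.copy()                          # constant image: nothing to equalise
    lut = np.clip(np.round(255.0 * (cdf - cdf_min) / (cdf[-1] - cdf_min)), 0, 255)
    return lut.astype(np.uint8)[img]
```

Because the map is monotone in the cumulative histogram, dark regions stay darker than bright ones; what classical HE does not preserve, and what motivates the bi- and multi-histogram variants above, is the image's mean brightness.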
