Lightweight Top-K Analysis in DBMSs Using Data Stream Analysis Techniques
Huang, Jing (03 September 2009)
Problem determination is the identification of problems and performance issues that occur in an observed system and the discovery of solutions to resolve them. Top-k analysis is a common task in problem determination in database management systems. It involves identifying the set of most frequently occurring objects according to some criterion, such as the top-k most frequently used tables, the top-k most frequent queries, or the top-k queries with respect to CPU usage or amount of I/O.
Effective problem determination requires sufficient monitoring and rapid analysis of the collected monitoring statistics. System monitoring often incurs a great deal of overhead and can interfere with the performance of the observed system, and processing the vast amounts of collected data may require several passes through the analysis system, making it very time-consuming.
In this thesis, we present our lightweight top-k analysis framework, in which lightweight monitoring tools continuously poll system statistics, producing several continuous data streams that are then processed by stream mining techniques. The results produced by our tool are the “top-k” values for the observed statistics. This information can be valuable to an administrator in determining the source of a problem.
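The abstract does not name the specific stream mining technique used, but a standard choice for bounded-memory top-k over a stream is the Space-Saving algorithm, sketched below on a hypothetical stream of SQL statement texts (the statement strings and the `capacity` value are illustrative, not taken from the thesis):

```python
class SpaceSaving:
    """Approximate top-k frequent items over a stream using at most
    `capacity` counters (Space-Saving sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # item -> (count, overestimation_error)

    def offer(self, item):
        if item in self.counts:
            count, err = self.counts[item]
            self.counts[item] = (count + 1, err)
        elif len(self.counts) < self.capacity:
            self.counts[item] = (1, 0)
        else:
            # Evict the item with the smallest count; the newcomer
            # inherits that count as its overestimation error bound.
            victim = min(self.counts, key=lambda i: self.counts[i][0])
            min_count, _ = self.counts.pop(victim)
            self.counts[item] = (min_count + 1, min_count)

    def top_k(self, k):
        """Return up to k (item, (count, error)) pairs, most frequent first."""
        return sorted(self.counts.items(),
                      key=lambda kv: kv[1][0], reverse=True)[:k]


# Illustrative stream of monitored SQL statements.
stream = (["SELECT * FROM orders"] * 50
          + ["SELECT * FROM customers"] * 30
          + ["INSERT INTO audit_log VALUES (?)"] * 5)
ss = SpaceSaving(capacity=4)
for stmt in stream:
    ss.offer(stmt)
```

Because the sketch keeps a fixed number of counters regardless of stream length, it matches the low-overhead, single-pass setting the thesis targets.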
We implement the framework as a prototype system called Tempo. Tempo uses IBM DB2’s snapshot API and a lightweight monitoring tool called DB2PD to generate the data streams. The system reports the top-k executed SQL statements and the top-k most frequently accessed tables in an on-line fashion. Several experiments are conducted to verify the feasibility and effectiveness of our approach. The experimental results show that our approach achieves low system overhead. / Thesis (Master, Computing) -- Queen's University, 2009-08-31 12:42:48.944
Image Structures For Steganalysis And Encryption
Suresh, V (04 1900)
In this work we study two aspects of image security: improper usage and illegal access of images. In the first part we present our results on steganalysis – protection against improper usage of images. In the second part we present our results on image encryption – protection against illegal access of images.
Steganography is the collective name for methodologies that allow the creation of invisible, hence secret, channels for information transfer. Steganalysis, the counter to steganography, is a collection of approaches that attempt to detect and quantify the presence of hidden messages in cover media.
First we present our studies on stego-images using features developed for data stream classification, towards making some qualitative assessments of the effect of steganography on the lower-order bit planes (LSBs) of images. These features are effective in classifying different data streams. Using them, we study the randomness properties of image and stego-image LSB streams and observe that data stream analysis techniques are inadequate for steganalysis purposes. This motivates steganalytic techniques that go beyond LSB properties, and we then present steganalytic approaches that take such properties into account.
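As a minimal illustration of the kind of LSB-plane statistic involved (a sketch only; the thesis's actual classification features are richer), the snippet below extracts the LSB plane of a synthetic pixel sequence and compares its bit bias before and after simulated LSB-replacement embedding. All names and the synthetic data are assumptions for illustration:

```python
import random


def lsb_plane(pixels):
    """Extract the least-significant-bit plane of a pixel sequence."""
    return [p & 1 for p in pixels]


def ones_fraction(bits):
    """Fraction of 1-bits; random message bits push this toward 0.5."""
    return sum(bits) / len(bits)


# Synthetic 'cover' image: all pixel values even, so its LSB plane is
# perfectly structured (all zeros).
cover = [2 * ((i // 16) % 100) for i in range(4096)]

# Simulated LSB-replacement stego: overwrite each LSB with a random
# message bit, which randomizes the LSB plane.
rng = random.Random(42)
stego = [(p & ~1) | rng.getrandbits(1) for p in cover]

cover_bias = ones_fraction(lsb_plane(cover))  # 0.0: fully structured
stego_bias = ones_fraction(lsb_plane(stego))  # near 0.5: randomized
```

A real cover image's LSB plane is rarely this clean, which is one way to see why purely LSB-level randomness statistics can be inadequate, as the paragraph above observes.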
In one such approach, we perform steganalysis by quantifying the effect of perturbations caused by mild image processing operations (zoom-in/out, rotation, distortions) on stego-images. We show that this approach works both for detecting and for estimating the presence of stego-content under a particularly difficult steganographic technique known as LSB matching steganography.
Next, we present our results on image encryption techniques. Encryption approaches used in the context of text data are usually unsuited to encrypting images (and multimedia objects in general), for two reasons: unlike text, the volume to be encrypted can be huge for images, leading to increased computational requirements; and text-style encryption renders images incompressible, resulting in poor use of bandwidth. These issues are overcome by designing image encryption approaches that obfuscate the image by intelligently reordering its pixels, or that encrypt only parts of a given image in an attempt to render it imperceptible. The obfuscated or partially encrypted image is still amenable to compression. Efficient image encryption schemes ensure that the obfuscation is not compromised by the inherent correlations present in the image, and that the unencrypted portions of the image do not provide information about the encrypted parts. In this work we present two approaches for efficient image encryption.
First, we utilize the correlation-preserving properties of Hilbert space-filling curves to reorder image pixels in such a way that the image is perceptually obfuscated, without compromising the compressibility of the output image. We show experimentally that our approach achieves both perceptual security and perceptual encryption. We then show that the space-filling-curve approach also enables more efficient partial encryption of images, in which only the salient parts of the image are encrypted, thereby reducing the encryption load.
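The core building block here is the mapping from a position along the Hilbert curve to grid coordinates. The sketch below (the standard iterative `d2xy` construction for a 2^k x 2^k grid, not the thesis's exact scheme) uses that mapping to permute the pixels of a square image; function names are illustrative:

```python
def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) coordinates
    on an n x n grid, where n is a power of two."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y


def hilbert_scramble(img):
    """Reorder pixels of a square 2^k x 2^k image: read them in raster
    order, write them out along the Hilbert curve. This is a pure
    permutation, so pixel values (and histograms) are preserved."""
    n = len(img)
    flat = [p for row in img for p in row]
    out = [[0] * n for _ in range(n)]
    for d, p in enumerate(flat):
        x, y = d2xy(n, d)
        out[y][x] = p
    return out
```

Because the operation is a permutation, the output stays in the same value range and remains compressible, which is the property the paragraph above relies on.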
In our second approach, we show that the Singular Value Decomposition (SVD) of images is useful for image encryption by way of mismatching the unitary matrices resulting from the decomposition of images. The images that result from the mismatching operations are seen to be perceptually secure.
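One way to read "mismatching the unitary matrices" is sketched below: reconstruct one image's singular values and right singular vectors with the left singular vectors of an unrelated image, so the result is perceptually unlike the original, yet exactly invertible by whoever holds the correct matrices. This is an illustrative interpretation under stated assumptions, not the thesis's exact construction:

```python
import numpy as np


def svd_mismatch(img_a, img_b):
    """Scramble img_a by pairing its singular values and right singular
    vectors with the left singular vectors of an unrelated img_b."""
    Ua, Sa, Vta = np.linalg.svd(img_a, full_matrices=False)
    Ub, _, _ = np.linalg.svd(img_b, full_matrices=False)
    scrambled = Ub @ np.diag(Sa) @ Vta  # wrong left vectors: looks unrelated
    return scrambled, Ua, Ub            # Ua/Ub act as the recovery key


def svd_recover(scrambled, Ua, Ub):
    """Undo the mismatch: strip img_b's left vectors, restore img_a's."""
    return Ua @ (Ub.T @ scrambled)


rng = np.random.default_rng(0)
a = rng.random((8, 8))  # stand-ins for two grayscale image blocks
b = rng.random((8, 8))
enc, Ua, Ub = svd_mismatch(a, b)
rec = svd_recover(enc, Ua, Ub)
```

Recovery is exact (up to floating point) because `Ub` is orthogonal, so `Ub.T @ scrambled` isolates `diag(Sa) @ Vta`, which `Ua` then maps back to the original.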
|