21. Image Compression by Using Haar Wavelet Transform and Singular Value Decomposition. Idrees, Zunera; Hashemiaghjekandi, Eliza. January 2011.
The rise in digital technology has also increased the use of digital images, and digital images require much storage space. Compression techniques are used to compress the data so that it takes up less space; in this regard, wavelets play an important role. In this thesis, we studied the Haar wavelet system, which is a complete orthonormal system in L2(R). This system consists of the father wavelet φ and the mother wavelet ψ. The Haar wavelet transformation is an example of multiresolution analysis. Our purpose is to use the Haar wavelet basis to compress image data. The method of averaging and differencing is used to construct the Haar wavelet basis, and we have shown that the averaging and differencing method is an application of the Haar wavelet transform. After discussing compression using the Haar wavelet transform, we used another compression method based on singular value decomposition. We used the mathematical software MATLAB to compress the image data by using the Haar wavelet transformation and singular value decomposition.
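To make the two approaches concrete, here is a minimal sketch in Python/NumPy (the thesis itself uses MATLAB; all function names are our own illustration): one level of averaging and differencing applied to rows and then columns, with small detail coefficients thresholded away, alongside a rank-k SVD approximation.

```python
import numpy as np

def haar_step(x):
    """One level of averaging and differencing along the last axis."""
    a = (x[..., ::2] + x[..., 1::2]) / 2.0  # averages (approximation part)
    d = (x[..., ::2] - x[..., 1::2]) / 2.0  # differences (detail part)
    return np.concatenate([a, d], axis=-1)

def haar_2d(img):
    """Averaging-and-differencing pass over rows, then over columns."""
    return haar_step(haar_step(img).T).T

def compress_wavelet(img, threshold):
    """Discard small Haar coefficients; the resulting sparsity saves storage."""
    coeffs = haar_2d(img)
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return coeffs

def compress_svd(img, k):
    """Best rank-k approximation from the singular value decomposition."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

img = np.random.rand(8, 8)             # stand-in for grayscale image data
sparse_coeffs = compress_wavelet(img, threshold=0.05)
low_rank = compress_svd(img, k=3)
```

Keeping only k singular triplets stores k(m + n + 1) numbers instead of mn, which is the storage argument the SVD approach rests on.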
22. A Precoding Scheme Based on Perfect Sequences without Data Identification Problem for Data-Dependent Superimposed Training. Lin, Yu-sing. 25 August 2011.
In a data-dependent superimposed training (DDST) system, a data-dependent sequence is subtracted from the data sequence before transmission. The receiver cannot correctly recover this unknown term, which causes an error floor at high SNR.
In this thesis, we list some conditions that help the performance of precoding designs in DDST systems, and we analyze the major cause of data misidentification using the singular value decomposition (SVD) method. Finally, we propose a precoding matrix based on [C.-P. Li and W.-C. Huang, "A constructive representation for the Fourier dual of the Zadoff–Chu sequences," IEEE Trans. Inf. Theory, vol. 53, no. 11, pp. 4221–4224, Nov. 2007]. The precoding matrix is constructed from an inverse discrete Fourier transform (IDFT) matrix and a diagonal matrix whose elements consist of an arbitrary perfect sequence. The proposed method satisfies these conditions, and simulation results show that the data identification problem is solved.
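As a rough sketch of the construction (our own illustration, not the thesis code), the precoder is an IDFT matrix times a diagonal matrix whose diagonal holds a perfect sequence; a Zadoff–Chu sequence of odd length serves as one such sequence.

```python
import numpy as np

def zadoff_chu(N, u=1):
    """Zadoff-Chu sequence of odd length N with root u coprime to N:
    unit-modulus entries and ideal periodic autocorrelation (a perfect sequence)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

def precoding_matrix(N, u=1):
    """Unitary IDFT matrix times diag(perfect sequence)."""
    idx = np.arange(N)
    F = np.exp(2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)  # IDFT matrix
    return F @ np.diag(zadoff_chu(N, u))

P = precoding_matrix(7)
# the unit-modulus diagonal keeps P unitary, so precoding adds no noise enhancement
print(np.allclose(P.conj().T @ P, np.eye(7)))
```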
23. A Neuro-Fuzzy Approach for Classification. Lin, Wen-Sheng. 08 September 2004.
We develop a neuro-fuzzy network technique to extract TSK-type fuzzy rules from a given set of input-output data for classification problems. Fuzzy clusters are generated incrementally from the training data set, and similar clusters are merged dynamically through input-similarity, output-similarity, and output-variance tests. The associated membership functions are defined with statistical means and deviations. Each cluster corresponds to a fuzzy IF-THEN rule, and the obtained rules can be further refined by a fuzzy neural network with a hybrid learning algorithm that combines a recursive SVD-based least-squares estimator and the gradient descent method, as sketched below. The proposed technique has several advantages. The information about input and output data subspaces is considered simultaneously for cluster generation and merging. Membership functions match closely and properly describe the real distribution of the training data points. Redundant clusters are combined, and the sensitivity to the input order of the training data is reduced. Besides, generating the whole set of clusters from scratch can be avoided when new training data are considered.
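A batch (non-recursive) sketch of the SVD-based least-squares step, assuming the normalized rule firing strengths have already been computed, might look like the following; all names are our own illustration.

```python
import numpy as np

def tsk_consequents(X, y, memberships):
    """Fit TSK rule consequents (a_i, b_i) by least squares using the
    SVD-based pseudoinverse.  X: (n, d) inputs; y: (n,) targets;
    memberships: (n, r) normalized firing strengths of the r rules."""
    n, d = X.shape
    r = memberships.shape[1]
    Xb = np.hstack([X, np.ones((n, 1))])             # append bias column
    # each rule contributes a membership-weighted copy of [x, 1]
    Phi = np.hstack([memberships[:, [i]] * Xb for i in range(r)])
    theta = np.linalg.pinv(Phi) @ y                  # pinv is computed via SVD
    return theta.reshape(r, d + 1)                   # row i holds [a_i, b_i]
```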
24. Stability Analysis of the Method of Fundamental Solutions for Laplace's Equation. Huang, Shiu-ling. 21 June 2006.
This thesis consists of two parts. In the first part, to solve the boundary value problems of homogeneous equations, fundamental solutions (FS) satisfying the homogeneous equations are chosen, and their linear combination is forced to satisfy the exterior and the interior boundary conditions. To avoid the logarithmic singularity, the source points of the FS are located outside of the solution domain S. This method is called the method of fundamental solutions (MFS). The MFS was first used by Kupradze in 1963. Since then, numerous reports on the MFS for computation have appeared, but only a few for analysis. Part one of this thesis derives the eigenvalues for the Neumann and the Robin boundary conditions in the simple case, and estimates the bounds of the condition number for mixed boundary conditions in some non-disk domains. The same exponential growth rates of Cond are obtained. Numerical results are reported for two kinds of cases: (I) MFS for Motz's problem by adding singular functions, and (II) MFS for Motz's problem by local refinement of collocation nodes. The values of the traditional condition number are huge, and those of the effective condition number are moderately large. However, the expansion coefficients obtained by the MFS are oscillatingly large, causing another kind of instability: subtraction cancellation errors in the final harmonic solutions. Hence, for practical applications, the errors and the ill-conditioning must be balanced against each other. To mitigate the ill-conditioning, it is suggested that the number of FS should not be large, and that the distance between the source circle and the boundary ∂S should not be far, either.
In the second part, to reduce the severe instability of the MFS, the truncated singular value decomposition (TSVD) and Tikhonov regularization (TR) are employed. The computational formulas of the condition number and the effective condition number are derived, and their analysis is explored in detail. Besides, an error analysis of TSVD and TR is also made. Moreover, the combination of TSVD and TR is proposed, called the truncated Tikhonov regularization in this thesis, to better remove the effects of the infinitesimal σ_min and high-frequency eigenvectors.
25. System identification of dynamic patterns of genome-wide gene expression. Wang, Daifeng. 31 January 2012.
High-throughput methods systematically measure the internal state of the entire cell, but powerful computational tools are needed to infer dynamics from their raw data. Therefore, we have developed a new computational method, Eigen-genomic System Dynamic-pattern Analysis (ESDA), which uses systems theory to infer dynamic parameters from a time series of gene expression measurements. As many genes are measured at a modest number of time points, estimation of the system matrix is underdetermined and traditional approaches for estimating dynamic parameters are ineffective; thus, ESDA uses the principle of dimensionality reduction to overcome the data imbalance. We identify degradation dynamic patterns of a genomic system using ESDA. We also combine ESDA with principal-oscillation-pattern (POP) analysis, which has been widely used in the geosciences, to identify oscillation patterns, and we demonstrate the first application of POP analysis to genome-wide time-series gene-expression data. Both simulation data and real-world data are used in this study to demonstrate the applicability of ESDA to genomic data, and biological interpretations of the dynamic patterns are provided. We also show that ESDA not only compares favorably with previous experimental methods and existing computational methods, but also provides complementary information relative to other approaches.
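A hedged reconstruction of the core computation (our own sketch, not the authors' published code): project the expression matrix onto its top right singular vectors, fit a one-step linear map in the reduced space, and read decay and oscillation patterns off its eigenvalues, in the spirit of POP analysis.

```python
import numpy as np

def reduced_system_matrix(X, k):
    """X: (genes, timepoints) expression matrix with equally spaced samples.
    SVD reduces the underdetermined fit to a small k-dimensional system."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Z = Vt[:k, :]                               # reduced time series, (k, T)
    A = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])    # least-squares map z[t+1] = A z[t]
    return A

A = reduced_system_matrix(np.random.rand(500, 12), k=4)   # stand-in data
lam = np.linalg.eigvals(A)
decay_rates = -np.log(np.abs(lam))    # degradation-like dynamic patterns
frequencies = np.angle(lam)           # oscillation patterns, POP-style
```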
26. Higher-order generalized singular value decomposition: comparative mathematical framework with applications to genomic signal processing. Ponnapalli, Sri Priya. 03 December 2010.
The number of high-dimensional datasets recording multiple aspects of a single phenomenon is ever increasing in many areas of science. This is accompanied by a fundamental need for mathematical frameworks that can compare data tabulated as multiple large-scale matrices with different numbers of rows. The only such framework to date, the generalized singular value decomposition (GSVD), is limited to two matrices. This thesis addresses this limitation and defines a higher-order GSVD (HO GSVD) of N > 2 datasets, which provides a mathematical framework that can compare multiple high-dimensional datasets tabulated as large-scale matrices with different numbers of rows.
27. Analyzing photochemical and physical processes for organic materials. Cone, Craig William. 07 February 2011.
Since their discovery, organic electronic materials have been of great interest as alternative active-layer materials in electronic applications. Initially studied as probes or lasing materials, the field has progressed to the point where both conjugated polymers and small organics have become fashionable objects of current device-oriented solid-state research. Organic electronic materials are liquid-crystalline materials, packing into well-ordered domains when annealed thermally or via solvent annealing. The macromolecular orientation of the molecules in the solid state causes a shift in the electronic properties due to coupling of the dipoles. The amount of interaction between molecules can be correlated to different nanoscale morphologies. Such morphologies can be measured using microscopy techniques and compared to the spectroscopic results, which can then be extrapolated to infer how charges move within a film. Cyanine dyes represent an interesting class of dyes, as their molecular packing is strongly affected by hydrophilic and hydrophobic pendant groups, which cause the dye to arrange into a tubular bilayer. Spectroelectrochemistry is used to monitor and controllably oxidize the samples. Using singular value decomposition (SVD), it is possible to extract each electronic species formed during electrochemical oxidation and model the proposed species using semiempirical quantum mechanical calculations. Polyfluorene is a blue luminescent polymer of interest for its high quantum yield. Its solution and solid-state conformation shows two distinct phases, and the formation of the secondary phase depends on the molecular weight: in a poor solvent, as the molecular weight increases, the secondary phase forms more easily. In the solid state, the highly efficient blue emission from polyfluorene is degraded by ketone defects, and the energy transfer to preexisting ketone defects increases as the film is thermally ordered. Glass transitions of block copolymers are studied using synthetically novel polymers in which an environmentally sensitive fluorescent reporter is placed within various regions of a self-assembled film. Different dynamics are observed within a block of the film than at the interface of two blocks.
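The SVD step on the spectroelectrochemical data can be sketched as follows (a minimal illustration under assumed variable names; the abstract does not specify the data layout):

```python
import numpy as np

# D: absorbance spectra stacked as (wavelengths, applied potentials); each
# electronic species contributes roughly one outer product of a spectrum
# with a concentration profile.
D = np.random.rand(256, 40)                   # stand-in for measured spectra
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = int(np.sum(s > 10 * s[-1]))               # crude count of significant components
basis_spectra = U[:, :k]                      # candidate spectral components
potential_profiles = s[:k, None] * Vt[:k]     # growth of each species with oxidation
```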
28. Design and Analysis of Approximation Algorithms with Low-Rank Matrices / Algorithms for fast matrix computations. Ζούζιας, Αναστάσιος (Zouzias, Anastasios). 24 January 2012.
The goal of this thesis is the study of randomized algorithms for the approximate solution of problems in scientific computing. The problems we consider are matrix multiplication, the computation of the singular value decomposition (SVD) of a matrix, and the computation of a "compressed" decomposition of a matrix.
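One representative algorithm from this literature is the randomized SVD (a generic sketch, not necessarily the variant analyzed in the thesis): project onto a random subspace, orthonormalize, and take an exact SVD of the small projected matrix.

```python
import numpy as np

def randomized_svd(A, k, oversample=10):
    """Approximate rank-k SVD via random projection."""
    m, n = A.shape
    Omega = np.random.randn(n, k + oversample)   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for the range
    B = Q.T @ A                                  # small (k + oversample, n) matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]

U, s, Vt = randomized_svd(np.random.rand(1000, 300), k=20)
```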
29. Eigenimage Processing of Frontal Chest Radiographs. Butler, Anthony Philip Howard. January 2007.
The goal of this research was to improve the speed and accuracy of reporting by clinical radiologists. By applying a technique known as eigenimage processing to chest radiographs, abnormal findings were enhanced and a classification scheme developed. Results confirm that the method is feasible for clinical use. Eigenimage processing is a popular face recognition routine that has only recently been applied to medical images, but it has not previously been applied to full-size radiographs. Chest radiographs were chosen for this research because they are clinically important and are challenging to process due to their large data content. It is hoped that the success with these images will enable future work on other medical images such as those from CT and MRI. Eigenimage processing is based on a multivariate statistical method which identifies patterns of variance within a training set of images. Specifically, it involves the application of a statistical technique called principal components analysis to a training set. For this research, the training set was a collection of 77 normal radiographs. This processing produced a set of basis images, known as eigenimages, that best describe the variance within the training set of normal images. For chest radiographs the basis images may also be referred to as 'eigenchests'. Images to be tested were described in terms of eigenimages. This identified patterns of variance likely to be normal. A new image, referred to as the remainder image, was derived by removing patterns of normal variance, thus making abnormal patterns of variance more conspicuous. The remainder image could either be presented to clinicians or used as part of a computer-aided diagnosis system. For the image sets used, the discriminatory power of a classification scheme approached 90%. While the processing of the training set required significant computation time, each test image to be classified or enhanced required only a few seconds to process. Thus the system could be integrated into a clinical radiology department.
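A compact sketch of the pipeline described above (our own reconstruction; the abstract includes no code): PCA on vectorized normal radiographs yields the eigenchests, and subtracting a test image's projection onto them leaves the remainder image.

```python
import numpy as np

def train_eigenimages(normals, k):
    """normals: (77, pixels) matrix of vectorized normal radiographs.
    Returns the mean image and the top-k eigenimages ('eigenchests')."""
    mean = normals.mean(axis=0)
    U, s, Vt = np.linalg.svd(normals - mean, full_matrices=False)
    return mean, Vt[:k]

def remainder_image(test, mean, eigenchests):
    """Remove normal-variance patterns; abnormal variance stays conspicuous."""
    coeffs = eigenchests @ (test - mean)        # describe test via eigenimages
    normal_part = mean + eigenchests.T @ coeffs
    return test - normal_part

mean, E = train_eigenimages(np.random.rand(77, 64 * 64), k=20)
r = remainder_image(np.random.rand(64 * 64), mean, E)
score = np.linalg.norm(r)   # a large residual suggests abnormal variance
```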
30. Deep Web Collection Selection. King, John Douglas. January 2004.
The deep web contains a massive number of collections that are mostly invisible to search engines. These collections often contain high-quality, structured information that cannot be crawled using traditional methods. An important problem is selecting which of these collections to search. Automatic collection selection methods try to solve this problem by suggesting the best subset of deep web collections to search based on a query. A few methods for deep web collection selection have been proposed, such as the Collection Retrieval Inference Network system and the Glossary of Servers Server system. The drawback of these methods is that they require communication between the search broker and the collections, and they need metadata about each collection. This thesis compares three different sampling methods that require neither such communication nor metadata about each collection. It also adapts some traditional information-retrieval techniques to this area. In addition, the thesis tests these techniques using the INEX collection, with a total of 18 collections (comprising 12,232 XML documents) and 36 queries. The experiments show that the performance of the sample-based techniques is satisfactory on average.
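A minimal sketch of sample-based collection selection in the spirit described (the simple term-frequency scoring here is our own stand-in, not the exact methods compared): rank collections by how well the query matches statistics gathered from their sampled documents.

```python
from collections import Counter

def build_sample_model(sampled_docs):
    """Aggregate term statistics from documents sampled from one collection."""
    model = Counter()
    for doc in sampled_docs:
        model.update(doc.lower().split())
    return model

def rank_collections(query, models):
    """Score each collection by normalized query-term frequency in its sample."""
    terms = query.lower().split()
    scores = {name: sum(m[t] for t in terms) / max(sum(m.values()), 1)
              for name, m in models.items()}
    return sorted(scores, key=scores.get, reverse=True)

models = {"col_a": build_sample_model(["deep web search", "hidden databases"]),
          "col_b": build_sample_model(["wavelet image compression"])}
print(rank_collections("deep web", models))    # -> ['col_a', 'col_b']
```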