21

Computational Tools and Methods for Objective Assessment of Image Quality in X-Ray CT and SPECT

Palit, Robin January 2012 (has links)
Computational tools of use in the objective assessment of image quality for tomography systems were developed for central processing units (CPUs) and graphics processing units (GPUs) in the image quality lab at the University of Arizona. Fast analytic x-ray projection code called IQCT was created to compute the mean projection image for cone beam multi-slice helical computed tomography (CT) scanners. IQCT was optimized to take advantage of the massively parallel architecture of GPUs. CPU code for computing single photon emission computed tomography (SPECT) projection images was written calling upon previous research in the image quality lab. IQCT and the SPECT modeling code were used to simulate data for multimodality SPECT/CT observer studies. The purpose of these observer studies was to assess the benefit in image quality of using attenuation information from a CT measurement in myocardial SPECT imaging. The observer chosen for these studies was the scanning linear observer. The tasks for the observer were localization of a signal and estimation of the signal radius. For the localization study, area under the localization receiver operating characteristic curve (A(LROC)) was computed as A(LROC)^Meas = 0.89332 ± 0.00474 and A(LROC)^No = 0.89408 ± 0.00475, where "Meas" implies the use of attenuation information from the CT measurement, and "No" indicates the absence of attenuation information. For the estimation study, area under the estimation receiver operating characteristic curve (A(EROC)) was quantified as A(EROC)^Meas = 0.55926 ± 0.00731 and A(EROC)^No = 0.56167 ± 0.00731. Based on these results, it was concluded that the use of CT information did not improve the scanning linear observer's ability to perform the stated myocardial SPECT tasks.
The risk to the patient of the CT measurement was quantified in terms of excess effective dose as 2.37 mSv for males and 3.38 mSv for females. Another image quality tool generated within this body of work was a singular value decomposition (SVD) algorithm that reduces the dimension of the eigenvalue problem for tomography systems with rotational symmetry. Agreement between the results of this reduced-dimension SVD algorithm and those of a standard SVD algorithm is shown for a toy problem. The use of SVD toward image quality metrics such as the measurement and null spaces is also presented.
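The reduced-dimension SVD idea for rotationally symmetric systems can be illustrated with a toy model. This is a sketch, not the thesis code: a circulant matrix stands in for a system with rotational symmetry, in which case the singular values follow from the FFT of a single column and agree with a standard SVD.

```python
import numpy as np

# Toy stand-in for a rotationally symmetric system: a circulant matrix.
# Circulant matrices are diagonalized by the unitary DFT, so their singular
# values are the magnitudes of the FFT of the first column -- an O(n log n)
# computation instead of a full O(n^3) SVD.
rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)                               # first column
H = np.column_stack([np.roll(c, j) for j in range(n)])   # full circulant matrix

s_full = np.linalg.svd(H, compute_uv=False)   # standard SVD
s_reduced = np.abs(np.fft.fft(c))             # reduced computation via symmetry

assert np.allclose(np.sort(s_full), np.sort(s_reduced))
```

For a genuine tomography system the block-circulant structure is exploited blockwise, but the principle is the same: symmetry turns one large decomposition into many small ones.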
22

A Comparison of Data Transformations in Image Denoising

Michael, Simon January 2018 (has links)
The study of signal processing has wide applications, such as in hi-fi audio, television, voice recognition and many other areas. Signals are rarely observed without noise, which obstructs our analysis of them. Hence, it is of great interest to study the detection, approximation and removal of noise. In this thesis we compare two methods for image denoising. The methods are each based on a data transformation. Specifically, the Fourier transform and the singular value decomposition are utilized in the respective methods and compared on grayscale images. The comparison is based on the visual quality of the resulting image, the maximum peak signal-to-noise ratios attainable by the respective methods, and their computational time. We find that the methods are fairly equal in visual quality. However, the method based on the Fourier transform scores higher in peak signal-to-noise ratio and demands considerably less computational time.
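The shape of such a comparison can be sketched as follows (a minimal sketch with a synthetic "image" and ad hoc thresholds, not the thesis setup): denoise once by zeroing small Fourier coefficients, once by truncating small singular values, and score both with the peak signal-to-noise ratio (PSNR).

```python
import numpy as np

# Synthetic grayscale test image plus Gaussian noise.
rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
clean = np.sin(4 * np.pi * gx) * np.cos(2 * np.pi * gy)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(np.ptp(ref) ** 2 / mse)

# Fourier method: hard-threshold coefficients below 10% of the peak magnitude.
F = np.fft.fft2(noisy)
F[np.abs(F) < 0.1 * np.abs(F).max()] = 0
denoised_fft = np.real(np.fft.ifft2(F))

# SVD method: keep only the k largest singular values (low-rank approximation).
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 8
denoised_svd = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```

Both thresholds (the 10% cutoff and k = 8) are illustrative choices; in a real comparison they would be swept to find the maximum attainable PSNR for each method.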
23

Time Series Decomposition Using Singular Spectrum Analysis

Deng, Cheng 01 May 2014 (has links)
Singular Spectrum Analysis (SSA) is a method for decomposing and forecasting time series that has recently seen major developments but is not yet routinely included in introductory time series courses. An international conference on the topic was held in Beijing in 2012. The basic SSA method decomposes a time series into trend, seasonal component and noise. However, there are other more advanced extensions and applications of the method, such as change-point detection or the treatment of multivariate time series. The purpose of this work is to understand the basic SSA method through its application to the monthly average sea temperature at a point on the coast of South America, near where the "El Niño" phenomenon originates, and to artificial time series simulated using harmonic functions. The output of the basic SSA method is then compared with that of other decomposition methods such as classic seasonal decomposition, X-11 decomposition using moving averages, and seasonal decomposition by Loess (STL) that are included in some time series courses.
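The basic SSA method described above can be sketched in a few lines (window length and test series are invented for illustration): embed the series in a trajectory matrix, take its SVD, and reconstruct one component per singular triple by averaging anti-diagonals (diagonal averaging).

```python
import numpy as np

def ssa_components(series, L):
    """Basic SSA: embedding, SVD, and diagonal averaging."""
    N = len(series)
    K = N - L + 1
    X = np.array([series[i:i + L] for i in range(K)]).T   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    comps = []
    for j in range(len(s)):
        Xj = s[j] * np.outer(U[:, j], Vt[j])              # rank-one piece
        # diagonal averaging: each series value is the mean of one anti-diagonal
        comps.append(np.array([Xj[::-1].diagonal(i - L + 1).mean()
                               for i in range(N)]))
    return np.array(comps)

# trend + seasonal component + noise, as in the basic decomposition
t = np.arange(200)
series = 0.02 * t + np.sin(2 * np.pi * t / 12) \
         + 0.1 * np.random.default_rng(2).standard_normal(200)
comps = ssa_components(series, L=24)
reconstruction = comps.sum(axis=0)   # summing all components restores the series
```

Grouping the leading components then separates trend from the seasonal harmonic pair, which is the step compared against X-11 and STL in the thesis.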
24

Berechnung kinematischer Getriebeabmessungen zur Kalibrierung von Führungsgetrieben durch Messung / Determination of kinematic dimensions of guiding mechanisms from measurement

Teichgräber, Carsten 24 June 2013 (has links) (PDF)
Guiding mechanisms driven by servo motors require a programmed function (an electronic cam) to reach defined positions of the output link. This function is derived from the kinematic model of the mechanism (inverse kinematics), which may contain errors. To improve the accuracy of the guiding motion, a method for adjusting the transfer function is presented, based on Newton's method and using the singular value decomposition. The real mechanism dimensions are computed from a measurement, and the corrected values are then used to adapt the transfer function.
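The Newton/SVD calibration scheme can be sketched on a hypothetical planar two-link mechanism (the mechanism, its dimensions, and the pose set are invented here, not taken from the thesis): the link lengths are recovered from measured end-point positions by a Gauss-Newton iteration whose update uses the SVD-based pseudo-inverse of the Jacobian.

```python
import numpy as np

true_l = np.array([0.40, 0.25])   # the real (unknown) link lengths
q1 = np.linspace(0.2, 2.5, 12)    # measured joint angles over 12 poses
q2 = 0.5 * q1

def endpoints(l):
    # stacked x- and y-coordinates of the output point for all poses
    return np.concatenate([l[0] * np.cos(q1) + l[1] * np.cos(q1 + q2),
                           l[0] * np.sin(q1) + l[1] * np.sin(q1 + q2)])

meas = endpoints(true_l)          # noiseless "measurement"
l = np.array([0.50, 0.20])        # erroneous nominal model
# Jacobian of the end-point map with respect to the link lengths
J = np.column_stack([np.concatenate([np.cos(q1), np.sin(q1)]),
                     np.concatenate([np.cos(q1 + q2), np.sin(q1 + q2)])])
for _ in range(5):
    r = endpoints(l) - meas       # residual of the current model
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    l = l - Vt.T @ ((U.T @ r) / s)   # SVD pseudo-inverse Newton step
```

Because the end-point map is linear in the link lengths, this toy converges immediately; for a real guiding mechanism the Jacobian is recomputed each iteration and the SVD also reveals poorly identifiable parameter directions.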
25

Image Compression by Using Haar Wavelet Transform and Singular Value Decomposition

Idrees, Zunera, Hashemiaghjekandi, Eliza January 2011 (has links)
The rise in digital technology has also raised the use of digital images. Digital images require much storage space. Compression techniques are used to compress the data so that it takes up less storage space. In this regard wavelets play an important role. In this thesis, we studied the Haar wavelet system, which is a complete orthonormal system in L^2(R). This system consists of the functions φ, the father wavelet, and ψ, the mother wavelet. The Haar wavelet transformation is an example of multiresolution analysis. Our purpose is to use the Haar wavelet basis to compress image data. The method of averaging and differencing is used to construct the Haar wavelet basis. We have shown that the averaging and differencing method is an application of the Haar wavelet transform. After discussing compression using the Haar wavelet transform, we used another compression method based on the singular value decomposition. We used the mathematical software MATLAB to compress the image data using the Haar wavelet transformation and the singular value decomposition.
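The averaging-and-differencing construction can be sketched for one row of pixel values (illustrative only, not the thesis code): pairwise averages carry the coarse image, pairwise differences carry detail, and compression drops the small differences.

```python
import numpy as np

def haar_step(v):
    """One averaging-and-differencing pass: [averages | differences]."""
    v = np.asarray(v, dtype=float)
    avg = (v[0::2] + v[1::2]) / 2
    diff = (v[0::2] - v[1::2]) / 2
    return np.concatenate([avg, diff])

def haar_inverse_step(w):
    """Undo one pass: average + difference and average - difference."""
    half = len(w) // 2
    avg, diff = w[:half], w[half:]
    out = np.empty(len(w))
    out[0::2] = avg + diff
    out[1::2] = avg - diff
    return out

row = np.array([9.0, 7.0, 3.0, 5.0])
coeffs = haar_step(row)                     # [8.0, 4.0, 1.0, -1.0]
assert np.allclose(haar_inverse_step(coeffs), row)   # perfect reconstruction
compressed = np.where(np.abs(coeffs) < 1.5, 0, coeffs)  # drop small details
```

Applying the step recursively to the averages, and along both rows and columns of an image, gives the full two-dimensional Haar transform used for compression.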
26

A Precoding Scheme Based on Perfect Sequences without Data Identification Problem for Data-Dependent Superimposed Training

Lin, Yu-sing 25 August 2011 (has links)
In a data-dependent superimposed training (DDST) system, a data-dependent sequence is subtracted from the data sequence before transmission. The receiver cannot correctly recover this unknown term, which causes an error floor at high SNR. In this thesis, we list some helpful conditions on the precoding design that enhance performance in a DDST system, and analyze the major cause of data misidentification by the singular value decomposition (SVD) method. Finally, we propose a precoding matrix based on [C.-P. Li and W.-C. Huang, "A constructive representation for the Fourier dual of the Zadoff-Chu sequences," IEEE Trans. Inf. Theory, vol. 53, no. 11, pp. 4221-4224, Nov. 2007]. The precoding matrix is constructed from an inverse discrete Fourier transform (IDFT) matrix and a diagonal matrix whose elements consist of an arbitrary perfect sequence. The proposed method satisfies these conditions, and simulation results show that the data identification problem is solved.
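The structure of the proposed precoder can be sketched numerically. This is a sketch under the assumption that a length-13 Zadoff-Chu sequence stands in for the arbitrary perfect sequence; it only checks the two building-block properties, not the DDST simulation itself.

```python
import numpy as np

N = 13                             # odd length; root u = 1 is coprime to N
n = np.arange(N)
zc = np.exp(-1j * np.pi * n * (n + 1) / N)   # Zadoff-Chu sequence (odd-N form)

# Perfect-sequence property: periodic autocorrelation is an impulse.
# ifft(|FFT|^2) computes the circular autocorrelation.
R = np.fft.ifft(np.abs(np.fft.fft(zc)) ** 2)
assert np.allclose(np.abs(R[1:]), 0, atol=1e-9)   # zero at all nonzero lags
assert np.isclose(np.abs(R[0]), N)                # energy at lag zero

# Proposed precoder: unitary IDFT matrix times diag(perfect sequence).
F = np.fft.ifft(np.eye(N), axis=0) * np.sqrt(N)   # unitary IDFT matrix
P = F @ np.diag(zc)
assert np.allclose(P.conj().T @ P, np.eye(N))     # unitary: preserves power
```

Because the diagonal entries have unit modulus, the precoder is unitary, so it reshapes the data without amplifying noise; the perfect-autocorrelation property is what the identification conditions in the thesis rely on.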
27

A Neuro-Fuzzy Approach for Classification

Lin, Wen-Sheng 08 September 2004 (has links)
We develop a neuro-fuzzy network technique to extract TSK-type fuzzy rules from a given set of input-output data for classification problems. Fuzzy clusters are generated incrementally from the training data set, and similar clusters are merged dynamically through input-similarity, output-similarity, and output-variance tests. The associated membership functions are defined with statistical means and deviations. Each cluster corresponds to a fuzzy IF-THEN rule, and the obtained rules can be further refined by a fuzzy neural network with a hybrid learning algorithm which combines a recursive SVD-based least squares estimator and the gradient descent method. The proposed technique has several advantages. The information about input and output data subspaces is considered simultaneously for cluster generation and merging. Membership functions match closely and properly describe the real distribution of the training data points. Redundant clusters are combined, and the sensitivity to the input order of the training data is reduced. Moreover, generation of the whole set of clusters from scratch can be avoided when new training data are considered.
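One half of the hybrid learning idea can be sketched in isolation (a hypothetical two-rule, one-input setup, not the thesis code): the consequent parameters of TSK rules are linear in the firing-strength-weighted inputs, so they can be fit by an SVD-based least-squares solve, while gradient descent would separately tune the membership functions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 100)
y = np.where(x < 0, -1.0, 1.0) + 0.05 * rng.standard_normal(100)  # class targets

# Gaussian membership functions; in the proposed technique their means and
# deviations would come from the incremental clustering stage.
mu1 = np.exp(-((x + 1) ** 2) / 0.5)
mu2 = np.exp(-((x - 1) ** 2) / 0.5)
w1 = mu1 / (mu1 + mu2)          # normalized firing strength of rule 1
w2 = mu2 / (mu1 + mu2)          # normalized firing strength of rule 2

# TSK consequents y_i = a_i * x + b_i give a linear design matrix.
A = np.column_stack([w1 * x, w1, w2 * x, w2])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)   # SVD-based least squares
pred = A @ theta
```

A recursive variant of this estimator updates `theta` sample by sample, which is what allows refinement to continue as new training data arrive.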
28

Stability Analysis of Method of Fundamental Solutions for Laplace's Equations

Huang, Shiu-ling 21 June 2006 (has links)
This thesis consists of two parts. In the first part, to solve the boundary value problems of homogeneous equations, fundamental solutions (FS) satisfying the homogeneous equations are chosen, and their linear combination is forced to satisfy the exterior and the interior boundary conditions. To avoid the logarithmic singularity, the source points of the FS are located outside the solution domain S. This method is called the method of fundamental solutions (MFS). The MFS was first used by Kupradze in 1963. Since then, numerous reports on the MFS for computation have appeared, but only a few for analysis. Part one of this thesis derives the eigenvalues for the Neumann and the Robin boundary conditions in the simple case, and estimates the bounds of the condition number for mixed boundary conditions in some non-disk domains; the same exponential growth rates of the condition number are obtained. Numerical results are reported for two kinds of cases: (I) MFS for Motz's problem by adding singular functions, and (II) MFS for Motz's problem by local refinements of collocation nodes. The values of the traditional condition number are huge, while those of the effective condition number are moderately large. However, the expansion coefficients obtained by the MFS are oscillatingly large, causing another kind of instability: subtraction cancellation errors in the final harmonic solutions. Hence, for practical applications, the errors and the ill-conditioning must be balanced against each other. To mitigate the ill-conditioning, it is suggested that the number of FS should not be large, and that the distance between the source circle and the boundary ∂S should not be far, either. In the second part, to reduce the severe instability of the MFS, the truncated singular value decomposition (TSVD) and Tikhonov regularization (TR) are employed. The computational formulas of the condition number and the effective condition number are derived, and their analysis is explored in detail.
Besides, the error analysis of TSVD and TR is also given. Moreover, the combination of TSVD and TR is proposed, called the truncated Tikhonov regularization in this thesis, to better remove the effects of an infinitesimal σ_min and of high-frequency eigenvectors.
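The TSVD remedy can be sketched on a toy ill-conditioned system (a generic matrix with an exponentially decaying spectrum standing in for an actual MFS collocation matrix): discard singular values below a cutoff before inverting, so noise is not amplified by the tiny ones.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)                 # exponentially decaying spectrum
A = U @ np.diag(s) @ V.T                  # cond(A) ~ 1e19: hopeless to invert

x_true = V[:, :5] @ np.ones(5)            # solution in the stable subspace
b = A @ x_true + 1e-10 * rng.standard_normal(n)   # slightly noisy data

def tsvd_solve(A, b, tol):
    U, s, Vt = np.linalg.svd(A)
    keep = s > tol * s[0]                 # truncation: the "T" in TSVD
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

x_tsvd = tsvd_solve(A, b, tol=1e-8)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true)   # noise blown up
```

Tikhonov regularization instead divides by s + alpha**2/s, damping rather than discarding the small singular values; the truncated Tikhonov combination proposed in the thesis does both.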
29

New results on the degree of ill-posedness for integration operators with weights

Hofmann, Bernd, von Wolfersdorf, Lothar 16 May 2008 (has links) (PDF)
We extend our results on the degree of ill-posedness for linear integration operators A with weights mapping in the Hilbert space L^2(0,1), which were published in the journal 'Inverse Problems' in 2005 ([5]). Now we can prove that the degree one also holds for a family of exponential weight functions. In this context, we emphasize that for integration operators with outer weights the use of the operator AA^* is more appropriate for the analysis of eigenvalue problems and the corresponding asymptotics of singular values than the former use of A^*A.
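The degree-one behavior referred to above can be checked numerically for the plain, weight-free integration operator (a simplified sketch; the weighted operators of the paper are not modeled here): for (A f)(x) = ∫₀ˣ f(t) dt on L^2(0,1) the singular values are known to be 1/((k - 1/2)π), i.e. they decay like k⁻¹.

```python
import numpy as np

# Discretize the Volterra integration operator by left-endpoint quadrature:
# (A f)(x_i) ~ h * sum_{j<=i} f(x_j), a scaled lower-triangular ones matrix.
n = 400
h = 1.0 / n
A = h * np.tril(np.ones((n, n)))
s = np.linalg.svd(A, compute_uv=False)

k = np.arange(1, 11)
sigma_exact = 1.0 / ((k - 0.5) * np.pi)   # continuous-operator singular values
assert np.allclose(s[:10], sigma_exact, rtol=0.02)
```

Since sigma_k is proportional to k⁻¹, the interval of the decay exponent, and hence the degree of ill-posedness, is one; the paper's contribution is that certain exponential weights do not change this rate.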
30

System identification of dynamic patterns of genome-wide gene expression

Wang, Daifeng 31 January 2012 (has links)
High-throughput methods systematically measure the internal state of the entire cell, but powerful computational tools are needed to infer dynamics from their raw data. Therefore, we have developed a new computational method, Eigen-genomic System Dynamic-pattern Analysis (ESDA), which uses systems theory to infer dynamic parameters from a time series of gene expression measurements. As many genes are measured at a modest number of time points, estimation of the system matrix is underdetermined and traditional approaches for estimating dynamic parameters are ineffective; thus, ESDA uses the principle of dimensionality reduction to overcome the data imbalance. We identify degradation dynamic patterns of a genomic system using ESDA. We also combine ESDA and Principal-oscillation-pattern (POP) analysis, which has been widely used in geosciences, to identify oscillation patterns. We demonstrate the first application of POP analysis to genome-wide time-series gene-expression data. Both simulation data and real-world data are used in this study to demonstrate the applicability of ESDA to genomic data. The biological interpretations of dynamic patterns are provided. We also show that ESDA not only compares favorably with previous experimental methods and existing computational methods, but that it also provides complementary information relative to other approaches.
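The dimensionality-reduction step that makes the system matrix estimable can be sketched on synthetic data (a minimal model, not the authors' ESDA implementation; the latent "degradation" rates below are invented): project the gene-by-time data onto its leading singular vectors, then estimate the reduced dynamics.

```python
import numpy as np

# Many genes, few time points, low-rank latent dynamics x_{t+1} = M x_t with
# decay ("degradation") rates on the diagonal. A G x G system matrix cannot
# be estimated from T << G samples, so reduce dimension with the SVD first.
rng = np.random.default_rng(5)
T, G, r = 12, 200, 3
Mz = np.diag([0.9, 0.7, 0.5])              # latent degradation dynamics
Z = np.empty((r, T))
Z[:, 0] = rng.standard_normal(r)
for t in range(T - 1):
    Z[:, t + 1] = Mz @ Z[:, t]
X = rng.standard_normal((G, r)) @ Z        # observed genome-wide expression

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Y = U[:, :r].T @ X                         # reduced ("eigen-genomic") series
M_hat = Y[:, 1:] @ np.linalg.pinv(Y[:, :-1])    # reduced system matrix
rates = np.sort(np.linalg.eigvals(M_hat).real)  # recovered degradation rates
```

The eigenvalues of the reduced system matrix recover the latent decay rates; complex eigenvalue pairs of the same matrix would correspond to the oscillation patterns that POP analysis extracts.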
