  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Mathematical And Numerical Studies On The Inverse Problems Associated With Propagation Of Field Correlation Through A Scattering Object

Varma, Hari M 02 1900 (PDF)
This thesis discusses the inverse problem associated with the propagation of the field autocorrelation of light through a highly scattering object such as tissue. In the first part of the thesis we consider the mathematical issues involved in inverting boundary measurements, made from diffuse propagation of light through highly scattering objects, for their optical and mechanical properties. We present the convergence analysis of the Gauss-Newton algorithm for the recovery of object properties, applicable to both diffuse correlation tomography (DCT) and diffuse optical tomography (DOT). En route to this, we establish the existence of solutions and the Fréchet differentiability of the forward propagation equation. The two cases of delta-source and Gaussian-source illumination are considered separately, and the smoothness of the solution of the forward equation is established in each case. Taking DCT as an example, we establish the feasibility of recovering the particle diffusion coefficient (DB) by minimizing the data-model mismatch of the field autocorrelation at the boundary using the Gauss-Newton algorithm. Some numerical examples validating the theoretical results are also presented. In the second part of the thesis, we reconstruct the optical absorption coefficient, µa, and the particle diffusion coefficient, DB, from simulated measurements which are integrals of a quantity computed from the measured intensity and intensity autocorrelation g2(τ) at the boundary. We also recover the mean square displacement (MSD) distribution of particles in an inhomogeneous object from the sampled g2(τ) measured on the boundary. From the MSD, we compute the storage and loss moduli distributions in the object. We have devised computationally inexpensive methods to construct the sensitivity matrices used in the iterative reconstruction algorithms for recovering these parameters from these measurements.
The reconstructions of inhomogeneities in µa, DB, MSD and the visco-elastic parameters presented here show reasonably good positional and quantitative accuracy. Finally, we introduce a self-regularized pseudo-dynamic scheme to solve the above inverse problem, which has certain advantages over the usual minimization methods employing a variant of the Newton algorithm. The computational difficulties involved in the inversion of ill-conditioned matrices arising in the nonlinear inverse DCT problem are avoided by introducing artificial dynamics and taking the solution to be the steady-state response (if it exists) of the artificially evolving dynamical system, represented by ordinary differential equations (ODEs) in pseudo-time. We show that the asymptotic solution obtained through the pseudo-time marching converges to the optimal solution minimizing a mean-square error functional, provided the Hessian of the forward equation is positive definite in a neighborhood of this optimal solution. The superior noise tolerance and regularization insensitivity of the pseudo-dynamic strategy are demonstrated through numerical simulations in the context of DCT.
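The two inversion strategies described above, Gauss-Newton minimization of the data-model mismatch and pseudo-time marching of artificial dynamics to steady state, can be sketched on a toy problem. The forward map F below is a made-up two-parameter stand-in (the thesis's forward model is a diffusion-type PDE), and the plain gradient-flow ODE is only one simple form of artificial dynamics; the iteration shapes are the point of the sketch, not the model.

```python
import numpy as np

# Hypothetical two-parameter forward map standing in for the PDE-based
# DCT/DOT forward model; y plays the role of boundary data.
def F(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) + x[1]**2])

def jac(x):
    return np.array([[2.0 * x[0], 1.0],
                     [np.cos(x[0]), 2.0 * x[1]]])

def gauss_newton(y, x0, iters=20):
    # Minimize ||F(x) - y||^2 by repeated linearization (normal equations).
    x = x0.astype(float).copy()
    for _ in range(iters):
        r = F(x) - y                                  # data-model mismatch
        J = jac(x)
        x -= np.linalg.solve(J.T @ J, J.T @ r)
    return x

def pseudo_dynamic(y, x0, dt=0.05, steps=4000):
    # Artificial dynamics dx/dt = -J(x)^T (F(x) - y), marched in pseudo-time;
    # the steady state (if it exists) is taken as the solution, with no
    # explicit regularization parameter.
    x = x0.astype(float).copy()
    for _ in range(steps):
        x -= dt * (jac(x).T @ (F(x) - y))
    return x

x_star = np.array([0.8, 1.2])       # "true" parameters
y = F(x_star)                       # noise-free synthetic data
x_gn = gauss_newton(y, np.array([1.0, 1.0]))
x_pd = pseudo_dynamic(y, np.array([1.0, 1.0]))
```

Both iterations recover the true parameters here because the toy problem has zero residual and a well-conditioned Jacobian near the solution, matching the positive-definiteness assumption stated above.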
62

Digital watermarking of still images

Ahmed, Kamal Ali January 2013
This thesis presents novel research work on copyright protection of grey scale and colour digital images. New blind frequency-domain watermarking algorithms using one-dimensional and two-dimensional Walsh coding were developed. Handwritten signatures and mobile phone numbers were used as watermarks in this project. Eight algorithms were developed based on the DCT using 1D and 2D Walsh coding; they embed in the low-frequency coefficients of the 8 × 8 DCT blocks. A shuffle process was used in the watermarking algorithms to increase robustness against cropping attacks. All algorithms are blind, since they do not require the original image; they cause minimal distortion to the host images, and the watermarking is invisible. The watermark is embedded in the green channel of RGB colour images. The Walsh-coded watermark is inserted several times using the shuffling process to improve its robustness. The effect of changing the Walsh lengths and the scaling strength of the watermark on robustness and image quality was studied. All algorithms were examined using several grey scale and colour images of size 512 × 512, and further tested on images of different sizes. The fidelity of the images was assessed using the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), normalized correlation (NC) and the StirMark benchmark tools. Evaluations using several tools with different scaling factors are presented in the thesis to assess the algorithms. Comparisons carried out against other methods of embedding without coding have shown the superiority of the algorithms. The results show that the use of 1D and 2D Walsh coding with DCT blocks offers a significant improvement in robustness against JPEG compression and some other image-processing operations compared to embedding without coding.
The originality of the schemes enables them to achieve significant robustness compared to conventional non-coded watermarking methods. The new algorithms offer an optimal trade-off between perceptual distortion caused by embedding and robustness against certain attacks. The new techniques could offer significant advantages to the digital watermark field and provide additional benefits to the copyright protection industry.
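The core embedding step the abstract describes (low-frequency coefficients of 8 × 8 DCT blocks, blind extraction, a scaling strength) can be sketched as follows. This is not the thesis's Walsh-coded scheme; it is a minimal blind baseline using the classic low-frequency coefficient-pair comparison, with alpha standing in for the scaling strength.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix: C @ B @ C.T is the 2-D DCT of block B,
    # and C.T @ D @ C inverts it.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

C8 = dct_matrix(8)
P1, P2 = (2, 1), (1, 2)     # a pair of low-frequency coefficient positions

def embed_bit(block, bit, alpha=4.0):
    # Order the two coefficients so the sign of their difference encodes the
    # bit; alpha separates them and controls robustness vs. distortion.
    D = C8 @ block @ C8.T
    lo, hi = sorted((D[P1], D[P2]))
    D[P1], D[P2] = (hi + alpha, lo) if bit else (lo, hi + alpha)
    return C8.T @ D @ C8

def extract_bit(block):
    # Blind extraction: the original image is never needed.
    D = C8 @ block @ C8.T
    return int(D[P1] > D[P2])

img_block = np.arange(64, dtype=float).reshape(8, 8)   # stand-in image block
recovered = extract_bit(embed_bit(img_block, 1))       # -> 1
```

In the thesis's algorithms the bit stream is additionally Walsh-coded and shuffled across many blocks before embedding; the sketch shows only the per-block DCT embedding and blind read-out.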
63

Vodoznačení statických obrazů / Watermarking of static images

Bambuch, Petr January 2008
This thesis deals with the security of static images. The main aim is to embed a watermark into the original data effectively enough that it cannot be removed by simple and fast attack methods. As watermarking techniques develop, attack techniques improve as well; the main aim of an attack is to remove or devalue the watermark hidden in the image. The goal of the thesis is to survey current techniques of static image watermarking and to implement two watermarking methods, which are then tested for robustness against attacks.
64

Neuronové sítě v algoritmech vodoznačení audio signálů / Neural networks in audio signal watermarking algorithms

Kaňa, Ondřej January 2010
Digital watermarking is a technique for copyright protection of digital multimedia. Robustness and imperceptibility are the main requirements of a watermark. This thesis deals with watermarking audio signals using artificial neural networks. An audio watermarking method in the DCT domain is described; the method is based on a human psychoacoustic model and neural network techniques.
65

Aplikace metod učení slovníku pro Audio Inpainting / Applications of Dictionary Learning Methods for Audio Inpainting

Ozdobinski, Roman January 2014
This diploma thesis discusses dictionary learning methods for inpainting missing sections of an audio signal. The K-SVD and INK-SVD dictionary learning algorithms are analyzed theoretically and applied in practice. The learned dictionaries are used for the reconstruction of audio signals via OMP (Orthogonal Matching Pursuit). Furthermore, an algorithm is proposed for selecting stationary segments and using them as training data for K-SVD and INK-SVD. In the practical part of the thesis, the reconstruction efficiency obtained when the training set is drawn from the whole signal is compared with that obtained using the stationary-segmentation algorithm. The influence of mutual coherence on the quality of reconstruction with an incoherent dictionary was also studied. Using scripts created for batch testing in Matlab, these methods were compared on songs of distinct genres.
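The OMP reconstruction step mentioned above can be sketched as follows. A fixed orthonormal DCT basis stands in for a learned K-SVD dictionary, and the missing gap is filled by fitting the sparse code on the observed samples only; the 2-sparse test signal and the gap location are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    # Orthogonal Matching Pursuit: greedily pick k atoms of D that best
    # explain y, re-fitting the coefficients by least squares at each step.
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Fixed orthonormal DCT basis standing in for a learned K-SVD dictionary.
n = 64
j = np.arange(n)
B = np.cos(np.pi * (2 * j[:, None] + 1) * j[None, :] / (2 * n)) * np.sqrt(2.0 / n)
B[:, 0] /= np.sqrt(2.0)

# Synthetic signal that is 2-sparse in the dictionary, with a missing gap.
x_true = np.zeros(n)
x_true[5], x_true[40] = 1.0, -0.7
y = B @ x_true
mask = np.ones(n, dtype=bool)
mask[20:30] = False                 # samples 20..29 are missing

# Inpainting: fit the sparse code on the observed rows only, then use the
# full dictionary to synthesize the missing samples.
x_hat = omp(B[mask], y[mask], k=2)
y_filled = B @ x_hat
```

Because the signal really is sparse in the dictionary and the gap is short, the code fitted on the observed rows reproduces the missing samples exactly; with learned (K-SVD or INK-SVD) dictionaries the same pipeline applies with approximate rather than exact recovery.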
66

Prediction of life durability in friction for wet clutches of DCT gearboxes

Sandström, Lars-Johan January 2020
Many new cars are equipped with automatic transmissions. These gearboxes often have a dual clutch transmission (DCT) with a built-in wet clutch. The lubricants used in these gearboxes are often very advanced, because they must serve two systems: the wet clutch and the gears. There is a constant drive to extend drain intervals; to do this, a fundamental understanding of the aging mechanisms inside a DCT is needed. This project focuses on the aging of the lubricant and the friction material inside the wet clutch. A test rig at Luleå University of Technology was redesigned to be able to age this kind of system. The rig contains a wet clutch from Volvo Construction Equipment, and the redesign focused mainly on decreasing the oil sump volume to 6 liters and keeping the oil sump at 100 °C during the tests. To verify the rig, a test was run that tries to mimic a test performed on a rig called the ZF GK3, using the same lubricant, friction material and groove pattern. Because of differences in how the rigs are built, not all parameters could be held the same during the test. At first, the sliding speed at the mean radius, the average power per engagement and the oil sump temperature were kept the same. A drop in the static friction could then be seen over time; this was, however, not the expected behavior. The sliding speed was therefore increased, which also increases the average power and the transferred energy per engagement and has a large effect on the temperature inside the clutch. A drop in the coefficient of friction could then be seen at 50 % of the sliding speed, which was also seen in the test carried out on the ZF GK3. This verifies that the test rig can be used for aging such systems. The aging that takes place seems to depend on the temperature inside the clutch.
67

Inside or Outside: Discourse strategies of Finnish and Japanese workers in Japan

Hakalisto, Tuomas January 2021
The aim of this cross-cultural study is to analyze discourse strategies between Finnish and Japanese participants regarding the indexing of in-group and out-group dynamics in Japanese communication. This research concentrates on Finnish and Japanese people's use of the Japanese language to establish uchi/soto (inside/outside) relationships in work-related situations. The study focuses solely on the in-group and out-group dynamics and socio-pragmatic features during interactions with addressees from inside and outside the company, because in these situations the contrast between the dynamics of in-groups and out-groups is often more transparent. The data was collected and analyzed via a Discourse Completion Task (DCT) survey. This research aims to answer two questions: how different are the nuanced uses of polite expressions and the politeness strategies between the Finnish and the Japanese respondents, and could both respondent groups index uchi and soto relationships in the same way through language use? The results showed similarities in the use of politeness strategies between the two groups. Differences were found in code-switching between various politeness levels. The data serves only as an indicator for the hypothesis and leaves further room for future research.
68

Digital Hardware Architectures for Exact and Approximate DCT Computation Using Number Theoretic Techniques

Edirisuriya, Amila 21 May 2013
No description available.
69

Parallel Computation of the Interleaved Fast Fourier Transform with MPI

Mirza, Ameen Baig January 2008
No description available.
70

Time And Space Efficient Techniques For Facial Recognition

Alrasheed, Waleed 01 January 2013
In recent years there has been increasing interest in face recognition, and many new facial recognition techniques have been introduced. Developments in the field have also led to a growing number of commercial face recognition products. However, face recognition techniques are currently constrained by three main factors: recognition accuracy, computational complexity, and storage requirements, and most current techniques improve one or two of these factors at the expense of the others. In this dissertation, four novel face recognition techniques that improve the storage and computational requirements of face recognition systems are presented and analyzed. Three of the four techniques, namely Quantized/Truncated Transform Domain (QTD), Frequency Domain Thresholding and Quantization (FD-TQ), and Normalized Transform Domain (NTD), utilize the two-dimensional Discrete Cosine Transform (DCT-II), which reduces the dimensionality of facial feature images and thereby the computational complexity. The fourth technique, Normalized Histogram Intensity (NHI), is based on the pixel-intensity histograms of poses' subimages, which reduces both the computational complexity and the storage requirements. Various simulation experiments were conducted in MATLAB to test the proposed methods. For benchmarking, the same experiments were performed using current state-of-the-art face recognition techniques, namely Two-Dimensional Principal Component Analysis (2DPCA), Two-Directional Two-Dimensional Principal Component Analysis ((2D)^2PCA), and Transform Domain Two-Dimensional Principal Component Analysis (TD2DPCA). The experiments were applied to the ORL, Yale, and FERET databases.
The experimental results for the proposed techniques confirm that the use of any of the four novel techniques examined in this study results in a significant reduction in computational complexity and storage requirements compared to the state-of-the-art techniques without sacrificing the recognition accuracy.
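The idea shared by the three DCT-based techniques above (reduce dimensionality by keeping only low-frequency 2-D DCT coefficients, then match in the reduced space) can be sketched as follows. This is a generic illustration, not the QTD, FD-TQ or NTD algorithms themselves; the synthetic "faces" and the 1-nearest-neighbour matcher are assumptions made for the sketch.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix.
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C

def dct_features(img, keep=8):
    # 2-D DCT, keeping only the top-left keep x keep low-frequency block:
    # a keep*keep feature vector instead of img.size pixels, which cuts both
    # storage and per-comparison cost.
    C = dct_matrix(img.shape[0])
    return (C @ img @ C.T)[:keep, :keep].ravel()

def classify(train_feats, labels, img, keep=8):
    # 1-nearest-neighbour matching in the reduced DCT feature space.
    f = dct_features(img, keep)
    return labels[int(np.argmin(np.linalg.norm(train_feats - f, axis=1)))]

# Two synthetic 32x32 "faces" and a noisy probe of the first one.
n = 32
t = np.linspace(0.0, 1.0, n)
face_a = np.outer(t, t)                             # smooth gradient pattern
face_b = np.outer(np.cos(3.0 * t), np.sin(2.0 * t))
train = np.stack([dct_features(face_a), dct_features(face_b)])
labels = ["A", "B"]
rng = np.random.default_rng(0)
probe = face_a + 0.01 * rng.standard_normal((n, n))
```

Here each 1024-pixel image is represented by a 64-dimensional feature vector; small pixel noise barely perturbs the low-frequency coefficients, so the noisy probe still matches its source image.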
