41. What are, and what are not, Inverse Laplace Transforms
Fordham, Edmund J.; Venkataramanan, Lalitha; Mitchell, Jonathan; Valori, Andrea. 11 September 2018
Time-domain NMR, in one and higher dimensionalities, makes routine use of inversion algorithms to generate results called 'T2-distributions', or joint distributions in two (or more) dimensions of other NMR parameters: T1, diffusivity D, pore size a, etc. These are frequently referred to as 'Inverse Laplace Transforms', although the standard inversion of the Laplace Transform long established in textbooks of mathematical physics does not (and cannot) perform the calculation of such distributions. The operations performed in the estimation of a 'T2-distribution' are the estimation of solutions to a Fredholm Integral Equation of the First Kind, a different and more general object whose discretization results in a standard problem in linear algebra, albeit one suffering from well-known ill-conditioning and computational limits for large problem sizes. The Fredholm Integral Equation is not restricted to exponential kernels; the same solution algorithms can be used with kernels of completely different form. On the other hand, (true) Inverse Laplace Transforms, treated analytically, can be of real utility in solving the diffusion problems highly relevant to NMR in porous media.
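The discretised first-kind problem described here can be sketched numerically. In the sketch below, the exponential kernel, grid sizes, synthetic two-peak distribution, and regularisation weight are illustrative assumptions, not values from the paper; Tikhonov-regularised non-negative least squares stands in for whichever solver a given implementation uses:

```python
import numpy as np
from scipy.optimize import nnls

# Discretise m(t) = \int exp(-t/T2) f(T2) dT2 on illustrative grids.
t = np.linspace(1e-3, 1.0, 200)           # echo times (s)
T2 = np.logspace(-3, 0, 50)               # candidate relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])     # kernel matrix, shape (200, 50)

# Synthetic two-component T2 distribution plus noise (assumed, for demo only).
f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.1) ** 2) \
       + 0.5 * np.exp(-0.5 * ((np.log10(T2) + 0.5) / 0.1) ** 2)
m = K @ f_true + 1e-3 * np.random.default_rng(0).standard_normal(t.size)

# Tikhonov regularisation: minimise ||K f - m||^2 + alpha ||f||^2 with f >= 0,
# stacked into one augmented non-negative least-squares problem.
alpha = 1e-2
A = np.vstack([K, np.sqrt(alpha) * np.eye(T2.size)])
b = np.concatenate([m, np.zeros(T2.size)])
f_est, _ = nnls(A, b)                     # estimated 'T2-distribution'
```

Exactly the same code solves a non-exponential kernel: only the line building `K` changes, which is the point the abstract makes about the generality of the Fredholm formulation.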
42. Basic theorems of distributions and Fourier transforms
Long, Na. January 1900
Master of Science / Department of Mathematics / Marianne Korten / Distribution theory is an important tool in studying partial differential equations. Distributions are linear functionals that act on a space of smooth test functions. Distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense. In particular, any locally integrable function has a distributional derivative. There are different possible choices for the space of test functions, leading to different spaces of distributions. In this report, we take a look at some basic theory of distributions and their Fourier transforms, and we also solve some typical exercises at the end.
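A standard first example of such a distributional derivative: the Heaviside step H is locally integrable but has no classical derivative at 0, yet pairing against any test function gives the Dirac delta:

```latex
\langle H', \varphi \rangle
  = -\langle H, \varphi' \rangle
  = -\int_0^\infty \varphi'(x)\,dx
  = \varphi(0)
  = \langle \delta, \varphi \rangle,
\qquad \varphi \in C_c^\infty(\mathbb{R}).
```

Hence $H' = \delta$ in the sense of distributions, even though $H$ is not differentiable in the classical sense.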
43. The discrete cosine transform
Flickner, Myron Dale. January 2011
Typescript (photocopy). / Digitized by Kansas Correctional Industries
44. New developments for imaging energetic photons
Palmer, Max John. January 1997
No description available.
45. Wavelet-based parametric spectrum estimation
Tsakiroglou, Evangelia. January 2001
No description available.
46. Approximated transform and quantisation for complexity-reduced high efficiency video coding
Sazali, Mohd. January 2017
The transform-quantisation stage is one of the most complex operations in the state-of-the-art High Efficiency Video Coding (HEVC) standard, accounting for an 11–41% share of the encoding complexity. This study aims to reduce its complexity, making it suitable for dedicated hardware-accelerated architectures. Adopted methods include a multiplier-free approach, Multiple-Constant Multiplication architectural designs, and exploiting useful properties of the well-known Discrete Cosine Transform. In addition, an approximation scheme was introduced to represent the original HEVC transform and quantisation matrix elements with more hardware-friendly integers. Out of several derived approximation alternatives, an approximated transform matrix (T16) and its downscaled version (ST16) were further evaluated. An approximated quantisation multipliers matrix (Q) and its combination with one transform matrix (ST16 + Q) were also assessed in the HEVC reference software, HM-13.0, using test video sequences of High Definition (HD) quality or higher. Their hardware architectures were designed in IEEE-VHDL language targeting a Xilinx Virtex-6 Field Programmable Gate Array technology to estimate resource savings over the original HEVC transform and quantisation. The T16, ST16, Q, and ST16 + Q approximated transform and/or quantisation matrices provided average Bjøntegaard-Delta bitrate differences of 1.7%, 1.7%, 0.0%, and 1.7%, respectively, in the entertainment scenario and 0.7%, 0.7%, -0.1%, and 0.7%, respectively, in the interactive scenario against HEVC. Conversely, around 16.9%, 20.8%, 21.2%, and 25.9% hardware savings, respectively, were attained in the number of Virtex-6 slices compared with the HEVC transform and/or quantisation. The developed architecture designs achieved a 200 MHz operating frequency, enabling them to support the encoding of Quad Full HD (3840 × 2160) video at 60 frames per second.
Comparing T16 and ST16 with similar designs in the literature yields better hardware efficiency measures (0.0687 and 0.0721, respectively, in megasamples/second/slice). The presented approximated transform and quantisation matrices may be applicable in complexity-reduced HEVC encoding on hardware platforms with non-detrimental coding-performance degradation.
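As a rough illustration of the general idea (the thesis' actual T16/ST16 and Q matrices are not reproduced here), replacing an exact DCT basis with rounded, hardware-friendly integers and measuring the resulting error can be sketched as follows; the 8-point size and the scale factor 64 are assumptions for the demo:

```python
import numpy as np

# Build an unscaled 8-point DCT-II basis matrix C[k, n].
N = 8
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.cos(np.pi * (2 * n + 1) * k / (2 * N))

# Integer approximation: scale by 64 and round, so a hardware datapath can
# use shifts/adds (Multiple-Constant Multiplication) instead of multipliers.
T_approx = np.round(64 * C).astype(int)

# Compare the exact and approximated transforms on a sample column of pixels.
x = np.arange(N, dtype=float)
exact = C @ x
approx = (T_approx @ x) / 64.0           # undo the scaling
err = np.max(np.abs(exact - approx))     # worst-case approximation error
```

In a real codec this per-coefficient error is what trades off against Bjøntegaard-Delta bitrate, as in the 1.7% / 0.7% figures quoted above.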
47. Infrared studies on the spectra and structures of novel carbon molecules
Cárdenas, Rafael. January 2007
Thesis (Ph. D.)--Texas Christian University, 2007. / Title from dissertation title page (viewed Dec. 10, 2007). Includes abstract. Includes bibliographical references.
48. Hardware Implementation of Fast Fourier Transform
Tsai, Hung-Chieh. 20 July 2005
In this thesis, an FFT (Fast Fourier Transform) hardware circuit is designed for OFDM systems. A new memory-table permutation-deletion method, which can reduce the size of the memory storing the twiddle-factor table, is proposed. The architecture of the FFT circuit is based on the faster split-radix algorithm with an SDF (Single-path Delay Feedback) pipeline structure. The number of signal bits is carefully selected by system simulation to meet the system requirements. Based on the simulation results, a small-area FFT circuit is realized for OFDM systems.
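For orientation, a minimal FFT can be sketched as below; note this is the plain radix-2 decimation-in-time recursion, not the split-radix SDF pipeline the thesis implements, and the twiddle factors are computed on the fly rather than read from the stored table the thesis optimises:

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])                       # DFT of even-indexed samples
    odd = fft(x[1::2])                        # DFT of odd-indexed samples
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) # twiddle factor W_n^k
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out
```

A hardware SDF pipeline computes the same butterflies, but streams samples through delay lines with one butterfly per stage instead of recursing, which is what makes the stored twiddle-factor table (and its size) matter.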
49. Digital Watermarking with Progressive Detection
Chang, Kai-Hsiang. 8 August 2000
In this thesis, we propose two frequency-based watermarking algorithms. The first is a DCT-based method: the watermark is embedded in multiple areas and multiple frequency bands, so that a less distorted watermark sequence can be recovered under unintentional distortion. The second is a DWT-based method: the parent-children relationship and the bit-plane coding feature of the EZW algorithm are exploited to embed the watermark, so that the presence or absence of the watermark can be determined during progressive transmission. Experimental results show that both proposed methods resist unintentional attacks, and that the DWT-based method also has better progressive detection capability.
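A toy sketch of the multi-block DCT embedding idea follows; the chosen coefficient position, the embedding strength, and the majority-vote detector are illustrative assumptions, not the thesis' exact scheme:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
image = rng.uniform(0, 255, size=(32, 32))   # stand-in 32x32 image
bits = [1, 0, 1, 1]                          # watermark sequence

def embed(img, bits, strength=20.0, pos=(3, 4)):
    """Embed one bit per 8x8 block by forcing the sign of a mid-band DCT coeff."""
    out = img.copy()
    blocks = [(r, c) for r in range(0, 32, 8) for c in range(0, 32, 8)]
    for (r, c), b in zip(blocks, np.resize(bits, len(blocks))):
        blk = dctn(out[r:r+8, c:c+8], norm="ortho")
        blk[pos] = strength if b else -strength   # sign encodes the bit
        out[r:r+8, c:c+8] = idctn(blk, norm="ortho")
    return out

def detect(img, n_bits, pos=(3, 4)):
    """Recover each bit by majority vote over the blocks that repeat it."""
    blocks = [(r, c) for r in range(0, 32, 8) for c in range(0, 32, 8)]
    votes = np.zeros(n_bits)
    for i, (r, c) in enumerate(blocks):
        blk = dctn(img[r:r+8, c:c+8], norm="ortho")
        votes[i % n_bits] += 1 if blk[pos] > 0 else -1
    return [1 if v > 0 else 0 for v in votes]

marked = embed(image, bits)
recovered = detect(marked, len(bits))
```

Repeating each bit across many blocks and bands is the mechanism that keeps the extracted sequence intact under mild, unintentional distortion.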
50. Image Watermarking Using Corresponding Location Relationship
Feng, Jyh-Ming. 29 August 2000
Much existing research on image watermarking for copyright protection requires the original image when retrieving the watermark. Although this is more robust, it raises problems concerning authorization of the original image. In this thesis, we propose a DCT-domain method that does not use the original image. Exploiting the energy-compaction property of the DCT, the energies of blocks are used for further processing. In the embedding algorithm, the DC coefficients of the blocks are first collected and then divided by some number to obtain remainders. The values of the embedded data are encoded in the relationship between the corresponding location of the embedded data and the other locations, by adjusting the remainders at all locations.
Typical watermarking attacks and noise are used to evaluate the robustness of our method. Compared with other competing algorithms, the watermark survival rate of our method is almost the same as, or even better than, that of methods which require the original image. With an embedded data length of 512 bits, the error rate under the lowest-quality JPEG compression can be kept below 1%. The proposed method can be further improved by adjusting the remainder values and the block size, providing flexibility to satisfy different requirements.
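A minimal sketch of remainder-based embedding in DC coefficients, in the spirit of the abstract but simplified to a per-coefficient QIM-style variant; the step size Q, the interval placement, and the noise level are assumptions, and the thesis' corresponding-location relationship is not reproduced:

```python
import numpy as np

Q = 16.0  # assumed quantisation step for the remainder

def embed_bit(dc, bit):
    """Force (dc mod Q) into the middle of the half-interval that encodes bit."""
    base = np.floor(dc / Q) * Q
    return base + (Q * 0.75 if bit else Q * 0.25)

def extract_bit(dc):
    """A remainder in the upper half of [0, Q) reads as 1, lower half as 0."""
    return 1 if (dc % Q) >= Q / 2 else 0

dcs = np.array([103.2, 55.7, 210.0, 88.4])   # stand-in block DC coefficients
bits = [1, 0, 0, 1]
marked = np.array([embed_bit(d, b) for d, b in zip(dcs, bits)])

# Mild 'attack': additive noise smaller than the quarter-step guard margin.
noisy = marked + np.random.default_rng(0).uniform(-2, 2, size=4)
recovered = [extract_bit(d) for d in noisy]
```

Placing each remainder at the centre of its half-interval leaves a guard margin of Q/4, which is why the bits survive perturbations smaller than that margin.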