231 |
Digital Watermarking Based Image and Video Quality Evaluation. Wang, Sha. 02 April 2013.
Image and video quality evaluation is essential in applications involving signal transmission, where Reduced-Reference or No-Reference quality metrics are generally more practical than Full-Reference metrics. Digital watermarking based quality evaluation emerges as a potential Reduced- or No-Reference quality
metric, which estimates signal quality by assessing the degradation of the embedded watermark. Since the watermark contains a small
amount of information compared to the cover signal, performing accurate signal quality evaluation is a challenging task. Meanwhile,
the watermarking process causes signal quality loss.
To address these problems, in this thesis, a framework for image and video quality evaluation is proposed based on semi-fragile and adaptive watermarking. In this framework, adaptive watermark embedding strength is assigned by examining the signal quality
degradation characteristics. The "Ideal Mapping Curve" is experimentally generated to relate watermark degradation to signal
degradation so that the watermark degradation can be used to estimate the quality of distorted signals.
Within the proposed framework, a quantization based scheme is first implemented in the DWT domain. In this scheme, the adaptive watermark
embedding strengths are optimized by iteratively testing the image degradation characteristics under JPEG compression. This iterative process yields high quality-evaluation accuracy, but at relatively high computational cost.
As an improvement, a tree structure based scheme is proposed to assign adaptive watermark embedding strengths by pre-estimating the signal degradation characteristics, which greatly improves the
computational efficiency. The SPIHT tree structure and HVS masking are used to guide the watermark embedding, which greatly reduces the signal quality loss caused by watermark embedding. Experimental results show that the tree structure based scheme can evaluate image
and video quality with high accuracy in terms of PSNR, wPSNR, JND, SSIM and VIF under JPEG compression, JPEG2000 compression, Gaussian
low-pass filtering, Gaussian noise distortion, H.264 compression and packet loss related distortion.
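For context, the Full-Reference PSNR metric listed among the evaluation criteria above is simple to compute directly; a minimal NumPy sketch (illustrative only, not code from the thesis — a Reduced- or No-Reference scheme such as the one proposed here must instead estimate this value from watermark degradation alone):

```python
import numpy as np

def psnr(reference, distorted, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-sized images."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Tiny synthetic example: an 8x8 flat image with one perturbed pixel.
ref = np.full((8, 8), 128, dtype=np.uint8)
dist = ref.copy()
dist[0, 0] = 138  # introduce a small distortion
print(round(psnr(ref, dist), 2))
```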
|
232 |
Upper Estimates for Banach Spaces. Freeman, Daniel B. August 2009.
We study the relationship of dominance for
sequences and trees in Banach spaces. In the context of sequences,
we prove that domination of weakly null sequences is a uniform
property. More precisely, if $(v_i)$ is a normalized basic sequence
and $X$ is a Banach space such that every normalized weakly null
sequence in $X$ has a subsequence that is dominated by $(v_i)$, then
there exists a uniform constant $C\geq1$ such that every normalized
weakly null sequence in $X$ has a subsequence that is $C$-dominated
by $(v_i)$. We prove as well that if $V=(v_i)_{i=1}^\infty$
satisfies some general conditions, then a Banach space $X$ with
separable dual has subsequential $V$ upper tree estimates if and
only if it embeds into a Banach space with a shrinking FDD which
satisfies subsequential $V$ upper block estimates. We apply this
theorem to Tsirelson spaces to prove that for all countable ordinals
$\alpha$ there exists a Banach space $X$ with Szlenk index at most
$\omega^{\alpha \omega +1}$ which is universal for all Banach spaces
with Szlenk index at most $\omega^{\alpha\omega}$.
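For readers outside the area, the notion of domination used above can be stated as follows (this is the standard definition, not something specific to the thesis): $(x_i)$ is $C$-dominated by $(v_i)$ if, for every finitely supported scalar sequence $(a_i)$,

```latex
\[
  \Bigl\| \sum_i a_i x_i \Bigr\| \;\le\; C \, \Bigl\| \sum_i a_i v_i \Bigr\|.
\]
```

The uniformity result above says that when every normalized weakly null sequence has some subsequence dominated by $(v_i)$, the constant $C$ can be taken independent of the sequence.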
|
233 |
Space-time block codes with low maximum-likelihood decoding complexity. Sinnokrot, Mohanned Omar. 12 November 2009.
In this thesis, we consider the problem of designing space-time block codes that have low maximum-likelihood (ML) decoding complexity. We present a unified framework for determining the worst-case ML decoding complexity of space-time block codes. We use this framework to not only determine the worst-case ML decoding complexity of our own constructions, but also to show that some popular constructions of space-time block codes have lower ML decoding complexity than was previously known.
Recognizing the practical importance of the two transmit and two receive antenna system, we propose the asymmetric golden code, which is designed specifically for low ML decoding complexity. The asymmetric golden code has lower decoding complexity than previous constructions of space-time codes, regardless of whether the channel varies with time.
We also propose the embedded orthogonal space-time codes, a family of codes for an arbitrary number of antennas and any rate up to half the number of antennas. This family is the first general framework for constructing space-time codes with low-complexity decoding, not only at rate one, but at any rate up to half the number of transmit antennas. Simulation results for up to six transmit antennas show that the embedded orthogonal space-time codes are simultaneously lower in complexity and lower in error probability than some of the most important constructions of space-time block codes with the same number of antennas and the same rate greater than one.
Having considered the design of space-time block codes with low ML decoding complexity on the transmitter side, we also develop efficient algorithms for ML decoding for the golden code, the asymmetric golden code and the embedded orthogonal space-time block codes on the receiver side. Simulations of the bit-error rate performance and decoding complexity of the asymmetric golden code and embedded orthogonal codes are used to demonstrate their attractive performance-complexity tradeoff.
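The classical reference point for low ML decoding complexity is the rate-one Alamouti code for two transmit antennas, whose orthogonal structure reduces joint ML detection to independent per-symbol slicing. A hedged sketch of its encoder and linear combiner (this illustrates the general principle only; it is not the asymmetric golden code, whose construction is specific to the thesis, and the channel values below are hypothetical):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Rows = time slots, columns = antennas: [[s1, s2], [-s2*, s1*]]."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining; ML detection then reduces to per-symbol slicing."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain

# Noiseless single-receive-antenna check: the combiner recovers the symbols.
h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j   # hypothetical flat-fading channel gains
s1, s2 = 1 + 1j, -1 + 1j           # QPSK-like symbols
X = alamouti_encode(s1, s2)
r1 = h1 * X[0, 0] + h2 * X[0, 1]   # received in slot 1
r2 = h1 * X[1, 0] + h2 * X[1, 1]   # received in slot 2
est1, est2 = alamouti_combine(r1, r2, h1, h2)
```

The orthogonality of the code matrix is what cancels the cross-terms in the combiner, so each symbol estimate depends only on its own transmitted symbol.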
|
234 |
Polymer embedding for ultrathin slicing and optical nanoscopy of thick fluorescent samples. Punge, Annedore. 28 October 2009.
No description available.
|
235 |
On Post's embedding problem and the complexity of lossy channels. Chambart, Pierre. 29 September 2011.
Lossy channel systems were originally introduced to model communication protocols. They gave birth to a complexity class which remained poorly understood for a long time. In this thesis we close some of the most important gaps. In particular, we provide matching upper and lower bounds for the time complexity. We then describe a new proof tool: the Post Embedding Problem (PEP), a simple problem, closely related to the Post Correspondence Problem, that is complete for this complexity class. Finally, we study PEP, its variants, and the languages of solutions of PEP, for which we provide complexity results and proof tools such as pumping lemmas.
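The central relation in PEP is the (scattered) subword embedding order: given two morphisms u and v, one asks for a word w such that u(w) embeds into v(w) as a subword. A hedged sketch of the embedding check plus a brute-force search over short words (illustrative only; the example morphisms are made up, a real PEP instance also constrains w to a regular language, and the thesis results concern the problem's complexity, which is far beyond such enumeration):

```python
from itertools import product

def is_subword(x, y):
    """True iff x embeds into y as a scattered subword (x can be obtained
    from y by deleting letters)."""
    it = iter(y)
    return all(c in it for c in x)  # 'in' consumes the iterator left-to-right

def pep_bruteforce(u, v, alphabet, max_len):
    """Search for a non-empty word w with u(w) a subword of v(w).

    u, v are morphisms given as dicts mapping each letter to its image."""
    apply_morphism = lambda m, w: "".join(m[c] for c in w)
    for n in range(1, max_len + 1):
        for w in map("".join, product(alphabet, repeat=n)):
            if is_subword(apply_morphism(u, w), apply_morphism(v, w)):
                return w
    return None

u = {"a": "ab", "b": "bb"}       # hypothetical morphism u
v = {"a": "aabb", "b": "b"}      # hypothetical morphism v
print(pep_bruteforce(u, v, "ab", 4))
```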
|
237 |
A faster algorithm for torus embedding. Woodcock, Jennifer Roselynn. 05 July 2007.
Although theoretically practical algorithms for torus embedding exist, none has yet been successfully implemented, and their complexity may render them impractical. We describe a simple exponential algorithm for embedding graphs on the torus (a surface shaped like a doughnut) and discuss how it was inspired by the quadratic-time planar embedding algorithm of Demoucron, Malgrange and Pertuiset. We show that it is faster in practice than the only fully implemented alternative (also exponential) and explain how both the algorithm itself and the knowledge gained during its development might be used to solve the well-studied problem of finding the complete set of torus obstructions.
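Before running any embedding search, there is a cheap necessary (but not sufficient) edge-count filter that follows from Euler's formula on the torus, V − E + F = 0: a simple graph with more than 3V edges cannot be embedded. A hedged sketch of this standard bound (not the algorithm from the thesis, which performs a full embedding search):

```python
def may_embed_on_torus(num_vertices, edges):
    """Necessary (not sufficient) condition for torus embeddability of a
    simple graph: Euler's formula with at-least-triangular faces gives
    E <= 3V on a genus-1 surface."""
    return len(edges) <= 3 * num_vertices

def complete_graph_edges(n):
    """Edge list of the complete graph K_n."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

# K7 meets the bound exactly (21 <= 21) and is in fact toroidal;
# K8 already fails it (28 > 24), so no search is needed to reject it.
print(may_embed_on_torus(7, complete_graph_edges(7)))  # True
print(may_embed_on_torus(8, complete_graph_edges(8)))  # False
```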
|
238 |
Flash Diffractive Imaging in Three Dimensions. Ekeberg, Tomas. January 2012.
In recent years we have seen the birth of free-electron lasers, a new type of light source ten billion times brighter than synchrotrons and able to produce pulses only a few femtoseconds long. One of the main motivations for building these multi-million-dollar machines was the prospect of imaging biological samples such as proteins and viruses in 3D without the need for crystallization or staining. This thesis contains some of the first biological results from free-electron lasers. These results include 2D images, both of whole cells and of the giant mimivirus, as well as a 3D density map of the mimivirus assembled from diffraction patterns of many virus particles. These are important proof-of-concept experiments, but they also mark the point where free-electron lasers start to produce biologically relevant results. The most noteworthy of these results is the unexpectedly non-uniform density distribution of the interior of the mimivirus. We also present Hawk, the only open-source software toolkit for analysing single-particle diffraction data. The Uppsala-developed program suite supports a wide range of algorithms and takes advantage of Graphics Processing Units, which makes it computationally efficient. Last, the problem introduced by structural variability in samples is discussed. This includes a description of the problem and how it can be overcome, as well as how it could be turned into an advantage that allows us to image samples in all of their conformational states.
|
240 |
Influence of Heterogeneities on Waves of Excitation in the Heart. Baig-Meininghaus, Tariq. 07 September 2017.
No description available.
|