
Perceptual Criteria on Image Compression

Moreno Escobar, Jesús Jaime 01 July 2011 (has links)
Nowadays, digital images are used in many areas of everyday life, but they tend to be big. This increasing amount of information leads us to the problem of image data storage. For example, it is common to represent a color pixel as a 24-bit number, where the red, green and blue channels employ 8 bits each. Consequently, such a color pixel can specify one of 2^24 ≈ 16.78 million colors, and an image at a resolution of 512 × 512 that allocates 24 bits per pixel occupies 786,432 bytes. That is why image compression is important. An important feature of image compression is that it can be lossy or lossless. A compressed image is acceptable provided that the losses of image information are not perceived by the eye, which is possible because a portion of this information is redundant. Lossless image compression is defined as mathematically decoding exactly the same image that was encoded. Lossy image compression, in contrast, needs to identify two features inside the image: the redundancy and the irrelevancy of information. Thus, lossy compression modifies the image data in such a way that, when they are encoded and decoded, the recovered image is similar enough to the original one. How similar the recovered image must be to the original is defined prior to the compression process and depends on the implementation to be performed. Regarding lossy compression, current image compression schemes remove information considered irrelevant by using mathematical criteria. One of the problems of these schemes is that, although the numerical quality of the compressed image is low, it shows a high visual quality, since it does not exhibit many visible artifacts. This is because the mathematical criteria used to remove information do not take into account the visual information actually perceived by the Human Visual System.
Therefore, the aim of an image compression scheme designed to obtain images that do not show artifacts, even though their numerical quality may be low, is to eliminate the information that is not visible to the Human Visual System. Hence, this Ph.D. thesis proposes to exploit the visual redundancy existing in an image by reducing those features that are imperceptible to the Human Visual System. First, we define an image quality assessment metric that is highly correlated with the psychophysical judgements of human observers. The proposed CwPSNR metric weights the well-known PSNR by means of a perceptual low-level model of the Human Visual System, namely the Chromatic Induction Wavelet Model (CIWaM). Second, we propose an image compression algorithm (called Hi-SET), which exploits the high correlation and self-similarity of pixels in a given area or neighborhood by means of a fractal function. Hi-SET possesses the main features of modern image compressors; that is, it is an embedded coder, which allows progressive transmission. Third, we propose a perceptual quantizer (ρSQ), which is a modification of the uniform scalar quantizer. The classical quantizer is applied to the whole set of pixels in a given wavelet sub-band, that is, as a global quantization. Unlike this, the proposed modification performs a local, pixel-by-pixel forward and inverse quantization, introducing into this process a perceptual distortion that depends directly on the spatial information surrounding the pixel. Combining the ρSQ method with the Hi-SET image compressor, we define a perceptual image compressor, called ΦSET. Finally, a coding method for Region of Interest areas is presented, ρGBbBShift, which perceptually weights pixels inside these areas and keeps only the most important perceivable features in the rest of the image (the background). Results presented in this thesis show that CwPSNR is the best-ranked image quality method when it is applied to the most common image compression distortions, such as JPEG and JPEG2000: CwPSNR shows the best correlation with the judgement of human observers, based on the results of psychophysical experiments from the most relevant image quality databases in this field, namely TID2008, LIVE, CSIQ and IVC. Furthermore, the Hi-SET coder obtains better results, both in compression ratio and in perceptual image quality, than the JPEG2000 coder and other coders that use a Hilbert fractal for image compression. Hence, when the proposed perceptual quantization is introduced into the Hi-SET coder, our compressor improves both its numerical and its perceptual efficiency. When the ρGBbBShift method applied to Hi-SET is compared against the MaxShift method applied to the JPEG2000 standard and to Hi-SET, the images coded by our ROI method obtain the best results when the overall image quality is assessed. Both the proposed perceptual quantization ρSQ and the ρGBbBShift method are general algorithms that can be applied to other wavelet-based image compression algorithms, such as JPEG2000, SPIHT or SPECK.
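As a quick, self-contained Python sketch of the quantities mentioned above — the raw storage of a 24-bit 512 × 512 image and the PSNR that CwPSNR re-weights — the following can be run as-is; the CIWaM-based perceptual weighting itself is not reproduced here.

    import numpy as np

    # Storage figures from the abstract: a 512 x 512 image at 24 bits per pixel
    print(512 * 512 * 24 // 8, "bytes")        # 786432
    print(f"{2**24:,} representable colors")   # 16,777,216 (~16.78 million)

    def psnr(original, decoded, peak=255.0):
        """Standard PSNR in dB; CwPSNR additionally weights this value with a
        chromatic induction model (CIWaM), which is not reproduced here."""
        mse = np.mean((original.astype(float) - decoded.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

    rng = np.random.default_rng(6)
    img = rng.integers(0, 256, (512, 512, 3))
    noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
    print(f"PSNR of a mildly distorted copy: {psnr(img, noisy):.1f} dB")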

Two- and Three-Dimensional Coding Schemes for Wavelet and Fractal-Wavelet Image Compression

Alexander, Simon January 2001 (has links)
This thesis presents two novel coding schemes and applications to both two- and three-dimensional image compression. Image compression can be viewed as methods of functional approximation under a constraint on the amount of information allowable in specifying the approximation. Two methods of approximating functions are discussed: Iterated function systems (IFS) and wavelet-based approximations. IFS methods approximate a function by the fixed point of an iterated operator, using consequences of the Banach contraction mapping principle. Natural images under a wavelet basis have characteristic coefficient magnitude decays which may be used to aid approximation. The relationship between quantization, modelling, and encoding in a compression scheme is examined. Context based adaptive arithmetic coding is described. This encoding method is used in the coding schemes developed. A coder with explicit separation of the modelling and encoding roles is presented: an embedded wavelet bitplane coder based on hierarchical context in the wavelet coefficient trees. Fractal (spatial IFSM) and fractal-wavelet (coefficient tree), or IFSW, coders are discussed. A second coder is proposed, merging the IFSW approaches with the embedded bitplane coder. Performance of the coders, and applications to two- and three-dimensional images are discussed. Applications include two-dimensional still images in greyscale and colour, and three-dimensional streams (video).
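To make the fixed-point idea concrete, here is a minimal 1-D fractal (IFSM-style) coder in Python: each range block is approximated by a contracted affine copy of a down-sampled domain block, and decoding iterates the resulting operator from an arbitrary starting signal, which converges by the Banach contraction mapping principle. This is a didactic toy under simplified assumptions, not the two- or three-dimensional coders developed in the thesis.

    import numpy as np

    def encode(signal, R=4):
        """Fit each length-R range block with an affine map of a down-sampled
        length-2R domain block (a contractive operator)."""
        N = len(signal)
        domains = []
        for start in range(0, N - 2 * R + 1, R):
            # average adjacent pairs to shrink a 2R-block down to R samples
            domains.append(signal[start:start + 2 * R].reshape(R, 2).mean(axis=1))
        domains = np.array(domains)
        code = []
        for start in range(0, N, R):
            r = signal[start:start + R]
            best = None
            for j, d in enumerate(domains):
                # least-squares fit r ~ a*d + b, |a| clamped for contractivity
                A = np.vstack([d, np.ones(R)]).T
                (a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
                a = np.clip(a, -0.9, 0.9)
                err = np.sum((a * d + b - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, j, a, b)
            code.append(best[1:])
        return code

    def decode(code, N, R=4, iters=12):
        """Iterate the coded operator; it converges to a fixed point
        approximating the original signal."""
        x = np.zeros(N)
        for _ in range(iters):
            y = np.empty(N)
            for k, (j, a, b) in enumerate(code):
                d = x[j * R:j * R + 2 * R].reshape(R, 2).mean(axis=1)
                y[k * R:(k + 1) * R] = a * d + b
            x = y
        return x

    rng = np.random.default_rng(0)
    sig = np.cumsum(rng.normal(size=64))       # a smooth-ish test signal
    rec = decode(encode(sig), len(sig))
    print("RMSE:", np.sqrt(np.mean((sig - rec) ** 2)))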

Analysis and Design of Lossless Bi-level Image Coding Systems

Guo, Jianghong January 2000 (has links)
Lossless image coding deals with the problem of representing an image with a minimum number of binary bits from which the original image can be fully recovered without any loss of information. Most lossless image coding algorithms reach the goal of efficient compression by taking care of the spatial correlations and statistical redundancy lying in images. Context-based algorithms are the typical algorithms in lossless image coding. One key problem in context-based lossless bi-level image coding algorithms is the design of context templates. By using carefully designed context templates, we can effectively employ the information provided by surrounding pixels in an image. In almost all image processing applications, image data is accessed in a raster scanning manner and is treated as a 1-D integer sequence rather than 2-D data. In this thesis, we present a quadrisection scanning method which is better than raster scanning in that more adjacent surrounding pixels are incorporated into context templates. Based on quadrisection scanning, we develop several context templates and propose several image coding schemes for both sequential and progressive lossless bi-level image compression. Our results show that our algorithms perform better than raster-scanning-based algorithms such as JBIG1, which is used in this thesis as a reference. The application of 1-D grammar-based codes in lossless image coding is also discussed. 1-D grammar-based codes outperform LZ77/LZ78-based compression utilities for general data compression, and they are also effective in lossless image coding. Several coding schemes for bi-level image compression via 1-D grammar codes are provided in this thesis, especially the parallel switching algorithm, which combines the power of 1-D grammar-based codes and context-based algorithms. Most of our results are comparable to or better than those afforded by JBIG1.
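The following Python sketch illustrates the context-template principle: each pixel of a bi-level image is predicted from a small template of previously coded neighbours, and adaptively updated counts give the ideal code length an arithmetic coder driven by this model would approach. The raster-scan template shown here is only illustrative; the thesis replaces raster scanning with quadrisection scanning so that more adjacent neighbours are available to the template.

    import numpy as np

    def context_code_length(img, template):
        """Ideal code length (bits) of a bi-level image under an adaptive
        context model with Laplace-smoothed counts."""
        H, W = img.shape
        counts = {}                        # context -> [count_of_0, count_of_1]
        bits = 0.0
        for y in range(H):
            for x in range(W):
                ctx = tuple(img[y + dy, x + dx]
                            if 0 <= y + dy < H and 0 <= x + dx < W else 0
                            for dy, dx in template)
                c0, c1 = counts.get(ctx, [1, 1])        # Laplace prior
                p = (c1 if img[y, x] else c0) / (c0 + c1)
                bits += -np.log2(p)                     # cost before updating
                cnt = counts.setdefault(ctx, [1, 1])
                cnt[img[y, x]] += 1
        return bits

    # a small raster-scan template of already-coded neighbours (illustrative)
    template = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    rng = np.random.default_rng(1)
    img = (rng.random((64, 64)) < 0.2).astype(int)
    img[16:48, 16:48] = 1                  # add some structure
    print("estimated bits/pixel:", context_code_length(img, template) / img.size)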

Fabrication of a New Model Hybrid Material and Comparative Studies of its Mechanical Properties

Cluff, Daniel Robert Andrew January 2007 (has links)
A novel aluminum foam-polymer hybrid material was developed by filling a 10 pore per inch (0.39 pores per millimeter), 7 % relative density Duocel® open-cell aluminum foam with a thermoplastic polymer of trade name Elvax®. The hybrid was developed to be completely recyclable and easy to process. The foam was solution treated, air quenched and then aged for various times at 180˚C and 220˚C to assess the effect of heat treatment on the mechanical properties of the foam and to choose the appropriate aging condition for the hybrid fabrication. An increase in yield strength, plateau height and energy absorbed was observed in peak-aged aluminum foam in comparison with underaged aluminum foam. Following this result, aluminum foam was utilized either in the peak-aged condition of 4 hrs at 220˚C or in the as-fabricated condition to fabricate the hybrid material. The mechanical properties of the aluminum foam-polymer hybrid and the parent materials were assessed through uniaxial compression testing at static (strain rate dε/dt = 0.008 s⁻¹) and dynamic (dε/dt = 100 s⁻¹) loading rates. The damping characteristics of the aluminum foam-polymer hybrid and the aluminum foam were also obtained by compression-compression cyclic testing at 5 Hz. No benefit to the mechanical properties of the aluminum foam or the aluminum foam-polymer hybrid was obtained by artificial aging to the peak-aged condition compared to the as-fabricated foam. Although energy absorption efficiency is not enhanced by hybrid fabrication, the aluminum foam-polymer hybrid displayed enhanced yield stress, densification stress and total energy absorbed over the parent materials. The higher densification stress indicated that the hybrid is a better energy-absorbing material at higher stresses than the aluminum foam. The aluminum foam was found to be strain rate independent, unlike the hybrid, where the visco-elasticity of the polymer component contributed to its strain rate dependence. The damping properties of both the aluminum foam and the aluminum foam-polymer hybrid were found to be amplitude dependent, with the hybrid material displaying superior damping capability.
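For reference, the energy-absorption quantities mentioned above are conventionally obtained from the compressive stress-strain curve as sketched below in Python; the curve here is made-up for illustration, not data from the thesis.

    import numpy as np

    # Hypothetical quasi-static compression curve for an open-cell foam:
    # a plateau followed by a densification up-turn (illustrative only)
    strain = np.linspace(0.0, 0.6, 200)                      # engineering strain
    stress = 2.0 * np.tanh(40 * strain) + 25 * strain ** 4   # MPa

    # Energy absorbed per unit volume up to a strain e* is the area under the
    # stress-strain curve, W = ∫ sigma d(epsilon); the absorption efficiency
    # compares W with the ideal absorber sigma_max * e*.
    W = np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain))   # MJ/m^3
    efficiency = W / (stress.max() * strain[-1])
    print(f"energy absorbed: {W:.2f} MJ/m^3, efficiency: {efficiency:.2f}")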

Finite Element Studies of an Embryonic Cell Aggregate under Parallel Plate Compression

Yang, Tzu-Yao January 2008 (has links)
Cell shape is important for understanding the mechanics of three-dimensional (3D) cell aggregates. When an aggregate of embryonic cells is compressed between parallel plates, the cell mass and the cells of which it is composed flatten. Over time, the cells typically move past one another and return to their original, spherical shapes, even during sustained compression, although the profile of the aggregate changes little once plate motion stops. Although the surface and interfacial tensions of cells have been credited with driving these internal movements, measurements of these properties have largely eluded researchers. Here, an existing 3D finite element model, designed specifically for the mechanics of cell-cell interactions, is enhanced so that it can be used to investigate aggregate compression. The formulation of that model is briefly presented, and the enhancements made to its rearrangement algorithms are discussed. Simulations run using the model show that the rounding of interior cells is governed by the ratio between the interfacial tension and cell viscosity, whereas the shape of cells in contact with the medium or the compression plates is dominated by their respective cell-medium or cell-plate surface tensions. The model also shows that as an aggregate compresses, its cells elongate more in the circumferential direction than the radial direction. Since experimental data from compressed aggregates are anticipated to consist of confocal sections, geometric characterization methods are devised to quantify the anisotropy of cells and to relate cross sections to 3D properties. The average anisotropy of interior cells as found using radial cross sections corresponds more closely with the 3D properties of the cells than data from a series of parallel sections. A basis is presented for estimating cell-cell interfacial tensions from the cell shape histories they exhibit during the cell reshaping phase of an aggregate compression test.
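One simple way to quantify the cell anisotropy discussed above (not necessarily the characterization devised in the thesis) is to fit an ellipse to a cross-section outline via the covariance of its boundary points and report the ratio of the principal axis lengths, as in this Python sketch with a hypothetical outline.

    import numpy as np

    def anisotropy_from_outline(points):
        """Ratio of principal axis lengths of the best-fit ellipse to a 2-D
        outline (1.0 = round, larger = more elongated)."""
        pts = points - points.mean(axis=0)
        cov = pts.T @ pts / len(pts)
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        return float(np.sqrt(evals[0] / evals[1]))

    # hypothetical outline: a slightly noisy ellipse with 2:1 axes
    t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    outline = (np.c_[2.0 * np.cos(t), 1.0 * np.sin(t)]
               + 0.02 * np.random.default_rng(2).normal(size=(200, 2)))
    print("anisotropy ~", round(anisotropy_from_outline(outline), 2))   # ~2.0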

Investigating Polynomial Fitting Schemes for Image Compression

Ameer, Salah 13 January 2009 (has links)
Image compression is a means of transmitting or storing visual data in the most economical way. Though many algorithms have been reported, research is still needed to cope with the continuous demand for more efficient transmission and storage. This research work explores and implements polynomial fitting techniques as a means to perform block-based lossy image compression. In an attempt to investigate nonpolynomial models, a region-based scheme is implemented to fit the whole image using bell-shaped functions. The idea is simply to view an image as a 3D geographical map consisting of hills and valleys. However, the scheme suffers from high computational demands and inferiority to many available image compression schemes. Hence, only polynomial models receive further consideration. A first-order polynomial (plane) model is designed to work in a multiplication- and division-free (MDF) environment. The intensity values of each image block are fitted to a plane and the parameters are then quantized and coded. Blocking artefacts, a common drawback of block-based image compression techniques, are reduced using an MDF line-fitting scheme at block boundaries. It is shown that a compression ratio of 62:1 at 28.8 dB is attainable for the standard image PEPPER, outperforming JPEG both objectively and subjectively in this part of the rate-distortion characteristics. Inter-block prediction can substantially improve the compression performance of the plane model, reaching a compression ratio of 112:1 at 27.9 dB. This improvement, however, slightly increases computational complexity and reduces pipelining capability. Although JPEG2000 is not a block-based scheme, it is encouraging that the proposed prediction scheme performs better than JPEG2000, computationally and qualitatively; however, more experiments are needed for a more concrete comparison. To reduce blocking artefacts, a new postprocessing scheme based on Weber's law is employed. Images postprocessed using this scheme are subjectively more pleasing, with a marginal increase in PSNR (<0.3 dB). Weber's law is also modified to perform edge detection and quality assessment tasks. These results motivate the exploration of higher-order polynomials that use three parameters, to maintain comparable compression performance. To investigate the impact of higher-order polynomials, through an approximate asymptotic behaviour, a novel linear mapping scheme is designed. Though computationally demanding, the performance of higher-order polynomial approximation schemes is comparable to that of the plane model. This clearly demonstrates the powerful approximation capability of the plane model. As such, the proposed linear mapping scheme constitutes a new approach to image modeling, and is hence worth future consideration.
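The plane model at the heart of this work amounts to a three-parameter fit per block; the Python sketch below shows the plain least-squares version (not the thesis's multiplication- and division-free variant) on a synthetic 8×8 block, with the quantization and coding of the parameters omitted.

    import numpy as np

    def plane_fit_block(block):
        """Least-squares fit z = a*x + b*y + c to an image block; only the
        three parameters (a, b, c) would need to be stored for the block."""
        h, w = block.shape
        y, x = np.mgrid[0:h, 0:w]
        A = np.c_[x.ravel(), y.ravel(), np.ones(h * w)]
        params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
        return params, (A @ params).reshape(h, w)

    rng = np.random.default_rng(3)
    block = (np.add.outer(np.arange(8) * 3.0, np.arange(8) * 1.5)
             + 50 + rng.normal(0, 2, (8, 8)))      # ramp-like 8x8 block + noise
    params, approx = plane_fit_block(block)
    mse = np.mean((block - approx) ** 2)
    print("plane params (a, b, c):", np.round(params, 2), " block MSE:", round(mse, 2))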

Joint Compression and Watermarking Using Variable-Rate Quantization and its Applications to JPEG

Zhou, Yuhan January 2008 (has links)
In digital watermarking, one embeds a watermark into a covertext in such a way that the resulting watermarked signal is robust to a certain distortion caused either by standard data processing in a friendly environment or by malicious attacks in an unfriendly environment. In addition to robustness, there are two other conflicting requirements a good watermarking system should meet: one is referred to as perceptual quality, that is, the distortion incurred to the original signal should be small; the other is payload, that is, the amount of information embedded (the embedding rate) should be as high as possible. To a large extent, digital watermarking is a science and/or art aiming to design watermarking systems that meet these three conflicting requirements. As watermarked signals usually need to be compressed in real-world applications, we have looked into the design and analysis of joint watermarking and compression (JWC) systems to achieve efficient tradeoffs among the embedding rate, compression rate, distortion and robustness. Using variable-rate scalar quantization, an optimum encoding and decoding scheme for JWC systems is designed and analyzed to maximize the robustness in the presence of additive Gaussian attacks under constraints on both compression distortion and composite rate. Simulation results show that, in comparison with the previous work on designing JWC systems using fixed-rate scalar quantization, optimum JWC systems using variable-rate scalar quantization can achieve better performance in the distortion-to-noise ratio region of practical interest. Inspired by the good performance of JWC systems, we then investigate their applications in image compression. We look into the design of a joint image compression and blind watermarking system that maximizes the compression rate-distortion performance while maintaining baseline JPEG decoder compatibility and satisfying the additional constraints imposed by watermarking. Two watermark embedding schemes, odd-even watermarking (OEW) and zero-nonzero watermarking (ZNW), are proposed for robustness to a class of standard JPEG recompression attacks. To maximize the compression performance, two corresponding alternating algorithms are developed to jointly optimize run-length coding, Huffman coding and quantization table selection, subject to the additional constraints imposed by OEW and ZNW respectively. Both algorithms are demonstrated to have better compression performance than the DQW and DEW algorithms developed in the recent literature. Compared with the OEW scheme, the ZNW embedding method sacrifices some payload but earns more robustness against other types of attacks. In particular, the zero-nonzero watermarking scheme can survive a class of valumetric distortion attacks, including additive noise, amplitude changes and recompression, in everyday usage.
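The odd-even idea can be illustrated with a toy parity-based embedder on stand-in quantized coefficients, as in the Python sketch below; the thesis's actual OEW/ZNW schemes are jointly optimized with run-length coding, Huffman coding and quantization table selection, which is not reproduced here.

    import numpy as np

    def embed_oew(q_coeffs, bits):
        """Simplified odd-even embedding: force the parity of selected nonzero
        quantized coefficients to carry the watermark bits (0 -> even, 1 -> odd)."""
        wm = q_coeffs.copy()
        idx = np.flatnonzero(wm)[:len(bits)]       # host positions (illustrative choice)
        for i, b in zip(idx, bits):
            if (abs(wm[i]) % 2) != b:
                wm[i] += 1 if wm[i] > 0 else -1    # smallest parity-fixing change
        return wm, idx

    def extract_oew(wm, idx):
        return [int(abs(wm[i]) % 2) for i in idx]

    rng = np.random.default_rng(4)
    q = rng.integers(-8, 9, size=64)               # stand-in for quantized DCT coefficients
    bits = [1, 0, 1, 1, 0, 0, 1, 0]
    wm, idx = embed_oew(q, bits)
    assert extract_oew(wm, idx) == bits
    print("embedded", len(bits), "bits; max coefficient change:", int(np.max(np.abs(wm - q))))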

Numerical modeling to complement wood tests

Ståhl, Martin January 2013 (has links)
Pressure tests on wood have been conducted to determine its properties. The results were not as expected, and it is therefore difficult to obtain the parameters of the wood. This project examines how a specific defect in the wood sample affects the result. The pressure test is simulated with numerical modeling. In the numerical model the cube's top side is non-parallel with the bottom side; it is, in other words, somewhat tilted. The results from the model agreed with the findings from some of the pressure tests, and for those the wood's properties can easily be calculated. For other pressure tests, other factors might need to be examined before any conclusions can be drawn.

Surface Light Field Generation, Compression and Rendering

Miandji, Ehsan January 2012 (has links)
We present a framework for generating, compressing and rendering Surface Light Field (SLF) data. Our method is based on radiance data generated using physically based rendering methods; thus the SLF data is generated directly instead of re-sampling digital photographs. Our SLF representation decouples spatial resolution from geometric complexity. We achieve this by uniform sampling of the spatial dimension of the SLF function. For compression, we use Clustered Principal Component Analysis (CPCA). The SLF matrix is first clustered into low-frequency groups of points across all directions, and we then apply PCA to each cluster. The clustering ensures that the within-cluster frequency of data is low, allowing for projection using a few principal components. Finally, we reconstruct the CPCA-encoded data using an efficient rendering algorithm. Our reconstruction technique ensures seamless reconstruction of discrete SLF data. We applied our rendering method to fast, high-quality off-line rendering and real-time illumination of static scenes. The proposed framework is not limited by the complexity of materials or light sources, enabling us to render high-quality images describing the full global illumination in a scene.
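A minimal numpy-only sketch of the CPCA step, on a toy matrix standing in for the SLF (rows as surface points, columns as directions), is given below; the cluster count and basis size are arbitrary choices for illustration, not those used in the framework.

    import numpy as np

    def cpca_compress(M, n_clusters=4, n_components=3, iters=20, seed=0):
        """Clustered PCA: k-means on the rows, then a truncated PCA per cluster,
        so each cluster needs only a few principal components."""
        rng = np.random.default_rng(seed)
        centers = M[rng.choice(len(M), n_clusters, replace=False)]
        for _ in range(iters):                                    # plain k-means
            labels = np.argmin(((M[:, None, :] - centers) ** 2).sum(-1), axis=1)
            centers = np.array([M[labels == k].mean(0) if np.any(labels == k) else centers[k]
                                for k in range(n_clusters)])
        recon = np.empty_like(M)
        for k in range(n_clusters):
            X = M[labels == k]
            if len(X) == 0:
                continue
            mean = X.mean(0)
            U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
            B = Vt[:n_components]                                 # cluster basis
            recon[labels == k] = (X - mean) @ B.T @ B + mean      # project + reconstruct
        return recon, labels

    # toy stand-in for an SLF matrix: 4 groups of 50 points, 16 directions each
    rng = np.random.default_rng(5)
    M = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 16)) for c in (0.0, 1.0, 2.0, 3.0)])
    recon, labels = cpca_compress(M)
    print("relative reconstruction error:", np.linalg.norm(M - recon) / np.linalg.norm(M))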

Dynamic modeling and Model Predictive Control of a vapor compression system

Gustavsson, Andreas January 2012 (has links)
The focus of this thesis was on the development of a dynamic modeling capability for a vapor compression system, along with the implementation of advanced multivariable control techniques on the resulting model. Individual component models for a typical vapor compression system were developed based on recent and acknowledged publications within the field of thermodynamics. Parameter properties such as pressure, temperature and enthalpy for each component were connected to detailed thermodynamic tables by algorithms programmed in MATLAB, thus creating a fully dynamic environment. The separate component models were then interconnected, and an overall model for the complete system was implemented in SIMULINK. An advanced control technique known as Model Predictive Control (MPC), along with an open-source QP solver, was then applied to the system. The MPC controller requires the complete state information to be available for feedback, and since this is often either very expensive (requiring a great number of sensors) or at times even impossible (difficult to measure), a full-state observer was implemented. The MPC controller was designed to keep certain system temperatures within tight bands while still being able to respond to varying cooling set-points. The control architecture was successful in achieving the control objective, i.e. it was shown to be adaptable in order to reflect changes in environmental conditions. Cooling demands were met and the temperatures were successfully kept within the given boundaries.
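For illustration, the receding-horizon principle behind MPC can be sketched in a few lines of Python on a hypothetical two-state linear model (not the thesis's vapor-compression model); constraints, which in the thesis are handled by an open-source QP solver, are omitted here so the horizon problem reduces to a linear solve, and the state is assumed to be supplied by the full-state observer.

    import numpy as np

    # Hypothetical discrete-time model x[k+1] = A x[k] + B u[k]
    A = np.array([[0.95, 0.05],
                  [0.00, 0.90]])
    B = np.array([[0.0],
                  [0.1]])
    nx, nu, N = 2, 1, 15                      # states, inputs, prediction horizon
    Q, R = np.diag([10.0, 1.0]), 0.1 * np.eye(nu)

    # Batch prediction matrices: X = Phi x0 + Gamma U over the horizon
    Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
    Gamma = np.zeros((N * nx, N * nu))
    for i in range(N):
        for j in range(i + 1):
            Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)
    Rbar = np.kron(np.eye(N), R)

    def mpc_step(x, x_ref):
        """Minimize the finite-horizon quadratic cost and apply only the first
        input (receding horizon); with constraints this becomes a QP."""
        Xref = np.tile(x_ref, N)
        H = Gamma.T @ Qbar @ Gamma + Rbar
        f = Gamma.T @ Qbar @ (Xref - Phi @ x)
        U = np.linalg.solve(H, f)
        return U[:nu]

    x, x_ref = np.array([2.0, 0.0]), np.array([0.5, 0.5])
    for k in range(40):                       # closed-loop simulation
        u = mpc_step(x, x_ref)
        x = A @ x + B @ u
    print("final state:", np.round(x, 3), "reference:", x_ref)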
