681

Analysis and Design of Lossless Bi-level Image Coding Systems

Guo, Jianghong January 2000 (has links)
Lossless image coding deals with the problem of representing an image with a minimum number of binary bits from which the original image can be fully recovered without any loss of information. Most lossless image coding algorithms achieve efficient compression by exploiting the spatial correlations and statistical redundancy present in images. Context-based algorithms are typical in lossless image coding. One key problem in context-based lossless bi-level image coding is the design of context templates. By using carefully designed context templates, we can effectively employ the information provided by surrounding pixels in an image. In almost all image processing applications, image data is accessed in a raster scanning manner and treated as a 1-D integer sequence rather than as 2-D data. In this thesis, we present a quadrisection scanning method which is better than raster scanning in that more adjacent surrounding pixels are incorporated into context templates. Based on quadrisection scanning, we develop several context templates and propose several image coding schemes for both sequential and progressive lossless bi-level image compression. Our results show that our algorithms perform better than raster-scanning-based algorithms such as JBIG1, which is used as a reference throughout this thesis. The application of 1-D grammar-based codes in lossless image coding is also discussed. 1-D grammar-based codes outperform LZ77/LZ78-based compression utilities for general data compression, and they are also effective in lossless image coding. Several coding schemes for bi-level image compression via 1-D grammar codes are provided, in particular the parallel switching algorithm, which combines the power of 1-D grammar-based codes and context-based algorithms. Most of our results are comparable to or better than those afforded by JBIG1.
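As a companion to the abstract above, here is a minimal sketch (not taken from the thesis) of how a quadrisection-style scan order might be generated for a 2^n × 2^n bi-level image; the quadrant visiting order and function name are assumptions for illustration. A real coder would feed the visited pixels, together with a context template of previously coded neighbours, into an adaptive arithmetic coder.

```python
# Hypothetical illustration: generate a quadrisection scan order for a
# 2^n x 2^n image by recursively visiting quadrants, instead of the usual
# row-by-row raster order.  Names and quadrant ordering are assumptions.

def quadrisection_order(row, col, size):
    """Yield (row, col) coordinates of a size x size block, recursing into quadrants."""
    if size == 1:
        yield (row, col)
        return
    half = size // 2
    for dr in (0, half):          # top quadrants first, then bottom
        for dc in (0, half):
            yield from quadrisection_order(row + dr, col + dc, half)

if __name__ == "__main__":
    # For a 4x4 image the scan groups spatially adjacent pixels together,
    # so more already-coded neighbours are available for context modelling.
    for rc in quadrisection_order(0, 0, 4):
        print(rc)
```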
682

Two- and Three-Dimensional Coding Schemes for Wavelet and Fractal-Wavelet Image Compression

Alexander, Simon January 2001 (has links)
This thesis presents two novel coding schemes and applications to both two- and three-dimensional image compression. Image compression can be viewed as a method of functional approximation under a constraint on the amount of information allowed in specifying the approximation. Two methods of approximating functions are discussed: iterated function systems (IFS) and wavelet-based approximations. IFS methods approximate a function by the fixed point of an iterated operator, using consequences of the Banach contraction mapping principle. Natural images under a wavelet basis have characteristic coefficient magnitude decays which may be used to aid approximation. The relationship between quantization, modelling, and encoding in a compression scheme is examined. Context-based adaptive arithmetic coding is described; this encoding method is used in the coding schemes developed. A coder with explicit separation of the modelling and encoding roles is presented: an embedded wavelet bitplane coder based on hierarchical context in the wavelet coefficient trees. Fractal (spatial IFSM) and fractal-wavelet (coefficient tree), or IFSW, coders are discussed. A second coder is proposed, merging the IFSW approaches with the embedded bitplane coder. The performance of the coders and their applications to two- and three-dimensional images are discussed. Applications include two-dimensional still images in greyscale and colour, and three-dimensional streams (video).
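A minimal sketch (not from the thesis) of the Banach fixed-point idea underlying IFS methods: repeatedly applying a contractive operator converges to its unique fixed point, which in fractal coding serves as the approximation of the image. The toy 1-D affine contraction below is an assumption chosen purely for illustration.

```python
import numpy as np

# Hypothetical illustration of the contraction-mapping principle behind IFS
# coding: iterating a contractive operator T from any starting point
# converges to its unique fixed point.  Here T is a toy affine map on a
# 1-D signal; a fractal coder applies block-wise affine maps to an image.

def T(x):
    """A contractive affine operator: scale a shifted copy of x and add an offset."""
    shrunk = 0.5 * np.roll(x, 1)      # contraction factor 0.5 < 1
    return shrunk + 0.25              # fixed offset term

x = np.zeros(8)                       # arbitrary starting signal
for _ in range(50):                   # iterate towards the fixed point
    x = T(x)

print(x)                              # approximates the unique fixed point of T
```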
683

Fabrication of a New Model Hybrid Material and Comparative Studies of its Mechanical Properties

Cluff, Daniel Robert Andrew January 2007 (has links)
A novel aluminum foam-polymer hybrid material was developed by filling a 10 pores per inch (0.39 pores per millimeter), 7% relative density Duocel® open-cell aluminum foam with a thermoplastic polymer of trade name Elvax®. The hybrid was developed to be completely recyclable and easy to process. The foam was solution treated, air quenched and then aged for various times at 180°C and 220°C to assess the effect of heat treatment on the mechanical properties of the foam and to choose the appropriate aging condition for the hybrid fabrication. An increase in yield strength, plateau height and energy absorbed was observed in peak-aged aluminum foam in comparison with underaged aluminum foam. Following this result, aluminum foam was utilized either in the peak-aged condition of 4 hrs at 220°C or in the as-fabricated condition to fabricate the hybrid material. Mechanical properties of the aluminum foam-polymer hybrid and the parent materials were assessed through uniaxial compression testing at static (dε/dt = 0.008 s⁻¹) and dynamic (dε/dt = 100 s⁻¹) loading rates. The damping characteristics of the aluminum foam-polymer hybrid and the aluminum foam were also obtained by compression-compression cyclic testing at 5 Hz. No benefit to the mechanical properties of the aluminum foam or the aluminum foam-polymer hybrid was obtained by artificial aging to the peak-aged condition compared to the as-fabricated foam. Although energy absorption efficiency is not enhanced by hybrid fabrication, the aluminum foam-polymer hybrid displayed enhanced yield stress, densification stress and total energy absorbed over the parent materials. The higher densification stress indicated that the hybrid is a better energy-absorbing material than the aluminum foam at higher stresses. The aluminum foam was found to be strain rate independent, unlike the hybrid, where the visco-elasticity of the polymer component contributed to its strain rate dependence. The damping properties of both the aluminum foam and the aluminum foam-polymer hybrid were found to be amplitude dependent, with the hybrid material displaying superior damping capability.
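A minimal sketch (not from the thesis) of how a quantity such as the energy absorbed per unit volume might be extracted from a uniaxial compression stress-strain curve; the synthetic curve and the densification-strain cut-off are assumptions for illustration, not thesis data.

```python
import numpy as np

# Hypothetical illustration: estimate energy absorbed per unit volume from a
# compressive stress-strain curve by trapezoidal integration up to an assumed
# densification strain.  The curve below is synthetic, not measured data.

strain = np.linspace(0.0, 0.6, 200)                 # engineering strain
stress = 2.0 + 0.5 * strain + 40.0 * strain**6      # MPa: plateau, then densification

densification_strain = 0.5                          # assumed cut-off for illustration
mask = strain <= densification_strain

# Trapezoidal rule: sum of 0.5*(s_i + s_{i+1})*(e_{i+1} - e_i) over the kept range.
s, e = stress[mask], strain[mask]
energy_absorbed = float(np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(e)))

print(f"energy absorbed up to {densification_strain:.0%} strain: "
      f"{energy_absorbed:.2f} MJ/m^3 (for stress in MPa)")
```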
684

Finite Element Studies of an Embryonic Cell Aggregate under Parallel Plate Compression

Yang, Tzu-Yao January 2008 (has links)
Cell shape is important to understanding the mechanics of three-dimensional (3D) cell aggregates. When an aggregate of embryonic cells is compressed between parallel plates, the cell mass and the cells of which it is composed flatten. Over time, the cells typically move past one another and return to their original, spherical shapes, even during sustained compression, although the profile of the aggregate changes little once plate motion stops. Although the surface and interfacial tensions of cells have been credited with driving these internal movements, measurements of these properties have largely eluded researchers. Here, an existing 3D finite element model, designed specifically for the mechanics of cell-cell interactions, is enhanced so that it can be used to investigate aggregate compression. The formulation of that model is briefly presented, and the enhancements made to its rearrangement algorithms are discussed. Simulations run using the model show that the rounding of interior cells is governed by the ratio between the interfacial tension and cell viscosity, whereas the shape of cells in contact with the medium or the compression plates is dominated by their respective cell-medium or cell-plate surface tensions. The model also shows that as an aggregate compresses, its cells elongate more in the circumferential direction than in the radial direction. Since experimental data from compressed aggregates are anticipated to consist of confocal sections, geometric characterization methods are devised to quantify the anisotropy of cells and to relate cross sections to 3D properties. The average anisotropy of interior cells as found using radial cross sections corresponds more closely with the 3D properties of the cells than does data from series of parallel sections. A basis is presented for estimating cell-cell interfacial tensions from the cell shape histories exhibited during the cell reshaping phase of an aggregate compression test.
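A minimal sketch (not from the thesis) of one way cell anisotropy could be quantified from a cross-sectional outline, using the principal second moments of the boundary points; the specific measure, the synthetic ellipse and the axis-ratio interpretation are assumptions for illustration and may differ from the characterization methods actually devised in the thesis.

```python
import numpy as np

# Hypothetical illustration: quantify the elongation (anisotropy) of a cell
# cross section from its boundary points via the eigenvalues of the
# covariance of the point coordinates.  The outline below is a synthetic
# ellipse; real data would come from segmented confocal sections.

theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
a, b = 2.0, 1.0                                    # semi-axes of a test ellipse
points = np.column_stack((a * np.cos(theta), b * np.sin(theta)))

centered = points - points.mean(axis=0)
cov = centered.T @ centered / len(points)
eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # principal second moments

anisotropy = np.sqrt(eigvals[0] / eigvals[1])      # ratio of principal axis lengths
print(f"anisotropy (major/minor axis ratio): {anisotropy:.2f}")   # ~2.0 here
```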
685

Investigating Polynomial Fitting Schemes for Image Compression

Ameer, Salah 13 January 2009 (has links)
Image compression is a means of performing transmission or storage of visual data in the most economical way. Though many algorithms have been reported, research is still needed to cope with the continuous demand for more efficient transmission and storage. This research work explores and implements polynomial fitting techniques as a means of performing block-based lossy image compression. In an attempt to investigate non-polynomial models, a region-based scheme is implemented to fit the whole image using bell-shaped functions. The idea is simply to view an image as a 3D geographical map consisting of hills and valleys. However, the scheme suffers from high computational demands and inferiority to many available image compression schemes. Hence, only polynomial models receive further consideration. A first-order polynomial (plane) model is designed to work in a multiplication- and division-free (MDF) environment. The intensity values of each image block are fitted to a plane and the parameters are then quantized and coded. Blocking artefacts, a common drawback of block-based image compression techniques, are reduced using an MDF line-fitting scheme at block boundaries. It is shown that a compression ratio of 62:1 at 28.8 dB is attainable for the standard image PEPPER, outperforming JPEG both objectively and subjectively in this part of the rate-distortion characteristics. Inter-block prediction can substantially improve the compression performance of the plane model, reaching a compression ratio of 112:1 at 27.9 dB. This improvement, however, slightly increases computational complexity and reduces pipelining capability. Although JPEG2000 is not a block-based scheme, it is encouraging that the proposed prediction scheme performs better than JPEG2000, both computationally and qualitatively; however, more experiments are needed for a more concrete comparison. To reduce blocking artefacts, a new postprocessing scheme based on Weber's law is employed. Images postprocessed using this scheme are subjectively more pleasing, with a marginal increase in PSNR (<0.3 dB). Weber's law is also modified to perform edge detection and quality assessment tasks. These results motivate the exploration of higher-order polynomials using three parameters to maintain comparable compression performance. To investigate the impact of higher-order polynomials through an approximate asymptotic behaviour, a novel linear mapping scheme is designed. Though computationally demanding, the performance of the higher-order polynomial approximation schemes is comparable to that of the plane model. This clearly demonstrates the powerful approximation capability of the plane model. As such, the proposed linear mapping scheme constitutes a new approach to image modeling and is hence worth future consideration.
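A minimal sketch (not from the thesis) of fitting a first-order polynomial (plane) to the intensities of an image block by least squares; a practical coder like the one described would quantize and entropy-code the three parameters, and could restructure the arithmetic to avoid multiplications and divisions. The block data, helper names and fitting method here are assumptions for illustration.

```python
import numpy as np

# Hypothetical illustration: least-squares fit of a plane
#   I(x, y) ~= p0 + p1*x + p2*y
# to the pixel intensities of an N x N block.  A block-based coder would
# quantize and transmit (p0, p1, p2) instead of the raw pixels.

def fit_plane(block):
    n = block.shape[0]
    y, x = np.mgrid[0:n, 0:n]
    A = np.column_stack((np.ones(n * n), x.ravel(), y.ravel()))
    params, *_ = np.linalg.lstsq(A, block.ravel().astype(float), rcond=None)
    return params                       # (p0, p1, p2)

def reconstruct(params, n):
    y, x = np.mgrid[0:n, 0:n]
    return params[0] + params[1] * x + params[2] * y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = 100 + 3 * np.arange(8)[None, :] + rng.normal(0, 2, (8, 8))  # ramp + noise
    p = fit_plane(block)
    err = block - reconstruct(p, 8)
    print("plane parameters:", np.round(p, 2))
    print("RMS approximation error:", round(float(np.sqrt((err**2).mean())), 2))
```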
686

Joint Compression and Watermarking Using Variable-Rate Quantization and its Applications to JPEG

Zhou, Yuhan January 2008 (has links)
In digital watermarking, one embeds a watermark into a covertext in such a way that the resulting watermarked signal is robust to a certain distortion caused by either standard data processing in a friendly environment or malicious attacks in an unfriendly environment. In addition to robustness, there are two other conflicting requirements a good watermarking system should meet: one is referred to as perceptual quality, that is, the distortion incurred to the original signal should be small; the other is payload, that is, the amount of information embedded (the embedding rate) should be as high as possible. To a large extent, digital watermarking is a science and/or art aiming to design watermarking systems that meet these three conflicting requirements. As watermarked signals often need to be compressed in real-world applications, we have looked into the design and analysis of joint watermarking and compression (JWC) systems to achieve efficient tradeoffs among the embedding rate, compression rate, distortion and robustness. Using variable-rate scalar quantization, an optimum encoding and decoding scheme for JWC systems is designed and analyzed to maximize the robustness in the presence of additive Gaussian attacks under constraints on both compression distortion and composite rate. Simulation results show that in comparison with previous work on designing JWC systems using fixed-rate scalar quantization, optimum JWC systems using variable-rate scalar quantization can achieve better performance in the distortion-to-noise ratio region of practical interest. Inspired by the good performance of JWC systems, we then investigate their application to image compression. We look into the design of a joint image compression and blind watermarking system that maximizes the compression rate-distortion performance while maintaining baseline JPEG decoder compatibility and satisfying the additional constraints imposed by watermarking. Two watermark embedding schemes, odd-even watermarking (OEW) and zero-nonzero watermarking (ZNW), have been proposed for robustness to a class of standard JPEG recompression attacks. To maximize compression performance, two corresponding alternating algorithms have been developed to jointly optimize run-length coding, Huffman coding and quantization table selection subject to the additional constraints imposed by OEW and ZNW respectively. Both algorithms have been demonstrated to have better compression performance than the DQW and DEW algorithms developed in the recent literature. Compared with the OEW scheme, the ZNW embedding method sacrifices some payload but gains more robustness against other types of attacks. In particular, the zero-nonzero watermarking scheme can survive a class of valumetric distortion attacks, including the additive noise, amplitude changes and recompression encountered in everyday usage.
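A minimal sketch (not from the thesis) of the odd-even idea behind embedding one watermark bit per quantized coefficient: a coefficient's quantization index is forced to have the parity of the bit, which a decoder can read back from the recompressed indices. The rounding-towards-zero rule, function names and toy coefficients are assumptions for illustration.

```python
# Hypothetical illustration of odd-even watermark embedding on quantized
# coefficients: each selected quantization index is nudged so that its
# parity (odd/even) equals the watermark bit.  Details are assumptions.

def embed_oew(indices, bits):
    """Force the parity of each quantization index to match the watermark bit."""
    marked = []
    for q, b in zip(indices, bits):
        if q % 2 != b:
            # Move by one step towards zero when possible to limit distortion.
            q = q - 1 if q > 0 else q + 1
        marked.append(q)
    return marked

def extract_oew(indices):
    """Recover the watermark bits as the parity of each index."""
    return [q % 2 for q in indices]

if __name__ == "__main__":
    quantized = [12, -7, 3, 0, 5, -2]       # toy quantized DCT coefficients
    watermark = [1, 1, 0, 0, 1, 0]
    marked = embed_oew(quantized, watermark)
    assert extract_oew(marked) == watermark
    print(quantized, "->", marked)
```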
687

ECG compression for Holter monitoring

Ottley, Adam Carl 11 April 2007 (has links)
Cardiologists can gain useful insight into a patient's condition when they are able to correlate the patient's symptoms and activities. For this purpose, a Holter monitor is often used - a portable electrocardiogram (ECG) recorder worn by the patient for a period of 24-72 hours. Preferably, the monitor is not cumbersome to the patient, and thus it should be designed to be as small and light as possible; however, the storage requirements for such a long signal are very large and can significantly increase the recorder's size and cost, so signal compression is often employed. At the same time, the decompressed signal must contain enough detail for the cardiologist to be able to identify irregularities. "Lossy" compressors may obscure such details, whereas a "lossless" compressor preserves the signal exactly as captured.

The purpose of this thesis is to develop a platform upon which a Holter monitor can be built, including a hardware-assisted lossless compression method, in order to avoid the signal quality penalties of a lossy algorithm.

The objective of this thesis is to develop and implement a low-complexity lossless ECG encoding algorithm capable of at least a 2:1 compression ratio in an embedded system for use in a Holter monitor.

Different lossless compression techniques were evaluated in terms of coding efficiency as well as suitability for ECG waveforms, random access within the signal, and complexity of the decoding operation. To reduce the physical circuit size, a System on a Programmable Chip (SOPC) design was utilized.

A coder based on a library of linear predictors and Rice coding was chosen and found to give a compression ratio of at least 2:1, and as high as 3:1, on the real-world signals tested, while having low decoder complexity and fast random access to arbitrary parts of the signal. In the hardware-assisted implementation, encoding was a factor of four to five faster than a software encoder running on the same CPU, while allowing the CPU to perform other tasks during the encoding process.
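A minimal sketch (not from the thesis) of the two pieces the selected coder combines: a simple linear predictor over previous samples and Rice coding of the mapped residuals. The second-order predictor, the zigzag mapping, the fixed Rice parameter and the toy samples are assumptions for illustration; the thesis uses a library of predictors selected adaptively.

```python
# Hypothetical illustration of lossless ECG coding with a linear predictor
# followed by Rice coding of the residuals.  The predictor choice, the
# zigzag mapping and the fixed Rice parameter k are assumptions.

def predict(samples):
    """Second-order prediction pred[n] = 2*x[n-1] - x[n-2], falling back near the start."""
    residuals = []
    for n, x in enumerate(samples):
        if n == 0:
            pred = 0
        elif n == 1:
            pred = samples[0]
        else:
            pred = 2 * samples[n - 1] - samples[n - 2]
        residuals.append(x - pred)
    return residuals

def zigzag(r):
    """Map signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(value, k):
    """Unary-coded quotient plus a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

if __name__ == "__main__":
    ecg = [512, 515, 519, 522, 524, 525, 523, 520]        # toy 10-bit samples
    k = 2
    residuals = predict(ecg)[1:]          # send the first sample verbatim (10 bits)
    bitstream = "".join(rice_encode(zigzag(r), k) for r in residuals)
    total_bits = 10 + len(bitstream)
    print(f"raw: {10 * len(ecg)} bits, coded: {total_bits} bits")
```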
688

Feedstock and process variables influencing biomass densification

Shaw, Mark Douglas 17 March 2008 (has links)
Densification of biomass is often necessary to combat the negative storage and handling characteristics of these low bulk density materials. A consistent, high-quality densified product is strongly desired, but not always delivered. Within the context of pelleting and briquetting, binding agents are commonly added to comminuted biomass feedstocks to improve the quality of the resulting pellets or briquettes. Many feedstocks naturally possess such binding agents; however, they may not be abundant enough or available in a form or state that significantly contributes to product binding. Also, process parameters (pressure and temperature) and material variables (particle size and moisture content) can be adjusted to improve the quality of the final densified product.

Densification of ground biomass materials is still not a science, as much work is required to fully understand how the chemical composition and physical properties, along with the process variables, impact product quality. Generating densification and compression data, along with physical and mechanical properties of a variety of biomass materials, will allow for a deeper understanding of the densification process. This in turn will result in the design of more efficient densification equipment, thus improving the feasibility of using biomass for chemical and energy production.

Experiments were carried out in which process (pressure and temperature) and material (particle size and moisture content) variables were studied for their effect on the densification process (compression and relaxation characteristics) and the physical quality of the resulting products (pellets). Two feedstocks were selected for the investigation, namely poplar wood and wheat straw, two prominent Canadian biomass resources. Steam explosion pretreatment was also investigated as a potential method of improving the densification characteristics and binding capacity of the two biomass feedstocks.

Compression/densification and relaxation testing was conducted in a closed-end cylindrical die at loads of 1000, 2000, 3000, and 4000 N (31.6, 63.2, 94.7, and 126.3 MPa) and die temperatures of 70 and 100°C. The raw poplar and wheat straw were first ground through a hammer mill fitted with 0.8 and 3.2 mm screens, while the particle size of the pretreated poplar and wheat straw was not adjusted. The four feedstocks (2 raw and 2 pretreated) were also conditioned to moisture contents of 9 and 15% wb prior to densification.

Previously developed empirical compression models fitted to the data showed that, along with particle rearrangement and deformation, additional compression mechanisms were present during compression. The compressibility and asymptotic modulus of the biomass grinds were increased by increasing the die temperature and decreasing product moisture content. While particle size did not have a significant effect on the compressibility, reducing it increased the resultant asymptotic modulus value. Steam explosion pretreatment served to decrease the compressibility and asymptotic modulus of the grinds.

In terms of the physical quality of the resulting product, increasing the applied load naturally increased the initial density of the pellets (immediately after removal from the die). Increasing the die temperature served to increase the initial pellet density, decrease the dimensional (diametral and longitudinal) expansion (after 14 days), and increase the tensile strength of the pellets. Decreasing the raw feedstock particle size increased the initial pellet density, decreased diametral expansion (with no effect on longitudinal expansion), and increased the tensile strength of the pellets. Decreasing the moisture content of the feedstocks allowed for higher initial pellet densities, but also increased dimensional expansion. The pretreated feedstocks generally had higher initial pellet densities than the raw grinds. Also, the pretreated feedstocks shrank in diameter and length, and had higher tensile strengths than the raw feedstocks. The high performance of the pretreated poplar and wheat straw (compared to their raw counterparts) was attributed to the disruption of the lignocellulosic structure and the removal/hydrolysis of hemicellulose during the steam pretreatment process, which was verified by chemical and Fourier transform infrared analysis. As a result, a higher relative amount of lignin was present. The removal/hydrolysis of hemicellulose also indicates that this lignin was more readily available for binding, thus producing superior pellets.
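A minimal sketch (not from the thesis) of fitting one classical empirical compression model, a Walker-type log-pressure relation, to pressure-volume data; the thesis fits previously developed models that may differ from this one, and the data below are synthetic.

```python
import numpy as np

# Hypothetical illustration: fit a Walker-type empirical compression model,
#   V_R = a + m * ln(P),
# relating the relative volume V_R of the compact to applied pressure P, and
# read the magnitude of the fitted slope as a compressibility measure.
# This classical model may differ from the models used in the thesis, and the
# relative-volume values are synthetic.

pressure = np.array([31.6, 63.2, 94.7, 126.3])        # MPa, matching the test loads
relative_volume = np.array([0.62, 0.51, 0.45, 0.41])  # synthetic V_R measurements

slope, intercept = np.polyfit(np.log(pressure), relative_volume, 1)
print(f"fitted model: V_R = {intercept:.3f} + ({slope:.3f}) * ln(P)")
print(f"compressibility (magnitude of slope): {abs(slope):.3f}")
```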
689

Modeling and validation of crop feeding in a large square baler

Remoué, Tyler 01 November 2007 (has links)
This study investigated crop density in a New Holland BB960 (a branch of CNH Global N.V.) large square baler, as examined by crop trajectory from the precompression room to the bale chamber. The study also examined both the top and bottom plunger pressures and the critical factors affecting the final top and bottom bale densities.

The crop trajectories (wads of crop) were measured using a high-speed camera from the side of the baler through viewing windows. The viewing windows were divided into four regions for determining crop displacement, velocity and acceleration. Crop strain was used to evaluate the potential change in density of the crop before being compressed by the plunger. Generally, the vertical crop strain was found to be higher in the top half of the bale compared to the bottom.

Average strain values for the side measurements were 12.8% for the top and 2.1% for the bottom. Plunger pressures were measured to compare peak pressures between the top and bottom halves of each compressed wad of crop, and to develop pressure profiles based on the plunger's position. Comparing the mean peak plunger pressures between the top and bottom locations indicated that the mean pressures were significantly higher at the top location, with the exception of one particular setting. The resulting pressure profile graphs aided in qualitatively describing the compression process for both top and bottom locations.

A stepwise regression model was developed to examine the difference in material quantity in the top half of the bale compared to the bottom, based on bale weights. The model indicated that flake setting, stuffer ratio and number of flakes had the greatest effect on maintaining consistent bale density between the top and bottom halves of each bale. The R² (coefficient of determination) value for the developed model was 59.9%. This relatively low value could be attributed to the limited number of data points used in the model.
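A minimal sketch (not from the thesis) of computing vertical crop strain from heights tracked in high-speed camera frames; the engineering-strain definition and the toy heights (chosen only to reproduce the reported averages) are assumptions for illustration.

```python
# Hypothetical illustration: compute vertical crop strain in a viewing-window
# region from wad heights tracked in high-speed camera frames.  Engineering
# strain is used here; the height values are synthetic toy numbers chosen to
# reproduce the averages reported in the abstract.

def vertical_strain(initial_height_mm, final_height_mm):
    """Engineering strain of a wad of crop: (h0 - h) / h0, positive in compression."""
    return (initial_height_mm - final_height_mm) / initial_height_mm

if __name__ == "__main__":
    top = vertical_strain(initial_height_mm=250.0, final_height_mm=218.0)
    bottom = vertical_strain(initial_height_mm=250.0, final_height_mm=244.8)
    print(f"top strain: {top:.1%}, bottom strain: {bottom:.1%}")   # ~12.8% vs ~2.1%
```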
690

Longitudinal compression of individual pulp fibers

Dumbleton, David P. 01 January 1971 (has links)
No description available.
