261

A COMPARATIVE STUDY OF COMPRESSING ALGORITHMS TO REDUCE THE OUTPUT FILE SIZE GENERATED FROM A VHDL-AMS SIMULATOR

KALLOOR, BIJOY 03 April 2006 (has links)
No description available.
262

REDUCING MEMORY SPACE FOR COMPLETELY UNROLLED LU FACTORIZATION OF SPARSE MATRICES

THIYAGARAJAN, SANJEEV 11 October 2001 (has links)
No description available.
263

Sequential Scalar Quantization of Two Dimensional Vectors in Polar and Cartesian Coordinates

WU, HUIHUI 08 1900 (has links)
This thesis addresses the design of quantizers for two-dimensional vectors, where the scalar components are quantized sequentially. Specifically, design algorithms for unrestricted polar quantizers (UPQ) and successively refinable UPQs (SRUPQ) for vectors in polar coordinates are proposed. Additionally, algorithms for the design of sequential scalar quantizers (SSQ) for vectors with correlated components in Cartesian coordinates are devised. Both the entropy-constrained (EC) and fixed-rate (FR) cases are investigated.

The proposed UPQ and SRUPQ design algorithms are developed for continuous bivariate sources with circularly symmetric densities. They are globally optimal for the class of UPQs/SRUPQs with magnitude thresholds confined to a finite set. The time complexity of the UPQ design is $O(K^2 + KP_{max})$ in the EC case and $O(KN^2)$ in the FR case, where $K$ is the size of the set from which the magnitude thresholds are selected, $P_{max}$ is an upper bound on the number of phase levels corresponding to a magnitude bin, and $N$ is the total number of quantization bins. The time complexity of the SRUPQ design is $O(K^3P_{max})$ in the EC case and $O(K^2N'^2P_{max})$ in the FR case, where $N'$ denotes the ratio between the number of bins of the fine UPQ and the coarse UPQ.

The SSQ design is considered for finite-alphabet correlated sources. The proposed algorithms are globally optimal for the class of SSQs with convex cells, i.e., where each quantizer cell is the intersection of the source alphabet with an interval of the real line. The time complexity for both the EC and FR cases amounts to $O(K_1^2K_2^2)$, where $K_1$ and $K_2$ are the respective sizes of the two source alphabets. It is also proved that, by applying the proposed SSQ algorithms to finite, uniform discretizations of correlated sources with continuous joint probability density function, the performance approaches that of the optimal SSQs with convex cells for the original sources as the accuracy of the discretization increases.

The proposed algorithms generally rely on solving the minimum-weight path (MWP) problem in the EC case, and the length-constrained MWP problem or a related problem in the FR case, in a weighted directed acyclic graph (WDAG) specific to each problem. Additional computations are needed to evaluate the edge weights in this WDAG. In particular, in the EC-SRUPQ case, this additional work includes solving the MWP problem between multiple node pairs in some other WDAG. In the EC-SSQ (respectively, FR-SSQ) case, the additional computations consist of solving the MWP (respectively, length-constrained MWP) problem for a series of other WDAGs. / Dissertation / Doctor of Philosophy (PhD)
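The MWP subroutine underlying these designs can be sketched compactly. Below is a minimal illustration, assuming the WDAG has already been built with nodes numbered in topological order and edge weights precomputed; in the actual designs the weights encode rate/distortion costs of candidate quantizer cells, so the graph and weights here are hypothetical placeholders only.

```python
# Minimal sketch: minimum-weight path (MWP) in a weighted DAG, the
# core subroutine the design algorithms above rely on. Edge weights
# are problem-specific (e.g., costs of candidate quantizer cells);
# the values below are placeholders for illustration.

def min_weight_path(num_nodes, edges, source, target):
    """edges: dict mapping node u -> list of (v, weight) with u < v,
    so ascending node order is a valid topological order."""
    INF = float("inf")
    dist = [INF] * num_nodes
    pred = [None] * num_nodes
    dist[source] = 0.0
    for u in range(source, target):          # relax in topological order
        if dist[u] == INF:
            continue
        for v, w in edges.get(u, []):
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
    # Recover the optimal path (e.g., the chosen sequence of thresholds).
    path, node = [], target
    while node is not None:
        path.append(node)
        node = pred[node]
    return dist[target], path[::-1]

# Toy usage: 4 nodes, weights standing in for per-cell costs.
edges = {0: [(1, 2.0), (2, 1.5)], 1: [(3, 1.0)], 2: [(3, 2.5)]}
print(min_weight_path(4, edges, 0, 3))  # -> (3.0, [0, 1, 3])
```

The length-constrained variant used in the FR case additionally tracks the number of edges on each candidate path, which adds one dimension to the table above.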
264

Compression Failure of Aluminum Plates Exposed to Constant Heat Flux

Fogle, Emily Johanna 01 June 2010 (has links)
Aluminum is used as a structural member in marine applications because of its low weight. One challenge is to design against failure of aluminum structures in fire. A parametric study was performed to quantify the effects of various parameters on the compression failure of aluminum plates during a fire. A thermo-structural apparatus, consisting of a compression load frame, a hydraulic system, and electric heaters, was designed to perform compression tests on aluminum samples. The effect of dimensional variation on failure behavior was examined: aluminum 5083 and 6082 alloys were tested in three thicknesses, two lengths, and two widths. Three heat fluxes and various buckling stresses were used. Micro-Vickers hardness values were measured before and after testing to quantify the effect of heating on the strength of the aluminum. In general, lower applied stress resulted in higher failure temperature and longer time to failure. Dimensional variations had a negligible effect on failure behavior. The 5083 alloy has a minimum stress level of 50% of the buckling stress at 10 kW/m2 and 10% of the buckling stress at 20 kW/m2, while the 6082 alloy has a minimum stress level of 75% of the buckling stress at 10 kW/m2 and 25% of the buckling stress at 20 kW/m2. The 6082 failed at higher temperatures and longer failure times than the 5083. The presence of insulation on the exposed surface decreased the temperature rise, resulting in longer failure times. Vickers hardness generally decreased with heating. The results describe the effects of these parameters on the compression failure of aluminum. / Master of Science
265

The Ignition of Methane and Coal Dust by Air Compression - The Experimental Proof

Lin, Wei 01 May 1997 (has links)
When a large area of open gob collapses suddenly, a windblast is produced that can cause considerable damage throughout the infrastructure of a mine. In a few cases, the windblast has been accompanied by ignitions of methane and/or coal dust. Analytical and numerical analyses investigated the transient behavior of the air during the short time period in which the roof is falling. This period is sufficiently short to allow adiabatic compression of the air, i.e. negligible heat transfer to rock surfaces. Controlled escape of the air via interconnecting entries limits the build-up of air pressure. However, this same phenomenon causes the potential energy of the falling strata to be concentrated into a diminishing mass of air. Computer simulations predicted that the temperature of the air would increase rapidly as the roof descends, reaching values capable of igniting either methane or coal dust. This thesis concentrates on a series of laboratory tests involving the compression of mixtures of air, methane, and coal dust under a falling weight while allowing controlled escape of the mixture. The transient responses of pressure and temperature sensors were recorded. In addition to an analysis of those records, the thesis highlights the conditions in which ignitions occurred. / Master of Science
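The adiabatic heating the simulations predicted can be illustrated with the ideal-gas relation T2 = T1 * (V1/V2)^(gamma - 1). The rough sketch below ignores the controlled escape of air that the thesis models, so it only shows the order of magnitude involved; the compression ratios and the quoted autoignition temperature are illustrative assumptions, not values from the thesis.

```python
# Back-of-the-envelope sketch of adiabatic-compression heating using
# the ideal-gas relation T2 = T1 * (V1/V2)^(gamma - 1). This ignores
# the controlled escape of air through interconnecting entries that
# the thesis models; it only illustrates why a collapsing gob can
# heat trapped air sharply.

GAMMA = 1.4  # ratio of specific heats for air

def adiabatic_temperature(T1_kelvin, compression_ratio):
    """compression_ratio = V1/V2 (initial over final volume)."""
    return T1_kelvin * compression_ratio ** (GAMMA - 1)

T1 = 300.0  # ~27 C ambient mine air
for ratio in (2, 5, 10, 15):
    T2 = adiabatic_temperature(T1, ratio)
    print(f"V1/V2 = {ratio:2d}: T2 = {T2:6.0f} K ({T2 - 273.15:5.0f} C)")
# Methane's autoignition temperature is roughly 870 K (~600 C),
# reached here at around a 15:1 compression ratio.
```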
266

Dictionary-based Compression Algorithms in Mobile Packet Core

Tikkireddy, Lakshmi Venkata Sai Sri January 2019 (has links)
With the rapid growth in technology, the amount of data to be transmitted and stored is increasing. The cost of storing and retrieving this information has become a major concern, which is where data compression comes into the picture. Data compression is a technique that reduces the size of data to save storage and speed up its transmission from one place to another. Compression methods come in various forms and are mainly categorized into lossy and lossless compression, with lossless compression used when the original data must be recovered exactly. In Ericsson's SGSN-MME, each user's data is compressed independently with one such technique, the Deflate algorithm. Because of its trade-off between compression ratio and compression/decompression speed, Deflate is not optimal for the SGSN-MME's use case. To mitigate this problem, Deflate has to be replaced with a better-suited compression algorithm.
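The trade-off can be demonstrated directly with Python's zlib module, which implements Deflate. A quick sketch follows; the payload and levels are illustrative stand-ins, not SGSN-MME data.

```python
# Quick illustration of Deflate's ratio-vs-speed trade-off using
# Python's zlib (a Deflate implementation). The payload below is a
# stand-in; real SGSN-MME user data would behave differently.
import time
import zlib

payload = b"subscriber-session-record;" * 4000

for level in (1, 6, 9):  # fastest, default, best compression
    t0 = time.perf_counter()
    compressed = zlib.compress(payload, level)
    elapsed = time.perf_counter() - t0
    ratio = len(payload) / len(compressed)
    print(f"level {level}: ratio {ratio:6.1f}x, {elapsed * 1e3:.2f} ms")

# Decompression is level-independent and typically much faster:
assert zlib.decompress(zlib.compress(payload, 9)) == payload
```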
267

On the compressive response of open-cell aluminum foams

Jang, Wen-yea, 1972- 27 September 2012 (has links)
This study is concerned with the mechanical behavior of open-cell aluminum foams. In particular, the compressive response of aluminum foams is analyzed through careful experiments and analyses. The microstructure of foams of three different cell sizes was first analyzed using X-ray tomography. This included characterization of the polyhedral geometry of cells, establishment of the cell anisotropy and the statistical distribution of ligament lengths, and measurement of the ligament cross-sectional area distribution. Crushing experiments were performed on various specimen sizes in the principal directions of anisotropy. The compressive response of aluminum foams is similar to that of many other cellular materials. It starts with a linearly elastic regime that terminates in a limit load followed by an extensive stress plateau. During the plateau, the deformation localizes in the form of inclined but disorganized bands. The evolution of such localization patterns was monitored using X-ray tomography. At the end of the plateau, the response turns into a second stable branch as most cells collapse and the foam densifies. The crushing experiments are simulated numerically using several levels of modeling. The ligaments are modeled as shear-deformable beam elements, and the cellular microstructure is mainly represented using the 14-sided Kelvin cell in periodic domains of various sizes. Other geometries considered include the perturbed Kelvin cell and foams with random microstructures generated by the Surface Evolver software. All microstructures are assigned geometric characteristics that derive directly from the measurements. Unlike elastic foams, for elastic-plastic foams the prevalent instability is a limit load. The limit load can be captured using one fully periodic characteristic cell. The predicted limit stresses agree very well with the measured initiation stresses. This very good performance, coupled with its simplicity, makes the characteristic cell model a powerful tool in metal foam mechanics. The subsequent crushing events, the stress plateau and densification, were successfully reproduced using models with larger, finite-size domains involving several characteristic cells. Results indicate that accurate representation of the ligament bending rigidity and the base material's inelastic properties is essential, whereas the randomness of the actual foam microstructure appears to play a secondary role. / text
268

Mechanical Assessment of Veterinary Orthopedic Implant Technologies: Comparative Studies of Canine Fracture Fixation and Equine Arthrodesis Devices and Techniques

Baker, Sean Travis 03 October 2013 (has links)
The Clamp-Rod Internal Fixator (CRIF) is a fracture fixation implant that is growing in popularity among veterinarians for its versatility and ease of use. Although the CRIF is currently in clinical use, relatively few reports exist describing the biomechanical properties and clinical results of this system. The objective of this study was to compare the in vitro biomechanical properties of a 5 mm CRIF/rod construct to those of a 3.5 mm Limited Contact-Dynamic Compression Plate (LC-DCP)/rod construct using a canine femoral gap model. Paired canine femora were treated with 40 mm mid-diaphyseal ostectomies and randomly assigned to CRIF/rod or LC-DCP/rod. Five pairs of constructs were tested in bending and five pairs were evaluated in torsion. Single ramp-to-failure tests were conducted to evaluate construct stiffness, yield load, and failure mode. While the CRIF/rod and LC-DCP/rod were not significantly different when evaluated in bending, LC-DCP/rod constructs were significantly more rigid than CRIF/rod constructs at higher torsional loads. Below 10 degrees of twist, or 4.92 Nm of torque, the LC-DCP/rod and CRIF/rod were not statistically different in torsion.

Catastrophic injuries of the metacarpophalangeal (MCP) joint resulting in disruption of the suspensory apparatus are the most common fatal injuries in thoroughbred racehorses. Fetlock arthrodesis is a procedure designed to mitigate suffering from such injuries as well as from degenerative diseases affecting articulation. The objective of this study was to assess the in vitro biomechanical behavior of techniques for fetlock arthrodesis. Twelve forelimb pairs were collected from adult horses euthanized for reasons unrelated to disease of the MCP joint. A 14-16-hole broad 4.5 mm Locking Compression Plate (LCP) was compared to a 14-16-hole broad Dynamic Compression Plate (DCP). Both constructs used two "figure-eight" 1.25 mm stainless steel wire tension bands. Fatigue tests and tests to failure were conducted. There were no significant differences in stiffness between groups for the fatigue tests. Stiffness increased after the first fatigue cycle for the LCP/wire (80.56+/-52.22%) and the DCP/wire (56.58+/-14.85%). Above 3.5 mm of axial deformation there was a statistically significant difference between the stiffness of the LCP/wire (3824.12+/-751.84 N/mm) and the DCP/wire (3009.65+/-718.25 N/mm) (P=0.038). The LCP/wire showed increased stiffness above 3.5 mm of compression compared to the DCP/wire. Under fatigue testing conditions the constructs were not statistically different.
269

RADIX 95n: Binary-to-Text Data Conversion

Jones, Greg, 1963-2017. 08 1900 (has links)
This paper presents Radix 95n, a binary-to-text data conversion algorithm. Radix 95n (base 95) is a variable-length encoding scheme that offers slightly better efficiency than is available with conventional fixed-length encoding procedures. Radix 95n advances previous techniques by allowing a greater pool of 7-bit combinations to be made available for 8-bit data translation. Since 8-bit data (i.e. binary files) can prove difficult to transfer over 7-bit networks, the Radix 95n conversion technique provides a way to convert data such as compiled programs or graphic images to printable ASCII characters, allowing their transfer over 7-bit networks.
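A minimal sketch of the base-95 idea follows, using a big-integer encoding over the 95 printable ASCII characters (0x20-0x7E). The actual Radix 95n scheme is variable-length, which this toy does not reproduce; it only shows why base 95 squeezes more out of the printable range than fixed 6-bit schemes such as Base64.

```python
# Toy base-95 encoder/decoder over the 95 printable ASCII characters
# (0x20-0x7E). Radix 95n itself is a variable-length scheme; this
# big-integer version only illustrates the base-95 idea.

def encode95(data: bytes) -> str:
    # A leading 0x01 sentinel preserves any leading zero bytes.
    n = int.from_bytes(b"\x01" + data, "big")
    out = []
    while n:
        n, r = divmod(n, 95)
        out.append(chr(32 + r))  # map digit 0..94 to ' '..'~'
    return "".join(reversed(out))

def decode95(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * 95 + (ord(ch) - 32)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # strip the sentinel byte

blob = bytes(range(16))
encoded = encode95(blob)
assert decode95(encoded) == blob
print(encoded)  # printable ASCII, safe for 7-bit transfer
```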
270

Database Streaming Compression on Memory-Limited Machines

Bruccoleri, Damon F. 01 January 2018 (has links)
Dynamic Huffman compression algorithms operate on data streams with a bounded symbol list. With these algorithms, the complete list of symbols must be contained in main memory or secondary storage. A streaming horizontal-format transaction database can have a very large item list, and a tree with many nodes taxes both the primary memory of the processing hardware and the processing time needed to dynamically maintain the tree. This research investigated Huffman compression of a transaction-streaming database with a very large symbol list, where each item in the transaction database schema's item list is a symbol to compress. The constraint of a large symbol list is, in this research, equivalent to the constraint of a memory-limited machine: a large symbol set results if each item in a large database item list is a symbol to compress in a database stream. In addition, database streams may have a temporal component spanning months or years. Finally, the horizontal format is the format most suited to a streaming transaction database because the transaction IDs are not known beforehand. This research prototypes an algorithm that compresses a transaction database stream. There are several advantages to the memory-limited dynamic Huffman algorithm. Dynamic Huffman algorithms are single-pass algorithms, and in many instances, such as with streaming databases, a second pass over the data is not possible. Previous dynamic Huffman algorithms are not memory limited: their memory requirement is asymptotically O(n), where n is the number of distinct item IDs, because memory must grow to fit all n items. The improvement of the new memory-limited dynamic Huffman algorithm is its O(k) asymptotic memory requirement, where k is the maximum number of nodes in the Huffman tree, k < n, and k is a user-chosen constant. The new memory-limited dynamic Huffman algorithm compresses horizontally encoded transaction databases that do not contain long runs of 0's or 1's.
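For contrast with the single-pass, memory-limited dynamic algorithm described above (which is not reproduced here), a baseline two-pass static Huffman code construction is sketched below; it holds all n distinct symbols in the tree, i.e. the O(n) memory cost the thesis avoids by capping the tree at k nodes. The item names are hypothetical.

```python
# Baseline two-pass static Huffman code builder, for contrast with the
# single-pass, memory-limited dynamic algorithm the abstract proposes.
# This tree holds all n distinct symbols: the O(n) memory requirement
# the thesis reduces to O(k) by capping the tree at k nodes.
import heapq
from collections import Counter

def huffman_codes(symbols):
    freq = Counter(symbols)  # first pass: count symbol frequencies
    # Heap entries: (weight, tiebreak, {symbol: code-so-far}); the
    # tiebreak keeps tuples comparable when weights are equal.
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Toy transaction stream: item IDs with skewed frequencies.
stream = ["milk"] * 8 + ["bread"] * 4 + ["eggs"] * 2 + ["salt"]
print(huffman_codes(stream))  # frequent items get shorter codes
```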
