481

Application of Fatigue Theories to Seismic Compression Estimation and the Evaluation of Liquefaction Potential

Lasley, Samuel James 21 August 2015 (has links)
Earthquake-induced liquefaction of saturated soils and seismic compression of unsaturated soils are major sources of hazard to infrastructure, as attested by the wholesale condemnation of neighborhoods surrounding Christchurch, New Zealand. The hazard continues to grow as cities expand into liquefaction- and seismic compression-susceptible areas; hence, accurate evaluation of both hazards is essential. The liquefaction evaluation procedure presented herein is based on dissipated energy and an SPT liquefaction/no-liquefaction case history database. It is as easy to implement as existing stress-based simplified procedures. Moreover, by using the dissipated energy of the entire loading time history to represent the demand, the proposed procedure melds the existing stress-based and strain-based liquefaction procedures into a new, robust method that is capable of evaluating liquefaction susceptibility from both earthquake and non-earthquake sources of ground motion. New relationships for the stress reduction coefficient ($r_d$) and the number of equivalent cycles ($n_{eq}$) are also presented herein. The $r_d$ relationship has less bias and uncertainty than other common stress reduction coefficient relationships, and both the $n_{eq}$ and $r_d$ relationships are proposed for use in active tectonic and stable continental regimes. The $n_{eq}$ relationship proposed herein is based on an alternative application of the Palmgren-Miner damage hypothesis, shifting from the existing high-cycle, low-damage fatigue framework to a low-cycle framework more applicable to liquefaction analyses. Seismic compression is the accrual of volumetric strains caused by cyclic loading, and presented herein is a "non-simplified" model to estimate seismic compression. The proposed model is based on a modified version of the Richart-Newmark non-linear cumulative damage hypothesis and was calibrated from the results of drained cyclic simple shear tests. The proposed model can estimate seismic compression from any arbitrary strain time history. It is more accurate than other "non-simplified" seismic compression estimation models over a greater range of volumetric strains, and it can be used to compute the number of equivalent shear-strain cycles for use in "simplified" seismic compression models in a manner consistent with the seismic compression phenomenon. / Ph. D.
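The Palmgren-Miner hypothesis referenced above treats liquefaction demand as accumulated fatigue damage. As a rough illustration of how an irregular loading history is condensed into a number of equivalent uniform cycles under that hypothesis, here is a minimal Python sketch; the damage exponent, the 65%-of-peak reference amplitude, and the half-cycle counting are generic assumptions, not the calibrated $n_{eq}$ relationship developed in the dissertation.

```python
# Illustrative only: generic half-cycle counting and Palmgren-Miner damage
# summation, not the calibrated n_eq relationship from the dissertation.
import numpy as np

def equivalent_cycles(stress_history, b=3.0, ref_fraction=0.65):
    """Condense an irregular stress time history into a number of
    equivalent uniform cycles at a reference amplitude."""
    x = np.asarray(stress_history, dtype=float)
    interior = x[1:-1]
    # Local extrema (both neighboring differences have the same sign)
    is_extremum = (interior - x[:-2]) * (interior - x[2:]) > 0
    amplitudes = np.abs(interior[is_extremum])
    a_ref = ref_fraction * np.abs(x).max()
    # Each half cycle of amplitude a_i contributes damage 0.5 * (a_i / a_ref)**b,
    # so one full uniform cycle at a_ref contributes 1.0 and total damage = n_eq.
    return float(np.sum(0.5 * (amplitudes / a_ref) ** b))

# Synthetic decaying irregular motion as a stand-in for a recorded time history
t = np.linspace(0.0, 20.0, 2001)
motion = np.exp(-0.1 * t) * np.sin(2.0 * np.pi * 1.5 * t)
print(f"n_eq ~ {equivalent_cycles(motion):.2f}")
```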
482

Enabling Approximate Storage through Lossy Media Data Compression

Worek, Brian David 08 February 2019 (has links)
Memory capacity, bandwidth, and energy all continue to present hurdles in the quest for efficient, high-speed computing. Recognition, mining, and synthesis (RMS) applications in particular are limited by the efficiency of the memory subsystem due to their large datasets and need to frequently access memory. RMS applications, such as those in machine learning, deliver intelligent analysis and decision making through their ability to learn, identify, and create complex data models. To meet growing demand for RMS application deployment in battery-constrained devices, such as mobile and Internet-of-Things devices, designers will need novel techniques to improve system energy consumption and performance. Fortunately, many RMS applications demonstrate inherent error resilience, a property that allows them to produce acceptable outputs even when the data used in computation contain errors. Approximate storage techniques across circuits, architectures, and algorithms exploit this property to improve the energy consumption and performance of the memory subsystem through quality-energy scaling. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution, which uses lossy compression to reduce the storage cost of media data. / MS / Computer memory systems present challenges in the quest for more powerful overall computing systems. Computer applications with the ability to learn from large sets of data in particular are limited because they need to frequently access the memory system. These applications are capable of intelligent analysis and decision making due to their ability to learn, identify, and create complex data models. To meet growing demand for intelligent applications in smartphones and other Internet-connected devices, designers will need novel techniques to improve energy consumption and performance. Fortunately, many intelligent applications are naturally resistant to errors, which means they can produce acceptable outputs even when there are errors in inputs or computation. Approximate storage techniques across computer hardware and software exploit this error resistance to improve the energy consumption and performance of computer memory by purposefully reducing data precision. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution, which uses lossy compression to reduce the storage cost of media data.
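As a toy illustration of the quality-versus-storage trade-off that lossy approximate storage exploits, the sketch below truncates the low-order bits of 8-bit media samples and reports the resulting PSNR; the bit widths and the random test image are placeholders, not the compression scheme contributed by the thesis.

```python
# Toy example of quality-energy scaling via lossy storage: keep only the
# top bits of each 8-bit media sample. Bit widths and the random test
# "image" are placeholders, not the thesis' compression scheme.
import numpy as np

def truncate_bits(samples: np.ndarray, kept_bits: int) -> np.ndarray:
    """Zero out the (8 - kept_bits) least significant bits of each sample."""
    drop = 8 - kept_bits
    return (samples >> drop) << drop

def psnr(original: np.ndarray, approx: np.ndarray) -> float:
    mse = np.mean((original.astype(float) - approx.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in media data

for bits in (7, 6, 4):
    approx = truncate_bits(image, bits)
    print(f"{bits} bits/sample stored -> PSNR {psnr(image, approx):.1f} dB")
```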
483

Effective compression therapy

Vowden, Kath, Vowden, Peter January 2012 (has links)
No
484

Wavelet-based Image Compression Using Human Visual System Models

Beegan, Andrew Peter 22 May 2001 (has links)
Recent research in transform-based image compression has focused on the wavelet transform due to its superior performance over other transforms. Performance is often measured solely in terms of peak signal-to-noise ratio (PSNR) and compression algorithms are optimized for this quantitative metric. The performance in terms of subjective quality is typically not evaluated. Moreover, the sensitivities of the human visual system (HVS) are often not incorporated into compression schemes. This paper develops new wavelet models of the HVS and illustrates their performance for various scalar wavelet and multiwavelet transforms. The performance is measured quantitatively (PSNR) and qualitatively using our new perceptual testing procedure. Our new HVS model is comprised of two components: CSF masking and asymmetric compression. CSF masking weights the wavelet coefficients according to the contrast sensitivity function (CSF)---a model of humans' sensitivity to spatial frequency. This mask gives the most perceptible information the highest priority in the quantizer. The second component of our HVS model is called asymmetric compression. It is well known that humans are more sensitive to luminance stimuli than they are to chrominance stimuli; asymmetric compression quantizes the chrominance spaces more severely than the luminance component. The results of extensive trials indicate that our HVS model improves both quantitative and qualitative performance. These trials included 14 observers, 4 grayscale images and 10 color images (both natural and synthetic). For grayscale images, although our HVS scheme lowers PSNR, it improves subjective quality. For color images, our HVS model improves both PSNR and subjective quality. A benchmark for our HVS method is the latest version of the international image compression standard---JPEG2000. In terms of subjective quality, our scheme is superior to JPEG2000 for all images; it also outperforms JPEG2000 by 1 to 3 dB in PSNR. / Master of Science
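To make the CSF-masking idea concrete, here is a minimal numpy sketch in which each subband of a one-level Haar transform is weighted before uniform quantization, so that the most perceptible low-frequency coefficients are effectively quantized more finely; the Haar transform and the weight values are illustrative stand-ins, not the HVS models or multiwavelet transforms evaluated in the thesis.

```python
# Illustrative CSF masking: weight each wavelet subband before quantization.
# The one-level Haar transform and the weight values are placeholders, not
# the HVS models or transforms evaluated in the thesis.
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform; returns the LL, LH, HL, HH subbands."""
    a, b = img[:, 0::2], img[:, 1::2]
    lo, hi = (a + b) / 2.0, (a - b) / 2.0                      # along rows
    ll, hl = (lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0
    lh, hh = (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0
    return {"LL": ll, "LH": lh, "HL": hl, "HH": hh}

# Placeholder CSF weights: smooth content weighted most, diagonal detail least.
CSF_WEIGHTS = {"LL": 1.00, "LH": 0.60, "HL": 0.60, "HH": 0.35}

def csf_masked_quantize(img, step=8.0):
    """Scale each subband by its CSF weight, then quantize uniformly, so the
    most perceptible coefficients see a finer effective step size."""
    return {name: np.round(CSF_WEIGHTS[name] * coeffs / step)
            for name, coeffs in haar2d(img.astype(float)).items()}

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(8, 8)).astype(float)
print({name: band.shape for name, band in csf_masked_quantize(image).items()})
```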
485

Predicting the Failure of Aluminum Exposed to Simulated Fire and Mechanical Loading Using Finite Element Modeling

Arthur, Katherine Marie 10 June 2011 (has links)
The interest in the use of aluminum as a structural material in marine applications has increased greatly in recent years. This increase is primarily due to the low weight of aluminum compared to other structural materials as well as its ability to resist corrosion. However, a critical issue in the use of any structural material for naval applications is its response to fire. Past experience has shown that finite element programs can produce accurate predictions of failure of structural components, and parameter studies conducted within finite element programs are often easier to implement than corresponding studies conducted experimentally. In this work, the compression-controlled failures of aluminum plates subjected to an applied mechanical load and an applied heat flux (to simulate fire) were predicted through the use of finite element analysis. Numerous studies were completed on these finite element models: the thicknesses of the plates were varied, as were the applied heat flux and the applied compressive stresses, and the effects of surface emissivity, of insulation on the exposed surface of the plate, and of the initial imperfection of the plates were also studied. Beyond these physical conditions, the element type of both the solid and shell models and the mesh density were also varied. Two different creep laws were used to curve fit raw creep data to understand the effects of creep in the buckling failure of the aluminum plates. These predictions were compared with experiments (from a previous study) conducted on aluminum plates approximately 800 mm in length, 200 mm in width, and 6-9 mm in thickness, clamped at both ends to create fixed boundary conditions. A hydraulic system and a heater were used to apply the compressive load and the heat flux, respectively. Comparisons between predicted and experimental results reveal that finite element analysis can accurately predict the compression-controlled failure of aluminum plates subjected to simulated fire. However, under certain combinations of the applied heat flux and compressive stress, the mesh density as well as the choice of element may have a significant impact on the results. Also, it is undetermined which creep curve-fitting model produces the most accurate results due to the influence of other parameters such as the initial imperfection. / Master of Science
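As an aside on the creep curve fitting mentioned above, a Norton-type secondary creep law (strain rate = A * stress**n) is linear in log space and can be fit in a few lines; the law, constants, and synthetic data below are assumptions for illustration, not the two creep models or the raw creep data used in the thesis.

```python
# Illustrative curve fit of a Norton-type secondary creep law,
# rate = A * stress**n, to synthetic data; the law, constants, and data are
# assumptions and not the creep models or measurements from the thesis.
import numpy as np

stress_mpa = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
true_A, true_n = 1e-10, 4.0
rng = np.random.default_rng(2)
creep_rate = true_A * stress_mpa ** true_n * (1 + 0.05 * rng.standard_normal(stress_mpa.size))

# The Norton law is linear in log space: log(rate) = log(A) + n * log(stress)
n_fit, log_A_fit = np.polyfit(np.log(stress_mpa), np.log(creep_rate), 1)
print(f"fitted n = {n_fit:.2f}, A = {np.exp(log_A_fit):.2e} (1/(MPa^n s))")
```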
486

Open-Source Parameterized Low-Latency Aggressive Hardware Compressor and Decompressor for Memory Compression

Jearls, James Chandler 16 June 2021 (has links)
In recent years, memory has shown to be a constraining factor in many workloads. Memory is an expensive necessity in many situations, from embedded devices with a few kilobytes of SRAM to warehouse-scale computers with thousands of terabytes of DRAM. Memory compression has existed in all major operating systems for many years; however, while faster than swapping to a disk, memory decompression adds latency to data read operations. Companies and research groups have investigated hardware compression to mitigate these problems. Still, open-source low-latency hardware compressors and decompressors do not exist; as such, every group that studies hardware compression must re-implement them. Importantly, because the devices that can benefit from memory compression vary so widely, there is no single solution that addresses all devices' area, latency, power, and bandwidth requirements. This work intends to address these issues with hardware compressors and decompressors. It implements hardware accelerators for three popular compression algorithms: LZ77, LZW, and Huffman encoding. Each implementation includes a compressor and a decompressor, and all designs are entirely parameterized, with a total of 22 parameters across the designs in this work. All of the designs are open-source under a permissive license. Finally, configurations of the work can achieve decompression latencies under 500 nanoseconds, far closer than existing works to the 255 nanoseconds required to read an uncompressed 4 KB page, while still achieving compression ratios comparable to software compression algorithms. / Master of Science / Computer memory, the fast, temporary storage where programs and data are held, is expensive and limited. Compression allows data and programs to be held in memory in a smaller format so they take up less space. This work implements hardware designs for compression and decompression accelerators to make it faster for the programs using the compressed data to access it. It includes three hardware compressor and decompressor designs that can be easily modified and are free for anyone to use however they would like. The included designs are orders of magnitude smaller and less expensive than the existing state of the art, and they reduce decompression time by up to 6x. These smaller areas and latencies result in a relatively small reduction in compression ratios: only 13% on average across the tested benchmarks.
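The accelerators themselves are hardware designs, but a compact software reference helps fix what one of the three algorithms computes. The sketch below is a minimal LZW encoder/decoder in Python with a dictionary-size cap loosely analogous to the kind of knob a parameterized hardware design exposes; it is illustrative only and is not drawn from the open-source RTL described in the thesis.

```python
# Minimal software reference for LZW, one of the three accelerated algorithms;
# the max dictionary size loosely mirrors a hardware parameter. Illustrative
# only -- unrelated to the actual open-source RTL.

def lzw_compress(data: bytes, max_dict_size: int = 4096) -> list[int]:
    dictionary = {bytes([i]): i for i in range(256)}
    current, out = b"", []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate
        else:
            out.append(dictionary[current])
            if len(dictionary) < max_dict_size:
                dictionary[candidate] = len(dictionary)
            current = bytes([byte])
    if current:
        out.append(dictionary[current])
    return out

def lzw_decompress(codes: list[int], max_dict_size: int = 4096) -> bytes:
    dictionary = {i: bytes([i]) for i in range(256)}
    previous = dictionary[codes[0]]
    out = [previous]
    for code in codes[1:]:
        # Handle the code that is one step ahead of the decoder's dictionary
        entry = dictionary[code] if code in dictionary else previous + previous[:1]
        out.append(entry)
        if len(dictionary) < max_dict_size:
            dictionary[len(dictionary)] = previous + entry[:1]
        previous = entry
    return b"".join(out)

page = b"abababab" * 64  # stand-in for a compressible 512-byte block
assert lzw_decompress(lzw_compress(page)) == page
```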
487

Case Study: Settlement at Nepal Hydropower Dam during the 2014-2015 Gorkha Earthquake Sequence

Vuper, Ailie Marie 30 March 2021 (has links)
The Tamakoshi Dam in Nepal experienced 19 cm of settlement due to three earthquakes that took place from December 14, 2014 to May 12, 2015. This settlement caused massive damage and halted construction, and it was believed to have been caused by seismic compression. Seismic compression is the accrual of contractive volumetric strain in sandy soils during earthquake shaking for cases where the generated excess pore water pressures are low. The purpose of this case study is to investigate the settlements of the dam intake block relative to the right abutment block of the dam during the three earthquakes. Toward this end, soil profiles for the dam were developed from the boring logs, and suites of ground motions were selected and scaled to be representative of the shaking at the base of the dam for two of the three earthquakes, which were well documented. Equivalent linear analyses were completed for the suites of ground motions to produce shear strain time histories, which were then used in the seismic compression prediction procedure proposed by Jiang et al. (2020). The results were found not to align with the settlement observed in the field, so post-liquefaction consolidation was also considered as a possible cause of the settlement. The results from that analysis showed that consideration of post-liquefaction consolidation also did not yield settlements representative of those observed in the field. More detailed studies are recommended to assess the settlements that were observed at the dam site, particularly analyses that take into account below- and above-grade topographic effects on the ground motions and settlements at the ground surface. / Master of Science / The Tamakoshi Dam in Nepal experienced 19 cm of settlement due to three earthquakes that took place from December 14, 2014 to May 12, 2015. This settlement caused massive damage and halted construction, and it was believed to have been caused by seismic compression. Seismic compression is the accrual of contractive volumetric strain in sandy soils during earthquake shaking for cases where the generated excess pore water pressures are low. The purpose of this case study is to investigate the settlements of the dam intake block relative to the right abutment block of the dam during the three earthquakes. Representative soil profiles were developed based on data collected from the site for analysis of the settlement. Two approaches were used to compute predicted settlement: one that considered only seismic compression as the cause of settlement, and a hybrid method that considered both seismic compression and post-liquefaction consolidation. Both approaches predicted settlement values that were less than what was observed in the field. It was found that the ground motion prediction equations used in the analysis were not representative of the tectonic setting in Nepal, which was likely the main reason for the under-prediction. The relevance of this research lies in using methodology developed in academia to analyze a real-world event and draw conclusions about the methodology's applicability.
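The final step of a settlement estimate of this kind is simply integrating the predicted volumetric strains over the soil profile. The sketch below shows that bookkeeping with made-up layer thicknesses and strains; the values are not the Tamakoshi profile and the strains are not outputs of the Jiang et al. (2020) procedure.

```python
# Final bookkeeping step of a seismic-compression settlement estimate:
# settlement = sum over layers of (volumetric strain x layer thickness).
# Layer thicknesses and strains are made-up placeholders, not the Tamakoshi
# profile or the output of the Jiang et al. (2020) procedure.
layer_thickness_m = [1.5, 2.0, 2.0, 3.0]              # per-layer thickness (m)
volumetric_strain = [0.004, 0.0025, 0.0015, 0.0005]   # predicted strain (-)

settlement_m = sum(t * ev for t, ev in zip(layer_thickness_m, volumetric_strain))
print(f"predicted surface settlement ~ {settlement_m * 100:.1f} cm")
```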
488

Développement d'une nouvelle technique de compression pour les codes variables à fixes quasi-instantanés

Haddad, Fatma 24 April 2018 (has links)
Not all data compression techniques use a dictionary to represent their codewords. A dictionary is a set of codewords associated with the source symbols during the encoding operation, and the correspondence between a codeword and a source symbol depends on the compression algorithm. In general, each algorithm builds its dictionary according to a set of properties, among them the prefix property. The prefix property is essential for fixed-to-variable (FV) codes such as the Huffman and Shannon-Fano algorithms, but it is optional for variable-to-fixed (VF) codes, and relaxing it makes it possible to build a more efficient dictionary, as in the case of almost instantaneous codes. In this spirit, Yamamoto and Yokoo eliminated this condition to create a dictionary better than Tunstall's. The dictionaries proposed by Yamamoto and Yokoo are called almost instantaneous VF (AIVF) codes. Building on their contributions, we deduced that their technique can, in some cases, yield suboptimal variable-to-fixed codes, which led us to suggest corrections to their algorithms to improve their efficiency. We also propose another mechanism for constructing VF codes using dynamic programming. / Various techniques of data compression use a dictionary to represent their codewords. A dictionary is a set of codewords associated with the source symbols during the encoding operation. The correspondence between a codeword and a source symbol depends on the compression algorithm. Usually, the prefix property is key for fixed-to-variable (FV) codes, as demonstrated by the Huffman and Shannon-Fano algorithms. However, this property may be relaxed for variable-to-fixed (VF) codes in order to build a more efficient dictionary. In this context, Yamamoto and Yokoo excluded this condition to create a dictionary better than Tunstall's. These new dictionaries are called almost instantaneous VF (AIVF) codes. Based on their contributions, we have deduced that their technique can provide, in some cases, suboptimal variable-to-fixed codes; hence, we suggest improvements to their algorithms. We also propose another mechanism for building optimal AIVF codes by adopting the principle of dynamic programming.
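For context on the variable-to-fixed dictionaries being improved upon, the sketch below builds a classical Tunstall dictionary by repeatedly expanding the most probable leaf of the parse tree until the codeword budget is exhausted; this is the textbook prefix-free baseline, not the AIVF or dynamic-programming constructions proposed in the thesis. Note how a 3-symbol source cannot fill all 8 codewords of a 3-bit code, which is the kind of inefficiency AIVF codes recover by relaxing the prefix condition.

```python
# Classical Tunstall construction (prefix-free VF baseline), for context only;
# not the AIVF or dynamic-programming constructions proposed in the thesis.
import heapq

def tunstall_dictionary(probs: dict[str, float], codeword_bits: int) -> list[str]:
    """Repeatedly expand the most probable leaf of the parse tree until no
    full expansion fits within 2**codeword_bits leaves. Returns the source
    strings that receive fixed-length codewords."""
    max_leaves = 2 ** codeword_bits
    # Max-heap of (negative probability, source string)
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    n_leaves = len(probs)
    # Each expansion replaces one leaf by len(probs) children.
    while n_leaves + len(probs) - 1 <= max_leaves:
        neg_p, word = heapq.heappop(heap)
        for symbol, p in probs.items():
            heapq.heappush(heap, (neg_p * p, word + symbol))
        n_leaves += len(probs) - 1
    return sorted(word for _, word in heap)

# Example: a skewed 3-symbol source with 3-bit (8-entry) codewords
print(tunstall_dictionary({"a": 0.7, "b": 0.2, "c": 0.1}, codeword_bits=3))
```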
489

Utilization-adaptive Memory Architectures

Panwar, Gagandeep 14 June 2024 (has links)
DRAM contributes significantly to a server system's cost and global warming potential. To make matters worse, DRAM density scaling has not kept up with the scaling of logic and storage technologies. An effective way to reduce DRAM's monetary and environmental cost is to increase its effective utilization and extract the best possible performance in all utilization scenarios. To this end, this dissertation proposes Utilization-adaptive Memory Architectures that enhance the memory controller with the ability to adapt to current memory utilization and implement techniques to boost system performance. These techniques fall under two categories: (i) the techniques under Utilization-adaptive Hardware Memory Replication target the scenario where memory is underutilized and aim to boost performance versus a conventional system without replication, and (ii) the techniques under Utilization-adaptive Hardware Memory Compression target the scenario where memory utilization is high and aim to significantly increase memory capacity while closing the performance gap versus a conventional system that has sufficient memory and does not require compression. / Doctor of Philosophy / A computer system's memory stores information for the system's immediate use (e.g., data and instructions for in-use programs). The performance and capacity of the dominant memory technology, Dynamic Random Access Memory (DRAM), have not kept up with advancements in computing devices such as CPUs. Furthermore, DRAM significantly contributes to a server's carbon footprint because a server can have over a thousand DRAM chips, substantially more than any other type of chip, and DRAM's manufacturing cycle and lifetime energy use make it the most carbon-unfriendly component on today's servers. An intuitive way to reduce the environmental impact of DRAM is to increase its utilization. To this end, this dissertation explores Utilization-adaptive Memory Architectures, which enable the memory controller to adapt to the system's current memory utilization through a variety of techniques: (i) Utilization-adaptive Hardware Memory Replication copies in-use data to free memory and uses the extra copy to improve performance, and (ii) Utilization-adaptive Hardware Memory Compression uses a denser representation for data to save memory and allows the system to run applications that require more memory than is physically installed. Compared to conventional systems that do not feature these techniques, these approaches improve performance across memory utilization scenarios ranging from low to high.
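A toy policy loop can make the adaptation concrete: below a utilization threshold the controller spends spare capacity on replication, and above a higher threshold it switches to compression. The thresholds, mode names, and data structure below are illustrative assumptions, not the mechanisms in the dissertation.

```python
# Toy policy sketch of utilization-adaptive behavior: replicate when memory
# is underutilized, compress when utilization is high. Thresholds and
# actions are illustrative placeholders, not the dissertation's mechanisms.
from dataclasses import dataclass

@dataclass
class MemoryState:
    used_gib: float
    installed_gib: float

    @property
    def utilization(self) -> float:
        return self.used_gib / self.installed_gib

def choose_mode(state: MemoryState, low: float = 0.50, high: float = 0.85) -> str:
    if state.utilization < low:
        return "replicate"   # spare capacity -> keep redundant copies for speed
    if state.utilization > high:
        return "compress"    # capacity pressure -> densify data
    return "normal"          # neither replication nor compression pays off

for used in (100, 300, 450):
    state = MemoryState(used_gib=used, installed_gib=512)
    print(f"{state.utilization:.0%} utilized -> {choose_mode(state)}")
```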
490

Short phosphate glass fiber - PLLA composite to promote bone mineralization

Melo, P., Tarrant, E., Swift, Thomas, Townshend, A., German, M., Ferreira, A-M., Gentile, P., Dalgarno, K. 01 July 2019 (has links)
Yes / The clinical application of composites seeks to exploit the mechanical and chemical properties of the materials which make up the composite, and in researching polymer composites for biomedical applications the aim is usually to enhance the bioactivity of the polymer while maintaining its mechanical properties. To that end, in this study medical-grade poly(L-lactic acid) (PLLA) has been reinforced with short phosphate-based glass fibers (PGF). The materials were initially mixed by melting PLLA granules with the short fibers before being extruded to form a homogeneous filament, which was pelletized and used as feedstock for compression moulding. As made, the composite materials had a bending strength of 51 ± 5 MPa, and over the course of eight weeks in PBS the average strength of the composite material remained in the range 20–50 MPa. Human mesenchymal stromal cells were cultured on the surfaces of scaffolds, and the metabolic activity, alkaline phosphatase production and mineralization were monitored over a three-week period. The short fiber filler made no significant difference to cell proliferation or differentiation, but had a clear and immediate osteoinductive effect, promoting mineralization by cells at the material surface. It is concluded that the PLLA/PGF composite material offers both the mechanical and biological properties for potential application to bone implants and fixation, particularly where an osteoinductive effect would be valuable. / Funded in part by the EPSRC Centre for Doctoral Training in Additive Manufacturing and 3D Printing (EP/L01534X/1), the EPSRC Centre for Innovative Manufacture in Medical Devices (EP/K029592/1), and Glass Technology Services Ltd., Sheffield, UK.
