661

Compression Mechanics of Powders and Granular Materials Probed by Force Distributions and a Micromechanically Based Compaction Equation

Mahmoodi, Foad January 2012 (has links)
The internal dynamics of powder systems under compression are not yet fully understood, and thus there is a need for approaches that can help to further clarify and enhance the level of understanding of this subject. To this end, the internal dynamics of powder systems under compression were probed by means of force distributions and a novel compaction equation. The determination of force distributions hinged on the use of carbon paper as a force sensor, where the imprints transferred from it onto white paper were converted into forces through calibration. Through analysis of these imprints, it was found that the absence of friction and bonding capacity between the particles composing the powder bed had no effect on how the applied load was transferred through the system. Additionally, it was found that pellet strength had a role to play in the homogeneity of force distributions: upon the occurrence of fracture, force distributions became less homogeneous. A novel compaction equation was derived and tested on a series of systems composed of pellets with differing mechanical properties. The main value of the equation lay in its ability to predict compression behavior from single-particle properties, and the agreement was especially good when a compact of zero porosity was formed. The utility of the equation was tested in two further studies, using a series of pharmaceutically relevant powder materials. It was established that the A parameter of the equation is a measure of the deformability of the powder material, much like the Heckel 1/K parameter, and can be used as a means to rank powders according to deformability, i.e. to establish a plasticity scale. The equation also provided insight into the dominant compression mechanisms through an invariance that could be exploited to determine the point at which the powder system became constrained, i.e. the end of rearrangement. Additionally, the robustness of the equation was demonstrated through fruitful analysis of a set of diverse materials. In summary, this thesis has provided insights and tools that can be translated into more efficient development and manufacturing of medicines in the form of tablets.
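For reference, the Heckel relation invoked above for the 1/K deformability comparison is commonly written as below; this is the standard literature form, not the thesis's novel compaction equation, which is not reproduced in this abstract.

```latex
% Heckel equation: D is the relative density of the compact at applied
% pressure P; the reciprocal 1/K of the slope is used to rank material
% deformability (it is proportional to the yield pressure), and the
% intercept A reflects densification by particle rearrangement.
\ln\!\left(\frac{1}{1-D}\right) = K P + A
```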
662

Thermoset polymers and coatings subjected to high compressive loads

Ståhlberg, Daniel January 2006 (has links)
This study describes the mechanical response of thermoset polymers under high compressive loads. The study is divided into two parts. The first part focuses on the behaviour of a powder coating when used in a clamping force joint and how the properties vary when the chemical and physical structure of the coating is changed. The second part discusses the fundamental understanding of the behaviour of thermoset polymers with small thickness-to-width ratio subjected to compressive stresses, the aim being to develop mathematical material models for viscoelastic materials under high compressive loads. In the first part, polyester powder coatings were used with variations in molecular weight, number of functional groups of the resin, amount and type of filler, and coating thickness. The coatings were subjected to conventional tests for coatings and polymers and also to specially designed tests developed to study the behaviour of powder coatings in clamping force joints. The high compressive loads in a clamping force joint put high demands on the relaxation and creep resistance of the coating, and the study shows the importance of crosslink density, filler content and coating thickness in achieving the desired mechanical properties of a coating. A high reactivity of the resin, facilitating a high crosslink density and hence a high Tg, is the most important property of the coating. A film with high crosslink density shows an increase in relaxation time and apparent yield strength under compression, and also an increase in relaxation modulus and storage modulus in tension at temperatures above Tg. Addition of fillers reduces the deformation during compression and tension, but also induces a lower strain at break and hence a more brittle coating. The reinforcing effect of the fillers is more pronounced when the crosslink density of the coating is increased, especially in the compression tests. The effect is evident in compression even at low amounts of fillers, where the relaxation time and resistance to deformation are strongly increased. The combination of high crosslink density and addition of fillers is therefore desirable, since fillers can then be used in moderation to achieve a reinforcing effect in compression while minimising embrittlement. The study also showed that increased coating thickness gives rise to defects in the coating, especially voids and blisters, due to evaporation of water formed during the curing of the polyester powder coating. These defects give rise to stress concentrations and increased plastic deformation in the coating, impairing the properties of the clamping force joint. The results from relaxation tests in tension were used to create a micromechanical model. This model was used in finite element modelling to estimate the loss of clamping force in a screw joint and to correlate with the experimental results of the powder coatings. In the second part of the study, a well-defined free-radically cured vinyl ester resin was studied in six different geometries in order to determine how the apparent mechanical properties depend on the size and shape of a sample subjected to high compressive loads. Variation of the specimen thickness, boundary conditions and loading conditions reveals that the geometry of the sample has a significant effect on the mechanical performance of the polymer.
The apparent modulus and the yield strength increase dramatically when the thickness-to-width ratio of the sample is reduced, whereas they decrease when the friction between the sample and the compression plate is reduced. The creep strain rate decreases when the thickness of the material is reduced, and it decreases even more when the amount of material surrounding the compressed part of the sample is increased. Creep and strain recovery tests on large specimens were used to develop a mathematical model including non-linear viscoelastic and viscoplastic response of a thermoset vinyl ester. The model was used in FEM calculations in which the experimental results were compared with the calculated results in order to capture the trends of the material response when varying the sample geometry.
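The relaxation-modulus data mentioned above are often reduced to a small set of parameters before being fed into a micromechanical or FEM model; a minimal sketch of one common reduction, a two-term generalized Maxwell (Prony series) fit, is given below. The data, term count and function names are illustrative assumptions, not the model developed in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def prony_modulus(t, e_inf, e1, tau1, e2, tau2):
    """Two-term generalized Maxwell (Prony series) relaxation modulus [MPa]."""
    return e_inf + e1 * np.exp(-t / tau1) + e2 * np.exp(-t / tau2)

# Hypothetical relaxation data: time [s] and measured modulus [MPa]
t_data = np.array([1, 5, 10, 50, 100, 500, 1000], dtype=float)
E_data = np.array([2400, 2200, 2100, 1850, 1750, 1600, 1550], dtype=float)

# Fit the Prony parameters; p0 is a rough initial guess, bounds keep them positive
p0 = [1500, 500, 10, 400, 300]
params, _ = curve_fit(prony_modulus, t_data, E_data, p0=p0, bounds=(0, np.inf))
print(dict(zip(["E_inf", "E1", "tau1", "E2", "tau2"], params)))
```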
663

Development and initial evaluation of wireless self-monitoring pneumatic compression sleeves for preventing deep vein thrombosis in surgical patients

Cheung, William Ka Wai 05 1900 (has links)
This thesis describes the successful development and initial evaluation of a proof-of-concept wireless monitoring system for improving the effectiveness and safety of pneumatic compression therapy to help prevent deep vein thrombosis (DVT). In the development, an important objective was to make feasible the practical and commercial deployment of such improved therapy systems in future, by focusing on a cost-effective design and implementation. Over the years, pneumatic compression has been shown to be an effective solution for the prevention of DVT. However, various problems and complications related to the use of commercial pneumatic compression devices, which typically include automatic pressure controllers and pneumatic compression sleeves, have been reported. For example, one study reported a high percentage of improperly applied or nonfunctional pneumatic compression devices in routine usage. Technical problems, non-compliance, and human error were identified as the causes behind the failed therapies. Also, it was reported that dedicated in-service instruction did not improve the proper use of the pneumatic compression controllers and sleeves. In another study, significant unanticipated variations between expected and delivered pneumatic compression therapy were reported: the expected therapy was delivered only an average of 77.8% of the time, and much of the time key values related to the outcome of the therapy were found to have variations greater than 10%. Specific hazards have also been reported. For example, one patient developed acute compartment syndrome after wearing a pair of pneumatic compression sleeves with faulty pressure release valves. In another case, epidural analgesia masked a malfunction resulting from a reversed connection between the four-way plastic tubing of the sleeves and the controller, exposing a patient to a hazardous pressure of around 300 mmHg and blocking all blood flow for a prolonged period of time. Newer models of pneumatic compression sleeves and controllers from various manufacturers claim to improve therapy by, for example, increasing the peak blood flow velocity. However, there is no evidence in the published literature to support such claims. A published review of the literature from 1970 to 2002 reached the conclusion that the most important factors in improving therapy with pneumatic compression devices, particularly during and after surgery, were the degree of conformance of delivered therapy to the prescribed therapy, patient compliance, and the appropriateness of the site of compression. The inability to monitor delivered therapy and patient compliance remains a problem in efforts to improve pneumatic compression therapy. The above-described problems were addressed in the successful development of the innovative prototype described in this thesis. This wireless monitoring system should improve the effectiveness and safety of pneumatic compression therapy. Also, innovative aspects of the system design allow for cost-effective integration into existing commercial controllers and sleeves. For example, an innovative and potentially patentable usage and reprocess indicator was developed for pneumatic compression sleeves to significantly improve their safety and to reduce their cost of use per patient.
664

Graph Theory for the Discovery of Non-Parametric Audio Objects

Srinivasa, Christopher 28 July 2011 (has links)
A novel framework based on cluster co-occurrence and graph theory for structure discovery is applied to audio to find new types of audio objects that enable the compression of an input signal. These new objects differ from those found in current object coding schemes, as their shape is not restricted by any a priori psychoacoustic knowledge. The framework is novel from an application perspective, as it marks the first time that graph theory is applied to audio, and from a theoretical perspective, as it involves new extensions to unsupervised learning algorithms and frequent subgraph mining methods. Tests are performed using a corpus of audio files spanning a wide range of sounds. Results show that the framework discovers new types of audio objects which yield average overall and relative compression gains of 15.90% and 23.53%, respectively, while maintaining very good average audio quality with imperceptible changes.
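The abstract gives no implementation details; as a rough illustration of the cluster co-occurrence idea, a graph whose nodes are cluster labels of consecutive audio frames and whose edge weights count adjacent co-occurrences could be built as sketched below (the labels, the adjacency criterion and the use of networkx are assumptions, not the thesis's method).

```python
import networkx as nx

# Hypothetical cluster labels assigned to consecutive audio frames,
# e.g. by unsupervised clustering of spectral features.
frame_labels = [3, 3, 7, 7, 7, 2, 3, 3, 2, 2, 7]

# Build the co-occurrence graph: nodes are cluster labels, edge weights
# count how often two different labels occur in adjacent frames.
G = nx.Graph()
for a, b in zip(frame_labels, frame_labels[1:]):
    if a == b:
        continue  # ignore self-transitions in this sketch
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1
    else:
        G.add_edge(a, b, weight=1)

# Frequently recurring subgraphs of G would then be the candidate
# "audio objects" in the spirit of the framework described above.
print(list(G.edges(data=True)))
```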
665

Depth Map Compression Based on Platelet Coding and Quadratic Curve Fitting

Wang, Han 26 October 2012 (has links)
Due to the fast development of 3D technology in recent decades, many approaches to 3D representation have been proposed worldwide. In order to render an accurate 3D representation, more data need to be recorded than for a normal video sequence. Finding an efficient way to transmit the 3D representation data is therefore an important part of the overall 3D representation technology. In recent years, many coding schemes based on the principle of encoding the depth have been proposed. Compared to traditional multiview coding schemes, these newly proposed schemes can achieve higher compression efficiency. With the development of depth-capturing technology, the accuracy and quality of the reconstructed depth image have also improved. In this thesis we propose an efficient depth data compression scheme for 3D images. Our proposed depth data compression scheme is a platelet-based coder using Lagrangian optimization, quadtree decomposition and quadratic curve fitting. We study and improve the original platelet-based coding scheme and achieve a compression improvement of 1-2 dB compared to the original platelet-based scheme. The experimental results illustrate the improvement provided by our scheme. The quality of the reconstructed results of our proposed curve-fitting-based platelet coding scheme is better than that of the original scheme.
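As an illustration of the quadratic-fitting ingredient, a least-squares fit of a quadratic surface to one quadtree leaf block of depth values might look like the sketch below; the block size, surface form and function names are assumptions for illustration, not the exact formulation used in the thesis.

```python
import numpy as np

def fit_quadratic_surface(block):
    """Least-squares fit of z ~ a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    to a 2-D block of depth values; returns the coefficients and the fit."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y, z = x.ravel(), y.ravel(), block.ravel().astype(float)
    A = np.column_stack([np.ones_like(z), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs, (A @ coeffs).reshape(h, w)

# Hypothetical 8x8 depth block: a smooth surface plus a little noise
rng = np.random.default_rng(0)
block = np.fromfunction(lambda i, j: 100 + 2 * i + 0.5 * j**2, (8, 8))
block += rng.normal(0, 0.3, (8, 8))
coeffs, approx = fit_quadratic_surface(block)
print("max abs reconstruction error:", np.abs(block - approx).max())
```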
666

Probing Collective Multi-electron Effects with Few Cycle Laser Pulses

Shiner, Andrew 15 March 2013 (has links)
High Harmonic Generation (HHG) enables the production of bursts of coherent soft x-rays with attosecond pulse duration. This process arises from the nonlinear interaction between intense infrared laser pulses and an ionizing gas medium. Soft x-ray photons are used for spectroscopy of inner-shell electron correlation and exchange processes, and the availability of attosecond pulse durations will enable these processes to be resolved on their natural time scales. The maximum, or cutoff, photon energy in HHG increases with both the intensity and the wavelength of the driving laser. It is highly desirable to increase the harmonic cutoff, as this will allow for the generation of shorter attosecond pulses as well as HHG spectroscopy of increasingly energetic electronic transitions. While the harmonic cutoff increases with laser wavelength, there is a corresponding decrease in harmonic yield. The first part of this thesis describes the experimental measurement of the wavelength scaling of HHG efficiency, which we report as lambda^(-6.3) in xenon and lambda^(-6.5) in krypton. To increase the HHG cutoff, we have developed a 1.8 um source with stable carrier envelope phase and a pulse duration of <2 optical cycles. The 1.8 um wavelength allowed for a significant increase in the harmonic cutoff compared to equivalent 800 nm sources, while still maintaining reasonable harmonic yield. By focusing this source into neon we have produced 400 eV harmonics that extend into the x-ray water window. In addition to providing a source of photons for a secondary target, the HHG spectrum carries the signature of the electronic structure of the generating medium. In krypton we observed a Cooper minimum at 85 eV, showing that photoionization cross sections can be measured with HHG. Measurements in xenon led to the first clear observation of electron correlation effects during HHG, which manifest as a broad peak in the HHG spectrum centred at 100 eV. This thesis also describes several improvements to the HHG experiment, including the development of an ionization detector for measuring laser intensity, as well as an investigation into the role of laser mode quality in HHG phase matching and efficiency.
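The practical cost of the reported wavelength scaling can be estimated directly from the quoted exponents; the short calculation below is only an order-of-magnitude sketch of the single-atom yield penalty of moving from an 800 nm to a 1.8 um driver.

```python
# Relative single-atom HHG yield when moving from a 0.8 um to a 1.8 um driver,
# assuming the measured lambda^(-6.3) (xenon) and lambda^(-6.5) (krypton) scaling.
for gas, exponent in [("xenon", 6.3), ("krypton", 6.5)]:
    ratio = (1.8 / 0.8) ** (-exponent)
    print(f"{gas}: yield at 1.8 um is ~{ratio:.2%} of the yield at 0.8 um")
```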
667

Feedstock and process variables influencing biomass densification

Shaw, Mark Douglas 17 March 2008
Densification of biomass is often necessary to combat the negative storage and handling characteristics of these low bulk density materials. A consistent, high-quality densified product is strongly desired, but not always delivered. Within the context of pelleting and briquetting, binding agents are commonly added to comminuted biomass feedstocks to improve the quality of the resulting pellets or briquettes. Many feedstocks naturally possess such binding agents; however, they may not be abundant enough or available in a form or state to significantly contribute to product binding. Also, process parameters (pressure and temperature) and material variables (particle size and moisture content) can be adjusted to improve the quality of the final densified product.

Densification of ground biomass materials is still not an exact science, as much work is required to fully understand how the chemical composition and physical properties, along with the process variables, impact product quality. Generating densification and compression data, along with physical and mechanical properties for a variety of biomass materials, will allow a deeper understanding of the densification process. This in turn will result in the design of more efficient densification equipment, thus improving the feasibility of using biomass for chemical and energy production.

Experiments were carried out wherein process (pressure and temperature) and material (particle size and moisture content) variables were studied for their effect on the densification process (compression and relaxation characteristics) and the physical quality of the resulting products (pellets). Two feedstocks were selected for the investigation, namely poplar wood and wheat straw, two prominent Canadian biomass resources. Steam explosion pretreatment was also investigated as a potential method of improving the densification characteristics and binding capacity of the two biomass feedstocks.

Compression/densification and relaxation testing was conducted in a closed-end cylindrical die at loads of 1000, 2000, 3000, and 4000 N (31.6, 63.2, 94.7, and 126.3 MPa) and die temperatures of 70 and 100°C. The raw poplar and wheat straw were first ground through a hammer mill fitted with 0.8 and 3.2 mm screens, while the particle size of the pretreated poplar and wheat straw was not adjusted. The four feedstocks (2 raw and 2 pretreated) were also conditioned to moisture contents of 9 and 15% wb prior to densification.

Previously developed empirical compression models fitted to the data revealed that, along with particle rearrangement and deformation, additional compression mechanisms were present during compression. Also, the compressibility and asymptotic modulus of the biomass grinds were increased by increasing the die temperature and decreasing product moisture content. While particle size did not have a significant effect on the compressibility, reducing it increased the resultant asymptotic modulus value. Steam explosion pretreatment served to decrease the compressibility and asymptotic modulus of the grinds.

In terms of physical quality of the resulting product, increasing the applied load naturally increased the initial density of the pellets (immediately after removal from the die). Increasing the die temperature served to increase the initial pellet density, decrease the dimensional (diametral and longitudinal) expansion (after 14 days), and increase the tensile strength of the pellets.
Decreasing the raw feedstock particle size increased the initial pellet density, decreased the diametral expansion (with no effect on longitudinal expansion), and increased the tensile strength of the pellets. Decreasing the moisture content of the feedstocks allowed for higher initial pellet densities, but also increased dimensional expansion. The pretreated feedstocks generally had higher initial pellet densities than the raw grinds. Also, the pretreated feedstocks shrank in diameter and length, and had higher tensile strengths than the raw feedstocks. The high performance of the pretreated poplar and wheat straw (as compared to their raw counterparts) was attributed to the disruption of the lignocellulosic structure, and removal/hydrolysis of hemicellulose, during the steam pretreatment process, which was verified by chemical and Fourier transform infrared analyses. As a result, a higher relative amount of lignin was present. Also, the removal/hydrolysis of hemicellulose would indicate that this lignin was more readily available for binding, thus producing superior pellets.
668

ECG compression for Holter monitoring

Ottley, Adam Carl 11 April 2007
Cardiologists can gain useful insight into a patient's condition when they are able to correlate the patient's symptoms and activities. For this purpose, a Holter Monitor is often used - a portable electrocardiogram (ECG) recorder worn by the patient for a period of 24-72 hours. Preferably, the monitor is not cumbersome to the patient and thus it should be designed to be as small and light as possible; however, the storage requirements for such a long signal are very large and can significantly increase the recorder's size and cost, and so signal compression is often employed. At the same time, the decompressed signal must contain enough detail for the cardiologist to be able to identify irregularities. "Lossy" compressors may obscure such details, whereas a "lossless" compressor preserves the signal exactly as captured.

The purpose of this thesis is to develop a platform upon which a Holter Monitor can be built, including a hardware-assisted lossless compression method, in order to avoid the signal-quality penalties of a lossy algorithm.

The objective of this thesis is to develop and implement a low-complexity lossless ECG encoding algorithm capable of at least a 2:1 compression ratio in an embedded system for use in a Holter Monitor.

Different lossless compression techniques were evaluated in terms of coding efficiency as well as suitability for ECG waveforms, random access within the signal, and complexity of the decoding operation. To reduce the physical circuit size, a System on a Programmable Chip (SOPC) design was utilized.

A coder based on a library of linear predictors and Rice coding was chosen and found to give a compression ratio of at least 2:1, and as high as 3:1, on the real-world signals tested, while having low decoder complexity and fast random access to arbitrary parts of the signal. In the hardware-assisted implementation, encoding was a factor of four to five faster than a software encoder running on the same CPU, while allowing the CPU to perform other tasks during the encoding process.
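A minimal sketch of the chosen approach, a fixed linear predictor followed by Rice coding of the mapped residuals, is given below; the second-order predictor, zig-zag mapping and Rice parameter are illustrative assumptions rather than the exact coder developed in the thesis.

```python
def rice_encode(value, k):
    """Rice-code a non-negative integer with parameter k: unary quotient,
    a terminating 0, then k remainder bits, returned as a bit string."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def encode_ecg(samples, k=2):
    """Second-order linear prediction + Rice coding of the residuals.
    A real coder would store the first two samples verbatim."""
    bits, prev1, prev2 = [], samples[0], samples[0]
    for s in samples[1:]:
        pred = 2 * prev1 - prev2               # simple second-order predictor
        resid = s - pred
        mapped = (resid << 1) ^ (resid >> 63)  # zig-zag map: signed -> unsigned
        bits.append(rice_encode(mapped, k))
        prev2, prev1 = prev1, s
    return "".join(bits)

# Hypothetical 12-bit ECG samples
samples = [2048, 2050, 2055, 2063, 2075, 2090, 2102, 2110]
stream = encode_ecg(samples)
print(len(stream), "bits vs", 12 * (len(samples) - 1), "bits uncompressed")
```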
669

Modeling and validation of crop feeding in a large square baler

Remoué, Tyler 01 November 2007
This study investigated crop density in a New Holland BB960 (a branch of CNH Global N.V.) large square baler by examining crop trajectory from the precompression room to the bale chamber. This study also examined both the top and bottom plunger pressures and the critical factors affecting the final top and bottom bale densities.

The crop trajectories (wads of crop) were measured using a high-speed camera from the side of the baler through viewing windows. The viewing windows were divided into four regions for determining the crop displacement, velocity and acceleration. Crop strain was used to evaluate the potential change in density of the crop before being compressed by the plunger. Generally, the vertical crop strain was found to be higher in the top half of the bale compared to the bottom.

Average strain values for side measurements were 12.8% for the top and 2.1% for the bottom. Plunger pressures were measured to compare peak pressures between the top and bottom halves of each compressed wad of crop, and to develop pressure profiles based on the plunger's position. Comparison of the mean peak plunger pressures between the top and bottom locations indicated that the mean pressures were significantly higher at the top location, with the exception of one particular setting. The resulting pressure-profile graphs aided in qualitatively describing the compression process for both top and bottom locations.

A stepwise regression model was developed to examine the difference in material quantity in the top half of the bale compared to the bottom, based on bale weights. The model indicated that flake setting, stuffer ratio and number of flakes had the greatest effect on maintaining consistent bale density when comparing the top and bottom halves of each bale. The R2 (coefficient of determination) value for the developed model was 59.9%. The R2 was low, although this could be attributed to the limited number of data points in the developed model.
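For context, a forward stepwise selection of predictors with ordinary least squares, in the spirit of the regression described above, can be sketched as follows; the predictor names echo the factors mentioned in the abstract, but the data, coefficients and significance threshold are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: candidate baler settings and the response
# (difference in material quantity between top and bottom bale halves).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "flake_setting": rng.integers(1, 5, 40),
    "stuffer_ratio": rng.integers(1, 7, 40),
    "n_flakes":      rng.integers(10, 30, 40),
})
df["top_bottom_diff"] = (2.0 * df["flake_setting"] - 0.8 * df["stuffer_ratio"]
                         + 0.3 * df["n_flakes"] + rng.normal(0, 2, 40))

def forward_stepwise(df, response, alpha=0.05):
    """Repeatedly add the candidate predictor with the smallest p-value below alpha."""
    remaining, selected = list(df.columns.drop(response)), []
    while remaining:
        pvals = {}
        for cand in remaining:
            X = sm.add_constant(df[selected + [cand]])
            pvals[cand] = sm.OLS(df[response], X).fit().pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break
        selected.append(best)
        remaining.remove(best)
    return selected

print("selected predictors:", forward_stepwise(df, "top_bottom_diff"))
```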
670

Model-Based JPEG2000 rate control methods

Aulí Llinàs, Francesc 05 December 2006 (has links)
This work is focused on the quality scalability of the JPEG2000 image compression standard. Quality scalability is an important feature that allows the truncation of the code-stream at different bit-rates without penalizing the coding performance. Quality scalability is also fundamental in interactive image transmissions to allow the delivery of Windows of Interest (WOI) at increasing qualities. JPEG2000 achieves quality scalability through the rate control method used in the encoding process, which embeds quality layers into the code-stream.
In some scenarios, this architecture might raise two drawbacks: on the one hand, when the coding process finishes, the number and bit-rates of quality layers are fixed, causing a lack of quality scalability in code-streams encoded with a single or few quality layers. On the other hand, the rate control method constructs quality layers considering the rate-distortion optimization of the complete image, and this might not allocate the quality layers adequately for the delivery of a WOI at increasing qualities. This thesis introduces three rate control methods that supply quality scalability for WOIs, or for the complete image, even if the code-stream contains a single or few quality layers. The first method is based on a simple Coding Passes Interleaving (CPI) that models the rate-distortion through a classical approach. An accurate analysis of CPI motivates the second rate control method, which introduces simple modifications to CPI based on a Reverse subband scanning Order and coding passes Concatenation (ROC). The third method benefits from the rate-distortion models of CPI and ROC, developing an approach based on a novel Characterization of the Rate-Distortion slope (CoRD) that estimates the rate-distortion of the code-blocks within a subband. Experimental results suggest that CPI and ROC are able to supply quality scalability to code-streams, even if they contain a single or few quality layers, achieving a coding performance almost equivalent to the one obtained with the use of quality layers. However, the results of CPI are unbalanced among bit-rates, and ROC presents an irregular coding performance for some corpora of images. CoRD outperforms CPI and ROC, achieving well-balanced and regular results, and, in addition, it obtains a slightly better coding performance than the one achieved with the use of quality layers. The computational complexity of CPI, ROC and CoRD is negligible in practice, making them suitable to control interactive image transmissions.
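As background for this comparison, the classical allocation that CPI approximates chooses coding-pass truncation points by distortion-rate slope under a rate budget; the toy sketch below illustrates that greedy selection (the pass data are invented, and the real JPEG2000 Tier-2 allocation additionally enforces in-order, convex-hull truncation points per code-block).

```python
# Toy post-compression rate-distortion allocation: each candidate coding pass
# of each code-block has a rate cost (bytes) and a distortion reduction.
# Passes are taken in order of decreasing distortion-rate slope until the
# byte budget for the quality layer is exhausted.
passes = [
    # (code-block, pass index, rate in bytes, distortion reduction)
    ("cb0", 0, 120, 900.0), ("cb0", 1, 150, 400.0), ("cb0", 2, 200, 120.0),
    ("cb1", 0,  80, 700.0), ("cb1", 1, 140, 250.0), ("cb1", 2, 180,  60.0),
]

budget = 400  # target bytes for this quality layer
ranked = sorted(passes, key=lambda p: p[3] / p[2], reverse=True)

selected, spent = [], 0
for blk, idx, rate, dist in ranked:
    if spent + rate <= budget:
        selected.append((blk, idx))
        spent += rate

print("selected passes:", selected, "- bytes used:", spent)
```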
