1

Transient thermoelectric supercooling: Isosceles current pulses from a response surface perspective and the performance effects of pulse cooling a heat generating mass

Piggott, Alfred J., III 02 February 2016
With increased public interest in protecting the environment, scientists and engineers aim to improve energy conversion efficiency. Thermoelectrics offer many advantages as a thermal management technology, but compared with vapor compression refrigeration above approximately 200 to 600 watts, they are at a disadvantage in both cost per watt and COP. The goal of this work was to determine whether optimized pulse supercooling operation could improve the cooling capacity or efficiency of a thermoelectric device. The basis of this research is a thermal-electrical analogy modeling study using SPICE. Two models were developed: the first, a standalone thermocouple with no attached mass to be cooled; the second, a system comprising a module attached to a heat-generating mass. For the thermocouple study, a new approach of generating response surfaces from characteristic parameters was applied, and the current pulse height and pulse on-time that maximize Net Transient Advantage, a newly defined metric, were identified. The corresponding pulse height and on-time were then used in the system model. Along with the traditional steady-state starting current of I_max, I_opt was employed. The pulse shape was an isosceles triangle. For the system model, metrics new to pulse cooling were Q_c, power consumption, and COP. The effects of optimized current pulses were studied by changing system variables, and further studies explored the time spacing between pulses and the temperature distribution in the thermoelement. It was found that net Q_c over an entire pulse event can be improved relative to steady I_max operation but not relative to steady I_opt operation. Q_c can be improved over I_opt operation, but only during the early part of the pulse event. COP is reduced in transient pulse operation because of the different time constants of Q_c and P_in. In some cases, lower-performance interface materials allow more Q_c and better COP during transient operation than higher-performance interface materials. Important future work might look at developing innovative ways of biasing Joule heat toward T_h.
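The lumped relations behind such a study are standard: the cold side pumps Q_c = αIT_c − I²R/2 − K(T_h − T_c) while consuming P_in = αI(T_h − T_c) + I²R. The sketch below is a minimal single-node caricature of the system model with an isosceles current pulse; all device parameters are hypothetical placeholders, and this is not the dissertation's SPICE network.

```python
# Minimal lumped-parameter sketch of pulse supercooling a heat-generating mass.
# All parameter values are hypothetical placeholders, not values from the thesis.

ALPHA = 4.2e-4   # Seebeck coefficient, V/K
R     = 1.8e-3   # couple electrical resistance, ohm
K     = 5.0e-3   # couple thermal conductance, W/K
C     = 0.05     # lumped cold-side + mass heat capacity, J/K
TH    = 300.0    # hot-side temperature, K
QLOAD = 0.10     # heat generated by the attached mass, W

def q_c(I, Tc):
    # Cold-side heat pumped: Peltier term minus half the Joule heat and back-conduction
    return ALPHA * I * Tc - 0.5 * I * I * R - K * (TH - Tc)

def p_in(I, Tc):
    # Electrical input: work against the Seebeck emf plus Joule dissipation
    return ALPHA * I * (TH - Tc) + I * I * R

def current(t, I0=1.0, Ip=3.0, t_on=0.5, t0=2.0):
    # Steady bias I0 plus one isosceles (triangular) pulse of peak Ip and width t_on
    if t0 <= t < t0 + t_on:
        x = (t - t0) / t_on
        return I0 + (Ip - I0) * (1.0 - abs(2.0 * x - 1.0))
    return I0

dt, t, Tc = 1e-4, 0.0, TH
Qc_net = E_in = 0.0
while t < 5.0:
    I = current(t)
    q = q_c(I, Tc)
    Tc += dt * (QLOAD - q) / C      # energy balance on the cold-side node
    Qc_net += q * dt
    E_in += p_in(I, Tc) * dt
    t += dt

print(f"net Qc = {Qc_net:.3f} J, COP over run = {Qc_net / E_in:.3f}")
```

A SPICE implementation maps the same balance onto the thermal-electrical analogy used in the thesis: heat flow as current, temperature as voltage, and heat capacity as capacitance.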
2

Characterization of the Shock Wave Structure in Water

Teitz, Emilie Maria 05 May 2017
The scientific community is interested in furthering the understanding of shock wave structures in water, given the implications for a wide range of applications, from how shock waves penetrate unwanted body tissues to how humans respond to blast waves. Shock wave research on water spans more than five decades, with previous studies investigating the shock response of water at pressures ranging from 1 to 70 GPa using flyer plate experiments. The work reported here differs from previously published experiments in that the water was loaded to shock pressures ranging from 0.36 to 0.70 GPa, and tap water rather than distilled water was used as the test sample.

Flyer plate experiments were conducted in the Shock Physics Laboratory at Marquette University to determine the structure of shock waves within water. A 12.7 mm bore gas gun fired a projectile made of copper, PMMA, or aluminum at a stationary target filled with tap water. Graphite break pins in a circuit determined the projectile velocity just before contact with the target. A piezoelectric timing pin (PZT pin) at the front surface of the water sample registered the arrival of the leading wave, and a Photon Doppler Velocimeter (PDV) measured particle velocity at the rear surface of the water sample. The experimental results were compared with simulated data from the Eulerian hydrocode CTH [1]. The experimental results differed from the simulated results, with the deviations believed to stem from experimental equipment malfunctions; the main hypothesis is that the PZT pin false-triggered, yielding measured shock velocities lower than expected. The simulated results were compared with published data from various authors and were within range.
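For context, the classical first-pass analysis for such experiments is impedance matching with linear U_s = c_0 + s·u_p Hugoniots. The sketch below uses representative literature Hugoniot parameters (not the thesis's data) to estimate the shock state in water from a measured impact velocity.

```python
# Impedance matching at a flyer/water interface using linear Hugoniots.
# Parameters are representative literature values (rho0 g/cm^3, c0 km/s, s) -
# an assumption for illustration, not data from this thesis.

materials = {
    "water":  (0.998, 1.45, 1.99),
    "copper": (8.93,  3.94, 1.489),
    "pmma":   (1.19,  2.60, 1.52),
}

def shock_state(flyer, target, v_impact):
    """Solve the interface pressure-match condition by bisection (up in km/s)."""
    rf, cf, sf = materials[flyer]
    rt, ct, st = materials[target]
    def mismatch(up):
        p_target = rt * (ct + st * up) * up                            # GPa
        p_flyer  = rf * (cf + sf * (v_impact - up)) * (v_impact - up)  # GPa
        return p_target - p_flyer   # negative below the root, positive above
    lo, hi = 0.0, v_impact
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if mismatch(mid) > 0:
            hi = mid
        else:
            lo = mid
    up = 0.5 * (lo + hi)
    Us = ct + st * up               # shock velocity in the target
    P  = rt * Us * up               # g/cm^3 * (km/s)^2 = GPa
    return up, Us, P

up, Us, P = shock_state("copper", "water", v_impact=0.30)   # 300 m/s impact
print(f"up = {up:.3f} km/s, Us = {Us:.3f} km/s, P = {P:.2f} GPa")
```

With these assumed parameters, a 300 m/s copper impact gives roughly 0.57 GPa in the water, within the 0.36 to 0.70 GPa range studied here.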
3

Design and analysis of a personnel blast shield for different explosives applications

Lozano, Eduardo 09 November 2016
The use of explosives brings countless benefits to our everyday lives in areas such as mining, oil and gas exploration, demolition, and avalanche control. However, because of the potential destructive power of explosives, strict safety procedures must be an integral part of any explosives operation.

The goal of this work is to provide a solution to protect against the hazards that accompany the general use of explosives, specifically in avalanche control. For this reason, a blast shield was designed and tested to protect Colorado Department of Transportation personnel against these unpredictable effects. This document develops a complete analysis around three questions: what are the potential hazards from the detonation of high explosives, what are their effects, and how can we protect ourselves against them? To answer these questions, theoretical, analytical, and numerical calculations were performed. Finally, a full blast shield prototype was tested under different simulated operational environments, proving its effectiveness as a safety device. The Colorado Department of Transportation currently owns more than fifteen shields, which are used during every operation involving explosive materials.
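As a point of reference for this kind of hazard analysis, the usual starting point is Hopkinson-Cranz scaling: two charges produce similar incident blast parameters at equal scaled distance Z = R / W^(1/3). A minimal sketch, with a hypothetical charge mass and standoffs:

```python
# Hopkinson-Cranz scaled distance: Z = R / W**(1/3),
# with standoff R in m and W in kg of TNT equivalent.

def scaled_distance(standoff_m: float, charge_kg_tnt: float) -> float:
    return standoff_m / charge_kg_tnt ** (1.0 / 3.0)

# Hypothetical avalanche-control scenario: 2 kg TNT-equivalent at several standoffs
for R in (1.0, 2.0, 5.0, 10.0):
    print(f"R = {R:4.1f} m  ->  Z = {scaled_distance(R, 2.0):.2f} m/kg^(1/3)")
```

Peak overpressure and impulse at a given Z are then read from empirical curves such as Kingery-Bulmash (not reproduced here), which is the usual next step in sizing a shield.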
4

Studies of Dimensional Metrology with X-Ray CAT Scan

Villarraga-Gomez, Herminso 31 August 2018
X-ray computed tomography (CT), more commonly known as the CAT scan, has recently evolved from the world of medical imaging and nondestructive evaluation into the field of dimensional metrology: the CT technique can now be used to measure a specimen's geometrical dimensions, of both internal and external features. As a result, CT now contributes to dimensional inspection and geometric analysis for companies that manufacture parts for industries such as automotive, aerospace, medical devices, electronics, metalworking, injection-molded plastics, composite materials, ceramics, and 3D printing or additive manufacturing. While dimensional accuracy is not crucial for medical diagnoses or other qualitative analyses, accurate dimensional quantification is the essence of X-ray CT metrology. Despite continuing advances in the technology, the current state of the art in CT metrology still confronts challenges in estimating measurement uncertainties, mainly because of the plethora of factors influencing the CT measurement process. Gradual progress has occurred over the last decade toward a better understanding of some of these influencing factors, illuminated by a series of collaborative research initiatives among several universities and institutions (predominantly located in the European Union) committed to advancing industrial CT scanning as a measuring technology. To further the phenomenological understanding of the variables affecting the precision and accuracy of CT dimensional measurements, this dissertation presents a series of experimental studies that evaluate the performance of cone-beam CT measurements, and their uncertainty estimates, against reference measurements generally obtained from tactile coordinate measuring machines (CMMs). In some cases the results are contrasted with simulations performed in Matlab (to compute fan-beam projection data) and with an additional simulation tool called "Dreamcaster" (for ray casting and Radon-space analysis). The main CT variables investigated were: temperature in the X-ray CT enclosure, number of projections in a CT scan, workpiece tilt orientation, sample magnification, material thickness, software post-filtration, threshold determination, and measurement strategy. For geometric features ranging from 0.5 mm to 65 mm, comparisons between dimensional CT and CMM measurements performed under optimized conditions typically yielded differences of approximately 5 µm or less for dimensional lengths (length, width, height, and diameters) and around 5 to 50 µm for measurements of form, while expanded uncertainties computed for the CT measurements ranged from 1 to over 50 µm.

Methods for estimating the measurement uncertainty of CT scanning are also assessed in this work. Special attention is paid to the current state of measurement comparisons in the field of dimensional X-ray CT through a comprehensive study of metrics used for proficiency testing, including rigorous tests of statistical consistency (null-hypothesis testing) performed with Monte Carlo simulation and applied to results from two recent CT interlaboratory comparisons. This latter study contributes to the knowledge of methods for performance assessment in measurement comparisons; in particular, it shows that the E_n metric, as currently used in CT interlaboratory comparisons, can be difficult to interpret when used to evaluate the performance and/or statistical consistency of CT measurement sets.
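For reference, the E_n statistic used in such proficiency testing is E_n = (x_lab − x_ref) / sqrt(U_lab² + U_ref²), with |E_n| ≤ 1 conventionally read as consistent. A minimal sketch with made-up numbers (the labs, values, and uncertainties are hypothetical, not results from the comparisons studied):

```python
import math

def en_number(x_lab, U_lab, x_ref, U_ref):
    """E_n = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2); U are expanded uncertainties (k=2)."""
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

# Hypothetical CT length results vs a CMM reference, in mm
results = [
    ("lab A", 10.0043, 0.0050),
    ("lab B", 10.0011, 0.0020),
    ("lab C", 10.0120, 0.0060),
]
x_ref, U_ref = 10.0021, 0.0008   # reference value and its expanded uncertainty

for name, x, U in results:
    en = en_number(x, U, x_ref, U_ref)
    verdict = "consistent" if abs(en) <= 1.0 else "inconsistent"
    print(f"{name}: En = {en:+.2f} ({verdict})")
```

Note how the verdict hinges on the labs' self-reported uncertainties: an optimistic U inflates |E_n|, which is one reason the metric can be hard to interpret as a pure performance score.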
5

Multiphase flows with digital and traditional microfluidics

Nilsson, Michael A 01 January 2013
Multi-phase fluid systems are an important concept in fluid mechanics, seen every day in how fluids interact with solids, gases, and other fluids across industrial, medical, agricultural, and other settings. This thesis presents the development of a two-dimensional digital microfluidic device, followed by the development of a two-phase microfluidic diagnostic tool designed to simulate sandstone geometries in oil reservoirs. In both instances, the physics of multiphase flows can be exploited to positive effect. To make an effective droplet-based digital microfluidic device, one must be able to precisely control a number of key processes, including droplet positioning, motion, coalescence, mixing, and sorting. For planar or open microfluidic devices, many of these processes have yet to be demonstrated. A suitable platform for an open system is a superhydrophobic surface, as surface characteristics are critical. Great effort has been spent over the last decade developing hydrophobic surfaces that exhibit very large contact angles with water and allow high droplet mobility. We demonstrate that sanding Teflon can produce superhydrophobic surfaces with advancing contact angles of up to 151° and contact angle hysteresis of less than 4°. We use these surfaces to characterize droplet coalescence, mixing, motion, deflection, positioning, and sorting. This research culminates in two digital microfluidic devices: a droplet reactor/analyzer and a droplet sorter.

As global energy usage increases, maximizing oil recovery from known reserves becomes a crucial multiphase challenge in meeting the rising demand. This thesis therefore also presents the development of a microfluidic sandstone platform capable of quickly and inexpensively testing how fluids with different rheological properties perform in recovering oil. Specifically, these microfluidic devices are used to examine how shear-thinning, shear-thickening, and viscoelastic fluids affect oil recovery. The work begins with oil displacement from a microfluidic sandstone device, then investigates small-scale oil recovery from a single pore, and finally investigates oil displacement from larger-scale, more complex microfluidic sandstone devices of varying permeability. The results demonstrate that with careful fluid design it is possible to outperform current commercial additives using the patent-pending fluid we developed. Furthermore, the resulting microfluidic sandstone devices can reduce the time and cost of developing and testing current and new enhanced oil recovery fluids.
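The droplet-mobility figures quoted above (advancing angle up to 151°, hysteresis under 4°) translate into a retention force via the Furmidge relation, F ≈ γw(cos θ_rec − cos θ_adv). A rough sketch, assuming a hypothetical contact-line width:

```python
import math

gamma = 0.072   # surface tension of water, N/m
theta_adv = math.radians(151.0)          # advancing angle from the sanded-Teflon surfaces
theta_rec = math.radians(151.0 - 4.0)    # receding angle, assuming ~4 deg hysteresis
w = 1.0e-3      # hypothetical droplet contact-line width, m

# Furmidge retention force: F ~ gamma * w * (cos(theta_rec) - cos(theta_adv))
F = gamma * w * (math.cos(theta_rec) - math.cos(theta_adv))
print(f"retention force ~ {F * 1e6:.2f} uN")   # small force -> highly mobile droplets
```

With these numbers the retention force is on the order of a few micronewtons, which is why such surfaces support low-force droplet motion, deflection, and sorting.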
6

Theory of pulse forming network design for acceleration waveform time domain replication

Timpson, Erik Joseph 01 October 2016
A pulse forming network (PFN) was built and optimized using an algorithm based on theory and experimental data. The target load for the PFN was a helical electromagnetic launcher, and the target application of the launcher is environmental testing, specifically mechanical-shock time-domain replication. The new algorithm combines time-, frequency-, and energy-domain methods to restrict the solution space before optimization. As in many other applications, the final optimization was done through experimental trial and error. The PFN ultimately met the repeatability and uncertainty targets specified by environmental engineers.
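As background to the design problem, a classical line-type (Guillemin type-E) PFN approximates a transmission line with n identical LC sections: for pulse width T into a matched load Z, T = 2n√(LC) and Z = √(L/C), giving L = ZT/(2n) and C = T/(2nZ) per section. A first-pass sizing sketch, with hypothetical target numbers (this is not the thesis's algorithm):

```python
def pfn_sections(pulse_width_s: float, impedance_ohm: float, n_sections: int):
    """First-pass type-E PFN sizing from T = 2*n*sqrt(L*C) and Z = sqrt(L/C)."""
    L = impedance_ohm * pulse_width_s / (2 * n_sections)   # per-section inductance, H
    C = pulse_width_s / (2 * n_sections * impedance_ohm)   # per-section capacitance, F
    return L, C

# Hypothetical target: 2 ms pulse into a 0.1-ohm launcher load, 5 sections
L, C = pfn_sections(2e-3, 0.1, 5)
print(f"per-section L = {L*1e6:.1f} uH, C = {C*1e3:.1f} mF")
```

A matched resistive load sees half the charge voltage for the pulse duration; shaping an arbitrary acceleration waveform, as in the thesis, requires departing from this idealized flat-top design, which is where the multi-domain restriction and final experimental tuning come in.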
7

Phonon Scattering and Confinement in Crystalline Films

Parrish, Kevin D. 31 October 2017
The operating temperature of energy conversion and electronic devices affects their efficiency and efficacy. In many devices, however, the reference values of the thermal properties of the constituent materials no longer apply because of the processing techniques used, creating challenges in thermal management and thermal engineering that demand accurate predictive tools and high-fidelity measurements. Here, the thermal conductivities of strained, nanostructured, and ultra-thin dielectrics are predicted computationally using solutions to the Boltzmann transport equation, and experimental measurements of thermal diffusivity are performed using transient grating spectroscopy.

The thermal conductivities of argon, modeled using the Lennard-Jones potential, and silicon, modeled using density functional theory, are predicted under compressive and tensile strain from lattice dynamics calculations. The thermal conductivity of silicon is found to be invariant with compression, a result that disagrees with previous computational efforts. The difference is attributed to the more accurate force constants calculated from density functional theory. The invariance results from the competing effects of increased phonon group velocities and decreased phonon lifetimes, demonstrating how the anharmonic contribution of the atomic potential can scale differently than the harmonic contribution.

Using three Monte Carlo techniques, the phonon-boundary scattering and the consequent thermal conductivity reduction are predicted for nanoporous silicon thin films. The techniques are free path sampling, isotropic ray-tracing, and a new technique, modal ray-tracing. The thermal conductivity predictions from all three are comparable to previous experimental measurements on nanoporous silicon films. The phonon mean free paths predicted by isotropic ray-tracing, however, are unphysical compared with those predicted by free path sampling; removing the isotropic assumption, which leads to the formulation of modal ray-tracing, corrects the mean free path distribution. The effect of phonon line-of-sight is investigated in nanoporous silicon films using free path sampling. When the line-of-sight is cut off, there is a distinct change in the trend of thermal conductivity versus porosity. By analyzing the free paths of an obstructed phonon mode, it is concluded that the trend change arises from a hard upper limit that the nanopore geometry imposes on the free paths.

The transient grating technique is a contactless, laser-based optical experiment for measuring the in-plane thermal diffusivity of thin films and membranes. The theory of operation and physical setup of a transient grating experiment are detailed. The procedure for extracting the thermal diffusivity from the raw experimental signal is improved by removing arbitrary user choice in the fitting parameters and constructing a parameterless error-minimizing procedure.

The thermal conductivity of ultra-thin argon films modeled with the Lennard-Jones potential is calculated both from the Monte Carlo free path sampling technique and from explicit reduced-dimensionality lattice dynamics calculations. In these ultra-thin films the phonon properties are altered in a more than perturbative manner, referred to here as the confinement regime. The free path sampling technique, which is perturbative, is compared with a reduced-dimensionality lattice dynamics calculation in which the entire film thickness is taken as the unit cell. The two diverge in both thermal conductivity magnitude and trend for films only a few unit cells thick. Although the phonon group velocities and lifetimes are affected, alterations to the phonon density of states are found to be the primary cause of the deviation in thermal conductivity in the confinement regime.
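A gray-medium caricature of the free path sampling idea: draw phonon positions and directions, truncate each exponentially sampled bulk free path where it meets a film surface, and take the conductivity reduction as the mean limited path over the bulk mean free path. This is only a schematic of the technique's geometry step under strong assumptions (a single gray mean free path, diffuse boundaries, no projection onto the transport axis), not the mode-resolved calculation used in the thesis.

```python
import random

def gray_film_suppression(thickness, mfp_bulk, n_samples=200_000):
    """Gray-model free path sampling for a free-standing film with diffuse boundaries:
    each phonon's bulk-sampled free path is cut off at the surface it would cross."""
    total = 0.0
    for _ in range(n_samples):
        z = random.uniform(0.0, thickness)         # start position across the film
        mu = random.uniform(-1.0, 1.0)             # cosine of angle to the film normal
        path = random.expovariate(1.0 / mfp_bulk)  # intrinsic (bulk) free path
        if mu > 0:
            boundary = (thickness - z) / mu        # distance to the top surface
        elif mu < 0:
            boundary = -z / mu                     # distance to the bottom surface
        else:
            boundary = float("inf")                # travels parallel to the surfaces
        total += min(path, boundary)
    return total / (n_samples * mfp_bulk)          # ~ k_film / k_bulk in this gray model

# Hypothetical gray phonon gas with a 100 nm bulk mean free path
for t in (20e-9, 50e-9, 200e-9):
    print(f"t = {t*1e9:5.0f} nm -> k/k_bulk ~ {gray_film_suppression(t, 100e-9):.2f}")
```

In the actual technique each phonon mode carries its own velocity, lifetime, and heat capacity, and the boundary-limited free paths feed a mode-by-mode conductivity sum; the gray sketch only illustrates why thinner films suppress conductivity more strongly.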
8

Infrared thermography as applied to thermal testing of power systems circuit boards

Miles, Jonathan James 01 January 1994
All operational electronic equipment dissipates some amount of energy in the form of infrared radiation. Faulty electronic components on a printed circuit board can be categorized as hard (functional) or soft (latent functional). Hard faults are those detected during a conventional manufacturing electronic test process. Soft failures, in contrast, are undetectable through conventional testing but manifest themselves after a product has been placed into service. Such field-defective modules ultimately fail in operation and subsequently enter a manufacturer's costly repair process.

While thermal imaging systems are used increasingly in the electronic equipment industry as a product-testing tool, applications have largely been limited to product design or repair processes, with minimal use in volume manufacturing. In such environments their use has mostly been confined to low-volume products or random screening of high-volume products, and thermal measurements are often made manually, defeating the instruments' capability for rapid data acquisition and constraining their full potential in a high-volume manufacturing process. Integrating a thermal measurement system with automated test equipment is essential for optimal use of expensive infrared measurement tools in high-volume manufacturing, but such a marriage presents problems for both existing manufacturing test processes and infrared measurement techniques.

This dissertation presents methods to test automatically for latent faults on printed circuit boards, those which elude detection during conventional electronic testing. The methods are intended for implementation in a volume manufacturing environment and involve the application of infrared imaging tools. Successful incorporation of infrared testing into existing test processes requires that PASS/FAIL criteria be established; that a procedure be developed for dealing with variable radiation heat transfer properties across a printed circuit board; and that a thermally controlled enclosure in which testing is performed be provided. These tasks are addressed and positive results are presented. Testing procedures and the software developed to perform the analyses are described, and the feasibility of an infrared test process is demonstrated. Acquired experimental data, results, and analyses designed to verify the measurement and fault analysis techniques are also presented. A number of phenomena are known to contribute undesirably, and often unpredictably, to results; methods for reducing random error and suggestions for establishing PASS/FAIL criteria and improving measurement techniques are addressed.
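A screen of the kind described can be sketched as a per-component comparison of measured temperatures against golden-board statistics. The component names, limits, and readings below are all hypothetical, and this is an illustration of the PASS/FAIL idea rather than the dissertation's procedure:

```python
# Hypothetical per-component PASS/FAIL screen against golden-board statistics.
# In practice the mean/std per component come from a population of known-good
# boards, measured after emissivity normalization across the board surface.

GOLDEN = {                 # component: (mean temp degC, std dev degC) - hypothetical
    "U1": (52.0, 1.5),
    "Q3": (61.5, 2.0),
    "R7": (44.0, 1.2),
}
K_SIGMA = 3.0              # tolerance band width; wider bands trade misses for false failures

def screen(measured: dict) -> dict:
    verdicts = {}
    for ref, temp in measured.items():
        mean, std = GOLDEN[ref]
        verdicts[ref] = "PASS" if abs(temp - mean) <= K_SIGMA * std else "FAIL"
    return verdicts

board = {"U1": 53.1, "Q3": 70.2, "R7": 44.5}   # one measured board (hypothetical)
for ref, verdict in screen(board).items():
    print(ref, verdict)
```

The emissivity-normalization step matters: without correcting for the variable radiation properties across the board, apparent-temperature differences would dominate the verdicts, which is exactly the procedural requirement the dissertation identifies.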
