241 |
Design and calibration of a high temperature continuous run electric arc wind tunnel. Grossmann, William, 15 July 2010
The purpose of this thesis project was to design, construct, and evaluate the performance of a high temperature continuous-run electric arc wind tunnel. A pilot model of such a facility was designed assuming that equilibrium air was the working gas. The pilot model facility was constructed and consisted of the following components: arc chamber, stagnation chamber, nozzle section, and test and diffusor sections.
In the arc chamber, the air passes through the positive column of an electric arc, thereby raising its stagnation temperature before entering the stagnation chamber. Also included in the design and construction were water cooling and waste disposal systems, air supply and vacuum systems, and the electric-arc power supply and control system.
An examination of tests performed in the electric-arc facility showed that a low density supersonic flow with a stagnation temperature of approximately 10,000 °F could be produced. The power level for this flow was 36 kW; however, with an expected increase of power to 72 kW, the stagnation temperature should be raised to approximately 15,000 °F. Since, to the author's knowledge, no valid technique for measuring temperatures of this magnitude has been perfected, these temperatures were calculated according to a method outlined in the present thesis.
The present facility provides an opportunity for study in such topic areas as (1) aerodynamic ablation, (2) magnetoaerodynamic studies, and (3) qualitative studies of chemically reacting gas flows. / Master of Science
|
242 |
A dual-junction thermocouple probe for compensated temperature measurement in reacting flows. Scattergood, Thomas R., 12 September 2009
This project was conducted in an effort to develop an inexpensive and reliable means of acquiring spatially and temporally resolved temperature data in turbulent reacting flows. Such a system would allow for increased understanding of turbulent reacting flows without the need for a costly and complex optical system. The system under study uses the responses of two thermocouples of different sizes placed in close proximity in order to determine instantaneous temperatures. This is in contrast to previous work in which compensation is performed on a single thermocouple junction in order to correct for the error caused by its heat capacity. The compensation technique that was developed did not require knowledge of the physical properties of the flow or the physical properties of the thermocouples. It did require the measured junction temperatures, the temperature gradients, and the ratio of the time constants of each thermocouple, which is related to the ratio of the diameters of the two junctions.
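The abstract does not state the compensation equations explicitly. The following is a minimal sketch of the underlying idea, assuming each junction behaves as a first-order sensor (tau_i · dT_i/dt = T_gas − T_i) and that only the two measured traces, their gradients, and the time-constant ratio r = tau2/tau1 are known; all names and numbers are illustrative and not taken from the thesis.

```python
import numpy as np

def compensate(t, T1, T2, r):
    """Recover the gas temperature from two junction traces, using only the
    measured temperatures, their gradients, and the time-constant ratio r.
    Assumes a first-order junction response: tau_i * dT_i/dt = T_gas - T_i."""
    dT1 = np.gradient(T1, t)                      # measured temperature gradients
    dT2 = np.gradient(T2, t)
    denom = dT1 - r * dT2
    # Equating T1 + tau1*dT1 = T2 + (r*tau1)*dT2 gives tau1 directly:
    tau1 = np.where(np.abs(denom) > 0.05 * np.abs(denom).max(),
                    (T2 - T1) / denom, np.nan)    # NaN where the estimate is ill-conditioned
    return T1 + tau1 * dT1

# Synthetic check: a 10 Hz gas-temperature fluctuation seen by two junctions
# with illustrative time constants tau1 = 10 ms and tau2 = 3 * tau1.
def first_order_response(t, mean, amp, w, tau):
    return mean + amp * np.sin(w * t - np.arctan(w * tau)) / np.sqrt(1 + (w * tau) ** 2)

t = np.linspace(0.0, 1.0, 2000)
w, tau1, r = 2 * np.pi * 10, 0.010, 3.0
T_true = 800.0 + 150.0 * np.sin(w * t)
T1 = first_order_response(t, 800.0, 150.0, w, tau1)
T2 = first_order_response(t, 800.0, 150.0, w, r * tau1)
print(np.nanmax(np.abs(compensate(t, T1, T2, r) - T_true)))  # small vs. the 300 K swing
```

Equating the two expressions for T_gas means neither flow properties nor absolute junction properties are needed, which mirrors the claim in the abstract; the isolated instants where the denominator vanishes are one generic reason corrected traces from such schemes need error-elimination filtering.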
Computer models were used to demonstrate the compensation technique itself and to show how the method is affected by such factors as diameter ratio, noise, junction size, and the digital resolution of the voltage-to-temperature conversion. Dual-junction probes were constructed and tested in non-reactive and reactive environments. Non-reactive experiments were used to calibrate the probe diameter ratio, and compensation of thermocouples that were heated with a laser and then allowed to cool appeared successful, with errors of 5% or less in the corrected temperatures. Data were taken in the exhaust duct of a step combustor, and the compensated temperatures from this turbulent, combusting environment appeared realistic. Some non-physical temperatures were produced, which resulted in the elimination of around 37% of the total data set. The non-physical temperatures were inconclusively attributed to a combination of spatial separation of the thermocouples, conduction losses, and poor response of the junctions due to their size and heat capacity. Best results were obtained when the thermocouples were exposed to a non-reacting jet of heated air. In this situation, the response amplitude of the thermocouples was relatively large and the response frequency relatively low in comparison with the reacting experiments; the corrected temperature curve appeared physically realistic and properly in phase with the thermocouple signals, and around 10% of the data was discarded using error-elimination techniques. It was concluded that a workable system, limited by the size and spatial separation of the thermocouples, had been achieved. / Master of Science
|
243 |
Optimum magnification and perspective for non-single-lens-reflex camera systems. Pigion, Richard G., January 1989
The primary purpose of this investigation was to determine how magnification and perspective alter a person's judgements of pleasantness for images recorded in photographic prints. The magnification and perspective of four scenes were varied by recording each scene with each of six camera-lens focal lengths (28 mm, 35 mm, 50 mm, 70 mm, 85 mm, and 105 mm) from each of six distances. The distances were centered on a reference distance that depended on scene content. The scenes were selected to be typical of the one-person or multi-person scenes recorded most often by consumers of non-single-lens-reflex (non-SLR) 35 mm camera systems. The psychophysical scaling technique of magnitude estimation was used to assess each subject's degree of pleasantness for each print in a single-stimulus presentation format. The subjects were actual consumers of non-SLR camera systems.
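As a rough illustration of what the focal-length and distance combinations trade off, the thin-lens relation m = f / (d − f) gives the subject's size on the negative, while perspective is fixed by the camera-to-subject distance alone, which is why both variables were manipulated. The subject height and distance below are assumed for illustration and are not the study's actual reference distances (which depended on scene content).

```python
# Thin-lens sketch of how focal length and subject distance set the subject's
# size on a 24 x 36 mm negative. All numbers are illustrative assumptions.
focal_lengths_mm = [28, 35, 50, 70, 85, 105]   # lenses used in the study
subject_height_m = 1.7                          # assumed standing person
distance_m = 8.0                                # assumed camera-to-subject distance

for f in focal_lengths_mm:
    f_m = f / 1000.0
    magnification = f_m / (distance_m - f_m)    # thin-lens m = f / (d - f)
    image_height_mm = magnification * subject_height_m * 1000.0
    print(f"{f:>3} mm lens at {distance_m:.0f} m: subject is {image_height_mm:4.1f} mm tall on the frame")
```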
A wide variety of lens/distance combinations was found to produce pleasing images for each scene. In particular, the lens/distance combination representative of currently available non-SLR camera systems was almost always among the highest-rated images for each scene, indicating that these consumers are quite pleased with the images they currently receive. These results are most easily explained using the theory of a compensation mechanism of picture perception. Suggestions for future research include the study of range effects, different methods of assessment, and attempts at understanding the emotional impact of photographs and how it relates to judgements of pleasantness. / Ph. D.
|
244 |
Spatially Resolved Equivalence Ratio Measurements Using Tomographic Reconstruction of OH*/CH* Chemiluminescence. Giroux, Thomas Joseph III, 27 July 2020
Thermoacoustic instabilities in gas turbine operation arise due to unsteady fluctuations in heat release coupled with acoustic oscillations, often caused by varying equivalence ratio perturbations within the flame field. These instabilities can cause irreparable damage to critical turbine components, requiring an understanding of the spatial/temporal variations in equivalence ratio values to predict flame response. The technique of computed tomography for flame chemiluminescence emissions allows for 3D spatially resolved flame measurements to be acquired using a series of integral projections (camera images). High resolution tomography reconstructions require a selection of projection angles around the flame, while captured chemiluminescence of radical species intensity fields can be used to determine local fuel-air ratios.
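The abstract does not specify the reconstruction algorithm or the OH*/CH*-to-equivalence-ratio calibration used in the thesis; the sketch below shows one generic way the two steps can fit together, an ART/Kaczmarz-style iterative reconstruction followed by a calibration-curve lookup. The weight matrix, calibration table, and all names are assumptions for illustration only.

```python
import numpy as np

def art_reconstruct(W, p, n_iter=50, relax=0.1):
    """Algebraic reconstruction (Kaczmarz/ART) sketch: solve W @ f ~= p for the
    voxel intensity field f, given a line-of-sight weight matrix W (one row per
    camera pixel) and the measured projections p. Non-negativity is enforced
    because chemiluminescence intensities cannot be negative."""
    f = np.zeros(W.shape[1])
    row_norms = (W * W).sum(axis=1)
    for _ in range(n_iter):
        for i in np.nonzero(row_norms > 0)[0]:
            residual = p[i] - W[i] @ f
            f += relax * residual / row_norms[i] * W[i]
        f = np.clip(f, 0.0, None)            # keep intensities physical
    return f

def equivalence_ratio(oh_star, ch_star, calib):
    """Map the local OH*/CH* intensity ratio to an equivalence ratio via a
    monotonic calibration table (rows sorted by increasing ratio), e.g. one
    measured on a reference burner; the real curve depends on fuel and
    operating condition."""
    oh = np.asarray(oh_star, dtype=float)
    ch = np.asarray(ch_star, dtype=float)
    ratio = np.divide(oh, ch, out=np.zeros_like(oh), where=ch > 0)
    return np.interp(ratio, calib[:, 0], calib[:, 1])
```

In such a scheme, W would come from the camera calibration (each row holding one pixel's weights through the voxel grid) and calib from a flame at known equivalence ratios.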
In this work, a tomographic reconstruction program was developed and used to reconstruct the intensity fields of CH* and OH*, and these reconstructions were used to quantify local equivalence ratios in an acoustically forced flame. A known phantom function was used to verify and validate the tomography algorithm, and convergence was assessed by monitoring selected iterative criteria. A documented method of camera calibration was also reproduced and presented here, with suggestions provided for future calibration improvement. Results are shown to highlight fluctuating equivalence ratio trends while illustrating the effectiveness of the developed tomography technique, providing a firm foundation for future study of heat release phenomena. / Master of Science / Acoustic oscillations can be amplified in the combustion chamber of a gas turbine as the machine ramps up in operation. These sound oscillations continue to grow and can damage the turbine machinery and even threaten the safety of the operator. Because of this, many researchers have attempted to understand and predict this behavior in hopes of preventing it altogether. One way of studying these sound amplifications is to look at behaviors in the turbine combustion flame, which can potentially shed light on how these large disturbances form and accumulate. Both heat release rate (the release of energy in the form of heat from a combustion flame) and equivalence ratio (the mass ratio of fuel to air burned in a combustion process) have proven viable in illustrating oscillatory flame behavior, and both can be visualized using chemiluminescence imaging paired with computed tomography.
Chemiluminescence imaging is used to obtain intensity fields of species from high resolution camera imaging, while computed tomography techniques are capable of reconstructing these images into a three-dimensional volume to represent and visualize the combustion flame. These techniques have been shown to function effectively in previous literature and were further implemented in this work. A known calibration technique from previous work was carried out along with reconstructing a defined phantom function to show the functionality of the developed tomography algorithm. Results illustrate the effectiveness of the tomographic reconstruction technique and highlight the amplified acoustic behavior of a combustion flame in a high noise environment.
|
245 |
CALIBRATION OF THE MU2E ABSOLUTE MOMENTUM SCALE USING POSITIVE PION DECAYS TO POSITRON AND ELECTRON NEUTRINO. Xiaobing Shi (18421551), 22 April 2024
<p dir="ltr">The Mu2e experiment will search for neutrinoless, coherent conversion of a muon into an</p><p dir="ltr">electron in the field of an aluminum nucleus (μ−N ! e−N) at the sensitivity level of 10−17.</p><p dir="ltr">This conversion process is an example of Charged Lepton Flavor Violation (CLFV), which</p><p dir="ltr">has never been observed experimentally before. The Mu2e experiment tracker is designed</p><p dir="ltr">to accurately detect the 105 MeV/c conversion electron (CE) momentum in a uniform 1 T</p><p dir="ltr">magnetic field. The mono-energetic positrons (e+) at 69.8 MeV from the decay of positively charged</p><p dir="ltr">pions (p+) that have stopped in the aluminum stopping target are investigated as a</p><p dir="ltr">calibration source to measure the accuracy of absolute momentum scale. The backgrounds</p><p dir="ltr">for the calibration arise from μ+ decay-in-flight (DIF) backgrounds and other stopped p+</p><p dir="ltr">decays that produce reconstructed e+ tracks mimicking a signal trajectory originating from</p><p dir="ltr">the stopping target. The most significant background is the μ-DIF background. Therefore,</p><p dir="ltr">we identified the need for a momentum degrader placed at the entry of the Detector Solenoid,</p><p dir="ltr">to increase the pion stops in the stopping target and suppress the μ-DIF background. The</p><p dir="ltr">material of the degrader is chosen to be titanium (Ti). The thickness of degrader is optimized</p><p dir="ltr">by the pion stops efficiency to muon flux efficiency ratio and the 4mm Ti degrader is the</p><p dir="ltr">optimized one. The calibration signal and backgrounds are simulated with the 3mm and</p><p dir="ltr">4mm Ti degrader. The ratio of S/B is used as a figure of merit, S/B ? 1.85 for the 3mm Ti</p><p dir="ltr">degrader and S/B ? 2.93 for the 4mm Ti degrader. The 4mm Ti degrader performs better</p><p dir="ltr">than the 3mm Ti degrader in terms of S/B ratio. By fitting the reconstructed momentum</p><p dir="ltr">spectra of signal and backgrounds, we extract the signal distribution peak and width of</p><p dir="ltr">x0 = 69.268 ± 0.013 MeV/c and ? = 0.324 ± 0.009 MeV/c (with the 3mm Ti degrader),</p><p dir="ltr">x0 = 69.263 ± 0.013 MeV/c and ? = 0.299 ± 0.009 MeV/c (with the 4mm Ti degrader).</p><p dir="ltr">We also show that the peak shifts by backgrounds for both degraders are within 100 keV/c</p><p dir="ltr">momentum scale accuracy requirement.</p>
|
246 |
On the 3 M's of Epidemic Forecasting: Methods, Measures, and Metrics. Tabataba, Farzaneh Sadat, 06 December 2017
Over the past few decades, various computational and mathematical methodologies have been proposed for forecasting seasonal epidemics. In recent years, the deadly effects of large pandemics such as the H1N1 influenza virus, Ebola, and Zika have compelled scientists to find new ways to improve the reliability and accuracy of epidemic forecasts. The improvement and variety of these prediction methods are undeniable. Nevertheless, many challenges remain unresolved in forecasting outbreaks from surveillance data. Obtaining clean real-time data has always been an obstacle. Moreover, surveillance data are usually noisy, and handling the uncertainty of the observed data is a major issue for forecasting algorithms. Choosing correct modeling assumptions about the nature of the infectious disease is another dilemma: oversimplified models can lead to inaccurate forecasts, whereas more complicated methods require additional computational resources and information, without which the model may not converge to a unique optimum solution. Over the last decade there has been a significant effort towards achieving better epidemic forecasting algorithms; however, the lack of standard, well-defined evaluation metrics impedes a fair judgment of the proposed methods.
This dissertation is divided into two parts. In the first part, we present a Bayesian particle filter calibration framework integrated with an agent-based model to forecast the epidemic trend of diseases such as flu and Ebola. Our approach uses Bayesian statistics to estimate the underlying disease-model parameters given the observed data and to handle the uncertainty in the reasoning. An individual-based model with different intervention strategies can involve a large number of unknown parameters that must be properly calibrated. Because particle filters can collapse in very large-scale systems (the curse-of-dimensionality problem), achieving the optimum solution becomes more challenging. Our proposed particle filter framework utilizes machine learning concepts to restrain the intractable search space. It incorporates a smart analyzer in the state dynamics unit that examines the predicted and observed data using machine learning techniques to guide the direction and amount of perturbation of each parameter in the searching process.
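The dissertation's framework couples the particle filter to a large agent-based model with machine-learning-guided perturbation; as a much smaller illustration of the underlying idea, the sketch below calibrates a single transmission parameter of a toy stochastic SIR model against noisy case counts using a bootstrap particle filter. The model, observation likelihood, and all parameter values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_step(state, beta, gamma=0.2, n_pop=10_000):
    """One discrete-time stochastic SIR step; returns the new state and new infections."""
    s, i, r = state
    new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n_pop))
    new_rec = rng.binomial(i, 1.0 - np.exp(-gamma))
    return (s - new_inf, i + new_inf - new_rec, r + new_rec), new_inf

def particle_filter(observed_cases, n_particles=500, n_pop=10_000):
    """Bootstrap particle filter over the unknown transmission rate beta."""
    betas = rng.uniform(0.1, 1.0, n_particles)                 # prior draws
    states = [(n_pop - 10, 10, 0)] * n_particles
    for y in observed_cases:
        preds = np.empty(n_particles)
        for k in range(n_particles):
            states[k], preds[k] = sir_step(states[k], betas[k], n_pop=n_pop)
        # Poisson observation model for the reported case count y
        log_w = y * np.log(np.clip(preds, 1e-9, None)) - preds
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)         # resample
        betas = np.clip(betas[idx] + rng.normal(0.0, 0.01, n_particles), 1e-3, None)  # perturb
        states = [states[k] for k in idx]
    return betas   # approximate posterior sample of the transmission rate
```

In the dissertation's framework, the perturbation step is where the machine-learning analyzer intervenes, steering the direction and magnitude of each parameter's jitter rather than using blind random noise as above.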
The second part of this dissertation focuses on providing standard evaluation measures for evaluating epidemic forecasts. We present an end-to-end framework that introduces epidemiologically relevant features (Epi-features), error measures, and ranking schema as the main modules of the evaluation process. Lastly, we provide the evaluation framework as a software package named Epi-Evaluator and demonstrate the potential and capabilities of the framework by applying it to the output of different forecasting methods. / PHD / Epidemics impose substantial costs on societies by deteriorating public health and disrupting economic activity. In recent years, the deadly effects of widespread pandemics such as H1N1, Ebola, and Zika have compelled scientists to find new ways to improve the reliability and accuracy of epidemic forecasts. Reliable prediction of future pandemics, together with efficient intervention plans for health care providers, could prevent or control disease propagation. Over the last decade, there has been a significant effort towards achieving better epidemic forecasting algorithms. The mission, however, is far from accomplished. Moreover, there has been no significant leap towards standard, well-defined evaluation metrics and criteria for a fair performance comparison between the proposed methods.
This dissertation is divided into two parts. In the first part, we present a Bayesian particle filter calibration framework integrated with an agent-based model to forecast the epidemic trend of diseases such as flu and Ebola. We model disease propagation via a large-scale agent-based model that simulates the disease spread across the contact network of people. The contact network consists of millions of nodes and is constructed from demographic information about individuals obtained from census data. The agent-based model's configurations are mostly unknown parameters that should be properly calibrated. We present a Bayesian particle filter calibration approach to estimate the underlying disease-model parameters given the observed data and to handle the uncertainty in the reasoning. Because particle filters can collapse in very large-scale systems, achieving the optimum solution becomes more challenging. Our proposed particle filter framework utilizes machine learning concepts to restrain the intractable search space. It incorporates a smart analyzer unit that examines the predicted and observed data using machine learning techniques to guide the direction and amount of perturbation of each parameter in the searching process.
The second part of this dissertation focuses on providing standard evaluation measures for evaluating and comparing epidemic forecasts. We present a framework that introduces epidemiologically relevant features (Epi-features), error measures, and ranking schema as the main modules of the evaluation process. Lastly, we provide the evaluation framework as a software package named Epi-Evaluator and demonstrate the potential and capabilities of the framework by applying it to the output of different forecasting methods.
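Epi-Evaluator's actual feature set, error measures, and ranking schema are not enumerated in the abstract; the sketch below only illustrates the general pattern of such an evaluation pipeline, with a made-up feature subset, error measure, and data.

```python
import numpy as np

def epi_features(curve):
    """Epidemiologically relevant summary features of a weekly incidence curve
    (illustrative subset; the dissertation's Epi-Evaluator defines its own set)."""
    curve = np.asarray(curve, dtype=float)
    return {"peak_value": curve.max(),
            "peak_week": int(curve.argmax()),
            "total_cases": curve.sum()}

def feature_errors(forecast, observed):
    """Absolute error on each Epi-feature between a forecast and the observed curve."""
    f, o = epi_features(forecast), epi_features(observed)
    return {k: abs(f[k] - o[k]) for k in f}

def rank_methods(forecasts, observed, feature="peak_week"):
    """Rank competing forecasting methods by their error on one Epi-feature."""
    errs = {name: feature_errors(fc, observed)[feature] for name, fc in forecasts.items()}
    return sorted(errs, key=errs.get)

# Example: two hypothetical methods forecasting a 10-week outbreak
observed = [2, 5, 11, 24, 40, 55, 43, 25, 12, 6]
forecasts = {"method_A": [3, 6, 13, 27, 45, 50, 40, 22, 10, 5],
             "method_B": [1, 3, 7, 15, 28, 44, 52, 38, 20, 9]}
print(rank_methods(forecasts, observed))   # method_A places the peak in the right week
```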
|
247 |
DESIGN AND CALIBRATION OF AN AIRBORNE MULTICHANNEL SWEPT-TUNED SPECTRUM ANALYZER. Hamory, Philip J.; Diamond, John K.; Bertelrud, Arild, 10 1900
International Telemetering Conference Proceedings / October 25-28, 1999 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper describes the design and calibration of a four-channel, airborne, swept-tuned spectrum analyzer used in two hypersonic flight experiments for characterizing dynamic data up to 25 kHz. Built mainly from commercially available analog function modules, the analyzer proved useful for an application with limited telemetry bandwidth, physical weight and volume, and electrical power. The authors discuss considerations that affect the frequency and amplitude calibrations, limitations of the design, and example flight data.
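The flight instrument was built from analog function modules, so the following is only an idealized digital analogue of swept-tuned analysis (heterodyne the input to baseband at each tuned frequency, filter to a resolution bandwidth, detect the envelope); the sampling rate, span, resolution bandwidth, and filter order are assumptions chosen for illustration, not the instrument's design values.

```python
import numpy as np
from scipy.signal import butter, lfilter

def swept_tuned_spectrum(x, fs, f_start, f_stop, n_steps=100, rbw_hz=250.0):
    """Idealized digital stand-in for a swept-tuned analyzer: mix the input with
    a 'local oscillator' at each tuned frequency, low-pass at the resolution
    bandwidth, and record the detected envelope in dB."""
    t = np.arange(len(x)) / fs
    b, a = butter(4, rbw_hz / (fs / 2))               # IF/video filter stand-in
    freqs = np.linspace(f_start, f_stop, n_steps)
    levels = []
    for f0 in freqs:
        baseband = x * np.exp(-2j * np.pi * f0 * t)   # heterodyne to baseband
        env = np.abs(lfilter(b, a, baseband))         # envelope after filtering
        levels.append(20 * np.log10(env[len(env) // 2:].mean() + 1e-12))
    return freqs, np.array(levels)

# Example: a 5 kHz tone plus noise, analyzed over a 0-25 kHz span
fs = 100_000.0
t = np.arange(int(0.2 * fs)) / fs
x = np.sin(2 * np.pi * 5_000 * t) + 0.05 * np.random.randn(t.size)
freqs, levels = swept_tuned_spectrum(x, fs, 0.0, 25_000.0)
print(freqs[np.argmax(levels)])   # should land near 5000 Hz
```

In such a scheme, amplitude calibration amounts to mapping detected levels for known reference tones back to input units and frequency calibration to verifying the tuned-frequency axis against those tones, which loosely parallels the frequency and amplitude calibrations discussed in the paper for the analog hardware.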
|
248 |
Calibration and validation of high frequency radar for ocean surface current mapping. Kim, Kyung Cheol, 06 1900
Approved for public release, distribution is unlimited / High Frequency (HF) radar backscatter instruments are being developed and tested in the marine science and defense science communities for their ability to sense surface parameters remotely in the coastal ocean over large areas. In the Navy context, the systems provide real-time mapping of ocean surface currents and waves critical for characterizing and forecasting the battle space environment. In this study, the performance of a network of four CODAR (Coastal Ocean Dynamics Application Radar) SeaSonde HF radars, using the Multiple Signal Classification (MUSIC) algorithm for direction finding, is described for the period between July and September 2003. Comparisons are made in Monterey Bay with moored velocity observations, with four radar baseline pairs, and with velocity observations from sixteen drifter deployments. All systems measure ocean surface current, and all vector currents are translated into radial current components in the direction of the various radar sites. Measurement depths are 1 m for the HF radar-derived currents, 12 to 20 m for the ADCP bin nearest the surface at the M1 mooring site, and 8 m for the drifter-derived velocity estimates. Comparisons of HF radar-M1 mooring, HF radar-HF radar (baseline), and HF radar-drifter data yield improvements of -1.7 to 16.7 cm/s in rms differences and -0.03 to 0.35 in correlation coefficients when measured antenna patterns are used. The mooring comparisons and the radar-to-radar baseline comparisons indicate angular shifts of 10° to 30° for radial currents produced using ideal antenna patterns and 0° to 15° angular shifts for radial currents produced using measured patterns. The comparisons with drifter-derived radial currents indicate that these angular biases are not constant across all look directions, even though the local antenna pattern distortions were taken into account through the use of measured antenna patterns. In particular, data from the SCRZ and MLNG radar sites show varied pointing errors across the range of angles covered. / Lieutenant Commander, Republic of Korea Navy
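Since all velocity comparisons are made on radial components toward each radar site, the sketch below shows the basic projection and the two statistics quoted (rms difference and correlation coefficient); the bearing convention and variable names are assumptions for illustration.

```python
import numpy as np

def radial_component(u, v, bearing_deg):
    """Project an (east, north) current vector onto the radial direction of a
    radar site, with the bearing measured clockwise from north toward the site.
    Positive means flow toward the radar here (sign conventions vary)."""
    theta = np.deg2rad(bearing_deg)
    return u * np.sin(theta) + v * np.cos(theta)

def compare_radials(r_hf, r_ref):
    """RMS difference and correlation coefficient between two radial current
    time series, e.g. HF radar versus moored ADCP or drifter estimates."""
    r_hf, r_ref = np.asarray(r_hf, float), np.asarray(r_ref, float)
    rms = np.sqrt(np.mean((r_hf - r_ref) ** 2))
    corr = np.corrcoef(r_hf, r_ref)[0, 1]
    return rms, corr
```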
|
249 |
Large volume artefact for calibration of multi-sensor projected fringe systems. Tarvaz, Tahir, January 2015
Fringe projection is a commonly used optical technique for measuring the shapes of objects with dimensions of up to about 1 m across. There are however many instances in the aerospace and automotive industries where it would be desirable to extend the benefits of the technique (e.g., high temporal and spatial sampling rates, non-contacting measurements) to much larger measurement volumes. This thesis describes a process that has been developed to allow the creation of a large global measurement volume from two or more independent shape measurement systems. A new 3-D large volume calibration artefact, together with a hexapod positioning stage, have been designed and manufactured to allow calibration of volumes of up to 3 m x 1 m x 1 m. The artefact was built from carbon fibre composite tubes, chrome steel spheres, and mild steel end caps with rare earth rod magnets. The major advantage over other commonly used artefacts is the dimensionally stable relationship between features spanning multiple individual measurement volumes, thereby allowing calibration of several scanners within a global coordinate system, even when they have non-overlapping fields of view. The calibration artefact is modular, providing the scalability needed to address still larger measurement volumes and volumes of different geometries. Both it and the translation stage are easy to transport and to assemble on site. The artefact also provides traceability for calibration through independent measurements on a mechanical CMM. The dimensions of the assembled artefact have been found to be consistent with those of the individual tube lengths, demonstrating that gravitational distortion corrections are not needed for the artefact size considered here. Deformations due to thermal and hygral effects have also been experimentally quantified. The thesis describes the complete calibration procedure: large volume calibration artefact design, manufacture and testing; initial estimation of the sensor geometry parameters; processing of the calibration data from manually selected regions-of-interest (ROI) of the artefact features; artefact pose estimation; automated control point selection; and finally bundle adjustment. An accuracy of one part in 17 000 of the global measurement volume diagonal was achieved and verified.
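The artefact's features are chrome steel spheres, so each scanner's control points are sphere centres measured in its own coordinate frame before pose estimation and bundle adjustment. As a hedged illustration of that first step only (not the thesis's actual processing code), a linear least-squares sphere fit to a segmented region-of-interest point cloud might look like this:

```python
import numpy as np

def fit_sphere(points):
    """Linear least-squares sphere fit to measured surface points.
    Solves x^2 + y^2 + z^2 = 2a·x + 2b·y + 2c·z + d for the centre (a, b, c)
    and radius sqrt(d + a^2 + b^2 + c^2), a common way to recover a control
    point from a fringe-projection point cloud of a spherical feature."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([2 * P, np.ones(len(P))])
    rhs = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre, d = sol[:3], sol[3]
    radius = np.sqrt(d + centre @ centre)
    return centre, radius
```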
|
250 |
Développement et applications environnementales des échantillonneurs passifs pour la surveillance des écosystèmes aquatiques / Improvement and field application of passive samplers for monitoring of aquatic ecosystems. Belles, Angel, 14 December 2012
For a better understanding and management of environmental quality, measuring contaminants in the various natural compartments remains a first step toward elucidating the dynamics of pollutants and their impacts on ecosystems. The sampling strategies usually employed, however, have not changed since the advent of analytical chemistry: they generally consist of collecting a certain quantity of sample (water, air, solid) in order to extract and quantify the substances of interest. The question of the representativeness of such practices then arises, since for a given site the contamination can vary considerably over time and over short distances, so that a detailed understanding of the contamination of a medium with these techniques requires multiplying samples in time and space. Since the 1980s, and especially since the early 2000s, passive sampling tools have been developed in many fields, allowing time-integrated monitoring of contamination at lower cost. These new approaches collect the sample continuously, in situ, and without any energy input, thus providing an average value of the contamination. Before such devices can be used, a number of laboratory developments must first be carried out to determine the kinetic constants needed to deduce the contamination of the sampled medium from the residues sequestered by the samplers. In this work, a selection of existing samplers was therefore tested and adapted in the laboratory and then evaluated under real conditions at various environmental sites. The laboratory developments aimed to produce different sampler configurations applicable to the largest possible number of molecules in the most quantitative way possible; for example, adapted devices were developed for sampling very polar molecules that previously were not efficiently sampled by existing devices. In the field, the sampling tools were mainly deployed within larger research programmes and could therefore be tested over large study areas (Bassin d'Arcachon and Gironde Estuary) and compared with the spot-sampling techniques that currently serve as the reference. The results provided by the tools are close to those obtained by spot sampling; however, the quantitative aspect can probably still be improved, either by the use of new performance reference compounds or by developing devices that are more robust and only slightly affected in their performance by environmental conditions. / For a better understanding and management of environmental quality, contaminant analysis in the various compartments is a natural first step in understanding the dynamics of pollutants and their impacts on ecosystems. However, the sampling strategies commonly used have not changed since the advent of analytical chemistry. These techniques generally consist of taking a certain amount of sample (water, air, solid) in order to extract and assay the substances of interest. The issue of the representativeness of such sampling practices arises, since for a given site the contamination can vary over time and over short distances.
Detailed understanding of the contamination of an ecosystem using such sampling techniques requires the multiplication of samples over time and space. Since the 1980s, and more especially since the early 2000s, passive sampling tools have been developed in many areas. They provide integrated monitoring of contamination over time at low cost. These new approaches are based on the fact that the sample is taken continuously, in situ, and without an energy supply, thus providing an average value of the contamination. To use these devices, a number of laboratory developments must first be conducted to determine the kinetic constants needed to deduce the contamination of the sampled environment. Thus, as part of this work, a selection of existing samplers has been tested and adapted in laboratory experiments and evaluated in real conditions at various environmental sites. Laboratory developments were conducted to devise different sampler configurations usable for a wide range of pollutants with the best quantitative capacity. For example, suitable devices have been developed for sampling highly polar molecules which previously were not efficiently sampled by existing devices. On site, the sampling tools were mainly implemented in the framework of broader research programs and consequently have been tested during large field studies (Bassin d'Arcachon, Gironde Estuary) to compare their performance to grab sampling techniques. The results provided by the tools are similar to those obtained by grab sampling. However, the quantitative aspect appears still improvable, either by the use of new performance reference compounds or by using devices that are more robust and only slightly affected in their performance by environmental conditions.
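The abstract refers to laboratory-determined kinetic constants that convert the residues accumulated by a sampler into an ambient concentration; the sketch below shows the generic passive-sampler relations typically used for that conversion (a linear-uptake time-weighted average and the first-order uptake curve behind it). The sampling rate, exchange constant, and example numbers are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def twa_concentration(mass_sorbed_ng, sampling_rate_l_per_day, days):
    """Time-weighted average water concentration (ng/L) in the linear-uptake
    (kinetic) regime: C_w = M_s / (R_s * t), where R_s is the laboratory-
    calibrated sampling rate."""
    return mass_sorbed_ng / (sampling_rate_l_per_day * days)

def uptake_curve(c_water, k_sw, v_sampler_l, k_e, days):
    """First-order uptake model M_s(t) = C_w * K_sw * V_s * (1 - exp(-k_e * t));
    the linear regime above holds while k_e * t << 1, with R_s = k_e * K_sw * V_s."""
    t = np.asarray(days, dtype=float)
    return c_water * k_sw * v_sampler_l * (1.0 - np.exp(-k_e * t))

# Example: 120 ng of a pesticide accumulated over a 14-day deployment with a
# calibrated sampling rate of 0.2 L/day gives roughly 43 ng/L on average.
print(twa_concentration(120.0, 0.2, 14))
```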
|