11

Manufacturing and Characterization of Gold-Black and Prediction and Measurement of its Directional Spectral Absorptivity

Munir, Nazia Binte 26 January 2021 (has links)
Gold-black has emerged as a popular absorptive coating for thermal radiation detectors in aerospace applications. The performance and accuracy of thermal radiation detectors depend largely on the surface optical properties of the absorptive coating. If the absorptivity of the layer is directional or wavelength dependent, then so is the detector gain itself. This motivates our interest in the manufacture, physical characterization, and study of the wavelength and polarization sensitivity of the directional spectral absorptivity of gold-black. A first-principles model based on lossy antenna theory is presented to predict the polarization-dependent directional spectral absorptivity of gold-black in the visible and near infrared. Results for normal spectral absorptivity are in good agreement with measurements reported in the literature. However, suitable experimental data were not available to validate the theory for directional spectral absorptivity. Therefore, an experimental campaign to fabricate and measure the directional spectral behavior of gold-black had to be undertaken to validate the first-principles model. New in-plane bidirectional reflectance distribution function (BRDF) measurements for two thicknesses (~4 μm and ~8 μm) of gold-black laid down on a gold mirror substrate are reported in the visible (532 nm) and near-infrared (800 and 850 nm) for p- and s-polarizations. The investigation is then extended to a three-layer sample, which is shown to exhibit off-specular reflectivity. Described are processes for laying down gold-black coatings and for measuring their in-plane BRDF as a function of thickness, wavelength, and polarization state. A novel method for retrieving the directional absorptivity from in-plane BRDF measurements is presented. The influence of polarization on directional absorptivity is shown to follow our earlier theory except at large incident zenith angles, where an unanticipated mirage effect is observed.
/ Doctor of Philosophy / Instruments called thermal radiation detectors play an important role in monitoring the global climate from space. Gold-black is often used as an absorptive coating to enhance the performance of these instruments. Users need to know how gold-black coatings influence instrument performance. In general, coating properties depend on the wavelength and direction of incident radiation, as well as on an optical phenomenon called polarization. This dissertation investigates the relationship between the creation of gold-black coatings and their performance. A physical model is postulated for predicting the optical behavior of gold-black in the visible and near infrared. The model produces results that are in good agreement with measurements reported in the literature. However, suitable directional measurements were not available to validate the theory. Therefore, an experimental campaign was mounted to fabricate gold-black coatings and measure their optical behavior in order to validate the mathematical model. We observed the optical behavior of several of our gold-black samples of various thickness and over a range of wavelengths. We also studied a three-layer sample which was found to exhibit an unexpected behavior called off-specular reflectivity. Described are processes for creating gold-black coatings and for measuring and explaining their optical performance. During the course of this investigation an unanticipated mirage effect was observed for the first time.
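The retrieval step described above rests on a standard radiometric identity: for an opaque coating, the directional spectral absorptivity equals one minus the directional-hemispherical reflectance, which is the cosine-weighted integral of the BRDF over the reflection hemisphere. The sketch below illustrates that identity numerically; the thesis's actual in-plane retrieval method is not reproduced here, and the `brdf` argument and the 5% Lambertian check are illustrative assumptions.

```python
import numpy as np

def directional_absorptivity(brdf, theta_i, n=400):
    """Directional spectral absorptivity of an opaque coating from its BRDF:
        alpha(theta_i) = 1 - rho_dh(theta_i),
        rho_dh = integral of BRDF(theta_i, theta_r, phi_r) cos(theta_r) dOmega_r
    over the reflection hemisphere. `brdf` is a user-supplied function [1/sr];
    the integral is evaluated with a midpoint rule on an n x n grid."""
    dtheta = (np.pi / 2) / n
    dphi = (2.0 * np.pi) / n
    theta_r = (np.arange(n) + 0.5) * dtheta
    phi_r = (np.arange(n) + 0.5) * dphi
    T, P = np.meshgrid(theta_r, phi_r, indexing="ij")
    integrand = brdf(theta_i, T, P) * np.cos(T) * np.sin(T)
    rho_dh = np.sum(integrand) * dtheta * dphi
    return 1.0 - rho_dh

# Sanity check: a perfectly diffuse (Lambertian) surface with reflectance R
# has a constant BRDF of R/pi, so alpha should come out as 1 - R.
R = 0.05  # hypothetical 5% diffuse reflectance, an illustrative order of magnitude
alpha = directional_absorptivity(lambda ti, tr, pr: R / np.pi + 0.0 * tr, 0.0)
```

An in-plane BRDF measurement samples only one azimuthal slice of this integral, which is why a dedicated retrieval method such as the one reported in the thesis is needed.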
12

Short-Range Magnetic Correlations, Spontaneous Magnetovolume Effect, and Local Distortion in Magnetic Semiconductor MnTe

Baral, Raju 19 December 2022 (has links)
The antiferromagnetic semiconductor MnTe has recently attracted significant interest as a potential high-performance thermoelectric material. Its promising thermoelectric properties are due in large part to short-range magnetic correlations in the paramagnetic state, which enhance the thermopower through the paramagnon drag effect. Using magnetic pair distribution function (mPDF) analysis of neutron total scattering data, we present a detailed, real-space picture of the short-range magnetic correlations in MnTe, offering a deeper view into the paramagnon drag effect and the nature of the correlated paramagnetic state. We confirm the presence of nanometer-scale antiferromagnetic correlations far into the paramagnetic state, show the evolution of the local magnetic order parameter across the Néel temperature T_N = 307 K, and discover a spatially anisotropic magnetic correlation length. By combining our mPDF analysis with traditional atomic PDF analysis, we also gain detailed knowledge of the magnetostructural response in MnTe. We observe a spontaneous volume contraction of nearly 1%, the largest spontaneous magnetovolume effect reported so far for any antiferromagnetic system. The lattice strain scales linearly with the local magnetic order parameter, in contrast to the quadratic scaling observed for the conventional magnetostriction properties of this technologically relevant material. Using neutron and X-ray PDF analysis, we also investigated the local distortion in MnTe and in the related Mn-based systems MnS and MnO as a function of temperature. The local distortion in MnTe increases with temperature and becomes more pronounced at 500 K.
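For context, the atomic PDF analysis referred to above rests on the standard total-scattering definition (conventional notation, not taken from this thesis):

```latex
G(r) = \frac{2}{\pi} \int_{0}^{Q_{\max}} Q \left[ S(Q) - 1 \right] \sin(Qr)\, \mathrm{d}Q
     = 4\pi r \left[ \rho(r) - \rho_0 \right]
```

where S(Q) is the total scattering structure function, ρ(r) the atomic pair density, and ρ₀ the average number density; the magnetic PDF is the analogous real-space transform of the magnetic scattering signal.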
13

On gravity : a study of analytical and computational approaches to problem solving in collisionless systems

Barber, Jeremy A. January 2014 (has links)
I present an overview of the tools and methods of gravitational dynamics motivated by a variety of dynamics problems. Particular focus will be given to the development of dynamic phase-space configurations as well as the distribution functions of collisionless systems. Chapter 1 is a short review of the descriptions of a gravitational system examining Poisson's equation, the probability distribution of particles, and some of the most popular model groups before working through the challenges of introducing anisotropy into a model. Chapter 2 covers the work of Barber (2014b), which looks at the relations between quantities in collisionless systems. Analytical methods are employed to describe a model that can violate the GDSAI, a well-known result connecting the density slope to the velocity anisotropy. We prove that this inequality cannot hold for non-separable systems and discuss the result in the context of stability theorems. Chapter 3 discusses the background for theories of gravity beyond Newton and Einstein. It covers the 'dark sector' of modern astrophysics, motivates the development of MOND, and looks at some small examples of these MONDian theories in practice. Chapter 4 discusses how to perform detailed numerical simulations, covering code methods for generating initial conditions and simulating them accurately in both Newtonian and MONDian approaches. The chapter ends with a quick look at the future of N-body codes. Chapters 5 and 6 contain work from Barber (2012) and Barber (2014a), which look at the recent discovery of an attractor in the phase-space of collisionless systems and present a variety of results to demonstrate the robustness of the feature. Attempts are then made to narrow down the necessary and sufficient conditions for the effect while possible mechanisms are discussed. Finally, the epilogue is a short discussion on how best to communicate scientific ideas to others in a lecturing or small group setting. 
Particular focus is given to ideas of presentation and the relative importance of formality versus personality.
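The GDSAI mentioned in Chapter 2 is, in the standard notation of the field, usually stated as follows (a sketch of the conventional formulation, not necessarily the thesis's own notation):

```latex
\beta(r) = 1 - \frac{\sigma_\theta^2(r) + \sigma_\varphi^2(r)}{2\,\sigma_r^2(r)},
\qquad
\gamma(r) = -\frac{\mathrm{d}\ln\rho}{\mathrm{d}\ln r},
\qquad
\text{GDSAI:}\quad \gamma(r) \ge 2\,\beta(r) \;\; \text{for all } r
```

where σ_r, σ_θ, σ_φ are the radial and tangential velocity dispersions and ρ the density; the inequality is known to hold for broad classes of spherical models with separable distribution functions, which is what makes counterexamples in the non-separable case interesting.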
14

Measuring the Mass of a Galaxy: An evaluation of the performance of Bayesian mass estimates using statistical simulation

Eadie, Gwendolyn 27 March 2013 (has links)
This research uses a Bayesian approach to study the biases that may occur when kinematic data is used to estimate the mass of a galaxy. Data is simulated from the Hernquist (1990) distribution functions (DFs) for velocity dispersions of the isotropic, constant anisotropic, and anisotropic Osipkov (1979) and Merritt (1985) type, and then analysed using the isotropic Hernquist model. Biases are explored when i) the model and data come from the same DF, ii) the model and data come from the same DF but tangential velocities are unknown, iii) the model and data come from different DFs, and iv) the model and data come from different DFs and the tangential velocities are unknown. Mock observations are also created from the Gauthier (2006) simulations and analysed with the isotropic Hernquist model. No bias was found in situation (i), a slight positive bias was found in (ii), a negative bias was found in (iii), and a large positive bias was found in (iv). The mass estimate of the Gauthier system when tangential velocities were unknown was nearly correct, but the mass profile was not described well by the isotropic Hernquist model. When the Gauthier data was analysed with the tangential velocities, the mass of the system was overestimated. The code created for the research runs three parallel Markov Chains for each data set, uses the Gelman-Rubin statistic to assess convergence, and combines the converged chains into a single sample of the posterior distribution for each data set. The code also includes two ways to deal with nuisance parameters. One is to marginalize over the nuisance parameter at every step in the chain, and the other is to sample the nuisance parameters using a hybrid-Gibbs sampler. When tangential velocities, v(t), are unobserved in the analyses above, they are sampled as nuisance parameters in the Markov Chain. The v(t) estimates from the Markov chains did a poor job of estimating the true tangential velocities. 
However, the posterior samples of v(t) proved to be useful, as the estimates of the tangential velocities helped explain the biases discovered in situations (i)-(iv) above. / Thesis (Master, Physics, Engineering Physics and Astronomy) -- Queen's University, 2013-03-26
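The convergence check described above can be sketched as follows. This is the standard Gelman-Rubin potential scale reduction factor for parallel chains, written as a generic illustration in Python rather than the thesis's actual code, which runs three chains per data set and combines them after convergence.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin statistic (R-hat) for m parallel chains of one parameter.
    `chains` is an (m, n) array: m chains of n samples each. Values close
    to 1 indicate the chains have mixed into the same distribution."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

# Three well-mixed chains drawn from the same distribution give R-hat near 1.
rng = np.random.default_rng(0)
rhat = gelman_rubin(rng.normal(size=(3, 5000)))
```

Once R-hat is near 1 for every parameter, the chains can be pooled into a single posterior sample, as done in the thesis.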
15

A measurement of the W boson charge asymmetry with the ATLAS detector

Whitehead, Samuel Robert January 2012 (has links)
Uncertainties on the parton distribution functions (PDFs), in particular those of the valence quarks, can be constrained at LHC energies using the charge asymmetry in the production of W± bosons. This thesis presents a measurement of the electron-channel lepton charge asymmetry using 497 pb⁻¹ of data recorded with the ATLAS detector in 2011. The measurement is included in PDF fits using the machinery of HERAPDF and is found to have some constraining power beyond that of existing W charge asymmetry measurements.
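For reference, the lepton charge asymmetry in each kinematic bin is formed from the charged-lepton counts, with a binomial statistical uncertainty. A minimal sketch follows; the counts below are invented for illustration and are not ATLAS data.

```python
import math

def charge_asymmetry(n_plus, n_minus):
    """Lepton charge asymmetry A = (N+ - N-)/(N+ + N-) in one bin of
    W -> l nu candidates, with the binomial statistical uncertainty
    sigma_A = sqrt((1 - A^2)/N), N = N+ + N-. Detector and background
    corrections applied in a real measurement are omitted here."""
    n = n_plus + n_minus
    a = (n_plus - n_minus) / n
    sigma = math.sqrt((1.0 - a * a) / n)
    return a, sigma

# Hypothetical counts in a single pseudorapidity bin:
a, sigma = charge_asymmetry(13000, 10000)
```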
16

Characterization of the Near Plume Region of Hexaboride and Barium Oxide Hollow Cathodes operating on Xenon and Iodine

Taillefer, Zachary R 24 January 2018 (has links)
The use of electric propulsion for spacecraft primary propulsion, attitude control and station-keeping is ever-increasing as the technology matures and is qualified for flight. In addition, alternative propellants are under investigation, which have the potential to offer systems-level benefits that can enable particular classes of missions. Condensable propellants, particularly iodine, have the potential to significantly reduce the propellant storage system volume and mass. Some of the most widely used electric thrusters are electrostatic thrusters, which require a thermionic hollow cathode electron source to ionize the propellant for the main discharge and for beam neutralization. Failure of the hollow cathode, which often needs to operate for thousands of hours, is one of the main life-limiting factors of an electrostatic propulsion system. Common failure modes for hollow cathodes include poisoning or evaporation of the thermionic emitter material and erosion of electrodes due to sputtering. The mechanism responsible for the high energy ion production resulting in sputtering is not well understood, nor is the compatibility of traditional thermionic hollow cathodes with alternative propellants such as iodine. This work uses both an emissive probe and Langmuir probe to characterize the near-plume of several hollow cathodes operating on both xenon and iodine by measuring the plasma potential, plasma density, electron temperature and electron energy distribution function (EEDF). Using the EEDF the reaction rate coefficients for relevant collisional processes are calculated. A low current (< 5 A discharge current) hollow cathode with two different hexaboride emitters, lanthanum hexaboride (LaB6) and cerium hexaboride (CeB6), was operated on xenon propellant. The plasma potential, plasma density, electron temperature, EEDF and reaction rate coefficients were measured for both hexaboride emitter materials at a single cathode orifice diameter. 
The time-resolved plasma potential measurements showed low frequency oscillations (<100 kHz) of the plasma potential at low cathode flow rates (<4 SCCM) and spot mode operation between approximately 5 SCCM and 7 SCCM. The CeB6 and LaB6 emitters behave similarly in terms of discharge power (keeper and anode voltage) and plasma potential, based on results from a cathode with a 0.020″-diameter orifice. Both emitters show almost identical operating conditions corresponding to the spot mode regime, reaction rates, as well as mean and RMS plasma potentials for the 0.020″ orifice diameter at a flow rate of 6 SCCM and the same discharge current. The near-keeper region plasma was also characterized for several cathode orifice diameters using the CeB6 emitter over a range of propellant flow rates. The spot-plume mode transition appears to occur at lower flow rates as orifice size is increased, but has a minimum flow rate for stable operation. For two orifice diameters, the EEDF was measured in the near-plume region and reaction rate coefficients calculated for several electron-driven collisional processes. For the cathode with the larger orifice diameter (0.040″), the EEDFs show higher electron temperatures and drift velocities. The data for these cathodes also show lower reaction rate coefficients for specific electron transitions and ionization. To investigate the compatibility of a traditional thermionic emitter with iodine propellant, a low-power barium oxide (BaO) cathode was operated on xenon and iodine propellants. This required the construction and demonstration of a low flow rate iodine feed system. The cathode operating conditions are reported for both propellants. The emitter surface was inspected using a scanning electron microscope after various exposures to xenon and iodine propellants. The results of the inspection of the emitter surface are presented. Another low current (< 5 A), BaO hollow cathode was operated on xenon and iodine propellants. 
Its discharge current and voltage, and plume properties are reported for xenon and iodine with the cathode at similar operating conditions for each. The overall performance of the BaO cathode on iodine was comparable to xenon. The cathode operating on iodine required slightly higher power for ignition and discharge maintenance compared to xenon, as evidenced by the higher keeper and anode potentials. Plasma properties in the near-plume region were measured using an emissive probe and single Langmuir probe. For both propellants, the plasma density, electron energy distribution function (EEDF), electron temperature, select reaction rate coefficients and time-resolved plasma potentials are reported. For both propellants the cathode operated at the same keeper current (0.25 A) and discharge current (3.1 A), but the keeper and anode potentials were higher with iodine; 27 V and 51 V for xenon, and 30 V and 65 V for iodine, respectively. For xenon, the mean electron energy and electron temperature were 7.5 eV and 0.7 eV, with a bulk drift energy of 6.6 eV. For iodine, the mean electron energy and electron temperature were 6.3 eV and 1.3 eV, with a bulk drift energy of 4.2 eV. A literature review of relevant collisional processes and associated cross sections for an iodine plasma is also presented.
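The rate-coefficient calculation mentioned above is the standard convolution of a collision cross section with the EEDF. A sketch follows, assuming a Maxwellian EEDF and a constant, hypothetical cross section; the thesis uses measured EEDFs and literature cross sections instead.

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge [C]
M_E = 9.1093837015e-31       # electron mass [kg]

def maxwellian_eedf(energy_eV, te_eV):
    """Maxwellian EEDF, normalized so that the integral of f(E) dE is 1
    (units 1/eV)."""
    return (2.0 * np.sqrt(energy_eV / np.pi) * te_eV**-1.5
            * np.exp(-energy_eV / te_eV))

def rate_coefficient(energy_eV, eedf, cross_section_m2):
    """Rate coefficient k = integral of sigma(E) v(E) f(E) dE [m^3/s],
    with electron speed v = sqrt(2 e E / m_e); trapezoidal quadrature."""
    v = np.sqrt(2.0 * energy_eV * E_CHARGE / M_E)
    y = cross_section_m2 * v * eedf
    return float(np.sum((y[1:] + y[:-1]) * np.diff(energy_eV)) / 2.0)

# Illustrative case: constant 1e-19 m^2 cross section at Te = 2 eV.
E = np.linspace(0.0, 100.0, 20001)   # energy grid in eV
k = rate_coefficient(E, maxwellian_eedf(E, 2.0), np.full_like(E, 1e-19))
```

For a constant cross section this reduces to sigma times the mean electron speed, which gives a convenient analytic check of the quadrature.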
17

Electron Energy Distribution Measurements in the Plume Region of a Low Current Hollow Cathode

Behlman, Nicholas James 12 January 2010 (has links)
A hollow cathode is an electron source used in a number of different electric thrusters for space propulsion. One important component of the device that helps initiate and sustain the discharge is called the keeper electrode. Cathode keeper erosion is one of the main limiting factors in the lifetime of electric thrusters. Sputtering due to high-energy ion bombardment is believed to be responsible for keeper erosion. Existing models of the cathode plume, including the OrCa2D code developed at Jet Propulsion Laboratory, do not predict these high-energy ions and experimental measurement of the electron energy distribution function (EEDF) could provide useful information for the development of a high fidelity model of the plume region. Understanding of the mechanism by which these high-energy ions are produced could lead to improvements in the design of hollow cathodes. The primary focus of this work is to determine the EEDF in the cathode plume. A single Langmuir probe is used to measure the current-voltage (I-V) characteristic of the plasma plume from a low current hollow cathode in the region downstream of the keeper orifice. The EEDF is obtained using the Druyvesteyn procedure (based on interpretation of the second derivative of the I-V curve), and parameters such as electron temperature, plasma density and plasma potential are also obtained. The dependence of the EEDF and other parameters on the radial position in the plume is examined. Results show that the EEDF deviates from the Maxwellian distribution, and is more accurately described by the Druyvesteyn distribution directly downstream of the cathode. Off-axis measurements of the EEDF indicate the presence of fast electrons, most likely due to the anode geometry. The cathode used in these tests is representative of the cathode used in a 200W class Hall thruster. Data is presented for a hollow cathode operating on argon gas for two cases with different discharge currents.
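The Druyvesteyn procedure named above can be sketched as follows; the probe area, plasma potential, and synthetic Maxwellian characteristic are illustrative assumptions, not values from this work, and real probe data would need smoothing before the double differentiation.

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge [C]
M_E = 9.1093837015e-31       # electron mass [kg]

def druyvesteyn_eedf(v_bias, current, v_plasma, probe_area):
    """Druyvesteyn procedure: recover the EEDF from the second derivative
    of a Langmuir-probe I-V characteristic,
        f(E) = (2 m_e / (e^2 A)) * sqrt(2 e V / m_e) * d^2 I / dV^2,
    where V = v_plasma - v_bias is the retarding potential and E = eV
    (expressed in eV below) is the electron energy. Returns (E, f) on the
    retarding branch, up to overall normalization."""
    d2i = np.gradient(np.gradient(current, v_bias), v_bias)
    v_ret = v_plasma - v_bias
    mask = v_ret > 0
    v = v_ret[mask]
    f = (2.0 * M_E / (E_CHARGE**2 * probe_area)
         * np.sqrt(2.0 * E_CHARGE * v / M_E) * d2i[mask])
    return v, f

# Synthetic check: a Maxwellian retarding current I = I0 exp(-(Vp - V)/Te)
# should yield f(E) proportional to sqrt(E) exp(-E/Te), here with Te = 2 eV.
v_bias = np.linspace(-10.0, 8.0, 1000)
current = 1e-3 * np.exp(-(10.0 - v_bias) / 2.0)
energy_eV, eedf = druyvesteyn_eedf(v_bias, current, 10.0, 1e-5)
```

A measured distribution that falls off faster than this Maxwellian shape at high energy, as in the Druyvesteyn distribution reported on axis, would show up as curvature in a plot of ln(f/sqrt(E)) against E.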
18

The Power of Categorical Goodness-Of-Fit Statistics

Steele, Michael C. January 2003 (has links)
The relative power of goodness-of-fit test statistics has long been debated in the literature. Chi-Square type test statistics to determine 'fit' for categorical data are still dominant in the goodness-of-fit arena. Empirical Distribution Function type goodness-of-fit test statistics are known to be relatively more powerful than Chi-Square type test statistics for restricted types of null and alternative distributions. In many practical applications researchers who use a standard Chi-Square type goodness-of-fit test statistic ignore the rank of ordinal classes. This thesis reviews literature in the goodness-of-fit field, with major emphasis on categorical goodness-of-fit tests. The continued use of an asymptotic distribution to approximate the exact distribution of categorical goodness-of-fit test statistics is discouraged. It is unlikely that an asymptotic distribution will produce a more accurate estimation of the exact distribution of a goodness-of-fit test statistic than a Monte Carlo approximation with a large number of simulations. Due to their relatively higher powers for restricted types of null and alternative distributions, several authors recommend the use of Empirical Distribution Function test statistics over nominal goodness-of-fit test statistics such as Pearson's Chi-Square. In-depth power studies confirm the views of other authors that categorical Empirical Distribution Function type test statistics do not have higher power for some common null and alternative distributions. Because of this, it is not sensible to make a conclusive recommendation to always use an Empirical Distribution Function type test statistic instead of a nominal goodness-of-fit test statistic. Traditionally the recommendation to determine 'fit' for multivariate categorical data is to treat categories as nominal, an approach which precludes any gain in power which may accrue from a ranking, should one or more variables be ordinal. 
The presence of multiple criteria through multivariate data may result in partially ordered categories, some of which have equal ranking. This thesis proposes a modification to the currently available Kolmogorov-Smirnov test statistics for ordinal and nominal categorical data to account for situations of partially ordered categories. The new test statistic, called the Combined Kolmogorov-Smirnov, is relatively more powerful than Pearson's Chi-Square and the nominal Kolmogorov-Smirnov test statistic for some null and alternative distributions. A recommendation is made to use the new test statistic with higher power in situations where some benefit can be achieved by incorporating an Empirical Distribution Function approach, but the data lack a complete natural ordering of categories. The new and established categorical goodness-of-fit test statistics are demonstrated in the analysis of categorical data with brief applications as diverse as familiarity of defence programs, the number of recruits produced by the Merlin bird, a demographic problem, and DNA profiling of genotypes. The results from these applications confirm the recommendations associated with specific goodness-of-fit test statistics throughout this thesis.
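The Monte Carlo approximation of the exact null distribution advocated above can be sketched generically. The categorical Kolmogorov-Smirnov statistic used here is the plain ordered-category version, not the Combined Kolmogorov-Smirnov statistic the thesis proposes for partially ordered categories.

```python
import numpy as np

def ks_categorical(counts, null_probs):
    """Kolmogorov-Smirnov statistic for ordered categories: the maximum
    absolute gap between the empirical and null cumulative distributions."""
    n = counts.sum()
    return np.max(np.abs(np.cumsum(counts) / n - np.cumsum(null_probs)))

def monte_carlo_pvalue(observed, null_probs, statistic, n_sim=10000, seed=0):
    """Monte Carlo approximation of the exact null distribution of a
    categorical goodness-of-fit statistic under a fully specified null:
    simulate multinomial samples, recompute the statistic, and report the
    proportion at least as extreme as the observed value."""
    rng = np.random.default_rng(seed)
    observed = np.asarray(observed)
    t_obs = statistic(observed, null_probs)
    sims = rng.multinomial(observed.sum(), null_probs, size=n_sim)
    t_sim = np.array([statistic(s, null_probs) for s in sims])
    # add-one correction keeps the estimated p-value away from exactly 0
    return (np.sum(t_sim >= t_obs) + 1) / (n_sim + 1)

# A strongly skewed sample under a uniform null over four ordered categories:
p = monte_carlo_pvalue(np.array([70, 10, 10, 10]), np.full(4, 0.25), ks_categorical)
```

With enough simulations this sidesteps the asymptotic approximations the thesis argues against, at the cost of computation that is trivial on modern hardware.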
19

On two-sample data analysis by exponential model

Choi, Sujung 01 November 2005 (has links)
We discuss two-sample problems and the implementation of a new two-sample data analysis procedure. The proposed procedure is based on the concepts of mid-distribution, design of score functions, components, comparison distribution, comparison density and exponential model. Assume that we have a random sample X_1, ..., X_m from a continuous distribution F(y) = P(X_i ≤ y), i = 1, ..., m, and a random sample Y_1, ..., Y_n from a continuous distribution G(y) = P(Y_i ≤ y), i = 1, ..., n. Also assume independence of the two samples. The two-sample problem tests homogeneity of the two samples and formally can be stated as H0 : F = G. To solve the two-sample problem, a number of tests have been proposed by statisticians in various contexts. Two typical tests are the two-sample t-test and Wilcoxon's rank sum test. However, since they test differences in location, they do not extract as much information from the data as a test of the homogeneity of the distribution functions. Even though the Kolmogorov-Smirnov or Anderson-Darling statistics can be used to test H0 : F = G, those statistics give no indication of the actual relation of F to G when H0 : F = G is rejected. Our goal is to learn why it was rejected. Our approach gives an answer using graphical tools, which is a main property of our approach. Our approach is functional in the sense that the parameters to be estimated are probability density functions. Compared with other statistical tools for two-sample problems such as the t-test or the Wilcoxon rank-sum test, density estimation lets us understand the data more fully, which is essential in data analysis. Our approach to density estimation works with small sample sizes, too. Also, our methodology makes almost no assumptions on the two continuous distributions F and G. In that sense, our approach is nonparametric. Our approach brings graphical elements into the two-sample problem, where typically few graphical elements exist. 
Furthermore, our procedure will help researchers to make a conclusion as to why two populations are different when H0 is rejected and to give an explanation to describe the relation between F and G in a graphical way.
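The comparison distribution central to this approach can be estimated directly from data. A minimal sketch follows, as a generic illustration rather than the thesis's estimation procedure, which works through score functions and an exponential model for the comparison density.

```python
import numpy as np

def empirical_comparison_distribution(x, y, grid=None):
    """Empirical comparison distribution D(u) = G(F^{-1}(u)) for the
    two-sample problem: the fraction of the second sample lying at or
    below the u-th quantile of the first. Under H0: F = G, D(u) is close
    to u, so departures from the diagonal show *how* the samples differ."""
    x = np.asarray(x, dtype=float)
    y_sorted = np.sort(np.asarray(y, dtype=float))
    if grid is None:
        grid = np.linspace(0.0, 1.0, 101)
    q = np.quantile(x, grid)                              # F^{-1}(u)
    d = np.searchsorted(y_sorted, q, side="right") / len(y_sorted)
    return grid, d

# With both samples from the same distribution, D should hug the diagonal.
rng = np.random.default_rng(1)
u, d = empirical_comparison_distribution(rng.normal(size=4000), rng.normal(size=4000))
```

Plotting D(u) against u (or its derivative, the comparison density) is exactly the kind of graphical diagnostic the procedure above is built to interpret when H0 is rejected.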
20

A probabilistic pricing model for a company's projects / En probabilistisk prissättningsmodell för ett företags projekt

Malmquist, Daniel January 2012 (has links)
The company's pricing is often highly impacted by the estimation of competitors' project costs, which is also the main scope of this degree project. The purpose is to develop a pricing model that deals with uncertainties, since this is a main issue in the current pricing process. A pre-study was performed, followed by a model implementation. An analysis of the model was then made before conclusions were drawn. Project cost estimation foremost, but also probability distribution functions and pricing as a general concept, were investigated in the mainly literature-based pre-study. Two suitable methods for project cost estimation were identified: Monte Carlo simulation and Hierarchy Probability Cost Analysis. These led to a theoretical project cost estimation model. A model was implemented in Matlab. It treats project cost estimation, but no other pricing aspects. The model was developed based on the theoretical one to the extent possible. Project costs were broken down into sub-costs, which were estimated for the competitors and included in a Monte Carlo simulation. Competitors' project costs were estimated using this technique. Analysing the model's accuracy was difficult. It differs from the theoretical one in terms of how probability distribution functions and correlations are estimated. These problems stem from projects with shifting characteristics and from limited data and time. A solid framework has nevertheless been created. Improvement possibilities exist, e.g. more accurate estimates and a model handling other pricing aspects. The major threat is that nobody maintains the model. In any case, estimates are never more than estimates. The model should therefore be viewed as a helpful tool, not an answer.
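The Monte Carlo treatment of sub-costs described above can be sketched generically. The triangular distributions and the three sub-costs below are illustrative assumptions, not the model's estimated distributions, and correlations between sub-costs are ignored in this sketch.

```python
import numpy as np

def simulate_project_cost(sub_costs, n_sim=100000, seed=0):
    """Monte Carlo project-cost estimation: each sub-cost is given
    (min, most likely, max) estimates, modelled here with independent
    triangular distributions. Returns the simulated distribution of the
    total project cost, from which percentiles can be read off."""
    rng = np.random.default_rng(seed)
    draws = [rng.triangular(lo, mode, hi, size=n_sim) for lo, mode, hi in sub_costs]
    return np.sum(draws, axis=0)

# Hypothetical competitor sub-costs (units arbitrary): labour, material, overhead.
total = simulate_project_cost([(80, 100, 140), (40, 50, 70), (10, 15, 25)])
p10, p50, p90 = np.percentile(total, [10, 50, 90])
```

The resulting percentile band is what turns a single point estimate of a competitor's cost into an uncertainty-aware input for pricing decisions.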
