301

A TRACE/PARCS Coupling, Uncertainty Propagation and Sensitivity Analysis Methodology for the IAEA ICSP on Numerical Benchmarks for Multi-Physics Simulation of Pressurized Heavy Water Reactor Transients

Groves, Kai January 2020 (has links)
The IAEA ICSP on Numerical Benchmarks for Multi-Physics Simulation of Pressurized Heavy Water Reactor Transients was initiated in 2016 to facilitate the development of a set of open-access, standardized numerical test problems for postulated accident scenarios in a CANDU-style reactor. The test problems include a loss of coolant accident resulting from an inlet header break, a loss of flow accident caused by a single pump trip, and a loss of regulation accident due to inadvertently withdrawn adjusters. The Benchmark was split into phases, which included stand-alone physics and thermal-hydraulics transients, coupled steady-state simulations, and coupled transients. This thesis documents the results that were generated through an original TRACE/PARCS coupling methodology developed specifically for this work. There is a strong emphasis on development methods and step-by-step verification throughout the thesis, to provide a framework for future research in this area. Beyond the Benchmark results, studies on the propagation of fundamental nuclear data uncertainty and on sensitivity analysis of coupled transients are also reported. Two Phenomena and Key Parameter Identification and Ranking Tables were generated for the loss of coolant accident scenario, to provide feedback to the Benchmark Team and to add to the body of work on uncertainty/sensitivity analysis of CANDU-style reactors. Some important results from the uncertainty analysis concern changes in the uncertainty of figures of merit, such as integrated core power and peak core power magnitude and time, between small and large break loss of coolant accidents. The analysis shows that the mean and standard deviation of the integrated core power and of the maximum integrated channel power are very close between a 30% header break and a 60% header break, despite the peak core power being much larger in the 60% break case. Furthermore, it shows a trade-off between the uncertainty in the time of the peak core power and the uncertainty in its magnitude: smaller breaks show a smaller standard deviation in the magnitude of the peak core power but a larger standard deviation in the time at which this power is reached, and vice versa for larger breaks. From the results of the sensitivity analysis study, this thesis concludes that parameters related to coolant void reactivity and to shutoff rod timing and effectiveness have the largest impact on loss of coolant accident progressions, while parameters that can have a large impact in other transients or reactor designs, such as fuel temperature reactivity feedback and control device incremental cross sections, are less important. / Thesis / Master of Science (MSc) / This thesis documents McMaster's contribution to an International Atomic Energy Agency Benchmark on Pressurized Heavy Water Reactors that closely resemble the CANDU design. The Benchmark focuses on the coupling of thermal-hydraulics and neutron physics codes and on the simulation of postulated accident scenarios. This thesis contains selected results from the Benchmark, comparing the results generated by McMaster to those of other participants. It also documents additional work that was performed to propagate fundamental nuclear data uncertainty through the coupled transient calculations and obtain an estimate of the uncertainty in key figures of merit; this work was beyond the scope of the Benchmark and is a unique contribution to the open literature. Finally, sensitivity studies were performed on one of the accident scenarios defined in the Benchmark, the loss of coolant accident, to determine which input parameters contribute most to the variability of key figures of merit.
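The uncertainty propagation described in this abstract follows the general Monte Carlo pattern: sample a perturbation of the nuclear data, rerun the coupled transient, and collect statistics on the figures of merit. A minimal sketch of that loop is below; the transient runner is a stand-in placeholder (a real study would execute TRACE/PARCS), and the Gaussian 2% perturbation is an illustrative assumption, not a value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_coupled_transient(xs_perturbation):
    """Stand-in for one coupled TRACE/PARCS transient run.

    A real study would perturb the nuclear data, execute the coupled
    codes, and parse the output; here a power trace is faked so the
    statistics below are runnable.
    """
    t = np.linspace(0.0, 10.0, 501)                    # time (s)
    peak = 2.0 * (1.0 + xs_perturbation)               # relative peak power
    power = 1.0 + (peak - 1.0) * np.exp(-((t - 1.5) ** 2) / 0.5)
    return t, power

n_samples = 200
foms = {"peak power": [], "time of peak": [], "integrated power": []}

for _ in range(n_samples):
    delta = rng.normal(0.0, 0.02)   # assumed 2% (1-sigma) data perturbation
    t, p = run_coupled_transient(delta)
    foms["peak power"].append(p.max())
    foms["time of peak"].append(t[p.argmax()])
    # Trapezoid rule for the integrated power figure of merit.
    foms["integrated power"].append(
        float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))))

for name, samples in foms.items():
    samples = np.asarray(samples)
    print(f"{name}: mean = {samples.mean():.4f}, std = {samples.std(ddof=1):.4f}")
```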
302

Effects of various levels of calcium and boron nutrition on flax.

Laganière, Jacques. January 1966 (has links)
No description available.
303

Experimental studies bearing on the nature of silicate melts and their role in trace element geochemistry.

Watson, Edward Bruce January 1976 (has links)
Thesis. 1976. Ph.D.--Massachusetts Institute of Technology. Dept. of Earth and Planetary Sciences. / Microfiche copy available in Archives and Science. / Bibliography: leaves 147-157. / Ph.D.
304

Geochemistry of alkaline-earth elements in the Amazon River

Hao, Weimin January 1979 (has links)
Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Earth and Planetary Science, 1979. / MICROFICHE COPY AVAILABLE IN ARCHIVES AND LINDGREN. / Bibliography: leaves 47-52. / by Wei Min Hao. / M.S.
305

A Radiative Model for the Study of the Feedback Mechanism between Photolytic Aerosols and Solar Radiation

Santa Maria Iruzubieta, Maria 17 December 2001 (has links)
Since the early 1970s, chemistry and transport models (ChTMs) have been proposed and improved. Tropospheric ChTMs for trace species are detailed numerical formulations intended to represent the atmospheric system as a whole, accounting for all the individual processes and phenomena that influence climate. The development of computer resources and the retrieval of emission inventories and observational data for the species of interest have driven model evolution toward three-dimensional global models that account for more complicated chemical mechanisms, wet and dry deposition phenomena, and interactions and feedback mechanisms between meteorology and atmospheric chemistry. The purpose of this study is to ascertain the sensitivity of the solar radiative field in the atmosphere to absorption and scattering by aerosols. This effort is preliminary to the study of feedback mechanisms between the photolytic processes that create and destroy aerosols and the radiation field itself. In this study, a cloud of water-soluble aerosols, randomly distributed in space within hypothetical 1-cm cubes of atmosphere, is generated. A random radius is assigned to each aerosol according to a lognormal size distribution function. The radiative field is characterized using a Mie scattering code to determine the scattering phase function and the absorption and scattering coefficients of sulfate aerosols, and a Monte Carlo ray-trace code is used to evaluate the radiative exchange. The ultimate goal of the effort is to create a tool to analyze the vertical distribution of absorption by aerosols, in order to determine whether or not feedback between photolytic processes and the radiation field needs to be included in a Third Generation Chemistry and Transport model. / Master of Science
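As a concrete illustration of the aerosol-cloud generation step described above — uniformly random positions inside a 1-cm cube and radii drawn from a lognormal size distribution — here is a minimal sketch; the median radius, geometric standard deviation, and wavelength are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative lognormal parameters (not from the thesis):
# median radius 0.1 micron, geometric standard deviation 1.8.
r_median_um = 0.1
sigma_g = 1.8

n_particles = 10_000
cube_cm = 1.0  # edge length of the hypothetical atmosphere cube

# Uniformly random positions inside the 1-cm cube.
positions = rng.uniform(0.0, cube_cm, size=(n_particles, 3))

# Lognormal radii: ln(r) is normal with mean ln(r_median) and std ln(sigma_g).
radii_um = rng.lognormal(mean=np.log(r_median_um),
                         sigma=np.log(sigma_g),
                         size=n_particles)

# The size parameter x = 2*pi*r/lambda is what feeds the Mie calculation
# of the phase function and scattering/absorption coefficients.
wavelength_um = 0.55  # visible light, illustrative
x = 2.0 * np.pi * radii_um / wavelength_um
print(f"mean radius {radii_um.mean():.3f} um, "
      f"size parameter range {x.min():.2f}-{x.max():.2f}")
```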
306

Analysis of the Effect of the August 2017 Eclipse on the Ionosphere Using a Ray-trace Algorithm

Moses, Magdalina Louise 05 August 2019 (has links)
The total solar eclipse over the continental United States on August 21, 2017 offered a unique opportunity to study the dependence of ionospheric density and morphology on incident solar radiation. Unique responses may be witnessed during eclipses, including changes in radio frequency (RF) propagation at high frequency (HF). Such changes in RF propagation were observed by the Super Dual Auroral Radar Network (SuperDARN) radars in Christmas Valley, Oregon and in Fort Hays, Kansas during the 2017 eclipse. At each site, the westward-looking radar observed an increase in the slant range of the backscattered signal during eclipse onset, followed by a decrease after totality. In order to investigate the underlying processes governing the ionospheric response to the eclipse, we employ the HF propagation toolbox (PHaRLAP), created by Dr. Manuel Cervera, to simulate SuperDARN data for different models of the eclipsed ionosphere. Thus, by invoking different hypotheses and comparing simulated results to SuperDARN measurements, we can study the underlying processes governing the ionosphere and improve our model of the ionospheric responses to an eclipse. This thesis presents three studies using this method: identification of the cause of the increase in slant range observed by SuperDARN during the eclipse; evaluation of different eclipse obscuration models; and quantification of the effect of the neutral wind velocity on the simulated eclipse data. / Master of Science / The ionosphere is the charged layer of the upper atmosphere, generated and sustained by sunlight ionizing neutral particles to form a plasma. In the absence of sunlight, ions and electrons can recombine into neutral particles. The total solar eclipse over the continental United States on August 21, 2017 offered a unique opportunity to study the dependence of ionospheric density and plasma motion on sunlight, since an eclipse is much shorter than a night. Observations during past eclipses indicate that unique ionospheric behavior may be witnessed, including changes in the propagation of radio waves in the high frequency (HF) regime. Such changes were observed by the Super Dual Auroral Radar Network (SuperDARN) ionospheric HF radars in Christmas Valley, Oregon and in Fort Hays, Kansas during the 2017 eclipse. At each site, the westward-looking radar observed an increase, during eclipse onset, in the distance the radio waves traveled before being reflected back to the radar, followed by a decrease in this distance after totality. In order to investigate the mechanisms that produce these observed effects, we employed the HF propagation toolbox (PHaRLAP), created by Dr. Manuel Cervera, to simulate radio propagation and generate simulated SuperDARN data for different models of the eclipsed ionosphere. Different models can thus be tested by comparing simulated data to measured data, letting us study the underlying processes governing the ionosphere and improve our model of the ionospheric responses to an eclipse. This thesis presents three studies that use this method to: identify the cause of the increase in the distance radio waves traveled during the eclipse; evaluate different models of the change in eclipse magnitude over time; and investigate the effect of the neutral wind velocity on the simulated eclipse data.
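The "eclipse obscuration models" compared above all reduce, at a single point and time, to a disk-overlap geometry problem: the obscuration is the fraction of the solar disk covered by the lunar disk. A sketch of that standard two-circle calculation follows; it illustrates the geometry only and is not code from the thesis.

```python
import numpy as np

def obscuration(R_sun, R_moon, d):
    """Fraction of the solar disk covered by the lunar disk.

    R_sun, R_moon: apparent angular radii; d: angular separation of
    the disk centers (all in the same units, e.g. degrees).
    """
    if d >= R_sun + R_moon:          # disks do not overlap
        return 0.0
    if d <= abs(R_moon - R_sun):     # one disk entirely inside the other
        return min(1.0, (R_moon / R_sun) ** 2)
    # Standard lens-area formula for two overlapping circles.
    a = (d**2 + R_sun**2 - R_moon**2) / (2.0 * d * R_sun)
    b = (d**2 + R_moon**2 - R_sun**2) / (2.0 * d * R_moon)
    area = (R_sun**2 * np.arccos(a) + R_moon**2 * np.arccos(b)
            - 0.5 * np.sqrt((-d + R_sun + R_moon) * (d + R_sun - R_moon)
                            * (d - R_sun + R_moon) * (d + R_sun + R_moon)))
    return area / (np.pi * R_sun**2)

# Example: equal apparent radii, centers half a radius apart.
print(obscuration(0.25, 0.25, 0.125))   # ~0.685
```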
307

Novel Architectures for Trace Buffer Design to facilitate Post-Silicon Validation and Test

Pandit, Shuchi 29 June 2014 (has links)
Post-silicon validation is playing an increasingly important role as more chips fail in the functional mode, due either to manufacturing defects that escaped scan-based tests or to design bugs missed during pre-silicon validation. Critical to the diagnosis engineer is the ability to observe as many relevant internal signal values as possible during debug. To this end, trace buffers have been proposed for enhancing the observability of internal signals during post-silicon debug. Trace buffers record the values of internal signals in real time while the chip is in normal operation. However, existing trace buffer architectures trace very few signals for a large number of cycles; thus, even with a good subset of signals traced, one often still cannot restore all the relevant values in the circuit. In this work, we propose two flexible trace buffer architectures that can restore the values of all signals by making the traced signals configurable. In addition, the buffer space can be shared among different traced signals, which makes the architectures highly flexible. Compared to conventional trace buffer architectures, the new architectures have comparable area overhead but offer the ability to restore all signals in the circuit. For cases of less than 100% restoration, the ability of circuit invariants to improve signal restoration is explored. A promising direction for future work is provided, in which targeted invariants may lead to better restoration during post-silicon validation. / Master of Science
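Signal restoration, on which such architectures rely, propagates known values forward and backward through the gates of the netlist until a fixed point is reached. The toy sketch below illustrates the general idea on a three-gate netlist; it is a simplified illustration, not the thesis's architecture or algorithm.

```python
# Toy forward/backward value restoration over a tiny gate-level netlist.
# Values: 0, 1, or absent (unknown).

netlist = [                       # (gate_type, output, inputs)
    ("AND", "n1", ("a", "b")),
    ("NOT", "n2", ("n1",)),
    ("OR",  "y",  ("n2", "c")),
]

def restore(values):
    changed = True
    while changed:                # iterate to a fixed point
        changed = False
        for kind, out, ins in netlist:
            iv = [values.get(i) for i in ins]
            ov = values.get(out)
            # --- forward implications ---
            new = None
            if kind == "AND":
                if 0 in iv: new = 0
                elif all(v == 1 for v in iv): new = 1
            elif kind == "OR":
                if 1 in iv: new = 1
                elif all(v == 0 for v in iv): new = 0
            elif kind == "NOT":
                if iv[0] is not None: new = 1 - iv[0]
            if new is not None and ov is None:
                values[out] = new; changed = True
            # --- simple backward implications ---
            if kind == "AND" and ov == 1:
                for i in ins:
                    if values.get(i) is None:
                        values[i] = 1; changed = True
            if kind == "OR" and ov == 0:
                for i in ins:
                    if values.get(i) is None:
                        values[i] = 0; changed = True
            if kind == "NOT" and ov is not None and iv[0] is None:
                values[ins[0]] = 1 - ov; changed = True
    return values

# Suppose the trace buffer recorded only a=1 and y=0 in some cycle:
print(restore({"a": 1, "y": 0}))
# y=0 forces n2=0 and c=0; n2=0 forces n1=1; n1=1 forces b=1.
```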
308

Next-Generation Earth Radiation Budget Instrument Concepts

Coffey, Katherine Leigh 11 May 1998 (has links)
The current effort addresses two issues important to the research conducted by the Thermal Radiation Group at Virginia Tech. The first research topic involves the development of a method that can properly model the diffraction of radiation as it enters an instrument aperture. The second topic involves the study of a potential next-generation space-borne radiometric instrument concept. Presented are multiple modeling efforts to describe the diffraction of monochromatic radiant energy passing through an aperture, for use in the Monte Carlo ray-trace environment. Described in detail is a deterministic model based upon Heisenberg's uncertainty principle and the particle theory of light. This method is applicable to either Fraunhofer or Fresnel diffraction situations, but is incapable of predicting the secondary fringes in a diffraction pattern. Also presented is a second diffraction model, based on the Huygens-Fresnel principle with a correcting obliquity factor. This model is useful for predicting Fraunhofer diffraction, and can predict the secondary fringes because it keeps track of phase. NASA is planning for the next generation of instruments to follow CERES (Clouds and the Earth's Radiant Energy System), an instrument which measures components of the Earth's radiant energy budget in three spectral bands. A potential next-generation concept involves modifying the current CERES instrument to measure in a larger number of wavelength bands. This increased spectral partitioning would be achieved by adding filters and detectors to the current CERES geometry. The capacity of the CERES telescope to serve this purpose is addressed in this thesis. / Master of Science
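For the single-slit case, a phase-tracking Fraunhofer model of the kind described above reduces to the classic pattern I(θ) = I₀ (sin β / β)² with β = (π a / λ) sin θ, secondary fringes included. A short sketch evaluating that pattern (illustrative parameter values, not the thesis code):

```python
import numpy as np

wavelength = 0.5e-6        # 500 nm, illustrative
slit_width = 10e-6         # 10 micron aperture, illustrative

theta = np.linspace(-0.3, 0.3, 7)          # observation angles (rad)
beta = np.pi * slit_width * np.sin(theta) / wavelength
intensity = np.sinc(beta / np.pi) ** 2     # np.sinc(x) = sin(pi x)/(pi x)

for th, i in zip(theta, intensity):
    print(f"theta = {th:+.3f} rad  ->  I/I0 = {i:.4e}")
```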
309

A Monte Carlo ray trace tool for predicting contrast in naval scenes including the effects of polarization

Maniscalco, Joseph 30 December 2002 (has links)
The survivability of U.S. warships has become a higher priority than ever before. Two ways to improve survivability are either to avoid damage or to continue to operate after damage has been incurred. This thesis concentrates on the first line of defense, which involves the first of these two approaches. Specifically, this thesis evaluates the extent of threat due to optical contrast with the ocean background. As part of this effort, a Monte Carlo ray-trace (MCRT) tool was created that allows the user to vary the shape and surface properties of a ship. A reverse MCRT was performed in order to reduce the processing time required to get accurate results. Using this MCRT tool, the user can determine the theoretical contrast with the ocean surface that would be seen at any viewing angle, with and without a polarization filter. The contrast due to differential polarization and a change in viewing angle is estimated to determine the extent of threat. These results can be determined for both daytime and nighttime conditions by specifying whether the ray trace is in the infrared or visible light range. The location of the sun for daytime conditions and the temperature of the surfaces for nighttime conditions can both be adjusted by the user. In order to get an accurate estimate of the signal power coming from the ocean surface, a great deal of time and effort was spent modeling the ocean surface. Many studies have been done concerning the slope statistics of an ocean surface, some more informative than others. This thesis brings two of the most complete studies together to obtain accurate slope statistics in both the along-wind and crosswind directions. An original idea by the author was used to give a typical shape to the waves of the simulated ocean surface. The surface properties of the ship were determined using Fresnel's equations and the complex index of refraction of water at the particular wavelengths of interest. / Master of Science
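The Fresnel-equation step mentioned above can be sketched directly: given a (possibly complex) index of refraction, the s- and p-polarized power reflectances follow from Snell's law and the Fresnel amplitude coefficients. The index values below are illustrative, and the sketch is not the thesis code.

```python
import numpy as np

def fresnel_reflectance(n2, theta_i_deg, n1=1.0):
    """Power reflectance for s and p polarization at a smooth interface.

    n2 may be complex (absorbing medium); angle of incidence in degrees.
    """
    ti = np.deg2rad(theta_i_deg)
    cos_i = np.cos(ti)
    # Snell's law with a complex transmitted angle: get cos_t from sin_t.
    sin_t = n1 * np.sin(ti) / n2
    cos_t = np.sqrt(1.0 - sin_t**2 + 0j)
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return abs(rs) ** 2, abs(rp) ** 2

# Water in the visible is roughly n = 1.33; the imaginary (absorbing)
# part matters mainly in the infrared. Values here are illustrative.
for angle in (0, 30, 60, 85):
    Rs, Rp = fresnel_reflectance(1.33 + 0.01j, angle)
    print(f"{angle:2d} deg: Rs = {Rs:.4f}, Rp = {Rp:.4f}")
```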
310

Robust Parameter Inversion Using Stochastic Estimates

Munster, Drayton William 10 January 2020 (has links)
For parameter inversion problems governed by systems of partial differential equations, such as those arising in Diffuse Optical Tomography (DOT), even the cost of repeated objective function evaluation can be overwhelming. Despite the linear (in the state variable) nature of the DOT problem, the nonlinear parameter inversion process is dominated by the computational burden of solving a large linear system for each source and frequency. To compute the Jacobian for use in Newton-type methods, an adjoint solve is required for each detector and frequency. Since a three-dimensional tomography problem may have nearly 1,000 sources and detectors, the computational cost of an optimization routine is a large burden. While techniques from model order reduction can partially alleviate the computational cost, obtaining error bounds in parameter space is typically not feasible. In this work, we examine two different remedies based on stochastic estimates of the objective function. In the first manuscript, we focus on maximizing the efficiency of using stochastic estimates by replacing our objective function with a surrogate objective function computed from a reduced order model (ROM). We use as few as a single sample to detect a misfit between the full-order and surrogate objective functions. Once a sufficiently large difference is detected, the ROM must be updated to reduce the error. We propose a new technique for improving the ROM with very few large linear solves. Using this technique, we observe a reduction of up to 98% in the number of large linear solves for a three-dimensional tomography problem. In the second manuscript, we focus on establishing a robust algorithm. We propose a new trust region framework that replaces the objective function evaluations with stochastic estimates of the improvement factor and of the misfit between the model and objective function gradients. If these estimates satisfy a fixed multiplicative error bound with a high, but fixed, probability, we show that this framework converges almost surely to a stationary point of the objective function. We derive suitable bounds for the DOT problem and present results illustrating the robust nature of these estimates with only 10 samples per iteration. / Doctor of Philosophy / For problems such as medical imaging, the process of reconstructing the state of a system from measurement data can be very expensive to compute. The ever-increasing need for high accuracy requires very large models. Reducing the computational burden by replacing the model with a specially constructed smaller model is an established and effective technique; however, it can be difficult to determine how well the smaller model matches the original. In this thesis, we examine two techniques for estimating the quality of a smaller model based on randomized combinations of sources and detectors. The first technique focuses on reducing the computational cost as much as possible. With the equivalent of a single randomized source, we show that this estimate is an effective measure of model quality. Coupled with a new technique for improving the smaller model, we demonstrate a highly efficient and robust method. The second technique prioritizes robustness. The algorithm uses these randomized combinations to estimate how the observations change for different system states. If these estimates are accurate with high probability, we show that this leads to a method that always finds a minimum misfit between the predicted values and the observed data.
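The "randomized combinations of sources" idea can be illustrated generically: for a misfit that sums squared residuals over many sources, a Rademacher-weighted combination of the sources gives an unbiased estimate of the full misfit from a single simultaneous solve per sample. A toy linear-algebra sketch of that standard estimator (not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy residual matrix: one column per source (e.g. ~1,000 sources in DOT).
n_detectors, n_sources = 200, 1000
R = rng.standard_normal((n_detectors, n_sources))

exact = np.linalg.norm(R, "fro") ** 2     # full misfit: one solve per source

def stochastic_estimate(n_samples):
    # Rademacher weights w (+/-1) satisfy E[||R w||^2] = ||R||_F^2,
    # so each sample costs only ONE combined "simultaneous source" solve.
    est = 0.0
    for _ in range(n_samples):
        w = rng.choice([-1.0, 1.0], size=n_sources)
        est += np.linalg.norm(R @ w) ** 2
    return est / n_samples

for k in (1, 10, 100):
    print(f"{k:3d} samples: relative error "
          f"{abs(stochastic_estimate(k) - exact) / exact:.3%}")
```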
