391.
A TRACE/PARCS Coupling, Uncertainty Propagation and Sensitivity Analysis Methodology for the IAEA ICSP on Numerical Benchmarks for Multi-Physics Simulation of Pressurized Heavy Water Reactor Transients. Groves, Kai (January 2020).
The IAEA ICSP on Numerical Benchmarks for Multiphysics Simulation of Pressurized Heavy Water Reactor Transients was initiated in 2016 to facilitate the development of a set of open-access, standardized numerical test problems for postulated accident scenarios in a CANDU-style reactor. The test problems include a loss of coolant accident resulting from an inlet header break, a loss of flow accident caused by a single pump trip, and a loss of regulation accident due to inadvertently withdrawn adjusters. The Benchmark was split into phases, which included stand-alone physics and thermal-hydraulics transients, coupled steady-state simulations, and coupled transients. This thesis documents the results that were generated through an original TRACE/PARCS coupling methodology developed specifically for this work. There is a strong emphasis on development methods and step-by-step verification throughout the thesis, to provide a framework for future research in this area. Beyond the Benchmark results, this thesis also reports studies on the propagation of fundamental nuclear data uncertainty and on sensitivity analysis of coupled transients. Two Phenomena and Key Parameter Identification and Ranking Tables were generated for the loss of coolant accident scenario, to provide feedback to the Benchmark Team and to add to the body of work on uncertainty/sensitivity analysis of CANDU-style reactors. Some important results from the uncertainty analysis concern how the uncertainty of figures of merit such as integrated core power and peak core power magnitude and time changes between small and large break loss of coolant accidents. The analysis shows that the mean and standard deviation of the integrated core power and maximum integrated channel power are very close between a 30% header break and a 60% header break, despite the peak core power being much larger in the 60% break case.
Furthermore, it shows that there is a trade-off between the uncertainty in the time of the peak core power and the uncertainty in its magnitude: smaller breaks show a smaller standard deviation in the magnitude of the peak core power but a larger standard deviation in when that peak is reached during the transient, and vice versa for larger breaks. From the results of the sensitivity analysis study, this thesis concludes that parameters related to coolant void reactivity and to shutoff rod timing and effectiveness have the largest impact on loss of coolant accident progressions, while parameters that can have a large impact in other transients or reactor designs, such as fuel temperature reactivity feedback and control device incremental cross sections, are less important. / Thesis / Master of Science (MSc) / This thesis documents McMaster's contribution to an International Atomic Energy Agency Benchmark on Pressurized Heavy Water Reactors that closely resemble the CANDU design. The Benchmark focuses on the coupling of thermal-hydraulics and neutron physics codes and on the simulation of postulated accident scenarios. This thesis contains select results from the Benchmark, comparing the results generated by McMaster to those of other participants. It also documents additional work that was performed to propagate fundamental nuclear data uncertainty through the coupled transient calculations and obtain an estimate of the uncertainty in key figures of merit. This work was beyond the scope of the Benchmark and is a unique contribution to the open literature. Finally, sensitivity studies were performed on one of the accident scenarios defined in the Benchmark, the loss of coolant accident, to determine which input parameters contribute most to the variability of key figures of merit.
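The figure-of-merit statistics above (mean and standard deviation of peak power magnitude, peak time, and integrated power) come from propagating sampled input uncertainties through coupled transient calculations. The bookkeeping can be illustrated with a minimal Monte Carlo sketch; note that the transient shape, parameter names, and distributions below are invented stand-ins for illustration, not the thesis's TRACE/PARCS models or data:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 5.0, 1001)   # time grid [s]
dt = t[1] - t[0]

def toy_power_transient(t, rho, scram_delay):
    """Illustrative power pulse: exponential rise from positive (void)
    reactivity, cut off by shutoff-rod insertion after scram_delay."""
    rise = np.exp(rho * t)
    scram = np.where(t > scram_delay, np.exp(-(t - scram_delay) / 0.5), 1.0)
    return rise * scram

peaks, peak_times, integrated = [], [], []
for _ in range(500):
    rho = rng.normal(1.0, 0.10)      # sampled reactivity parameter (hypothetical)
    delay = rng.normal(1.0, 0.05)    # sampled shutoff-rod timing (hypothetical)
    p = toy_power_transient(t, rho, delay)
    peaks.append(p.max())
    peak_times.append(t[p.argmax()])
    integrated.append(p.sum() * dt)  # rectangle-rule integral of power over time

print(f"peak magnitude: mean={np.mean(peaks):.2f}  std={np.std(peaks):.2f}")
print(f"peak time:      mean={np.mean(peak_times):.3f} s  std={np.std(peak_times):.3f} s")
print(f"integrated:     mean={np.mean(integrated):.2f}  std={np.std(integrated):.2f}")
```

The trade-off described in the abstract would appear here as a shift of spread between the `peaks` and `peak_times` statistics as the sampled parameters change.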
392.
Learning-Based Pareto Optimal Control of Large-Scale Systems with Unknown Slow Dynamics. Tajik Hesarkuchak, Saeed (10 June 2024).
We develop a data-driven approach to Pareto optimal control of large-scale systems, where decision makers know only their local dynamics. Using reinforcement learning, we design a control strategy that optimally balances multiple objectives. The proposed method achieves near-optimal performance and scales well with the total dimension of the system. Experimental results demonstrate the effectiveness of our approach in managing multi-area power systems. / Master of Science / We have developed a new way to manage complex systems—like power networks—where each part only knows about its own behavior. By using a type of artificial intelligence known as reinforcement learning, we've designed a method that can handle multiple goals at once, ensuring that the entire system remains stable and works efficiently, no matter how large it gets. Our tests show that this method is particularly effective in coordinating different sections of power systems to work together smoothly. This could lead to more efficient and reliable power distribution in large networks.
393.
Spatio-temporal patterns of soil resources following disturbance in a 40-year-old slash pine (Pinus elliottii Engelm.) forest in the Coastal Plain of South Carolina. Guo, Dali (6 November 2001).
There has been an increased interest in characterizing and interpreting ecological heterogeneity over space and time in the past two decades. This is mainly due to the renewed recognition of the significance of heterogeneity in ecological theories. However, studies that have combined both spatial and temporal aspects of heterogeneity have been rare. A unified approach to define and quantify heterogeneity has also been lacking. Designed to overcome these problems, this study was conducted in a 40-year-old Pinus elliottii Engelm. forest at the Savannah River Site near Aiken, SC, USA with the following specific objectives: 1) to characterize the spatial patterns of soil and forest floor variables (moisture, pH, soil available nitrogen and phosphate, forest floor and soil carbon and nitrogen), 2) to examine the dynamics of these spatial patterns in response to two types of disturbance: whole-tree harvesting and girdling, and 3) to evaluate some of the current methods for quantifying ecological heterogeneity.
In response to both disturbance treatments, spatial heterogeneity measured by sample variance showed a marked "increase and then decline" temporal pattern in soil moisture, soil available nitrogen and phosphorus. Similar patterns were not found in total soil C and N, and total litter C and N. Harvesting resulted in greater and more drastic changes in the variations of soil nutrients and water than did girdling. Despite the popularity of semivariogram analysis in recent ecological studies, the technique did not provide consistent results on patterns of heterogeneity in our system. A simulation experiment demonstrated that semivariogram analysis may suffer from many problems when it is used to characterize patchiness, one form of heterogeneity.
The results from this study have a number of implications. First, spatial patterns of soil resources are highly dynamic. The dynamics of patterns in soil resources may partly account for the weak correlation between vegetation and soil observed in the ecological literature. Second, heterogeneity may be most effectively quantified by first identifying quantifiable components and then quantifying these components individually. A common pattern can be sought by comparing patterns of different components of heterogeneity for a given ecological property, and by comparing patterns of different ecological variables for a given component of heterogeneity. Third, compared to surveys, field manipulative experiments can provide information that links patterns with ecological processes. As such, this study adds to the ecological literature valuable information on temporal changes in soil heterogeneity following disturbance, as well as conceptual advances in the quantification of ecological heterogeneity. / Ph. D.
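The semivariogram analysis evaluated in this study summarizes spatial structure as half the mean squared difference between measurements a given distance apart. A minimal sketch of the empirical estimator on hypothetical transect data follows; the variable, spacing, and lag bins are illustrative assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D transect of a soil variable at 1 m spacing
# (spatially autocorrelated random-walk toy data).
coords = np.arange(200, dtype=float)
z = np.cumsum(rng.normal(0.0, 1.0, coords.size))

def empirical_semivariogram(coords, z, lag_edges):
    """gamma(h): half the mean squared difference over all pairs in each lag bin."""
    d = np.abs(coords[:, None] - coords[None, :])   # pairwise separation distances
    sq = (z[:, None] - z[None, :]) ** 2             # pairwise squared differences
    gamma = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d > lo) & (d <= hi)
        gamma.append(sq[mask].mean() / 2.0)
    return np.array(gamma)

lag_edges = np.arange(0.0, 31.0, 5.0)   # bins (0,5], (5,10], ..., (25,30]
gamma = empirical_semivariogram(coords, z, lag_edges)
# A gamma(h) that rises and levels off at a "sill" indicates patchy spatial
# structure; the study's caution is that this shape can be ambiguous in practice.
```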
394.
A morphological and systematic study of the first and second instars of the Kermesidae in the Nearctic region (Homoptera: Coccoidea). Baer, Ronald G. (8 June 2010).
The Coccoidea, or scale insects, are among the most important pests of silvicultural, ornamental, agricultural and greenhouse plantings throughout the world. There are 21 families comprising 6,000 species. Direct injury occurs from the withdrawal of sap while feeding and from the production of galls. Some scale insects are host specific while others are polyphagous. They feed on many different parts of plants including the roots, trunk, stems, leaves, buds and fruit. / Ph. D.
395.
Full Scale Investigation of Bilge Keel Effectiveness at Forward Speed. Grant, David J. (3 June 2008).
Ship motions in a seaway have long been of great importance, and today with advanced hull forms and higher speeds they are as important as ever. While one can now often adequately predict heave, pitch, sway, yaw and even surge, roll motions are much more difficult. Roll is the one motion that is very dependent upon viscous effects of the fluid. Recently, at David Taylor Model Basin, there have been model experiments where the bilge keels were instrumented in order to directly measure their damping force upon the vessel. To build upon this work and to validate it when applied to full scale vessels, a trial using the Italian naval vessel Nave Bettica was performed.
The objective of this thesis is to describe the experiment, present and analyze the results, and offer some conclusions based upon these results. The process of instrumenting the port bilge keel using strain gages and correlating their output to pressures and total forces is described. Selected results for different forward speeds are presented, with full results in the appendices. Particle image velocimetry (PIV) was also performed during the test and was used to measure the flow field in a three-foot by three-foot area under the aft end of the same bilge keel. Selected image series are presented, as is a methodology for using these images to calculate the center of pressure and the corresponding results. / Master of Science
396.
Design of an Experimental Mine Simulator for the Development of a Procedure for the Utilization of Multiple Tracer Gases in Underground Mines. Bowling, John Robert Reid (1 June 2011).
An experimental mine simulator was constructed that will be used to conduct tracer gas experiments in the laboratory. The test apparatus simulates a mine in a tabular deposit; it is modular and simple, and can easily be rearranged to represent a variety of mine geometries. The apparatus is appropriate for the use of tracer gases because it is both airtight and open-circuit (exhausting to the atmosphere) and maintains turbulent flow throughout the model, ensuring the tracer gas is fully dispersed.
The model features ports for the injection and sampling of tracer gases, which represent boreholes present in an actual mine. The model is designed, in part, for practicing tracer gas release and sampling methods in the laboratory. Valves on the apparatus represent ventilation controls, such as stoppings or regulators, or changing resistances in a mine, such as an increase in resistance due to a roof fall or a decrease in resistance due to stoppings being destroyed. The relative resistances of airways can be changed by changing the status of the valves to represent different states of the ventilation controls.
The mine simulator should serve as a tool for identifying and investigating novel tracer gases, developing a procedure for performing ventilation surveys using multiple tracer gases, and eventually developing a method for remotely inferring ventilation changes using tracer gases. / Master of Science
397.
Obsessive Compulsive Self-Syntonicity of Symptoms Scale: Development, Reliability and Validity. Van Kirk, Nathaniel Peter (11 June 2010).
One of the difficulties encountered by therapists working with individuals with obsessive-compulsive (OC) symptoms/disorder is the resistance of OC symptoms to change. A factor that may affect the prognosis for such individuals is the extent to which their symptoms result in positive or negative functional consequences. The current study describes the development, reliability, and validity of a new scale, the Obsessive Compulsive Self-Syntonicity of Symptoms Scale (OCSSSS), that measures the positive and negative functional consequences of OC symptoms. Items were generated by the principal investigator and major professor after examining the research literature for OC and related disorders. Fifty-four items were generated, which yielded a robust, seven-component structure through principal components analysis. Items were rated on a 1-5 Likert scale (1 = extremely inconsistent, 5 = extremely consistent) with an option of "not applicable" (coded as "0"). Higher scores on the OCSSSS indicated more perceived functional consequences of OC symptoms in an individual's daily life. The sample consisted of 634 students who responded to an advertisement describing general examples of OC symptoms and who completed several measures online, including: 1) Yale-Brown Obsessive Compulsive Scale, Self Report (Y-BOCS-SR); 2) Obsessive Compulsive Inventory Revised (OCI-R); 3) University of Rhode Island Change Assessment (URICA); 4) self-reports of the frequency of prior/current therapy, evaluations of the effectiveness of prior therapy and the expected effectiveness of future therapy, and willingness to participate in future therapy; and 5) social/work adjustment. The OCSSSS was reliable and most items correlated with total score.
The OCSSSS's total score and individual component scores were correlated with symptom severity, symptom type, subjective evaluations of treatment experiences, willingness to participate in treatment, avoidance, work and social adjustment, and stage of change. Regression analyses indicated the OCSSSS significantly predicts stage of change, controlling for symptom severity. / Master of Science
398.
Wal-Mart: New strategy at the urban scale for American downtowns. Lee, Jong Bum (1 June 2007).
Wal-Mart, the nation's largest private employer (1.2 million workers), retailer of consumer goods ($280 billion in sales), and owner of corporate real estate (911 million square feet), is becoming ever more dominant both locally and globally. Its simple but straightforward strategy is enough to open customers' pockets and sell goods. However, this strategy runs up against risks in quality-oriented markets that target upper-income customers. Furthermore, because Wal-Mart keeps its enormous big-box stores outside of town, it is blamed for making towns more boundless and characterless, and for killing many retail shops downtown. Architecturally, reconsidering Wal-Mart so that it better balances urban concerns with practical design is now more imperative than ever. This thesis therefore explores the possibility of a Wal-Mart that is constructive for the future, approaching towns and communities in a friendlier way, with a view toward revitalizing rural downtowns. / Master of Architecture
399.
On a method of multiple-presentation scaling of successive intervals. McGahey, Mary Beth (January 1958).
This thesis discusses a multiple-presentation scaling technique based on a somewhat modified successive interval approach. If a subject gives two or more different responses to the same stimulus in repeated presentations, this may be considered an "uncertainty," or a lack of the subject's ability to discriminate between adjacent points on the rating scale. This "interval of uncertainty" is assumed to be constant, and the scale points are estimated, on the basis of this assumption, by a least-squares technique.
Analysis and explicit computational procedures have been developed for the case of two, three, and four presentations. Numerical illustrations have been added for each case.
The thesis also includes a discussion on the combination of scales for different subjects if their intervals of uncertainty are different. Finally, it proposes methods of testing for bias between presentations, and a randomized procedure to correct for such bias if its presence is indicated by the test. / Master of Science
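The least-squares step can be illustrated with the classical (unmodified) successive-intervals model, in which each inverse-normal-transformed cumulative proportion satisfies z[i][k] = t_k - s_i for boundary locations t_k and stimulus scale values s_i. The sketch below uses hypothetical data and omits the thesis's interval-of-uncertainty term, so it is an illustration of the general technique rather than of the modified procedure:

```python
from statistics import NormalDist

# Hypothetical cumulative-proportion data (not from the thesis): P[i][k] is the
# proportion of presentations in which stimulus i was rated at or below
# category boundary k.
P = [[0.10, 0.45, 0.85],
     [0.05, 0.30, 0.70],
     [0.20, 0.60, 0.95]]

inv = NormalDist().inv_cdf
Z = [[inv(p) for p in row] for row in P]  # additive model: Z[i][k] = t_k - s_i

n_stim, n_bound = len(Z), len(Z[0])
row_means = [sum(row) / n_bound for row in Z]
col_means = [sum(Z[i][k] for i in range(n_stim)) / n_stim for k in range(n_bound)]
grand = sum(row_means) / n_stim

# Least-squares solution under the identification constraint sum(s) = 0:
t = col_means                        # estimated boundary locations
s = [grand - m for m in row_means]   # estimated stimulus scale values
```

Minimizing the sum of squared residuals of the additive model reduces, for complete data, to these row/column means; the thesis's contribution is the extension of this kind of estimation to repeated presentations with a constant interval of uncertainty.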
400.
Operationalizing Scale in Watershed-based Stormwater Management. Adams, Erica Elaine (7 June 2011).
Watershed-based stormwater management (WSM) has been proposed as more effective than traditional methods of controlling stormwater, which are carried out along jurisdictional lines at the parcel scale. Because WSM considers the watershed as a total unit, it is considered more effective at reducing problems associated with stormwater, including environmental degradation and flooding. However, larger watersheds encompass smaller watersheds, and therefore WSM can be implemented at a wide range of scales. There has been little research on which scale is most appropriate, and, more specifically, only a modest amount of work has taken stakeholder opinion into account.
The specific objectives of this study are to determine: 1) whether watershed scale is an important factor in WSM, 2) whether stakeholder opinion has an effect on the appropriate scale used in WSM, and 3) what scale is most appropriate for WSM, if scale is an important factor. To meet these objectives, we delineated sub-watersheds within a watershed in southwestern Virginia, surveyed stakeholders within the watershed on their opinions of stormwater management methods, and compared the results at both watershed scales using statistical tests and decision support software. The results of this study have important implications for geographic scale in WSM, as well as for the use of qualitative data in determining appropriate geographic scale in matters of implementation in the field of planning. / Master of Science