331

Liquid Sodium Stratification Prediction and Simulation in a Two-Dimensional Slice

Langhans, Robert Florian 28 March 2017 (has links)
In light of rising global temperatures and energy needs, nuclear power is uniquely positioned to offer carbon-free and reliable electricity. In many markets, nuclear power faces strong headwinds due to competition with other fuel sources and prohibitively high capital costs. Small Modular Reactors (SMRs), such as the proposed Advanced Fast Reactor (AFR) 100, have gained popularity in recent years as they promise economies of scale, reduced capital costs, and flexibility of deployment. Fast sodium reactors commonly feature an upper plenum with a large inventory of sodium. When temperatures change during transients, stratification can occur. It is important to understand the stratification behavior of these large volumes because stratification can counteract natural circulation and fatigue materials. This work features steady-state and transient simulations of thermal stratification and natural circulation of liquid sodium in a simple rectangular slice using a commercial CFD code (ANSYS FLUENT). Different inlet velocities and their effect on stratification are investigated by changing the inlet geometry. Stratification was observed in the two cases with the lowest inlet velocities. An approach for tracking the stratification interface was developed that focuses on temperature gradients rather than temperature differences. Other authors have developed correlations to predict stratification in three-dimensional enclosures; however, these correlations predict stratified conditions for all of the simulations, even the ones that did not stratify. The previous models are modified to reflect the two-dimensional nature of the flow in the enclosure. The modified results align more closely with the simulations and correctly predict stratification in the investigated cases. / Master of Science
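A minimal sketch (not the thesis code) of the gradient-based interface-tracking idea: in a stratified field, the interface can be located as the height where the vertical temperature gradient peaks, rather than by thresholding temperature differences. The grid and temperature profile below are invented for illustration.

```python
import numpy as np

def stratification_interface(T, y):
    """Locate a stratification interface in a 2D temperature field T[y, x]
    by finding, in each column, the height where |dT/dy| is largest."""
    dTdy = np.gradient(T, y, axis=0)        # vertical temperature gradient
    rows = np.argmax(np.abs(dTdy), axis=0)  # row of steepest gradient per column
    return y[rows]                          # interface height for each column

# Invented example: a tanh-shaped thermal front at y = 0.6 on a 100 x 50 grid
y = np.linspace(0.0, 1.0, 100)
x = np.linspace(0.0, 0.5, 50)
T = 600.0 + 50.0 * np.tanh((y[:, None] - 0.6) / 0.05) * np.ones_like(x[None, :])
print(stratification_interface(T, y))       # ~0.6 in every column
```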
332

Improved Design Method for Cambered Stepped Hulls with High Deadrise

Bay, Raymond James 18 June 2019 (has links)
Eugene Clement created a design method for swept-back cambered step hulls with deadrise. The cambered step is designed to carry 90% of the planing vessel's weight, with the remaining 10% supported by a stern-mounted hydrofoil. The method requires multiple design iterations in order to achieve an optimal design. Clement stated that the method was not suitable for cambered planing surfaces with high deadrise angles, greater than 15 degrees. The goal of this thesis is to create a design procedure for swept-back cambered planing surfaces with high deadrise angles that does not require multiple iterations to obtain an optimal design. The computational fluid dynamics (CFD) program STAR-CCM+ is used to generate a database of performance characteristics for a wide range of designs, varying deadrise angle, load requirement, trim angle, and camber. The simulations are first validated against experimental data for two different cambered steps designed by Stefano Brizzolara and tested in the tow tank at the United States Naval Academy. A series of validation studies utilizing fixed and overset meshes led to a final simulation setup with an overset mesh that allowed accurate prediction of drag, trim moment, wetted keel length, and the wake profile aft of the cambered planing surface. The database is fitted such that the final equations for optimal design values, such as camber, trim angle, drag (shear and pressure), wetted keel length, wetted surface area, and trim moment, are expressed in terms of deadrise angle and lift. The optimized design equations are validated with CFD simulation. / Master of Science / Eugene Clement developed a new design method to improve the performance of ultra-fast planing craft. A planing craft uses the force generated by the flow of water over its bottom to lift the vessel, without relying on the static buoyancy force that classic boat designs depend on. Clement wanted to improve the performance of the planing vessel by reducing the total drag force caused by the flow of water on the bottom of the vessel. Clement's design method reduces the wetted surface area, which reduces drag. Reducing the wetted surface area would normally reduce the lifting force on the vessel, but with the addition of curvature in the smaller wetted surface area, the lifting force remains the same. Clement's design method requires multiple iterations to obtain an optimal design, and it limits the angle of the vessel's bottom relative to horizontal to under 15 degrees. The goal of this thesis is to create a new design method for planing vessels whose bottoms have an incline of 15 degrees or more relative to horizontal. The design method is created using a computational fluid dynamics (CFD) solver to model the planing surface moving through water. The CFD solver is validated with experimental tests performed at the United States Naval Academy. The improved design method uses equations that can predict the forces and other design characteristics based on the desired vessel weight and seakeeping requirements.
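The database-fitting step can be illustrated with a small least-squares sketch. The functional form, the data, and the choice of trim angle as the fitted quantity are all invented here for illustration, not taken from the thesis.

```python
import numpy as np

# Hypothetical illustration: fit a design quantity (here, optimal trim angle
# tau, deg) as a low-order surface in deadrise angle (beta, deg) and lift (N).
beta = np.array([15.0, 20.0, 25.0, 30.0, 15.0, 20.0, 25.0, 30.0])
lift = np.array([5e3, 5e3, 5e3, 5e3, 1e4, 1e4, 1e4, 1e4])
tau  = np.array([3.1, 3.4, 3.8, 4.3, 3.6, 3.9, 4.4, 5.0])  # made-up "CFD" results

# Design matrix for tau ~ c0 + c1*beta + c2*L + c3*beta*L
A = np.column_stack([np.ones_like(beta), beta, lift, beta * lift])
coeffs, *_ = np.linalg.lstsq(A, tau, rcond=None)

def optimal_trim(beta_q, lift_q):
    """Evaluate the fitted surface at a query deadrise angle and lift."""
    return coeffs @ np.array([1.0, beta_q, lift_q, beta_q * lift_q])

print(optimal_trim(22.0, 7.5e3))  # interpolated optimal trim angle, deg
```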
333

Numerical Investigation of Various Heat Transfer Performance Enhancement Configurations for Energy Harvesting Applications

Deshpande, Samruddhi Aniruddha 09 August 2016 (has links)
Conventional understanding of the quality of energy suggests that heat is a low-grade form of energy; hence, converting this energy into a useful form of work was assumed to be difficult. However, this understanding has been challenged by researchers over the last few decades. With advances in solar, thermal, and geothermal energy harvesting, they believed that these sources of energy had great potential to operate as dependable avenues for electrical power. In recent times, waste heat from the automotive, oil and gas, and manufacturing industries has been employed to harness power. Statistics show that the US alone has the potential to generate 120,000 GWh/year of electricity from the oil, gas, and manufacturing industries, while automobiles can contribute up to 15,900 GWh/year. Thermoelectric generators (TEGs) can be employed to capture some of this otherwise wasted heat and convert it into useful electrical energy. Compared to the gas turbine industry, this field of research has emerged only over the past three decades. Researchers have shown that the efficiency of TEG modules can be improved by integrating heat transfer augmentation features on the hot side of the modules. Gas turbines employ advanced technologies for internal and external cooling, and these technologies apply across a wide range of fields, one of which is thermoelectricity. Hence, making use of gas turbine technologies in thermoelectrics would improve the efficiency of existing TEGs. This study makes an effort to develop innovative technologies for gas turbine as well as thermoelectric applications. The first part of the study analyzes heat transfer augmentation from four different configurations for low aspect ratio channels, and the second part deals with characterizing the improvement in TEG efficiency due to these heat transfer augmentation techniques. / Master of Science
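For context on why hot-side heat transfer matters, the textbook maximum-efficiency estimate for a TEG couples the Carnot limit to the dimensionless figure of merit ZT. This formula is standard thermoelectrics material, not taken from the thesis; the example temperatures are invented.

```python
import math

def teg_max_efficiency(T_h, T_c, ZT):
    """Textbook maximum-efficiency estimate for a thermoelectric generator:
    Carnot efficiency scaled by a factor involving ZT at the mean temperature."""
    carnot = 1.0 - T_c / T_h
    m = math.sqrt(1.0 + ZT)
    return carnot * (m - 1.0) / (m + T_c / T_h)

# Example: exhaust-gas hot side at 600 K, ambient cold side at 300 K, ZT = 1
print(teg_max_efficiency(600.0, 300.0, 1.0))  # ~0.108, i.e. ~11% of the heat
```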
334

Application of r-Adaptation Techniques for Discretization Error Improvement in CFD

Tyson, William Conrad 29 January 2016 (has links)
Computational fluid dynamics (CFD) has proven to be an invaluable tool for both engineering design and analysis. As the performance of engineering devices becomes more reliant upon the accuracy of CFD simulations, it is necessary not only to quantify but also to reduce the numerical error present in a solution. Discretization error is often the primary source of numerical error. Discretization error is introduced locally into the solution by truncation error, which represents the higher-order terms of an infinite series that are truncated during the discretization of the continuous governing equations of a model. Discretization error can be reduced through uniform grid refinement, but this is often impractical for typical engineering problems. Grid adaptation provides an efficient means for improving solution accuracy without the exponential increase in computational time associated with uniform grid refinement. Solution accuracy can be improved through local grid refinement, often referred to as h-adaptation, or by node relocation in the computational domain, often referred to as r-adaptation. The goal of this work is to examine the effectiveness of several r-adaptation techniques for reducing discretization error. A framework for geometry preservation is presented, and truncation error is used to drive the adaptation. Sample problems include both subsonic and supersonic inviscid flows. Discretization error reductions of up to an order of magnitude are achieved on the adapted grids. / Master of Science
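A minimal 1D sketch of r-adaptation by equidistribution of a gradient-based monitor function, showing the node-relocation idea in its simplest form. This is a generic scheme for illustration, not the truncation-error-driven method developed in the thesis.

```python
import numpy as np

def r_adapt(x, u, n_iter=5):
    """Relocate grid nodes so that a monitor function w = 1 + |du/dx| is
    equidistributed: each cell carries an equal share of the monitor integral,
    clustering points where the solution varies rapidly."""
    for _ in range(n_iter):
        w = 1.0 + np.abs(np.gradient(u, x))
        # cumulative monitor integral via the trapezoid rule
        W = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
        # invert the cumulative monitor at equal increments to get new nodes
        x_new = np.interp(np.linspace(0.0, W[-1], x.size), W, x)
        u = np.interp(x_new, x, u)   # re-sample the solution on the new grid
        x = x_new
    return x, u

x = np.linspace(0.0, 1.0, 41)
u = np.tanh((x - 0.5) / 0.05)        # steep interior layer at x = 0.5
x_adapt, _ = r_adapt(x, u)
print(np.diff(x_adapt).min(), np.diff(x_adapt).max())  # cells shrink near the layer
```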
335

Feasibility Study of a Natural Uranium Neutron Spallation Target using FLiBe as a Coolant

Boulanger, Andrew James 08 June 2011 (has links)
The research conducted was a feasibility study of using lithium fluoride-beryllium fluoride (LiF-BeF2), or FLiBe, as a coolant for a natural uranium neutron spallation source applied to an accelerator-driven sub-critical molten salt reactor. The study utilized two software tools, MCNPX 2.6 and FLUENT 12.1. MCNPX was used to determine the neutronics and the heat deposited in the spallation target structure, while FLUENT was used to determine the feasibility of cooling the target structure with FLiBe. Several target structures were analyzed, using a variety of plates and large cylinders of natural uranium with a proton beam incident on a Hastelloy-N window. The supporting structures were made of Hastelloy-N due to its resistance to corrosion by molten salts such as FLiBe and its resistance to neutron damage. The final design chosen was a "sandwich" design utilizing a section of thick plates, followed by several smaller plates, and finally another section of thick plates to stop any protons from irradiating the bottom of the target support structure or the containment vessel of the reactor. Utilizing a 0.81 MW proton beam at 1.35 mA with proton kinetic energies of 600 MeV, the total heat generated in the spallation target was about 0.9 MW due to fissions in the natural uranium. Additionally, the final spallation target design produced approximately 1.25×10^18 neutrons per second, mainly fast neutrons. The use of a natural uranium target proved to be very promising. However, cooling the target using FLiBe would require further optimization or investigation into alternate coolants. Specifically, the final design using FLiBe as a coolant was not practically feasible due to the hydraulic forces resulting from the high flow rates necessary to keep the natural uranium target structures cooled. The primary reason for the lack of a feasible solution was FLiBe itself: it is unable to pull enough of the heat generated in the target out of the target structure. Due to the high energy density of a natural uranium spallation target structure, a more effective method of cooling, such as a liquid metal coolant like lead-bismuth eutectic, will be required to avoid high hydraulic forces. / Master of Science
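The quoted beam figures are self-consistent, as a quick arithmetic check shows (this is plain unit arithmetic on the numbers stated above, not thesis analysis):

```python
# Beam power = beam current x kinetic energy per proton. Since 1 eV is the
# energy gained per elementary charge, P [W] = I [A] x E [eV/proton] numerically.
current_A = 1.35e-3            # 1.35 mA
energy_eV = 600e6              # 600 MeV per proton
power_W = current_A * energy_eV
print(power_W / 1e6)           # 0.81 MW, matching the stated beam power

# Proton rate, for context against the stated ~1.25e18 neutrons/s yield:
e = 1.602176634e-19            # elementary charge, C
protons_per_s = current_A / e
print(protons_per_s)           # ~8.4e15 protons/s
print(1.25e18 / protons_per_s) # ~150 neutrons per proton, per the quoted figures
```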
336

A computational study of the 3D flow and performance of a vaned radial diffuser

Akseraylian, Dikran 18 November 2008 (has links)
A computational study was performed on a vaned radial diffuser using MEFP (the Moore Elliptic Flow Program) flow code. The vaned diffuser studied by Dalbert et al. was chosen as the test case for this thesis; the geometry and inlet conditions were established from that study, and the performance of the computational diffuser was compared to the test case diffuser. The CFD analysis was able to demonstrate the 3D flow within the diffuser. An inlet conditions analysis was performed to establish the boundary conditions at the diffuser inlet: the given inlet flow angles were reduced in order to match the specified mass flow rate, and the inlet static pressure was held constant over the height of the diffuser. The diffuser was broken down into its subcomponents to study the effect of each component on the overall performance. The diffuser inlet region, which comprises the vaneless and semi-vaneless spaces, contains the greatest losses, 56%, but also the highest static pressure rise, 54%. The performance at the throat was also evaluated, and the blockage and pressure recovery were calculated. The results show the static pressure comparison between the computational study and the test case; the overall pressure rise of the computational study was in good agreement with the measured pressure rise. The static pressure and total pressure loss distributions in the inlet region, at the throat, and in the exit region of the diffuser were also analyzed, and the flow development was presented for the entire diffuser. The 3D flow calculations were able to illustrate a leading edge recirculation at the hub, caused by inlet skew and high losses at the hub, with the secondary flows in the diffuser convecting this high-loss fluid. The study presented in this thesis demonstrated the flow development in a vaned diffuser and its subcomponents. The performance was evaluated by calculating the static pressure rise, total pressure losses, and throat blockage. It also demonstrated current CFD capabilities for diffusers using steady 3D flow analysis. / Master of Science
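The pressure recovery and loss figures above follow standard diffuser performance definitions; a sketch with invented values for reference (textbook definitions, not the thesis data):

```python
def pressure_recovery(p_in, p_out, p_t_in):
    """Static pressure recovery coefficient: Cp = (p_out - p_in) / (p_t_in - p_in),
    the fraction of inlet dynamic pressure converted to static pressure rise."""
    return (p_out - p_in) / (p_t_in - p_in)

def total_pressure_loss(p_t_in, p_t_out, p_in):
    """Total pressure loss coefficient: omega = (p_t_in - p_t_out) / (p_t_in - p_in)."""
    return (p_t_in - p_t_out) / (p_t_in - p_in)

# Made-up numbers (kPa): inlet static 100, inlet total 140, exit static 122, exit total 133
print(pressure_recovery(100.0, 122.0, 140.0))    # 0.55
print(total_pressure_loss(140.0, 133.0, 100.0))  # 0.175
```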
337

Numerical Modeling of Air-Water Flows in Bubble Columns and Airlift Reactors

Studley, Allison F. 15 January 2011 (has links)
Bubble columns and airlift reactors were modeled numerically to better understand the hydrodynamics and analyze the mixing characteristics of each configuration. An Eulerian-Eulerian approach was used to model air as the dispersed phase within a continuous phase of water using the commercial software FLUENT. The Schiller-Naumann drag model was employed along with virtual mass and the standard k-ε turbulence model. The equations were discretized using the QUICK scheme and solved with the SIMPLE coupling algorithm. The flow regimes of a bubble column were investigated by varying the column diameter and the inlet gas velocity using two-dimensional simulations; the typical characteristics of homogeneous, slug, and heterogeneous flow were shown by examining gas holdup. The flow field predicted by the two-dimensional simulations of the airlift reactor showed a regular oscillation of the gas flow due to recirculation from the downcomer and connectors, whereas the bubble column oscillations were random and resulted in gas flow through the center of the column. The profiles of gas holdup, gas velocity, and liquid velocity showed that the airlift reactor flow was asymmetric while the bubble column flow was symmetric about the vertical axis of the column. The average gas holdup in a 10.2 cm diameter bubble column was calculated, and the results of the two-dimensional simulations at varying inlet gas velocities were similar to published experimental results. The average gas holdup in the airlift reactor for the three-dimensional simulations compared well with the experiments, while the two-dimensional simulations underpredicted the average gas holdup. / Master of Science
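The Schiller-Naumann drag correlation named above has a standard closed form in the literature; a sketch for reference (the standard published correlation, shown here for illustration rather than as FLUENT's implementation):

```python
def schiller_naumann_cd(Re):
    """Drag coefficient for a sphere per the Schiller-Naumann correlation:
    Cd = (24/Re) * (1 + 0.15 * Re**0.687) for Re <= 1000, and a constant
    0.44 in the inertial (Newton) regime beyond that."""
    if Re <= 0.0:
        raise ValueError("particle Reynolds number must be positive")
    if Re <= 1000.0:
        return 24.0 / Re * (1.0 + 0.15 * Re**0.687)
    return 0.44

for Re in (0.1, 1.0, 100.0, 5000.0):
    print(Re, schiller_naumann_cd(Re))  # recovers Stokes drag (24/Re) as Re -> 0
```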
338

Investigation of Erosion and Deposition of Sand Particles within a Pin Fin Array

Cowan, Jonathan B. 11 December 2009 (has links)
The transport of particulates within both fully developed and developing pin fin arrays is explored using computational fluid dynamics (CFD) simulations. The simulations are carried out using the LES solver GenIDLEST for the fluid (carrier) phase and a Lagrangian approach for the particle (dispersed) phase. A grid independence study and a validation case against relevant experiments are presented to lend confidence to the numerical simulations. Various Stokes numbers (0.78, 3.1, and 19.5) are explored, as well as three nondimensional particle softening temperatures (θ_ST = 0, 0.37, and 0.67). Deposition is shown to increase with decreasing particle Stokes number, and thus decreasing particle size, from 0.005% for St_p = 19.5 to 13.4% for St_p = 0.78, and is almost completely concentrated on the channel walls (99.6%-100%). The erosion potential is shown to increase with Stokes number and is highest on the pin faces. As is to be expected, deposition increases with decreasing softening temperature, from 13.4% at θ_ST = 0.67 to 79% for θ_ST = 0. Overall, the channel walls of the array show the greatest potential for deposition, while the pin faces show the greatest potential for erosion. Similarly, the higher Stokes number particles have more erosion potential, while the lower Stokes number particles have a higher potential for deposition. / Master of Science
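A common definition of the particle Stokes number in channel flows like this one, sketched with invented sand-in-air values (the abstract does not state the thesis' exact reference scales, so the pin diameter and bulk velocity below are assumptions):

```python
def stokes_number(rho_p, d_p, U, mu, D):
    """St = particle relaxation time / flow time scale
         = (rho_p * d_p**2 / (18 * mu)) / (D / U).
    Large St: particles deviate from streamlines (impact/erosion);
    small St: particles follow the flow (more likely to deposit on walls)."""
    tau_p = rho_p * d_p**2 / (18.0 * mu)  # Stokes-flow particle relaxation time
    tau_f = D / U                          # flow time scale from pin diameter
    return tau_p / tau_f

# Invented values: sand (2650 kg/m^3), 10 um diameter, 20 m/s bulk velocity,
# air viscosity 1.8e-5 Pa*s, 1 cm pin diameter
print(stokes_number(2650.0, 10e-6, 20.0, 1.8e-5, 0.01))  # ~1.6
```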
339

Insight Driven Sampling for Interactive Data Intensive Computing

Masiane, Moeti Moeklesia 24 June 2020 (has links)
Data visualization is used to help humans perceive high dimensional data, but it cannot be applied in real time to data intensive computing applications. Attempts to process and apply traditional information visualization techniques to such applications result in slow or non-responsive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data intensive applications. Sampling allows data visualization to be used as an interface between humans and the insights contained in the big data of data intensive computing. However, sampling introduces error. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data intensive application. To determine an adequate level of sampling, one can use statistical measures like standard error. However, such measures do not translate well for cases involving data visualization. Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that allows system users to make an informed decision on the level of sampling needed to speed up a data intensive application. In this work we introduce an insight-based measure for the impact of sampling on the results of visualized data. We develop a framework for the quantification of the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to provide data intensive computing users with the ability to control the amount of sampling as a function of user-provided insight requirements, and develop a prototype that utilizes our framework. This work allows users to speed up data intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of the data. Starting with a simple one-dimensional data intensive application, we apply our framework and work our way up to a more complicated computational fluid dynamics case as a proof of concept of the application of our framework and insight error feedback measure for those using sampling to speed up data intensive computing. / Doctor of Philosophy / Data visualization is used to help humans perceive high dimensional data, but it cannot be applied in real time to computing applications that generate or process vast amounts of data, also known as data intensive computing applications. Attempts to process and apply traditional information visualization techniques to such data result in slow or non-responsive data intensive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data intensive applications. Sampling allows data visualization to be used as an interface between humans and the insights contained in the big data of data intensive computing. However, sampling introduces error. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data intensive application. This error results from the possibility that a data sample could exclude valuable information that was included in the original data set. To determine an adequate level of sampling, one can use statistical measures like standard error. However, such measures do not translate well for cases involving data visualization.
Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that allows one to make an informed decision about how much sampling to use in a data intensive application, informed by how sampling impacts the insights people gain from a visualization of the sampled data. In this work we introduce an insight-based measure for the impact of sampling on the results of visualized data. We develop a framework for the quantification of the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to provide data intensive computing users with an insight-based feedback measure for each arbitrary sample size they choose for speeding up data intensive computing, and develop a prototype that utilizes our framework. Our prototype applies our framework and insight-based feedback measure to a computational fluid dynamics (CFD) case, but our work starts with a simple one-dimensional data application and works its way up to the more complicated CFD case. This work allows users to speed up data intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of the data.
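As a point of contrast for the insight-based measure, here is what a conventional standard-error calculation reports about a sample (generic statistics for illustration, not the dissertation's framework):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=50.0, scale=10.0, size=1_000_000)  # synthetic "big data"

for n in (100, 1_000, 10_000):
    sample = rng.choice(population, size=n, replace=False)
    se = sample.std(ddof=1) / np.sqrt(n)   # standard error of the sample mean
    print(n, round(sample.mean(), 2), round(se, 3))
# The standard error shrinks as n grows, but it says nothing about how a
# visualization of the sample would differ from one of the full data --
# which is the gap the insight-based measure is meant to fill.
```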
340

Computational Modeling of Total Temperature Probes

Schneider, Alex Joseph 23 February 2015 (has links)
A study is presented to explore the suitability of CFD as a tool in the design and analysis of total temperature probes. Simulations were completed for 2D axisymmetric and 3D geometries of stagnation total temperature probes using ANSYS Fluent. The geometric effects explored include comparisons of shielded and unshielded probes, the effect of leading edge curvature on near-field flow, and the influence of freestream Mach number and pressure on probe performance. Data were compared to experimental results from the literature, with freestream conditions of M = 0.3-0.9, p_t = 0.2-1 atm, and T_t = 300-1111.1 K. It is shown that 2D axisymmetric geometry is ill-suited for analyses of unshielded probes with bare-wire thermocouples because of the dependence upon accurate geometric characterization of the bare-wire thermocouples. It is also shown that shielded probes face additional challenges when modeled using 2D axisymmetric geometry, including vent area sizing inconsistencies. Analyses of shielded probes using both 2D axisymmetric and 3D geometry were able to produce aerodynamic recovery correction values similar to the experimental results from the literature. 2D axisymmetric geometry is shown to be sensitive to changes in freestream Mach number and pressure based upon the sizing of the vent geometry, as described in this report; aerodynamic recovery correction values generated by 3D geometry do not show this sensitivity and very nearly match the results from the literature. A second study was completed of a cooled, shielded total temperature probe which was designed, manufactured, and tested at Virginia Tech to characterize conduction error. The probe was designed using conventional total temperature design guidelines and modified with feedback from CFD analysis. This test case was used to validate the role of CFD in the design of total temperature probes and the fidelity of the solutions generated when compared to experimental results. A high level of agreement between CFD predictions and experimental results is shown, while the simplified, low-order model results underpredicted probe recovery. / Master of Science
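The aerodynamic recovery corrections discussed above build on the textbook recovery-factor relations for total temperature probes; a sketch with invented probe readings (standard compressible-flow relations, not the thesis' probe-specific corrections):

```python
def static_temperature(T_t, M, gamma=1.4):
    """Static temperature from total temperature at Mach M:
    T_s = T_t / (1 + (gamma - 1)/2 * M**2)."""
    return T_t / (1.0 + 0.5 * (gamma - 1.0) * M**2)

def recovery_factor(T_probe, T_t, M, gamma=1.4):
    """r = (T_probe - T_s) / (T_t - T_s): the fraction of the dynamic
    temperature rise actually recovered by the probe (r = 1 is ideal)."""
    T_s = static_temperature(T_t, M, gamma)
    return (T_probe - T_s) / (T_t - T_s)

# Invented example: freestream T_t = 1000 K at M = 0.8; probe reads 995 K
T_t, M = 1000.0, 0.8
print(static_temperature(T_t, M))      # ~887 K static temperature
print(recovery_factor(995.0, T_t, M))  # ~0.956 recovery factor
```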
