331
Application of r-Adaptation Techniques for Discretization Error Improvement in CFD. Tyson, William Conrad, 29 January 2016
Computational fluid dynamics (CFD) has proven to be an invaluable tool for both engineering design and analysis. As the performance of engineering devices becomes more reliant upon the accuracy of CFD simulations, it is necessary not only to quantify but also to reduce the numerical error present in a solution. Discretization error is often the primary source of numerical error. Discretization error is introduced locally into the solution by truncation error. Truncation error represents the higher-order terms in an infinite series which are truncated during the discretization of the continuous governing equations of a model. Discretization error can be reduced through uniform grid refinement, but this approach is often impractical for typical engineering problems. Grid adaptation provides an efficient means for improving solution accuracy without the exponential increase in computational time associated with uniform grid refinement. Solution accuracy can be improved through local grid refinement, often referred to as h-adaptation, or by node relocation in the computational domain, often referred to as r-adaptation. The goal of this work is to examine the effectiveness of several r-adaptation techniques for reducing discretization error. A framework for geometry preservation is presented, and truncation error is used to drive adaptation. Sample problems include both subsonic and supersonic inviscid flows. Discretization error reductions of up to an order of magnitude are achieved on adapted grids. / Master of Science
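To make the truncation error concrete, consider a first-order forward difference of a smooth function u(x); this is a generic textbook illustration, not the specific discretization analyzed in the thesis:

$$\frac{u(x+\Delta x)-u(x)}{\Delta x} = u'(x) + \underbrace{\frac{\Delta x}{2}u''(x) + \frac{\Delta x^2}{6}u'''(x) + \cdots}_{\text{truncation error}}$$

The truncated higher-order terms act as a local source of discretization error, and r-adaptation reduces them by clustering nodes (shrinking the local $\Delta x$) where the solution derivatives are large.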
332
Feasibility Study of a Natural Uranium Neutron Spallation Target using FLiBe as a Coolant. Boulanger, Andrew James, 08 June 2011
The research conducted was a feasibility study using Lithium Fluoride-Beryllium Fluoride (LiF-BeF2), or FLiBe, as a coolant with a natural uranium neutron spallation source applied to an accelerator-driven sub-critical molten salt reactor. The study utilized two different software tools, MCNPX 2.6 and FLUENT 12.1. MCNPX was used to determine the neutronics and the heat deposited in the spallation target structure, while FLUENT was used to determine the feasibility of cooling the target structure with FLiBe. Several target structures were analyzed using a variety of plates and large cylinders of natural uranium with a proton beam incident on a Hastelloy-N window. The supporting structures were made of Hastelloy-N because of its resistance to corrosion by molten salts such as FLiBe and its resistance to neutron damage. The final design chosen was a "Sandwich" design utilizing a section of thick plates, followed by several smaller plates, then finally a section of thick plates to stop any protons from irradiating the bottom of the target support structure or the containment vessel of the reactor. With 0.81 MW of proton beam power, at 1.35 mA and a proton kinetic energy of 600 MeV, the total heat generated in the spallation target was about 0.9 MW due to fissions in the natural uranium. Additionally, the final design of the spallation target produced approximately 1.25×10^18 neutrons per second, mainly fast neutrons. The use of a natural uranium target proved to be very promising. However, cooling the target using FLiBe would require further optimization or investigation into alternate coolants. Specifically, the final design developed using FLiBe as a coolant was not practically feasible due to the hydraulic forces resulting from the high flow rates necessary to keep the natural uranium target structures cooled. The primary reason for the lack of a feasible solution was FLiBe itself: it cannot remove the heat generated in the target from the target structure quickly enough. Due to the high energy density of a natural uranium spallation target structure, a more effective method of cooling, such as a liquid metal coolant like lead-bismuth eutectic, will be required to avoid high hydraulic forces. / Master of Science
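The quoted beam power follows directly from the beam current and proton kinetic energy; a quick sanity check (illustrative arithmetic only, independent of the thesis's MCNPX/FLUENT models):

```python
# Beam power: each proton carries one elementary charge, so a kinetic energy
# of E eV corresponds to an effective accelerating potential of E volts,
# and P = I * V.
current_A = 1.35e-3      # 1.35 mA beam current
energy_eV = 600e6        # 600 MeV proton kinetic energy
power_W = current_A * energy_eV
print(f"Beam power: {power_W / 1e6:.2f} MW")  # -> 0.81 MW, as quoted
```

The roughly 0.9 MW deposited in the target exceeds the 0.81 MW beam power because fissions in the natural uranium add energy beyond what the beam delivers.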
333
A computational study of the 3D flow and performance of a vaned radial diffuser. Akseraylian, Dikran, 18 November 2008
A computational study was performed on a vaned radial diffuser using the Moore Elliptic Flow Program (MEFP) flow code. The vaned diffuser studied by Dalbert et al. was chosen as a test case for this thesis, and the geometry and inlet conditions were established from that study. The performance of the computational diffuser was compared to the test case diffuser. The CFD analysis demonstrated the 3D flow within the diffuser.
An inlet conditions analysis was performed to establish the boundary conditions at the diffuser inlet. The given inlet flow angles were reduced in order to match the specified mass flow rate. The inlet static pressure was held constant over the height of the diffuser.
The diffuser was broken down into its subcomponents to study the effects of each component on the overall performance of the diffuser. The diffuser inlet region, which comprises the vaneless and semi-vaneless spaces, contains the greatest losses, 56%, but also the highest static pressure rise, 54%. The performance at the throat was also evaluated, and the blockage and pressure recovery were calculated.
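For reference, throat pressure recovery and blockage are conventionally defined as follows (standard diffuser definitions; the thesis's exact reference stations may differ):

$$C_p = \frac{p_2 - p_1}{p_{01} - p_1}, \qquad B = 1 - \frac{A_{\mathrm{eff}}}{A_{\mathrm{geom}}},$$

where $p_1$ and $p_{01}$ are the inlet static and total pressures, $p_2$ is the downstream static pressure, and the effective area is the portion of the geometric throat area carrying unblocked flow.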
The results show the static pressure comparison for the computational study and the test case. The overall pressure rise of the computational study was in good agreement with the measured pressure rise. The static pressure and total pressure loss distributions in the inlet region, at the throat, and in the exit region of the diffuser were also analyzed. The flow development was presented for the entire diffuser. The 3D flow calculations illustrated a leading-edge recirculation at the hub, caused by inlet skew and high losses at the hub, and showed how secondary flows in the diffuser convected these high losses.
The study presented in this thesis demonstrated the flow development in a vaned diffuser and its subcomponents. The performance was evaluated by calculating the static pressure rise, total pressure losses, and throat blockage. It also demonstrated current CFD capabilities for diffusers using steady 3D flow analysis. / Master of Science
334
Numerical Modeling of Air-Water Flows in Bubble Columns and Airlift Reactors. Studley, Allison F., 15 January 2011
Bubble columns and airlift reactors were modeled numerically to better understand the hydrodynamics and analyze the mixing characteristics for each configuration. An Eulerian-Eulerian approach was used to model air as the dispersed phase within a continuous phase of water using the commercial software FLUENT. The Schiller-Naumann drag model was employed along with virtual mass and the standard k-ε turbulence model. The equations were discretized using the QUICK scheme and solved with the SIMPLE coupling algorithm. The flow regimes of a bubble column were investigated by varying the column diameter and the inlet gas velocity using two-dimensional simulations. The typical characteristics of homogeneous, slug, and heterogeneous flow were shown by examining gas holdup. The flow field predicted using two-dimensional simulations of the airlift reactor showed a regular oscillation of the gas flow due to recirculation from the downcomer and connectors, whereas the bubble column oscillations were random and resulted in gas flow through the center of the column. The profiles of gas holdup, gas velocity, and liquid velocity showed that the airlift reactor flow was asymmetric and the bubble column flow was symmetric about the vertical axis of the column. The average gas holdup in a 10.2 cm diameter bubble column was calculated, and the results of the two-dimensional simulations at varying inlet gas velocities were similar to published experimental results. The average gas holdup in the airlift reactor for the three-dimensional simulations compared well with the experiments, while the two-dimensional simulations underpredicted the average gas holdup. / Master of Science
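The average gas holdup compared here is the volume fraction of gas in the aerated mixture; a minimal sketch of the standard estimate from column heights (a generic definition, not necessarily the post-processing used in the thesis):

```python
def average_gas_holdup(h_aerated_m: float, h_static_m: float) -> float:
    """Average gas holdup from the rise of the liquid level on aeration."""
    return (h_aerated_m - h_static_m) / h_aerated_m

# Hypothetical example: the level rises from 1.00 m to 1.15 m when sparged.
print(f"gas holdup = {average_gas_holdup(1.15, 1.00):.3f}")  # ~0.130
```

In an Eulerian-Eulerian simulation, the analogous quantity is the volume average of the computed gas volume fraction over the column.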
335
Investigation of Erosion and Deposition of Sand Particles within a Pin Fin Array. Cowan, Jonathan B., 11 December 2009
The transport of particulates within both fully developed and developing pin fin arrays is explored using computational fluid dynamics (CFD) simulations. The simulations are carried out using the LES solver GenIDLEST for the fluid (carrier) phase and a Lagrangian approach for the particle (dispersed) phase. A grid independence study and a validation case against relevant experiments are presented to lend confidence to the numerical simulations. Various Stokes numbers (0.78, 3.1, and 19.5) are explored, as well as three nondimensional particle softening temperatures (θ_ST = 0, 0.37, and 0.67). The deposition is shown to increase with decreasing particle Stokes number, and thus decreasing size, from 0.005% for St_p = 19.5 to 13.4% for St_p = 0.78, and is almost completely concentrated on the channel walls (99.6%-100%). The erosion potential is shown to increase with Stokes number and is highest on the pin faces. As is to be expected, the deposition increases with decreasing softening temperature, from 13.4% at θ_ST = 0.67 to 79% at θ_ST = 0. Overall, the channel walls of the array show the greatest potential for deposition, while the pin faces show the greatest potential for erosion. Similarly, the higher Stokes number particles have more erosion potential, while the lower Stokes number particles have a higher potential for deposition. / Master of Science
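The particle Stokes number driving these trends is conventionally the ratio of the particle response time to a flow time scale (a generic definition; the thesis's choice of reference scales is its own):

$$St_p = \frac{\tau_p}{\tau_f}, \qquad \tau_p = \frac{\rho_p d_p^2}{18\mu},$$

so smaller (low-St_p) particles relax quickly to the local flow, are carried into near-wall regions, and deposit, while larger (high-St_p) particles deviate from streamlines and strike the pin faces with enough inertia to erode them.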
336
Insight Driven Sampling for Interactive Data Intensive Computing. Masiane, Moeti Moeklesia, 24 June 2020
Data visualization is used to help humans perceive high-dimensional data, but it cannot be applied in real time to data-intensive computing applications. Attempts to process and apply traditional information visualization techniques to such applications result in slow or non-responsive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data-intensive applications. Sampling allows data visualization to be used as an interface between humans and the insights contained in the big data of data-intensive computing. However, sampling introduces error. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data-intensive application. To determine an adequate level of sampling, one can use statistical measures like standard error. However, such measures do not translate well to cases involving data visualization. Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that allows system users to make an informed decision on the level of sampling needed to speed up a data-intensive application. In this work we introduce an insight-based measure of the impact of sampling on the results of visualized data. We develop a framework for quantifying the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to provide data-intensive computing users with the ability to control the amount of sampling as a function of user-provided insight requirements, and develop a prototype that utilizes our framework. This work allows users to speed up data-intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of the data. Starting with a simple one-dimensional data-intensive application, we apply our framework and work our way up to a more complicated computational fluid dynamics case as a proof of concept of the application of our framework and insight-based error feedback measure for those using sampling to speed up data-intensive computing. / Doctor of Philosophy / Data visualization is used to help humans perceive high-dimensional data, but it cannot be applied in real time to computing applications that generate or process vast amounts of data, also known as data-intensive computing applications. Attempts to process and apply traditional information visualization techniques to such data result in slow or non-responsive data-intensive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data-intensive applications. Sampling allows data visualization to be used as an interface between humans and the insights contained in the big data of data-intensive computing. However, sampling introduces error, because a sample may exclude valuable information that was included in the original data set. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data-intensive application. To determine an adequate level of sampling, one can use statistical measures like standard error. However, such measures do not translate well to cases involving data visualization.
Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that lets one make an informed decision about how much sampling to use in a data-intensive application, based on knowing how sampling impacts the insights people gain from a visualization of the sampled data. In this work we introduce an insight-based measure of the impact of sampling on the results of visualized data. We develop a framework for quantifying the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to provide data-intensive computing users with an insight-based feedback measure for each arbitrary sample size they choose for speeding up data-intensive computing, and develop a prototype that utilizes our framework. Our prototype applies our framework and insight-based feedback measure to a computational fluid dynamics (CFD) case, but our work starts with a simple one-dimensional data application and works its way up to the more complicated CFD case. This work allows users to speed up data-intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of the data.
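For contrast with the insight-based measure, here is the purely statistical measure the abstract mentions; a minimal sketch showing how the standard error of a sample mean shrinks with sample size while saying nothing about a visualization of the sample (illustrative only, not the thesis's method):

```python
import math
import random

random.seed(0)
# Stand-in for a large data set in a data-intensive application.
population = [random.gauss(50.0, 10.0) for _ in range(100_000)]

for n in (100, 10_000):
    sample = random.sample(population, n)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    se = math.sqrt(var / n)  # standard error of the sample mean
    print(f"n={n:>6}: mean={mean:6.2f}, standard error={se:.3f}")

# The standard error falls as n grows, but it cannot say how a chart of the
# sample differs from a chart of the full data set; that gap is what an
# insight-based measure is meant to fill.
```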
337
Computational Modeling of Total Temperature Probes. Schneider, Alex Joseph, 23 February 2015
A study is presented to explore the suitability of CFD as a tool in the design and analysis of total temperature probes. Simulations of stagnation total temperature probes were completed using 2D axisymmetric and 3D geometries in ANSYS Fluent. The geometric effects explored include comparisons of shielded and unshielded probes, the effect of leading edge curvature on near-field flow, and the influence of freestream Mach number and pressure on probe performance. Data were compared to experimental results from the literature, with freestream conditions of M = 0.3-0.9, p_t = 0.2-1 atm, and T_t = 300-1111.1 K.
It is shown that 2D axisymmetric geometry is ill-suited for analyses of unshielded probes with bare-wire thermocouples, because such analyses depend upon accurate geometric characterization of the bare-wire thermocouples. It is also shown that shielded probes face additional challenges when modeled using 2D axisymmetric geometry, including vent area sizing inconsistencies.
Analyses of shielded probes using both 2D axisymmetric and 3D geometries were able to produce aerodynamic recovery correction values similar to the experimental results from the literature. 2D axisymmetric geometry is shown to be sensitive to changes in freestream Mach number and pressure through the sizing of the vent geometry, as described in this report. Aerodynamic recovery correction values generated with 3D geometry do not show this sensitivity and very nearly match the results from the literature.
A second study was completed of a cooled, shielded total temperature probe which was designed, manufactured, and tested at Virginia Tech to characterize conduction error. The probe was designed using conventional total temperature design guidelines and modified with feedback from CFD analysis. This test case was used to validate the role of CFD in the design of total temperature probes and the fidelity of the solutions generated when compared to experimental results. A high level of agreement between CFD predictions and experimental results is shown, while simplified, low-order model results under-predicted probe recovery. / Master of Science
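For context, the quantity these probes are built to measure is related to the static temperature by the standard adiabatic compressible-flow relation (a textbook result, not specific to this thesis):

$$T_t = T_s\left(1 + \frac{\gamma - 1}{2}M^2\right),$$

so at the upper end of the freestream conditions above (M = 0.9, with γ = 1.4 for air) the total temperature exceeds the static temperature by roughly 16%, which is why imperfect recovery at the sensing element translates into significant measurement error.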
338
CFD analysis of airflow patterns and heat transfer in small, medium, and large structures. Detaranto, Michael Francis, 05 November 2014
Designing buildings to use energy more efficiently can lead to lower energy costs while maintaining comfort for occupants. Computational fluid dynamics (CFD) can be utilized to visualize and simulate expected flows in buildings and structures. CFD gives architects and designers the ability to calculate the velocity, pressure, and heat transfer within a building. Previous research has not modeled natural ventilation situations that challenge common design rules of thumb used for cross-ventilation and single-sided ventilation. The current study uses a commercial code (FLUENT) to simulate cross-ventilation in simple structures and analyzes the flow patterns and heat transfer in the rooms. In the Casa Giuliana apartment and the Affleck house, this study simulates passive cooling in spaces well-designed for natural ventilation. Heat loads, human models, and electronics are included in the apartment to expand on prior research into natural ventilation in a full-scale building. Two different cases were simulated: the first had a volume flow rate similar to the ambient conditions, while the second had a much lower flow rate corresponding to an air changes per hour (ACH) value of 5, near the minimum recommended value. Passive cooling in the Affleck house is simulated using an unorthodox ventilation method: a window in the floor that opens to an exterior basement is opened, along with windows and doors on the main floor, to create a pressure difference. In the Affleck house, two different combinations of window and door openings are simulated to model different scenarios. Temperature contours, flow patterns, and the ACH are explored to analyze the ventilation of these structures. / Master of Science
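The ACH figure quoted above is simply the supplied air volume per hour divided by the room volume; a minimal sketch with hypothetical room dimensions (not the actual geometry of either building):

```python
def air_changes_per_hour(flow_m3_per_s: float, room_volume_m3: float) -> float:
    """ACH: how many times per hour the airflow replaces the room's air."""
    return flow_m3_per_s * 3600.0 / room_volume_m3

# Hypothetical 4 m x 5 m x 2.5 m room (50 m^3) ventilated at 0.07 m^3/s:
print(f"ACH = {air_changes_per_hour(0.07, 50.0):.1f}")  # ~5.0
```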
339
Computational Modeling of Radiation Effects on Total Temperature Probes. Reardon, Jonathan Paul, 29 January 2016
The requirement for accurate total temperature measurements in gaseous flows was first recognized many years ago by engineers working on the development of superchargers and combustion diagnostics. A standard temperature sensor for high-temperature applications was, and remains, the thermocouple. However, this sensor is characterized by errors due to conduction heat transfer from the sensing element, as well as errors associated with the flow over it. In particular, in high-temperature flows, the sensing element of the thermocouple will be much hotter than its surroundings, leading to radiation heat losses, which in turn lead to large errors in the temperature indicated by the thermocouple. Because the design and testing of thermocouple sensors can be time-consuming and costly, owing to the many parameters that can be varied, and because of the high level of detail attainable from computational studies, advanced computational simulations are ideally suited to the study of thermocouple performance.
This work sought to investigate the errors associated with the use of total temperature thermocouple probes and to assess the ability to predict the performance of such probes using coupled fluid-heat transfer simulations. This was done for a wide range of flow temperatures and subsonic velocities. Simulations were undertaken for three total temperature thermocouple probe designs. The first two probes were legacy probes developed by Glawe, Simmons, and Stickney in the 1950s and were used as a validation case, since these probes were extensively documented in a National Advisory Committee for Aeronautics (NACA) technical report. The third probe, developed at Virginia Tech, was used to investigate conduction error experimentally. In all cases, the results of the computational simulations were compared to the experimental results to assess their applicability. For the legacy NACA probes, it was shown that the predicted radiation correction compared well with the documented values, which served as a validation of the computational method. The procedure was then extended to the conduction error case, where the recovery factor, a metric used to relate the total temperature of the flow to the total temperature indicated by the sensor, was compared. Good agreement with the experimental results was found. The effects of radiation were quantified and shown to be small. It was also demonstrated that computational simulations can be used to obtain quantities that are not easily measured experimentally; specifically, the heat transfer coefficients and the flow through the vented shield were investigated. The heat transfer coefficients were tabulated as Nusselt numbers and compared to a legacy correlation. It was found that although the legacy correlation under-predicted the Nusselt number, the predicted results did follow the same trend, so a new correlation of the same functional form was suggested. Finally, it was found that the mounting strut had a large effect on the internal flow patterns and therefore on the heat transfer to the thermocouple. Overall, this work highlights the usefulness of computational simulations in the design and analysis of total temperature thermocouple sensors. / Master of Science
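The radiation error discussed here can be seen from a steady-state energy balance on the thermocouple junction; in its simplest textbook form (neglecting conduction along the leads, and far simpler than the coupled CFD model used in the thesis):

$$h\left(T_t - T_j\right) = \varepsilon\sigma\left(T_j^4 - T_{\mathrm{surr}}^4\right),$$

where $T_j$ is the junction temperature, $h$ the convective coefficient, $\varepsilon$ the junction emissivity, and $\sigma$ the Stefan-Boltzmann constant. The indicated temperature falls below the flow total temperature by an amount that grows rapidly with junction temperature and shrinks with $h$, which is why the Nusselt number correlations mentioned above matter for probe design.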
340
Exploring Alternative Designs for Solar Chimneys using Computational Fluid Dynamics. Heisler, Elizabeth Marie, 08 October 2014
Solar chimney power plants use the buoyant nature of heated air to harness the Sun's energy without using solar panels. The flow is driven by a pressure difference in the chimney system, so traditional chimneys are extremely tall to increase the pressure differential and the air's velocity. Computational fluid dynamics (CFD) was used to model the airflow through a solar chimney. Different boundary conditions were tested to find the model that best simulated the night-time operation of a solar chimney assumed to be in sub-Saharan Africa. At night, the air is heated as the energy stored in the ground during the day disperses into the cooler air. For FLUENT to correctly calculate the heat transfer between the ground and the air, the solar chimney's layer of thermal storage must be modeled as a porous material. The solar collector needs radiative and convective boundary conditions to accurately simulate the night-time heat transfer on the collector, and the Discrete Ordinates radiation model must be employed to correctly calculate the heat transfer in the system. Different chimney configurations were studied with the hope of designing a shorter solar chimney without decreasing the airflow through the system. Clusters of four and five shorter chimneys decreased the air's maximum velocity through the system but increased the total flow rate. Passive advection wells were added to the thermal storage and analyzed as a way to increase the heat transfer from the ground to the air. / Master of Science
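The pressure difference that drives the chimney flow is the familiar stack effect; for a chimney of height H, an idealized textbook estimate (not the CFD model itself) is:

$$\Delta p \approx \rho_\infty\, g\, H\, \frac{T_{\mathrm{in}} - T_\infty}{T_{\mathrm{in}}},$$

where $T_{\mathrm{in}}$ is the heated air temperature inside the chimney and $T_\infty$, $\rho_\infty$ are the ambient temperature and density. The linear dependence on H shows why conventional designs are so tall, and why shorter clustered-chimney configurations must recover flow rate through other means, such as the larger total flow area of a cluster.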