361 |
Insight Driven Sampling for Interactive Data Intensive ComputingMasiane, Moeti Moeklesia 24 June 2020 (has links)
Data visualization is used to help humans perceive high-dimensional data, but it cannot be applied in real time to data intensive computing applications. Attempts to process and apply traditional information visualization techniques to such applications result in slow or non-responsive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data intensive applications. Sampling allows data visualization to be used as an interface between humans and insights contained in the big data of data intensive computing. However, sampling introduces error. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data intensive application. To determine an adequate level of sampling one can use statistical measures like standard error. However, such measures do not translate well to cases involving data visualization. Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that allows system users to make an informed decision on the level of sampling needed to speed up a data intensive application. In this work we introduce an insight based measure of the impact of sampling on the results of visualized data. We develop a framework for quantifying the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to give data intensive computing users the ability to control the amount of sampling as a function of user provided insight requirements, and develop a prototype that utilizes our framework. This work allows users to speed up data intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of the data. 
Starting with a simple one dimensional data intensive application, we apply our framework and work our way to a more complicated computational fluid dynamics case as a proof of concept of the application of our framework and insight error feedback measure for those using sampling to speed up data intensive computing. / Doctor of Philosophy / Data visualization is used to help humans perceive high-dimensional data, but it cannot be applied in real time to computing applications that generate or process vast amounts of data, also known as data intensive computing applications. Attempts to process and apply traditional information visualization techniques to such data result in slow or non-responsive data intensive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data intensive applications. Sampling allows data visualization to be used as an interface between humans and insights contained in the big data of data intensive computing. However, sampling introduces error. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data intensive application. This error results from the possibility that a data sample could exclude valuable information that was included in the original data set. To determine an adequate level of sampling one can use statistical measures like standard error. However, such measures do not translate well to cases involving data visualization. Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that allows one to make an informed decision about how much sampling to use in a data intensive application, by knowing how sampling affects the insights people gain from a visualization of the sampled data. 
In this work we introduce an insight based measure for the impact of sampling on the results of visualized data. We develop a framework for the quantification of the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to provide data intensive computing users with an insight based feedback measure for each arbitrary sample size they choose for speeding up data intensive computing, and we develop a prototype that utilizes our framework. Our prototype applies our framework and insight based feedback measure to a computational fluid dynamics (CFD) case, but our work starts off with a simple one dimensional data application and works its way up to the more complicated CFD case. This work allows users to speed up data intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of this data.
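The relationship between insight level and sample size that the framework models can be sketched as a simple invertible curve. Everything below is illustrative: the saturating functional form and its parameters are assumptions for demonstration, not the thesis's actual model.

```python
import math

def insight_level(sample_frac, a=1.0, b=8.0):
    """Hypothetical saturating model: insight rises quickly with the
    sample fraction, then plateaus. Parameters a and b are illustrative."""
    return a * (1.0 - math.exp(-b * sample_frac))

def min_sample_fraction(required_insight, a=1.0, b=8.0):
    """Invert the model: the smallest sample fraction whose predicted
    insight meets a user-provided insight requirement."""
    if required_insight >= a:
        return 1.0  # only the full data set can meet (or approach) this
    return -math.log(1.0 - required_insight / a) / b

# A user asking for 90% of the full-data insight level:
print(round(min_sample_fraction(0.9), 3))  # → 0.288
```

Inverting a fitted insight model in this way is what lets a user trade speedup against insight loss explicitly, rather than guessing from a standard-error figure.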
|
362 |
Computational Modeling of Total Temperature ProbesSchneider, Alex Joseph 23 February 2015 (has links)
A study is presented to explore the suitability of CFD as a tool in the design and analysis of total temperature probes. Simulations were completed using 2D axisymmetric and 3D geometry of stagnation total temperature probes using ANSYS Fluent. The geometric effects explored include comparisons of shielded and unshielded probes, the effect of leading edge curvature on near-field flow, and the influence of freestream Mach number and pressure on probe performance. Data were compared to experimental results from the literature, with freestream conditions of M=0.3-0.9, p_t=0.2-1 atm, T_t=300-1111.1 K.
It is shown that 2D axisymmetric geometry is ill-suited for analyses of unshielded probes with bare-wire thermocouples, because such analyses depend on accurate geometric characterization of the bare-wire sensing element. It is also shown that shielded probes face additional challenges when modeled using 2D axisymmetric geometry, including vent area sizing inconsistencies.
Analyses of shielded probes using both 2D axisymmetric and 3D geometry produced aerodynamic recovery correction values similar to the experimental results from the literature. The 2D axisymmetric geometry is shown to be sensitive to changes in freestream Mach number and pressure, depending on the sizing of the vent geometry described in this report. Aerodynamic recovery correction values generated by the 3D geometry do not show this sensitivity and very nearly match the results from the literature.
A second study was completed of a cooled, shielded total temperature probe which was designed, manufactured, and tested at Virginia Tech to characterize conduction error. The probe was designed utilizing conventional total temperature design guidelines and modified with feedback from CFD analysis. This test case was used to validate the role of CFD in the design of total temperature probes and the fidelity of the solutions generated when compared to experimental results. A high level of agreement between CFD predictions and experimental results is shown, while the simplified, low-order model under-predicted probe recovery. / Master of Science
|
363 |
CFD analysis of airflow patterns and heat transfer in small, medium, and large structuresDetaranto, Michael Francis 05 November 2014 (has links)
Designing buildings to use energy more efficiently can lead to lower energy costs while maintaining comfort for occupants. Computational fluid dynamics (CFD) can be utilized to visualize and simulate expected flows in buildings and structures, giving architects and designers the ability to calculate the velocity, pressure, and heat transfer within a building. Previous research has not modeled natural ventilation situations that challenge common design rules of thumb used for cross-ventilation and single-sided ventilation. The current study uses a commercial code (FLUENT) to simulate cross-ventilation in simple structures and analyzes the flow patterns and heat transfer in the rooms. In the Casa Giuliana apartment and the Affleck house, this study simulates passive cooling in spaces well-designed for natural ventilation. Heat loads, human models, and electronics are included in the apartment to expand on prior research into natural ventilation in a full-scale building. Two different cases were simulated: the first had a volume flow rate similar to the ambient conditions, while the second had a much lower flow rate with an air changes per hour (ACH) value of 5, near the minimum recommended value. Passive cooling in the Affleck house is simulated using an unorthodox ventilation method: a window in the floor that opens to an exterior basement is opened along with windows and doors of the main floor to create a pressure difference. In the Affleck house, two different combinations of window and door openings are simulated to model different scenarios. Temperature contours, flow patterns, and the ACH are explored to analyze the ventilation of these structures. / Master of Science
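The ACH metric in the abstract above is a simple ratio; a minimal sketch, where the room volume and flow rate are illustrative numbers rather than values from the study:

```python
def air_changes_per_hour(volume_flow_m3_s, room_volume_m3):
    """ACH = volumetric airflow delivered per hour, divided by room volume."""
    return volume_flow_m3_s * 3600.0 / room_volume_m3

# Illustrative: a 250 m^3 space ventilated at 0.347 m^3/s gives roughly
# the ACH-of-5 case described above.
print(round(air_changes_per_hour(0.347, 250.0), 1))  # → 5.0
```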
|
364 |
Computational Modeling of Radiation Effects on Total Temperature ProbesReardon, Jonathan Paul 29 January 2016 (has links)
The requirement for accurate total temperature measurements in gaseous flows was first recognized many years ago by engineers working on the development of superchargers and combustion diagnostics. A standard temperature sensor for high temperature applications was, and remains, the thermocouple. However, this sensor is characterized by errors due to conduction heat transfer from the sensing element, as well as errors associated with the flow over it. In particular, in high temperature flows, the sensing element of the thermocouple will be much hotter than its surroundings, leading to radiation heat losses. This in turn will lead to large errors in the temperature indicated by the thermocouple. Because the design and testing of thermocouple sensors can be time consuming and costly due to the many parameters that can be varied, and because of the high level of detail attainable from computational studies, advanced computational simulations are ideally suited to the study of thermocouple performance.
This work sought to investigate the errors associated with the use of total temperature thermocouple probes and to assess the ability to predict the performance of such probes using coupled fluid-heat transfer simulations. This was done for a wide range of flow temperatures and subsonic velocities. Simulations were undertaken for three total temperature thermocouple probe designs. The first two probes were legacy probes developed by Glawe, Simmons, and Stickney in the 1950s and were used as a validation case, since these probes were extensively documented in a National Advisory Committee for Aeronautics (NACA) technical report. The third probe, developed at Virginia Tech, was used to investigate conduction errors experimentally. In all cases, the results of the computational simulations were compared to the experimental results to assess their applicability. In the case of the legacy NACA probes, it was shown that the predicted radiation correction compared well with the documented values. This served as a validation of the computational method. Next, the procedure was extended to the conduction error case, where the recovery factor, a metric used to relate the total temperature of the flow to the total temperature indicated by the sensor, was compared. Good agreement with the experimental results was found. The effects of radiation were quantified and shown to be small. It was also demonstrated that computational simulations can be used to obtain quantities that are not easily measured experimentally. Specifically, the heat transfer coefficients and the flow through the vented shield were investigated. The heat transfer coefficients were tabulated as Nusselt numbers and were compared to a legacy correlation. It was found that although the legacy correlation under-predicted the Nusselt number, the predicted results did follow the same trend. A new correlation of the same functional form was therefore suggested. 
Finally, it was found that the mounting strut had a large effect on the internal flow patterns and therefore the heat transfer to the thermocouple. Overall, this work highlights the usefulness of computational simulations in the design and analysis of total temperature thermocouple sensors. / Master of Science
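The kind of power-law correlation fit described above (Nusselt number as a function of Reynolds number) can be sketched with a log-space least-squares fit. The data points and coefficients below are synthetic stand-ins, not the thesis's CFD values or the legacy correlation's actual constants.

```python
import numpy as np

# Hypothetical (Re, Nu) pairs standing in for CFD-derived values; the
# functional form Nu = C * Re^m matches the legacy correlation's shape.
re = np.array([200.0, 500.0, 1000.0, 2000.0, 5000.0])
nu = 0.5 * re**0.5  # synthetic data generated from assumed C=0.5, m=0.5

# A linear least-squares fit in log space recovers C and m of Nu = C*Re^m.
m, log_c = np.polyfit(np.log(re), np.log(nu), 1)
c = np.exp(log_c)
print(round(c, 3), round(m, 3))  # → 0.5 0.5
```

Fitting in log space is the standard way to propose "a new correlation of the same functional form" from simulation data.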
|
365 |
Exploring Alternative Designs for Solar Chimneys using Computational Fluid DynamicsHeisler, Elizabeth Marie 08 October 2014 (has links)
Solar chimney power plants use the buoyant nature of heated air to harness the Sun's energy without using solar panels. The flow is driven by a pressure difference in the chimney system, so traditional chimneys are extremely tall to increase the pressure differential and the air's velocity. Computational fluid dynamics (CFD) was used to model the airflow through a solar chimney. Different boundary conditions were tested to find the model that best simulated the night-time operation of a solar chimney assumed to be in sub-Saharan Africa. At night, the air is heated by energy that was stored in the ground during the day dispersing into the cooler air. It is necessary to model the solar chimney's layer of thermal storage as a porous material for FLUENT to correctly calculate the heat transfer between the ground and the air. The solar collector needs radiative and convective boundary conditions to accurately simulate the night-time heat transfer on the collector, and it is necessary to employ the Discrete Ordinates radiation model to correctly calculate the heat transfer in the system. Different chimney configurations were studied with the hope of designing a shorter solar chimney without decreasing the amount of airflow through the system. Clusters of four and five shorter chimneys decreased the air's maximum velocity through the system but increased the total flow rate. Passive advection wells were added to the thermal storage and were analyzed as a way to increase the heat transfer from the ground to the air. / Master of Science
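The buoyancy-driven pressure difference that motivates tall chimneys can be estimated with the standard stack-effect relation; a minimal sketch, with illustrative numbers that are not taken from the study:

```python
def stack_pressure_pa(height_m, t_inside_k, t_outside_k,
                      rho_out=1.2, g=9.81):
    """Buoyancy (stack) driving pressure for a chimney of given height:
    delta_p ≈ rho_out * g * h * (T_in - T_out) / T_in."""
    return rho_out * g * height_m * (t_inside_k - t_outside_k) / t_inside_k

# Illustrative: a 100 m chimney with air 20 K warmer inside than outside
# develops roughly 74 Pa of driving pressure.
print(stack_pressure_pa(100.0, 320.0, 300.0))
```

The linear dependence on height is exactly why shortening the chimney (the design goal above) costs driving pressure, which the clustered-chimney configurations try to compensate for.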
|
366 |
Hydrodynamic Design of Highly Loaded Torque-neutral Ducted Propulsor for Autonomous Underwater VehiclesPawar, Suraj Arun 24 January 2019 (has links)
A design method for a marine propulsor (propeller/stator) is presented for an autonomous underwater vehicle (AUV) that operates at a very high loading condition. The design method is applied to the Virginia Tech Dragon AUV. It is based on a parametric geometry definition for the propulsor, use of a high-fidelity CFD RANSE solver with a transition model, construction of a surrogate model, and a multi-objective genetic optimization algorithm. The CFD model is validated using paint pattern visualization on the surface of an open propeller at model scale. The CFD model is then applied to study the hydrodynamics of ducted propellers, such as forces and moments, the tip leakage vortex, leading-edge flow separation, and the counter-rotating vortices formed at the duct trailing edge. The effect of varying the thickness of the stator blades and different approaches for modeling the post-swirl stator are presented. The field trials for the Dragon AUV show a good correlation between expected and achieved design speed under tow with the designed base propulsor. The marine propulsor design is further improved with the objectives of maximizing propulsive efficiency and minimizing rolling of the AUV. The stator is found to largely eliminate the swirl component of velocity present in the wake of the propeller. The propulsor designed using this method (surrogate-based optimization) is demonstrated to have improved torque balance characteristics with a slight improvement in efficiency over the base propulsor design. / Master of Science / The propulsion system is a critical design element for an AUV, especially one towing a large payload. The propulsor for a towing AUV has to provide a very large thrust, and hence the propulsor is highly loaded. The propeller has to rotate at very high speed to produce the required thrust and is likely to cavitate at this high speed. 
Also, at this high loading condition the maximum ideal efficiency of the propulsor is quite low. Another challenge is the torque induced on the AUV by the propeller, which can cause an undesirable rolling motion. This problem can be addressed by installing a stator behind the propeller that produces torque in the direction opposite to the propeller torque. In this work, we present a design methodology for a marine propulsor (propeller/stator) that can be used on an AUV towing a large payload. The propulsor designed using this method has improved torque characteristics and an efficiency close to 80% of the ideal efficiency of a ducted propeller at that loading condition.
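The drop in maximum ideal efficiency at high loading follows from actuator-disk (momentum) theory; a minimal sketch, where the thrust coefficient values are illustrative rather than the Dragon AUV's actual loading:

```python
import math

def ideal_efficiency(thrust_coeff):
    """Actuator-disk (momentum theory) ideal efficiency of a propulsor:
    eta = 2 / (1 + sqrt(1 + C_T)). Efficiency falls as loading rises."""
    return 2.0 / (1.0 + math.sqrt(1.0 + thrust_coeff))

print(round(ideal_efficiency(0.5), 3))  # lightly loaded → 0.899
print(round(ideal_efficiency(8.0), 3))  # heavily loaded → 0.5
```

Even a perfect propulsor cannot exceed this bound, which is why the design above targets a fraction of the ideal efficiency at the given loading rather than an absolute figure.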
|
367 |
Turbulent Simulations of a Buoyant Jet-in-CrossflowMartin, Christian Tyler 08 January 2020 (has links)
A lack of complex analysis for a thermally buoyant jet in a stratified crossflow has motivated the studies presented. A computational approach using the incompressible Navier-Stokes equations (NSE) under the Boussinesq approximation is utilized. Temperature and salinity scalar transport equations are utilized in conjunction with a linear equation of state (EOS) to obtain the density field and thus the buoyancy forcing. Comparing simulation data to experimental data of a point heat source in a stratified environment shows general agreement between the aforementioned computational model and the physics studied. From the literature surveyed, no unified agreement was found on the selection of turbulence models for the jet-in-crossflow (JICF) problem. For this reason, a comparison is presented between a standard Reynolds-Averaged Navier-Stokes (RANS) and a hybrid Reynolds-Averaged Navier-Stokes/large eddy simulation (HRLES) turbulence model. The mathematical differences are outlined, as well as the implications each model has for solving a buoyant jet in stratified crossflow. The RANS model provides a general over-prediction of all flow quantities when compared to the HRLES model. Studies involving the removal of the thermal component inside the jet as well as varying the environmental stratification strength have largely determined that these effects do not alter the near-field in any significant way, at least for a high Reynolds number JICF. The velocity ratio of the jet is defined as the ratio of the jet velocity to the free-stream flow velocity. Deviating from a velocity ratio of one has provided information on the variability of the forcing on the plate the jet exits from, as well as on the integrated energy quantities far downstream of the jet's exit. The departures presented here show that any deviation from the unity value produces an increase in the overall forces seen by the plate. 
It was also found that the change in the integrated potential and turbulent kinetic energies is proportional to the deviation from a unity velocity ratio. / Master of Science / A lack of complex analysis for a heated jet in a non-uniform crossflow has motivated the studies presented. A computational approach for the fluid dynamics governing equations under specific assumptions is implemented. Additional equations are solved for temperature and salinity in conjunction with a linear equation of state to obtain the density field. Comparing simulations to experimental data of a point heat source in a non-uniform fluid tank shows general agreement between the aforementioned computational model and the physics studied. Surveying the literature yields no unified agreement on the selection of turbulence treatment for the jet-in-crossflow problem. For this reason, a comparison is presented between two techniques of differing complexity. The mathematical differences, as well as the implications of each model, are outlined, specifically pertaining to a heated jet in a non-uniform crossflow. The simpler model provides a general over-prediction when compared to the more complex model. Studies involving the removal of the heat from inside the jet as well as varying the environmental forcing have largely determined that these effects do not alter the flow field near the jet's origin in any significant way. Changing the jet's velocity has provided information on the variability of the forcing on the plate the jet exits from, as well as on the energy released into the environment far downstream of the jet's exit. The ratios presented show that any deviation from a notional value produces an increase in the overall forces seen by the plate. It was also found that the change in the released energies is proportional to the deviation from the notional jet velocity.
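The linear equation of state used above to recover density from temperature and salinity can be sketched as follows; the reference state and expansion coefficients are generic illustrative values, not those used in the thesis:

```python
def density_linear_eos(t_c, s_psu, rho0=1027.0, t0=10.0, s0=35.0,
                       alpha=1.7e-4, beta=7.6e-4):
    """Linear EOS: rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0)).
    Density falls with warming and rises with added salinity; the
    reference density, temperature, salinity, and coefficients here
    are illustrative seawater-like numbers."""
    return rho0 * (1.0 - alpha * (t_c - t0) + beta * (s_psu - s0))

# Warming the reference water by 10 degrees lowers its density slightly:
print(round(density_linear_eos(20.0, 35.0), 3))  # → 1025.254
```

Feeding this density field into the Boussinesq buoyancy term is what couples the temperature and salinity transport equations back into the momentum equations.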
|
368 |
Computational Fluid Dynamics Analysis in Support of the NASA/Virginia Tech Benchmark ExperimentsBeardsley, Colton Tack 23 June 2020 (has links)
Computational fluid dynamics methods have seen an increasing role in aerodynamic analysis since their first implementation. However, there are several major limitations in these methods of analysis, especially in the area of modeling separated flow. There exists a large demand for high-fidelity experimental data for turbulence model validation. Virginia Tech has joined NASA in a cooperative project to design and perform an experiment in the Virginia Tech Stability Wind Tunnel with the purpose of providing a benchmark set of data for the turbulence modeling community for the flow over a three-dimensional bump. This process requires thorough risk mitigation and analysis of potential flow sensitivities. The current study investigates several aspects of the experimental design through the use of several computational fluid dynamics codes.
An emphasis is given to boundary condition matching and uncertainty quantification, as well as sensitivities of the flow features to Reynolds number and inflow conditions. Solutions are computed for two different RANS turbulence models, using two different finite-volume CFD codes. Boundary layer inflow parameters are studied, as well as the pressure and skin friction distributions on the bump surface. The shape and extent of separation are compared across the various solutions. Pressure distributions are compared to available experimental data for two different Reynolds numbers. / Master of Science / Computational fluid dynamics (CFD) methods have seen an increasing role in engineering analysis since their first implementation. However, there are several major limitations in these methods of analysis, especially in modeling several common aerodynamic phenomena such as flow separation. This motivates the need for high-fidelity experimental data to be used for validating computational models. This study is meant to support the design of an experiment being cooperatively developed by NASA and Virginia Tech to provide validation data for turbulence modeling. Computational tools can be used in the experimental design process to mitigate potential experimental risks, investigate flow sensitivities, and inform decisions about instrumentation. Here, we use CFD solutions to identify risks associated with the current experimental design and investigate their sensitivity to incoming flow conditions and Reynolds number. Numerical error estimation and uncertainty quantification are performed. A method for matching experimental inflow conditions is proposed, validated, and implemented. CFD data are also compared to experimental data, and comparisons are made between different models and solvers.
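The numerical error estimation mentioned above is commonly done with Richardson extrapolation across systematically refined grids; a minimal sketch, where the three solution values are made up for illustration and are not from this study:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Observed order of accuracy from three grid levels with a constant
    refinement ratio r (standard Richardson-extrapolation estimate)."""
    return (math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine))
            / math.log(r))

def richardson_extrapolate(f_medium, f_fine, p, r=2.0):
    """Extrapolated 'grid-converged' value from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

# Illustrative: solutions 1.16, 1.04, 1.01 on coarse/medium/fine grids
print(round(observed_order(1.16, 1.04, 1.01), 3))        # → 2.0
print(round(richardson_extrapolate(1.04, 1.01, 2.0), 3))  # → 1.0
```

The gap between the fine-grid solution and the extrapolated value then serves as the discretization-error estimate that feeds the uncertainty quantification.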
|
369 |
A Microscopic Continuum Model of a Proton Exchange Membrane Fuel Cell Electrode Catalyst LayerArmstrong, Kenneth Weber 14 October 2004 (has links)
A series of steady-state microscopic continuum models of the cathode catalyst layer (active layer) of a proton exchange membrane fuel cell is developed and presented. These models incorporate O₂ species and ion transport while taking a discrete look at the platinum particles within the active layer. The original 2-dimensional axisymmetric Thin Film and Agglomerate Models of Bultel, Ozil, and Durand [8] were initially implemented, validated, and used to generate various results relating the performance of the active layer to changes in the thermodynamic conditions and geometry. The Agglomerate Model was then further developed, implemented, and validated to include, among other things, pores, flooding, and both humidified air and humidified O₂. All models were implemented and solved using FEMAP™ and a computational fluid dynamics (CFD) solver developed by Blue Ridge Numerics Inc. (BRNI) called CFDesign™. The use of these models for the discrete modeling of platinum particles is shown to be beneficial for understanding the behavior of a fuel cell. The addition of gas pores is shown to promote high current densities due to increased species transport throughout the agglomerate. Flooding is considered, and its effect on the cathode active layer is evaluated. The models take various transport and electrochemical kinetic parameter values from the literature in order to perform a parametric study showing the degree to which temperature, pressure, and geometry are crucial to overall performance. This parametric study quantifies, among other things, the degree to which lower porosities for thick active layers and higher porosities for thin active layers are advantageous to fuel cell performance. Cathode active layer performance is shown to be a function not solely of catalyst surface area but also of discrete catalyst placement within the agglomerate. / Master of Science
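The electrochemical kinetics at the heart of such cathode models are often approximated with a Tafel expression; a hedged sketch, where the exchange current density, transfer coefficient, and operating temperature are generic illustrative values rather than parameters from this work:

```python
import math

def tafel_current_density(overpotential_v, i0=1e-6, alpha=0.5,
                          t_k=353.15, f=96485.0, r=8.314):
    """Tafel approximation of cathode reaction kinetics:
    i = i0 * exp(alpha * F * eta / (R * T)).
    i0 (exchange current density, A/cm^2), alpha (transfer coefficient),
    and T are illustrative placeholder values."""
    return i0 * math.exp(alpha * f * overpotential_v / (r * t_k))

# Current density rises exponentially with activation overpotential:
print(tafel_current_density(0.2) < tafel_current_density(0.3))  # → True
```

In an agglomerate-type model this local kinetic rate is coupled to the O₂ and ion transport equations, which is what makes discrete catalyst placement matter and not just total catalyst surface area.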
|
370 |
Tools and Techniques for Flow Characterization in the Development of Load Leveling Valves for Heavy Truck ApplicationGupta, Yashvardhan 04 June 2018 (has links)
This research examines different techniques and proposes a Computational Fluid Dynamics (CFD) model as a robust tool for flow characterization of load leveling valves. The load leveling valve is a critical component of an air suspension system since it manages air spring pressure, a key function that directly impacts vehicle dynamic performance in addition to maintaining a static ride height. The efficiency of operation of a load leveling valve is established by its flow characteristics, a metric useful in determining suitability of the valve for application in a truck-suspension configuration and for comparison among similar products. The disk-slot type load leveling valve was chosen as the subject of this study due to its popularity in the heavy truck industry. Three distinct methods are presented to model and evaluate the flow characteristics of a disk-slot valve. First is a theoretical formulation based on gas dynamic behavior through an orifice; second is an experimental technique in which a full pneumatic apparatus is used to collect instantaneous pressure data to estimate air discharge; and third is a CFD approach. Significant discrepancies observed between theoretically estimated results and experimental data suggest that the theoretical model is incapable of accurately capturing losses that occur during air flow. These variations diminish as the magnitude of the discharge coefficient is altered.
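A theoretical orifice-flow formulation with a discharge coefficient, of the kind compared against experiment above, can be sketched with the standard isentropic relations; the orifice area, pressures, and Cd value below are illustrative, not those of the disk-slot valve:

```python
import math

def orifice_mass_flow(p_up_pa, p_down_pa, t_up_k, area_m2, cd=0.8,
                      gamma=1.4, r_gas=287.0):
    """Isentropic orifice mass flow of air scaled by a discharge
    coefficient Cd. Below the critical pressure ratio the flow chokes,
    so the ratio is clamped and downstream pressure no longer matters."""
    pr_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))  # ≈ 0.528
    pr = max(p_down_pa / p_up_pa, pr_crit)
    term = (2.0 * gamma / (r_gas * t_up_k * (gamma - 1.0))
            * (pr ** (2.0 / gamma) - pr ** ((gamma + 1.0) / gamma)))
    return cd * area_m2 * p_up_pa * math.sqrt(term)

# Illustrative: 6 bar upstream venting through a 10 mm^2 opening at 300 K
m_dot = orifice_mass_flow(6.0e5, 1.0e5, 300.0, 1.0e-5)
```

The discharge coefficient Cd is the single tuning knob in this model, which is why altering its magnitude shrinks the gap to the experimental data: it lumps all the real losses the ideal relation omits.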
A detailed CFD model is submitted as an effective tool for load leveling valve flow characterization/analysis. This model overcomes the deficiencies of the theoretical model and improves the accuracy of simulations. A 2-D axisymmetric approximation of the real fluid domain is analyzed for flow characteristics using a Realizable k-ϵ turbulence model, scalable wall functions, and a pressure-based coupled algorithm with a second order discretization function. The CFD-generated results were observed to be in agreement with the experimental findings. CFD is found to be advantageous in the evaluation of flow characteristics as it furnishes precise data without the need to experimentally evaluate a physical model/prototype of the valve, thereby benefitting suspension engineers involved in the development and testing of load leveling valve designs. This document concludes with a sample case study which uses CFD to characterize flow in a modified disk-slot load leveling valve, and discusses the results in light of application on a heavy truck. / MS / A majority of heavy trucks in North America equipped with air suspensions use a device known as a load leveling valve. This is a mechanical control system which manages pressure in air springs to maintain a preset/constant static ride height irrespective of the payload, doing so by sensing the distance between the truck frame and the axle. The rate of airflow to/from air springs in response to a road disturbance or load shift is critical to the stability of the truck when on the road. This rate of airflow for a given set of conditions constitutes flow characteristics of a load leveling valve. Accurate measurement of flow characteristics is necessary to understand the actual effect of the use of a particular valve on a truck-suspension configuration. 
This research addresses that requirement by presenting three distinct methods to model and evaluate flow characteristics of a load leveling valve, conducted on the disk-slot valve for its popularity in the heavy truck industry. First is a theoretical formulation based on flow of gas through an orifice; second is an experimental technique in which a full pneumatic apparatus is used to collect instantaneous pressure data to estimate air discharge; and third is a Computational Fluid Dynamics (CFD) approach. Significant discrepancies observed between theoretically estimated results and experimental data suggest that the theoretical model is incapable of accurately capturing losses that occur during air flow. The disparities also justify the adoption of CFD as an alternate method.
A comprehensive CFD model is proposed as a capable tool for load leveling valve flow analysis/characterization. This model overcomes the deficiencies of the theoretical model and improves the accuracy of simulations. CFD-generated results are found to be in agreement with the experimental findings, highlighting its effectiveness at flow characterization. The ability of a CFD model to furnish precise data without the need to experimentally evaluate a physical model/prototype of the valve promises to benefit suspension engineers involved in the development and testing of load leveling valve designs. This document concludes with a sample case study which uses CFD to characterize flow in a modified disk-slot valve, and discusses the results in light of application on a heavy truck.
|