251

Novel Approach for Computational Modeling of a Non-Premixed Rotating Detonation Engine

Subramanian, Sathyanarayanan 17 July 2019 (has links)
Detonation cycles are identified as an efficient alternative to the Brayton cycles used in power and propulsion applications. A Rotating Detonation Engine (RDE) operating on a detonation cycle works by compressing the working fluid across a detonation wave, thereby reducing the number of compressor stages required in the thermodynamic cycle. Numerical analyses of RDEs are flexible for understanding the flow field within the RDE; however, three-dimensional analyses are expensive due to the disparate time scales required to resolve the combustion process and the flow field. The alternative two-dimensional analyses are generally modeled with perfectly premixed fuel injection and do not capture the effects of imperfect mixing arising from the discrete injection of fuel and oxidizer into the chamber. To model realistic injection in a 2-D analysis, the current work uses an approach in which a Probability Density Function (PDF) of the fuel mass fraction at the chamber inlet is extracted from a 3-D, cold-flow simulation and is used as the inlet boundary condition for fuel mass fraction in the 2-D analysis. The 2-D simulation requires only 0.4% of the CPU hours of an equivalent 3-D simulation for one revolution of the detonation. Using this method, a perfectly premixed RDE is compared with a non-premixed case. The performance is found to vary between the two cases. The mean detonation velocities and time-averaged static pressure profiles are similar between the two cases, while the local detonation velocities and peak pressure values vary in the non-premixed case due to local pockets of fuel-rich/lean mixtures. The mean detonation cell sizes are similar, but the cells in the non-premixed case are more closely spaced due to stronger shock structures. An analytical method is used to assess the effects of fuel-product stratification and heat loss from the RDE; both effects adversely affect the local detonation velocity. Overall, this method of modeling captures the complex physics in an RDE with the advantage of reduced computational cost and can therefore be used for design and diagnostic purposes. / Master of Science / The conventional Brayton cycle used in power and propulsion applications is highly optimized at both the cycle and component levels. In pursuit of higher thermodynamic efficiency, detonation cycles have been identified as an efficient alternative and have gained increased attention in the scientific community. In a Rotating Detonation Engine (RDE), which is based on the detonation cycle, the compression of gases occurs across a shock wave. This method of achieving high compression ratios reduces the number of compressor stages required for operation. In an RDE (where combustion occurs between two coaxial cylinders), the fuel and oxidizer are injected axially into the combustion chamber, where the detonation is initiated. The resultant detonation wave spins continuously in the azimuthal direction, consuming fresh fuel mixture. The combustion products expand and exhaust axially, providing thrust or mechanical energy when coupled with a turbine. Numerical analyses of RDEs are more flexible than experiments for understanding the flow physics and the physical/chemical processes occurring within the engine. However, three-dimensional numerical analyses are computationally expensive, which motivates an equivalent, efficient two-dimensional analysis. In most RDEs, fuel and oxidizer are injected from separate plenums into the chamber. This type of injection leads to inhomogeneity of the fuel-air mixture within the RDE, which adversely affects the performance of the engine. The current study uses a novel method to capture these physics effectively in a 2-D numerical analysis. Furthermore, the performance of the combustor is compared between perfectly premixed injection and discrete, non-premixed injection. The method used in this work can be applied to any injector design and is a powerful and efficient way to numerically analyze a Rotating Detonation Engine.
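The key step in the approach above is replacing a uniform premixed inlet with fuel mass fractions drawn from a PDF extracted from a 3-D cold-flow solution. Below is a minimal sketch of how such a sampled inlet condition could look; the beta-distributed stand-in data, function names, and cell count are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_inlet_pdf(cold_flow_samples, n_bins=50):
    """Build an empirical PDF (histogram) of fuel mass fraction from data
    extracted at the inlet plane of a 3-D cold-flow solution."""
    counts, edges = np.histogram(cold_flow_samples, bins=n_bins, density=True)
    probs = counts * np.diff(edges)      # convert densities to bin probabilities
    probs /= probs.sum()
    return edges, probs

def sample_inlet_mass_fraction(edges, probs, n_cells):
    """Draw a fuel mass fraction for every 2-D inlet boundary cell by sampling
    the empirical PDF (uniformly within the chosen bin)."""
    bins = rng.choice(len(probs), size=n_cells, p=probs)
    return rng.uniform(edges[bins], edges[bins + 1])

# Illustrative use: stand-in cold-flow data and a hypothetical 2-D inlet of 360 cells.
cold_flow_yf = rng.beta(2.0, 5.0, size=20_000)
edges, probs = build_inlet_pdf(cold_flow_yf)
yf_inlet = sample_inlet_mass_fraction(edges, probs, n_cells=360)
```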
252

Influence of Fuel Inhomogeneity and Stratification Length Scales on Detonation Wave Propagation in a Rotating Detonation Combustor (RDC)

Raj, Piyush 03 May 2021 (has links)
Detonation-based engines have the key advantage of increased thermodynamic efficiency over the traditional constant pressure combustor. These engines are also known as Pressure Gain Combustion (PGC) systems, and the Rotating Detonation Combustor (RDC) is a form of PGC in which the detonation wave propagates azimuthally around an annular combustor. Prior researchers have performed high-fidelity 3-D numerical simulations of rotating detonation combustors to understand the flow physics, such as the detonation wave velocity, pressure profile, and wave structure; however, performing these 3-D simulations is computationally expensive. 2-D simulations are a potential alternative to reduce computational cost. In most RDCs, fuel and oxidizer are injected discretely from separate plenums, and this discrete fuel/air injection results in inhomogeneous mixing within the domain. Due to the discrete fuel injection locations, the fuel/oxidizer stratifies to form localized pockets of rich and lean mixtures. The motivation of the present study is to investigate the impact of unmixedness and stratification length scales on the performance of an RDC using a 2-D numerical approach. Unmixedness, defined as the standard deviation of the equivalence ratio normalized by the mean global equivalence ratio, is a measure of the degree of fuel-oxidizer inhomogeneity. To model the effect of unmixedness in a 2-D domain, a lognormal distribution of the fuel mass fraction is generated with a mean equivalence ratio of 1 and varying standard deviations at the inlet boundary as a numerical source term. Moreover, to model the effects of stratification length scales, the fuel mass fraction at the inlet boundary cells is bundled for a given length scale, and the mass fractions for these bundles are updated based on the lognormal distribution after every three time steps. Using this methodology, 2-D numerical analyses are carried out to investigate the performance of an RDC for an H2-air mixture with varying unmixedness and stratification length scales. Results show that the mean detonation velocity decreases and the wave speed variation increases with an increase in unmixedness. However, with an increase in stratification length scale, the mean velocity remains relatively unchanged, but the variation in local velocity increases. The detonation wave front corrugation also increases with an increase in mixture inhomogeneity. The mean detonation cell size increases with an increase in unmixedness. The cell shape becomes more distorted and irregular with an increase in stratification length scale and unmixedness. The combined effect of unmixedness and stratification length scale leads to a decrease in pressure gain. Overall, this approach is able to elucidate the effects of varying unmixedness and stratification length scales on the performance of an RDC. / Master of Science / Pressure Gain Combustion (PGC) systems have gained significant attention in recent years due to their increased thermodynamic efficiency over the constant pressure Brayton cycle. The Rotating Detonation Combustor (RDC) is a type of PGC system that is thermodynamically more efficient than the conventional gas turbine combustor. One of the main aspects of the detonation process is the rapid burning of the fuel-oxidizer mixture, which occurs so fast that there is not enough time for pressure to equilibrate. Therefore, the process is thermodynamically closer to a constant volume process than to a constant pressure process. A constant volume cycle is thermodynamically more efficient than a constant pressure Brayton cycle. In an RDC, a mixture of fuel and air is injected axially, and a detonation wave propagates continuously through the circumferential section. Numerical simulation of an RDC provides additional flexibility over experiments in understanding the flow physics and detonation wave structure and in analyzing the physical and chemical processes involved in the detonation cycle. Prior researchers have utilized full-scale 3-D numerical simulations to understand the performance of an RDC. However, the major challenge with 3-D analyses is the computational expense. Thus, to overcome this, an inexpensive 2-D simulation is used to model the flow physics of an RDC. In most RDCs, the fuel and oxidizer are injected discretely from separate plenums. Due to the discrete fuel injection, the fuel/air mixture is never perfectly premixed, which results in a stratified flow field. The objective of the current work is to develop a novel approach to independently investigate the effects of varying unmixedness and stratification length scales on RDC performance using a 2-D simulation.
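As a rough illustration of the inlet treatment described above, the sketch below builds a bundled, lognormally distributed equivalence-ratio field over the inlet cells and evaluates the resulting unmixedness. The cell count, cell size, and length scale are assumed values, and in the actual analysis the bundled values would be refreshed inside the CFD solver (the abstract states every three time steps) rather than in standalone Python.

```python
import numpy as np

rng = np.random.default_rng(1)

def unmixedness(phi):
    """Unmixedness: standard deviation of the equivalence ratio normalized
    by the mean global equivalence ratio."""
    return np.std(phi) / np.mean(phi)

def lognormal_phi(mean_phi, sigma, size):
    """Equivalence ratios drawn from a lognormal distribution whose arithmetic
    mean equals mean_phi (sigma is the log-space standard deviation)."""
    mu = np.log(mean_phi) - 0.5 * sigma**2
    return rng.lognormal(mu, sigma, size)

def bundled_inlet_phi(n_cells, cell_size, length_scale, sigma, mean_phi=1.0):
    """Group inlet cells into bundles spanning one stratification length scale
    and assign each bundle a single sampled equivalence ratio."""
    cells_per_bundle = max(1, int(round(length_scale / cell_size)))
    n_bundles = -(-n_cells // cells_per_bundle)          # ceiling division
    phi_bundles = lognormal_phi(mean_phi, sigma, n_bundles)
    return np.repeat(phi_bundles, cells_per_bundle)[:n_cells]

# Hypothetical inlet: 720 cells of 0.25 mm, 1 mm stratification length scale.
phi = bundled_inlet_phi(n_cells=720, cell_size=0.25e-3, length_scale=1e-3, sigma=0.3)
print(unmixedness(phi))   # in the solver this field would be refreshed periodically
```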
253

A numerical study of the short- and long-term heat transfer phenomena of borehole heat exchangers

Harris, Brianna January 2024 (has links)
This thesis contributes an in-depth comparative study of u-tube and coaxial borehole heat exchangers. While it is widely accepted that the lower resistance of the coaxial heat exchanger should result in a performance advantage, the findings of several studies comparing the heat exchanger configurations did not definitively establish the mechanisms causing differences in performance. This study employs numerical modelling to consider heat exchangers over a broad range of time scales and under carefully controlled geometry and flow conditions, resulting in the identification of the key parameters influencing borehole heat exchanger performance. The first part of this study consists of a comparison of u-tube and coaxial heat exchangers under continuous loading. A detailed conjugate heat transfer numerical model was developed in OpenFOAM, designed to capture both the short and long time scales of heat exchange necessary to understand the nuanced differences between designs. A novel transient resistance analysis was employed to understand the dominant factors influencing performance. This study established that marginal differences exist between u-tube and coaxial borehole heat exchangers (BHEs) when operated continuously long term, but that greater differences occur early in operation. The second phase of this investigation provided a framework for analysing borehole heat exchanger performance during intermittent operation, while also comparing u-tube and coaxial designs. During this study, it was found that reducing operating time, improving the rate of the ground's recovery to its original temperature, and lowering the duty cycle improved BHE performance. Transit time was identified as an influential time scale, below which heating at the outlet was limited. Further, the benefits of operating below the transit time were mitigated by design-specific interaction between inlet and outlet flows. Finally, this study found that non-dimensionalizing operating time by transit time causes the differences between u-tube and coaxial performance to vanish, leading to the conclusion that differences in BHE performance are caused by variations in flow rather than thermal mass. / Thesis / Doctor of Philosophy (PhD) / This thesis provides an in-depth comparative study of two different designs of borehole heat exchanger, the u-tube and the coaxial, which are used in geothermal applications to transfer heat to and from the ground. While many researchers anticipated that the coaxial design would perform better, several studies comparing the heat exchangers were not able to provide a clear answer about which heat exchanger performed best. This study addressed this gap using detailed numerical simulations, which showed that there was a marginal difference in performance between the two heat exchangers when operated for periods longer than a few hours, but that larger differences occurred early in operation (under 15 minutes). The results also showed that operating intermittently improved the performance of the heat exchanger, particularly when operated for periods shorter than the time it takes fluid to travel the length of the piping.
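Because transit time emerges as the governing time scale, a short sketch of how it might be estimated for a u-tube circuit, and how operating time could be non-dimensionalized by it, is given below; the depth, pipe diameter, and flow rate are illustrative assumptions, not parameters from the thesis.

```python
import math

def utube_transit_time(depth_m, pipe_inner_d_m, flow_l_per_s):
    """Transit time: time for fluid to traverse the full circuit,
    down one leg of the u-tube and back up the other."""
    area = math.pi * (pipe_inner_d_m / 2.0) ** 2      # pipe cross-section, m^2
    circuit_volume = area * 2.0 * depth_m             # down leg + up leg, m^3
    return circuit_volume / (flow_l_per_s * 1e-3)     # seconds

# Illustrative values only: 150 m borehole, 27 mm inner diameter, 0.5 L/s.
t_transit = utube_transit_time(150.0, 0.027, 0.5)     # roughly 340 s
t_star = (10.0 * 60.0) / t_transit                    # 10 min of operation, non-dimensional
print(t_transit, t_star)
```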
254

Computational Simulation of Coal Gasification in Fluidized Bed Reactors

Soncini, Ryan Michael 24 August 2017 (has links)
The gasification of carbonaceous fuel materials offers significant potential for the production of both energy and chemical products. Advancement of gasification technologies may be expedited through the use of computational fluid dynamics, as virtual reactor design offers a low-cost method for system prototyping. To that end, a series of numerical studies were conducted to identify a computational modeling strategy for the simulation of coal gasification in fluidized bed reactors. The efforts set forth by this work first involved the development of a validatable hydrodynamic modeling strategy for the simulation of sand and coal fluidization. Those fluidization models were then applied to systems at elevated temperatures and to polydisperse systems that featured a complex material injection geometry, for which no experimental data exist. A method for establishing similitude between 2-D and 3-D multiphase systems that feature non-symmetric material injection was then delineated and numerically tested. Following the development of the hydrodynamic modeling strategy, simulations of coal gasification were conducted using three different chemistry models. Simulated results were compared to experimental outcomes in an effort to assess the validity of each gasification chemistry model. The chemistry model that exhibited the highest degree of agreement with the experimental findings was then further analyzed to identify areas of potential improvement. / Ph. D. / Efficient utilization of coal is critical to ensuring stable domestic energy supplies while mitigating human impact on climate change. This idea may be realized through the use of gasification technologies. The design and planning of next-generation coal gasification reactors can benefit from the use of computational simulations to reduce both development time and cost. This treatise presents several studies where computational fluid dynamics was applied to the problem of coal gasification in a bubbling fluidized bed reactor, with a focus on accurate tracking of solid material locations and modeling of chemical reactions.
255

Modeling the Stimulation of Vestibular Hair Cell Bundles Using Computational Fluid Dynamics and Finite Element Analysis

Welker, Joseph Robert 19 September 2012 (has links)
Computational fluid dynamics and finite element analysis were employed to study vestibular hair cell bundle mechanics under physiologic stimulus conditions. CFD was performed using ANSYS CFX, and FEA utilized a custom MATLAB model. Nine varieties of hair cell bundles were modeled using tip-forcing only (commonly used experimentally), fluid-flow only (physiologic for free-standing bundles), and combined loading (physiologic for bundles with tip attachments) conditions to determine how the bundles behaved in each case. The bundles differed in the heights of their components, their length and width, and their number of stereocilia. Tip links were modeled to determine ion-channel opening behavior. Results show that positive pressures, negative pressures, and shear stresses on the exterior of the bundles are of comparable magnitude. Under combined loading, some bundles experienced very high suction pressures on their interior. The bundles with tall stereocilia are hindered by the endolymph, while those with short stereocilia and much taller kinocilia are assisted by the fluid flow. Each bundle type has a different range over which it is most sensitive, so that the bundles cumulatively cover a very large range of stimuli; the order in which bundles respond, from smallest stimulus magnitude to largest, is free-standing extrastriolar bundles, attached striolar bundles, attached extrastriolar bundles, and free-standing extrastriolar bundles. A short examination of off-axis loading shows that the prevailing theory, which holds that bundle response is proportional to the cosine of the angle between the stimulus direction and the bundle's direction of maximum excitation, is incorrect. / Ph. D.
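For reference, the cosine-law theory tested in that final result can be stated as

R(\theta) = R_{\max} \cos\theta

where R is the bundle response, R_max the response along the bundle's direction of maximum excitation, and \theta the angle between the stimulus direction and that direction (the notation here is ours, not the thesis's).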
256

Liquid Sodium Stratification Prediction and Simulation in a Two-Dimensional Slice

Langhans, Robert Florian 28 March 2017 (has links)
In light of rising global temperatures and energy needs, nuclear power is uniquely positioned to offer carbon-free and reliable electricity. In many markets, however, nuclear power faces strong headwinds due to competition with other fuel sources and prohibitively high capital costs. Small Modular Reactors (SMRs), such as the proposed Advanced Fast Reactor (AFR) 100, have gained popularity in recent years as they promise economies of scale, reduced capital costs, and flexibility of deployment. Sodium fast reactors commonly feature an upper plenum with a large inventory of sodium. When temperatures change during transients, stratification can occur. It is important to understand the stratification behavior of these large volumes because stratification can counteract natural circulation and fatigue materials. This work features steady-state and transient simulations of thermal stratification and natural circulation of liquid sodium in a simple rectangular slice using a commercial CFD code (ANSYS FLUENT). Different inlet velocities and their effect on stratification are investigated by changing the inlet geometry. Stratification was observed in the two cases with the lowest inlet velocities. An approach for tracking the stratification interface was developed that focuses on temperature gradients rather than temperature differences. Other authors have developed correlations to predict stratification in three-dimensional enclosures. However, these correlations predict stratified conditions for all of the simulations, even the ones that did not stratify. The previous models are modified to reflect the two-dimensional nature of the flow in the enclosure. The results align more closely with the simulations and correctly predict stratification in the investigated cases. / Master of Science
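A minimal sketch of the gradient-based interface tracking mentioned above, locating the interface from dT/dz rather than from a fixed temperature difference, is shown below; the synthetic profile, threshold option, and function name are illustrative assumptions, not the thesis's post-processing code.

```python
import numpy as np

def stratification_interface(z, T, grad_threshold=None):
    """Locate the stratification interface from a vertical temperature profile
    using the local gradient dT/dz rather than a temperature difference."""
    dTdz = np.gradient(T, z)
    if grad_threshold is None:
        return z[np.argmax(np.abs(dTdz))]          # height of steepest gradient
    idx = np.where(np.abs(dTdz) >= grad_threshold)[0]
    return z[idx[0]] if idx.size else None         # lowest height exceeding threshold

# Illustrative profile: hot sodium layer above cold, with a thermocline at z = 0.6 m.
z = np.linspace(0.0, 1.0, 201)                      # m
T = 400.0 + 100.0 / (1.0 + np.exp(-(z - 0.6) / 0.02))   # degC, smooth step
print(stratification_interface(z, T))               # ~0.6
```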
257

Feasibility Study of a Natural Uranium Neutron Spallation Target using FLiBe as a Coolant

Boulanger, Andrew James 08 June 2011 (has links)
The research conducted was a feasibility study of using lithium fluoride-beryllium fluoride (LiF-BeF2), or FLiBe, as a coolant for a natural uranium neutron spallation source applied to an accelerator-driven sub-critical molten salt reactor. The study utilized two different software tools, MCNPX 2.6 and FLUENT 12.1. MCNPX was used to determine the neutronics and the heat deposited in the spallation target structure, while FLUENT was used to determine the feasibility of cooling the target structure with FLiBe. Several target structures were analyzed using a variety of plates and large cylinders of natural uranium with a proton beam incident on a Hastelloy-N window. The supporting structures were made of Hastelloy-N due to its resistance to corrosion by molten salts such as FLiBe and its resistance to neutron damage. The final design chosen was a "Sandwich" design utilizing a section of thick plates, followed by several smaller plates, and finally a section of thick plates to stop any protons from irradiating the bottom of the target support structure or the containment vessel of the reactor. Utilizing a proton beam with 0.81 MW of beam power at 1.35 mA and proton kinetic energies of 600 MeV, the total heat generated in the spallation target was about 0.9 MW due to fissions in the natural uranium. Additionally, the final design of the spallation target produced approximately 1.25x10^18 neutrons per second, which were mainly fast neutrons. The use of a natural uranium target proved to be very promising. However, cooling the target using FLiBe would require further optimization or investigation into alternate coolants. Specifically, the final design developed using FLiBe as a coolant was not practically feasible due to the hydraulic forces resulting from the high flow rates necessary to keep the natural uranium target structures cooled. The primary reason for the lack of a feasible solution was FLiBe itself: it is unable to remove enough of the heat generated in the target from the target structure. Due to the high energy density of a natural uranium spallation target structure, a more effective method of cooling, such as a liquid metal coolant like lead-bismuth eutectic, will be required to avoid high hydraulic forces. / Master of Science
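The quoted beam power follows directly from the beam current and proton energy given in the abstract; the short check below reproduces it and notes why the deposited heat can exceed the beam power.

```python
# Quick check of the quoted figures (all values taken from the abstract).
current_A = 1.35e-3            # proton beam current
kinetic_energy_eV = 600e6      # kinetic energy per proton
beam_power_W = current_A * kinetic_energy_eV   # P = I * (E/e); 1 eV per elementary charge is 1 V
print(beam_power_W / 1e6)      # -> 0.81 MW, matching the stated beam power
# The ~0.9 MW deposited in the target exceeds the beam power because fission
# of the natural uranium adds energy on top of what the beam delivers.
```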
258

A computational study of the 3D flow and performance of a vaned radial diffuser

Akseraylian, Dikran 18 November 2008 (has links)
A computational study was performed on a vaned radial diffuser using the MEFP (the Moore Elliptic Flow Program) flow code. The vaned diffuser studied by Dalbert et al. was chosen as the test case for this thesis, and the geometry and inlet conditions were established from that study. The performance of the computational diffuser was compared to the test case diffuser. The CFD analysis was able to demonstrate the 3D flow within the diffuser. An inlet conditions analysis was performed to establish the boundary conditions at the diffuser inlet. The given inlet flow angles were reduced in order to match the specified mass flow rate, and the inlet static pressure was held constant over the height of the diffuser. The diffuser was broken down into its subcomponents to study the effects of each component on the overall performance of the diffuser. The diffuser inlet region, which comprises the vaneless and semi-vaneless spaces, contains the greatest losses, 56%, but also the highest static pressure rise, 54%. The performance at the throat was also evaluated, and the blockage and pressure recovery were calculated. The results show the static pressure comparison between the computational study and the test case. The overall pressure rise of the computational study was in good agreement with the measured pressure rise. The static pressure and total pressure loss distributions in the inlet region, at the throat, and in the exit region of the diffuser were also analyzed. The flow development was presented for the entire diffuser. The 3D flow calculations were able to illustrate a leading-edge recirculation at the hub, caused by inlet skew and high losses at the hub, and showed that secondary flows in the diffuser convected these high losses. The study presented in this thesis demonstrated the flow development in a vaned diffuser and its subcomponents. The performance was evaluated by calculating the static pressure rise, total pressure losses, and throat blockage. It also demonstrated current CFD capabilities for diffusers using steady 3D flow analysis. / Master of Science
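The performance quantities evaluated above (static pressure rise, total pressure losses, and throat blockage) are commonly built from a few simple ratios; the sketch below uses standard textbook definitions, which may differ in detail from those adopted in the thesis.

```python
def pressure_recovery_cp(p_in, p0_in, p_out):
    """Static pressure recovery coefficient: static pressure rise normalized
    by the inlet dynamic pressure."""
    return (p_out - p_in) / (p0_in - p_in)

def total_pressure_loss_coeff(p_in, p0_in, p0_out):
    """Total pressure loss coefficient: total pressure drop normalized by
    the inlet dynamic pressure."""
    return (p0_in - p0_out) / (p0_in - p_in)

def throat_blockage(m_dot, rho, v_ideal, geometric_area):
    """Blockage: fraction of the geometric throat area not available to the
    core flow, from the effective flow area m_dot / (rho * v_ideal)."""
    effective_area = m_dot / (rho * v_ideal)
    return 1.0 - effective_area / geometric_area
```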
259

Investigation of Erosion and Deposition of Sand Particles within a Pin Fin Array

Cowan, Jonathan B. 11 December 2009 (has links)
The transport of particulates within both fully developed and developing pin fin arrays is explored using computational fluid dynamics (CFD) simulations. The simulations are carried out using the LES solver GenIDLEST for the fluid (carrier) phase and a Lagrangian approach for the particle (dispersed) phase. A grid independence study and a validation case against relevant experiments are given to lend confidence to the numerical simulations. Various Stokes numbers (0.78, 3.1, and 19.5) are explored, as well as three nondimensional particle softening temperatures (θ_ST = 0, 0.37, and 0.67). The deposition is shown to increase with decreasing particle Stokes number, and thus decreasing size, from 0.005% for St_p = 19.5 to 13.4% for St_p = 0.78, and is almost completely concentrated on the channel walls (99.6% - 100%). The erosion potential is shown to increase with Stokes number and is highest on the pin faces. As is to be expected, the deposition increases with decreasing softening temperature, from 13.4% at θ_ST = 0.67 to 79% at θ_ST = 0. Overall, the channel walls of the array show the greatest potential for deposition, while the pin faces show the greatest potential for erosion. Similarly, the higher Stokes number particles have more erosion potential, while the lower Stokes number particles have a higher potential for deposition. / Master of Science
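For context, the particle Stokes number that organizes these results is the ratio of the particle response time to a flow time scale; the sketch below uses a standard Stokes-drag definition, and the reference velocity, length scale, and example values are assumptions rather than those of the study.

```python
def particle_stokes_number(rho_p, d_p, u_ref, mu_f, l_ref):
    """Particle Stokes number: particle response time tau_p = rho_p * d_p**2 / (18 * mu_f)
    divided by a flow time scale l_ref / u_ref."""
    tau_p = rho_p * d_p**2 / (18.0 * mu_f)
    return tau_p * u_ref / l_ref

# Illustrative sand particle in air flowing past a pin (assumed values).
print(particle_stokes_number(rho_p=2650.0, d_p=5e-6, u_ref=10.0,
                             mu_f=1.8e-5, l_ref=0.01))   # ~0.2
```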
260

Insight Driven Sampling for Interactive Data Intensive Computing

Masiane, Moeti Moeklesia 24 June 2020 (has links)
Data visualization is used to help humans perceive high-dimensional data, but it cannot be applied in real time to data intensive computing applications. Attempts to process and apply traditional information visualization techniques to such applications result in slow or non-responsive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data intensive applications. Sampling allows data visualization to be used as an interface between humans and the insights contained in the big data of data intensive computing. However, sampling introduces error. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data intensive application. To determine an adequate level of sampling, one can use statistical measures like standard error. However, such measures do not translate well to cases involving data visualization. Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that allows system users to make an informed decision on the level of sampling needed to speed up a data intensive application. In this work we introduce an insight-based measure for the impact of sampling on the results of visualized data. We develop a framework for quantifying the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to give data intensive computing users the ability to control the amount of sampling as a function of user-provided insight requirements, and develop a prototype that utilizes our framework. This work allows users to speed up data intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of the data. Starting with a simple one-dimensional data intensive application, we apply our framework and work our way up to a more complicated computational fluid dynamics case as a proof of concept of our framework and insight-based error feedback measure for those using sampling to speed up data intensive computing. / Doctor of Philosophy / Data visualization is used to help humans perceive high-dimensional data, but it cannot be applied in real time to computing applications that generate or process vast amounts of data, also known as data intensive computing applications. Attempts to process and apply traditional information visualization techniques to such data result in slow or non-responsive data intensive applications. For such applications, sampling is often used to reduce big data to smaller data so that the benefits of data visualization can be brought to data intensive applications. Sampling allows data visualization to be used as an interface between humans and the insights contained in the big data of data intensive computing. However, sampling introduces error. The objective of sampling is to reduce the amount of data being processed without introducing too much error into the results of the data intensive application. This error results from the possibility that a data sample could exclude valuable information that was included in the original data set. To determine an adequate level of sampling, one can use statistical measures like standard error. However, such measures do not translate well to cases involving data visualization. Knowing the standard error of a sample tells you very little about the visualization of that data. What is needed is a measure that allows one to make an informed decision about how much sampling to use in a data intensive application, based on knowing how sampling impacts the insights people gain from a visualization of the sampled data. In this work we introduce an insight-based measure for the impact of sampling on the results of visualized data. We develop a framework for quantifying the level of insight, model the relationship between the level of insight and the amount of sampling, use this model to provide data intensive computing users with an insight-based feedback measure for each arbitrary sample size they choose when speeding up data intensive computing, and develop a prototype that utilizes our framework. Our prototype applies our framework and insight-based feedback measure to a computational fluid dynamics (CFD) case, but our work starts off with a simple one-dimensional data application and works its way up to the more complicated CFD case. This work allows users to speed up data intensive applications with a clear understanding of how the speedup will impact the insights gained from the visualization of this data.
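For contrast with the insight-based measure proposed here, the sketch below computes the statistical baseline the abstract refers to, the standard error of a sample mean, for a few sample sizes; the synthetic population and sample sizes are assumptions for illustration only.

```python
import numpy as np

def standard_error(sample):
    """Standard error of the sample mean: s / sqrt(n). The abstract's point is
    that this number says little about how a visualization of the sample reads."""
    return np.std(sample, ddof=1) / np.sqrt(len(sample))

rng = np.random.default_rng(7)
population = rng.normal(0.0, 1.0, 1_000_000)      # stand-in for a big data set
for n in (100, 1_000, 10_000):
    sample = rng.choice(population, size=n, replace=False)
    print(n, standard_error(sample))
```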
