261

Effect of gamma irradiation on the viscosity of gelatin and pectin solutions used in the food industry

Inamura, Patrícia Yoko 22 February 2008
Pectin is a polysaccharide of plant origin that may be used as a gelling agent and as a stabilizer in jams, yogurt drinks, and dairy beverages. Gelatin, in this case a protein of bovine origin, is mainly used as a gelling agent because it forms hydrogels on cooling. The ⁶⁰Co gamma-irradiation process may cause a variety of modifications in macromolecules, some of industrial application, such as cross-linking (reticulation). The dynamic response of viscoelastic materials can be used to give information about the structural aspects of a system at the molecular level. In the present work, pectin with different degrees of methoxylation, gelatin, and mixtures of both were employed to study radiation sensitivity by means of viscosity measurements. Solutions of citrus pectin with high methoxyl content (ATM) at 1%, pectin with low methoxyl content (BTM) at 1%, gelatin at 0.5%, 1%, and 2%, and the mixture at 1% and 2% were irradiated with gamma rays at doses of up to 15 kGy and a dose rate of about 2 kGy/h. After irradiation, the viscosity was measured within a period of 48 h. The viscosity of the ATM and BTM pectin solutions decreased sharply with increasing radiation dose. The gelatin samples, however, showed great resistance to radiation. In the mixtures, the behavior expected for pectin predominated.
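To illustrate the kind of dose-response summary such viscosity measurements allow, the sketch below fits a simple exponential-decay model to hypothetical data; the dose levels, viscosity values, and the form η(D) = η₀·exp(−kD) are illustrative assumptions and are not taken from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical relative-viscosity data for a 1% pectin solution (NOT the thesis data):
# doses in kGy, viscosity relative to the unirradiated control.
dose = np.array([0.0, 2.5, 5.0, 10.0, 15.0])          # kGy
rel_viscosity = np.array([1.00, 0.55, 0.32, 0.12, 0.06])

# Simple empirical model: exponential decay of viscosity with absorbed dose,
# eta(D) = eta0 * exp(-k * D). A real analysis might instead relate intrinsic
# viscosity to molecular weight (Mark-Houwink) and model chain scission.
def decay(D, eta0, k):
    return eta0 * np.exp(-k * D)

(eta0, k), _ = curve_fit(decay, dose, rel_viscosity, p0=(1.0, 0.2))
print(f"fitted eta0 = {eta0:.2f}, dose constant k = {k:.3f} 1/kGy")
print(f"dose to halve viscosity ~ {np.log(2) / k:.1f} kGy")
```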
262

Repair and simplification of geological boundary representation models: impact on mesh and numerical simulation in seismology and hydrodynamics

Anquez, Pierre 12 June 2019
Numerical geological models, in 2D and 3D, help in understanding the spatial organization of the subsurface. They are also designed to perform numerical simulations in order to study or predict the physical behavior of rocks. To solve the governing physical equations, the internal structures of geological models are commonly discretized using meshes. However, mesh quality can be considerably degraded because of the mismatch between, on the one hand, the geometry and connectivity of the geological objects to be discretized and, on the other hand, the constraints imposed on the number, shape, and size of the mesh elements. In such cases, it is desirable to modify the geological model so that good-quality meshes can be generated, allowing reliable physical simulations to be carried out in a reasonable amount of time. In this thesis, I developed strategies for repairing and simplifying 2D geological models with the goal of easing mesh generation and the simulation of physical processes on these models. I propose tools to detect model elements that do not meet the specified validity and level-of-detail requirements. I present a method to repair and simplify geological cross-sections locally, thus limiting the extent of the modifications. This method uses operations that edit both the geometry and the connectivity of the geological model features. Two strategies are explored: geometric modifications (local enlargements of layer thickness) and topological modifications (deletion of small components and local fusion of thin layers). These editing operations produce a model on which a mesh can be generated and numerical simulations run more efficiently. However, simplifying a geological model inevitably modifies the results of the numerical simulations. To weigh the advantages and disadvantages of model simplification for physical simulations, I present three applications of the method: (1) the simulation of seismic wave propagation on a cross-section within the Lorraine coal basin, (2) the evaluation of site effects related to seismic wave amplification in the basin of the lower Var river valley, and (3) the simulation of fluid flow in a fractured porous medium. I show that (1) it is possible to use the physical simulation parameters, such as the seismic resolution, to constrain the magnitude of the simplifications and limit their impact on the numerical simulations, (2) my simplification method can drastically reduce the computation time of numerical simulations (by up to a factor of 55 on a 2D cross-section in the site-effects case study) while preserving an equivalent physical response, and (3) the results of numerical simulations can change depending on the simplification strategy employed (in particular, changing the connectivity of a fracture network can alter fluid flow paths and lead to over- or underestimation of the quantity of produced resources).
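One way to picture the detection step described above (flagging entities that violate a minimum level of detail before mesh generation) is the small sketch below; the use of shapely, the area/half-perimeter thickness proxy, and the thresholds are assumptions made for illustration, not the tools developed in the thesis.

```python
from shapely.geometry import Polygon

def flag_for_simplification(layers, min_area, min_thickness):
    """Flag 2D layer polygons that violate a minimum level of detail.

    layers: dict name -> shapely Polygon (a unit of a geological cross-section)
    min_area: smallest component area allowed in the simplified model
    min_thickness: smallest layer thickness allowed (same length unit)
    Returns a dict name -> suggested editing operation.
    """
    actions = {}
    for name, poly in layers.items():
        if poly.area < min_area:
            # Small component: candidate for deletion (topological edit).
            actions[name] = "remove small component"
        elif 2.0 * poly.area / poly.length < min_thickness:
            # Thin layer (area / half-perimeter as a crude mean-thickness proxy):
            # candidate for local thickening (geometric edit) or for fusion
            # with a neighbouring layer (topological edit).
            actions[name] = "thicken locally or merge with neighbour"
    return actions

# Toy usage: a 1000 m x 2 m sliver is flagged as too thin for a 5 m target mesh size.
sliver = Polygon([(0, 0), (1000, 0), (1000, 2), (0, 2)])
print(flag_for_simplification({"thin_marl": sliver}, min_area=100.0, min_thickness=5.0))
```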
263

Enhancement of Rainfall-Triggered Shallow Landslide Hazard Assessment at Regional and Site Scales Using Remote Sensing and Slope Stability Analysis Coupled with Infiltration Modeling

Rajaguru Mudiyanselage, Thilanki Maneesha Dahigamuwa 14 November 2018
Landslides cause significant damage to property and human lives throughout the world. Rainfall is the most common trigger for landslides. This dissertation presents two novel methodologies for the assessment of rainfall-triggered shallow landslide hazard. The first method uses remotely sensed soil moisture and soil surface properties to develop a framework for real-time, regional-scale landslide hazard assessment, while the second is a deterministic approach to landslide hazard assessment at the specific sites identified during the first assessment. In the latter approach, landslide-inducing transient seepage in soil during rainfall and its effect on slope stability are modeled using numerical analysis. Traditionally, the prediction of rainfall-triggered landslides has been performed using predetermined rainfall intensity-duration thresholds. However, it is the infiltration of rainwater into soil slopes, which increases pore-water pressure and destroys matric suction, that reduces soil shear strength and causes slope instability. Hence, soil moisture, pore pressure, and infiltration properties of soil must be direct inputs to reliable landslide hazard assessment methods. In-situ measurement of pore pressure for real-time landslide hazard assessment is expensive, so the more practical option of remote sensing of soil moisture is constantly sought. A statistical framework for regional-scale landslide hazard assessment using remotely sensed soil moisture had not been developed in past studies. Thus, the first major objective of this study is to develop a framework that uses downscaled remotely sensed soil moisture, available on a daily basis, to monitor locations that are highly susceptible to rainfall-triggered shallow landslides through a well-structured assessment procedure. Downscaled soil moisture, the relevant geotechnical properties of saturated hydraulic conductivity and soil type, and the conditioning factors of elevation, slope, and distance to roads are used to develop an improved logistic regression model to predict the soil slide hazard of soil slopes using data from two geographically different regions. A soil moisture downscaling model with better prediction accuracy than the downscaling models used in previous landslide studies is employed. The resulting model provides satisfactory classification accuracy and performs better than the alternative water-drainage-based indices conventionally used to quantify the effect of elevated soil moisture on soil sliding. Downscaling of the soil moisture content is also shown to improve prediction accuracy. Finally, a technique is proposed for determining the threshold probability that identifies locations with a high soil slide hazard.
On the other hand, many deterministic methods based on analytical and numerical approaches have been developed in the past to model the effects of infiltration and the subsequent transient seepage during rainfall on the stability of natural and man-made slopes. However, the effects of the continuous interplay between surface and subsurface water flows on slope stability are seldom considered in these numerical and analytical models. Furthermore, the existing seepage models are based on the Richards equation, which is derived using Darcy's law under a pseudo-steady-state assumption; thus, the inertial components of flow have typically not been incorporated in modeling the flow of water through the subsurface. Hence, the second objective of this study is to develop a numerical model capable of representing surface, subsurface, and infiltration water flows in a unified approach, based on fundamental fluid dynamics, to assess slope stability under rainfall-induced transient seepage conditions. The developed model is based on the Navier-Stokes equations, which can describe surface, subsurface, and infiltration water flows in a unified manner. The extended Mohr-Coulomb criterion is used to evaluate the shear strength reduction due to infiltration. Finally, the effect of soil hydraulic conductivity on slope stability is examined. The interplay between surface and subsurface water flows is observed to have a significant impact on slope stability, especially at low hydraulic conductivity values. The developed numerical model facilitates site-specific calibration with respect to saturated hydraulic conductivity, remotely sensed soil moisture content, and rainfall intensity to predict landslide-inducing subsurface pore pressure variations in real time.
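A minimal sketch of the regional-scale statistical step described above: a logistic regression on downscaled soil moisture, saturated hydraulic conductivity, and terrain conditioning factors, with a threshold on the predicted soil-slide probability. The synthetic data, feature ordering, and scikit-learn pipeline are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic predictor matrix (NOT dissertation data), one row per slope unit:
# [downscaled soil moisture, log10 saturated hydraulic conductivity,
#  elevation, slope angle, distance to roads]
X = rng.normal(size=(500, 5))
# Synthetic labels: soil slide (1) / no slide (0), loosely tied to moisture and slope.
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Daily hazard screening: probability of sliding for new cells, thresholded at a
# value chosen (as in the dissertation's framework) to balance hits and false alarms.
p = model.predict_proba(X[:5])[:, 1]
threshold = 0.3            # assumed threshold probability; site-specific in practice
print(np.round(p, 2), p > threshold)
```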
264

Methane sources, fluid flow, and diagenesis along the northern Cascadia Margin: using authigenic carbonates and pore waters to link modern fluid flow to the past

Joseph, Craig E. 29 February 2012
Methane-derived authigenic carbonate (MDAC) precipitation occurs within marine sediments as a byproduct of the microbial anaerobic oxidation of methane (AOM). While these carbonates form in chemical and isotopic equilibrium with the fluids from which they precipitate, burial diagenesis and recrystallization can overprint these signals. Plane-polarized light (PPL) and cathodoluminescence (CL) petrography allowed detailed characterization of carbonate phases and their subsequent alteration. Modern MDACs sampled offshore in northern Cascadia (n = 33) are compared with paleoseep carbonates (n = 13) uplifted on the Olympic Peninsula in order to elucidate primary vs. secondary signals, with relevance to interpretations of the carbonate record. The modern offshore environment (S. Hydrate Ridge and Barkley Canyon) is dominated by metastable acicular and microcrystalline aragonite and high-Mg calcite (HMC) that will recrystallize with time to low-Mg calcite (LMC). The diagenetic progression is accompanied by a decrease in Mg/Ca and Sr/Ca ratios, while variation in Ba/Ca depends upon the Ba concentration of the fluids that spur recrystallization. CL images discern primary carbonates with high Mn/Ca from secondary phases that reflect the Mn enrichment characteristic of deep-sourced fluids venting at Barkley Canyon. Methane along the Cascadia continental margin is mainly of biogenic origin, and reported strontium isotopic values reflect a mixture of seawater with fluids modified by reactions with the incoming Juan de Fuca plate. In contrast, the Sr-isotopic composition of carbonates and fluids from Integrated Ocean Drilling Program (IODP) Site U1329 and nearby Barkley Canyon points to a distinct endmember (lowest ⁸⁷Sr/⁸⁶Sr = 0.70539). These carbonates also show elevated Mn/Ca and δ¹⁸O values as low as -12‰, consistent with a deep source of fluids feeding thermogenic hydrocarbons to the Barkley Canyon seeps. Two paleoseep carbonates sampled from the uplifted Pysht/Sooke Fm. have ⁸⁷Sr/⁸⁶Sr values similar to those of the anomalous Site U1329 and Barkley Canyon carbonates (⁸⁷Sr/⁸⁶Sr = 0.70494 and 0.70511). We postulate that the ⁸⁷Sr-depleted carbonates and pore fluids found at Barkley Canyon represent migration of the same type of deep, exotic fluid found in high-permeability conglomerate layers down to 190 mbsf at Site U1329, and which fed paleoseeps in the Pysht/Sooke Fm. These exotic fluids likely reflect interaction with the 52-57 Ma igneous Crescent Terrane, which is located down-dip from both Barkley Canyon and Site U1329. This previously unidentified endmember fluid in northern Cascadia may have sourced cold seeps on this margin since at least the late Oligocene. / Graduation date: 2012
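The endmember argument above can be made quantitative with a standard two-endmember, concentration-weighted strontium isotope mixing calculation; in the sketch below only the deep-fluid ratio (0.70539) comes from the abstract, the seawater ratio (~0.70918) is the commonly cited modern value, and the Sr concentrations are assumed placeholders.

```python
def sr_mixing_ratio(f_deep, r_deep, c_deep, r_sw, c_sw):
    """87Sr/86Sr of a mixture of a deep fluid and seawater-like pore water.

    f_deep : mass fraction of the deep fluid in the mixture (0..1)
    r_*    : 87Sr/86Sr ratio of each endmember
    c_*    : Sr concentration of each endmember (same units, e.g. micromol/kg)
    Mixing is weighted by how much Sr each endmember contributes.
    """
    sr_from_deep = f_deep * c_deep
    sr_from_sw = (1.0 - f_deep) * c_sw
    return (sr_from_deep * r_deep + sr_from_sw * r_sw) / (sr_from_deep + sr_from_sw)

# Endmember ratios: the deep "exotic" fluid from the abstract, and modern seawater
# (~0.70918). Sr concentrations below are assumed placeholders, not measured values.
for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    r = sr_mixing_ratio(f, r_deep=0.70539, c_deep=300.0, r_sw=0.70918, c_sw=87.0)
    print(f"deep-fluid fraction {f:.2f} -> 87Sr/86Sr = {r:.5f}")
```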
265

High temperature proton exchange membrane fuel cells in a serpentine design

Maasdorp, Lynndle Caroline January 2010 (has links)
The aim of my work is to model a segment of a unit cell of a fuel cell stack using numerical methods classified as computational fluid dynamics, implementing the work in a commercial computational fluid dynamics package, FLUENT. The focus of my work is the thermal distribution within this segment; the results aid a better understanding of fuel cell operation in this temperature range. At the time of my investigation, experimental results were unavailable for validation, and therefore my results are compared to previously published results. The outcomes correspond to these: the current flux density increases with increasing operating temperature at a fixed operating voltage, and the temperature varies across the fuel cell at varying operating voltages. This work is done in anticipation of determining actual and/or unique material input parameters, at which point its results would contribute significantly to the understanding of high temperature PEM fuel cell thermal behaviour.
266

Residual Error Estimation And Adaptive Algorithms For Fluid Flows

Ganesh, N 05 1900
The thesis deals with the development of a new residual error estimator and adaptive algorithms based on the error estimator for steady and unsteady fluid flows in a finite volume framework. The a posteriori residual error estimator, referred to as the R-parameter, is a measure of the local truncation error and is derived from the imbalance arising from the use of an exact operator on the numerical solution of conservation laws. A detailed and systematic study of the R-parameter on linear and non-linear hyperbolic problems, involving continuous flows and discontinuities, is performed. Simple theoretical analysis and extensive numerical experiments establish that the R-parameter is a valid estimator in limiter-free continuous flow regions, but is rendered inconsistent at discontinuities and with limiting. The R-parameter is demonstrated to work equally well on different mesh topologies and detects the sources of error, making it an ideal choice to drive adaptive strategies. The theory of the error estimation is also extended to unsteady flows, both on static and moving meshes. The R-parameter can be computed with a low computational overhead and is easily incorporated into existing finite volume codes with minimal effort. Adaptive refinement algorithms for steady flows are devised employing the residual error estimator. For continuous flows devoid of limiters, a purely R-parameter based adaptive algorithm is designed. A threshold length scale derived from the estimator determines the refinement/derefinement criterion, leading to a self-evolving adaptive algorithm devoid of heuristic parameters. For compressible flows involving discontinuities and limiting, on the other hand, a hybrid adaptive algorithm is proposed, in which error indicators are used to flag regions for refinement while regions of derefinement are detected using the R-parameter. Two variants of these algorithms, which differ in the computation of the threshold length scale, are proposed. The disparate behaviour of the R-parameter for continuous and discontinuous flows is exploited to design a simple and effective discontinuity detector for compressible flows. For time-dependent flow problems, a two-step methodology is proposed for adaptive grid refinement. In the first step, the "best" mesh at any given time instant is determined. The second step predicts the evolution of flow phenomena over a period of time and refines the regions into which the flow features would progress. The latter step is implemented using a geometry-based "Refinement Level Projection" strategy which guarantees that the flow features remain in adapted zones between successive adaptive cycles, and hence uniform solution accuracy. Several numerical experiments involving inviscid and viscous flows on different grid topologies are performed to illustrate the success of the proposed adaptive algorithms.

Appendix 1: Candidate's response to the comments/queries of the examiners

The author would like to thank the reviewers for their appreciation of the work embodied in the thesis and for their comments. The clarifications to the comments and queries posed in the reviews are summarized below.

Referee 1

Q: The example of mesh refinement for a RANS solution with a shock was performed with an isotropic mesh, while the author claims that it is appropriate with an anisotropic mesh. If this is the case, why did he not demonstrate that? As the author knows well, in the case of a full 3-D configuration, isotropic adaptation will lead to a substantial number of grid points. The large mesh will hamper timely turnaround of the simulation. Therefore it would be a significant contribution to the aero community if this point is investigated at a later date.

Response: The author is of the view that for most practical situations, a pragmatic approach to mesh adaptation for RANS computations would merely involve generating a viscous padding of adequate fineness around the body and allowing for grid adaptation only in the outer potential region. Of course, this method would allow for grid adaptation in the outer layers of the viscous padding only to the extent that the smoothness criterion is satisfied while adapting the grids in the potential region. This completely obviates point addition to the wall (CAD surface) and thereby avoids all complexities (like loss of automation) resulting from the interaction with the surface modeler while adding points on the wall. This method is expected to do well for attached flows and mildly separated flows. It is expected to do well even for problems involving shock-boundary layer interaction, owing to the fact that the shock is normal to the wall surface (recall that a flow-aligned grid is ideal to capture such shocks), as long as the interaction does not result in massive separation. This approach has already been demonstrated in Section 4.5.3, where adaptive high-lift computations have been performed. Isotropic adaptation retains the goodness of the zero-level grid and therefore the robustness of the solver does not suffer through successive levels of grid adaptation. This procedure may result in a large number of volumes. On the other hand, anisotropic refinement may result in significantly fewer volumes, but the mesh quality may have badly degenerated during successive levels of adaptation, leading to difficulties in convergence. Therefore, the choice of either of these strategies is effectively dictated by requirements on grid quality and grid size. Also, it is generally understood that building tools for anisotropic adaptation is more complicated than for isotropic adaptation, while anisotropic refinement may not require point addition on the wall. Considering these facts, in the view of the author, this issue is an open issue and his personal preference would be to use isotropic refinement or a hybrid strategy employing a combination of these methodologies, particularly considering aspects of solution quality. Finally, in both the examples cited by the reviewer (Sections 6.4.5 and 6.4.6) the objective was to demonstrate the efficacy of the new adaptive algorithm (using error indicators and the residual estimator), rather than to evaluate the pros and cons of isotropic and anisotropic refinement strategies. In the sections cited above, the author has merely highlighted the advantages of the refinement strategies in the specific context of the problem considered, and these statements need not be considered as general.

Referee 2

Q: For convection problems, a good error estimator must be able to distinguish between locally generated error and convected error. The thesis says the residual error estimator is able to do this and some numerical evidence is presented, but can the candidate comment on how the estimator is able to achieve this?

Response: The ultimate aim of any AMR strategy is to reduce the global error. The residual error estimator proposed in this work measures the local truncation error. It has been shown in the context of a linear convective equation that the global error in a cell consists of two parts: the locally generated error in the cell (which is the R-parameter) and the local error transported from other cells in the domain. Both of these errors depend on the local error itself, and any algorithm that reduces the local truncation error (the sources of error) will reduce the global error in the domain. This conclusion is supported by the test case of isentropic flow past an airfoil (Chapter 3, C, Pg 79), where refinement based on the R-parameter leads to lower global error levels than a global-error-based refinement itself.

Q: While analysing the R-parameter in Section 3.3, the operator δ2 is missing.

Response: The analysis in Section 3.3 is based on Eq. (3.3) (Pg 58), which provides the local truncation error. As can be seen from Eq. (3.14), the LHS represents the discrete operator acting on the numerical solution (which is zero) and the first term on the RHS is the exact operator acting on the numerical solution (which is I[u]). Consequently the truncation terms T1 and T2 contribute to the truncation error R1. However, from the viewpoint of computing the error estimate on a discretised domain, we need to replace the exact operator I by a higher order discrete operator δ2. This gives the R-parameter, which has contributions from R1 as well as discretisation errors due to the higher order operator, R2. When the latter is negligible compared to the former, the R-parameter is an estimate of the local truncation error. The truncation error depends on the accuracy of the reconstruction procedure used in obtaining the numerical solution and hence on the discrete operator δ1. On very similar lines, it can be shown that the operator δ2 leads to formal second order accuracy, and this operator is only required in computing the residual error estimate.

Q: What does the phrase "exact derivatives of the numerical solution" mean?

Response: This statement exemplifies the fact that the numerical solution is the exact solution to the modified partial differential equation, and that the truncation terms T1 and T2 that constitute the R-parameter are functions of the derivatives of this numerical solution.

Q: For the operator δ2, quadratic reconstruction is employed. Is the exact or numerical flux function used?

Response: The operator δ2 is a higher order discrete approximation to the exact operator I. Therefore, a quadratic polynomial with a three-point Gauss quadrature has been used in the error estimation procedure. Error estimation does not involve the convergence issues associated with the flow solver, and therefore an exact flux function has been employed with the δ2 operator. Nevertheless, it is also possible to use the same numerical flux function as employed in the flow solver for error estimation.

Q: The same stencil of grid points is used for the solution update and the error estimation. Does this not lead to an increased stencil size?

Response: In comparison to reconstruction using higher degree polynomials such as cubic and quartic reconstruction, quadratic reconstruction involves only a smaller stencil of points consisting of the node-sharing neighbours of a cell. The use of such a support stencil is sufficient for linear reconstruction also and adds to the robustness of the flow solver, although a linear reconstruction can, in principle, work with a smaller support stencil. A possible alternative to using quadratic reconstruction (and hence a slightly larger stencil) is to adopt a Defect Correction strategy to obtain derivatives to higher order accuracy; this needs to be explored in detail.

Q: How is the R-parameter computed for viscous flows?

Response: The computation of the R-parameter for viscous flows is on the same lines as for inviscid flows. The gradients needed for viscous flux computation at the face centers are obtained using quadratic reconstruction. The procedure for calculation of the R-parameter for steady flows (both inviscid and viscous) is the step-by-step algorithm in Section 3.5.

Q: In some cases, regions ahead of the shock show no coarsening.

Response: The adaptive algorithm proposed in this work does not allow for coarsening of the initial mesh, and regions ahead of the shock remain unaffected (because of uniform flow) at all levels of refinement.

Q: Do adaptation strategies terminate automatically, at least for steady flows?

Response: The adaptation strategies (RAS and HAS) must, in principle, by virtue of the construction of the algorithm, terminate automatically for steady flows. In the HAS algorithms, though, there are certain heuristic criteria for termination of refinement, especially at shocks/turbulent boundary layers. In this work, a maximum of four cycles of refinement/derefinement have been carried out, and therefore automatic termination of the adaptive strategies was not studied.

Q: How do residual-based adaptive strategies compare and contrast with adjoint-based approaches, which are now becoming popular for goal-oriented adaptation?

Response: Adjoint-based methods involve the solution of the adjoint problem in addition to the primal problem, which represents a substantial computational cost. A timing study for a typical 3D problem [2] indicates that the solution of the adjoint problem (which needs the computation of the Jacobian and sensitivities of the functional) could require as much as one-half of the total time needed to compute the flow solution. On the contrary, R-parameter based refinement involves no information beyond that required by the flow solver and is roughly equivalent to one explicit iteration of the flow solver (Section 3.5.1). For practical 3-D applications, adjoint-based approaches will lead to a prohibitively high cost, and more so for dynamic adaptation. This is also exemplified by the fact that there have been only a few recent works on 3D adaptive computations based on adjoint error estimation (which consider only inviscid flows) [1,2]. Goal-oriented adaptation involves reducing the error in some functional of interest. This can be achieved within the framework of R-parameter based adaptation by introducing additional termination criteria based on integrated quantities. Within an automated adaptation loop, such an algorithm would terminate when the integrated quantities do not change appreciably with refinement levels. This is in contrast to the adjoint-based approach, which strives to reduce the error in the functional below a certain threshold. Considering the fact that reducing the residual leads to reducing the global error itself, the R-parameter based adaptive algorithm would also lead to accurate estimates of the integrated quantities (which depend on the numerical solution). This is also reflected in the fact that the R-parameter based adaptation for the three-element NHLP configuration predicts the lift and drag coefficients to reasonable accuracy, as shown in Section 4.5.3. The author is of the belief that the R-parameter based adaptive algorithm holds huge promise for adaptive simulations of flow past complex geometries, both in terms of computational cost and solution accuracy. This is exemplified by successful adaptive simulations of inviscid flow past the ONERA M6 wing as well as a conventional missile configuration [3]. A more concrete comparison of the R-parameter based and adjoint-based approaches would involve systematically solving a set of problems by both approaches and has not been considered in this thesis.

[1] Nemec and Aftosmis, "Adjoint error estimation and adaptive refinement for embedded-boundary Cartesian meshes", AIAA Paper 2007-4187, 2007.
[2] Wintzer, Nemec and Aftosmis, "Adjoint-based adaptive mesh refinement for sonic boom prediction", AIAA Paper 2008-6593, 2008.
[3] Nikhil Shende, "A general purpose flow solver for Euler equations", Ph.D. Thesis, Dept. of Aerospace Engg., Indian Institute of Science, 2005.
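A minimal 1-D illustration of the idea behind the R-parameter described in this record: re-evaluate the governing operator on the converged numerical solution with a higher-order discrete approximation and take the cell-wise imbalance as the local truncation-error estimate, flagging cells for refinement. The model problem, the choice of operators, and the threshold below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Model problem: steady linear advection with a source, a * du/dx = s(x) on [0, 1],
# u(0) = 0, a = 1. The "flow solver" is a first-order upwind finite-volume scheme.
n, a = 80, 1.0
x = np.linspace(0.0, 1.0, n + 1)
xc = 0.5 * (x[:-1] + x[1:])                 # cell centres
dx = x[1] - x[0]
s = np.exp(-200.0 * (xc - 0.4) ** 2)        # sharp source -> locally large truncation error

# First-order upwind "numerical solution" (the low-order discrete operator is satisfied exactly).
u = np.zeros(n)
for i in range(1, n):
    u[i] = u[i - 1] + dx * s[i] / a

# R-parameter analogue: re-evaluate the governing operator on u with a higher-order
# approximation (a 2nd-order one-sided difference standing in for the delta_2 operator).
R = np.zeros(n)
R[2:] = a * (3.0 * u[2:] - 4.0 * u[1:-1] + u[:-2]) / (2.0 * dx) - s[2:]

# Cells whose local truncation-error estimate exceeds a threshold are flagged for
# refinement; smooth, well-resolved regions fall below it (candidates for derefinement).
flag = np.abs(R) > 0.1 * np.abs(R).max()
print(f"{flag.sum()} of {n} cells flagged, centred near x = {xc[flag].mean():.2f}")
```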
267

Flow Acoustic Analysis Of Complex Muffler Configurations

Vijaya Sree, N K 07 1900
A theoretical study has been carried out on the different methods available to analyze complex mufflers. Segmentation methods are discussed in detail. The latest two-port segmentation method is discussed and employed for a few common muffler configurations, describing its implications and limitations. A new transfer-matrix-based method has been developed in view of the lacunae of the available approaches. This Integrated Transfer Matrix (ITM) method has been developed particularly to analyze complex mufflers. An integrated transfer matrix relates the state variables across the entire cross-section of the muffler shell as one moves along the axis of the muffler, and can be partitioned appropriately in order to relate the state variables of the different tubes constituting the cross-section. The method presents a 1-D approach, using transfer matrices of simple acoustic elements which are available in the literature. Results from the present approach have been validated through comparisons with the available experimental and three-dimensional FEM-based results. The total pressure drop across perforated muffler elements has been measured experimentally, and generalized expressions have been developed for the pressure loss across cross-flow expansion elements, cross-flow contraction elements, etc. These have then been used to derive empirical expressions for flow-acoustic resistance for use in the Integrated Transfer Matrix method in order to predict the flow-acoustic performance of commercial mufflers. A flow resistance model has been developed to analytically determine the flow distribution, and thereby the pressure drop, of mufflers. Generalized expressions for resistance across the perforated elements have been derived by means of the flow experiments mentioned above. The derived expressions have been implemented in a flow resistance network developed to determine the pressure drop across any given complex muffler. The results have been validated with experimental data.
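The transfer-matrix chaining that underlies both the classical segmentation approaches and the ITM method can be illustrated with the standard four-pole calculation for a simple expansion chamber, assuming plane waves and a stationary medium; the dimensions below are arbitrary, and the example is not one of the complex perforated configurations analyzed in the thesis.

```python
import numpy as np

rho, c = 1.2, 343.0                      # air density [kg/m^3] and sound speed [m/s]

def tube(length, area, k):
    """Four-pole transfer matrix of a uniform tube (state: pressure, volume velocity)."""
    Y = rho * c / area                   # characteristic impedance for volume velocity
    return np.array([[np.cos(k * length), 1j * Y * np.sin(k * length)],
                     [1j * np.sin(k * length) / Y, np.cos(k * length)]])

def transmission_loss(elements, pipe_area):
    """Transmission loss of a chain of elements between identical inlet/outlet pipes."""
    T = np.eye(2, dtype=complex)
    for mat in elements:                 # chain the element matrices upstream to downstream
        T = T @ mat
    Y0 = rho * c / pipe_area
    return 20.0 * np.log10(0.5 * abs(T[0, 0] + T[0, 1] / Y0 + T[1, 0] * Y0 + T[1, 1]))

# Simple expansion chamber: 5 cm diameter pipes, 15 cm diameter, 30 cm long chamber.
S_pipe, S_ch, L = np.pi * 0.025 ** 2, np.pi * 0.075 ** 2, 0.30
for f in (125.0, 250.0, 500.0, 1000.0):
    k = 2.0 * np.pi * f / c
    tl = transmission_loss([tube(L, S_ch, k)], S_pipe)
    print(f"{f:6.0f} Hz : TL = {tl:5.1f} dB")
```

The ITM approach described above generalizes this idea by carrying the state variables of all tubes in the cross-section at once, so the chained matrices are larger and partitioned rather than simple 2x2 four-poles.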
268

Modeling Fluid Flow Effects on Shallow Pore Water Chemistry and Methane Hydrate Distribution in Heterogeneous Marine Sediment

Chatterjee, Sayantan 06 September 2012
The depth of the sulfate-methane transition (SMT) above gas hydrate systems is a direct proxy for interpreting upward methane flux and hydrate saturation. However, two competing reaction pathways can potentially form the SMT. Moreover, the pore water profiles across the SMT in shallow sediment show broad variability, leading to different interpretations of how carbon, including CH₄, cycles within gas-charged sediment sequences over time. The amount and distribution of marine gas hydrate impacts the chemistry of several other dissolved pore water species, such as dissolved inorganic carbon (DIC). A one-dimensional (1-D) numerical model is developed to account for downhole changes in pore water constituents, and transient and steady-state profiles are generated for three distinct hydrate settings. The model explains how an upward flux of CH₄ consumes most SO₄²⁻ at a shallow SMT, implying that anaerobic oxidation of methane (AOM) is the dominant SO₄²⁻ reduction pathway, and how a large flux of ¹³C-enriched DIC enters the SMT from depth, impacting chemical changes across the SMT. Crucially, neither the concentration nor the δ¹³C of DIC can be used to interpret the chemical reaction causing the SMT. The overall thesis objective is to develop generalized models building on this 1-D framework to understand the primary controls on gas hydrate occurrence. Existing 1-D models can provide first-order insights on hydrate occurrence, but do not capture the complexity and heterogeneity observed in natural gas hydrate systems. In this study, a two-dimensional (2-D) model is developed to simulate multiphase flow through porous media, to account for heterogeneous lithologic structures (e.g., fractures, sand layers), and to show how focused fluid flow within these structures governs local hydrate accumulation. These simulations emphasize the importance of the local, vertical fluid flux on local hydrate accumulation and distribution. Through analysis of the fluid fluxes in 2-D systems, it is shown that a local Peclet number characterizes the local hydrate and free gas saturations, just as the Peclet number characterizes hydrate saturations in 1-D, homogeneous systems. Effects of salinity on phase equilibrium and the co-existence of hydrate and gas phases can also be investigated using these models. Finally, infinite slope stability analysis is applied to the model results to identify potential subsea slope failure and the associated risks due to hydrate formation and free gas accumulation. These generalized models can be adapted to specific field examples to evaluate the amount and distribution of hydrate and free gas and to identify conditions favorable for economic gas production.
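As a back-of-the-envelope companion to the SMT-depth proxy mentioned above, the sketch below estimates the upward methane flux from a linear sulfate gradient, assuming 1:1 AOM stoichiometry and purely diffusive transport; the porosity, diffusivity, bottom-water sulfate, and SMT depth are assumed example values, not inputs of the thesis model.

```python
# Steady-state estimate of upward methane flux from the sulfate-methane transition
# (SMT) depth: with 1:1 AOM stoichiometry and no advection, the CH4 flux into the
# SMT balances the downward diffusive SO4^2- flux. All numbers below are assumed
# example values, not the values used in the thesis model.

SECONDS_PER_YEAR = 3.15e7
porosity = 0.70
D_sulfate = 5.0e-10          # sediment diffusivity of SO4^2- [m^2/s], tortuosity-corrected
SO4_bottom_water = 28.0      # [mol/m^3] (~28 mM), assumed ~0 at the SMT
z_smt = 10.0                 # SMT depth below seafloor [m]

# Fick's first law with a linear SO4^2- gradient between the seafloor and the SMT.
so4_flux = porosity * D_sulfate * SO4_bottom_water / z_smt       # [mol m^-2 s^-1]
ch4_flux = so4_flux                                              # 1:1 AOM stoichiometry

print(f"CH4 flux ~ {ch4_flux * SECONDS_PER_YEAR * 1e3:.1f} mmol m^-2 yr^-1")
# A shallower SMT implies a proportionally larger methane flux, which is why the
# SMT depth can serve as a first-order proxy for upward methane supply and,
# indirectly, for hydrate occurrence.
```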
269

Hydraulic Fracturing in Particulate Materials

Chang, Hong 29 November 2004
For more than five decades, hydraulic fracturing has been widely used to enhance oil and gas production. Hydraulic fracturing in solid materials (e.g., rock) has been studied extensively. The main goal of this thesis is a comprehensive study of the physical mechanisms of hydraulic fracturing in cohesionless sediments. For this purpose, experimental techniques are developed to quantify the initiation and propagation of hydraulic fractures in dry particulate materials. We have conducted a comprehensive experimental series by varying such controlling parameters as the properties of the particulate materials and fracturing fluids, boundary conditions, initial stress states, and injection volumes and rates. In this work, we suggest the principal fundamental mechanisms of hydraulic fracturing in particulate materials and determine relevant scaling relationships (e.g., the interplay between elastic and plastic processes). The main conclusion of this work is that hydraulic fracturing in particulate materials is not only possible, but even probable if the fluid leak-off is minimized (e.g., high flow rate, high viscosity, low permeability). Another important conclusion is that all parts of the particulate material are likely to be in compression. Also, the scale effect (within the range of laboratory scales) appears to be relatively insignificant; that is, the observed features of fractures of different sizes are similar. Based on the observed fracture geometries and injection pressures, we suggest three models of hydraulic fracturing in particulate materials. In the cavity expansion, or blade-driving, model, the fracturing fluid is viewed as a sheet pile (blade) that disjoints the host material, and cavity expansion occurs at the fracture (blade) front. The shear banding model is also consistent with a compressive stress state everywhere in the particulate material and explains the commonly observed beveled fracture front. The model of induced cohesion is based on fluid leak-off ahead of the fracture front. The induced cohesion may be caused by the tensile strain near the fracture tip (where the stress state is also compressive), which, in turn, induces cavitation of the leaked-off fluid and hence capillary forces.
270

A dynamic behavior of pulp floc and fibers in the papermaking process

Park, Chang Shin 03 1900 (has links)
No description available.
