  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Etude mathématique du problème de couplage océan-atmosphère incluant les échelles turbulentes / Mathematical study of the air-sea coupling problem including turbulent scale effects

Pelletier, Charles 15 February 2018 (has links)
This thesis focuses on the numerical modelling of air-sea coupling. Although they share some common features, these two physical environments are sufficiently dissimilar for their numerical treatment to be carried out by distinct models, each including its own specificities. The interactions between these two components are thus taken into account through coupling algorithms. Implementing such algorithms requires a proper understanding of oceanic and atmospheric modelling, most importantly in the vicinity of their common interface. Therefore, a substantial part of this thesis dissects, analyzes and completes turbulent parameterization schemes, which are the numerical mechanisms, defined at a continuous level, through which the turbulent surface layer in the vicinity of the sea surface is treated.
Two theoretically and numerically meaningful sources of errors in the standard numerical modelling of the air-sea interface have been isolated. The first source of error lies in the continuous formulation of the turbulent parameterizations, which are currently used in an incomplete manner, leading to mathematically irregular solution profiles. By carefully studying their theoretical bases, this thesis extends the parameterizations, allowing them to generate regular profiles within a standardized, bi-domain framework. Numerical investigations on physically relevant test cases show that including such an extension can result in considerable bias (of the order of 20%) in air-sea flux evaluations. From a theoretical perspective, carrying out this extension leads to establishing simple criteria under which the air-sea coupling can be considered as coherent with respect to the two physical environments, and more importantly, to the turbulent parameterizations. The second source of error is algorithmic in essence: it is linked to the temporal discretization of the coupling mechanisms. Existing ad hoc methods do not guarantee perfect coherence of the air-sea fluxes from one model to the other. Global in time Schwarz algorithms, which were first developed as domain decomposition methods, are good candidates for correcting these flaws, although their application in the air-sea context is a considerable challenge, given the complexity of this problem. Investigations on the numerical impact of such algorithms are carried out on simplified test cases. Thanks to the work undertaken on turbulent parameterizations, perspectives on the development of coupling algorithms are given, regarding both their coherence as per the aforementioned conditions, and the gradually increasing complexity of the physical effects that are accounted for.
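The global-in-time Schwarz idea described above can be illustrated on a toy problem. The sketch below is an assumed minimal setting, not the thesis's ocean-atmosphere model: two 1D heat equations coupled at an interface, each integrated over the whole time window against a guessed interface trace, with the traces exchanged between iterations until they stop changing.

```python
import numpy as np

def schwarz_coupling(n=20, nt=50, dt=1e-4, tol=1e-9, max_iter=200):
    """Global-in-time Schwarz iteration for two 1D heat equations coupled
    at x = 0.5 (a toy setup, not the thesis's coupled model). Each domain
    is integrated over the whole time window against a guessed interface
    trace; the traces are exchanged until they converge, which provides
    the interface coherence that ad hoc couplers do not guarantee."""
    dx = 0.5 / n
    r = dt / dx**2                      # explicit-Euler diffusion number (< 0.5)
    u0 = np.ones(n + 1)                 # "ocean": warm, u(0) = 1 held fixed
    v0 = np.zeros(n + 1)                # "atmosphere": cold, v(1) = 0 held fixed
    gu = np.zeros(nt)                   # guessed trace seen by domain 1 at x = 0.5
    gv = np.ones(nt)                    # guessed trace seen by domain 2 at x = 0.5
    diff = np.inf
    for it in range(max_iter):
        u, v = u0.copy(), v0.copy()
        new_gu, new_gv = np.empty(nt), np.empty(nt)
        for k in range(nt):
            u[-1], v[0] = gu[k], gv[k]  # impose current interface guesses
            u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])
            v[1:-1] += r * (v[2:] - 2 * v[1:-1] + v[:-2])
            new_gu[k] = v[1]            # record traces next to the interface
            new_gv[k] = u[-2]
        diff = max(np.abs(new_gu - gu).max(), np.abs(new_gv - gv).max())
        gu, gv = new_gu, new_gv
        if diff < tol:
            break
    return it + 1, diff
```

Each Schwarz iteration re-solves both subdomains over the full time window, so the converged solution is coherent across the interface at every time step rather than only at coupling instants.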
42

Electrochemical machining : towards 3D simulation and application on SS316

Gomez Gallegos, Ares Argelia January 2016 (has links)
Electrochemical machining (ECM) is a non-conventional manufacturing process which uses electrochemical dissolution to shape any conductive metal, regardless of its mechanical properties, without leaving behind residual stresses or tool wear. Therefore, ECM can be an alternative for machining difficult-to-cut materials, complex geometries, and materials with improved characteristics, such as strength, heat resistance or corrosion resistance. Notwithstanding its great potential as a shaping tool, the ECM process is still not fully characterised and its research is on-going. Various phenomena are involved in ECM, e.g. electrodynamics, mass transfer, heat transfer, fluid dynamics and electrochemistry; these occur in parallel and can lead to a different material dissolution rate at each point of the workpiece surface, which makes accurate prediction of the final workpiece geometry difficult. This problem was addressed in the first part of the present thesis by developing a simulation model of the ECM process in a two-dimensional (2D) environment. A finite element analysis (FEA) package, COMSOL Multiphysics®, was used for this purpose due to its capacity to handle the diverse phenomena involved in ECM and couple them into a single solution. Experimental tests were carried out by applying ECM to stainless steel 316 (SS316) samples. This work was done in collaboration with pECM Systems Ltd® from Barnsley, UK. The interest in studying ECM on stainless steels (SS) lies in the fact that the application of ECM to SS typically results in a variety of surface finishes. The chromium in SS alloys usually induces the formation of a protective oxide film that prevents further corrosion of the alloy, giving the metal the special characteristic of corrosion resistance. This oxide film has low electrical conductivity; hence normal anodic dissolution often cannot proceed without oxide breakdown.
Partial breakdown of the oxide film often occurs, which causes pits on the surface or a non-uniform surface finish. Therefore, the role of the ECM machining parameters, such as interelectrode gap, voltage, electrolyte flow rate, and electrolyte inlet temperature, in achieving a uniform oxide film breakdown was evaluated in this work. Experimental results show that the resulting surface finish is highly influenced by the over-potential and current density, and by the characteristics of the electrolyte, namely its flow rate and conductivity. The complexity of experimentally controlling these parameters emphasised the need for the development of a computational model that allows the simulation of the ECM process in full. The simulation of ECM in a three-dimensional (3D) environment is crucial to understanding the behaviour of the ECM process in the real world. In a 3D model, information that was not visible before can be observed and a more detailed, realistic solution can be achieved. Hence, in this work computer-aided design (CAD) software was used to construct a 3D geometry, which was imported into COMSOL Multiphysics® to simulate the ECM process, this time in a 3D environment. This enhanced simulation model includes fluid dynamics, heat transfer, mass transfer, electrodynamics and electrochemistry, and has the novelty that an accurate computational simulation of the ECM process can be carried out prior to the experimental tests, allowing the extraction of enough information from the ECM process to predict the workpiece's final shape and surface finish. Moreover, this simulation model can be applied to diverse materials and electrolytes by modifying the input ECM parameters.
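The anodic dissolution at the heart of ECM is governed by Faraday's law of electrolysis; the sketch below estimates the linear dissolution rate from it. The numerical values in the test below (iron, the main constituent of SS316, at an assumed current density) are illustrative assumptions, not data from the thesis.

```python
F = 96485.0  # Faraday constant, C/mol

def linear_dissolution_rate(J, M, z, rho, eta=1.0):
    """Linear material removal rate (m/s) from Faraday's law:
    v = eta * J * M / (z * F * rho), where J is the local current
    density (A/m^2), M the molar mass (kg/mol), z the dissolution
    valence, rho the density (kg/m^3), and eta the current
    efficiency (1.0 = ideal; SS316 is typically below that)."""
    return eta * J * M / (z * F * rho)
```

Because the local current density varies over the workpiece surface, so does this rate, which is precisely why the dissolution front, and hence the final geometry, is hard to predict without a coupled multiphysics model.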
43

Accurate and Efficient Autonomic Closure for Turbulent Flows

January 2019 (has links)
Autonomic closure is a new general methodology for subgrid closures in large eddy simulations that circumvents the need to specify fixed closure models and instead allows a fully adaptive, self-optimizing closure. The closure is autonomic in the sense that the simulation itself determines the optimal relation at each point and time between any subgrid term and the variables in the simulation, through the solution of a local system identification problem. It is based on highly generalized representations of subgrid terms having degrees of freedom that are determined dynamically at each point and time in the simulation. This can be regarded as a very high-dimensional generalization of the dynamic approach used with some traditional prescribed closure models, or as a type of “data-driven” turbulence closure in which machine-learning methods are used with internal training data obtained at a test-filter scale at each point and time in the simulation to discover the local closure representation. In this study, a priori tests were performed to develop accurate and efficient implementations of autonomic closure based on particular generalized representations and on parameters associated with the local system identification of the turbulence state. These included the relative number of training points and the bounding box size, which impact the computational cost and the generalizability of coefficients in the representation from the test scale to the LES scale. The focus was on studying the impacts of these factors on the resulting accuracy and efficiency of autonomic closure for the subgrid stress. Particular attention was paid to the associated subgrid production field, including its structural features in which large forward and backward energy transfers are concentrated.
A reduction of more than five orders of magnitude in the computational cost of autonomic closure was achieved in this study, with essentially no loss of accuracy, primarily by using efficient frame-invariant forms for the generalized representations that greatly reduce the number of degrees of freedom. The recommended form is a 28-coefficient representation that provides subgrid stress and production fields far more accurate, in terms of structure and statistics, than those of traditional prescribed closure models. / Dissertation/Thesis / Doctoral Dissertation Aerospace Engineering 2019
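At its core, the local system identification step described in this abstract is a linear least-squares problem: fit closure coefficients to test-filter training data, then apply them at the LES scale. A minimal sketch with synthetic data standing in for turbulence fields (the variable names and dimensions are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def autonomic_closure_fit(V_train, tau_train, V_eval):
    """Solve the local system identification problem: find coefficients h
    minimizing ||V_train @ h - tau_train|| over the training points, then
    evaluate the fitted representation at the LES scale.
    V_train: (n_points, n_dof) matrix of resolved-variable products at the
    test-filter scale; tau_train: the sampled subgrid term there;
    V_eval: the same products evaluated at the LES scale."""
    h, *_ = np.linalg.lstsq(V_train, tau_train, rcond=None)
    return V_eval @ h
```

In this framing, the "bounding box size" and "relative number of training points" studied in the thesis control the rows of `V_train`, and the frame-invariant 28-coefficient representation controls its columns, which is why they dominate both cost and generalizability.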
44

Numerical calculations of optical structures using FEM

Wiklund, Henrik January 2006 (has links)
Complex surface structures in nature often have remarkable optical properties. By understanding the origin of these properties, such structures may be utilized in metamaterials, giving possibilities to create materials with new specific optical properties. To simplify the optical analysis of these naturally developed surface structures there is a need to assist data analysis and analytical calculations with numerical calculations. In this work an application tool for numerical calculations of optical properties of surface structures, such as reflectances and ellipsometric angles, has been developed based on finite element methods (FEM). The data obtained from the application tool have been verified by thorough comparison to analytical expressions, starting with reflection from the simplest of interfaces and stepwise increasing the complexity of the surfaces. The application tool was developed within the electromagnetic module of Comsol Multiphysics and uses the script language to perform post-processing calculations on the obtained electromagnetic fields. The data obtained from this application tool are given in such a way as to allow easy comparison with data from spectroscopic ellipsometry measurements.
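The analytical baseline mentioned above, reflection from the simplest of interfaces, is given by the Fresnel equations. A small sketch of that reference computation (standard textbook formulas for non-absorbing media, not code from the thesis):

```python
import numpy as np

def fresnel_reflectance(n1, n2, theta_i):
    """Fresnel power reflectances (Rs, Rp) at a planar interface between
    media of refractive indices n1 and n2 -- the kind of analytical
    expression an FEM tool can be verified against. theta_i is the angle
    of incidence in radians; the complex sqrt handles total internal
    reflection gracefully."""
    cos_i = np.cos(theta_i)
    sin_t = n1 / n2 * np.sin(theta_i)          # Snell's law
    cos_t = np.sqrt(1.0 - sin_t**2 + 0j)
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return abs(rs) ** 2, abs(rp) ** 2
```

The ratio of the amplitude coefficients also yields the ellipsometric angles, which is what makes such baselines directly comparable to spectroscopic ellipsometry output.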
45

Adaptive finite element methods for multiphysics problems

Bengzon, Fredrik January 2009 (has links)
In this thesis we develop and analyze the performance of adaptive finite element methods for multiphysics problems. In particular, we propose a methodology for deriving computable error estimates when solving unidirectionally coupled multiphysics problems using segregated finite element solvers. The error estimates are of a posteriori type and are derived using the standard framework of dual weighted residual estimates. A main feature of the methodology is its capability of automatically estimating the propagation of error between the involved solvers with respect to an overall computational goal. The a posteriori estimates are used to drive local mesh refinement, which concentrates the computational power where it is most needed. We have applied and numerically studied the methodology on several common multiphysics problems using various types of finite elements in both two and three spatial dimensions. Multiphysics problems often involve convection-diffusion equations for which standard finite elements can be unstable. For such equations we formulate a robust discontinuous Galerkin method of optimal order with piecewise constant approximation. Sharp a priori and a posteriori error estimates are proved and verified numerically. Fractional step methods are popular for simulating incompressible fluid flow. However, since they are not genuine Galerkin methods, but rather based on operator splitting, they do not fit into the standard framework for a posteriori error analysis. We formally derive an a posteriori error estimate for a prototype fractional step method by separating the error in a functional describing the computational goal into a finite element discretization residual, a time stepping residual, and an algebraic residual.
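The solve-estimate-mark-refine loop that a posteriori estimates drive can be sketched in 1D. In this toy version (an assumption made for brevity), a simple interpolation-error indicator stands in for the goal-oriented dual weighted residual estimators the thesis actually derives:

```python
import numpy as np

def adaptive_refine(f, a=0.0, b=1.0, tol=1e-3, max_cells=2000):
    """Toy adaptive loop for piecewise-linear approximation of f on [a, b].
    The midpoint deviation from the linear interpolant plays the role of a
    per-cell a posteriori error indicator; cells with the largest
    indicators are bisected, concentrating points where they are needed."""
    x = np.linspace(a, b, 5)
    while len(x) < max_cells:
        mid = 0.5 * (x[:-1] + x[1:])
        eta = np.abs(f(mid) - 0.5 * (f(x[:-1]) + f(x[1:])))  # indicators
        if eta.max() < tol:
            break                       # estimated error below tolerance
        marked = eta >= 0.7 * eta.max() # mark the worst cells
        x = np.sort(np.concatenate([x, mid[marked]]))
    return x, eta.max()
```

Running this on a function with a sharp internal layer clusters the mesh points around the layer, mimicking how dual-weighted indicators focus refinement where it matters for the computational goal.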
46

Thermo-Piezo-Electro-Mechanical Simulation of AlGaN (Aluminum Gallium Nitride) / GaN (Gallium Nitride) High Electron Mobility Transistor

Stevens, Lorin E. 01 May 2013 (has links)
Due to the current public demand for faster, more powerful, and more reliable electronic devices, research is prolific these days in the area of high electron mobility transistor (HEMT) devices. This is because of their usefulness in RF (radio frequency) and microwave power amplifier applications including microwave vacuum tubes, cellular and personal communications services, and widespread broadband access. Although electrical transistor research has been ongoing since its inception in 1947, the transistor itself continues to evolve and improve, in large part because of the many driven researchers and scientists throughout the world who are pushing the limits of what modern electronic devices can do. The purpose of the research outlined in this paper was to better understand the mechanical stresses and strains that are present in a hybrid AlGaN (Aluminum Gallium Nitride) / GaN (Gallium Nitride) HEMT while under electrically active conditions. One of the main issues currently being researched in these devices is their reliability, or their consistent ability to function properly, when subjected to high-power conditions. The researchers of this mechanical study have performed a static (i.e. frequency-independent) reliability analysis using powerful multiphysics computer modeling/simulation to get a better idea of what can cause failure in these devices. Because HEMT transistors are so small (micro/nano-sized), obtaining experimental measurements of stresses and strains during the active operation of these devices is extremely challenging. Physical mechanisms that cause stress/strain in these structures include thermo-structural phenomena due to mismatch in both coefficient of thermal expansion (CTE) and mechanical stiffness between different materials, as well as stress/strain caused by "piezoelectric" effects (i.e.
mechanical deformation caused by an electric field and, conversely, voltage induced by mechanical stress) in the AlGaN and GaN device portions (both piezoelectric materials). This piezoelectric effect can be triggered by voltage applied to the device's gate contact and by the existence of an HEMT-unique "two-dimensional electron gas" (2DEG) at the GaN-AlGaN interface. COMSOL Multiphysics computer software has been utilized to create a finite element (i.e. piece-by-piece) simulation to visualize both temperature and stress/strain distributions that can occur in the device, by coupling together (i.e. solving simultaneously) the thermal, electrical, structural, and piezoelectric effects inherent in the device. The 2DEG has been modeled not with the typically-used self-consistent quantum physics analytical equations, but rather as a combined localized heat source* (thermal) and surface charge density* (electrical) boundary condition. Critical values of stress/strain and their respective locations in the device have been identified. Failure locations have been estimated based on the critical values of stress and strain, and compared with reports in the literature. The knowledge of the overall stress/strain distribution has assisted in determining the likely device failure mechanisms and possible mitigation approaches. The contribution and interaction of individual stress mechanisms, including piezoelectric effects and thermal expansion caused by device self-heating (i.e. fast-moving electrons causing heat), have been quantified. * Values taken from results of experimental studies in the literature
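The CTE-mismatch mechanism described above can be put into rough numbers with the standard biaxial stress formula for a thin film on a thick substrate. The material values in the test below are illustrative assumptions, not measured data from this HEMT study:

```python
def cte_mismatch_stress(E, nu, d_alpha, dT):
    """Biaxial thermal mismatch stress (Pa) in a thin film constrained
    by a thick substrate: sigma = E / (1 - nu) * d_alpha * dT, where
    E and nu are the film's Young's modulus (Pa) and Poisson ratio,
    d_alpha the CTE difference (1/K) between film and substrate, and
    dT the temperature excursion (K), e.g. from device self-heating."""
    return E / (1.0 - nu) * d_alpha * dT
```

Even a CTE difference of one part per million per kelvin produces tens of megapascals over a 100 K self-heating excursion, which is why the thermo-structural contribution cannot be neglected relative to the piezoelectric one.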
47

Modeling and optimization of a thermosiphon for passive thermal management systems

Loeffler, Benjamin Haile 15 November 2012 (has links)
An optimally designed thermosiphon for power electronics cooling is developed. There exists a need for augmented grid assets to facilitate power routing and decrease line losses. Power converter augmented transformers (PCATs) are critically limited thermally. Conventional active cooling system pumps and fans will not meet the 30-year life and 99.9% reliability required for grid-scale implementation. This approach seeks to develop a single-phase closed-loop thermosiphon to remove heat from power electronics at fluxes on the order of 10–15 W/cm². The passive thermosiphon is inherently a coupled thermal-fluid system. A parametric model and multi-physics design optimization code will be constructed to simulate thermosiphon steady-state performance. The model will utilize heat transfer and fluid dynamic correlations from the literature. A particle swarm optimization technique will be implemented for its performance with discrete domain problems. Several thermosiphons will be constructed, instrumented, and tested to verify the model and reach an optimal design.
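The particle swarm step can be sketched generically. This is a minimal continuous-domain PSO with assumed hyperparameters and a generic objective, not the thesis's thermosiphon model; the thesis targets a discrete design space, which would replace the continuous positions with values snapped to the allowed design options.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso(obj, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (illustrative sketch). Each
    particle keeps a personal best; the swarm shares a global best, and
    velocities blend inertia (w) with pulls toward both bests (c1, c2)."""
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pval = np.apply_along_axis(obj, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)      # keep particles inside the bounds
        fx = np.apply_along_axis(obj, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

Because PSO only needs objective evaluations, not gradients, it tolerates the table-lookup correlations and discrete component choices typical of a thermosiphon design code.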
48

Diffusion in inhomogeneous media

Bandola, Nicolas 30 October 2009 (has links)
This project considers the diffusion of water molecules through a cellular medium in which the cells are modeled by square compartments placed symmetrically in a square domain. We assume the diffusion process is governed by the 2D diffusion equation, and the solution is obtained by implementing the Crank-Nicolson scheme. These results are verified and shown to agree well with the finite element method using the Comsol Multiphysics package. The model is used to compute values of the apparent diffusion coefficient (ADC), a measure derived from diffusion-weighted MRI data that can be used to identify, e.g., regions of ischemia in the brain. With our model, it is possible to examine how the value of the apparent diffusion coefficient is affected when the extracellular space is varied. We observe that the average distance that the water molecules travel in a definite time is highly dependent on the geometrical properties of the cellular medium. / UOIT
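The Crank-Nicolson scheme and the ADC idea can be illustrated with a 1D analogue (the thesis works in 2D with cellular compartments; this simplified sketch uses a homogeneous medium, dense matrices, and assumed parameter values). A narrow pulse is evolved and the ADC is recovered from the growth of its second moment, ADC = var(t) / (2t):

```python
import numpy as np

def crank_nicolson_1d(D=1e-3, L=1.0, n=200, dt=1e-2, nt=100):
    """Crank-Nicolson time stepping for the 1D diffusion equation
    u_t = D u_xx: (I - (mu/2) d2) u^{k+1} = (I + (mu/2) d2) u^k, with
    mu = D dt / dx^2. Returns an apparent diffusion coefficient estimated
    from the spread of an initially sharp pulse."""
    dx = L / n
    x = np.linspace(0.0, L, n + 1)
    u = np.zeros(n + 1)
    u[n // 2] = 1.0 / dx                # approximate delta at the centre
    r = D * dt / (2 * dx**2)            # mu / 2
    # Dense matrices keep the sketch short; a real solver would use a
    # tridiagonal routine.
    A = (1 + 2 * r) * np.eye(n + 1)
    B = (1 - 2 * r) * np.eye(n + 1)
    for i in range(n):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    for _ in range(nt):
        u = np.linalg.solve(A, B @ u)
    p = u / (u.sum() * dx)              # normalise to a probability density
    mean = np.sum(x * p) * dx
    var = np.sum((x - mean) ** 2 * p) * dx
    return var / (2 * nt * dt)          # ADC estimate
```

In the homogeneous case the estimate recovers the true D; in the compartmentalised medium of the thesis, membranes restrict the spread, so the same estimator returns a smaller "apparent" coefficient, which is exactly the geometry sensitivity the project exploits.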
49

Simulation of Temperature Distribution in IR Camera Chip / Simulering av temperaturdistribution i IR-kamerachip

Salomonsson, Stefan January 2011 (has links)
The thesis investigates the temperature distribution in the chip of an infrared camera caused by its read-out integrated circuit. Heat from the read-out circuits can cause distortions in the thermal image. Knowing the temperature gradient caused by internal heating, it will later be possible to correct the image by implementing algorithms that subtract the temperature contribution of the read-out integrated circuit. The simulated temperature distribution shows a temperature gradient along the edges of the matrix of active bolometers. There are also three hot spots at both the left and right edges of the matrix, caused by heat from the chip temperature sensors and I/O pads. Heat from the chip temperature sensors also causes an uneven temperature profile in the column of reference pixels, possibly causing imperfections in the image at the level of the sensors. Simulations of bolometer row biasing are carried out to obtain information about how biasing affects temperatures in neighbouring rows. The simulations show some row-to-row interference, but the thermal model suffers from having the biasing heat inserted directly onto the top surface of the chip, as opposed to having the heat originate from the bolometers. To get better simulation results describing the row biasing, a thermal model of the bolometers needs to be included. The results indicate a very small temperature increase in the active pixel array, with temperatures not exceeding ten millikelvin. Through comparisons with another similar simulation of the chip, there is reason to believe the simulated temperature increase is somewhat low. The other simulation cannot be used to draw any conclusions about the distribution of temperature.
