  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Analysis of Droplet Impact on a Liquid Pool

Radhika Arvind Bhopatkar (9012413) 25 June 2020 (has links)
Secondary atomization is important in applications such as IC-engine and aircraft-engine performance, agricultural sprays, and inkjet printing. For IC and aircraft engines, a good understanding of the modes of secondary atomization and the resultant drop sizes can improve fuel injection and hence engine efficiency. Similarly, appropriate secondary atomization yields the desired agro-spray quality, ink usage, and print quality, optimizing the use of chemicals and ink and avoiding harmful effects on the environment.

One frequent cause of secondary atomization in spray applications is drop impact on a solid or liquid surface. Understanding drop impact on a liquid film is especially important because, even when drops are injected onto a solid surface, the drops injected later encounter a thin liquid film formed on the solid base by the accumulation of the earlier ones. Drop impact on liquid films with non-dimensional thickness from 0.1 to 1 has been analyzed thoroughly before (Cossali et al., 2004; Vander Waal et al., 2006; Moreira et al., 2010); however, analysis of drop impact on films with non-dimensional thickness greater than 1 is still at a rudimentary stage. This work determines probability density functions of the secondary drop sizes produced by drop impact on a liquid film as the film-thickness-to-drop-diameter ratio h/d is varied beyond 1. The experimental set-up comprises a droplet generator and a digital in-line holography (DIH) system, as in Yao et al. (2017); the DIH set-up includes a CW laser, spatial filter, beam expander, and collimator, adapted from Guildenbecher et al. (2016).
The impact Weber number We is varied by adjusting the syringe height, and three fluids (DI water, ethanol, and glycerol) are tested to examine the effect of viscosity on the resultant drop sizes. Results are plotted against viscosity, impact We, and non-dimensional film thickness, since the fragmentation of drops is directly associated with these parameters. Results indicate that the majority of the secondary droplets lie in the size range of 25 µm to 50 µm. It is also observed that the tendency toward secondary atomization from crown splashing increases with increasing We and decreases with increasing Ohnesorge number Oh.
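The impact regimes above are organized by the Weber and Ohnesorge numbers. As a minimal illustration (not part of the thesis), both dimensionless groups can be computed directly from fluid properties; the property values below are generic textbook figures for DI water at room temperature, not data from this work:

```python
import math

def weber(rho, v, d, sigma):
    """Impact Weber number We = rho * v^2 * d / sigma (inertia vs. surface tension)."""
    return rho * v ** 2 * d / sigma

def ohnesorge(mu, rho, sigma, d):
    """Ohnesorge number Oh = mu / sqrt(rho * sigma * d) (viscosity vs. inertia and capillarity)."""
    return mu / math.sqrt(rho * sigma * d)

# Illustrative values: a 2.5 mm DI-water drop impacting at 2.0 m/s.
rho, sigma, mu = 998.0, 0.0728, 1.0e-3   # kg/m^3, N/m, Pa*s
d, v = 2.5e-3, 2.0                       # m, m/s
We = weber(rho, v, d, sigma)
Oh = ohnesorge(mu, rho, sigma, d)
```

Raising the syringe height raises the impact velocity v and hence We quadratically, which is why it is the natural control parameter in such experiments.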
2

Generalised density function estimation using moments and the characteristic function

Esterhuizen, Gerhard 03 1900 (has links)
139 leaves printed single pages, preliminary pages i-xi and numbered pages 1-127. Includes bibliography and a list of figures and tables. Digitized at 600 dpi grayscale to pdf format (OCR), using a Bizhub 250 Konica Minolta scanner. / Thesis (MScEng (Electrical and Electronic Engineering))--University of Stellenbosch, 2003. / ABSTRACT: Probability density functions (PDFs) and cumulative distribution functions (CDFs) play a central role in statistical pattern recognition and verification systems. They allow observations that do not occur according to deterministic rules to be quantified and modelled; an example of such observations would be the voice patterns of a person used as input to a biometric security device. In order to model such non-deterministic observations, a density function estimator is employed to estimate a PDF or CDF from sample data. Although numerous density function estimation techniques exist, all can be classified into one of two groups, parametric and non-parametric, each with its own characteristic advantages and disadvantages. This research introduces a novel approach to density function estimation that attempts to combine some of the advantages of both the parametric and non-parametric estimators. This is done by treating density estimation abstractly, modelling the density function entirely in terms of its moments or characteristic function. New density function estimation techniques are first developed in theory, after which a number of practical density function estimators are presented. These estimators can represent a wide variety of density functions and are not designed to represent only certain families of density functions optimally. Experiments are performed in which the performance of the new estimators is compared to two established estimators, namely the Parzen estimator and the Gaussian mixture model (GMM).
The comparison is performed in terms of the accuracy, computational requirements, and ease of use of the estimators, and it is found that the new estimators do combine some of the advantages of the established estimators without the corresponding disadvantages.
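The characteristic-function route to density estimation described in this abstract can be illustrated with a toy estimator: form the empirical characteristic function of the sample, damp it with a taper, and invert it by a discrete Fourier integral. This is only a sketch of the general idea, not the estimators developed in the thesis; with a Gaussian taper the result coincides with a Gaussian kernel estimate. NumPy is assumed, and the bandwidth and grid values are arbitrary:

```python
import numpy as np

def ecf_density(samples, x_grid, h=0.3, t_max=30.0, n_t=1001):
    """Density estimate by Fourier inversion of the empirical characteristic
    function, damped by a Gaussian taper of bandwidth h."""
    t = np.linspace(-t_max, t_max, n_t)
    ecf = np.exp(1j * np.outer(t, samples)).mean(axis=1)    # phi_hat(t)
    taper = np.exp(-0.5 * (h * t) ** 2)                     # regularization
    integrand = np.exp(-1j * np.outer(x_grid, t)) * (ecf * taper)
    dt = t[1] - t[0]
    # Rectangle-rule Fourier inversion; the taper makes the integrand decay.
    return np.real(integrand.sum(axis=1)) * dt / (2.0 * np.pi)

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 500)
xs = np.linspace(-4.0, 4.0, 81)
pdf = ecf_density(data, xs)   # should resemble the standard normal density
```

The taper plays the role the thesis assigns to the estimator's regularization: without it, the high-frequency noise of the empirical characteristic function would dominate the inversion.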
3

A study of statistical models used for ultrasonic tissue characterization

Vivas, Gustavo de Castro 08 April 2006 (has links)
Orientadores: Eduardo Tavares Costa, Ricardo Grossi Dantas / Dissertação (mestrado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: Ultrasound medical diagnosis is widely used and has become a reference in many clinical examinations, especially B-mode imaging, which represents tissue and organ anatomy non-invasively, in real time, and without ionizing radiation. However, speckle, an artifact inherent to systems that use coherent sources, such as ultrasound systems, degrades image quality and can substantially reduce a physician's ability to detect lesions. Ultrasonic tissue characterization aims to extract clinically relevant information about the actual characteristics of the biological structure under investigation that cannot easily be obtained by visual inspection. This dissertation presents a comparative study of the main statistical distribution models found in the literature and adopted for ultrasonic tissue characterization, using the probability density functions that best represented the brightness pattern of a given region of an ultrasound image. The results indicated the versatility of the compound (K-Nakagami) distribution in modelling the different scattering conditions found in tissues, making it a strong candidate for ultrasonic tissue characterization. However, using the concept of equivalent scatterers, it could be shown that the statistical approach does not supply conclusive quantitative parameters about the structure under investigation, but rather a joint contribution of several factors, among them the density and the amplitude distribution of the acoustic scatterers. / Mestrado / Engenharia Biomédica / Mestre em Engenharia Elétrica
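The compound K-Nakagami model favored in this study is not reproduced here, but its simpler Nakagami building block is easy to demonstrate: for fully developed speckle the echo envelope is Rayleigh distributed, i.e. Nakagami with shape m = 1, which a fit should recover. A sketch assuming NumPy and SciPy are available; the simulated data stand in for real ultrasound envelopes:

```python
import numpy as np
from scipy import stats

# Fully developed speckle: the envelope of a zero-mean complex Gaussian
# field is Rayleigh distributed, i.e. Nakagami with shape parameter m = 1.
rng = np.random.default_rng(1)
envelope = np.hypot(rng.normal(size=5000), rng.normal(size=5000))

# Maximum-likelihood fit of the Nakagami shape m with the location pinned
# at zero, as is usual for envelope data; m near 1 flags developed speckle.
m_hat, loc_hat, scale_hat = stats.nakagami.fit(envelope, floc=0)
```

In tissue characterization the interest is precisely in departures of m (or of the compound model's parameters) from this fully developed baseline.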
4

Derivation of Probability Density Functions for the Relative Differences in the Standard and Poor's 100 Stock Index Over Various Intervals of Time

Bunger, R. C. (Robert Charles) 08 1900 (has links)
In this study a two-part mixed probability density function was derived which described the relative changes in the Standard and Poor's 100 Stock Index over various intervals of time. The density function is a mixture of two different halves of normal distributions. Optimal values for the standard deviations for the two halves and the mean are given. Also, a general form of the function is given which uses linear regression models to estimate the standard deviations and the means. The density functions allow stock market participants trading index options and futures contracts on the S & P 100 Stock Index to determine probabilities of success or failure of trades involving price movements of certain magnitudes in given lengths of time.
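A two-piece density of the kind described, glued from two half-normals that share a mode, can be written down directly; continuity at the mode fixes the common scale factor. This is a generic split-normal sketch with made-up scales, not the fitted parameters of the study:

```python
import math

def split_normal_pdf(x, mu, s1, s2):
    """Two-piece normal: left half-normal with scale s1, right half with s2,
    joined at the mode mu; the prefactor makes the density continuous there
    and integrate to one."""
    a = 2.0 / (math.sqrt(2.0 * math.pi) * (s1 + s2))
    s = s1 if x < mu else s2
    return a * math.exp(-0.5 * ((x - mu) / s) ** 2)

# Example: an asymmetric density with a heavier right tail (illustrative).
peak = split_normal_pdf(0.0, 0.0, 0.5, 1.5)
```

Allowing s1 ≠ s2 is exactly what lets such a density capture the asymmetry between downward and upward index moves over a given holding interval.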
5

Computer systems applied to forest management

Binoti, Daniel Henrique Breda 01 August 2012 (has links)
Conselho Nacional de Desenvolvimento Científico e Tecnológico / The objective of the present work was to start several projects aimed at producing free systems to assist forest managers, academics, and extension workers in solving problems in the forest sector. The projects started are RPF (a forest-production regulation system), OtimTotas (optimization of timber multiproducts), FitFD (fitting of probability density functions), and Select (selection of data for growth-and-yield modelling). The projects were developed in the Java programming language, using the NetBeans 7.1 IDE (Integrated Development Environment) and JDK 7.3 (Java Development Kit).
The four systems were implemented and are freely available on the NeuroForest site (http://neuroforest.ucoz.com/), and they proved flexible and efficient in solving the problems they address.
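FitFD itself is a Java application; as a stand-in illustration of the kind of fitting it performs, the sketch below fits a Weibull, the classic diameter-distribution model in forest management, to synthetic data using SciPy. All names and values here are invented for the demonstration:

```python
import numpy as np
from scipy import stats

# Synthetic diameter-at-breast-height sample (cm) drawn from a known
# Weibull so the fit can be checked; in practice this would be plot data.
rng = np.random.default_rng(42)
dbh = stats.weibull_min.rvs(c=2.5, scale=18.0, size=3000, random_state=rng)

# Maximum-likelihood fit with location fixed at zero (two-parameter Weibull).
c_hat, loc_hat, scale_hat = stats.weibull_min.fit(dbh, floc=0)
```

The recovered shape c and scale then feed stand-level decisions such as product-class yields, which is the role a fitted density plays inside systems like the ones described.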
6

Theoretical contributions to the study of correlated distribution functions in wireless channels

Souza, Rausley Adriano Amaral de 13 August 2018 (has links)
Orientador: Michel Daoud Yacoub / Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação / Abstract: In wireless communications, multipath fading is modeled by several distributions, including Hoyt, Rayleigh, Weibull, Nakagami-m, and Rice. In this thesis, new exact expressions for the bivariate Hoyt (Nakagami-q) process with arbitrary correlation in a nonstationary environment are derived. More specifically, the following are obtained: joint probability density function, joint cumulative distribution function, power correlation coefficient, and some statistics related to the signal-to-noise ratio at the output of the selection combiner, namely outage probability and probability density function. The expressions make use of the well-known generalized Laguerre polynomials. They are mathematically tractable and flexible enough to accommodate a myriad of correlation scenarios, useful in the analysis of a more general fading environment. Then, capitalizing on these results, exact expressions concerning bivariate Nakagami-m processes with arbitrary correlation and arbitrary fading parameters are derived: joint moment generating function, joint probability density function, joint cumulative distribution function, power correlation coefficient, and several statistics related to the signal-to-noise ratio at the output of the selection combiner, namely outage probability, probability density function, and mean SNR. More recently, the α-µ fading model has been proposed, which accounts for the non-linearity of the propagation medium as well as the multipath clustering of the radio waves. The α-µ distribution is general, flexible, and mathematically tractable. It includes important distributions such as Gamma (and its discrete versions Erlang and central chi-squared), Nakagami-m (and its discrete version chi), exponential, Weibull, one-sided Gaussian, and Rayleigh. An infinite-series formulation for the multivariate α-µ joint probability density function with arbitrary correlation matrix and non-identically distributed variates is derived. The expression is exact and general and includes all results previously published in the literature concerning the distributions comprised by the α-µ distribution. The general expression is then particularized to a very simple approximate closed-form solution. In addition, a multivariate joint cumulative distribution function is obtained, again in simple closed form. The approximate and exact results are very close to each other for small and medium values of correlation.
We maintain, however, that a relation among the correlation coefficients of the corresponding Gaussian components must be kept so that convergence is attained / Doutorado / Telecomunicações e Telemática / Doutor em Engenharia Elétrica
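The α-µ envelope density has a compact closed form: for the envelope normalized by its α-root mean value, f(ρ) = α µ^µ ρ^(αµ−1) exp(−µ ρ^α) / Γ(µ), with α = 2, µ = 1 recovering Rayleigh and α = 2, µ = m recovering Nakagami-m. A minimal univariate sketch (the thesis's multivariate series is far beyond this; parameter choices here are arbitrary):

```python
import math

def alpha_mu_pdf(rho, alpha, mu):
    """alpha-mu envelope density for the envelope normalized by its
    alpha-root mean value; alpha = 2, mu = 1 recovers Rayleigh and
    alpha = 2, mu = m recovers Nakagami-m."""
    return (alpha * mu ** mu * rho ** (alpha * mu - 1.0)
            * math.exp(-mu * rho ** alpha) / math.gamma(mu))

# Rayleigh check: alpha = 2, mu = 1 gives f(rho) = 2 * rho * exp(-rho^2).
val = alpha_mu_pdf(1.0, 2.0, 1.0)   # equals 2 * exp(-1)
```

The two parameters decouple physically: α captures the medium's non-linearity, µ the number of multipath clusters, which is why so many classical distributions fall out as special cases.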
7

Modeling of Diesel HCCI combustion and its impact on pollutant emissions applied to global engine system simulation

Dulbecco, Alessio 02 February 2010 (has links)
More and more stringent restrictions on the pollutant emissions of Internal Combustion Engines (ICEs) constitute a major challenge for the automotive industry. New combustion strategies such as Homogeneous Charge Compression Ignition (HCCI) and the implementation of complex injection strategies are promising ways of meeting the imposed emission standards, as they permit low NOx and soot emissions via lean, highly diluted combustion and hence low combustion temperatures. This requires numerical tools adapted to these new challenges. This Ph.D. thesis presents the development of a new 0D Diesel HCCI combustion model: the dual Combustion Model (dual-CM). The dual-CM is based on the PCM-FPI approach used in 3D CFD, which predicts the auto-ignition and heat-release characteristics of all Diesel combustion modes. In order to adapt the PCM-FPI approach to a 0D formalism, a good description of the in-cylinder mixture is fundamental. Consequently, adapted models for liquid-fuel evaporation, mixing-zone formation, and mixture-fraction variance, which give a detailed description of the local thermochemical properties of the mixture even in configurations with multiple injection strategies, are proposed. The results of the 0D model are first compared to 3D CFD results. The dual-CM is then validated against a large experimental database; given the good agreement with the experiments and the low CPU cost, the presented approach is shown to be promising for global engine system simulation. Finally, the limits of the hypotheses made in the dual-CM are investigated and perspectives for future developments are proposed.
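A standard ingredient of presumed-PDF approaches such as PCM-FPI is the beta PDF of mixture fraction, whose two shape parameters follow from the modeled mean and variance by moment matching. The sketch below shows only this generic step, not the dual-CM itself; the in-cylinder numbers are illustrative:

```python
import math

def beta_pdf_params(z_mean, z_var):
    """Shape parameters (a, b) of the presumed beta PDF of mixture fraction,
    moment-matched to the modeled mean and variance; requires
    0 < z_var < z_mean * (1 - z_mean)."""
    gamma = z_mean * (1.0 - z_mean) / z_var - 1.0
    return z_mean * gamma, (1.0 - z_mean) * gamma

def beta_pdf(z, a, b):
    """Beta density on (0, 1)."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * z ** (a - 1.0) * (1.0 - z) ** (b - 1.0)

# Illustrative in-cylinder state: mean mixture fraction 0.3, variance 0.01.
a, b = beta_pdf_params(0.3, 0.01)   # gives (6.0, 14.0)
```

This is why the thesis needs dedicated sub-models for the mixture-fraction mean and variance: the presumed PDF, and hence the tabulated chemistry lookup, is entirely determined by those two moments.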
8

Long-Term Ambient Noise Statistics in the Gulf of Mexico

Snyder, Mark Alan 15 December 2007 (has links)
Long-term omni-directional ambient noise was collected at several sites in the Gulf of Mexico during 2004 and 2005. The Naval Oceanographic Office deployed bottom-moored Environmental Acoustic Recording System (EARS) buoys approximately 159 nautical miles south of Panama City, Florida, in water depths of 3200 meters, with the hydrophone of each buoy 265 meters above the bottom. The data duration ranged from 10 to 14 months. The buoys were located near a major shipping lane, with an estimated 1.5 to 4.5 ships per day passing nearby. The data were sampled at 2500 Hz and have a bandwidth of 10-1000 Hz. Data are processed in eight 1/3-octave frequency bands, centered from 25 to 950 Hz, and monthly values of the following statistical quantities are computed from the resulting eight time series of noise spectral level: mean, median, standard deviation, skewness, kurtosis, and coherence time. Four hurricanes were recorded during the summer of 2004, and they had a major impact on all of the noise statistics. Noise levels at higher frequencies (400-950 Hz) peak during extremely windy months (summer hurricanes and winter storms). Standard deviation is least in the region 100-200 Hz but increases at higher frequencies, especially during periods of high wind variability (summer hurricanes). Skewness is positive from 25-400 Hz and negative from 630-950 Hz; skewness and kurtosis are greatest near 100 Hz. Coherence time is low in shipping bands and high in weather bands, and it peaks during hurricanes. The noise coherence is also analyzed: the 14-month time series in each 1/3-octave band is highly correlated with the time series of bands from 2 octaves below to 2 octaves above the band's center frequency. Spatial coherence between hydrophones is also analyzed for hydrophone separations of 2.29, 2.56, and 4.84 km over a 10-month period; the noise field is highly coherent out to the maximum distance studied, 4.84 km.
Additionally, fluctuations of each time series are analyzed to determine the time scales of greatest variability. The 14-month data show clearly that variability occurs primarily over three time scales: 7-22 hours (shipping-related), 56-282 hours (2-12 days, weather-related), and an 8-12 month period.
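The per-band statistics listed above (mean, median, standard deviation, skewness, kurtosis) can be computed from a time series of 1/3-octave levels with a few lines of NumPy. This is a generic sketch, not the Navy processing chain; the Gaussian test signal below is synthetic:

```python
import numpy as np

def noise_band_stats(levels_db):
    """Mean, median, standard deviation, skewness, and excess kurtosis of a
    time series of 1/3-octave-band noise spectral levels (dB)."""
    x = np.asarray(levels_db, dtype=float)
    mu, sd = x.mean(), x.std()
    z = (x - mu) / sd                    # standardized levels
    return {"mean": mu, "median": float(np.median(x)), "std": sd,
            "skew": float(np.mean(z ** 3)),
            "kurtosis": float(np.mean(z ** 4)) - 3.0}
```

Nonzero skewness and excess kurtosis of the kind reported (positive below 400 Hz, negative above 630 Hz) are exactly what this sort of moment summary exposes that mean and median alone would miss.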
9

Simulation of a turbulent mixing zone resulting from the Richtmyer-Meshkov instability using a probability density function model: analysis of the turbulent kinetic energy transport

Guillois, Florian 07 September 2018 (has links)
The aim of this thesis is to simulate a turbulent mixing zone resulting from the Richtmyer-Meshkov instability using a probability density function (PDF) model, with emphasis on how the PDF model handles the transport of turbulent kinetic energy in the mixing zone. To this end, we first highlight the link between the one-point statistics of the flow and its initial conditions at large scales.
This link is expressed through the principle of permanence of large eddies and allows predictions to be established for quantities of the mixing zone, such as its growth rate or its anisotropy. We then derive a Langevin PDF model able to reproduce this dependency of the statistics on the initial conditions; the model is validated by comparing it against large-eddy simulations (LES). Finally, an asymptotic analysis of the derived model improves our understanding of the turbulent transport: a diffusion regime is identified, and the expression of the diffusion coefficient associated with this regime confirms the influence of the permanence of large eddies on the turbulent transport. Throughout this thesis, the numerical results were based on Monte Carlo simulations of the Langevin model; to this end, we developed a specific Eulerian method and compared it with Lagrangian counterparts.
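A Monte Carlo treatment of a Langevin-type PDF model can be sketched with the simplest case: an Ornstein-Uhlenbeck velocity model advanced with Euler-Maruyama particles, where the noise amplitude is chosen so that the stationary velocity variance equals a prescribed value. This toy model is not the closure derived in the thesis; the timescale and variance below are arbitrary:

```python
import numpy as np

def langevin_step(u, dt, tau, sigma2, rng):
    """One Euler-Maruyama step of an Ornstein-Uhlenbeck velocity model:
    relaxation toward zero over timescale tau plus a Gaussian kick sized so
    the stationary variance of u is sigma2."""
    drift = -u * dt / tau
    kick = np.sqrt(2.0 * sigma2 * dt / tau) * rng.normal(size=u.shape)
    return u + drift + kick

# Relax an ensemble of particle velocities toward the stationary state.
rng = np.random.default_rng(7)
u = np.zeros(20000)
for _ in range(2000):                 # 20 relaxation times at dt = 0.01
    u = langevin_step(u, dt=0.01, tau=1.0, sigma2=4.0, rng=rng)
```

In a Lagrangian Monte Carlo method each particle carries such a state through physical space; the Eulerian variant developed in the thesis instead evolves the particle population on a fixed grid, trading particle tracking for field storage.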
10

Stably stratified atmospheric boundary layer: study through large-eddy simulations, mesoscale modelling and observations

Jiménez Cortés, Maria Antònia 12 December 2005 (has links)
The atmospheric boundary layer is the region directly influenced by the presence of the Earth's surface; its height ranges from a few hundred meters to a few kilometers. During the night, radiative cooling stably stratifies the air close to the surface, forming the stably stratified atmospheric boundary layer (SBL). The SBL is a regime that is still not well characterized: turbulence that is neither homogeneous nor isotropic, together with the great importance of local effects such as orography, makes it a difficult regime to study. Even so, the SBL receives special attention, especially with a view to improving its representation in numerical weather prediction and climate models. This work studies the SBL through three different tools: 1) large-eddy simulations (LES), to determine the turbulent motions, with resolutions of about 1 m; 2) mesoscale simulations, to characterize the local effects, with resolutions of about 1 km; and 3) analysis of observations under these conditions, in order to better characterize and understand the observed phenomena. First, the range of stabilities over which the LES model, which relies on the Kolmogorov theory for the dissipation of energy, works correctly is studied. The results are realistic, as comparison with measurements from two experimental campaigns (SABLES-98 and CASES-99) shows. To explore the results more thoroughly, and to compare the LES results with the measurements, probability density functions (PDFs) are used. The LES results are also comparable to those obtained with other LES models, as shown by the LES intercomparison known as GABLS. Then a more realistic case is simulated with the LES model, based on observations of a low-level jet (LLJ). The combined inspection of the LES results and the observations allows a better understanding of the mixing processes that take place across the inversion layer. Finally, the contribution of local effects is studied through a mesoscale simulation focused on the island of Mallorca. During the night, the model reproduces the local circulations in basins with a characteristic size of 25 km, and the main features obtained previously from the LES of the LLJ are also reproduced by the mesoscale model. These runs are verified against NOAA satellite images and observations from automatic surface weather stations, showing that the model produces realistic results.
