791
Modeling Structured Data with Invertible Generative Models / Lu, You, 01 February 2022
Data is complex and comes in a variety of structures and formats. Modeling datasets is a core problem in modern artificial intelligence. Generative models are machine learning models that describe datasets with probability distributions. Deep generative models combine deep learning with probability theory, so they can model complicated datasets flexibly. They have become among the most popular models in machine learning and have been applied to many problems.
Normalizing flows are a novel class of deep generative models that allow efficient exact likelihood calculation, exact latent variable inference, and sampling. They are constructed from functions whose inverses and Jacobian determinants can be computed efficiently. In this dissertation, we develop normalizing-flow-based generative models for complex datasets. In general, data can be categorized into unlabeled data, labeled data, and weakly labeled data. We develop models for each of these three types of data.
First, we develop Woodbury transformations, flow layers for general unsupervised normalizing flows that improve the flexibility and scalability of current flow-based models. Woodbury transformations achieve efficient invertibility via the Woodbury matrix identity and efficient determinant calculation via Sylvester's determinant identity. In contrast with other operations used in state-of-the-art normalizing flows, Woodbury transformations enable (1) high-dimensional interactions, (2) efficient sampling, and (3) efficient likelihood evaluation. Other similar operations, such as 1x1 convolutions, emerging convolutions, or periodic convolutions, allow at most two of these three advantages. In our experiments on multiple image datasets, we find that Woodbury transformations allow learning of higher-likelihood models than other flow architectures while still enjoying their efficiency advantages.
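To make the role of these identities concrete, the following minimal NumPy sketch (a simplification for illustration, not the authors' released code) applies a transformation of the form y = (I_d + UV)x with low-rank factors U (d x k) and V (k x d), inverts it with the Woodbury matrix identity, and evaluates the log-determinant with Sylvester's identity, so both directions and the likelihood term cost O(d k^2) instead of O(d^3).

```python
import numpy as np

def woodbury_forward(x, U, V):
    """Apply y = (I_d + U V) x without ever forming the d x d matrix."""
    return x + U @ (V @ x)

def woodbury_inverse(y, U, V):
    """Invert y = (I_d + U V) x via the Woodbury identity:
    (I_d + U V)^-1 = I_d - U (I_k + V U)^-1 V, which only needs a k x k solve."""
    k = U.shape[1]
    small = np.eye(k) + V @ U
    return y - U @ np.linalg.solve(small, V @ y)

def woodbury_logdet(U, V):
    """Sylvester's determinant identity: det(I_d + U V) = det(I_k + V U)."""
    k = U.shape[1]
    _, logdet = np.linalg.slogdet(np.eye(k) + V @ U)
    return logdet

# Toy check with d = 1024 features and a rank-16 update.
rng = np.random.default_rng(0)
d, k = 1024, 16
U = 0.1 * rng.standard_normal((d, k))
V = 0.1 * rng.standard_normal((k, d))
x = rng.standard_normal(d)

y = woodbury_forward(x, U, V)
x_rec = woodbury_inverse(y, U, V)
assert np.allclose(x, x_rec)                 # exact invertibility
print("log|det| =", woodbury_logdet(U, V))   # enters the flow's log-likelihood
```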
Second, we propose conditional Glow (c-Glow), a conditional generative flow for structured output learning, an advanced variant of supervised learning with structured labels. Traditional structured prediction models try to learn a conditional likelihood, i.e., p(y|x), to capture the relationship between the structured output y and the input features x. For many models, computing this likelihood is intractable; such models are therefore hard to train, requiring surrogate objectives or variational inference to approximate the likelihood. C-Glow benefits from the ability of flow-based models to compute p(y|x) exactly and efficiently, so learning with c-Glow requires neither a surrogate objective nor inference during training. Once trained, the model can directly and efficiently generate conditional samples, and we develop a sample-based prediction method that uses this advantage to perform efficient and effective inference. In our experiments, we test c-Glow on five different tasks. C-Glow outperforms state-of-the-art baselines on some tasks and predicts comparable outputs on the others. The results show that c-Glow is applicable to many different structured prediction problems.
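The sample-based prediction idea can be summarized with a short, hypothetical helper: given any conditional flow exposing sample and log_prob operations (method names assumed here for illustration, not the actual c-Glow code), draw several candidate outputs from p(y|x) and keep the most likely one.

```python
import numpy as np

def sample_based_predict(flow, x, num_samples=64):
    """Hypothetical inference loop for a conditional flow p(y|x).

    `flow.sample(x, n)` is assumed to return n candidate outputs and
    `flow.log_prob(y, x)` their exact conditional log-likelihoods;
    both are cheap operations for a normalizing flow.
    """
    candidates = flow.sample(x, num_samples)               # y_i ~ p(y | x)
    scores = np.array([flow.log_prob(y, x) for y in candidates])
    return candidates[int(np.argmax(scores))]              # keep the most likely sample
```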
Third, we develop label learning flows (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows. The main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by weak signals. We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once a model is trained, we can make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems. Experimental results show that our method outperforms many state-of-the-art alternatives.
Our research shows the advantages and versatility of normalizing flows. / Doctor of Philosophy / Data is now more affordable and accessible. At the same time, datasets are more and more complicated. Modeling data is a key problem in modern artificial intelligence and data analysis. Deep generative models combine deep learning and probability theory, and are now a major way to model complex datasets. In this dissertation, we focus on a novel class of deep generative models: normalizing flows. They are becoming popular because of their ability to efficiently compute exact likelihoods, infer exact latent variables, and draw samples. We develop flow-based generative models for different types of data, i.e., unlabeled data, labeled data, and weakly labeled data. First, we develop Woodbury transformations for unsupervised normalizing flows, which improve the flexibility and expressiveness of flow-based models. Second, we develop conditional generative flows for an advanced supervised learning problem, structured output learning, which removes the need for approximations and surrogate objectives in traditional (deep) structured prediction models. Third, we develop label learning flows, a general framework for weakly supervised learning problems. Our research improves the performance of normalizing flows and extends their applications to many supervised and weakly supervised problems.
792
Implementation of the phase field method with the Immersed Boundary Method for application to wave energy converters / Jain, Sahaj Sunil, 14 August 2023
Consider a bottom-hinged Oscillating Wave Surge Converter (OWSC): this device oscillates due to the hydrodynamic forces applied to it by the action of ocean waves. The focus of this thesis is to build upon the in-house multi-block generalized-coordinate finite volume solver GenIDLEST, which uses a collocated grid arrangement within the framework of the fractional-step method, to make it capable of simulating such systems. The first step in this process is to deploy a convection scheme that differentiates between air and water, a task complicated by the 1:1000 density and 1:100 viscosity ratios between the two fluids. For this purpose, a phase field method is chosen for its ease of implementation and its proven boundedness and conservation properties. Extensive validation and verification is carried out using standard test cases such as a droplet in shear flow, the Rayleigh-Taylor instability, and the dam break problem. This development is then coupled with the existing Immersed Boundary module, which is used to simulate the presence of moving bodies, and is again verified against test cases such as the dam break problem with a vertical obstacle and the heave decay of a partially submerged buoyant cylinder. Finally, a relaxation zone technique is used to generate waves and a numerical beach technique is used to absorb them. These are then used to simulate the Oscillating Wave Surge Converter. / Master of Science / An Oscillating Wave Surge Converter can be best described as a rectangular flap, hinged at the bottom, rotating under the influence of ocean waves from which energy is harvested. The singular aim of this thesis is to model this device using Computational Fluid Dynamics (CFD). More specifically, the aim is to model this dynamic device with the full Navier-Stokes equations, which include inertial forces arising from the motion of the fluid, viscous forces that dissipate energy, and body forces such as gravity. This involves three key steps:
1. Model the air-water interface using a convection scheme. A phase field method is used to differentiate between the two fluids. This task is made more challenging by the very large density and viscosity differences between air and water (a minimal sketch of this step is given below).
2. Model dynamic moving geometries in a time-dependent framework. For this, we rely on the Immersed Boundary Method.
3. Develop a numerical apparatus to generate and absorb ocean waves. For this, we rely on the Relaxation Zone and Numerical Beach Method.
These developments are validated on different canonical problems and finally applied to a two-dimensional oscillating wave surge converter.
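To make step 1 of the list above concrete, the toy one-dimensional sketch below (an illustration, not the GenIDLEST implementation) advects a smooth phase indicator 0 <= phi <= 1 with a first-order upwind scheme and blends density and viscosity between the two fluids at the stated 1:1000 and 1:100 ratios; the conservative interface-sharpening terms of the actual phase field method are omitted for brevity.

```python
import numpy as np

# Illustrative fluid properties (air vs. water scale, roughly 1:1000 density, 1:100 viscosity).
RHO_AIR, RHO_WATER = 1.0, 1000.0          # kg/m^3
MU_AIR, MU_WATER = 1.8e-5, 1.0e-3         # Pa.s

def blend(phi):
    """Mixture properties from the phase indicator (phi = 1 in water, phi = 0 in air)."""
    rho = phi * RHO_WATER + (1.0 - phi) * RHO_AIR
    mu = phi * MU_WATER + (1.0 - phi) * MU_AIR
    return rho, mu

def advect_phase(phi, u, dx, dt, steps):
    """First-order upwind advection of phi by a constant velocity u > 0 (periodic domain)."""
    for _ in range(steps):
        phi = phi - u * dt / dx * (phi - np.roll(phi, 1))
    return np.clip(phi, 0.0, 1.0)          # keep the indicator bounded

# Initial condition: a smooth (tanh) air-water interface on a periodic 1D domain.
n, dx, u = 200, 0.01, 0.5
x = dx * np.arange(n)
phi = 0.5 * (1.0 + np.tanh((0.5 - x) / 0.05))
dt = 0.5 * dx / u                           # CFL-limited time step
phi = advect_phase(phi, u, dx, dt, steps=100)
rho, mu = blend(phi)                        # mixture fields seen by the momentum equation
```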
793
Analytical Solution of Suspended Sediment Concentration Profile: Relevance of Dispersive Flow Term in Vegetated Channels / Huai, W., Yang, L., Guo, Yakun, 22 June 2020
Simulation of the suspended sediment concentration (SSC) has great significance for predicting the sediment transport rate, vegetation growth, and the river ecosystem in vegetated open channel flows. The present study focuses on investigating the vertical SSC profile in vegetated open channel flows. To this end, a model of the dispersive flux is proposed in which the dispersive coefficient is expressed as a piecewise linear profile above and below half the vegetation height. The double-averaging method, i.e., averaging in both time and space, is applied to improve the prediction accuracy of the vertical SSC profile in vegetated open channel flows. The analytical solution of SSC in both submerged and emergent vegetated open channel flows is obtained by solving the vertical double-averaged sediment advection-diffusion equation. The morphological coefficient, a key factor in the dispersive coefficient, is obtained by fitting existing experimental data. The analytically predicted SSC agrees well with experimental measurements, indicating that the proposed model can accurately predict the SSC in vegetated open channel flows. Results show that the dispersive term can be ignored in the region without vegetation, while it has a significant effect on the vertical SSC profile within the vegetated region. The present study demonstrates that the dispersive coefficient is closely related to the vegetation density, the vegetation structure, and the stem Reynolds number, but has little relation to the flow depth. With a few exceptions, the absolute value of the dispersive coefficient decreases with increasing vegetation density and increases with increasing stem Reynolds number in submerged vegetated open channel flows. / Natural Science Foundation of China (Nos. 11872285 and 11672213), the UK Royal Society International Exchanges Program (IES\R2\181122), and the Open Funding of the State Key Laboratory of Water Resources and Hydropower Engineering Science (WRHES), Wuhan University (Project No. 2018HLG01).
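A minimal numerical sketch of the kind of vertical balance underlying such a profile is given below. It assumes the steady suspended-sediment balance (eps_t(z) + eps_d(z)) dC/dz + w_s C = 0, a parabolic turbulent diffusivity, and a piecewise-linear dispersive coefficient that vanishes above the canopy; all coefficient values are placeholders rather than the fitted values of the paper.

```python
import numpy as np

# Illustrative parameters (placeholders, not the fitted values of the paper).
H, HV = 0.20, 0.10        # flow depth and vegetation height (m)
WS = 0.005                # sediment settling velocity (m/s)
USTAR = 0.03              # shear velocity (m/s)
KAPPA = 0.41              # von Karman constant
ED_MAX = 2.0e-4           # assumed peak dispersive coefficient at z = HV/2 (m^2/s)

def eps_turb(z):
    """Parabolic turbulent (Rouse-type) diffusivity profile."""
    return KAPPA * USTAR * z * (1.0 - z / H)

def eps_disp(z):
    """Piecewise-linear dispersive coefficient: rises to ED_MAX at HV/2,
    falls back to zero at the canopy top, and is zero above the vegetation."""
    if z >= HV:
        return 0.0
    if z <= 0.5 * HV:
        return ED_MAX * z / (0.5 * HV)
    return ED_MAX * (HV - z) / (0.5 * HV)

# Integrate dC/dz = -WS * C / (eps_t + eps_d) upward from a reference level.
z = np.linspace(0.01 * H, 0.95 * H, 400)
C = np.empty_like(z)
C[0] = 1.0                                   # reference concentration C_a
for i in range(1, z.size):
    dz = z[i] - z[i - 1]
    eps = eps_turb(z[i - 1]) + eps_disp(z[i - 1])
    C[i] = C[i - 1] * (1.0 - WS * dz / eps)  # explicit Euler step

print(C[-1] / C[0])                          # relative concentration near the surface
```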
794
Modeling of Flash Boiling Flows in Injectors with Gasoline-Ethanol Fuel Blends / Neroorkar, Kshitij Deepak, 01 February 2011
Flash boiling may be defined as the finite-rate mechanism that governs phase change in a high-temperature liquid that is depressurized below its vapor pressure. It is a transient and complicated phenomenon with applications in many industries. The main focus of the current work is on modeling flash boiling in injectors used in engines operating on the principle of gasoline direct injection (GDI). These engines are prone to flash boiling due to the transfer of thermal energy to the fuel, combined with the sub-atmospheric pressures present in the cylinder during injection. Unlike cavitation, there is little tendency for the fuel vapor to condense as it moves downstream, because the fuel vapor pressure exceeds the downstream cylinder pressure, especially in the homogeneous charge mode. In the current work, a pseudo-fluid approach is employed to model the flow, and the non-equilibrium nature of flash boiling is captured through an empirical time scale that represents the deviation from thermal equilibrium conditions. The fuel composition plays an important role in flash boiling, and hence any modeling of this phenomenon must account for the type of fuel being used. In the current work, standard NIST codes are used to model single-component fluids such as n-octane, n-hexane, and water, as well as a multi-component surrogate for JP8. Gasoline-ethanol blends are also considered; these mixtures are azeotropic in nature, generating vapor pressures that are higher than those of either pure component. To obtain the properties of these fuels, two mixing models are proposed that capture this non-ideal behavior. Flash boiling simulations in a number of two- and three-dimensional nozzles are presented, and the flow behavior and phase change inside the nozzles are analyzed in detail. Comparison with experimental data is performed in cases where data are available. The results of these studies indicate that flash boiling significantly affects the characteristics of the nozzle spray, such as the spray cone angle and liquid penetration into the cylinder. A parametric study is also presented that helps explain how two different time scales, namely the residence time in the nozzle and the vaporization time scale, interact and affect the phenomenon of flash boiling.
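The empirical time-scale idea can be illustrated with a homogeneous-relaxation-type sketch in which the vapor mass fraction x relaxes toward its local equilibrium value x_eq over a time scale theta; the numbers below are illustrative assumptions, not the calibrated values used in this work.

```python
import numpy as np

def relax_quality(x0, x_eq, theta, dt, steps):
    """Integrate dx/dt = (x_eq - x) / theta with explicit Euler.

    x0    : initial vapor mass fraction (quality)
    x_eq  : local thermodynamic-equilibrium quality (from a flash calculation)
    theta : empirical relaxation time scale quantifying non-equilibrium
    """
    x = x0
    history = [x]
    for _ in range(steps):
        x += dt * (x_eq - x) / theta
        history.append(x)
    return np.array(history)

# Example: superheated fuel suddenly depressurized below its vapor pressure.
x_path = relax_quality(x0=0.0, x_eq=0.15, theta=5e-4, dt=1e-5, steps=200)
print(x_path[-1])   # quality approaches, but still lags, the equilibrium value
```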
795
Numerical Modelling of Subcooled Nucleate Boiling for Thermal Management Solutions Using OpenFOAM / Rabhi, Achref, January 2021
Two-phase cooling solutions employing subcooled nucleate boiling flows, e.g., thermosyphons, have gained special interest during the last few decades. This interest stems from their enhanced ability to remove extremely high heat fluxes while keeping a uniform surface temperature. Consequently, modelling and predicting boiling flows is very important in order to optimise two-phase cooling operation and to increase the involved heat transfer coefficients. In this work, a subcooled boiling model is implemented in the open-source code OpenFOAM to improve and extend its existing solver reactingTwoPhaseEulerFoam, dedicated to modelling boiling flows. These flows are modelled using Computational Fluid Dynamics (CFD) following the Eulerian two-fluid approach. The simulations are used to evaluate and analyse the Active Nucleation Site Density (ANSD) models existing in the literature. Based on this evaluation, the accuracy of the CFD simulations using existing boiling sub-models is determined, and features leading to improved accuracy are highlighted. In addition, the CFD simulations are used to perform a sensitivity analysis of the interfacial forces acting on bubbles during boiling flows. Finally, CFD simulation data are employed to study the Onset of Nucleate Boiling (ONB) and to propose a new ONB sub-model with improved prediction accuracy and an extended validity range. It is shown in this work that predictions associated with existing boiling sub-models are not accurate, and such sub-models need to take into account several convective boiling quantities to improve their accuracy. These quantities are the thermophysical properties of the involved materials, the liquid and vapour thermodynamic properties, and the micro-structure properties of the heated surface. Regarding the interfacial momentum transfer, it is shown that all the interfacial forces have considerable effects on boiling except the lift force, which can be neglected without influencing the simulations' output. The new proposed ONB model takes convective boiling features into account, and it is able to predict the ONB with very good accuracy, with a standard deviation of 2.7% or 0.1 K. This new ONB model is valid for a wide range of inlet Reynolds numbers, covering both laminar and turbulent regimes, and a wide range of inlet subcoolings and applied heat fluxes.
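For context, a classical Davis-Anderson-type criterion relates the ONB wall superheat to the applied heat flux as Delta T_ONB = sqrt(8 sigma T_sat q'' / (k_l rho_v h_fg)). The sketch below evaluates it for saturated water at atmospheric pressure purely as a reference point; it is not the new ONB model proposed in this thesis, which additionally accounts for convective, thermophysical, and surface micro-structure effects.

```python
import math

def onb_superheat(q_wall, sigma, t_sat, k_liq, rho_vap, h_fg):
    """Classical Davis-Anderson-type ONB wall superheat (K) for a heat flux q_wall (W/m^2)."""
    return math.sqrt(8.0 * sigma * t_sat * q_wall / (k_liq * rho_vap * h_fg))

# Approximate property values for saturated water at 1 atm.
sigma = 0.059      # surface tension, N/m
t_sat = 373.15     # saturation temperature, K
k_liq = 0.68       # liquid thermal conductivity, W/(m K)
rho_vap = 0.60     # vapor density, kg/m^3
h_fg = 2.257e6     # latent heat of vaporization, J/kg

for q in (1e4, 1e5, 5e5):
    dT = onb_superheat(q, sigma, t_sat, k_liq, rho_vap, h_fg)
    print(f"q'' = {q:.0e} W/m^2 -> dT_ONB = {dT:.2f} K")
```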
796
Beyond IT and Productivity: Effects of Digitized Information Flows in Grocery Distribution / Horzella, Åsa, January 2005
During the last decades, organizations have made large investments in Information Technology (IT). The effects of these investments have been studied in business and academic communities over the years. A large amount of research has been conducted on the relation between investments in IT and productivity growth. Productivity is a central measure of national and organizational success and is often considered in economic decision-making. Researchers have, however, found it difficult to present a clear-cut answer on the effect of IT investments on productivity growth; this inability is known as the productivity paradox. Within the Impact of IT on Productivity (ITOP) research program, the relevance of the productivity measure as an indicator of the value of IT is questioned. IT has over the years replaced physical interfaces with digital ones and in this way enabled new ways to process information. A retrospective research approach is therefore applied, where the effects of digitized information flows are studied within specific organizational settings. In this thesis, the effects of digitized information flows within Swedish grocery distribution are studied. A comprehensive presentation of the development is first given, and three focal areas are thereafter presented. These describe supply chain information flows, including order information, information on new items, and analysis of point-of-sales information. The presentation of the focal areas identifies a number of effects of the digitization of information flows. The effects are analyzed according to a predefined analytical framework: they are divided into five categories and thereafter evaluated for their potential to generate value. The study shows that the digitization of information flows has generated numerous, multifaceted effects. Automational, informational, transformational, consumer surplus, and other effects are observed. They are difficult to evaluate using a single indicator, but specific indicators that are closely related to the effects can be defined. The study also concludes that the productivity measure does not capture all positive effects generated by digitized information flows. / ISRN/Report code: LiU-Tek-Lic-2005:39
797
Verification and validation of the implementation of an Algebraic Reynolds-Stress Model for stratified boundary layers / Formichetti, Martina, January 2022
This thesis studies the implementation of an Explicit Algebraic Reynolds-Stress Model (EARSM) for the Atmospheric Boundary Layer (ABL) in an open-source Computational Fluid Dynamics (CFD) software, OpenFOAM, following guidance provided by the wind company ENERCON, which aims to use this novel model to improve sites' wind-field predictions. After carefully implementing the model in OpenFOAM, the EARSM implementation is verified and validated by testing it on a stratified Couette flow case. The verification was done by feeding mean flow properties, taken from OpenFOAM, into a Python tool containing the full EARSM system of equations and constants, and comparing the resulting flux profiles with the ones extracted from the OpenFOAM simulations. Subsequently, the validation was done by comparing the profiles of the two universal functions used by Monin-Obukhov Similarity Theory (MOST) for mean velocity and temperature to the results obtained by Želi et al. in their study of the EARSM applied to a single-column ABL, "Modelling of stably-stratified, convective and transitional atmospheric boundary layers using the explicit algebraic Reynolds-stress model" (2021). The verification showed minor differences between the flux profiles from the Python tool and OpenFOAM, and thus the model's implementation was deemed verified, while the validation step showed no difference in the unstable and neutral stratification cases, but a significant discrepancy for stably stratified flow. Nonetheless, the reason behind the inconsistency is believed to be related to the choice of boundary conditions; thus, the model's implementation itself is considered validated. Finally, the comparison between the EARSM and the k-ε model showed that the former is able to capture the physics of the flow properties where the latter fails to. In particular, the diagonal momentum fluxes resulting from the EARSM reflect the observed behaviour of being different from each other, becoming isotropic with altitude in the case of unstable stratification, and having magnitude u′u′ > v′v′ > w′w′ for stably stratified flows. On the other hand, the eddy viscosity assumption used by the k-ε model computes the diagonal momentum fluxes as being equal to each other. Moreover, the EARSM captures more than one non-zero heat flux component in the Couette flow case, which has been observed to be the case in the literature, while the eddy diffusivity assumption used by the k-ε model only accounts for one non-zero heat flux component.
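The difference described in the closing sentences can be made concrete with a schematic comparison (not the actual EARSM formulation implemented in the thesis): under the Boussinesq eddy-viscosity assumption the normal Reynolds stresses in a simple shear flow all collapse to 2k/3, whereas an algebraic-stress-type closure adds a trace-free anisotropy tensor a_ij that lets them differ; the anisotropy values below are illustrative only.

```python
import numpy as np

k = 0.5                  # turbulent kinetic energy, m^2/s^2
nu_t = 0.02              # eddy viscosity, m^2/s
dUdz = 1.0               # mean shear of a Couette-type flow, 1/s

# Mean strain-rate tensor for simple shear U(z): only S_xz = S_zx = dUdz/2 are nonzero.
S = np.zeros((3, 3))
S[0, 2] = S[2, 0] = 0.5 * dUdz

# Boussinesq (k-epsilon) closure: u_i'u_j' = (2/3) k delta_ij - 2 nu_t S_ij.
uu_boussinesq = (2.0 / 3.0) * k * np.eye(3) - 2.0 * nu_t * S
print(np.diag(uu_boussinesq))      # all three normal stresses equal 2k/3

# Algebraic-stress-type closure: u_i'u_j' = k * (2/3 delta_ij + a_ij),
# with an illustrative trace-free anisotropy tensor (a_xx > a_yy > a_zz).
a = np.diag([0.3, 0.0, -0.3])
a[0, 2] = a[2, 0] = -0.25          # off-diagonal part carries the shear stress
uu_earsm_like = k * ((2.0 / 3.0) * np.eye(3) + a)
print(np.diag(uu_earsm_like))      # u'u' > v'v' > w'w', as reported for stable stratification
```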
798
Protolith, Mineralogy, and Gold Distribution of Carbonate Rich Rocks of the Larder Lake Break at Misema River, Ontario / Haskett, William, 05 1900
The Larder Lake Break (LLB) is one of the structures controlling the location of gold deposits in the Kirkland Lake camp. This intensely carbonated and often strongly foliated zone is part of the Larder Lake Group as defined by Downs (1980). Protoliths at the LLB are problematical.

Misema River is a well exposed occurrence of the LLB, showing chlorite schist, pervasive fuchsite quartz carbonate, and syenite dyke material. It is divided into three sections. Section I samples indicate an ultramafic protolith, as suggested by Jensen cation plots, and the section is interpreted as komatiitic flow(s). Section II is well foliated and shows both ultramafic and calc-alkalic components, which respectively decrease and increase in intensity away from the section I-section II contact. Section II is interpreted as a polymodal sediment. Section III is similar chemically and texturally to section I, and is therefore a komatiitic flow(s).

The intrusion of syenite dykes into section I occurred after initial carbonatization and deformation of the flows and associated sediments.

Radiochemical neutron activation analysis shows all but one of the syenite dyke samples to contain greater than 10 ppb gold, whereas the other rock types average approximately 2 ppb. A peak content of 64 ppb occurred at a dyke contact. The high gold contents clearly originate from the syenite dykes, which also provide a heat source for a second period of carbonatization. / Thesis / Bachelor of Science (BSc)
799
Details of a Study of Interfacial Momentum Transfer in Two-Phase Two-Component Critical Flows / Surgenor, Brian W., 01 1900
Preparations for an investigation of interfacial momentum transfer in two-phase two-component critical flows have been completed.

The experiments involve the measurement of flow rates, axial pressure profiles, axial and transverse void fraction profiles, and axial wall shear stress profiles of steady-state gas-liquid critical flow in a vertical diverging nozzle. A photographic study is to be initiated to record the flow structure. The results of these experiments will be used to develop constitutive relations for interfacial momentum transfer.

An experimental loop capable of circulating a gas-liquid mixture in a vertical test section was modified to suit the requirements of this investigation. The void fraction profiles are measured with a traversing gamma densitometer using a 20 mCi Co-57 source. The wall shear stress profiles are obtained using the electrochemical method to measure the mass transfer coefficients of electrodes mounted flush with the test section wall. The liquid phase is an electrolyte and the gaseous phase can be air, nitrogen, or freon; the latter is used to better approximate the densities of a steam-water flow.

This report describes the required theory, measurement techniques, design and operation of the loop, and the experimental procedures. / Thesis / Master of Engineering (MEngr)
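For reference, a single-beam gamma densitometer infers the chordal void fraction from beam attenuation; a common data reduction, sketched below with made-up count rates, interpolates logarithmically between calibration counts taken with the test section full of liquid and full of gas.

```python
import math

def chordal_void_fraction(n_tp, n_liq, n_gas):
    """Chordal void fraction from gamma count rates, using the standard
    log-attenuation relation alpha = ln(N_tp/N_liq) / ln(N_gas/N_liq).

    n_tp  : count rate through the two-phase mixture
    n_liq : calibration count rate with the section full of liquid
    n_gas : calibration count rate with the section full of gas
    """
    return math.log(n_tp / n_liq) / math.log(n_gas / n_liq)

# Illustrative count rates (counts per second); not measured data.
print(chordal_void_fraction(n_tp=7200.0, n_liq=5000.0, n_gas=9000.0))  # about 0.62
```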
800
Mass Transfer and Shear Stress at the Wall for Cocurrent Gas-Liquid Flows in a Vertical Tube / Surgenor, Brian W., 01 1900
An investigation of the technique of obtaining the wall shear stress in a two-phase flow, by measuring the mass transfer coefficient at the wall with the electrochemical method, has been completed.

The experiments involved the measurement of flow rates, pressure drops, void fractions, and mass transfer coefficients for a cocurrent upwards gas-liquid flow in a vertical tube, 13 mm in diameter. The liquid phase was an electrolyte consisting of 1.0 to 3.0 molar sodium hydroxide and 0.005 to 0.010 equimolar potassium ferricyanide and potassium ferrocyanide. The gas phase was nitrogen. The flow regimes studied were slug, churn, and annular.

Emphasis is placed on the measurements obtained with the electrochemical method; its application, advantages, and disadvantages are detailed. A series of single-phase experiments were performed to explore the characteristics of the method and to serve as benchmarks for the two-phase experiments.

The space-time-averaged values of the mass transfer coefficient were found to give the wall shear stresses to an accuracy of ±20%. Frequency analysis of the local fluctuating values indicates that measurements of the local mass transfer coefficient can be used for flow regime identification.

The theoretical flow regime map of Dukler and Taitel successfully predicted the flow regimes. The correlations of Griffith and Wallis, and of Lockhart and Martinelli as modified by Davis, predicted the pressure drops and void fractions to an accuracy of ±15% when applied to the appropriate flow regimes. As a further exercise, the force interactions between the phases, referred to as the interfacial shear terms, were calculated from both the measured and predicted void fractions and pressure drops. / Thesis / Master of Engineering (MEngr)
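The conversion behind this technique can be sketched as follows: the limiting diffusion current gives the mass transfer coefficient, k = I_lim / (n F A C_b), and a quasi-steady Leveque-type relation, k = 0.807 (gamma_w D^2 / L_e)^(1/3), is then inverted for the wall shear rate gamma_w, so that tau_w = mu * gamma_w. The property values below are illustrative placeholders rather than the thesis's measurements, and the quasi-steady relation is only approximate for strongly fluctuating two-phase flows.

```python
import math

F = 96485.0                     # Faraday constant, C/mol

def mass_transfer_coeff(i_lim, n_e, area, c_bulk):
    """Mass transfer coefficient (m/s) from the limiting diffusion current (A)."""
    return i_lim / (n_e * F * area * c_bulk)

def wall_shear_stress(k_m, diff, l_elec, mu):
    """Invert the quasi-steady Leveque relation k = 0.807 (gamma_w D^2 / L)^(1/3)
    for the wall shear rate, then compute tau_w = mu * gamma_w."""
    gamma_w = (k_m / 0.807) ** 3 * l_elec / diff ** 2
    return mu * gamma_w

# Illustrative numbers for a small flush-mounted electrode in the electrolyte.
i_lim = 5.0e-5        # limiting current, A
n_e = 1               # electrons exchanged in the ferri/ferrocyanide reaction
area = 7.9e-7         # electrode area, m^2 (about 1 mm diameter)
c_bulk = 10.0         # bulk ferricyanide concentration, mol/m^3 (0.010 M)
diff = 7.0e-10        # ferricyanide diffusivity, m^2/s
l_elec = 1.0e-3       # electrode length in the flow direction, m
mu = 1.1e-3           # electrolyte dynamic viscosity, Pa.s

k_m = mass_transfer_coeff(i_lim, n_e, area, c_bulk)
print("k_m   =", k_m, "m/s")
print("tau_w =", wall_shear_stress(k_m, diff, l_elec, mu), "Pa")
```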