1 |
Experimental Investigation of Film Cooling Effectiveness on Gas Turbine Blades. Li, Shiou-Jiuan, 14 March 2013 (has links)
A high turbine inlet temperature is necessary to increase the thermal efficiency of modern gas turbines. To prevent failure of turbine components, advanced cooling technologies have been applied to different portions of turbine blades.
The detailed film cooling effectiveness distributions along a rotor blade have been studied under the combined effects of an upstream trailing-edge unsteady wake and coolant ejection, using the pressure sensitive paint (PSP) technique. The experiment was conducted in a low-speed wind tunnel with a five-blade linear cascade at an exit Reynolds number of 370,000. The density ratios for both blade and trailing-edge coolant ejection range from 1.5 to 2.0. Blade blowing ratios are 0.5 and 1.0 on the suction surface and 1.0 and 2.0 on the pressure surface. The trailing-edge jet blowing ratio and Strouhal number are 1.0 and 0.12, respectively. Results show that the unsteady wake reduces overall effectiveness; however, the unsteady wake combined with trailing-edge coolant ejection enhances it. Results also show that overall effectiveness increases when heavier coolant is used for both trailing-edge ejection and blade film cooling.
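For context, the similarity parameters quoted in this abstract follow standard film-cooling definitions; a minimal sketch of how they are computed is below. The numerical inputs, other than the DR, blowing-ratio, and St values stated above, are illustrative assumptions, and the Strouhal expression shown is one common spoked-wheel wake-generator convention, not necessarily this thesis's exact definition.

```python
import math

def density_ratio(rho_coolant, rho_mainstream):
    """DR = coolant density / mainstream density."""
    return rho_coolant / rho_mainstream

def blowing_ratio(rho_c, v_c, rho_inf, v_inf):
    """M = (rho_c * V_c) / (rho_inf * V_inf), the coolant-to-mainstream mass flux ratio."""
    return (rho_c * v_c) / (rho_inf * v_inf)

def wake_strouhal(n_rods, rev_per_s, rod_diameter, v_exit):
    """St = 2*pi*N*d*n / V, one common convention for a spoked-wheel wake generator."""
    return 2.0 * math.pi * n_rods * rod_diameter * rev_per_s / v_exit

# Example: at DR = 2.0 the coolant needs only half the mainstream velocity
# to reach a blowing (mass flux) ratio of M = 1.0.
DR = density_ratio(rho_coolant=2.4, rho_mainstream=1.2)
M = blowing_ratio(rho_c=2.4, v_c=10.0, rho_inf=1.2, v_inf=20.0)
print(DR, M)  # 2.0 1.0
```

This is why heavier (higher DR) coolant at a fixed blowing ratio carries less momentum: the mass flux is matched while the velocity drops.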
Leading-edge film cooling has also been investigated using PSP. Two test models were used: seven rows of film holes to simulate a vane and three rows to simulate a blade. Four film-hole configurations were tested on both models: radial-angle cylindrical holes, compound-angle cylindrical holes, radial-angle shaped holes, and compound-angle shaped holes. Density ratios range from 1.0 to 2.0 and blowing ratios from 0.5 to 1.5. Experiments were conducted in a low-speed wind tunnel at a Reynolds number of 100,900, with a turbulence intensity near the test model of about 7%. The results show that shaped holes yield higher overall effectiveness than cylindrical holes for both designs. As the density ratio increases, the density effect on shaped holes becomes evident, and radial-angle holes outperform compound-angle holes at the higher blowing and density ratios. Increasing the density ratio generally increases overall effectiveness for all configurations and blowing ratios. One exception occurs for the compound-angle and radial-angle shaped holes of the three-row design at the lower blowing ratio: effectiveness along the stagnation row decreases with increasing density ratio, because the heavier coolant leaves the jet with insufficient momentum at the stagnation row.
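The PSP technique referenced above recovers effectiveness from oxygen partial pressures via the heat and mass transfer analogy. A minimal sketch of the widely used conversion for an oxygen-free foreign-gas coolant follows; the molecular-weight-corrected form and the input readings here are illustrative assumptions, not values from the thesis.

```python
def psp_effectiveness(p_o2_air, p_o2_mix, mw_fg, mw_air=28.97):
    """Film cooling effectiveness from PSP-measured O2 partial pressures,
    assuming an oxygen-free foreign gas (e.g. CO2) as coolant:
    eta = 1 - 1 / ((P_air/P_mix - 1) * (MW_fg / MW_air) + 1)."""
    return 1.0 - 1.0 / ((p_o2_air / p_o2_mix - 1.0) * (mw_fg / mw_air) + 1.0)

# Illustrative reading: the wall O2 partial pressure drops to 80% of the
# air-injection reference when CO2 (MW = 44.01) is injected as coolant.
eta = psp_effectiveness(p_o2_air=1.0, p_o2_mix=0.8, mw_fg=44.01)
print(round(eta, 3))  # 0.275
```

With no O2 depletion (P_mix = P_air) the formula correctly returns zero effectiveness, i.e. no coolant coverage.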
|
2 |
Statistical Inferences of Comparison between Two Correlated ROC Curves with Empirical Likelihood Approaches. ZHANG, DONG, 20 September 2012 (has links)
No description available.
|
3 |
Detailed Numerical Simulation of Liquid Jet in Crossflow Atomization with High Density Ratios. January 2013 (has links)
abstract: The atomization of a liquid jet by a high-speed cross-flowing gas has many applications, such as gas turbines and augmentors. The mechanisms by which the liquid jet initially breaks up, however, are not well understood. Experimental studies suggest that spray properties depend on operating conditions and nozzle geometry. Detailed numerical simulations can offer a better understanding of the underlying physical mechanisms that lead to the breakup of the injected liquid jet. In this work, detailed numerical simulation results of turbulent liquid jets injected into turbulent gaseous cross flows at different density ratios are presented. A finite volume, balanced-force fractional step flow solver for the Navier-Stokes equations is employed and coupled to a Refined Level Set Grid method to follow the phase interface. To enable the simulation of atomization at high density ratios, discrete consistency between the solution of the conservative momentum equation and the level set based continuity equation is ensured by employing the Consistent Rescaled Momentum Transport (CRMT) method. The impact of different inflow jet boundary conditions on jet properties, including jet penetration, is analyzed, and results are compared to those obtained experimentally by Brown & McDonell (2006). In addition, instability analysis is performed to find the dominant instability mechanism that causes the liquid jet to break up. Linear instability analysis uses linear theories for Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and non-linear analysis is performed using the flow solver with different inflow jet boundary conditions. / Dissertation/Thesis / Ph.D. Mechanical Engineering 2013
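The Rayleigh-Taylor and Kelvin-Helmholtz mechanisms invoked above have classical inviscid growth rates that make the role of the density ratio explicit. A sketch under textbook assumptions (no viscosity or surface tension; the input values are illustrative, not the simulated conditions):

```python
import math

def atwood_number(rho_heavy, rho_light):
    """A = (rho_h - rho_l) / (rho_h + rho_l); approaches 1 at high density ratio."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def rayleigh_taylor_growth(k, a_accel, rho_heavy, rho_light):
    """Classical inviscid RT growth rate: sigma = sqrt(A * a * k)."""
    return math.sqrt(atwood_number(rho_heavy, rho_light) * a_accel * k)

def kelvin_helmholtz_growth(k, u1, u2, rho1, rho2):
    """Classical inviscid KH growth rate for a vortex sheet:
    sigma = k * sqrt(rho1 * rho2) * |u1 - u2| / (rho1 + rho2)."""
    return k * math.sqrt(rho1 * rho2) * abs(u1 - u2) / (rho1 + rho2)

# At a liquid/gas density ratio of 100 the Atwood number is already ~0.98,
# so RT growth is close to its high-density-ratio limit.
A = atwood_number(rho_heavy=100.0, rho_light=1.0)
print(round(A, 3))  # 0.98
```

Comparing the two rates at the wavenumbers of interest is one simple way to identify which mechanism dominates for a given density ratio and relative velocity.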
|
4 |
Design, Development and Validation of UC Film Cooling Research Facility. Kandampalayam Kandasamy Palaniappan, Mouleeswaran, January 2017 (has links)
No description available.
|
5 |
Design of robust blind detector with application to watermarking. Anamalu, Ernest Sopuru, 14 February 2014 (has links)
One of the difficult issues in detection theory is to design a robust detector that takes into account the actual distribution of the original data. The most commonly used statistical model for blind detection is the Gaussian distribution; specifically, linear correlation is an optimal detection method when the features are Gaussian distributed. It becomes a sub-optimal detection metric, however, when the density deviates markedly from Gaussian. Hence, we formulate a detection algorithm that improves detection probability by exploiting the true characteristics of the original data. To learn the underlying distribution of the data, we employ estimation techniques including a parametric model, the approximated density-ratio logistic regression model, and semiparametric estimation. The semiparametric model has the advantage of yielding individual densities as well as density ratios. Both methods are applicable to signals such as watermarks embedded in the spatial domain and outperform conventional linear correlation on non-Gaussian distributed data.
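The density-ratio logistic regression idea mentioned above rests on a standard identity: a probabilistic classifier trained to discriminate samples from the two densities recovers their ratio through its odds. A minimal numpy-only illustration on synthetic Gaussians (the data, learning rate, and iteration count are assumptions, not the thesis setup):

```python
import numpy as np

rng = np.random.default_rng(0)
# Samples from p (label 1) and q (label 0): 1-D Gaussians N(1, 1) and N(0, 1).
x = np.concatenate([rng.normal(1.0, 1.0, 2000), rng.normal(0.0, 1.0, 2000)])
y = np.concatenate([np.ones(2000), np.zeros(2000)])

# Fit logistic regression P(y=1|x) = sigmoid(w*x + b) by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

def density_ratio(t):
    """r(t) = p(t)/q(t) ~= odds P(y=1|t)/P(y=0|t), valid for equal sample sizes."""
    return np.exp(w * t + b)

# For these Gaussians the true log-ratio is t - 0.5, so r(0.5) should be near 1.
print(round(float(density_ratio(0.5)), 2))
```

The same trained model doubles as a detector: thresholding the estimated ratio is a plug-in approximation to the likelihood-ratio test.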
|
7 |
GENERATIVE MODELS WITH MARGINAL CONSTRAINTS. Bingjing Tang (16380291), 16 June 2023 (has links)
<p> Generative models form powerful tools for learning data distributions and simulating new samples. Recent years have seen significant advances in the flexibility and applicability of such models, with Bayesian approaches like nonparametric Bayesian models and deep neural network models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) finding use in a wide range of domains. However, the black-box nature of these models means that they are often hard to interpret, and they often carry modeling implications that are inconsistent with side knowledge derived from the domain. This thesis studies situations where the modeler has side knowledge represented as probability distributions on functionals of the objects being modeled, and we study methods to incorporate this particular kind of side knowledge into flexible generative models. This dissertation covers three main parts. </p>
<p><br></p>
<p>The first part focuses on incorporating a special case of the aforementioned side knowledge into flexible nonparametric Bayesian models. Practitioners often have additional distributional information about a subset of the coordinates of the observations being modeled. The flexibility of nonparametric Bayesian models usually implies incompatibility with this side information, which makes it necessary to develop methods for incorporating such side knowledge into these models. We design a specialized generative process to build in this side knowledge and propose a novel sigmoid Gaussian process conditional model. We also develop a corresponding posterior sampling method based on data augmentation to overcome a doubly intractable problem. We illustrate the efficacy of our proposed constrained nonparametric Bayesian model in a variety of real-world scenarios, including modeling environmental and earthquake data. </p>
<p><br></p>
<p>The second part of the dissertation discusses neural network approaches to satisfying this more general side knowledge; the generative models considered in this part broaden to black-box models. We formulate the side-knowledge incorporation problem as a constrained divergence minimization problem and propose two scalable neural network approaches as its solution. We demonstrate their practicality using various synthetic and real examples. </p>
<p><br></p>
<p> The third part of the dissertation concentrates on a specific generative model of individual pixels of fMRI data constructed from a latent group image. There is usually two-fold side knowledge about the latent group image: spatial structure and partial activation zones. The former can be captured by modeling the prior for the group image with Markov random fields; the latter, often obtained from previous related studies, is left for future research. We propose a novel Bayesian model with Markov random fields and estimate the maximum a posteriori (MAP) solution for the group image. We also derive a variational Bayes algorithm to overcome local optima in the optimization.</p>
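MAP estimation under a Markov random field prior can be sketched with iterated conditional modes (ICM) on a toy binary image with an Ising smoothness prior. This is a generic illustration of the MRF-MAP idea, not the dissertation's fMRI model or its variational Bayes algorithm, and all parameter values are assumptions.

```python
import numpy as np

def icm_map(noisy, beta=1.5, sigma=0.6, n_sweeps=5):
    """MAP estimate of a {-1, +1} image under an Ising MRF prior via ICM:
    greedily minimize (y_ij - x_ij)^2 / (2*sigma^2) - beta * x_ij * (sum of neighbors)
    one pixel at a time, starting from the pixelwise sign of the data."""
    x = np.sign(noisy).astype(float)
    x[x == 0] = 1.0
    h, w = x.shape
    for _ in range(n_sweeps):
        for i in range(h):
            for j in range(w):
                nb = 0.0  # sum of the 4-connected neighbors
                if i > 0: nb += x[i - 1, j]
                if i < h - 1: nb += x[i + 1, j]
                if j > 0: nb += x[i, j - 1]
                if j < w - 1: nb += x[i, j + 1]
                def local_energy(v):
                    return (noisy[i, j] - v) ** 2 / (2 * sigma ** 2) - beta * v * nb
                x[i, j] = 1.0 if local_energy(1.0) <= local_energy(-1.0) else -1.0
    return x

rng = np.random.default_rng(1)
clean = np.ones((16, 16)); clean[:, :8] = -1.0        # two flat regions
noisy = clean + rng.normal(0, 0.6, clean.shape)       # additive Gaussian noise
restored = icm_map(noisy)
print(np.mean(restored == clean))                     # fraction of pixels recovered
```

ICM converges only to a local optimum of the posterior, which is exactly the limitation that motivates more careful schemes such as the variational algorithm described above.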
|
8 |
The Effect of Density Ratio on Steep Injection Angle Purge Jet Cooling for a Converging Nozzle Guide Vane Endwall at Transonic Conditions. Sibold, Ridge Alexander, 17 September 2019 (has links)
The study presented herein describes and analyzes a detailed experimental investigation of the effects of density ratio on endwall thermal performance at varying blowing rates for a typical nozzle guide vane platform purge jet cooling scheme. An axisymmetric converging endwall with an upstream purge jet cooling scheme of doublet staggered cylindrical holes was employed. Nominal exit flow conditions were engine representative: exit Mach number Ma = 0.85, exit Reynolds number based on axial chord Re = 1.5 x 10^6, and large-scale freestream turbulence intensity Tu = 16%. Two blowing ratios were investigated, corresponding to the upper and lower engine extrema. Each blowing ratio was investigated at two density ratios: one representing the typical experimental neglect of density ratio, DR = 1.2, and an engine-representative density ratio achieved by mixing foreign gases, DR = 1.95. All tests were conducted on a linear cascade in the Virginia Tech Transonic Blowdown Wind Tunnel using IR thermography and transient data reduction techniques. Oil paint flow visualization was used to gather qualitative information on how the endwall flow physics are altered by the two blowing rates of high-density coolant. High-resolution contour plots of endwall adiabatic film cooling effectiveness, Nusselt number, and Net Heat Flux Reduction (NHFR) were used to analyze the thermal effects.
The effect of density depends on the coolant blowing rate and varies greatly between the high and low blowing conditions. At the low blowing condition, increasing density improves near-hole film cooling performance and heat transfer reduction. However, high-density coolant at low blowing rates is not adequately equipped to penetrate and suppress secondary flows, leaving the suction side (SS) and pressure side (PS) largely exposed to high-velocity, high-temperature mainstream gases. Conversely, density ratio only marginally affects the high blowing condition, as momentum effects become increasingly dominant. Overall, it is concluded that density ratio has a first-order impact on the secondary flow alterations and subsequent heat transfer distributions that result from coolant injection, and it should be accounted for in purge jet cooling scheme design and analysis.
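Net Heat Flux Reduction, used throughout the analysis above, folds film effectiveness and heat transfer augmentation into a single metric. A sketch of one commonly used form follows; the non-dimensional temperature value is an assumed typical engine number, not a value from this thesis.

```python
def nhfr(eta, h_ratio, theta=1.6):
    """Net Heat Flux Reduction (one common convention):
    NHFR = 1 - q_f/q_0 = 1 - (h_f/h_0) * (1 - eta * theta),
    where theta = (T_inf - T_c) / (T_inf - T_w) is an assumed typical
    engine value and h_ratio = h_f/h_0 is the heat transfer augmentation."""
    return 1.0 - h_ratio * (1.0 - eta * theta)

# A film with eta = 0.3 still reduces the net heat flux even if it
# augments the heat transfer coefficient by 10%.
print(round(nhfr(eta=0.3, h_ratio=1.1), 3))  # 0.428
```

The metric captures the trade-off discussed above: coolant injection that augments h without providing coverage (eta near zero, h_ratio above one) yields a negative NHFR, i.e. a net thermal penalty.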
Additionally, the effect of increasing the blowing rate of high-density coolant was analyzed. Oil paint flow visualization indicated that significant secondary flow suppression occurs as the blowing rate of high-density coolant increases; endwall adiabatic film cooling effectiveness, Nusselt number, and NHFR comparisons confirm this. Low blowing rate coolant has a more favorable thermal impact in the upstream region of the passage, especially near injection, but its low momentum is eventually dominated and entrained by secondary flows, providing less effectiveness near the PS, near the SS, and into the throat of the passage. The high momentum of the high blowing rate, high-density coolant suppresses these secondary flows and provides enhanced cooling in the throat and in regions of strong secondary flow. However, the increased turbulence imparted by jet lift-off adversely affects the heat load in the upstream region of the passage. It is concluded that only marginal gains near the throat of the passage are observed with an increased high-density coolant blowing rate, while a severe thermal penalty is incurred near the passage onset. / Master of Science / Gas turbine technology is frequently used in burning natural gas for power production. Engine efficiency increases with firing temperature, but higher temperatures risk overheating the stages that follow. To prevent failure or melting of components, cooler air is extracted from the upstream compressor section and used to cool these components through highly complex cooling schemes. The adequacy of these schemes is highly sensitive to the mainstream and coolant flow conditions, which are hard to reproduce in a laboratory setting.
This experimental study explores the effects of various coolant conditions, and the respective responses, for a purge jet cooling scheme commonly found in engines. The scheme uses two rows of staggered cylindrical holes to inject air into the mainstream from the platform, upstream of the nozzle guide vane. The hope is that this air forms a protective layer, effectively shielding the platform from the hostile mainstream conditions. Currently, little research has quantified the effects of a purge flow cooling scheme while matching engine geometry and mainstream and coolant conditions.
For this study, an engine-like endwall geometry with a purge jet cooling scheme is studied. An upstream gap is commonly formed between the combustor lining and the first stage vane platform, and this is accounted for in the testing. Because mainstream and coolant flow conditions can strongly affect the results, both were matched to engine conditions. The effects of varying coolant density and injection rate are studied and quantified. Results indicate that coolant fluid density plays a large role in purge jet cooling, and if it is neglected, potential thermal failure points could be overlooked; this is exacerbated at lower coolant injection rates. Interestingly, increasing the amount of coolant injected decreases performance across much of the passage, with only marginal gains in regions of complex flow. These results help to better explain the impact of experimentally neglecting coolant density, and aid in the understanding of purge jet coolant injection.
|
9 |
Stochastic density ratio estimation and its application to feature selection / Estimação estocástica da razão de densidades e sua aplicação em seleção de atributos. Braga, Ígor Assis, 23 October 2014 (has links)
The estimation of the ratio of two probability densities is an important statistical tool in supervised machine learning. In this work, we introduce new methods of density ratio estimation based on the solution of a multidimensional integral equation involving cumulative distribution functions. The resulting methods use the novel V-matrix, a concept that does not appear in previous density ratio estimation methods. Experiments demonstrate the good potential of this new approach compared to previous methods. Mutual Information (MI) estimation is a key component of feature selection and essentially depends on density ratio estimation. Using one of the density ratio estimation methods proposed in this work, we derive a new estimator, VMI, and compare it experimentally to previously proposed MI estimators. Experiments on mutual information estimation alone show that VMI compares favorably to previous estimators, and experiments applying MI estimation to feature selection in classification tasks show that better MI estimation leads to better feature selection performance. Parameter selection greatly impacts the classification accuracy of kernel-based Support Vector Machines (SVM). However, this step is often overlooked in experimental comparisons, as it is time consuming and requires familiarity with the inner workings of SVM. In this work, we propose procedures for SVM parameter selection that are economical in running time. In addition, we propose the use of a non-linear kernel function, the min kernel, that can be applied to both low- and high-dimensional cases without adding another parameter to the selection process. The combination of the proposed parameter selection procedures and the min kernel yields a convenient way of economically extracting good classification performance from SVM. The Regularized Least Squares (RLS) regression method is another kernel method that depends on proper selection of its parameters. When training data is scarce, traditional parameter selection often leads to poor regression estimation. To mitigate this issue, we explore a kernel that is less susceptible to overfitting, the additive INK-splines kernel. We then consider alternatives to cross-validation for parameter selection that have been shown to perform well for other regression methods. Experiments conducted on real-world datasets show that the additive INK-splines kernel outperforms both the RBF kernel and the previously proposed multiplicative INK-splines kernel. They also show that the alternative parameter selection procedures fail to consistently improve performance. Still, we find that the Finite Prediction Error method with the additive INK-splines kernel performs comparably to cross-validation.
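The min kernel proposed above, k(x, z) = sum_i min(x_i, z_i) (also known as the histogram intersection kernel), is simple to compute, and as the abstract notes it adds no hyperparameter of its own. A sketch of the Gram-matrix computation for nonnegative features (the data is illustrative):

```python
import numpy as np

def min_kernel(X, Z):
    """Gram matrix of the min (histogram intersection) kernel:
    K[a, b] = sum_i min(X[a, i], Z[b, i]); assumes nonnegative features.
    Broadcasting builds an (n_X, n_Z, d) array of pairwise elementwise minima."""
    return np.minimum(X[:, None, :], Z[None, :, :]).sum(axis=2)

X = np.array([[1.0, 2.0],
              [0.0, 3.0]])
K = min_kernel(X, X)
print(K)  # [[3. 2.] [2. 3.]]
```

Such a function can be passed as a callable kernel to SVM implementations that accept precomputed or custom kernels, leaving only the SVM cost parameter to select.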
|