131 |
Towards a Design Tool for Turbomachinery. Epp, Duane R. 31 December 2010
A two-dimensional thin-layer Navier-Stokes cascade flow solver for turbomachinery is developed. A second-order finite-difference scheme and a second- and fourth-difference dissipation scheme are used. Periodic and non-reflecting inlet and outlet boundary conditions are implemented in the approximate-factorization numerical method. Turbulence is modeled with the one-equation Spalart-Allmaras model. A two-dimensional turbomachinery cascade structured grid generator is developed to produce six-block H-type grids.
The validity of this work is tested in several ways. A grid convergence study shows the effect of grid density. The non-reflecting inlet and outlet boundary conditions are tested for the influence of boundary placement. The flow solver's numerical results are compared against experimental results. A Mach number sweep and an angle-of-attack sweep are performed on two similar transonic turbine cascades.
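The second- and fourth-difference dissipation mentioned above is commonly implemented as a JST-style blend: a second-difference term active near shocks and a fourth-difference term damping odd-even oscillations in smooth regions. The sketch below is a generic 1D periodic illustration with made-up coefficients `eps2` and `eps4`, not the solver's actual formulation:

```python
import numpy as np

def blended_dissipation(u, eps2=0.5, eps4=0.02):
    """JST-style blend of second- and fourth-difference dissipation
    on a 1D periodic grid (illustrative coefficients only)."""
    # second difference: u_{i+1} - 2 u_i + u_{i-1}
    d2 = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
    # fourth difference: the second-difference operator applied twice
    d4 = np.roll(d2, -1) - 2.0 * d2 + np.roll(d2, 1)
    return eps2 * d2 - eps4 * d4

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
d = blended_dissipation(u)
```

On a periodic grid both difference operators telescope, so the added dissipation is conservative: it redistributes but does not create or destroy the conserved quantity, and it vanishes identically on a constant field.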
|
133 |
Transitive Factorizations of Permutations and Eulerian Maps in the Plane. Serrano, Luis. January 2005
The problem of counting ramified covers of a Riemann surface up to homeomorphism was posed by Hurwitz in the late 1800s. Combinatorially, it translates into factoring a permutation with a specified cycle type, subject to conditions on the cycle types of the factors, such as minimality and transitivity.
Goulden and Jackson have given a proof for the number of minimal, transitive factorizations of a permutation into transpositions. This proof involves a partial differential equation for the generating series, called the Join-Cut equation. Furthermore, this argument is generalized to surfaces of higher genus. Recently, Bousquet-Mélou and Schaeffer have found the number of minimal, transitive factorizations of a permutation into arbitrary unspecified factors. This was proved by a purely combinatorial argument, based on a direct bijection between factorizations and certain objects called <em>m</em>-Eulerian trees.
In this thesis, we will give a new proof of the result by Bousquet-Mélou and Schaeffer, introducing a simple partial differential equation. We apply algebraic methods based on Lagrange's theorem, and combinatorial methods based on a new use of Bousquet-Mélou and Schaeffer's <em>m</em>-Eulerian trees. Some partial results are also given for a refinement of this problem, in which the number of cycles in each factor is specified. This involves Lagrange's theorem in many variables.
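The simplest instance of these counts is classical: the number of ways to write a fixed n-cycle as an ordered product of n-1 transpositions (which is the minimal number, and such factorizations are automatically transitive) is n^(n-2). A brute-force check for small n, purely illustrative and not taken from the thesis:

```python
from itertools import product

def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p; permutations as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def transpositions(n):
    """All transpositions of {0, ..., n-1} as permutation tuples."""
    ts = []
    for a in range(n):
        for b in range(a + 1, n):
            t = list(range(n))
            t[a], t[b] = t[b], t[a]
            ts.append(tuple(t))
    return ts

def minimal_factorizations(n):
    """Count ordered products of n-1 transpositions that equal the
    n-cycle 0 -> 1 -> ... -> n-1 -> 0 (exhaustive enumeration)."""
    target = tuple((i + 1) % n for i in range(n))
    count = 0
    for seq in product(transpositions(n), repeat=n - 1):
        p = tuple(range(n))
        for t in seq:
            p = compose(t, p)
        if p == target:
            count += 1
    return count

for n in (3, 4):
    print(n, minimal_factorizations(n), n ** (n - 2))
```

The enumeration grows rapidly, which is exactly why generating-series arguments like the Join-Cut equation, rather than direct counting, are needed for general cycle types.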
|
135 |
Bayesian Semi-parametric Factor Models. Bhattacharya, Anirban. January 2012
<p>Identifying a lower-dimensional latent space for representing high-dimensional observations is of central importance in numerous biomedical and machine learning applications. In many such applications, it is now routine to collect data in which the dimensionality of the outcomes is comparable to, or even larger than, the number of available observations. Motivated in particular by the problem of predicting the risk of impending diseases from massive gene expression and single nucleotide polymorphism profiles, this dissertation focuses on building parsimonious models and computational schemes for high-dimensional continuous and unordered categorical data, while also studying theoretical properties of the proposed methods. Sparse factor modeling is fast becoming a standard tool for parsimonious modeling of such massive-dimensional data, and this thesis is directed towards methodological and theoretical developments in Bayesian sparse factor models.</p><p>The first three chapters of the thesis study sparse factor models for high-dimensional continuous data. A class of shrinkage priors on factor loadings with attractive computational properties is introduced, and its operating characteristics are explored through a number of simulated and real data examples. Despite the methodological advances of the past decade, theoretical justifications for high-dimensional factor models are scarce in the Bayesian literature. Part of the dissertation therefore focuses on estimating high-dimensional covariance matrices using a factor model and on studying the rate of posterior contraction as both the sample size and the dimensionality increase.</p><p>To relax the usual assumption of a linear relationship between the latent and observed variables in a standard factor model, extensions to a non-linear latent factor model are also considered.</p><p>Although Gaussian latent factor models are routinely used to model dependence in continuous, binary, and ordered categorical data, they lead to challenging computation and complex modeling structures for unordered categorical variables. As an alternative, the second part of the thesis proposes a novel class of simplex factor models for massive-dimensional and extremely sparse contingency table data. An efficient MCMC scheme is developed for posterior computation, and the methods are applied to modeling dependence in nucleotide sequences and to prediction from high-dimensional categorical features. Building on a connection between the proposed model and sparse tensor decompositions, new classes of nonparametric Bayesian models are proposed for testing associations between a massive-dimensional vector of genetic markers and a phenotypic outcome.</p>
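One well-known example of such a shrinkage prior on loadings, from the related Bhattacharya-Dunson line of work, is the multiplicative gamma process: column precisions are cumulative products of gamma variables, so later factor columns are shrunk increasingly toward zero. The abstract does not specify that this is exactly the prior used here, so the sketch below (hyperparameters included) is illustrative:

```python
import numpy as np

def sample_mgps_loadings(p, k, a1=2.0, a2=3.0, rng=None):
    """Draw a p x k loadings matrix under a multiplicative-gamma-process
    style shrinkage prior (illustrative hyperparameters)."""
    rng = np.random.default_rng(rng)
    # column precisions tau_h = prod_{l<=h} delta_l grow stochastically,
    # shrinking later columns more strongly toward zero
    delta = np.concatenate([rng.gamma(a1, 1.0, size=1),
                            rng.gamma(a2, 1.0, size=k - 1)])
    tau = np.cumprod(delta)
    # local (element-wise) precisions
    phi = rng.gamma(1.5, 1.0 / 1.5, size=(p, k))
    lam = rng.normal(0.0, 1.0, size=(p, k)) / np.sqrt(phi * tau)
    return lam, tau

lam, tau = sample_mgps_loadings(p=50, k=10, rng=0)
cov = lam @ lam.T + 0.1 * np.eye(50)  # implied factor-model covariance
```

Because the covariance is parameterized through a low-rank loadings matrix plus a diagonal, it stays positive definite by construction, which is what makes the factor decomposition attractive for high-dimensional covariance estimation.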
|
136 |
Application of Two Receptor Models for the Investigation of Sites Contaminated with Polychlorinated Biphenyls: Positive Matrix Factorization and Chemical Mass Balance. Demircioglu, Filiz. 01 June 2010
This study examines the application of two receptor models, Positive Matrix Factorization (PMF) and Chemical Mass Balance (CMB), to the investigation of sites contaminated with PCBs. Both models are typically used to apportion pollution sources in atmospheric studies, but they have gained popularity in the last decade for investigating PCBs in soils and sediments. The aim of the study is four-fold: (i) to identify the status of PCB pollution in the Lake Eymir area via sampling and analysis of PCBs in collected soil/sediment samples; (ii) to modify the CMB model software in terms of efficiency and user-friendliness; (iii) to apply the CMB model to the Lake Eymir area PCB data to apportion the sources, and to gather preliminary information on the degradation of PCBs by considering the history of pollution in the area; and (iv) to explore the use of PMF both for source apportionment and for investigating the fate of PCBs in the environment, using Monte Carlo simulated artificial data sets.
Total PCB concentrations (Aroclor based) ranged from below the detection limit to 76.3 ng/g dw, with a median of 1.7 ng/g dw, for samples collected from the channel between Lake Mogan and Lake Eymir. Application of the CMB model yielded contributions from highly chlorinated PCB mixtures (Aroclor 1254 and Aroclor 1260, typically used in transformers) as sources. The modified CMB model software provided a more efficient and user-friendly working environment. Two uncertainty equations, one developed in this study and one from the literature, were found to be effective for better resolution of sources by the PMF model.
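At its core, the CMB receptor model solves a linear mixture problem: the measured congener profile is a weighted sum of known source profiles, and the weights are the source contributions. The sketch below uses ordinary least squares and invented, simplified fingerprints standing in for the Aroclor profiles; the real model uses measured fingerprints and effective-variance weighting:

```python
import numpy as np

# columns: hypothetical, simplified congener "fingerprints" of two sources
# (stand-ins for Aroclor 1254 and Aroclor 1260; not real data)
F = np.array([[0.10, 0.02],
              [0.30, 0.08],
              [0.40, 0.30],
              [0.15, 0.35],
              [0.05, 0.25]])

s_true = np.array([3.0, 1.5])   # true source contributions (ng/g)
c = F @ s_true                  # "measured" receptor profile

# ordinary least squares stands in for CMB's effective-variance solution
s_hat, *_ = np.linalg.lstsq(F, c, rcond=None)
```

With noise-free synthetic data the contributions are recovered exactly; in practice, measurement uncertainty in both the receptor and source profiles is what the effective-variance weighting is designed to handle.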
|
137 |
Probabilistic Matrix Factorization Based Collaborative Filtering with Implicit Trust Derived from Review Ratings Information. Ercan, Eda. 01 September 2010
Recommender systems aim to suggest relevant items that are likely to be of interest to users, drawing on a variety of information resources such as user profiles.
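The probabilistic matrix factorization (PMF) of the title places Gaussian priors on user and item factor matrices; MAP estimation then reduces to regularized squared-error minimization over the observed ratings. A tiny gradient-descent sketch on synthetic data follows (dimensions, learning rate, and regularization strength are illustrative, and the thesis's trust terms are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 20, 15, 3
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)  # ratings 1-5
mask = rng.random((n_users, n_items)) < 0.5   # which entries are observed

U = 0.1 * rng.standard_normal((n_users, k))   # user factors
V = 0.1 * rng.standard_normal((n_items, k))   # item factors
lam, lr = 0.05, 0.005

def loss(U, V):
    """Regularized squared error on observed entries (MAP objective)."""
    err = (R - U @ V.T) * mask
    return 0.5 * (err ** 2).sum() + 0.5 * lam * ((U ** 2).sum() + (V ** 2).sum())

first = loss(U, V)
for _ in range(300):
    err = (R - U @ V.T) * mask        # residual on observed cells only
    U += lr * (err @ V - lam * U)     # gradient step on U
    V += lr * (err.T @ U - lam * V)   # gradient step on V
```

Predictions for unobserved cells are read off `U @ V.T`; the implicit-trust extension in the thesis would add further terms to the objective.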
|
138 |
Fluid Mechanics and Grid Computing (Ρευστομηχανική και grid). Κωνσταντινίδης, Νικόλαος. 30 April 2014
The need to solve large problems, together with the evolution of Internet technology, has created a continual demand for more computing resources. This demand led to the creation of structures of collaborating computer systems, with the ultimate aim of solving problems that require large computational power or the storage of large volumes of data.
The existence of such structures, and of central processing units with more than one processor, gave rise to protocols for developing applications that run on, and solve a problem across, more than one processor, in order to reduce execution time. One example of such a protocol is the Message Passing Interface (MPI).
The purpose of this diploma thesis is to modify an existing application that requires significant computing power so that it can exploit systems such as those described above. Through this process, the advantages and disadvantages of parallel programming are analyzed.
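The execution-time reduction available from parallelizing such an application is bounded by its serial fraction, a constraint usually stated as Amdahl's law. The snippet below is a generic illustration of that bound, not a measurement from the thesis:

```python
def amdahl_speedup(parallel_fraction, n_procs):
    """Ideal speedup when a fraction p of the work parallelizes
    perfectly over n processors and the rest remains serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_procs)

# e.g. a hypothetical MPI code that is 95% parallelizable:
for n in (1, 2, 4, 8, 16):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

Even with unlimited processors, the speedup is capped at 1/(1-p), which is why profiling the serial portion is the first step before distributing work with MPI.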
|
139 |
Examination of Initialization Techniques for Nonnegative Matrix Factorization. Frederic, John. 21 November 2008
While much research has been done regarding different Nonnegative Matrix Factorization (NMF) algorithms, less time has been spent looking at initialization techniques. In this thesis, four different initializations are considered. After a brief discussion of NMF, the four initializations are described and each one is independently examined, followed by a comparison of the techniques. Next, each initialization's performance is investigated with respect to the changes in the size of the data set. Finally, a method by which smaller data sets may be used to determine how to treat larger data sets is examined.
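To make the role of initialization concrete, here is a minimal sketch comparing two simple strategies, dense random initialization versus "random Acol" (averaging random columns of the data matrix), under standard Lee-Seung multiplicative updates. The abstract does not name the four initializations studied, so these two are assumptions chosen for illustration:

```python
import numpy as np

def nmf_mu(A, W, H, iters=100, eps=1e-9):
    """Lee-Seung multiplicative updates for min ||A - WH||_F,
    which keep W and H nonnegative and never increase the error."""
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(0)
A = rng.random((30, 20))
k = 5

# initialization 1: dense random
W1, H1 = rng.random((30, k)), rng.random((k, 20))

# initialization 2: "random Acol" - each column of W averages
# a few randomly chosen columns of A
W2 = np.stack([A[:, rng.choice(20, 4, replace=False)].mean(axis=1)
               for _ in range(k)], axis=1)
H2 = rng.random((k, 20))

err0 = np.linalg.norm(A - W1 @ H1)
W1, H1 = nmf_mu(A, W1, H1)
W2, H2 = nmf_mu(A, W2, H2)
```

Comparing the two final reconstruction errors for a fixed iteration budget is exactly the kind of experiment an initialization study runs, repeated across data sets of varying size.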
|
140 |
Scalable Nonparametric Bayes Learning. Banerjee, Anjishnu. January 2013
<p>Capturing high-dimensional, complex ensembles of data is becoming commonplace in a variety of application areas. Examples include biological studies exploring relationships between genetic mutations and diseases, atmospheric and spatial data, and internet usage and online behavioral data. These large, complex data sets present many challenges for modeling and statistical analysis. Motivated by high-dimensional data applications, this thesis focuses on building scalable Bayesian nonparametric regression algorithms and on developing models for joint distributions of complex object ensembles.</p><p>We begin with a scalable method for Gaussian process regression, a commonly used tool for nonparametric regression, prediction, and spatial modeling. A common bottleneck for large data sets is the need for repeated inversions of a large covariance matrix, required for likelihood evaluation and inference. Such inversion can be practically infeasible and, even when implemented, highly numerically unstable. We propose an algorithm that uses random projection ideas to construct flexible, computationally efficient, and easy-to-implement approaches for generic scenarios. We then further improve the algorithm by incorporating structure and blocking ideas into our random projections, and we demonstrate their applicability in other contexts requiring the inversion of large covariance matrices. We provide theoretical guarantees for performance as well as substantial improvements over existing methods on simulated and real data. A by-product of this work is the discovery of hitherto unknown equivalences between approaches in machine learning, randomized linear algebra, and Bayesian statistics. Finally, we connect random projection methods for high-dimensional predictors and large sample sizes under a unifying theoretical framework.</p><p>The other focus of this thesis is the joint modeling of complex ensembles of data from different domains. This goes beyond traditional relational modeling of ensembles of a single data type and relies on probability mixing measures over tensors. These models are more flexible than some existing product mixture model approaches in letting each component of the ensemble have its own dependent cluster structure. We further investigate the question of measuring dependence between variables of different types and propose a very general novel scaled measure based on divergences between the joint and marginal distributions of the objects. Once again, we show excellent performance in both simulated and real data scenarios.</p>
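The covariance-inversion bottleneck and the random projection remedy can be sketched generically: project the n x n kernel matrix through a random n x m matrix to obtain a rank-m approximation, then solve via the Woodbury identity using only m x m systems, roughly O(n m^2) instead of O(n^3). This is a standard construction, not the thesis's specific algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma2 = 200, 20, 0.1

X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# squared-exponential kernel matrix (the object that is costly to invert)
K = np.exp(-0.5 * (X - X.T) ** 2)

# random projection: K_hat = (K P)(P' K P)^{-1}(K P)' has rank <= m
P = rng.standard_normal((n, m)) / np.sqrt(m)
U = K @ P
M = P.T @ K @ P + 1e-6 * np.eye(m)   # small jitter for numerical stability

# naive route: form the n x n approximate covariance and solve directly
alpha_direct = np.linalg.solve(U @ np.linalg.solve(M, U.T) + sigma2 * np.eye(n), y)

# Woodbury route: same solution, but only m x m linear systems
alpha_wood = (y - U @ np.linalg.solve(sigma2 * M + U.T @ U, U.T @ y)) / sigma2
```

Both routes compute the same quantity (the weights of the approximate Gaussian process posterior mean), but the Woodbury route never forms or factors an n x n matrix, which is the source of the scalability gain.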
|