391 |
The search for a triple of mutually orthogonal Latin squares of order ten: looking through pairs of dimension thirty-five and less. Delisle, Erin, 24 August 2010.
A computer generation of all pairs of mutually orthogonal Latin squares of order ten and dimension 35 or less is undertaken. All such pairs are successfully generated up to main class equivalence. No pairs of mutually orthogonal Latin squares of order ten exist for dimension 33. Six pairs of dimension 34, which are counterexamples to a conjecture by Moorhouse, are found. Eighty-five pairs can be formed with dimension 35. None of the pairs can be extended to a triple. If a triple of mutually orthogonal Latin squares of order ten exists, the pairs of Latin squares in the triple must be of dimension 36 or 37.
|
392 |
Study of coherent structures in turbulent flows using Proper Orthogonal Decomposition. November 2014.
For many decades, turbulence has been the subject of extensive numerical research and experimental work. A bottleneck problem in turbulence research has been to detect and characterize the energetic, space- and time-dependent structures and to give a mathematical definition to each topology. This research presents a fundamental study of coherent structures embedded in turbulent flows by use of Proper Orthogonal Decomposition (POD). The target is to detect the dominant features that contain the largest fraction of the total kinetic energy and hence contribute most to a turbulent flow. POD has proven to be a robust methodology for the multivariate analysis of non-linear problems. The method also yields a low-dimensional approximation of a high-dimensional process, such as a turbulent flow.
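For readers unfamiliar with the method, a minimal snapshot-POD sketch (not the author's code; sizes and data here are placeholders) computes the spatial modes and their energy ranking from a singular value decomposition of the mean-subtracted snapshot matrix:

```python
import numpy as np

# Snapshot POD via SVD: X holds M snapshots of a flow field with N
# spatial degrees of freedom, one snapshot per column.
rng = np.random.default_rng(0)
N, M = 1000, 200                      # hypothetical sizes
X = rng.standard_normal((N, M))       # placeholder for LES snapshot data

X_mean = X.mean(axis=1, keepdims=True)
Xf = X - X_mean                       # fluctuating part about the mean flow

# Thin SVD: the columns of Phi are the spatial POD modes, ordered by energy.
Phi, sigma, Vt = np.linalg.svd(Xf, full_matrices=False)

energy = sigma**2 / np.sum(sigma**2)  # fraction of fluctuating energy per mode
r = int(np.searchsorted(np.cumsum(energy), 0.90)) + 1
print(f"{r} modes capture 90% of the fluctuating kinetic energy")
```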
This manuscript-based dissertation consists of five chapters. The first chapter starts with a brief introduction to turbulence, available simulation techniques, limitations and practical applications. Next, POD is introduced and the step-by-step approach is explained in detail.
Three submitted manuscripts are presented in the subsequent chapters. Each chapter begins by introducing the study case and explaining its contribution to the overall topic, and includes its own topic-relevant literature review. Each article consists of two parts: flow simulation and verification of the results at the outset, followed by POD analysis and reconstruction of the turbulent flow fields. For the flow simulation, Large Eddy Simulation (LES) was performed to generate databases for POD analysis. The simulations were validated by comparison with available experimental and numerical studies. For each case, coherent topologies are characterized, and the kinetic energy contribution of each structure is determined and compared with previous literature.
The first manuscript focused on investigating the large-scale dynamics in the wake of an infinite square cylinder. This case is the first step towards the target study case of this research, i.e. flow over rib-roughened walls. The main purpose of this first step is to establish a benchmark for comparison with the more complicated cases of a square cylinder with a nearby wall and flow over a rib-roughened surface. For the POD analysis, the three-dimensional velocity field is obtained from LES of the flow around an infinite square cylinder at a Reynolds number of Re = 500. The POD algorithm is examined, and the total energy of the flow is found to be well captured by only a small number of eigenmodes. From the energy spectrum, it is learned that each eigenmode represents a particular flow characteristic embedded in the turbulent wake, and that eigenmodes with analogous characteristics can be bundled as pairs. Qualitative analysis of the dominant modes provided insight into the spatial distribution of dominant structures in the turbulent wake. Another outcome of this chapter is a physical interpretation of the energetic structures, developed by examining the temporal coefficients and tracking their life cycles. It was observed that the paired temporal coefficients are approximately sinusoidal, with similar magnitude and frequency and a phase shift. Lastly, it was observed that the turbulent flow field can be approximated by a linear combination of the mean flow and a finite number of spatial modes, as sketched below.
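A hedged sketch of that truncated reconstruction, using the same snapshot-matrix convention as in the earlier sketch:

```python
import numpy as np

# Reconstruct the field from the leading r POD modes: the mean flow plus
# a truncated modal expansion, u(x,t) ~ u_mean(x) + sum_k a_k(t) phi_k(x).
def reconstruct(X, r):
    X_mean = X.mean(axis=1, keepdims=True)
    Phi, sigma, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
    a = np.diag(sigma[:r]) @ Vt[:r, :]   # temporal coefficients a_k(t)
    return X_mean + Phi[:, :r] @ a       # rank-r approximation of X
```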
The second manuscript analyses the influence of a solid wall on the wake dynamics of an infinite square cylinder. Different cases have been studied by changing the distance between the cylinder and the bottom wall. From the simulation results, it is learned that the drag and lift coefficients can be significantly affected by a nearby solid wall. From the energy decay spectrum, it is observed that the energy decay rate varies with the gap ratio, and a physical explanation is developed accordingly. Visualization of the coherent structures for each case shows that for larger gaps, although the structures are distorted and inclined away from the wall, the travelling-wave characteristic persists. Lastly, it is observed that as the gap ratio gets smaller, energetic structures originating from the wall begin to appear in the lower-index modes.
The last manuscript presents a numerical study of the structures in turbulent Couette flow with roughness on one wall, which, as mentioned earlier, is the target study case of this research. Flow over both roughened and smooth surfaces was examined in a single study. Comparison was made with experiments and other numerical studies to verify the LES results. The mean velocity distribution across the channel shows that the rib roughness on the bottom wall has a strong effect on the velocity profile at the opposite wall. The energetic coherent dynamics of the turbulent flow were investigated by use of POD. The energy decay spectrum was analysed, and the influence of the roughened wall and of each roughness element on the formation of those structures was investigated. Coherent POD modes on a spanwise sampling plane are detected. A secondary swirling motion is visualized for the first two modes, and counter-rotating cells are observed in the lower region of the channel above the rough wall for the higher modes. Finally, a quantitative analysis of the POD temporal coefficients, which characterize the life cycle of each coherent dynamic, was performed. A motivating outcome of this analysis is the decomposition of the time-trace curves into quasi-periodic and fluctuating components, and the detection of a link between these life cycles and the physical meaning and location of each energetic pattern.
Finally, in a closing chapter, the concluding remarks of this research are presented in more detail and some potential extensions are proposed for future researchers.
|
393 |
A POD-Galerkin approach to the atmospheric dynamics of Mars. Martínez-Alvarado, Oscar, January 2007.
The observation of less chaotic behaviour and enhanced interannual periodicity of transient waves in the Martian atmosphere, in comparison with that of the Earth, suggests the hypothesis of a low-dimensional underlying atmospheric attractor. Grounded in this hypothesis, two questions can be asked: is there a small set of atmospheric modes, measured and classified by a suitable norm, capable of describing the atmosphere of Mars? If this set exists, are those atmospheric modes able to reproduce the dynamical behaviour of the atmosphere of Mars? The answer to these questions, constituting the central focus of this thesis, has led to the first application of POD-Galerkin methods to a state-of-the-art Mars general circulation model. The proper orthogonal decomposition (POD), as a method for extracting coherent structures called empirical orthogonal functions (EOFs), provided a means to answer the first question in the affirmative. A substantial fraction of the atmospheric total energy (TE) was found to be concentrated in a few EOFs (e.g., 90% TE in 20 EOFs). The most energetic EOFs were identified with atmospheric motions such as thermal tides and transient waves. The Galerkin projection of the hydrostatic primitive equations onto the span of the EOFs provided a systematic method to establish physically plausible interactions between the most energetic EOFs. These interactions were complemented with closure schemes representing interactions with unresolved modes; this proved to be essential in order to obtain bounded behaviour. In the diagnostic analysis, represented by the POD alone, increasing the number of EOFs directly leads to a better approximation of the atmospheric state. In contrast, the dynamic reconstruction of the atmospheric evolution does not depend solely on the number of EOFs included. Other factors important for obtaining a realistic evolution are the inclusion of every mode involved in the description of a particular kind of motion (diurnal tide, semidiurnal tide or transients) and the retention of higher-order modes that may interact strongly with the modes of interest. Once these conditions are satisfied, the behaviour of the reduced models is greatly improved. Implications of these findings for future work are discussed.
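To make the projection step concrete, here is a minimal POD-Galerkin sketch for a linear toy system dx/dt = Ax; the thesis's primitive equations are nonlinear, so the projected model there also carries quadratic terms and the closure schemes described above. All names and sizes are illustrative:

```python
import numpy as np

# Phi (N x r) holds orthonormal POD modes (EOFs); projecting the dynamics
# onto span(Phi) yields a small r-dimensional model da/dt = (Phi^T A Phi) a.
rng = np.random.default_rng(1)
N, r = 400, 20                        # hypothetical full and reduced sizes
A = -np.eye(N) + 0.01 * rng.standard_normal((N, N))   # stable toy operator
Phi = np.linalg.qr(rng.standard_normal((N, r)))[0]    # stand-in EOF basis

A_r = Phi.T @ A @ Phi                 # reduced operator, r x r
a = Phi.T @ rng.standard_normal(N)    # project the initial state

dt = 1e-2
for _ in range(1000):                 # explicit Euler time stepping
    a = a + dt * (A_r @ a)
x_approx = Phi @ a                    # lift the reduced state back to full space
```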
|
394 |
Transient reduced-order convective heat transfer modeling for a data center. Ghosh, Rajat, 12 January 2015.
A measurement-based reduced-order heat transfer modeling framework is developed to optimize the cooling costs of dynamic and virtualized data centers. The reduced-order model is based on a proper orthogonal decomposition (POD) model order reduction technique. For data center heat transfer modeling, the framework simulates air temperatures and CPU temperatures as a parametric response surface, with different cooling infrastructure design variables as the input parameters. The parametric framework enables an efficient design optimization tool and is used to solve several important problems related to the energy-efficient thermal design of data centers; a sketch of the parametric idea follows.
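A hedged sketch of the parametric idea (all names and sizes are illustrative, not the author's code): compute POD coefficients of temperature snapshots taken at a few parameter values, then interpolate the coefficients to predict the field at an unobserved parameter value.

```python
import numpy as np

# Temperature snapshots at 7 parameter values (e.g. seconds after a
# power outage, or a cooling set point); N spatial sensor locations.
rng = np.random.default_rng(2)
params = np.linspace(0.0, 60.0, 7)        # hypothetical parameter samples
T = rng.standard_normal((500, 7))         # placeholder measured snapshots

T_mean = T.mean(axis=1, keepdims=True)
Phi, s, Vt = np.linalg.svd(T - T_mean, full_matrices=False)
coeffs = np.diag(s) @ Vt                  # POD coefficients per snapshot

def predict(p, r=4):
    """Interpolate the first r POD coefficients at parameter value p."""
    a = np.array([np.interp(p, params, coeffs[k]) for k in range(r)])
    return T_mean[:, 0] + Phi[:, :r] @ a  # predicted temperature field

T_at_25s = predict(25.0)                  # field at an unobserved parameter
```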
The first of these problems is about determining optimal response time during emergencies such as power outages in data centers. To solve this problem, transient air temperatures are modeled with time as a parameter. This parametric prediction framework is useful as a near-real-time thermal prognostic tool.
The second problem pertains to reducing the cost of temperature monitoring in data centers. To solve this problem, transient air temperatures are modeled with spatial location as the parameter. This parametric model improves the spatial resolution of measured temperature data and thereby reduces the number of sensors required for transient temperature monitoring in data centers.
The third problem is related to determining optimal cooling set points in response to dynamically-evolving heat loads in a data center. To solve this problem, transient air temperatures are modeled with heat load and time as the parameters. This modeling framework is particularly suitable for life-cycle design of data center cooling infrastructure.
The last problem is related to determining optimal cooling set points in response to dynamically-evolving computing workload in a virtualized data center. To solve this problem, transient CPU temperatures under a given computing load profile are modeled with cooling resource set-points as the parameters.
|
395 |
Modelling of 3D anisotropic turbulent flow in compound channels. Vyas, Keyur, January 2007.
The present research focuses on the development and computer implementation of a novel three-dimensional, anisotropic turbulence model capable of handling not only complex geometries but also turbulence-driven secondary currents. The model equations comprise advanced algebraic Reynolds stress models in conjunction with the Reynolds-averaged Navier-Stokes equations. In order to tackle the complex geometry of compound meandering channels, a body-fitted orthogonal coordinate system is used. The finite volume method with a collocated arrangement of variables is used for discretization of the governing equations. Pressure-velocity coupling is achieved by the standard iterative SIMPLE algorithm. A central differencing scheme and an upwind differencing scheme are implemented for the approximation of diffusive and convective fluxes on the control volume faces, respectively (see the sketch below). The set of algebraic equations derived after discretization is solved with the help of Stone's implicit matrix solver. The model is validated against standard benchmarks on simple and compound straight channels. For the case of compound meandering channels with varying sinuosity and floodplain height, the model results are compared with published experimental data. It is found that the present method is able to predict the mean velocity distribution, pressure and secondary flow circulations with reasonably good accuracy. In terms of engineering applications, the model is also tested to understand the importance of turbulence-driven secondary currents in a slightly curved channel. The development of this model has opened many avenues of future research, such as flood risk management, the effects of trees near the bank on the flow mechanisms, and the prediction of pollutant transport.
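A minimal 1-D illustration of those two face-flux approximations (central differencing for diffusion, upwind differencing for convection), with simplified boundary handling and hypothetical coefficients; this is a sketch of the discretization idea, not the thesis's 3-D solver:

```python
import numpy as np

# Steady 1-D convection-diffusion of a scalar phi on n control volumes:
# d(rho*u*phi)/dx = d(Gamma dphi/dx)/dx, Dirichlet values at both ends.
n, L = 50, 1.0
dx = L / n
rho_u = 1.0              # convective mass flux (assumed positive, constant)
gamma = 0.02             # diffusion coefficient
phi_0, phi_L = 1.0, 0.0  # boundary values

D = gamma / dx           # diffusive conductance per face (central scheme)
F = rho_u                # convective flux per face

A = np.zeros((n, n)); b = np.zeros(n)
for i in range(n):
    aW = D + max(F, 0.0)             # upwind: inflow face takes upstream phi
    aE = D + max(-F, 0.0)
    A[i, i] = aW + aE
    if i > 0:     A[i, i - 1] = -aW
    else:         b[i] += aW * phi_0  # boundary folded into the source term
    if i < n - 1: A[i, i + 1] = -aE
    else:         b[i] += aE * phi_L  # (boundary faces use full-cell spacing
                                      #  here for brevity)
phi = np.linalg.solve(A, b)           # a real code would use an iterative
                                      # solver such as Stone's method
```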
|
396 |
Precoder Designs for Receivers with Channel Estimators in Fading Channels. Hasegawa, Fumihiro, 31 July 2008.
Diversity transmission is an effective technique to combat fading channels, and this thesis introduces two main ideas. Firstly, a novel precoding technique is proposed to achieve diversity transmission and improve bit error rate (BER) performance over existing linear constellation precoding (LCP) techniques. Experimental and theoretical results are presented to show that the proposed precoding schemes can outperform the existing LCP schemes in various fading channels and in additive white Gaussian noise channels. Secondly, an interleaving technique to further improve the BER performance is proposed. The proposed diversity transmission techniques are implemented for both single-carrier and orthogonal frequency division multiplexing (OFDM) systems. The second part of the thesis focuses on the pairwise error probability analysis of the proposed and LCP schemes when receivers have imperfect channel state information (CSI). The BER performance of the proposed precoding and interleaving scheme is investigated in OFDM systems with minimum mean square error channel estimators and in single-carrier systems with channel estimators based on a basis expansion model. It is demonstrated that while precoding schemes designed for receivers with perfect CSI yield near-optimum BER performance in the former system, the proposed phase-shift keying based precoding schemes perform well in the latter system. In both cases, the proposed precoding scheme, combined with the novel interleaving technique, outperforms the existing LCP schemes.
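A toy illustration of the generic constellation-rotation idea behind LCP-style diversity (not the thesis's specific precoder): rotating a pair of symbols by a unitary matrix spreads each symbol over two independently fading channel uses, so a single deep fade no longer erases a symbol. The rotation angle below is a common textbook choice, assumed here for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
theta = 0.5 * np.arctan(2.0)           # illustrative rotation angle
U = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

bits = rng.integers(0, 2, size=(2, 10000))
x = 2.0 * bits - 1.0                   # BPSK symbol pairs
s = U @ x                              # precoded (rotated) pairs

# Two independent Rayleigh fades per pair, plus complex Gaussian noise.
h = (rng.standard_normal(s.shape) + 1j * rng.standard_normal(s.shape)) / np.sqrt(2)
n = 0.1 * (rng.standard_normal(s.shape) + 1j * rng.standard_normal(s.shape))
y = h * s + n

# Joint ML detection over the four candidate bit pairs, assuming perfect CSI.
pairs = np.array([[0, 0, 1, 1], [0, 1, 0, 1]])
cands = U @ (2.0 * pairs - 1.0)                              # 2 x 4
metrics = np.abs(y[:, None, :] - h[:, None, :] * cands[:, :, None])**2
best = metrics.sum(axis=0).argmin(axis=0)
ber = np.mean(pairs[:, best] != bits)  # drops sharply vs. unrotated BPSK
```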
|
397 |
Adaptive Range Counting and Other Frequency-Based Range Query Problems. Wilkinson, Bryan T., January 2012.
We consider variations of range searching in which, given a query range, our goal is to compute some function based on frequencies of points that lie in the range. The most basic such computation involves counting the number of points in a query range. Data structures that compute this function solve the well-studied range counting problem. We consider adaptive and approximate data structures for the 2-D orthogonal range counting problem under the w-bit word RAM model. The query time of an adaptive range counting data structure is sensitive to k, the number of points being counted. We give an adaptive data structure that requires O(n loglog n) space and O(loglog n + log_w k) query time. Non-adaptive data structures on the other hand require Ω(log_w n) query time (Pătraşcu, 2007). Our specific bounds are interesting for two reasons. First, when k=O(1), our bounds match the state of the art for the 2-D orthogonal range emptiness problem (Chan et al., 2011). Second, when k=Θ(n), our data structure is tight to the aforementioned Ω(log_w n) query time lower bound.
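For concreteness, here is a simple comparison-based 2-D orthogonal range counting structure (a merge-sort tree) with O(log^2 n) query time; it is far above the word-RAM bounds discussed here and is only meant to make the problem concrete. Names are illustrative.

```python
import bisect

class RangeCount2D:
    """Count points with x in [x1, x2] and y in [y1, y2]."""
    def __init__(self, points):
        self.pts = sorted(points)                 # sort points by x
        n = len(self.pts)
        self.tree = [[] for _ in range(4 * n)]    # sorted y-lists per node
        self._build(1, 0, n - 1)

    def _build(self, node, lo, hi):
        self.tree[node] = sorted(p[1] for p in self.pts[lo:hi + 1])
        if lo < hi:
            mid = (lo + hi) // 2
            self._build(2 * node, lo, mid)
            self._build(2 * node + 1, mid + 1, hi)

    def _query(self, node, lo, hi, i, j, y1, y2):
        if j < lo or hi < i:
            return 0
        if i <= lo and hi <= j:                   # node fully inside x-range:
            ys = self.tree[node]                  # binary search the y-range
            return bisect.bisect_right(ys, y2) - bisect.bisect_left(ys, y1)
        mid = (lo + hi) // 2
        return (self._query(2 * node, lo, mid, i, j, y1, y2) +
                self._query(2 * node + 1, mid + 1, hi, i, j, y1, y2))

    def count(self, x1, x2, y1, y2):
        i = bisect.bisect_left(self.pts, (x1, float('-inf')))
        j = bisect.bisect_right(self.pts, (x2, float('inf'))) - 1
        if i > j:
            return 0
        return self._query(1, 0, len(self.pts) - 1, i, j, y1, y2)

rc = RangeCount2D([(1, 5), (2, 3), (4, 8), (6, 1)])
print(rc.count(1, 4, 2, 8))   # -> 3
```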
We also give approximate data structures for 2-D orthogonal range counting whose bounds match the state of the art for the 2-D orthogonal range emptiness problem. Our first data structure requires O(n loglog n) space and O(loglog n) query time. Our second data structure requires O(n) space and O(log^ε n) query time for any fixed constant ε>0. These data structures compute an approximation k' such that (1-δ)k≤k'≤(1+δ)k for any fixed constant δ>0.
The range selection query problem in an array involves finding the kth lowest element in a given subarray. Range selection in an array is very closely related to 3-sided 2-D orthogonal range counting. An extension of our technique for 3-sided 2-D range counting yields an efficient solution to adaptive range selection in an array. In particular, we present an adaptive data structure that requires O(n) space and O(log_w k) query time, exactly matching a recent lower bound (Jørgensen and Larsen, 2011).
We next consider a variety of frequency-based range query problems in arrays. We give efficient data structures for the range mode and least frequent element query problems and also exhibit the hardness of these problems by reducing Boolean matrix multiplication to the construction and use of a range mode or least frequent element data structure. We also give data structures for the range α-majority and α-minority query problems. An α-majority is an element whose frequency in a subarray is greater than an α fraction of the size of the subarray; any other element is an α-minority. Surprisingly, geometric insights prove to be useful even in the design of our 1-D range α-majority and α-minority data structures.
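The α-majority definition, made concrete with a brute-force check (the thesis's data structures answer such queries far faster):

```python
from collections import Counter

# Report the alpha-majorities of A[i..j]: elements whose frequency in the
# subarray exceeds alpha times the subarray's size.
def alpha_majorities(A, i, j, alpha):
    window = A[i:j + 1]
    threshold = alpha * len(window)
    return [x for x, c in Counter(window).items() if c > threshold]

# Example: 3 occurs in more than half of A[1..5].
print(alpha_majorities([1, 3, 3, 2, 3, 3, 4], 1, 5, 0.5))  # -> [3]
```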
|
398 |
On the WKB Asymptotic Solutions of Differential Equations of the Hypergeometric Type. Aksoy, Betul, 01 December 2004.
The WKB procedure is used in the study of asymptotic solutions of differential equations of the hypergeometric type. Hence, asymptotic forms of the classical orthogonal polynomials associated with the names of Jacobi, Laguerre and Hermite have been derived. In particular, the asymptotic expansion of the Jacobi polynomials $P^{(\alpha,\beta)}_n(x)$ as $n$ tends to infinity is emphasized.
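For reference, the classical Darboux-type asymptotic that such an analysis recovers for the Jacobi polynomials, quoted here (from memory of Szegő, *Orthogonal Polynomials*, Thm. 8.21.8) rather than from the thesis itself, reads, for $0 < \theta < \pi$:

```latex
\[
  P^{(\alpha,\beta)}_n(\cos\theta)
  = n^{-1/2}\, k(\theta)\,
    \cos\!\Big( \big(n + \tfrac{\alpha+\beta+1}{2}\big)\theta
                - \tfrac{(2\alpha+1)\pi}{4} \Big)
  + O\!\big(n^{-3/2}\big),
\]
\[
  k(\theta) = \pi^{-1/2}
              \Big(\sin\tfrac{\theta}{2}\Big)^{-\alpha-1/2}
              \Big(\cos\tfrac{\theta}{2}\Big)^{-\beta-1/2}.
\]
```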
|
399 |
Improving interpretation by orthogonal variation: Multivariate analysis of spectroscopic data. Stenlund, Hans, January 2011.
The desire to use the tools and concepts of chemometrics when studying problems in the life sciences, especially biology and medicine, has prompted chemometricians to shift their focus away from the field's traditional emphasis on model predictivity and towards the more contemporary objective of optimizing information exchange via model interpretation. The complex data structures captured by modern advanced analytical instruments open up new possibilities for extracting information from complex data sets. This in turn imposes higher demands on the quality of the data and the modeling techniques used. The introduction of the concept of orthogonal variation in the late 1990s led to a shift of focus within chemometrics; the information gained from the analysis of orthogonal structures complements that obtained from the predictive structures that were the discipline's previous focus. OPLS, introduced in the early 2000s, refined this view by formalizing the model structure and the separation of orthogonal variation. Orthogonal variation stems from experimental and analytical issues such as time trends, process drift, storage, sample handling and instrumental differences, or from inherent properties of the sample such as age, gender, genetics and environmental influence. The usefulness and versatility of OPLS have been demonstrated in over 500 citations, mainly in the fields of metabolomics and transcriptomics but also in NIR, UV and FTIR spectroscopy. In all cases, the predictive precision of OPLS is identical to that of PLS, but OPLS is superior when it comes to the interpretation of both predictive and orthogonal variation. Thus, OPLS models the same data structures but provides increased scope for interpretation, making it more suitable for contemporary applications in the life sciences. This thesis discusses four different research projects, including analyses of NIR, FTIR and NMR spectroscopic data. The discussion includes comparisons of OPLS and PLS models of complex datasets in which experimental variation conceals and confounds relevant information. The PLS and OPLS methods are discussed in detail; a minimal sketch of the basic OPLS filtering step is given below. In addition, the thesis describes new OPLS-based methods developed to accommodate hyperspectral images for supervised modeling. Proper handling of orthogonal structures revealed the weaknesses in the analytical chains examined. In all of the studies described, the orthogonal structures were used both to validate the quality of the generated models and to gain new knowledge. These aspects are crucial for enhancing the information exchange from both past and future studies.
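A minimal sketch of the single-orthogonal-component OPLS filtering step in the spirit of Trygg and Wold (2002); this is an illustration under the assumption of centered data, not the thesis's implementation:

```python
import numpy as np

# X (n x p) and y (n,) are assumed mean-centered. One OPLS round finds the
# predictive direction w, then removes from X one component that is
# orthogonal to y (the "orthogonal variation").
def opls_one_orthogonal(X, y):
    w = X.T @ y
    w /= np.linalg.norm(w)                 # predictive weight vector
    t = X @ w                              # predictive scores
    p_load = X.T @ t / (t @ t)             # loading vector for t
    w_o = p_load - (w @ p_load) * w        # part of the loading orthogonal to w
    w_o /= np.linalg.norm(w_o)
    t_o = X @ w_o                          # orthogonal scores
    p_o = X.T @ t_o / (t_o @ t_o)          # orthogonal loading
    X_filtered = X - np.outer(t_o, p_o)    # X with orthogonal variation removed
    return X_filtered, t_o, p_o, w_o
```

The filtered X can then be fed to an ordinary PLS regression; the removed scores t_o are what carry the interpretable orthogonal structure (instrument drift, storage effects, and so on) discussed above.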
|
400 |
Expansion methods applied to distributions and risk measurement in financial markets. Marumo, Kohei, January 2007.
Obtaining the distribution of the profit and loss (PL) of a portfolio is a key problem in market risk measurement. However, existing methods, such as those based on the Normal distribution, and historical simulation methods, which use the empirical distribution of risk factors, face difficulties in dealing with at least one of the following three problems: describing the distributional properties of risk factors appropriately (the description problem); deriving distributions of risk factors with time horizons longer than one day (the time aggregation problem); and deriving the distribution of the PL given the distributional properties of the risk factors (the risk aggregation problem). Here, we show that expansion methods can provide reasonable solutions to all three problems. Expansion methods approximate a probability density function by a sum of orthogonal polynomials multiplied by an associated weight function. One of the most important advantages of expansion methods is that they require only the moments of the target distribution up to some order to obtain an approximation. They therefore have the potential to be applied in a wide range of situations, including attempts to solve the three problems listed above. On the other hand, it is also known that expansions lack robustness: they often exhibit non-negligible negative density, and their approximation quality can be extremely poor. This has limited the application of expansion methods in existing studies. In this thesis, we first develop techniques to provide robustness, with which expansion methods achieve a practical approximation quality in a wider range of examples than investigated to date. Specifically, we investigate three techniques: standardisation, the use of Laguerre expansions, and optimisation. Standardisation applies expansion methods to a variable that has been transformed so that its first and second moments match those of the weight function. The use of Laguerre expansions applies those expansions to a risk factor so that heavy tails can be captured better. Optimisation considers expansions whose polynomial coefficients are optimised to minimise the mean integrated squared error between the approximation and the target distribution. We show, by numerical examples using data sets of stock index returns and log differences of implied volatility, and GARCH models, that expansions with our techniques are more robust than conventional expansion methods. As such, marginal distributions of risk factors can be approximated by expansion methods. This solves part of the description problem: the information on the marginal distributions of risk factors can be summarised by their moments. We then show that the dependence structure among risk factors can be summarised in terms of their cross-moments, which solves the other part of the description problem. We also use the fact that the moments of the PL can be aggregated from the moments and cross-moments of the risk factors to show that expansion methods can be applied to both the time and risk aggregation problems. Furthermore, we introduce expansion methods for multivariate distributions, which can also be used to approximate conditional expectations and copula densities by rational functions.
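A hedged sketch of the moment-based idea, using the classical Gram-Charlier A series (one of the simplest such expansions, with the normal density as weight function and probabilists' Hermite polynomials); this illustrates the general technique, not the thesis's refined versions:

```python
import numpy as np

# Density approximation for a zero-mean, unit-variance variable, built
# only from its third and fourth moments (skewness and excess kurtosis).
def gram_charlier_pdf(x, skew, ex_kurt):
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # normal weight function
    He3 = x**3 - 3 * x                            # probabilists' Hermite
    He4 = x**4 - 6 * x**2 + 3
    return phi * (1 + skew * He3 / 6 + ex_kurt * He4 / 24)

# Example: mild negative skew and heavy tails, as in equity index returns.
x = np.linspace(-5, 5, 11)
f = gram_charlier_pdf(x, skew=-0.5, ex_kurt=1.5)
# Note the robustness issue raised above: without the thesis's
# standardisation/optimisation techniques, f can go negative in the tails.
```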
|