161 |
Fast Order Basis and Kernel Basis Computation and Related Problems
Zhou, Wei, 28 November 2012 (has links)
In this thesis, we present efficient deterministic algorithms
for polynomial matrix computation problems, including the computation
of order basis, minimal kernel basis, matrix inverse, column basis,
unimodular completion, determinant, Hermite normal form, rank and
rank profile for matrices of univariate polynomials over a field.
The algorithm for kernel basis computation also immediately provides
an efficient deterministic algorithm for solving linear systems. The
algorithm for column basis also gives efficient deterministic algorithms
for computing matrix GCDs, column reduced forms, and Popov normal
forms for matrices of any dimension and any rank.
We reduce all these problems to polynomial matrix multiplication. The computational costs of our algorithms are therefore similar to the cost of multiplying matrices whose dimensions match the input matrix dimensions of the original problems and whose degrees, in most cases, equal the average column degrees of the original input matrices. Using the average column degrees instead of the commonly used matrix degree (equivalently, the maximum column degree) gives more precise, tighter cost bounds. In addition, the shifted minimal bases
computed by our algorithms are more general than the standard minimal
bases.
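The kernel basis computation described above can be illustrated on a small example. This is only a sketch: it uses SymPy's generic nullspace routine over the fraction field, not the thesis's fast multiplication-based algorithm, and the matrix is invented.

```python
import sympy as sp

x = sp.symbols('x')

# A 2x3 matrix over Q[x]; its right kernel {v : F*v = 0} is one-dimensional.
F = sp.Matrix([[1, x, x**2],
               [0, 1, x]])

kernel = F.nullspace()       # kernel basis over the fraction field Q(x)
v = kernel[0]                # here: (0, -x, 1) up to scaling
assert sp.simplify(F * v) == sp.zeros(2, 1)   # F annihilates the basis vector
```

A kernel basis of the full polynomial module would additionally require clearing denominators and enforcing minimal degrees, which is exactly where the specialized algorithms earn their cost bounds.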
|
162 |
Stable Bases for Kernel Based Methods
Pazouki, Maryam, 13 June 2012 (has links)
No description available.
|
164 |
Groebner Basis Algorithms for Polynomial Ideal Theory over Noetherian Commutative Rings
Francis, Maria, January 2017 (has links) (PDF)
One of the fundamental problems in commutative algebra and algebraic geometry is to understand the nature of the solution space of a system of multivariate polynomial equations over a field k, such as the real or complex numbers. An important algorithmic tool in this study is the notion of Groebner bases (Buchberger (1965)). Given a system of polynomial equations f1 = 0, ..., fm = 0, a Groebner basis is a "canonical" generating set of the ideal generated by f1, ..., fm that can answer, constructively, many questions in computational ideal theory. It generalizes several concepts of univariate polynomials, such as resultants, to the multivariate case, and answers the ideal membership problem decisively. The dimension of the solution set of an ideal I (called its affine variety), an important concept in algebraic geometry, is equal to the Krull dimension of the corresponding coordinate ring, k[x1,...,xn]/I. Groebner bases were first introduced to compute k-vector space bases of k[x1,...,xn]/I and to use them to characterize zero-dimensional solution sets. Since then, Groebner basis techniques have provided a generic algorithmic framework for computations in control theory, cryptography, formal verification, robotics, etc., that involve multivariate polynomials over fields.
The main aim of this thesis is to study problems related to computational ideal theory over Noetherian commutative rings (e.g., the ring of integers Z, or the polynomial ring over a field, k[y1,...,ym]) using the theory of Groebner bases. These problems surface in many domains, including lattice-based cryptography, control systems, and system-on-chip design.
Although formal and standard techniques are available for polynomial rings over fields, the presence of zero divisors and non-units makes developing similar techniques for polynomial rings over rings challenging.
Given a polynomial ring over a Noetherian commutative ring A and an ideal I in A[x1,...,xn], the first fundamental problem that we study is whether the residue class polynomial ring A[x1,...,xn]/I is a free A-module. Note that when A = k, the answer is always 'yes', and the k-vector space basis of k[x1,...,xn]/I plays an important role in computational ideal theory over fields. In our work, we give a Groebner basis characterization for A[x1,...,xn]/I to have a free A-module representation w.r.t. a monomial ordering. For such A-algebras, we give an algorithm to compute an A-module basis. This extends the Macaulay-Buchberger basis theorem to polynomial rings over Noetherian commutative rings. These results help us develop a theory of border bases in A[x1,...,xn] when the residue class polynomial ring is finitely generated. The theory of border bases is handled as two separate cases: (i) A[x1,...,xn]/I is free and (ii) A[x1,...,xn]/I has torsion submodules.
For the special case A = Z, we show how short reduced Groebner bases and the characterization of a free A-module representation help identify the cases when Z[x1,...,xn]/I is isomorphic to Z^N for some positive integer N. Ideals in such Z-algebras are called ideal lattices. These structures are interesting because the algebraic structure Z[x1,...,xn]/I can serve as a representation for point lattices, extending all the computationally hard problems of point lattice theory to Z[x1,...,xn]/I. Univariate ideal lattices are widely used in lattice-based cryptography because they represent lattices more compactly than matrices do. In this thesis, we give a characterization of multivariate ideal lattices and construct collision-resistant hash functions based on them using Groebner basis techniques. For the construction of hash functions, we define a worst-case problem, the shortest substitution problem w.r.t. an ideal in Z[x1,...,xn], and establish hardness results for it.
Finally, we develop an approach to compute the Krull dimension of A[x1,...,xn]/I using Groebner bases, when A is a Noetherian integral domain. When A is a field, the Krull dimension of A[x1,...,xn]/I has several equivalent algorithmic definitions by which it can be computed. But this is not true in the case of arbitrary Noetherian rings. We introduce the notion of combinatorial dimension of A[x1,...,xn]/I and give a Groebner basis method to compute it for residue class polynomial rings that have a free A-module representation w.r.t. a lexicographic ordering. For such A-algebras, we derive a relation between Krull dimension and combinatorial dimension of A[x1,...,xn]/I. For A-algebras that have a free A-module representation w.r.t. degree compatible monomial orderings, we introduce the concepts of Hilbert function, Hilbert series and Hilbert polynomials and show that Groebner basis methods can be used to compute these quantities. We then proceed to show that the combinatorial dimension of such A-algebras is equal to the degree of the Hilbert polynomial. This enables us to extend the relation between Krull dimension and combinatorial dimension to A-algebras with a free A-module representation w.r.t. a degree compatible ordering as well.
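For the classical field case that this thesis generalizes, the ideal membership test via Groebner bases can be sketched with SymPy; the generators below are invented for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x**2 + y**2 - 1, x - y              # generators of the ideal I
G = sp.groebner([f1, f2], x, y, order='lex')

# f lies in I iff its normal form (remainder) modulo G is zero.
f = y * f1 + x**2 * f2                       # an explicit member of I
q, r = G.reduce(f)
assert r == 0                                # remainder 0 => f is in I
assert G.reduce(x + y)[1] != 0               # x + y is not in I
```

Over a Noetherian ring such as Z, the analogous reduction must cope with non-invertible leading coefficients, which is precisely what the thesis's generalized machinery addresses.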
|
165 |
Novel methods for species distribution mapping including spatial models in complex regions
Scott-Hayward, Lindesay Alexandra Sarah, January 2013 (links)
Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment.

The first new method, the Complex Region Spatial Smoother [CReSS; Scott-Hayward et al., 2013], improves smoothing in areas where the real distance an animal must travel ('as the animal swims') between two points may be greater than the straight-line distance between them, a problem that occurs in complex domains with coastline or islands. CReSS uses estimates of the geodesic distance between points, model averaging and local radial smoothing. Simulation is used to compare its performance with other traditional and recently developed smoothing techniques: Thin Plate Splines (TPS; Harder and Desmarais [1972]), Geodesic Low-rank TPS (GLTPS; Wang and Ranalli [2007]) and the Soap film smoother (SOAP; Wood et al. [2008]). GLTPS cannot be used in areas with islands, and SOAP can be very hard to parametrise. CReSS outperforms all of the other methods on a range of simulations, based on their fit to the underlying function as measured by mean squared error, particularly for sparse data sets.

Smoothing functions need to be flexible when they are used to model density surfaces that are highly heterogeneous, in order to avoid biases due to under- or over-fitting. This issue was addressed using an adaptation of a Spatially Adaptive Local Smoothing Algorithm (SALSA; Walker et al. [2010]) in combination with the CReSS method (CReSS-SALSA2D). Unlike traditional methods, such as Generalised Additive Modelling, the adaptive knot selection approach used in SALSA2D naturally accommodates local changes in the smoothness of the density surface being modelled. At the time of writing, there are no other methods available to deal with this issue in topographically complex regions. Simulation results show that CReSS-SALSA2D performs better than CReSS (based on MSE scores), except at very high noise levels, where there is an issue with over-fitting.

There is an increasing need for a facility to combine multiple density surface maps of individual species in order to make best use of meta-databases, to maintain existing maps, and to extend their geographical coverage. This thesis develops a framework and methods for combining species distribution maps as new information becomes available. The methods use Bayes' Theorem to combine density surfaces, taking account of the levels of precision associated with the different sets of estimates, and kernel smoothing to alleviate artefacts that may be created where pairs of surfaces join. The methods were used as part of an algorithm (the Dynamic Cetacean Abundance Predictor) designed for BAE Systems to aid in risk mitigation for naval exercises.

Two case studies show the capabilities of CReSS and CReSS-SALSA2D when applied to real ecological data. In the first case study, CReSS was used in a Generalised Estimating Equation framework to identify a candidate Marine Protected Area for the Southern Resident Killer Whale population to the south of San Juan Island, off the Pacific coast of the United States. In the second case study, changes in the spatial and temporal distribution of harbour porpoise and minke whale in north-western European waters over a period of 17 years (1994-2010) were modelled. CReSS and CReSS-SALSA2D performed well in a large, topographically complex study area. Based on simulation results, maps produced using these methods are more accurate than those from a traditional GAM-based method. The resulting maps identified particularly high densities of both harbour porpoise and minke whale in an area off the west coast of Scotland in 2010 that might be a candidate for inclusion in the Scottish network of Nature Conservation Marine Protected Areas.
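The Bayes-Theorem combination of density surfaces described above can be sketched under a simplifying assumption of cell-wise normal estimates with known standard errors; all grid values below are invented.

```python
import numpy as np

def combine_surfaces(mean_a, se_a, mean_b, se_b):
    """Cell-wise precision-weighted (inverse-variance) combination of two
    density estimates -- the normal-normal conjugate update."""
    w_a, w_b = 1.0 / se_a**2, 1.0 / se_b**2
    mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    se = np.sqrt(1.0 / (w_a + w_b))          # combined estimate is more precise
    return mean, se

# Two 2x2 density grids of the same region from surveys of unequal precision.
m1 = np.array([[1.0, 2.0], [3.0, 4.0]]); s1 = np.full((2, 2), 0.5)
m2 = np.array([[1.5, 2.5], [2.5, 3.5]]); s2 = np.full((2, 2), 1.0)
mean, se = combine_surfaces(m1, s1, m2, s2)
# With se 0.5 vs 1.0 the first survey carries weight 4/5 in each cell.
```

The thesis's method additionally kernel-smooths across the seams where surfaces join; that step is omitted here.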
|
166 |
A Study of Basis Risk: The Case of Basis Swaps (基差風險之探討-以Basis Swap為例)
陳憶芳, Unknown Date (has links)
The global financial tsunami of 2008 will surely stand as one of the major financial events in history; its early warning signs can be traced to the worsening of the U.S. subprime mortgage crisis in 2007. When this crisis will end is disputed among experts in every field: some claim it is the most severe since the Second World War and will last at least four to five years; some say the economic depression will drag on until 2010; optimistic bulls claim that signs of recovery could already be sensed in the second half of this year (2009). As I write this thesis, the atmosphere of recession still lingers. Although this thesis does not examine the causes and consequences of this historic financial tsunami, the basis risk that motivates its topic is connected to it: after the U.S. subprime crisis deteriorated, financial markets reacted with a widening of the quoted basis between 3M LIBOR and 6M LIBOR.
This thesis focuses on the interest rate trades in a trading desk's portfolio. Whatever floating rate index is used (1M, 3M, 6M or 12M LIBOR), the revaluation curve used to price products embedding these indices was originally constructed entirely from market prices of the 3M swap rate (USD 3M LIBOR). By incorporating the basis factor, that is, by reflecting actual market basis quotes to build separate revaluation curves for trades referencing different floating rate indices, we compare the portfolio's valuation P&L before and after incorporating the basis factor, and under various simulated basis market scenarios, to illustrate the extent to which ignoring basis risk affects the portfolio.
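The valuation difference described above can be shown with a toy example; the notional, discount factors, forwards and basis spread below are invented, not market data.

```python
# P&L impact on a floating leg indexed to 6M LIBOR when the projection curve
# built from 3M swap rates ignores the quoted 3M/6M basis spread.
notional = 10_000_000.0
basis = 0.0020                      # 20 bp quoted 3M-vs-6M basis (invented)
df = [0.985, 0.970]                 # toy discount factors at 0.5y and 1.0y
fwd_3m_curve = [0.030, 0.031]       # 6M forwards implied by the 3M-built curve

pv_ignoring_basis = sum(f * 0.5 * d * notional
                        for f, d in zip(fwd_3m_curve, df))
pv_with_basis = sum((f + basis) * 0.5 * d * notional
                    for f, d in zip(fwd_3m_curve, df))
print(round(pv_with_basis - pv_ignoring_basis, 2))   # 19550.0 on these inputs
```

The gap scales linearly with the basis spread, which is why the widening of 3M/6M quotes after 2007 made basis-aware curve construction material for portfolio valuation.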
|
167 |
Uma regra para a polarização de funções de base geradas pelo método da coordenada geradora / A rule for the polarization of Gaussian basis functions obtained with the generator coordinate method
Maringolo, Milena Palhares, 22 October 2010 (has links)
The Polynomial Generator Coordinate Hartree-Fock Method (pGCM-HF), developed by R.C. Barbosa and A.B.F. da Silva [1], is a valuable mathematical tool for generating basis functions (also known as basis sets). The basis sets generated by this method are well behaved and able to produce accurate values of molecular electronic properties. However, after generating basis sets from hydrogen through fluorine [2], it became necessary to add exponents to the basis set of each atom in order to better adapt it to molecular calculations. These additional functions are called polarization functions. Adding polarization functions through computational optimization is very costly, so a polarization rule that sidesteps this optimization is of great importance, and developing one is the aim of this work.
This dissertation therefore studies a procedure for choosing polarization functions that drastically reduces the computational time: it permits a simple selection of exponents from the primitive basis set itself to be used as the polarization functions p, d, f, g, etc., for obtaining molecular properties calculated with quantum chemical methods. The selection of polarization functions using this methodology was tested with several quantum chemical methods.
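The selection idea can be sketched as follows. The concrete rule is the dissertation's subject, so the mid-range heuristic and the exponent values below are only illustrative assumptions, not the authors' rule.

```python
def pick_polarization(primitive_exponents, n):
    """Reuse n mid-range exponents from the primitive set as polarization
    exponents, instead of optimizing new exponents separately (which is
    computationally costly)."""
    exps = sorted(primitive_exponents, reverse=True)
    mid = len(exps) // 2
    return exps[mid - n // 2 : mid - n // 2 + n]

# Invented s-type primitive exponents for a first-row atom.
s_primitives = [1100.0, 165.0, 37.0, 10.0, 3.0, 0.9, 0.28, 0.09]
d_polarization = pick_polarization(s_primitives, 2)   # [10.0, 3.0]
```

The point of any such rule is that the candidate exponents already exist in the primitive set, so no separate nonlinear optimization over exponent space is needed.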
|
168 |
Predictor development for controlling real-time applications over the Internet
Kommaraju, Mallik, 25 April 2007 (has links)
Over the past decade there has been a growing demand for interactive multimedia applications deployed over public IP networks. To achieve acceptable Quality of Service (QoS) without significantly modifying the existing infrastructure, the end-to-end applications need to optimize their behavior and adapt according to network characteristics. Most existing application optimization techniques are based on reactive strategies, i.e. reacting to occurrences of congestion. We propose the use of predictive control to address the problem in an anticipatory manner. This research deals with developing models to predict end-to-end single flow characteristics of Wide Area Networks (WANs).

A novel signal, in the form of single flow packet accumulation, is proposed for feedback purposes. This thesis presents a variety of effective predictors for the above signal using Auto-Regressive (AR) models, Radial Basis Functions (RBF) and Sparse Basis Functions (SBF). The study consists of three sections. We first develop time-series models to predict the accumulation signal. Since encoder bit-rate is the most logical and generic control input, a statistical analysis is conducted to analyze the effect of input bit-rate on end-to-end delay and the accumulation signal. Finally, models are developed using this bit-rate as an input to predict the resulting accumulation signal.

The predictors are evaluated based on Noise-to-Signal Ratio (NSR) along with their accuracy with increasing accumulation levels. In time-series models, RBF gave the best NSR, closely followed by AR models. Analysis based on accuracy with increasing accumulation levels showed AR to be better in some cases. The study on the effect of bit-rate revealed that bit-rate may not be a good control input on all paths. Models such as Auto-Regressive with Exogenous input (ARX) and RBF were used to predict the accumulation signal using bit-rate as a modeling input. ARX and RBF models were found to give comparable accuracy, with RBF being slightly better.
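The time-series route described above can be sketched with an ordinary least-squares AR fit on a synthetic signal; the real signal is the measured packet accumulation, and all parameters here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the accumulation signal: an AR(2) process
#   x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + e_t
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.normal(scale=0.1)

p = 2                                          # AR model order
X = np.column_stack([x[p - k - 1 : n - k - 1]  # lag-1..lag-p regressors
                     for k in range(p)])
y = x[p:]                                      # one-step-ahead targets
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares AR coefficients
pred = X @ coef
nsr = np.var(y - pred) / np.var(y)             # Noise-to-Signal Ratio
```

With enough samples the fitted coefficients recover (1.5, -0.7) closely, and the NSR approaches the ratio of innovation variance to signal variance; an RBF predictor would replace the linear regressors with radial kernel features of the lagged values.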
|
169 |
Reduced basis methods for parametrized partial differential equations
Eftang, Jens Lohne, January 2011 (links)
No description available.
|