Groebner Basis Algorithms for Polynomial Ideal Theory over Noetherian Commutative Rings

Francis, Maria January 2017 (has links) (PDF)
One of the fundamental problems in commutative algebra and algebraic geometry is to understand the nature of the solution space of a system of multivariate polynomial equations over a field k, such as the real or complex numbers. An important algorithmic tool in this study is the notion of Groebner bases (Buchberger (1965)). Given a system of polynomial equations, f1 = 0, ..., fm = 0, a Groebner basis is a "canonical" generating set of the ideal generated by f1, ..., fm that can answer, constructively, many questions in computational ideal theory. It generalizes several concepts of univariate polynomials, like resultants, to the multivariate case, and answers decisively the ideal membership problem. The dimension of the solution set of an ideal I, called the affine variety, an important concept in algebraic geometry, is equal to the Krull dimension of the corresponding coordinate ring, k[x1, ..., xn]/I. Groebner bases were first introduced to compute k-vector space bases of k[x1, ..., xn]/I and use them to characterize zero-dimensional solution sets. Since then, Groebner basis techniques have provided a generic algorithmic framework for computations in control theory, cryptography, formal verification, robotics, etc., that involve multivariate polynomials over fields. The main aim of this thesis is to study problems related to computational ideal theory over Noetherian commutative rings (e.g., the ring of integers Z, or the polynomial ring over a field, k[y1, ..., ym]) using the theory of Groebner bases. These problems surface in many domains, including lattice-based cryptography, control systems, and system-on-chip design. Although formal and standard techniques are available for polynomial rings over fields, the presence of zero divisors and non-units makes developing similar techniques for polynomial rings over rings challenging. Given a polynomial ring A[x1, ..., xn] over a Noetherian commutative ring A and an ideal I in A[x1, ..., xn], the first fundamental problem that we study is whether the residue class polynomial ring A[x1, ..., xn]/I is a free A-module or not. Note that when A = k, the answer is always 'yes', and the k-vector space basis of k[x1, ..., xn]/I plays an important role in computational ideal theory over fields. In our work, we give a Groebner basis characterization for A[x1, ..., xn]/I to have a free A-module representation w.r.t. a monomial ordering. For such A-algebras, we give an algorithm to compute an A-module basis. This extends the Macaulay-Buchberger basis theorem to polynomial rings over Noetherian commutative rings. These results help us develop a theory of border bases in A[x1, ..., xn] when the residue class polynomial ring is finitely generated. The theory of border bases is handled as two separate cases: (i) A[x1, ..., xn]/I is free and (ii) A[x1, ..., xn]/I has torsion submodules. For the special case A = Z, we show how short reduced Groebner bases and the characterization of a free A-module representation help identify the cases when Z[x1, ..., xn]/I is isomorphic to Z^N for some positive integer N. Ideals in such Z-algebras are called ideal lattices. These structures are interesting since we can use the algebraic structure Z[x1, ..., xn]/I as a representation for point lattices and extend all the computationally hard problems in point lattice theory to Z[x1, ..., xn]/I. Univariate ideal lattices are widely used in lattice-based cryptography, for they are a more compact representation of lattices than matrices.
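For the classical field case that the thesis generalizes, the role of a Groebner basis in deciding ideal membership can be sketched in a few lines of sympy; the example ideal below is a made-up illustration, not one taken from the thesis.

```python
# Groebner basis and ideal membership over a field (the classical setting),
# sketched with sympy; the example ideal is a made-up illustration.
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
g1, g2 = x**2 + y**2 - 1, x*y - 1          # generators of an ideal I in Q[x, y]
G = groebner([g1, g2], x, y, order='lex')  # a "canonical" generating set of I

# f lies in I by construction: f = x*g1 + y*g2
f = (x*g1 + y*g2).expand()                 # x**3 + 2*x*y**2 - x - y
_, r = reduced(f, G.exprs, x, y, order='lex')
print(r)                                   # remainder 0 certifies membership
```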
In this thesis, we give a characterization of multivariate ideal lattices and construct collision-resistant hash functions based on them using Groebner basis techniques. For the construction of hash functions, we define a worst-case problem, the shortest substitution problem w.r.t. an ideal in Z[x1, ..., xn], and establish hardness results for it. Finally, we develop an approach to compute the Krull dimension of A[x1, ..., xn]/I using Groebner bases, when A is a Noetherian integral domain. When A is a field, the Krull dimension of A[x1, ..., xn]/I has several equivalent algorithmic definitions by which it can be computed, but this is not true for arbitrary Noetherian rings. We introduce the notion of the combinatorial dimension of A[x1, ..., xn]/I and give a Groebner basis method to compute it for residue class polynomial rings that have a free A-module representation w.r.t. a lexicographic ordering. For such A-algebras, we derive a relation between the Krull dimension and the combinatorial dimension of A[x1, ..., xn]/I. For A-algebras that have a free A-module representation w.r.t. degree-compatible monomial orderings, we introduce the concepts of Hilbert function, Hilbert series and Hilbert polynomial, and show that Groebner basis methods can be used to compute these quantities. We then show that the combinatorial dimension of such A-algebras equals the degree of the Hilbert polynomial. This enables us to extend the relation between Krull dimension and combinatorial dimension to A-algebras with a free A-module representation w.r.t. a degree-compatible ordering as well.
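The ideal-lattice representation mentioned above can be made concrete in the simplest univariate case. This is a generic textbook-style illustration, not the thesis construction: Z[x]/<x**2 - 2> is a free Z-module with basis {1, x}, so its elements are integer vectors in Z^2 and ring multiplication becomes integer matrix arithmetic.

```python
# Z[x]/<x**2 - 2> as a rank-2 Z-module: elements are vectors (a0, a1)
# standing for a0 + a1*x, and multiplication by x is a 2x2 integer matrix.
import numpy as np

# x*1 = x and x*x = 2*1 (since x**2 = 2), in the basis {1, x}:
MX = np.array([[0, 2],
               [1, 0]])

def mul(a, b):
    """Multiply a0 + a1*x by b0 + b1*x modulo x**2 - 2, as vectors in Z^2."""
    return b[0] * a + b[1] * (MX @ a)

u = np.array([1, 1])          # the element 1 + x
print(mul(u, u))              # (1 + x)**2 = 3 + 2x  ->  [3 2]
```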

Novel methods for species distribution mapping including spatial models in complex regions

Scott-Hayward, Lindesay Alexandra Sarah January 2013 (has links)
Species Distribution Modelling (SDM) plays a key role in a number of biological applications: assessment of temporal trends in distribution, environmental impact assessment and spatial conservation planning. From a statistical perspective, this thesis develops two methods for increasing the accuracy and reliability of maps of density surfaces and provides a solution to the problem of how to collate multiple density maps of the same region, obtained from differing sources. From a biological perspective, these statistical methods are used to analyse two marine mammal datasets to produce accurate maps for use in spatial conservation planning and temporal trend assessment. The first new method, the Complex Region Spatial Smoother [CReSS; Scott-Hayward et al., 2013], improves smoothing in areas where the real distance an animal must travel ('as the animal swims') between two points may be greater than the straight-line distance between them, a problem that occurs in complex domains with coastline or islands. CReSS uses estimates of the geodesic distance between points, model averaging and local radial smoothing. Simulation is used to compare its performance with other traditional and recently-developed smoothing techniques: Thin Plate Splines (TPS; Harder and Desmarais [1972]), Geodesic Low rank TPS (GLTPS; Wang and Ranalli [2007]) and the Soap film smoother (SOAP; Wood et al. [2008]). GLTPS cannot be used in areas with islands and SOAP can be very hard to parametrise. CReSS outperforms all of the other methods on a range of simulations, based on their fit to the underlying function as measured by mean squared error, particularly for sparse data sets. Smoothing functions need to be flexible when they are used to model density surfaces that are highly heterogeneous, in order to avoid biases due to under- or over-fitting. This issue was addressed using an adaptation of a Spatially Adaptive Local Smoothing Algorithm (SALSA, Walker et al. [2010]) in combination with the CReSS method (CReSS-SALSA2D). Unlike traditional methods, such as Generalised Additive Modelling, the adaptive knot selection approach used in SALSA2D naturally accommodates local changes in the smoothness of the density surface that is being modelled. At the time of writing, there are no other methods available to deal with this issue in topographically complex regions. Simulation results show that CReSS-SALSA2D performs better than CReSS (based on MSE scores), except at very high noise levels where there is an issue with over-fitting. There is an increasing need for a facility to combine multiple density surface maps of individual species in order to make best use of meta-databases, to maintain existing maps, and to extend their geographical coverage. This thesis develops a framework and methods for combining species distribution maps as new information becomes available. The methods use Bayes' Theorem to combine density surfaces, taking account of the levels of precision associated with the different sets of estimates, and kernel smoothing to alleviate artefacts that may be created where pairs of surfaces join. The methods were used as part of an algorithm (the Dynamic Cetacean Abundance Predictor) designed for BAE Systems to aid in risk mitigation for naval exercises. Two case studies show the capabilities of CReSS and CReSS-SALSA2D when applied to real ecological data.
In the first case study, CReSS was used in a Generalised Estimating Equation framework to identify a candidate Marine Protected Area for the Southern Resident Killer Whale population to the south of San Juan Island, off the Pacific coast of the United States. In the second case study, changes in the spatial and temporal distribution of harbour porpoise and minke whale in north-western European waters over a period of 17 years (1994-2010) were modelled. CReSS and CReSS-SALSA2D performed well in a large, topographically complex study area. Based on simulation results, maps produced using these methods are more accurate than if a traditional GAM-based method is used. The resulting maps identified particularly high densities of both harbour porpoise and minke whale in an area off the west coast of Scotland in 2010 that might be a candidate for inclusion into the Scottish network of Nature Conservation Marine Protected Areas.
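The core CReSS idea, radial smoothing with geodesic rather than straight-line distances so estimates do not "leak" across a coastline, can be sketched on a toy grid. This is not the authors' implementation; the domain, barrier, bandwidth and data below are made-up illustration values, and shortest paths on a grid graph stand in for the geodesic distances.

```python
# Toy sketch: Gaussian-kernel smoothing with geodesic distances computed as
# shortest paths on a grid graph with a barrier ("peninsula").
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

n = 20                                  # 20 x 20 grid over the unit square
free = np.ones((n, n), dtype=bool)
free[5:15, 10] = False                  # a thin peninsula blocking direct paths

idx = -np.ones((n, n), dtype=int)       # map free cells to graph node ids
idx[free] = np.arange(free.sum())

A = lil_matrix((free.sum(), free.sum()))  # 4-neighbour adjacency, unit edges
for i in range(n):
    for j in range(n):
        if not free[i, j]:
            continue
        for di, dj in ((1, 0), (0, 1)):
            k, l = i + di, j + dj
            if k < n and l < n and free[k, l]:
                A[idx[i, j], idx[k, l]] = 1.0
                A[idx[k, l], idx[i, j]] = 1.0

D = shortest_path(A.tocsr(), method='D')   # geodesic distances, all free cells

rng = np.random.default_rng(0)
obs = rng.choice(free.sum(), size=60, replace=False)   # 60 observation cells
y = rng.normal(size=60)                                # made-up responses
h = 3.0                                                # bandwidth (grid units)
W = np.exp(-0.5 * (D[:, obs] / h) ** 2)                # geodesic kernel weights
fit = (W @ y) / W.sum(axis=1)                          # Nadaraya-Watson estimate
```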

基差風險之探討-以Basis Swap為例 / A Study of Basis Risk: The Case of Basis Swaps

陳憶芳 Unknown Date (has links)
The global financial tsunami of 2008 will surely be remembered as one of the major financial events in history; its early warning signs can be traced back to the worsening of the US subprime crisis in 2007. When this tsunami would end, experts from every field disagreed: some claimed it was the most severe crisis since the Second World War and would last at least four to five years; some said the economic depression would drag on until 2010; optimistic bull-market commentators claimed that signs of recovery could already be sensed in the second half of 2009. As this thesis was being written, the atmosphere of recession still lingered. Although this thesis does not investigate the causes and consequences of this historic financial crisis, the basis risk it studies is connected to it: after the US subprime crisis worsened, the quoted basis between 3M LIBOR and 6M LIBOR widened markedly in the financial markets. This thesis examines the floating rate indices (LIBOR 1M, 3M, 6M and 12M) used in the interest rate products of a trading desk's portfolio. Originally, the revaluation curves used to price products referencing any of these indices were all constructed from 3M swap rates (USD LIBOR 3M). By incorporating the basis factor, that is, building a separate valuation curve for each floating rate index that reflects actual market basis quotes, and by comparing the portfolio's valuation before and after the basis factor is included, as well as under various simulated basis market scenarios, the thesis quantifies how severely ignoring basis risk affects a portfolio.
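A stylized numerical sketch of the effect described above: revaluing a 6M-indexed floating leg when its projection curve reflects the quoted 3s6s basis, versus projecting everything off the 3M swap curve. The flat rates and the 40 bp spread are made-up illustration values, not data from the thesis.

```python
# P&L impact of ignoring the 3M-vs-6M basis on a 5y semi-annual floating leg.
import numpy as np

notional = 10_000_000
pay_times = np.arange(0.5, 5.01, 0.5)      # semi-annual payment dates, 5 years
tau = 0.5                                  # accrual fraction

r3m = 0.030                                # flat curve built from 3M swap rates
basis_3s6s = 0.0040                        # quoted 3M-vs-6M basis, 40 bp

def leg_pv(proj_rate, disc_rate):
    # Flat curve => flat simple forwards; discount on disc_rate.
    df = np.exp(-disc_rate * pay_times)
    return notional * tau * proj_rate * df.sum()

pv_single = leg_pv(r3m, r3m)               # basis ignored
pv_dual = leg_pv(r3m + basis_3s6s, r3m)    # 6M index projected with the basis
print(f"P&L impact of ignoring the basis: {pv_dual - pv_single:,.0f}")
```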

Uma regra para a polarização de funções de base geradas pelo método da coordenada geradora / A rule for polarization of gaussian basis functions obtained with the generate coordinate method

Maringolo, Milena Palhares 22 October 2010 (has links)
The polynomial Generator Coordinate Hartree-Fock method (pGCM-HF), developed by R.C. Barbosa and A.B.F. da Silva [1], is a valuable mathematical tool for generating basis functions (also known as basis sets). The basis sets generated by this method are well behaved and able to produce accurate values for electronic molecular properties. However, after generating basis sets from hydrogen to fluorine [2], it became necessary to add exponent functions to each atom's basis set so that it is better suited to molecular calculations. These additional functions are called polarization functions. Adding polarization functions through computational optimization is very costly, so developing a polarization rule that sidesteps this optimization is of great importance, and that is the objective of this work. This dissertation therefore studies a procedure for choosing polarization functions that drastically reduces computational time: it permits a simple selection of exponents, taken directly from the primitive basis set of each atomic symmetry, to be used as the polarization functions p, d, f, g, etc. when computing molecular properties with quantum chemical methods. The procedure was tested with several quantum chemical methods.
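The abstract states the mechanism only at a high level (polarization exponents are picked from the primitive exponents rather than optimized), so the following is a shape-of-the-idea sketch only: the exponents and the "interior of the set" selection rule are hypothetical, not the thesis's actual rule.

```python
# Hypothetical sketch: select polarization exponents directly from a
# primitive exponent set instead of optimizing them. Rule and numbers
# are made up for illustration.
import numpy as np

s_exponents = np.array([2266.17, 340.87, 77.36, 21.47, 6.51, 1.06, 0.30])

def polarization_from_primitives(exps, n_pol=2):
    """Pick n_pol candidate polarization exponents from the interior
    of a primitive set (a made-up stand-in for the thesis's rule)."""
    exps = np.sort(exps)[::-1]
    mid = len(exps) // 2
    return exps[mid - n_pol // 2 : mid - n_pol // 2 + n_pol]

print(polarization_from_primitives(s_exponents))  # candidate d exponents
```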

Predictor development for controlling real-time applications over the Internet

Kommaraju, Mallik 25 April 2007 (has links)
Over the past decade there has been a growing demand for interactive multimedia applications deployed over public IP networks. To achieve acceptable Quality of Service (QoS) without significantly modifying the existing infrastructure, the end-to-end applications need to optimize their behavior and adapt according to network characteristics. Most existing application optimization techniques are based on reactive strategies, i.e. reacting to occurrences of congestion. We propose the use of predictive control to address the problem in an anticipatory manner. This research deals with developing models to predict end-to-end single flow characteristics of Wide Area Networks (WANs). A novel signal, in the form of single flow packet accumulation, is proposed for feedback purposes. This thesis presents a variety of effective predictors for the above signal using Auto-Regressive (AR) models, Radial Basis Functions (RBF) and Sparse Basis Functions (SBF). The study consists of three sections. We first develop time-series models to predict the accumulation signal. Since encoder bit-rate is the most logical and generic control input, a statistical analysis is conducted to analyze the effect of input bit-rate on end-to-end delay and the accumulation signal. Finally, models are developed using this bit-rate as an input to predict the resulting accumulation signal. The predictors are evaluated based on Noise-to-Signal Ratio (NSR) along with their accuracy with increasing accumulation levels. In time-series models, RBF gave the best NSR, closely followed by AR models. Analysis based on accuracy with increasing accumulation levels showed AR to be better in some cases. The study on the effect of bit-rate revealed that bit-rate may not be a good control input on all paths. Models such as Auto-Regressive with Exogenous input (ARX) and RBF were used to predict the accumulation signal using bit-rate as a modeling input. ARX and RBF models were found to give comparable accuracy, with RBF being slightly better.
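A minimal sketch of one of the predictor families discussed above: an auto-regressive (AR) model fit by least squares to a one-dimensional signal, scored by noise-to-signal ratio (NSR). The synthetic signal and model order are made-up illustration choices, not the thesis's data or settings.

```python
# One-step-ahead AR prediction of a synthetic "accumulation" signal.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 150) + 0.3 * rng.standard_normal(2000)

p = 8                                        # AR model order
# Row i holds [signal[t-1], ..., signal[t-p]] for target y[i] = signal[t],
# with t = p + i.
X = np.column_stack([signal[p - k - 1 : -k - 1] for k in range(p)])
y = signal[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares AR fit

pred = X @ coef
nsr = np.var(y - pred) / np.var(y)            # noise-to-signal ratio
print(f"AR({p}) one-step NSR: {nsr:.3f}")
```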

Reduced basis methods for parametrized partial differential equations

Eftang, Jens Lohne January 2011 (has links)
No description available.

On The Complexity Of Grobner Basis And Border Basis Detection

Prabhanjan, V A 08 1900 (has links) (PDF)
The theory of Gröbner bases has garnered the interest of a large number of researchers in computational algebra due to its applications not only in mathematics but also in areas like control systems, robotics and cryptography, to name a few. It is well known that the computation of Gröbner bases takes time doubly exponential in the number of indeterminates, rendering it impractical in all but a few places. The currently known algorithms for Gröbner bases depend on the term order over which the Gröbner basis is computed. In this thesis, we study the computational complexity of some problems in computational ideal theory. We also study the algebraic formulation of combinatorial optimization problems. Gritzmann and Sturmfels (1993) posed the following question: given a set of generators, decide whether it is a Gröbner basis with respect to some term order. This problem, termed the Gröbner Basis Detection (GBD) problem, was introduced as an application of Minkowski addition of polytopes. It was shown by Sturmfels and Wiegelmann (1997) that GBD is NP-hard. We study the problem for the case of zero-dimensional ideals and show that it is hard even in this special case. We also study the detection problem in the case of border bases, which are an alternative to Gröbner bases for zero-dimensional ideals. We propose the Border Basis Detection (BBD) problem, defined as follows: given a set of generators of an ideal, decide whether that set of generators is a border basis of the ideal with respect to some order ideal. It is shown that BBD is NP-complete. We also formulate the rainbow connectivity problem as a system of polynomial equations such that solving the polynomial system yields a solution to it, and we give an alternate formulation of the rainbow connectivity problem as a membership problem in polynomial ideals.
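Whether a fixed generating set is a Gröbner basis depends on the term order; the detection problem above asks whether any order works, and that quantification over orders is the hard part. For one fixed order the check is easy, as this sympy sketch shows on the twisted cubic ideal (a standard textbook example, not taken from the thesis). Comparing against the reduced Gröbner basis is a slightly stronger test than membership of some Gröbner basis, but it suffices here.

```python
# Fixed-order Groebner-basis check; sympy's lex order ranks variables in the
# order they are passed, which is how the two cases below differ.
from sympy import symbols, groebner

x, y, z = symbols('x y z')
gens = [y - x**2, z - x**3]              # the twisted cubic ideal

def is_reduced_gb_wrt(gens, vars_, order='lex'):
    """True if gens equals the reduced Groebner basis for this order."""
    G = groebner(gens, *vars_, order=order)
    return set(G.exprs) == {g.expand() for g in gens}

print(is_reduced_gb_wrt(gens, (y, z, x)))  # True:  lex with y > z > x
print(is_reduced_gb_wrt(gens, (x, y, z)))  # False: lex with x > y > z
```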

Post-Crisis Valuation of Derivatives / Oceňování derivátů v postkrizovém období / Post crisis valuation of derivatives

Baran, Jaroslav January 2016 (has links)
In this study we analyse the relationship between the classical approach to the valuation of linear interest rate derivatives and the post-crisis approach, in which the valuation better reflects credit and liquidity risk and the economic costs of the transaction on top of the risk-free rate. We discuss the method of collateralization to diminish counterparty credit risk and its impact on derivatives pricing, and how overnight indexed swap (OIS) rates became the market standard for discounting future derivatives' cash flows. We show that using one yield curve both to estimate forward rates and to discount expected future cash flows is no longer possible in an arbitrage-free market. We review in detail three fundamental interest rate derivatives (the interest rate swap, basis swap and cross-currency swap) and derive discount factors, used for calculating the present value of expected future cash flows, that are consistent with market quotes. We also investigate the drivers behind basis spreads, in particular credit and liquidity risk and supply and demand forces, and show how they impact the valuation of derivatives. We analyse Czech swap rates and propose an estimation of the CZK OIS curve and approximate discount rates for cross-currency swaps. Finally, we discuss inflation markets and the consistent valuation of inflation swaps.
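A sketch of the dual-curve point made above: forwards are estimated from the projection curve, while expected cash flows are discounted on the OIS curve. The flat continuously-compounded rates below are made-up inputs, not market data from the thesis.

```python
# Dual-curve valuation of a 5y semi-annual floating leg, per unit notional.
import numpy as np

times = np.arange(0.5, 5.01, 0.5)     # semi-annual payment dates, 5 years
tau = 0.5                             # accrual fraction
proj_rate, ois_rate = 0.030, 0.025    # flat projection and OIS discount curves

def df(r, t):
    return np.exp(-r * t)

# Simple forward implied by the projection curve over [t - tau, t]
fwd = (df(proj_rate, times - tau) / df(proj_rate, times) - 1.0) / tau

pv_ois    = (tau * fwd * df(ois_rate, times)).sum()   # post-crisis: OIS discounting
pv_single = (tau * fwd * df(proj_rate, times)).sum()  # pre-crisis: single curve
print(f"floating-leg PV: OIS {pv_ois:.5f} vs single-curve {pv_single:.5f}")
```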

Bázové posloupnosti v Banachových prostorech / Basic sequences in Banach spaces

Zindulka, Mikuláš January 2021 (has links)
An ordering on bases in Banach spaces is defined as a natural generalization of the notion of equivalence. Its theory is developed with emphasis on its behavior with respect to shrinking and boundedly-complete bases. We prove that a bounded operator mapping a shrinking basis to a boundedly-complete one is weakly compact. A well-known result concerning the factorization of a weakly compact operator through a reflexive space is then reinterpreted in terms of the ordering. Next, we introduce a class of Banach spaces whose norm is constructed from a given two-dimensional norm N. We prove that any such space X_N is isomorphic to an Orlicz sequence space. A key step in obtaining this correspondence is to describe the unit circle of the norm N with a convex function ϕ. The canonical unit vectors form a basis of a subspace Y_N of X_N. We characterize the equivalence of these bases and the situation when the basis is boundedly-complete. The criteria are formulated in terms of the norm N and the function ϕ.
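For reference, the notion that the ordering generalizes can be stated in one line; this is standard textbook material, not a result from the thesis.

```latex
% Equivalence of bases: (x_n) in X and (y_n) in Y are equivalent iff the
% same scalar sequences give convergent expansions in both spaces,
\[
  (x_n) \sim (y_n)
  \iff
  \Bigl[\, \textstyle\sum_n a_n x_n \text{ converges}
        \iff \textstyle\sum_n a_n y_n \text{ converges} \Bigr]
  \quad \text{for every scalar sequence } (a_n),
\]
% equivalently, some isomorphism between the closed spans maps x_n to y_n.
```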
