11

Development of a Symbolic Computer Algebra Toolbox for 2D Fourier Transforms in Polar Coordinates

Dovlo, Edem, 29 September 2011
The Fourier transform is one of the most useful tools in science and engineering and can be extended to multiple dimensions and curvilinear coordinates. Multidimensional Fourier transforms are widely used in image processing, tomographic reconstruction, and indeed any application that requires a multidimensional convolution. By examining a function in the frequency domain, additional information and insights may be obtained. This thesis discusses the development of a symbolic computer algebra toolbox for computing two-dimensional Fourier transforms in polar coordinates. Among the many operations implemented in the toolbox are different types of convolutions, along with procedures for managing the toolbox effectively. The implementation of the two-dimensional Fourier transform in polar coordinates within the toolbox is shown to be a combination of two significantly simpler transforms. The toolbox is tested throughout the thesis to verify its capabilities.
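
The "combination of two significantly simpler transforms" alluded to here is, in standard treatments of the polar Fourier transform, an angular Fourier series combined with Hankel transforms of matching order. A sketch of that standard identity follows (illustrative notation, up to sign and normalization conventions; not necessarily the toolbox's own conventions):

```latex
% Polar 2D Fourier transform as "two simpler transforms" (standard identity,
% up to sign/normalization conventions).
% Expand the function in an angular Fourier series:
\[
  f(r,\theta) \;=\; \sum_{n=-\infty}^{\infty} f_n(r)\, e^{i n \theta},
  \qquad
  f_n(r) \;=\; \frac{1}{2\pi}\int_0^{2\pi} f(r,\theta)\, e^{-i n \theta}\, d\theta .
\]
% Then the 2D transform is an angular Fourier series whose radial
% coefficients are n-th order Hankel transforms of the f_n:
\[
  F(\rho,\psi) \;=\; \sum_{n=-\infty}^{\infty} 2\pi\,(-i)^n\, F_n(\rho)\, e^{i n \psi},
  \qquad
  F_n(\rho) \;=\; \int_0^{\infty} f_n(r)\, J_n(\rho r)\, r\, dr .
\]
```
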
12

Hermite Forms of Polynomial Matrices

Gupta, Somit, January 2011
This thesis presents a new algorithm for computing the Hermite form of a polynomial matrix. Given a nonsingular n x n matrix A of degree-d polynomials with coefficients from a field, the algorithm computes the Hermite form of A using an expected number of field operations similar to that of matrix multiplication. The algorithm is randomized, of the Las Vegas type.
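
For reference, one common definition of the object being computed is sketched below; conventions (row vs. column form, upper vs. lower triangular) vary between authors, and the thesis may use a different one:

```latex
% One common convention for the Hermite form of a nonsingular
% A in F[x]^{n x n}: there is a unimodular U (det U a nonzero constant)
% with U A = H, where H is upper triangular, its diagonal entries are
% monic, and every entry above a diagonal entry has strictly smaller
% degree than that diagonal entry.  Requires amsmath for pmatrix.
\[
  U A \;=\; H \;=\;
  \begin{pmatrix}
    h_{11} & h_{12} & \cdots & h_{1n} \\
           & h_{22} & \cdots & h_{2n} \\
           &        & \ddots & \vdots \\
           &        &        & h_{nn}
  \end{pmatrix},
  \qquad
  h_{ii} \text{ monic},
  \quad
  \deg h_{ji} < \deg h_{ii} \ \text{ for } j < i .
\]
```
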
14

A generative approach to a virtual material testing laboratory

McCutchan, John, 09 1900
This thesis presents a virtual material testing laboratory that is highly generic and flexible in terms of both the material behaviour and the experiments that it supports. Generic and flexible material behaviour was accomplished via symbolic computation, generative programming techniques, and an abstraction layer that effectively hides the material-model-specific portions of the numerical algorithms. To specify a given member of the family of material models, a domain-specific language (DSL) was created. A compiler, which uses the Maple computer algebra system, transforms the DSL into an abstract material class. Three different numerical algorithms, including a return-map algorithm, are presented in the thesis to illustrate the advantage of the abstract material model. To accomplish the goal of generic and flexible experiments, the finite element method was employed and an API was developed that supports both load- and displacement-controlled experiments, as well as the capability for experiments to modify their state over time. The virtual laboratory provides a family of material models with the following behaviours: elastic, viscous, shear-thinning, shear-thickening, strain-hardening, viscoelastic, viscoplastic and plastic. In addition, by using the Ruby programming language, the developed framework supports a wide variety of programmable experiments, including uniaxial, biaxial and multiaxial extension and compression, shear, and triaxial tests.
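
As background for the return-map algorithm mentioned above, here is a minimal one-dimensional return-map step for rate-independent plasticity with linear isotropic hardening. This is a generic textbook sketch, not the thesis's Maple-generated implementation; the function name and material constants are invented.

```python
# Illustrative 1D return-map (elastic predictor / plastic corrector) step
# for rate-independent plasticity with linear isotropic hardening.

def return_map_step(eps_new, eps_p, alpha, E=200e3, H=10e3, sigma_y=250.0):
    """One strain-driven update: returns (stress, eps_p, alpha)."""
    # 1. Elastic trial state: freeze plastic flow and compute trial stress.
    sigma_trial = E * (eps_new - eps_p)
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)  # yield function

    if f_trial <= 0.0:
        # 2a. Elastic step: the trial state is admissible.
        return sigma_trial, eps_p, alpha

    # 2b. Plastic step: return the stress to the yield surface.
    dgamma = f_trial / (E + H)               # plastic multiplier increment
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign  # corrected stress
    eps_p = eps_p + dgamma * sign            # updated plastic strain
    alpha = alpha + dgamma                   # updated hardening variable
    return sigma, eps_p, alpha


if __name__ == "__main__":
    # Drive a bar through a monotonic strain ramp and print the response.
    sigma, eps_p, alpha = 0.0, 0.0, 0.0
    for k in range(1, 11):
        eps = 0.0005 * k
        sigma, eps_p, alpha = return_map_step(eps, eps_p, alpha)
        print(f"eps={eps:.4f}  sigma={sigma:8.2f}  eps_p={eps_p:.6f}")
```
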
15

Quelques applications de l'algèbre différentielle et aux différences pour le télescopage créatif / Some applications of differential and difference algebra for creative telescoping

Chen, Shaoshi, 16 February 2011
Since the 1990s, Zeilberger's method of creative telescoping has played an important role in the automatic proof of identities involving special functions. The long-term objective pursued in this work is to obtain fast algorithms and implementations for definite integration and summation within this creative-telescoping framework. Our contributions include new practical algorithms and theoretical criteria for testing the termination of existing algorithms. On the practical side, we focus on the construction of minimal telescopers for rational functions in two variables, which has numerous applications related to algebraic functions and to diagonals of rational generating series. For this restricted class of inputs, we manage to blend the general method of creative telescoping with the well-known Hermite reduction from symbolic integration. We have also obtained, for this subclass, improvements of the classical algorithms of Almkvist and Zeilberger. Our experimental results show that the Hermite-reduction-based algorithms beat all other known algorithms, both in worst-case complexity and in timings of our implementations. On the theoretical side, our first result is motivated by the Wilf-Zeilberger conjecture on holonomic hyperexponential-hypergeometric functions. We present a structure theorem for multivariate hyperexponential-hypergeometric functions, showing that such a function can be written as a product of standard functions. This theorem extends both the Ore-Sato theorem for multivariate hypergeometric terms and a recent result of Feng, Singer and Wu. Our second result concerns the existence problem for telescopers. In the discrete bivariate case, Abramov obtained a criterion that determines when a hypergeometric term has a telescoper. Similar results were obtained for the q-shift case by Chen, Hou and Mu. These results are fundamental for the termination of Zeilberger-style algorithms. In the remaining mixed continuous/discrete cases, we obtain two criteria for the existence of telescopers for bivariate hyperexponential-hypergeometric functions. Our criteria rely on a standard representation of bivariate hyperexponential-hypergeometric functions and on two additive decompositions.
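
For context, the basic creative-telescoping relation the abstract refers to, together with the classical binomial example (standard material, not taken from the thesis):

```latex
% Creative telescoping in the discrete bivariate case: for a summand
% F(n,k), find a telescoper L(n, S_n) and a certificate G(n,k) with
%   L(n, S_n) F(n,k) = G(n,k+1) - G(n,k);
% summing over k telescopes the right-hand side and leaves a recurrence
% for the definite sum.  Classical example (requires amsmath for \binom):
\[
  \binom{n+1}{k} - 2\binom{n}{k} \;=\; G(n,k+1) - G(n,k),
  \qquad
  G(n,k) = -\binom{n}{k-1},
\]
\[
  \text{so } S_n - 2 \text{ is a telescoper and }
  \sum_{k} \binom{n}{k} = 2^{n}.
\]
```
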
16

Volcans et calcul d'isogénies / Volcanoes and isogeny computing

Hugounenq, Cyril, 25 September 2017
The isogeny computation problem first appeared in the SEA algorithm for counting points on elliptic curves defined over finite fields. Algorithms based on ideas of Elkies (1998) solve this problem satisfactorily in that context. The appearance of new applications of isogeny computation (trapdoor cryptosystems, hash functions, acceleration of scalar multiplication, post-quantum cryptosystems) has motivated the search for faster algorithms outside the SEA context. Couveignes's algorithm (1996) offers the best complexity in the degree of the isogeny but, despite improvements by De Feo (2011), it remains impractical in large characteristic. The aim of this work is to present a modified version of Couveignes's algorithm (1996) that keeps the same complexity in the degree of the isogeny but is practical in any characteristic. Two approaches contribute to the improvement of Couveignes's algorithm: first, the construction of towers of degree-$\ell$ extensions that support fast arithmetic, as in the work of De Feo (2011); and second, the determination of sets of points of order $\ell^k$ that are stable under the action of isogenies. The main contribution of this thesis follows the second approach. Our work uses isogeny graphs, in which the vertices are elliptic curves and the edges are isogenies. We build on the previous results of Kohel (1996), Fouquet and Morain (2001), Miret et al. (2005, 2006, 2008), and Ionica and Joux (2001). Through a study of the action of the Frobenius endomorphism on points of order $\ell^k$, we present a new way to determine directions in the isogeny graph (volcano).
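
As background for the last sentence, a standard fact stated in generic notation (not the thesis's own statement):

```latex
% For an ordinary elliptic curve E over F_q with trace of Frobenius t,
% the Frobenius endomorphism pi satisfies
\[
  \pi^{2} - t\,\pi + q \;=\; 0 .
\]
% When X^2 - tX + q has roots modulo ell, the eigenspaces of pi acting on
% the ell^k-torsion E[ell^k] pick out kernels of rational ell-isogenies;
% this is what makes the action of pi on points of order ell^k usable for
% telling apart directions (ascending, descending, horizontal) in the
% ell-isogeny volcano.
```
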
17

Symbolic Modelling and Simulation of Wheeled Vehicle Systems on Three-Dimensional Roads

Bombardier, William, January 2009
In recent years, there has been a push by automotive manufacturers to improve the efficiency of the vehicle development process. This can be accomplished by creating a computationally efficient vehicle model that can quickly predict the vehicle behavior in many different situations. This thesis presents a procedure to automatically generate the simulation code of vehicle systems rolling over three-dimensional (3-D) roads, given a description of the model as input. The governing equations describing the vehicle can be formulated using either a numerical or a symbolic formulation approach. A numerical approach reconstructs the numerical matrices that describe the system at each time step, whereas a symbolic approach generates the governing equations that describe the system for all time. The latter method offers several advantages: the equations only have to be formulated once and can be simplified using symbolic simplification techniques, making the simulations more computationally efficient. The road model is automatically generated in the formulation stage based on the single elevation function (a 3-D mathematical function) that is used to represent the road. Symbolic algorithms are adopted to construct and optimize the non-linear equations that are required to determine the contact point, and a Newton-Raphson iterative scheme is constructed around the optimized non-linear equations so that they can be solved at each time step. The road is represented in tabular form when it cannot be defined by a single elevation function. A simulation code structure was developed to incorporate the tire on a 3-D road in a symbolic computer implementation of vehicle systems. It was created so that the tire forces and moments that appear in the generalized force matrix can be evaluated during simulation rather than during formulation. They are evaluated systematically by performing a number of procedure calls. A road model is first used to determine the contact point between the tire and the ground. Its location is used to calculate the intermediate tire variables, such as the camber angle, that are required by a tire model to evaluate the tire forces and moments. The structured simulation code was implemented in the DynaFlexPro software package by creating a linear graph representation of the tire and the road. DynaFlexPro was used to analyze a vehicle system on six different road profiles performing different braking and cornering maneuvers. The analyses were repeated in MSC.ADAMS for validation purposes, and good agreement was achieved between the two software packages. The results confirmed that the symbolic computing approach presented in this thesis is more computationally efficient than the purely numerical approach. Thus, the simulation code structure increases the versatility of vehicle models by permitting them to be analyzed on 3-D trajectories while remaining computationally efficient.
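
To illustrate the "symbolic residual plus Newton-Raphson" idea for the contact-point computation, here is a rough sketch using an elevation function z = h(x, y). It is a generic reconstruction under stated assumptions, not DynaFlexPro code; the road surface and all names are invented.

```python
# Sketch: build the contact-point residual and its Jacobian symbolically,
# generate numerical code from them, then solve with Newton-Raphson.
import numpy as np
import sympy as sp

x, y = sp.symbols("x y")
xc, yc, zc = sp.symbols("x_c y_c z_c")       # wheel-centre coordinates

h = 0.05 * sp.sin(x) + 0.02 * y**2           # example 3-D road elevation

# Closest-point conditions: the gradient of the squared distance from the
# wheel centre to the surface point (x, y, h(x, y)) vanishes.
r1 = (xc - x) + (zc - h) * sp.diff(h, x)
r2 = (yc - y) + (zc - h) * sp.diff(h, y)

residual = sp.Matrix([r1, r2])
jacobian = residual.jacobian([x, y])          # formed once, symbolically

# Numerical code generated from the symbolic expressions.
res_f = sp.lambdify((x, y, xc, yc, zc), residual, "numpy")
jac_f = sp.lambdify((x, y, xc, yc, zc), jacobian, "numpy")

def contact_point(centre, guess=(0.0, 0.0), tol=1e-10, max_iter=20):
    """Newton-Raphson solve for the contact-point coordinates (x, y)."""
    u = np.array(guess, dtype=float)
    for _ in range(max_iter):
        r = np.asarray(res_f(u[0], u[1], *centre), dtype=float).ravel()
        if np.linalg.norm(r) < tol:
            break
        J = np.asarray(jac_f(u[0], u[1], *centre), dtype=float)
        u -= np.linalg.solve(J, r)
    return u

print(contact_point(centre=(1.0, 0.3, 0.4)))
```
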
19

Efficient Computation with Sparse and Dense Polynomials

Roche, Daniel Steven, January 2011
Computations with polynomials are at the heart of any computer algebra system and also have many applications in engineering, coding theory, and cryptography. Generally speaking, the low-level polynomial computations of interest can be classified as arithmetic operations, algebraic computations, and inverse symbolic problems. New algorithms are presented in all these areas which improve on the state of the art in both theoretical and practical performance. Traditionally, polynomials may be represented in a computer in one of two ways: as a "dense" array of all possible coefficients up to the polynomial's degree, or as a "sparse" list of coefficient-exponent tuples. In the latter case, zero terms are not explicitly written, giving a potentially more compact representation. In the area of arithmetic operations, new algorithms are presented for the multiplication of dense polynomials. These have the same asymptotic time cost as the fastest existing approaches, but reduce the intermediate storage required from linear in the size of the input to a constant amount. Two different algorithms for so-called "adaptive" multiplication are also presented which effectively provide a gradient between existing sparse and dense algorithms, giving a large improvement in many cases while never performing significantly worse than the best existing approaches. Algebraic computations on sparse polynomials are considered as well. The first known polynomial-time algorithm to detect when a sparse polynomial is a perfect power is presented, along with two different approaches to computing the perfect power factorization. Inverse symbolic problems are those for which the challenge is to compute a symbolic mathematical representation of a program or "black box". First, new algorithms are presented which improve the complexity of interpolation for sparse polynomials with coefficients in finite fields or approximate complex numbers. Second, the first polynomial-time algorithm for the more general problem of sparsest-shift interpolation is presented. The practical performance of all these algorithms is demonstrated with implementations in a high-performance library and compared to existing software and previous techniques.
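
To make the dense/sparse distinction concrete, a toy illustration of the two representations and their schoolbook products (generic code, not taken from the thesis or any particular library):

```python
# dense  -> list of all coefficients, index = exponent
# sparse -> dict {exponent: coefficient}, zero terms omitted

def dense_mul(a, b):
    """Schoolbook product of dense coefficient lists."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def sparse_mul(a, b):
    """Product of sparse {exponent: coefficient} dictionaries."""
    c = {}
    for ea, ca in a.items():
        for eb, cb in b.items():
            e = ea + eb
            c[e] = c.get(e, 0) + ca * cb
    return {e: v for e, v in c.items() if v != 0}

# f = 1 + x^1000 is tiny when sparse but large when dense:
f_sparse = {0: 1, 1000: 1}
f_dense = [1] + [0] * 999 + [1]
print(sparse_mul(f_sparse, f_sparse))    # {0: 1, 1000: 2, 2000: 1}
print(len(dense_mul(f_dense, f_dense)))  # 2001 coefficients stored
```
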
20

Calculating Distribution Function and Characteristic Function using Mathematica

Chen, Cheng-yu, 7 July 2010
This paper deals with applications of the symbolic computation capabilities of Mathematica 7.0 (Wolfram, 2008) in distribution theory. The purpose of this study is twofold. First, we implement functions that extend Mathematica's capabilities to handle symbolic computation of the characteristic function of a linear combination of independent univariate random variables. These functions use pattern-matching code that enhances Mathematica's ability to simplify expressions involving products and sums of algebraic terms. Second, characteristic functions can be matched to commonly used distributions, including six discrete distributions and seven continuous distributions, via Mathematica's pattern-matching features. Finally, several examples are presented, including calculating the limit of the characteristic function of a linear combination of independent random variables, and applying the coded functions to illustrate the central limit theorem, the law of large numbers, and properties of some distributions.
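
A small sympy analogue of the kind of computation described above (an independent illustration in Python, not the thesis's Mathematica code): characteristic functions of independent summands multiply, and the central limit theorem can be seen as a limit of characteristic functions.

```python
import sympy as sp

t, n = sp.symbols("t n", positive=True)

# Characteristic function of a single centred Rademacher variable
# (+1 or -1, each with probability 1/2): E[exp(i t X)] = cos(t).
phi_one = sp.cos(t)

# Independence turns the CF of a sum into a product, and the 1/sqrt(n)
# standardization rescales the argument:
phi_sum = phi_one.subs(t, t / sp.sqrt(n)) ** n
print(phi_sum)                                   # cos(t/sqrt(n))**n

# Central limit theorem through characteristic functions:
# n*log cos(t/sqrt(n)) -> -t**2/2, so phi_sum -> exp(-t**2/2),
# the characteristic function of the standard normal distribution.
log_limit = sp.limit(n * sp.log(phi_one.subs(t, t / sp.sqrt(n))), n, sp.oo)
print(sp.exp(log_limit))                         # exp(-t**2/2)
```
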
