221

Numerical integration over planar regions

Peirce, William Hollis. January 1900 (has links)
Thesis--University of Wisconsin. / Vita. Bibliography: leaves 85-88.
222

Modelling the impact of surface melt on the hydrology and dynamics of the Greenland Ice Sheet

Koziol, Conrad Pawel January 2018 (has links)
Increasing surface runoff from the Greenland Ice Sheet due to a warming climate not only accelerates ice mass loss by altering surface mass balance, but may also lead to increased dynamic losses. This is because surface melt draining to the bed can reduce ice-bed coupling, leading to faster ice flow. Understanding the impact of surface melt on ice dynamics is important for constraining the contribution of the Greenland Ice Sheet to sea level rise. The aim of this thesis is to numerically model the influence of surface runoff on ice velocities. Three new models are presented: an updated supraglacial hydrology model incorporating moulin and crevasse drainage, along with lake drainage over the ice surface via channel incision; an ice sheet model implementing a numerically efficient formulation of ice flow; an adjoint code of the ice flow model based on automatic differentiation. Together with a subglacial hydrology model, these represent the key components of the ice sheet system. The supraglacial hydrology model is calibrated in the Paakitsoq region. Model output shows the partitioning of melt between different drainage pathways and the spatial distribution of surface drainage. Melt season intensity is found to be a relevant factor for both. A key challenge for simulations applying a coupled ice-flow/hydrology model is state and parameter initialization. This challenge is addressed by developing a new workflow for incorporating modelled subglacial water pressures into inversions of basal drag. A current subglacial hydrology model is run for a winter season, and the output is incorporated into the workflow to invert for basal drag at the start of summer in the Russell Glacier area. Comparison of the modelled subglacial system to observations suggests that model output is more in line with summer conditions than winter conditions. A multicomponent model integrating the main components of the ice sheet system is developed and applied to the Russell Glacier area. A coupled ice-flow/hydrology model is initialized using the proposed workflow, and driven using output from the supraglacial hydrology model. Three recent melt seasons are modelled. To a first order, predicted ice velocities match measured velocities at multiple GPS sites. This affirms the conceptual model that summer velocity patterns are driven by transitions between distributed and channelized subglacial hydrological systems.
223

Interval methods for non-linear systems

Shearer, J. M. January 1986 (has links)
In numerical mathematics, there is a need for methods which provide a user with the solution to a problem without requiring the user to understand the mathematics underlying the method of solution. Such a method involves computable tests to determine whether or not a solution exists in a given region, and whether, if it exists, such a solution may be found by using the given method. Two valuable tools for the implementation of such methods are interval mathematics and symbolic computation. In practice all computers have memories of finite size and cannot perform exact arithmetic. Therefore, in addition to the error which is inherent in a given numerical method, namely truncation error, there is also the error due to rounding. Using interval arithmetic, computable tests which guarantee the existence of a solution to a given problem in a given region, and the convergence of a particular iterative method to this solution, become practically realizable. This is not possible using real arithmetic due to the accumulation of rounding error on a computer. The advent of packages which allow symbolic computations to be carried out on a given computer is an important advance for computational numerical mathematics. In particular, the ability to compute derivatives automatically removes the need for a user to supply them, thus eliminating a major source of error in the use of methods requiring first or higher derivatives. In this thesis some methods which use interval arithmetic and symbolic computation for the solution of systems of nonlinear algebraic equations are presented. Some algorithms based on the symmetric single-step algorithm are described. These methods, however, do not possess computable existence, uniqueness, and convergence tests. Algorithms which do possess such tests, based on the Krawczyk-Moore algorithm, are also presented. A simple package which allows symbolic computations to be carried out is described. Several applications for such a package are given. In particular, an interval form of Brown's method is presented.
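As an illustration of the kind of computable existence and uniqueness test this abstract refers to, here is a minimal sketch of a one-dimensional Krawczyk test. It is not code from the thesis: the interval class, function names, and the example f(x) = x^2 - 2 are illustrative assumptions, and a real implementation would use outward-rounded interval arithmetic rather than plain floating point.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))
    def inside(self, o):
        return o.lo < self.lo and self.hi < o.hi   # strict containment

def point(x):
    return Interval(x, x)

def krawczyk(f, Fp, X):
    # K(X) = m - y f(m) + (1 - y F'(X))(X - m), with y ~ 1/f'(m)
    m = 0.5 * (X.lo + X.hi)
    y = 1.0 / Fp(point(m)).lo
    return point(m - y * f(m)) + (point(1.0) - point(y) * Fp(X)) * (X - point(m))

# Example: f(x) = x^2 - 2 on X = [1, 2]; interval extension of f' is F'(X) = 2X.
X = Interval(1.0, 2.0)
K = krawczyk(lambda x: x * x - 2.0, lambda B: point(2.0) * B, X)
print(K, K.inside(X))   # K contained in the interior of X => a unique zero of f lies in X
```

Here K evaluates to roughly [1.25, 1.583], which lies strictly inside [1, 2], so the test certifies a unique root (sqrt(2)) in that interval.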
224

The study of some numerical methods for solving partial differential equations

Abdullah, Abdul Rahman Bin January 1983 (has links)
The thesis commences with a description and classification of partial differential equations and the related matrix and eigenvalue theory. In almost all cases the study of parabolic equations leads to initial boundary value problems, and it is with this problem that the thesis is mainly concerned. The basic (finite difference) methods to solve a (parabolic) partial differential equation are presented in the second chapter, which is then followed by particular types of parabolic equations such as diffusion-convection, fourth order and non-linear problems in the third chapter. An introduction to the finite element technique is also included as an alternative to the finite difference method of solution. The advantages and disadvantages of some different strategies in terms of stability and truncation error are also considered. In Chapter Four a general two time-level finite difference approximation to the simple heat conduction equation is derived. A new class of methods called the Group Explicit (GE) method is established which improves the stability of the previous explicit method. Comparison between the two methods in this class and the previous methods is also given. The method is also used in solving the two-space dimensional parabolic equation. The derivation of a general two time-level finite difference approximation and the general idea of the Group Explicit method are extended to the diffusion-convection equation in Chapter Five. Some other explicit algorithms for solving this problem are also considered. In the sixth chapter the Group Explicit procedure is applied to solve a fourth-order parabolic equation on two interlocking nets. The concept of the GE method is also extendable to a non-linear partial differential equation. Consideration of this extension to a particular problem can be found in Chapter Seven. In Chapter Eight, some work on the finite element method for solving the heat-conduction and diffusion-convection equation is presented. Comparison of the results from this method with the finite-difference methods is given. The formulation and solution of this problem as a boundary value problem by the boundary value technique is also considered. A special method for solving the diffusion-convection equation is presented in Chapter Nine, together with an extension of the Group Explicit method to a hyperbolic partial differential equation. The thesis concludes with recommendations for further work.
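For orientation, the sketch below shows the basic explicit (FTCS) scheme for the heat conduction equation u_t = u_xx that Chapter Two's methods start from; it is not the thesis's Group Explicit scheme, whose construction is not reproduced in the abstract. Grid sizes and the mesh ratio are illustrative choices; the GE class is designed to improve on the FTCS stability restriction r <= 1/2 shown here.

```python
import numpy as np

nx, nt = 51, 200
dx = 1.0 / (nx - 1)
r = 0.4                  # mesh ratio dt/dx^2; FTCS is stable only for r <= 1/2
dt = r * dx * dx

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)    # initial condition, with u = 0 at both boundaries

for _ in range(nt):
    # u_i^{n+1} = u_i^n + r (u_{i+1}^n - 2 u_i^n + u_{i-1}^n)
    u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# compare against the exact solution exp(-pi^2 t) sin(pi x)
t = nt * dt
print(np.max(np.abs(u - np.exp(-np.pi**2 * t) * np.sin(np.pi * x))))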
225

A numerical method based on Runge-Kutta and Gauss-Legendre integration for solving initial value problems in ordinary differential equations

Prentice, Justin Steven Calder 11 September 2012 (has links)
M.Sc. / A class of numerical methods for solving nonstiff initial value problems in ordinary differential equations has been developed. These methods, designated RKrGLn, are based on a Runge-Kutta method of order r (RKr), and Gauss-Legendre integration over n + 1 nodes. The interval of integration for the initial value problem is subdivided into an integer number of subintervals. On each of these subintervals, n + 1 nodes are defined in accordance with the zeros of the Legendre polynomial of degree n. The Runge-Kutta method is used to find an approximate solution at each of these nodes; Gauss-Legendre integration is used to find the solution at the endpoint of the subinterval. The process then carries over to the next subinterval. We find that for a suitable choice of n, the order of the local error of the Runge-Kutta method (r + 1) is preserved in the global error of RKrGLn. However, a poor choice of n can actually limit the order of RKrGLn, irrespective of the choice of r. What is more, the inclusion of Gauss-Legendre integration slightly reduces the number of arithmetical operations required to find a solution, in comparison with RKr at the same number of nodes. These two factors combine to ensure that RKrGLn is considerably more efficient than RKr, particularly when very accurate solutions are sought. Attempts to control the error in RKrGLn have been made. The local error has been successfully controlled using a variable stepsize strategy, similar to that generally used in RK methods. The difference lies in that it is the size of each subinterval that is controlled in RKrGLn, rather than each individual stepsize. Nevertheless, local error has been successfully controlled for relative tolerances ranging from 10^-4 to 10^-10. We have also developed algorithms for estimating and controlling the global error. These algorithms require that a complete solution be obtained for a specified distribution of nodes, after which the global error is estimated and then, if necessary, a new node distribution is determined and another solution obtained. The algorithms are based on Richardson extrapolation and the use of low-order and high-order pairs. The algorithms have successfully achieved desired relative global errors as small as 10^-10. We have briefly studied how RKrGLn may be used to solve stiff systems. We have determined the intervals of stability for several RKrGLn methods on the real line, and used this to develop an algorithm to solve a stiff problem. The algorithm is based on the idea of stepsize/subinterval adjustment, and has been used to successfully solve the van der Pol system. Lagrange interpolation on each subinterval has been implemented to obtain a piecewise continuous polynomial approximation to the numerical solution, with error of the same order, which can be used to find the solution at arbitrary nodes.
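The following sketch illustrates the structure the abstract describes — Runge-Kutta steps to the Gauss-Legendre nodes of a subinterval, then Gauss-Legendre quadrature of f to obtain the endpoint value. It is a hedged reconstruction under stated assumptions, not the thesis's code: the node count, the use of classical RK4, and all function names are illustrative.

```python
import numpy as np

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def rkgl_subinterval(f, a, b, ya, n=3):
    z, w = np.polynomial.legendre.leggauss(n)   # Gauss-Legendre nodes/weights on [-1, 1]
    nodes = 0.5 * (b - a) * z + 0.5 * (a + b)   # mapped to [a, b]
    t, y, fvals = a, ya, []
    for tn in nodes:                            # RK4 from node to node
        y = rk4_step(f, t, y, tn - t)
        fvals.append(f(tn, y))
        t = tn
    # endpoint by quadrature: y(b) = y(a) + integral of f over [a, b]
    return ya + 0.5 * (b - a) * np.dot(w, fvals)

# Example: y' = y, y(0) = 1 over [0, 1] in 10 subintervals; exact y(1) = e.
y, edges = 1.0, np.linspace(0.0, 1.0, 11)
for a, b in zip(edges[:-1], edges[1:]):
    y = rkgl_subinterval(lambda t, y: y, a, b, y)
print(y, np.exp(1.0))
```

The quadrature at the subinterval endpoint reuses the f-evaluations already computed at the RK nodes, which is the source of the operation-count saving the abstract mentions.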
226

On meshless methods : a novel interpolatory method and a GPU-accelerated implementation

Hamed, Maien Mohamed Osman January 2013 (has links)
Meshless methods have been developed to avoid the numerical burden imposed by meshing in the Finite Element Method. Such methods are especially attractive in problems that require repeated updates to the mesh, such as problems with discontinuities or large geometrical deformations. Although meshing is not required for solving problems with meshless methods, the use of meshless methods gives rise to different challenges. One of the main challenges associated with meshless methods is the imposition of essential boundary conditions. If exact interpolants are used as shape functions in a meshless method, imposing essential boundary conditions can be done in the same way as in the Finite Element Method. Another attractive feature of meshless methods is that their use involves computations that are largely independent from one another. This makes them suitable for implementation to run on highly parallel computing systems. Highly parallel computing has become widely available with the introduction of software development tools that enable developing general-purpose programs that run on Graphics Processing Units. In the current work, the Moving Regularized Interpolation method has been developed, which is a novel method of constructing meshless shape functions that achieve exact interpolation. The method is demonstrated in data interpolation and in partial differential equations. In addition, an implementation of the Element-Free Galerkin method has been written to run on a Graphics Processing Unit. The implementation is described and its performance is compared to that of a similar implementation that does not make use of the Graphics Processing Unit.
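The construction of the thesis's Moving Regularized Interpolation method is not given in the abstract, so the sketch below instead uses a standard radial basis function interpolant to show what the "exact interpolation" property buys: the interpolant reproduces nodal values exactly, so essential boundary conditions can be prescribed directly, as in FEM. Node placement, the Gaussian kernel, and its shape parameter are illustrative assumptions.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 9)              # scattered nodes (1-D for simplicity)
ys = np.sin(2 * np.pi * xs)                # nodal data

eps = 3.0
phi = lambda r: np.exp(-(eps * r) ** 2)    # Gaussian radial basis function
A = phi(np.abs(xs[:, None] - xs[None, :])) # symmetric positive definite interpolation matrix
coef = np.linalg.solve(A, ys)

def interp(x):
    return phi(np.abs(x - xs)) @ coef

print(interp(xs[3]) - ys[3])   # ~0: the interpolant passes through the nodal data
```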
227

An information theoretic measure of algorithmic complexity

Wright, Lois E. January 1974 (has links)
This work is a study of an information theoretic model which is used to develop a complexity measure of an algorithm. The measure is defined to reflect the computational cost and structure of the given algorithm. In this study computational costs are expressed as the execution times of the algorithm, where the algorithm is coded as a program in a machine-independent language, and analysed in terms of its representation as a pseudograph. It is shown that this measure aids in deciding which sections of the algorithm should be optimized, segmented or expressed as subprograms. The model proposed is designed to yield a measure which reflects both the program flow and computational cost. Such a measure allows an 'optimal' algorithm to be selected from a set of algorithms, all of which solve the given problem. This selection is made with a more meaningful criterion for decision than execution cost alone. The measure can also be used to further analyse a given algorithm and point to where code optimization techniques should be applied. However, it does not yield a method of generating equivalent algorithms. / Science, Faculty of / Computer Science, Department of / Graduate
228

Sequence transformations and the solution of boundary value problems on unbounded domains

Croft, Anthony C. January 1989 (has links)
No description available.
229

A Block Incremental Algorithm for Computing Dominant Singular Subspaces

Unknown Date (has links)
This thesis presents and evaluates a generic algorithm for incrementally computing the dominant singular subspaces of a matrix. The relationship between the generality of the results and the necessary computation is explored. The performance of this method, both numerical and computational, is discussed in terms of the algorithmic parameters, such as block size and acceptance threshold. Bounds on the error are presented along with a posteriori approximations of these bounds. Finally, a group of methods is proposed which iteratively improve the accuracy of computed results and the quality of the bounds. / A Thesis submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Master of Science. / Degree Awarded: Summer Semester, 2004. / Date of Defense: April 19, 2004. / Updating, Numerical Linear Algebra, Singular Value Decomposition, URV Factorization, Subspace Tracking / Includes bibliographical references. / Kyle Gallivan, Professor Directing Thesis; Anuj Srivastava, Committee Member; Robert van Engelen, Committee Member.
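A minimal sketch of a generic block-incremental update of this kind follows: fold a new column block into the current rank-k factorization via a small SVD, then truncate. It is in the spirit of the abstract but is not the thesis's algorithm; the block size, rank, truncation rule, and all names are illustrative assumptions, and no acceptance threshold or error bounds are implemented here.

```python
import numpy as np

def incr_update(U, s, B, k):
    """Fold a new column block B into the rank-k factorization (U, s)."""
    C = U.T @ B                        # component of B inside the current subspace
    Q, Rr = np.linalg.qr(B - U @ C)    # orthonormal basis for the residual
    K = np.block([[np.diag(s), C],
                  [np.zeros((Rr.shape[0], s.size)), Rr]])
    Up, sp, _ = np.linalg.svd(K)       # small (k + b) x (k + b) SVD
    return (np.hstack([U, Q]) @ Up)[:, :k], sp[:k]   # truncate back to rank k

# Example: stream a 200 x 60 matrix in blocks of 10, tracking a rank-5 basis.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 60)) * np.linspace(5.0, 0.1, 60)  # decaying column scales
U0, s0, _ = np.linalg.svd(X[:, :10], full_matrices=False)
U, s = U0[:, :5], s0[:5]
for j in range(10, 60, 10):
    U, s = incr_update(U, s, X[:, j:j + 10], k=5)
print(s)
print(np.linalg.svd(X, compute_uv=False)[:5])   # leading singular values agree approximately
```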
230

Methods for Linear and Nonlinear Array Data Dependence Analysis with the Chains of Recurrences Algebra

Unknown Date (has links)
The presence of data dependences between statements in a loop iteration space imposes strict constraints on statement order and loop restructuring when preserving program semantics. A compiler determines the safe partial ordering of statements that enhance performance by explicitly disproving the presence of dependences. As a result, the false positive rate of a dependence analysis technique is a crucial factor in the effectiveness of a restructuring compiler's ability to optimize the execution of performance-critical code fragments. This dissertation investigates reducing the false positive rate by improving the accuracy of analysis methods for dependence problems and increasing the total number of problems analyzed. Fundamental to these improvements is the rephrasing of the dependence problem in terms of Chains of Recurrences (CR), a formalism that has been shown to be conducive to efficient loop induction variable analysis. An infrastructure utilizing CR-analysis methods and enhanced dependence testing techniques is developed and tested. Experimental results indicate capabilities of dependence analysis methods can be improved without a reduction in efficiency. This results in a reduction in the false positive rate and an increase in the number of optimized and parallelized code fragments. / A Dissertation submitted to the Department of Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy. / Summer Semester, 2007. / July 2, 2007. / Chains of Recurrences, Dependence Testing, Loop Analysis, Induction Variable, CR / Includes bibliographical references. / Robert Van Engelen, Professor Directing Dissertation; Paul Ruscher, Outside Committee Member; Kyle Gallivan, Committee Member; David Whalley, Committee Member; Xin Yuan, Committee Member.
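For readers unfamiliar with the Chains of Recurrences formalism the abstract builds on, the sketch below evaluates basic pure-sum CRs, which represent loop induction variables in closed recursive form: {a, +, s} denotes v(0) = a, v(i+1) = v(i) + s, i.e. a + s*i, and nesting captures polynomial index expressions. The tuple encoding and function name are illustrative, not from the dissertation.

```python
def cr_eval(cr, i):
    """Evaluate a (possibly nested) pure-sum CR {a, +, step} at iteration i."""
    if not isinstance(cr, tuple):
        return cr                      # a constant
    a, step = cr
    v = a                              # v(i) = a + sum of step(j) for j < i
    for j in range(i):
        v += cr_eval(step, j)
    return v

# The index expression i*4 + 2 is the CR {2, +, 4}; i*i is {0, +, {1, +, 2}}.
print([cr_eval((2, 4), i) for i in range(5)])        # [2, 6, 10, 14, 18]
print([cr_eval((0, (1, 2)), i) for i in range(5)])   # [0, 1, 4, 9, 16]
```

Because CR coefficients expose monotonicity and value ranges directly, a dependence test can compare two array index expressions in CR form without re-deriving symbolic closed forms for each problem.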
