251

Applied State Space Modelling of Non-Gaussian Time Series using Integration-based Kalman-filtering

Frühwirth-Schnatter, Sylvia January 1993 (has links) (PDF)
The main topic of the paper is on-line filtering for non-Gaussian dynamic (state space) models by approximate computation of the first two posterior moments using efficient numerical integration. Based on approximating the prior of the state vector by a normal density, we prove that the posterior moments of the state vector are related to the posterior moments of the linear predictor in a simple way. For the linear predictor, Gauss-Hermite integration is carried out with automatic reparametrization based on an approximate posterior mode filter. We illustrate how further topics in applied state space modelling, such as estimating hyperparameters, computing model likelihoods and predictive residuals, are managed by integration-based Kalman-filtering. The methodology derived in the paper is applied to on-line monitoring of ecological time series and filtering for small count data. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
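As an illustration of the moment approximation underlying this approach, the sketch below uses Gauss-Hermite quadrature to approximate the first two posterior moments of a scalar linear predictor with a Gaussian prior and a Poisson observation. The node count, the prior values and the reparametrisation around the prior mean are illustrative assumptions, not the paper's filter.

```python
# A minimal sketch: Gauss-Hermite approximation of E[eta | y] and Var[eta | y] for a
# linear predictor eta with Gaussian prior N(m, s^2) and observation y ~ Poisson(exp(eta)).
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite nodes/weights

def posterior_moments(y, m, s, n_nodes=20):
    """Approximate the first two posterior moments of eta by Gauss-Hermite quadrature."""
    x, w = hermegauss(n_nodes)             # nodes/weights for weight function exp(-x^2/2)
    eta = m + s * x                        # quadrature points under the N(m, s^2) prior
    log_lik = y * eta - np.exp(eta)        # Poisson log-likelihood, up to a constant
    lik = np.exp(log_lik - log_lik.max())  # rescale for numerical stability
    z = np.sum(w * lik)                    # unnormalised posterior mass
    mean = np.sum(w * lik * eta) / z
    var = np.sum(w * lik * (eta - mean) ** 2) / z
    return mean, var

print(posterior_moments(y=3, m=0.0, s=1.0))
```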
252

Lehmer Numbers with at Least 2 Primitive Divisors

Juricevic, Robert January 2007 (has links)
In 1878, Lucas \cite{lucas} investigated the sequences $(\ell_n)_{n=0}^\infty$ where $$\ell_n=\frac{\alpha^n-\beta^n}{\alpha-\beta},$$ $\alpha \beta$ and $\alpha+\beta$ are coprime integers, and where $\beta/\alpha$ is not a root of unity. Lucas sequences are divisibility sequences; if $m|n$, then $\ell_m|\ell_n$, and more generally, $\gcd(\ell_m,\ell_n)=\ell_{\gcd(m,n)}$ for all positive integers $m$ and $n$. Matijasevic utilised this divisibility property of Lucas sequences in order to resolve Hilbert's 10th problem. \noindent In 1930, Lehmer \cite{lehmer} introduced the sequences $(u_n)_{n=0}^\infty$ where \begin{eqnarray*} u_n& = & \frac{\alpha^{n}-\beta^n}{\alpha^{\epsilon(n)}-\beta^{\epsilon(n)}},\\ \epsilon(n)&=&\left\{\begin{array}{ll} 1, \hspace{.1in}\mbox{if}\hspace{.1in}n\equiv 1 \pmod 2;\\ 2, \hspace{.1in}\mbox{if}\hspace{.1in}n\equiv 0\pmod 2;\end{array}\right. \end{eqnarray*} $\alpha \beta$ and $(\alpha +\beta)^2$ are coprime integers, and where $\beta/\alpha$ is not a root of unity. The sequences $(u_n)_{n=0}^\infty$ are known as Lehmer sequences, and the terms of these sequences are known as Lehmer numbers. Lehmer showed that his sequences had similar divisibility properties to those of Lucas sequences, and he used them to extend the Lucas test for primality. \noindent We define a prime divisor $p$ of $u_n$ to be a primitive divisor of $u_n$ if $p$ does not divide $$(\alpha^2-\beta^2)^2u_3\cdots u_{n-1}.$$ Note that in the list of prime factors of the first $n-1$ terms of the sequence $(u_n)_{n=0}^\infty$, a primitive divisor of $u_n$ is a new prime factor. \noindent We let \begin{eqnarray*} \kappa& = & k(\alpha \beta\max\{(\alpha-\beta)^2,(\alpha+\beta)^2\}),\\ \eta & = & \left\{\begin{array}{ll}1\hspace{.1in}\mbox{if}\hspace{.1in}\kappa\equiv 1\pmod 4,\\ 2\hspace{.1in}\mbox{otherwise},\end{array}\right. \end{eqnarray*} where $k(\alpha \beta \max\{(\alpha-\beta)^2,(\alpha+\beta)^2\})$ is the squarefree kernel of $\alpha \beta \max\{(\alpha-\beta)^2,(\alpha+\beta)^2\}$. On the one hand, building on the work of Schinzel \cite{schinzelI}, we prove that if $n>4$, $n\neq 6$, $n/(\eta \kappa)$ is an odd integer, and the triple $(n,\alpha,\beta)$, in case $(\alpha-\beta)^2>0$, is not equivalent to a triple $(n,\alpha,\beta)$ from an explicit table, then the $n$th Lehmer number $u_n$ has at least two primitive divisors. Moreover, we prove that if $n\geq 1.2\times 10^{10}$, and $n/(\eta \kappa)$ is an odd integer, then the $n$th Lehmer number $u_n$ has at least two primitive divisors. On the other hand, building on the work of Stewart \cite{stewart77}, we prove that there are only finitely many triples $(n,\alpha,\beta)$, where $n>6$, $n\neq 12$, and $n/(\eta \kappa)$ is an odd integer, such that the $n$th Lehmer number $u_n$ has less than two primitive divisors, and that these triples may be explicitly determined. We determine all of these triples $(n,\alpha,\beta)$ up to equivalence explicitly when $6<n\leq 30$, $n\neq 12$, and $n/(\eta \kappa)$ is an odd integer, and we tabulate the triples $(n,\alpha,\beta)$ we discovered, up to equivalence, for $30<n\leq 500$. Finally, we show that the conditions $n>6$, $n\neq 12$, are best possible, subject to the truth of two plausible conjectures.
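A small computational illustration of the definition above: the sketch below, under the simplifying assumption that $\alpha$ and $\beta$ are rational integers (the general definition allows quadratic irrationalities with $\alpha\beta$ and $(\alpha+\beta)^2$ coprime integers), lists the primitive prime divisors of each Lehmer number $u_n$ by checking which prime factors of $u_n$ do not divide $(\alpha^2-\beta^2)^2u_3\cdots u_{n-1}$. The pair $(\alpha,\beta)=(3,1)$ is purely illustrative.

```python
# A minimal sketch for integer alpha, beta: print each Lehmer number u_n together with
# its primitive prime divisors, i.e. primes dividing u_n but not (a^2-b^2)^2*u_3*...*u_{n-1}.
from sympy import primefactors

def lehmer(alpha, beta, n):
    """Lehmer number u_n for integer alpha, beta (a special case of the definition)."""
    if n % 2 == 1:
        return (alpha**n - beta**n) // (alpha - beta)
    return (alpha**n - beta**n) // (alpha**2 - beta**2)

def primitive_divisors(alpha, beta, n_max):
    history = (alpha**2 - beta**2) ** 2      # accumulates (a^2-b^2)^2 * u_3 * ... * u_{n-1}
    for n in range(3, n_max + 1):
        u_n = lehmer(alpha, beta, n)
        prim = [p for p in primefactors(u_n) if history % p != 0]
        print(n, u_n, prim)
        history *= u_n

primitive_divisors(alpha=3, beta=1, n_max=12)
```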
253

Multilevel acceleration of neutron transport calculations

Marquez Damian, Jose Ignacio 24 August 2007 (has links)
Nuclear reactor design requires the calculation of integral core parameters and power and radiation profiles. These physical parameters are obtained by solving the linear neutron transport equation over the geometry of the reactor. Representing the fine structure of the nuclear core requires a very small geometrical mesh size, but the computational capacity available today is still not enough to solve these transport problems in the time range (hours to days) that would make the method useful as a design tool. This problem is traditionally addressed by solving simple, smaller problems in specific parts of the core and then using a procedure known as homogenization to create average material properties and solve the full problem with a wider mesh size. The iterative multi-level solution procedure is inspired by this multi-stage approach, solving the problem at the fuel-pin (cell), fuel-assembly and nodal levels. The nested geometrical structure of the finite element representation of a reactor can be used to create a set of restriction/prolongation operators that connect the solutions at the different levels. The procedure is to iterate between the levels, solving for the error on the coarse level using as source the restricted residual of the solution on the finer level. In this way, the complete problem is solved only on the coarsest level; on the other levels only a pair of restriction/interpolation operations and a relaxation are required. In this work, a multigrid solver is developed for the in-moment equation of the spherical harmonics, finite element formulation of the second-order transport equation. This solver is implemented as a subroutine in the code EVENT. Numerical tests are provided as a standalone diffusion solver and as part of a block Jacobi transport solver.
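The coarse-grid correction described above can be illustrated with a minimal two-level sketch for a 1-D model diffusion problem (not the spherical-harmonics finite element solver in EVENT): relax on the fine grid, restrict the residual, solve the error equation on the coarse grid, prolong the correction and relax again. The mesh sizes, damped Jacobi smoother and interpolation operator are illustrative choices.

```python
# A minimal two-grid correction sketch for a 1-D Poisson-type problem.
import numpy as np

def laplacian(n):
    """1-D Poisson matrix with Dirichlet boundaries, mesh size h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, b, x, sweeps=3, omega=0.8):
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid(A_fine, A_coarse, R, P, b, x, cycles=30):
    for _ in range(cycles):
        x = jacobi(A_fine, b, x)                        # pre-relaxation on the fine level
        r_coarse = R @ (b - A_fine @ x)                 # restrict the fine-grid residual
        e_coarse = np.linalg.solve(A_coarse, r_coarse)  # solve the coarse error equation
        x = x + P @ e_coarse                            # prolong the correction
        x = jacobi(A_fine, b, x)                        # post-relaxation
    return x

n_fine, n_coarse = 31, 15
A_f, A_c = laplacian(n_fine), laplacian(n_coarse)
P = np.zeros((n_fine, n_coarse))                        # linear interpolation operator
for j in range(n_coarse):
    P[2 * j, j], P[2 * j + 1, j], P[2 * j + 2, j] = 0.5, 1.0, 0.5
R = 0.5 * P.T                                           # full-weighting restriction
b = np.ones(n_fine)
x = two_grid(A_f, A_c, R, P, b, np.zeros(n_fine))
print(np.linalg.norm(b - A_f @ x))                      # residual norm after the cycles
```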
254

Automatic history matching in Bayesian framework for field-scale applications

Mohamed Ibrahim Daoud, Ahmed 12 April 2006 (has links)
Conditioning geologic models to production data and assessment of uncertainty is generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications. These are: first, the CPU time of the Bayesian inverse problem using the modified Gauss-Newton algorithm with full covariance as regularization scales quadratically with increasing model size; second, the sensitivity calculation using finite differences with the forward model depends upon the number of model parameters or the number of data points; and third, the high CPU time and memory required for covariance matrix calculation. Several attempts have been made to alleviate the third limitation by using an analytically derived stencil, but these are limited to exponential covariance models only. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of the current limitations. First, we use a commercial finite difference simulator, ECLIPSE, as a forward model, which is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, thus effectively reducing the number of data points to one per well and ensuring the matching of the entire production history. Third, we use both the adjoint method and the streamline-based sensitivity method for sensitivity calculations. The cost of the adjoint method depends on the number of wells integrated, which is generally an order of magnitude smaller than the number of data points or model parameters. The streamline method is more efficient and faster as it requires only one simulation run per iteration regardless of the number of model parameters or data points. Fourth, for solving the inverse problem, we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically derived stencil, which is broadly applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, the CPU time scales linearly with model size. This makes automatic history matching and uncertainty assessment in a Bayesian framework more feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example. The field example is from the Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.
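As a hedged illustration of the fourth ingredient, the sketch below solves one regularised Gauss-Newton-style least-squares update with SciPy's LSQR, stacking a sensitivity block against a regularisation block that plays the role of the square root of the inverse covariance. All matrices here are random stand-ins, not the authors' field-scale system.

```python
# A minimal sketch: minimise ||G dm - dd||^2 + ||L dm||^2 with LSQR, where G holds
# sensitivities, dd the travel-time misfits, and L stands in for C^{-1/2}.
import numpy as np
from scipy.sparse import vstack, csr_matrix, identity
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_data, n_model = 40, 500
G = csr_matrix(rng.normal(size=(n_data, n_model)))  # sensitivity matrix (one row per well misfit)
dd = rng.normal(size=n_data)                        # generalized travel-time misfits
L = identity(n_model, format="csr")                 # stand-in for the square root of the inverse covariance

A = vstack([G, L])                                  # stack the data and regularisation blocks
rhs = np.concatenate([dd, np.zeros(n_model)])
dm = lsqr(A, rhs, atol=1e-8, btol=1e-8)[0]          # model update from the joint least-squares problem
print(dm.shape, np.linalg.norm(G @ dm - dd))
```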
255

A distributed kernel summation framework for machine learning and scientific applications

Lee, Dong Ryeol 11 May 2012 (has links)
The class of computational problems I consider in this thesis shares the common trait of requiring consideration of pairs (or higher-order tuples) of data points. I focus on the kernel summation operations ubiquitous in many data mining and scientific algorithms. In machine learning, kernel summations appear in popular kernel methods which can model nonlinear structures in data. Kernel methods include many non-parametric methods such as kernel density estimation, kernel regression, Gaussian process regression, kernel PCA, and kernel support vector machines (SVM). In computational physics, kernel summations occur inside the classical N-body problem for simulating positions of a set of celestial bodies or atoms. This thesis attempts to marry, for the first time, the best relevant techniques in parallel computing, where kernel summations are in low dimensions, with the best general-dimension algorithms from the machine learning literature. We provide a unified, efficient parallel kernel summation framework that can utilize: (1) various types of deterministic and probabilistic approximations that may be suitable for both low and high-dimensional problems with a large number of data points; (2) indexing the data using any multi-dimensional binary tree with both distributed memory (MPI) and shared memory (OpenMP/Intel TBB) parallelism; (3) a dynamic load balancing scheme to adjust work imbalances during the computation. I will first summarize my previous research in serial kernel summation algorithms. This work started from Greengard/Rokhlin's earlier work on fast multipole methods for the purpose of approximating potential sums of many particles. The contributions of this part of the thesis include the following: (1) reinterpretation of Greengard/Rokhlin's work for the computer science community; (2) the extension of the algorithms to use a larger class of approximation strategies, i.e., probabilistic error bounds via Monte Carlo techniques; (3) the multibody series expansion: the generalization of the theory of fast multipole methods to handle interactions of more than two entities; (4) the first O(N) proof of the batch approximate kernel summation using a notion of intrinsic dimensionality. I then move on to the parallelization of kernel summations and to tackling the scaling of two other kernel methods, Gaussian process regression (kernel matrix inversion) and kernel PCA (kernel matrix eigendecomposition). The software artifact of this thesis has contributed to an open-source machine learning package called MLPACK, which was first demonstrated at NIPS 2008 and subsequently at the NIPS 2011 Big Learning Workshop. Completing a portion of this thesis involved the utilization of high performance computing resources at XSEDE (eXtreme Science and Engineering Discovery Environment) and NERSC (National Energy Research Scientific Computing Center).
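The kernel summation primitive referred to above can be stated in a few lines: for each query point, sum a kernel over all reference points. The sketch below computes the exact O(NM) Gaussian-kernel sums and a simple Monte Carlo estimate, one of the probabilistic approximation strategies mentioned in the abstract; the bandwidth and sample size are illustrative choices, not the thesis' algorithm.

```python
# A minimal sketch of the kernel summation primitive with a Gaussian kernel.
import numpy as np

def kernel_sums_exact(queries, refs, h):
    """Exact sums: for each query q, sum_j exp(-||q - r_j||^2 / (2 h^2))."""
    d2 = ((queries[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h)).sum(axis=1)

def kernel_sums_monte_carlo(queries, refs, h, n_samples, rng):
    """Unbiased Monte Carlo estimate using a uniform subsample of the reference points."""
    idx = rng.integers(len(refs), size=n_samples)
    d2 = ((queries[:, None, :] - refs[idx][None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * h * h)).mean(axis=1) * len(refs)

rng = np.random.default_rng(1)
Q, R = rng.normal(size=(5, 3)), rng.normal(size=(2000, 3))
print(kernel_sums_exact(Q, R, h=0.5))
print(kernel_sums_monte_carlo(Q, R, h=0.5, n_samples=200, rng=rng))
```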
256

Integration-based Kalman-filtering for a Dynamic Generalized Linear Trend Model

Schnatter, Sylvia January 1991 (has links) (PDF)
The topic of the paper is filtering for non-Gaussian dynamic (state space) models by approximate computation of posterior moments using numerical integration. A Gauss-Hermite procedure is implemented based on the approximate posterior mode estimator and curvature recently proposed in 121. This integration-based filtering method will be illustrated by a dynamic trend model for non-Gaussian time series. Comparison of the proposed method with other approximations ([15], [2]) is carried out by simulation experiments for time series from Poisson, exponential and Gamma distributions. (author's abstract) / Series: Forschungsberichte / Institut für Statistik
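For concreteness, the sketch below simulates the kind of data such a dynamic generalized linear trend model describes: a latent level and slope evolve as Gaussian random walks and counts are observed as Poisson with a log link. The variances and initial values are illustrative assumptions, not taken from the paper.

```python
# A minimal simulation sketch of a Poisson dynamic trend model.
import numpy as np

def simulate_poisson_trend(T, q_level=0.01, q_slope=0.001, seed=0):
    rng = np.random.default_rng(seed)
    level, slope = 0.5, 0.02
    states, counts = [], []
    for _ in range(T):
        level += slope + rng.normal(scale=np.sqrt(q_level))  # trend (level) equation
        slope += rng.normal(scale=np.sqrt(q_slope))          # slope equation
        counts.append(rng.poisson(np.exp(level)))            # Poisson observation, log link
        states.append((level, slope))
    return np.array(states), np.array(counts)

states, y = simulate_poisson_trend(T=100)
print(y[:10])
```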
257

Accelerated granular matter simulation / Accelererad simulering av granulära material

Wang, Da January 2015 (has links)
Modeling and simulation of granular matter has important applications in both natural science and industry. One widely used method is the discrete element method (DEM). It can be used for simulating granular matter in the gaseous, liquid as well as solid regime, whereas alternative methods are in general applicable to only one. Discrete element analysis of large systems is, however, limited by long computational times. A number of solutions that radically improve the computational efficiency of DEM simulations are developed and analysed. These include treating the material as a nonsmooth dynamical system and methods for reducing the computational effort of solving the complementarity problem that arises from implicit treatment of the contact laws. This allows for large time-step integration and ultimately more and faster simulation studies or analysis of more complex systems. Acceleration methods that can reduce the computational complexity and degrees of freedom have been invented. These solutions are investigated in numerical experiments, validated using experimental data and applied to design exploration of iron ore pelletising systems. / This work has been generously supported by Algoryx Simulation, LKAB (dnr 223-2442-09), Umeå University and VINNOVA (2014-01901).
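One standard way to handle the complementarity problems mentioned above is a projected Gauss-Seidel iteration; the sketch below applies it to a small linear complementarity problem with a random symmetric positive definite matrix standing in for a DEM contact system, and is a generic illustration rather than the thesis' solver.

```python
# A minimal sketch: projected Gauss-Seidel for the LCP  w = A z + b,  z >= 0, w >= 0, z.w = 0.
import numpy as np

def projected_gauss_seidel(A, b, iters=200):
    z = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            r = b[i] + A[i] @ z - A[i, i] * z[i]  # residual excluding the diagonal term
            z[i] = max(0.0, -r / A[i, i])         # project each impulse onto z_i >= 0
    return z

rng = np.random.default_rng(2)
M = rng.normal(size=(10, 10))
A = M @ M.T + 10 * np.eye(10)                     # SPD stand-in for a contact matrix
b = rng.normal(size=10)
z = projected_gauss_seidel(A, b)
w = A @ z + b
print(z.round(3), w.min().round(6))               # z >= 0 and w approximately >= 0
```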
258

The Point-Split Method and the Linking Number of Space Curves

Forsberg, Timmy January 2014 (has links)
This is a report on research done in the field of mathematical physics. It is an investigation of the concept of the linking number between two simple, closed spatial curves. The linking number is a topological invariant with scientific applications ranging from DNA biology to Topological Quantum Field Theory. Our aim is to study Călugăreanu's theorem, also called White's formula, which relates the linking number to the concepts of twist and writhe. We are interested in the limit of the two curves as they approach each other. To regulate this, we introduce a regularization method that utilizes a point-split. Further, we explore whether the result depends on how the regularization is introduced. To this end we impose an asymmetry in the regularization, with a parameter a in the point-split intervals, to check whether the result becomes dependent on a or not. We find that the result is independent of the parameter a.
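The linking number itself can be computed directly from the Gauss double integral, Lk = (1/4π) ∮∮ (r1 − r2) · (dr1 × dr2) / |r1 − r2|³. The sketch below evaluates it numerically for two circles forming a Hopf link (expected value ±1); the midpoint-rule discretisation is an illustrative choice.

```python
# A minimal numerical sketch of the Gauss linking integral for two closed curves.
import numpy as np

def closed_curve(points):
    """Return segment midpoints and difference (tangent) vectors of a closed polygon."""
    nxt = np.roll(points, -1, axis=0)
    return 0.5 * (points + nxt), nxt - points

def linking_number(c1, c2):
    m1, d1 = closed_curve(c1)
    m2, d2 = closed_curve(c2)
    total = 0.0
    for x1, t1 in zip(m1, d1):
        diff = x1 - m2                                  # r1 - r2 for every segment of curve 2
        cross = np.cross(t1, d2)                        # dr1 x dr2
        total += np.sum(np.einsum("ij,ij->i", diff, cross)
                        / np.linalg.norm(diff, axis=1) ** 3)
    return total / (4 * np.pi)

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle1 = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]      # unit circle in the xy-plane
circle2 = np.c_[1 + np.cos(t), np.zeros_like(t), np.sin(t)]  # linked circle in the xz-plane
print(round(linking_number(circle1, circle2), 3))            # approximately +/- 1
```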
259

A heterogeneous three-dimensional computational model for wood drying

Truscott, Simon January 2004 (has links)
The objective of this PhD research program is to develop an accurate and efficient heterogeneous three-dimensional computational model for simulating the drying of wood at temperatures below the boiling point of water. The complex macroscopic drying equations comprise a coupled and highly nonlinear system of physical laws for liquid and energy conservation. Due to the heterogeneous nature of wood, the physical model parameters strongly depend upon the local pore structure, wood density variation within growth rings and variations in primary and secondary system variables. In order to provide a realistic representation of this behaviour, a set of previously determined parameters derived using sophisticated image analysis methods and homogenisation techniques is embedded within the model. From the literature it is noted that current three-dimensional computational models for wood drying do not take into consideration the heterogeneities of the medium. A significant advance made by the research conducted in this thesis is the development of a three-dimensional computational model that takes into account the heterogeneous board material properties, which vary within the transverse plane with respect to the pith position that defines the radial and tangential directions. The development of an accurate and efficient computational model requires the consideration of a number of significant numerical issues, including the virtual board description, an effective mesh design based on triangular prismatic elements, the control volume finite element discretisation process for the coupled conservation laws, the derivation of an accurate flux expression based on gradient approximations together with flux limiting, and finally the solution of a large, coupled, nonlinear system using an inexact Newton method with a suitably preconditioned iterative linear solver for computing the Newton correction. This thesis addresses all of these issues for the case of low temperature drying of softwood. Specific case studies are presented that highlight the efficiency of the proposed numerical techniques and illustrate the complex heat and mass transport processes that evolve throughout drying.
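The inexact Newton-Krylov step mentioned above can be sketched on a toy problem: SciPy's newton_krylov computes the Newton correction with a Krylov method driven by Jacobian-vector products. The nonlinear diffusion-style residual below is an illustrative stand-in, not the wood-drying equations.

```python
# A minimal Newton-Krylov sketch for the discrete residual of -u'' + u^3 = 1 with
# homogeneous Dirichlet boundaries on a uniform 1-D mesh.
import numpy as np
from scipy.optimize import newton_krylov

n = 50
h = 1.0 / (n + 1)

def residual(u):
    """Finite-difference residual of the nonlinear boundary value problem."""
    u_pad = np.concatenate(([0.0], u, [0.0]))
    lap = (u_pad[:-2] - 2 * u_pad[1:-1] + u_pad[2:]) / h**2
    return -lap + u**3 - 1.0

u = newton_krylov(residual, np.zeros(n), method="lgmres", f_tol=1e-10, verbose=False)
print(np.abs(residual(u)).max())   # residual after convergence
```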
260

Applications of finite field computation to cryptology : extension field arithmetic in public key systems and algebraic attacks on stream ciphers

Wong, Kenneth Koon-Ho January 2008 (has links)
In this digital age, cryptography is largely built in computer hardware or software as discrete structures. One of the most useful of these structures is the finite field. In this thesis, we explore a variety of applications of arithmetic and computation in finite fields in both cryptography and cryptanalysis. First, multiplication algorithms in finite extensions of prime fields are explored. A new algebraic description of implementing the subquadratic Karatsuba algorithm and its variants for extension field multiplication is presented. The use of cyclotomic fields and Gauss periods in constructing suitable extensions of virtually all sizes for efficient arithmetic is described. These multiplication techniques are then applied to some previously proposed public key cryptosystems based on extension fields. These include trace-based cryptosystems such as XTR, and torus-based cryptosystems such as CEILIDH. Improvements to the cost of arithmetic were achieved in some constructions due to the capability of thorough optimisation using the algebraic description. Then, for symmetric key systems, the focus is on algebraic analysis and attacks on stream ciphers. Different techniques for computing solutions to an arbitrary system of boolean equations were considered, and a method of analysing and simplifying the system using truth tables and graph theory has been investigated. Algebraic analyses were performed on stream ciphers based on linear feedback shift registers where clock control mechanisms are employed, a category of ciphers that had not previously been analysed using this method. The results are successful algebraic attacks on various clock-controlled generators and cascade generators, and a full algebraic analysis of the eSTREAM cipher candidate Pomaranch. Some weaknesses in the filter functions used in Pomaranch have also been found. Finally, some non-traditional algebraic analyses of stream ciphers are presented. An algebraic analysis of the word-based RC4 family of stream ciphers is performed by constructing algebraic expressions for each of the operations involved, and it is concluded that each of these operations is significant in contributing to the overall security of the system. As far as we know, this is the first algebraic analysis of a stream cipher that is not based on linear feedback shift registers. The possibility of using binary extension fields and quotient rings for algebraic analysis of stream ciphers based on linear feedback shift registers is then investigated. Feasible algebraic attacks for generators with nonlinear filters are obtained, and algebraic analyses for more complicated generators with multiple registers are presented. This new form of algebraic analysis may prove useful and thereby complement the traditional algebraic attacks. This thesis concludes with some future directions that can be taken and some open questions. Arithmetic and computation in finite fields will certainly be an important area of ongoing research as we are confronted with new developments in theory and exponentially growing computer power.
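As a concrete illustration of the subquadratic building block named above, the sketch below implements plain recursive Karatsuba multiplication for polynomials over a prime field GF(p). The thesis works with a refined algebraic description for extension fields, so the modulus, the base case and the example operands here are illustrative choices only.

```python
# A minimal Karatsuba sketch for polynomials over GF(p), coefficients listed lowest degree first.
def poly_add(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x + y) % p for x, y in zip(a, b)]

def poly_sub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [(x - y) % p for x, y in zip(a, b)]

def poly_shift(a, k):
    return [0] * k + a  # multiply by x^k

def karatsuba(a, b, p):
    """Multiply coefficient lists a, b over GF(p) using the Karatsuba split a = a0 + a1*x^m."""
    if len(a) <= 1 or len(b) <= 1:
        out = [0] * (len(a) + len(b) - 1)       # schoolbook base case
        for i, x in enumerate(a):
            for j, y in enumerate(b):
                out[i + j] = (out[i + j] + x * y) % p
        return out
    m = min(len(a), len(b)) // 2
    a0, a1, b0, b1 = a[:m], a[m:], b[:m], b[m:]
    z0 = karatsuba(a0, b0, p)                                   # low * low
    z2 = karatsuba(a1, b1, p)                                   # high * high
    z1 = karatsuba(poly_add(a0, a1, p), poly_add(b0, b1, p), p) # (low+high)*(low+high)
    mid = poly_sub(poly_sub(z1, z0, p), z2, p)                  # middle term z1 - z0 - z2
    res = poly_add(z0, poly_shift(mid, m), p)
    return poly_add(res, poly_shift(z2, 2 * m), p)

# (1 + 2x + 3x^2)(4 + 5x) mod 7 -> [4, 6, 1, 1]
print(karatsuba([1, 2, 3], [4, 5], 7))
```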
