411

Few group cross section representation based on sparse grid methods / Danniëll Botes

Botes, Danniëll January 2012 (has links)
This thesis addresses the problem of representing few group, homogenised neutron cross sections as a function of state parameters (e.g. burn-up, fuel and moderator temperature, etc.) that describe the conditions in the reactor. The problem is multi-dimensional and the cross section samples, required for building the representation, are the result of expensive transport calculations. At the same time, practical applications require high accuracy. The representation method must therefore be efficient in terms of the number of samples needed for constructing the representation, storage requirements and cross section reconstruction time. Sparse grid methods are proposed for constructing such an efficient representation. Approximation through quasi-regression as well as polynomial interpolation, both based on sparse grids, were investigated. These methods have built-in error estimation capabilities and methods for optimising the representation, and scale well with the number of state parameters. An anisotropic sparse grid integrator based on Clenshaw-Curtis quadrature was implemented, verified and coupled to a pre-existing cross section representation system. Some ways to improve the integrator’s performance were also explored. The sparse grid methods were used to construct cross section representations for various Light Water Reactor fuel assemblies. These reactors have different operating conditions, enrichments and state parameters and therefore pose different challenges to a representation method. Additionally, an example where the cross sections have a different group structure, and were calculated using a different transport code, was used to test the representation method. The built-in error measures were tested on independent, uniformly distributed, quasi-random sample points. In all the cases studied, interpolation proved to be more accurate than approximation for the same number of samples. The primary source of error was found to be the Xenon transient at the beginning of an element’s life (BOL). To address this, the domain was split along the burn-up dimension into “start-up” and “operating” representations. As an alternative, the Xenon concentration was set to its equilibrium value for the whole burn-up range. The representations were also improved by applying anisotropic sampling. It was concluded that interpolation on a sparse grid shows promise as a method for building a cross section representation of sufficient accuracy to be used for practical reactor calculations with a reasonable number of samples. / Thesis (MSc Engineering Sciences (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2013.
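The interpolation idea at the core of this thesis can be pictured in one dimension. The sketch below (Python, with hypothetical function names and a made-up cross-section trend, not the thesis code or data) samples a smooth function at Clenshaw-Curtis nodes and reconstructs it by barycentric polynomial interpolation; a sparse grid combines such one-dimensional rules anisotropically across several state parameters.

```python
import numpy as np

def clenshaw_curtis_nodes(n, lo=0.0, hi=1.0):
    """Chebyshev-extrema (Clenshaw-Curtis) nodes mapped to [lo, hi]."""
    x = np.cos(np.pi * np.arange(n + 1) / n)          # nodes on [-1, 1]
    return lo + (hi - lo) * (x + 1.0) / 2.0

def barycentric_interp(nodes, values, xq):
    """Barycentric interpolation at query points xq (Chebyshev-extrema weights)."""
    n = len(nodes) - 1
    w = (-1.0) ** np.arange(n + 1)
    w[0] *= 0.5
    w[-1] *= 0.5
    num = np.zeros_like(xq)
    den = np.zeros_like(xq)
    for xj, wj, fj in zip(nodes, w, values):
        d = xq - xj
        d[d == 0.0] = np.finfo(float).eps   # avoid division by zero at the nodes
        num += wj * fj / d
        den += wj / d
    return num / den

# Toy cross-section-like trend in burn-up, sampled at the nodes and reconstructed
burnup = clenshaw_curtis_nodes(8, 0.0, 60.0)              # MWd/kg (hypothetical range)
sigma = 1.0 + 0.02 * burnup - 0.0001 * burnup**2          # hypothetical trend, not real data
query = np.linspace(0.0, 60.0, 5)
print(barycentric_interp(burnup, sigma, query))
```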
412

Investigating the empirical relationship between oceanic properties observable by satellite and the oceanic pCO₂ / Marizelle van der Walt

Van der Walt, Marizelle January 2011 (has links)
In this dissertation, the aim is to investigate the empirical relationship between the partial pressure of CO2 (pCO2) and other ocean variables in the Southern Ocean, by using a small percentage of the available data. CO2 is one of the main greenhouse gases that contribute to global warming and climate change. The concentration of anthropogenic CO2 in the atmosphere, however, would have been much higher had some of it not been absorbed by oceanic and terrestrial sinks. The oceans absorb and release CO2 from and to the atmosphere. Large regions in the Southern Ocean are expected to be a CO2 sink. However, measurements of CO2 concentrations in the Southern Ocean are sparse, and accurate values for the sinks and sources cannot be determined. In addition, it is difficult to develop accurate oceanic and ocean-atmosphere models of the Southern Ocean with the sparse observations of CO2 concentrations in this part of the ocean. In this dissertation, classical techniques are investigated to determine the empirical relationship between pCO2 and other oceanic variables using in situ measurements. Additionally, sampling techniques are investigated in order to make a judicious selection of a small percentage of the total available data points in order to develop an accurate empirical relationship. Data from the SANAE49 cruise stretching between Antarctica and Cape Town are used in this dissertation. The complete data set contains 6103 data points. The maximum pCO2 value in this stretch is 436.0 μatm, the minimum is 251.2 μatm and the mean is 360.2 μatm. An empirical relationship is investigated between pCO2 and the variables Temperature (T), chlorophyll-a concentration (Chl), Mixed Layer Depth (MLD) and latitude (Lat). The methods are repeated with latitude included and excluded as a variable, respectively. D-optimal sampling is used to select a small percentage of the available data for determining the empirical relationship. Least squares optimization is used as one method to determine the empirical relationship. For 200 D-optimally sampled points, the pCO2 prediction with the fourth order equation yields a Root Mean Square (RMS) error of 15.39 μatm (on the estimation of pCO2) with latitude excluded as a variable and an RMS error of 8.797 μatm with latitude included as a variable. Radial basis function (RBF) interpolation is another method that is used to determine the empirical relationship between the variables. The RBF interpolation with 200 D-optimally sampled points yields an RMS error of 9.617 μatm with latitude excluded as a variable and an RMS error of 6.716 μatm with latitude included as a variable. Optimal scaling is applied to the variables in the RBF interpolation, yielding an RMS error of 9.012 μatm with latitude excluded as a variable and an RMS error of 4.065 μatm with latitude included as a variable for 200 D-optimally sampled points. / Thesis (MSc (Applied Mathematics))--North-West University, Potchefstroom Campus, 2012
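As a rough illustration of the RBF step (not the thesis code; the data below are synthetic stand-ins for the cruise measurements, and a plain random subset replaces D-optimal sampling), SciPy's RBFInterpolator can fit pCO2 against T, Chl, MLD and latitude from a small sample:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# Synthetic stand-ins for the cruise data: columns T, Chl, MLD, Lat (not SANAE49 data)
X = rng.uniform([0.0, 0.05, 20.0, -70.0], [20.0, 2.0, 200.0, -35.0], size=(6000, 4))
pco2 = 360 + 2.5 * X[:, 0] - 15 * X[:, 1] + 0.02 * X[:, 2] + 0.4 * (X[:, 3] + 50)

# Random subset standing in for the 200 D-optimally sampled points
idx = rng.choice(len(X), size=200, replace=False)
model = RBFInterpolator(X[idx], pco2[idx], kernel='thin_plate_spline')

pred = model(X)
rms = np.sqrt(np.mean((pred - pco2) ** 2))
print(f"RMS error on the full synthetic set: {rms:.3f} uatm")
```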
413

Application of Definability to Query Answering over Knowledge Bases

Kinash, Taras January 2013 (has links)
Answering object queries (i.e. instance retrieval) is a central task in ontology-based data access (OBDA). Performing this task involves reasoning with respect to a knowledge base K (i.e. ontology) over some description logic (DL) dialect L. As the expressive power of L grows, so does the complexity of reasoning with respect to K. Therefore, eliminating the need to reason with respect to a knowledge base K is desirable. In this work, we propose an optimization to improve the performance of answering object queries by eliminating the need to reason with respect to the knowledge base and, instead, utilizing cached query results when possible. In particular, given a DL dialect L, an object query C over some knowledge base K and a set of cached query results S={S1, ..., Sn} obtained from evaluating past queries, we rewrite C into an equivalent query D that can be evaluated with respect to an empty knowledge base, using cached query results S' = {Si1, ..., Sim}, where S' is a subset of S. The new query D is an interpolant for the original query C with respect to K and S. To find D, we leverage a tool for enumerating interpolants of a given sentence with respect to some theory. We describe a procedure that maps a knowledge base K, expressed in a description logic dialect of first order logic, and an object query C into an equivalent theory and query that are input to the interpolant-enumerating tool, and that maps the resulting interpolants into an object query D that can be evaluated over an empty knowledge base. We show the efficacy of our approach through experimental evaluation on the Lehigh University Benchmark (LUBM) data set, as well as on a synthetic data set, LUBMMOD, that we created by augmenting the LUBM ontology with additional axioms.
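The caching idea can be pictured with a toy example (illustration only; the concept names and cached extensions below are hypothetical, and this is not the thesis procedure): once an interpolant rewrites the new query as a combination of already-cached queries, answering reduces to set operations over the cached extensions, with no DL reasoning required.

```python
# Toy illustration: cached extensions of previously answered object queries
cached = {
    "Student": {"alice", "bob", "carol"},    # hypothetical cached result sets
    "Employee": {"bob", "dave"},
}

# Suppose interpolation establishes that, relative to the KB,
# TeachingAssistant is equivalent to Student AND Employee.
# The rewritten query D is then evaluated over the cache alone.
answer = cached["Student"] & cached["Employee"]
print(answer)   # {'bob'}
```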
414

Local Volatility Calibration on the Foreign Currency Option Market / Kalibrering av lokal volatilitet på valutaoptionsmarknaden

Falck, Markus January 2014 (has links)
In this thesis we develop and test a new method for interpolating and extrapolating prices of European options. The theoretical base originates from the local variance gamma model developed by Carr (2008), in which the local volatility model by Dupire (1994) is combined with the variance gamma model by Madan and Seneta (1990). By solving a simplified version of the Dupire equation under the assumption of a continuous five-parameter diffusion term, we derive a parameterization defined for strikes in an interval of arbitrary size. The parameterization produces positive option prices which satisfy both conditions for absence of arbitrage in a one-maturity setting, i.e. all adjacent vertical spreads and butterfly spreads are priced non-negatively. The method is implemented and tested in the FX-option market. We suggest two sub-models, one with three and one with five degrees of freedom. By using a least-squares approach, we calibrate the two sub-models against 416 Reuters quoted volatility smiles. Both sub-models succeed in generating prices within the bid-ask spread for all options in the sample. Compared to the three-parameter model, the model with five parameters calibrates more exactly to market quoted mids but has a longer calibration time. The three-parameter model calibrates remarkably quickly; in a MATLAB implementation using a Levenberg-Marquardt algorithm the average calibration time is approximately 1 ms. Both sub-models produce volatility smiles which are C2 and well-behaved. Further, we suggest a technique allowing for arbitrage-free interpolation of calibrated option price functions in the maturity dimension. The interpolation is performed in parameter space, where every set of parameters uniquely determines an option price function. Furthermore, we produce sufficient conditions to ensure absence of calendar spread arbitrage when calibrating the proposed model to several maturities. We use this technique to produce implied volatility surfaces which are sufficiently smooth, satisfy all conditions for absence of arbitrage and fit market quoted volatility surfaces within the bid-ask spread. In the final chapter we use the results for producing Dupire local volatility surfaces and for pricing variance swaps.
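A minimal sketch of the calibration step, assuming a toy three-parameter smile rather than the local variance gamma parameterization and made-up quotes, using SciPy's Levenberg-Marquardt least-squares driver:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy smile (not the thesis parameterization): sigma(k) = a + b*k + c*k**2
# in log-moneyness k.
def smile(params, k):
    a, b, c = params
    return a + b * k + c * k ** 2

k_quotes = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])            # hypothetical quotes
vol_quotes = np.array([0.145, 0.121, 0.110, 0.118, 0.140])

def residuals(params):
    return smile(params, k_quotes) - vol_quotes

fit = least_squares(residuals, x0=[0.1, 0.0, 0.5], method='lm')  # Levenberg-Marquardt
print(fit.x, fit.cost)
```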
415

Evaluation of surface climate data from the North American Regional Reanalysis for Hydrological Applications in central Canada

Kim, Sung Joon 22 June 2012 (has links)
A challenge in hydrological studies in the Canadian Prairie region is to find good-quality meteorological data, because many basins are located in remote regions where few stations are available, and existing stations typically have short records with many missing values. The recently released North American Regional Reanalysis (NARR) data set appears to have potential for hydrological studies in data-scarce central Canada. The main objectives of this study are: (1) to evaluate and utilize NARR data for hydrologic modelling and statistical downscaling, (2) to develop methods for estimating missing precipitation data using NARR data, and (3) to investigate and correct NARR precipitation bias in the Canadian Prairie region. Prior to applying NARR for hydrological modelling, the NARR surface data were evaluated by comparison with observed meteorological data over the Canadian Prairie region. The comparison results indicated that NARR is a suitable alternative to observed surface meteorological data and thus useful for hydrological modelling. After evaluation of NARR surface climate data, the SLURP model was set up with input data from NARR and calibrated for several watersheds. The results indicated that the hydrological model can be reasonably calibrated using NARR data as input. The relatively good agreement between precipitation from NARR and observed station data suggests that NARR information may be used in the estimation of missing precipitation records at weather stations. Several traditional methods for estimating missing data were compared with three NARR-based estimation methods. The results show that NARR-based methods significantly improved the estimation of precipitation compared to the traditional methods. The existence of NARR bias is a critical issue that must be addressed prior to the use of the data. Using observed weather station data, a statistical interpolation technique (also known as Optimum Interpolation) was employed to correct gridded NARR precipitation for bias. The results suggest that the method significantly reduces NARR bias over the selected study area.
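The simplest NARR-based gap-filling idea can be sketched as follows (hypothetical numbers, not NARR or station data; the thesis compares several NARR-based and traditional estimators and additionally applies optimum interpolation for bias correction): fit a linear relation between the co-located NARR grid value and the gauge on days with observations, then apply it on the missing days.

```python
import numpy as np

# Hypothetical daily precipitation series: station gauge (with gaps) and the
# co-located NARR grid cell (complete). Illustrative values only.
narr = np.array([0.0, 2.1, 5.4, 0.3, 12.0, 0.0, 7.7, 3.2])
station = np.array([0.0, 1.8, np.nan, 0.5, np.nan, 0.0, 6.9, 2.8])

obs = ~np.isnan(station)
# Simple linear adjustment fitted on days with gauge observations
slope, intercept = np.polyfit(narr[obs], station[obs], deg=1)

filled = station.copy()
filled[~obs] = slope * narr[~obs] + intercept   # estimate the missing days from NARR
print(filled)
```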
416

Data transfer strategies for overset and hybrid computational methods

Quon, Eliot 12 January 2015 (has links)
Modern computational science permits the accurate solution of nonlinear partial differential equations (PDEs) on overlapping computational domains, known as an overset approach. The complex grid interconnectivity inherent in the overset method can introduce errors in the solution through “orphan” points, i.e., grid points for which reliable solution donor points cannot be located. For this reason, a variety of data transfer strategies based on scattered data interpolation techniques have been assessed with application to both overset and hybrid methodologies. Scattered data approaches are attractive because they are decoupled from solver type and topology, and may be readily applied within existing methodologies. In addition to standard radial basis function (RBF) interpolation, a novel steered radial basis function (SRBF) interpolation technique has been developed to introduce data adaptivity into the data transfer algorithm. All techniques were assessed by interpolating both continuous and discontinuous analytical test functions. For discontinuous functions, SRBF interpolation was able to maintain solution gradients with the steering technique being the scattered-data analog of a slope limiter. In comparison with linear mappings, the higher-order approaches were able to more accurately preserve flow physics for arbitrary grid configurations. Overset validation test cases included an inviscid convecting vortex, a shock tube, and a turbulent ship airwake. These were studied within unsteady Reynolds-Averaged Navier-Stokes (URANS) simulations to determine quantitative and qualitative improvements when applying RBF interpolation over current methods. The convecting vortex was also analyzed on a grid configuration which contained orphan points under the state-of-the-art overset paradigm. This was successfully solved by the RBF-based algorithm, which effectively eliminated orphans by enabling high-order extrapolation. Order-of-magnitude reductions in error compared to the exact vortex solution were observed. In addition, transient conservation errors that persisted in the original overset methodology were eliminated by the RBF approach. To assess the effect of advanced mapping techniques on the fidelity of a moving grid simulation, RBF interpolation was applied to a hybrid simulation of an isolated wind turbine rotor. The resulting blade pressure distributions were comparable to a rotor simulation with refined near-body grids.
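As a concrete picture of an unsteered scattered-data transfer (a sketch with hypothetical points and a Gaussian kernel, not the thesis implementation, which also develops the steered variant): build the donor-donor kernel matrix, solve for interpolation weights, and evaluate at the receiver points.

```python
import numpy as np

def rbf_transfer(donor_xyz, donor_vals, receiver_xyz, eps=1.0):
    """Gaussian RBF transfer of a scalar field from donor points to receiver points."""
    d2 = np.sum((donor_xyz[:, None, :] - donor_xyz[None, :, :]) ** 2, axis=-1)
    A = np.exp(-eps * d2)                        # donor-donor kernel matrix
    weights = np.linalg.solve(A, donor_vals)     # interpolation weights
    d2q = np.sum((receiver_xyz[:, None, :] - donor_xyz[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * d2q) @ weights          # evaluate at receiver points

rng = np.random.default_rng(1)
donors = rng.uniform(size=(50, 3))               # hypothetical donor point cloud
field = np.sin(donors[:, 0]) + donors[:, 1] ** 2
receivers = rng.uniform(size=(10, 3))
print(rbf_transfer(donors, field, receivers))
```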
417

Efficient Computation with Sparse and Dense Polynomials

Roche, Daniel Steven January 2011 (has links)
Computations with polynomials are at the heart of any computer algebra system and also have many applications in engineering, coding theory, and cryptography. Generally speaking, the low-level polynomial computations of interest can be classified as arithmetic operations, algebraic computations, and inverse symbolic problems. New algorithms are presented in all these areas that improve on the state of the art in both theoretical and practical performance. Traditionally, polynomials may be represented in a computer in one of two ways: as a "dense" array of all possible coefficients up to the polynomial's degree, or as a "sparse" list of coefficient-exponent tuples. In the latter case, zero terms are not explicitly written, giving a potentially more compact representation. In the area of arithmetic operations, new algorithms are presented for the multiplication of dense polynomials. These have the same asymptotic time cost as the fastest existing approaches, but reduce the intermediate storage required from linear in the size of the input to a constant amount. Two different algorithms for so-called "adaptive" multiplication are also presented which effectively provide a gradient between existing sparse and dense algorithms, giving a large improvement in many cases while never performing significantly worse than the best existing approaches. Algebraic computations on sparse polynomials are considered as well. The first known polynomial-time algorithm to detect when a sparse polynomial is a perfect power is presented, along with two different approaches to computing the perfect power factorization. Inverse symbolic problems are those for which the challenge is to compute a symbolic mathematical representation of a program or "black box". First, new algorithms are presented which improve the complexity of interpolation for sparse polynomials with coefficients in finite fields or approximate complex numbers. Second, the first polynomial-time algorithm for the more general problem of sparsest-shift interpolation is presented. The practical performance of all these algorithms is demonstrated with implementations in a high-performance library and compared to existing software and previous techniques.
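The sparse representation itself is easy to picture (a Python sketch; the classical term-by-term product shown here is the baseline that work of this kind improves on, not one of the thesis's new algorithms): store only the nonzero terms as exponent-coefficient pairs, so very high degrees cost nothing extra.

```python
# Sparse univariate polynomials as {exponent: coefficient} maps; zero terms are never stored.
def sparse_mul(f, g):
    """Multiply two sparse polynomials with the classical term-by-term product."""
    result = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = ef + eg
            c = result.get(e, 0) + cf * cg
            if c:
                result[e] = c
            elif e in result:
                del result[e]     # drop terms that cancel to zero
    return result

f = {0: 3, 1000000: 1}        # 3 + x^1000000
g = {0: -3, 500000: 2}        # -3 + 2*x^500000
print(sparse_mul(f, g))       # {0: -9, 500000: 6, 1000000: -3, 1500000: 2}
```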
418

Design And Implementation Of Fir Digital Filters With Variable Frequency Characteristics

Piskin, Hatice 01 December 2005 (has links) (PDF)
Variable digital filters (VDF) find many application areas in communication, audio, speech and image processing. This thesis analyzes the design and implementation of FIR digital filters with variable frequency characteristics and introduces two design methods. The design and implementation of the proposed methods are realized in the MATLAB software environment. Various filter design examples and comparisons are also outlined. One of the major application areas of VDFs is software defined radio (SDR). The interpolation problem in the sample rate converter (SRC) unit of the SDR is solved by using these filters. Realizations of VDFs on the SRC are outlined and described. Simulations in Simulink and on specific hardware are examined.
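A crude stand-in for the variable-cutoff behaviour (not one of the two design methods proposed in the thesis) is to expose the cutoff frequency as a parameter of an ordinary windowed FIR design; the sampling rate and tap count below are hypothetical.

```python
import numpy as np
from scipy.signal import firwin, freqz

def variable_lowpass(cutoff_hz, fs=8000.0, numtaps=65):
    """Windowed-sinc low-pass FIR whose cutoff is a free parameter."""
    return firwin(numtaps, cutoff_hz, fs=fs)

# Sweep the cutoff and inspect the resulting frequency responses
for fc in (500.0, 1000.0, 2000.0):
    h = variable_lowpass(fc)
    w, H = freqz(h, worN=512, fs=8000.0)
    print(fc, np.max(np.abs(H)))
```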
419

Fixed-analysis adaptive-synthesis filter banks

Lettsome, Clyde Alphonso 07 April 2009 (has links)
Subband/wavelet analysis-synthesis filters are a major component in many compression algorithms. Such compression algorithms have been applied to images, voice, and video, and have achieved high performance. Typically, the configuration for such compression algorithms involves a bank of analysis filters whose coefficients have been designed in advance to enable high quality reconstruction. The analysis system is then followed by subband quantization and decoding on the synthesis side. Decoding is performed using a corresponding set of synthesis filters and the subbands are merged together. For many years, there has been interest in improving the analysis-synthesis filters in order to achieve better coding quality. Adaptive filter banks have been explored by a number of authors, whereby the analysis and synthesis filter coefficients are changed dynamically in response to the input. A degree of performance improvement has been reported, but this approach does require that the analysis system dynamically maintain synchronization with the synthesis system in order to perform reconstruction. In this thesis, we explore a variant of the adaptive filter bank idea. We will refer to this approach as fixed-analysis adaptive-synthesis filter banks. Unlike the adaptive filter banks proposed previously, there is no analysis-synthesis synchronization issue involved. This implies less coder complexity and more coder flexibility. Such an approach can be compatible with existing subband wavelet encoders. The design methodology and a performance analysis are presented.
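The fixed-analysis/fixed-synthesis baseline that such work starts from can be sketched with the simplest two-channel (Haar) bank; the thesis keeps the analysis side fixed as below and adapts the synthesis side instead. Function names and the test signal are illustrative only.

```python
import numpy as np

def haar_analysis(x):
    """One-level two-channel (Haar) analysis: lowpass and highpass subbands."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_synthesis(low, high):
    """Matching fixed synthesis bank: rebuild and interleave the samples."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

x = np.arange(8, dtype=float)
low, high = haar_analysis(x)
print(np.allclose(haar_synthesis(low, high), x))   # True: perfect reconstruction
```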
420

Parabolic systems and an underlying Lagrangian

Yolcu, Türkay 07 July 2009 (has links)
In this thesis, we extend De Giorgi's interpolation method to a class of parabolic equations which are not gradient flows but possess an entropy functional and an underlying Lagrangian. The new fact in the study is that not only the Lagrangian may depend on spatial variables, but also it does not induce a metric. Assuming the initial condition is a density function, not necessarily smooth, but solely of bounded first moments and finite "entropy", we use a variational scheme to discretize the equation in time and construct approximate solutions. Moreover, De Giorgi's interpolation method is revealed to be a powerful tool for proving convergence of our algorithm. Finally, we analyze uniqueness and stability of our solution in L¹.
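Schematically, the variational time discretization alluded to is of minimizing-movement type; in notation assumed here (not taken from the thesis), with time step h, entropy functional H and a transport cost C_h induced by the Lagrangian, each step solves

```latex
\rho_h^{k+1} \in \operatorname*{argmin}_{\rho}
  \left\{ \mathcal{C}_h\!\left(\rho_h^{k}, \rho\right) + \mathcal{H}(\rho) \right\},
```

and De Giorgi's interpolation between the discrete densities is the tool used to prove convergence of the scheme as h tends to zero.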
