  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
51

Inverse modelling and optimisation in numerical groundwater flow models using proper orthogonal decomposition

Wise, John Nathaniel 03 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2015. / ENGLISH ABSTRACT: Numerical simulations are widely used for predicting and optimising the exploitation of aquifers. They are also used to determine certain physical parameters, for example soil conductivity, by inverse calculations, where the model parameters are changed until the model results correspond optimally to measurements taken on site. The Richards’ equation describes the movement of a fluid through unsaturated porous media, and is characterised as a non-linear partial differential equation. The equation is subject to a number of parameters and is typically computationally expensive to solve. To determine the parameters in the Richards’ equation, inverse modelling studies often need to be undertaken. In these studies, the parameters of a numerical model are varied until the numerical response matches a measured response. Inverse modelling studies typically require hundreds of simulations, which is why parameter optimisation for unsaturated problems is, in the literature, common only in small or 1D cases. To overcome the computational expense incurred in inverse modelling, the use of Proper Orthogonal Decomposition (POD) as a Reduced Order Modelling (ROM) method is proposed in this thesis to speed up individual simulations. An explanation of the Finite Element Method (FEM) is given using the Galerkin method, followed by a detailed explanation of the Galerkin POD approach. In the development of the Galerkin POD approach, the method of reducing matrices and vectors is shown, and the treatment of Neumann and Dirichlet boundary values is explained. The Galerkin POD method is applied to two case studies. The first case study is the Kogelberg site in the Table Mountain Group near Cape Town in South Africa. The response of the site is modelled at one well over a period of 2 years, and is assumed to be governed by saturated flow, making it a linear problem.
The site is modelled as a 3D transient, homogeneous site, with 15 layers and ≈ 20000 nodes, using the FEM implemented in the open-source software FreeFem++. The model takes the evapotranspiration of the fynbos vegetation at the site into consideration, allowing the calculation of annual recharge into the aquifer. The ROM is created from high-fidelity responses taken over time at different parameter points, and speed-up times of ≈ 500 are achieved, corresponding to speed-up times found in the literature for linear problems. The purpose of the saturated groundwater model is to demonstrate that a POD-based ROM can approximate the full model response over the entire parameter domain, highlighting the excellent interpolation qualities and speed-up times of the Galerkin POD approach when applied to linear problems. A second case study is undertaken on a synthetic unsaturated site, using the Richards’ equation to describe the water movement. The model is a 2D transient model consisting of ≈ 5000 nodes, and is also created using FreeFem++. The Galerkin POD method is applied to this case study in order to replicate the high-fidelity response. This did not yield any speed-up, since the full matrices of non-linear problems need to be recreated at each time step in the transient simulation. Subsequently, a method is proposed in this thesis that adapts the Galerkin POD method by linearising the non-linear terms in the Richards’ equation, named the Linearised Galerkin POD (LGP) method. This method is applied to the same 2D synthetic problem, and results in speed-up times in the range of 10 to 100. The adaptation, notably, does not use any interpolation techniques, favouring a code-intrusive, but physics-based, approach. While the use of an intrusively linearised POD approach adds to the complexity of the ROM, it avoids the problem of finding kernel parameters typically present in interpolative POD approaches.
Furthermore, the interpolation and possible extrapolation properties inherent to intrusive POD-based ROMs are explored. The good extrapolation properties, within predetermined bounds, of intrusive PODs allow for the development of an optimisation approach requiring only a very small Design of Experiments (DOE) set (e.g. with improved Latin Hypercube sampling). The optimisation method creates locally accurate models within the parameter space using Support Vector Classification (SVC). The region inside the parameter space in which the optimiser is allowed to move is called the confidence region. This confidence region is chosen as the parameter region in which the ROM meets certain accuracy conditions. The proposed optimisation technique thus takes advantage of the good extrapolation characteristics of intrusive POD-based ROMs. A further advantage of this optimisation approach is that the ROM is built on a set of high-fidelity responses obtained prior to the inverse modelling study, avoiding the need for full simulations during the inverse modelling study. In the methodologies and case studies presented in this thesis, initially infeasible inverse modelling problems are made tractable by the use of POD-based ROMs. The speed-up times and extrapolation properties of POD-based ROMs are also shown to be favourable. This research demonstrates the use of POD as a groundwater management tool for saturated and unsaturated sites, allowing for the quick evaluation of different scenarios that would otherwise not be possible. It is proposed that a form of POD be implemented in conventional groundwater software to significantly reduce the time required for inverse modelling studies, thereby allowing for more effective groundwater management. / AFRIKAANSE OPSOMMING: The Richards equation describes the movement of a fluid through an unsaturated porous medium, and is characterised as a non-linear partial differential equation.
The equation is subject to a number of parameters and is typically computationally expensive to solve. To determine the parameters in the Richards equation, parameter optimisation studies must often be undertaken. In these studies, the parameters of a numerical model are varied until the numerical results match the measured results. Parameter optimisation studies require on the order of hundreds of simulations, which means that studies using the Richards equation are common only in 1D problems in the literature. As a solution to the computational cost incurred in parameter optimisation studies, the use of Proper Orthogonal Decomposition (POD) as a Reduced Order Model (ROM) is proposed in this thesis to speed up individual simulations in the optimisation context. The Galerkin POD approach was first investigated and applied to the Richards equation, after which the technique was tested on several case studies. The Galerkin POD method is demonstrated on a hypothetical case study in which water movement is described by the Richards equation. Owing to the non-linear nature of the Richards equation, the Galerkin POD method did not lead to a significant reduction in the computational cost per simulation. A further case study is performed on a real large-scale site in the Table Mountain Group near Cape Town, South Africa, where the groundwater movement is regarded as saturated. Owing to the linear nature of the equation describing saturated water movement, remarkable speed-ups of > 500 were observed with the ROM in this case study. The Galerkin POD method was then adapted by linearising the non-linear terms in the Richards equation. This technique is called the Linearised Galerkin POD (LGP) technique. The adaptation showed good results, with speed-ups greater than 50 when the ROM is compared with the original simulation.
Although the technique makes use of further linearisation, the method is still a physics-based approach, and makes no use of interpolation techniques. The use of a physics-based POD approach adds to the complexity of a full numerical model, but the complexity is justified by the remarkable speed-ups in parameter optimisation studies. Furthermore, the interpolation properties, and possible extrapolation properties, inherent to physics-based POD ROM techniques are investigated in this research. A technique is proposed in which these inherent properties are used to create locally accurate models within the parameter space. The proposed technique makes use of support vector classification. The bounds of the locally accurate model are called a confidence region. This confidence region is chosen as the parameter region in which the ROM satisfies preselected accuracy requirements. The optimisation approach also avoids performing full simulations during the parameter optimisation, by making use of a ROM based on the results of a set of full simulations performed before the parameter optimisation study. The full simulations are typically performed at parameter points chosen through a process called design of experiments. Further hypothetical groundwater case studies were undertaken to test the LGP and locally accurate techniques. In these case studies the groundwater movement was again described by the Richards equation. In the case studies, complex and time-consuming modelling problems are replaced by a POD-based ROM in which individual simulations are remarkably faster. The speed and interpolation/extrapolation properties appear to be very favourable.
In this research, the use of reduced order models as a groundwater management tool has been clearly demonstrated, providing for the rapid evaluation of different modelling scenarios that would otherwise not be possible. It is proposed that a form of POD be implemented in conventional groundwater software to make substantial speed-ups in parameter studies possible, leading to more effective management of groundwater.
52

縮基法初始值問題之數值研究 / Numerical studies of reduced basis methods for initial value problems

陳揚敏 Unknown Date (has links)
縮基法(RBM) 是對參數化的曲線求逼近解的一個方法,基本上乃使用投影法將解曲線投射到解空間的一子空間中,如此一來,可將原問題轉換成一較小的系統,並經由數值計算出小系統的解,來求得大系統的一逼近解。在本篇論文中主要的乃探討RBM在常微分方程組初始值問題上的應用,並發展一套含有誤差控制的演算法。 本篇論文中所採用的ODE Solver 乃由Gordon 和Shampine 基於Adams PECE方法所發展的。在求解的過程中,對於計算解誤差的控制我們除了利用ODE Solver 的誤差估計,另外我們又發展對縮基解(reduced basis solution) 的後(Aposteriori) 誤差估計,以確保數值計算解的準確性。我們所考慮使用的子空間有三種Taylor, Lagrange , Hermite 。同時為了要增加數值的特定性及簡化小系統的求解工作,我們先行將子空間的基底直交化。因此,除了誤差的控制外,我們也討論了roundoff error 對向量直交化及形成小系統時所造成的影響,並設立誤差標準以判別何時誤差過大到嚴重影響縮基解的準確度。 本篇論文的目的是希望利用RBM發展出一套解常微分方程組初始值問題的求算法,以期計算解能在較短的時間內準確的被計算出來。 / The reduced basis method (RBM) is a scheme for approximating parametric solution curves. The basic technique of the RBM is projection. By applying the method, we can find an approximate solution of the original system which satisfies a system of smaller size. In this paper, we are mainly concerned with applications of the RBM to ODE initial value problems, and we develop an algorithm which contains a set of error controls. The ODE solver used in this paper was developed by Gordon and Shampine based on Adams PECE formulas. To assure the accuracy of the reduced basis approximation, we set up an appropriate automatic error control when calling the GS solver and develop an a posteriori error estimate to keep the reduction error under control. The subspaces considered are Taylor, Lagrange and Hermite subspaces. In the meantime, in order to improve the numerical stability and simplify the computation of the reduced basis solution, we orthogonalize the generators of the reduced subspaces. We also discuss the roundoff errors in the orthogonalization process and build up a criterion for identifying when these errors destroy the accuracy of the reduced basis solution. The aim of this paper is to develop an algorithm to solve ODE initial value problems efficiently.
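The projection idea described above can be sketched for a linear IVP u' = Au: build the Taylor subspace span{u0, Au0, A²u0, …}, orthogonalize it (as the abstract does for stability), integrate the projected small system, and lift back. The matrix, dimensions, subspace size, and step count below are invented for illustration; the thesis additionally wraps this in a posteriori error control, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Full linear IVP u' = A u, u(0) = u0 (stand-in for a large ODE system).
n = 100
A = -np.eye(n) + 0.05 * rng.standard_normal((n, n))
u0 = rng.standard_normal(n)

# Taylor subspace span{u0, A u0, ..., A^(m-1) u0}, orthogonalized by QR
# to improve numerical stability before forming the small system.
m = 8
basis = np.empty((n, m))
v = u0.copy()
for k in range(m):
    basis[:, k] = v
    v = A @ v
Q, _ = np.linalg.qr(basis)

# Reduced IVP: a' = (Q^T A Q) a, with a(0) = Q^T u0.
A_r = Q.T @ A @ Q
a = Q.T @ u0

# Integrate full and reduced systems with the same explicit Euler scheme.
dt, steps = 1e-3, 200
u = u0.copy()
for _ in range(steps):
    u = u + dt * (A @ u)
    a = a + dt * (A_r @ a)

# The lifted reduced solution Q a tracks the full solution closely.
rel_err = np.linalg.norm(u - Q @ a) / np.linalg.norm(u)
```

The reduced system is m×m instead of n×n, so each step costs O(m²) rather than O(n²); the error-control machinery in the thesis decides when m modes are no longer enough.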
53

Very large register file for BLAS-3 operations.

January 1995 (has links)
by Aylwin Chung-Fai, Yu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 1995. / Includes bibliographical references (leaves 117-118). / Abstract --- p.i / Acknowledgement --- p.iii / List of Tables --- p.v / List of Figures --- p.vi / Chapter 1 --- Introduction --- p.1 / Chapter 1.1 --- BLAS-3 Operations --- p.2 / Chapter 1.2 --- Organization of Thesis --- p.2 / Chapter 1.3 --- Contribution --- p.3 / Chapter 2 --- Background Studies --- p.4 / Chapter 2.1 --- Registers & Cache Memory --- p.4 / Chapter 2.2 --- Previous Research --- p.6 / Chapter 2.3 --- Problem of Register & Cache --- p.8 / Chapter 2.4 --- BLAS-3 Operations On RISC Microprocessor --- p.10 / Chapter 3 --- Compiler Optimization Techniques for BLAS-3 Operations --- p.12 / Chapter 3.1 --- One-Dimensional Q-Way J-Loop Unrolling --- p.13 / Chapter 3.2 --- Two-Dimensional P×Q-Ways I×J-Loops Unrolling --- p.15 / Chapter 3.3 --- Addition of Code to Remove Redundant Code --- p.17 / Chapter 3.4 --- Simulation Result --- p.17 / Chapter 3.5 --- Summary --- p.23 / Chapter 4 --- Architectural Model of Very Large Register File --- p.25 / Chapter 4.1 --- Architectural Model --- p.26 / Chapter 4.2 --- Traditional Register File vs. Very Large Register File --- p.32 / Chapter 5 --- Ideal Case Study of Very Large Register File --- p.35 / Chapter 5.1 --- Matrix Multiply --- p.36 / Chapter 5.2 --- LU Decomposition --- p.41 / Chapter 5.3 --- Convolution --- p.50 / Chapter 6 --- Worst Case Study of Very Large Register File --- p.58 / Chapter 6.1 --- Matrix Multiply --- p.59 / Chapter 6.2 --- LU Decomposition --- p.65 / Chapter 6.3 --- Convolution --- p.74 / Chapter 7 --- Proposed Case Study of Very Large Register File --- p.81 / Chapter 7.1 --- Matrix Multiply --- p.82 / Chapter 7.2 --- LU Decomposition --- p.91 / Chapter 7.3 --- Convolution --- p.102 / Chapter 7.4 --- Comparison --- p.111 / Chapter 8 --- Conclusion & Future Work --- p.114 / Chapter 8.1 --- Summary --- p.114 / Chapter 8.2 --- Future Work --- p.115 / Bibliography --- p.117
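The P×Q loop-unrolling technique named in Chapter 3 is, at heart, register blocking: compute a small tile of the output while it stays resident in registers, streaming panels of the inputs past it. A high-level sketch of the idea follows — Python for illustration only (real BLAS-3 kernels unroll these loops in C or assembly), and the block size is an invented tuning parameter:

```python
import numpy as np

def blocked_matmul(A, B, bs=4):
    """Tiled matrix multiply: accumulate a bs-by-bs tile of C while streaming
    panels of A and B past it, mimicking PxQ-way register blocking for BLAS-3."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            # The tile plays the role of the register-resident accumulator.
            tile = np.zeros((min(bs, n - i), min(bs, m - j)))
            for p in range(0, k, bs):
                tile += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
            C[i:i+bs, j:j+bs] = tile
    return C

rng = np.random.default_rng(2)
A, B = rng.standard_normal((10, 7)), rng.standard_normal((7, 9))
max_abs_err = np.max(np.abs(blocked_matmul(A, B) - A @ B))
```

Each element of the tile is read and written once per p-panel instead of once per scalar multiply, which is why a very large register file (or a big tile) cuts memory traffic for BLAS-3 operations.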
54

Towards a methodology for the prediction of flame extinction and suppression in three-dimensional normal and microgravity environments

Sutula, Jason Anthony January 2009 (has links)
The probability of a fire occurring in space vehicles and facilities is amplified by the amounts of electrical equipment used. Additionally, the lack of egress for space personnel and irreplaceable resources used aboard space vehicles and facilities require a rapid response of a suppression system and quick extinguishment. Current experimental means that exist to gather data in space vehicles and facilities are limited by both size of the experiment and cost. Thus, more economical solutions must be considered. The aim of this research was to develop a reliable and inexpensive methodology for the prediction of flame extinction and suppression in any three-dimensional environment. This project was split into two parts. Part one included the identification and validation of a computational model for the prediction of gas dispersion. Part two involved the development of an analytical parameter for predicting flame extinction. For model validation, an experimental apparatus was constructed. The experimental apparatus was one-eighth of the volume of electronics racks found aboard typical space facilities. The experimental apparatus allowed for the addition of parallel plates to increase the complexity of the geometry. Data acquisition consisted of gas concentration measurements through planar laser induced fluorescence (PLIF) of nitrogen dioxide and velocity field measurements through particle image velocimetry (PIV). A theoretical framework for a generalized Damköhler number for the prediction of local flame extinction was also developed. Based on complexities in this parameter, the computational code FLUENT was determined to be the ideal means for predicting this quantity. The concentration and velocity field measurements provided validation data for the modelling analysis. Comparison of the modelling analysis with experimental data demonstrated that the FLUENT code adequately predicted the transport of gas to a remote location. 
The FLUENT code was also used to predict gas transport at microgravity conditions. The model demonstrated that buoyancy decreases the time to achieve higher gas concentrations between the parallel plates. As an example of the use of this methodology for a combustion scenario, the model was used to predict flame extinction in a blow-off case (i.e., rapid increase in strain rate) and localized flame extinction (i.e., flame shrinking) in a low-strain dilution case with carbon dioxide over time. The model predictions demonstrated the potential of this methodology with a Damköhler number for the prediction of extinction in three-dimensional environments.
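The Damköhler-number criterion underlying this work compares a flow (strain) timescale with a chemical timescale, predicting local extinction when mixing outpaces chemistry. A toy illustration of that comparison — the timescales and the critical value below are invented for illustration, not the calibrated parameters from the thesis:

```python
def damkohler(strain_rate, chem_time):
    """Da = flow time / chemical time, taking the flow time as ~ 1 / strain rate."""
    return 1.0 / (strain_rate * chem_time)

def extinction_predicted(strain_rate, chem_time, da_crit=1.0):
    """Local extinction is predicted when Da drops below a critical value
    (da_crit here is a hypothetical placeholder)."""
    return damkohler(strain_rate, chem_time) < da_crit

chem_time = 1e-3                                     # s, hypothetical chemical timescale
burning = extinction_predicted(100.0, chem_time)     # Da = 10: chemistry keeps up
blow_off = extinction_predicted(5000.0, chem_time)   # Da = 0.2: strain-induced blow-off
```

The blow-off case in the abstract corresponds to the strain rate rising until Da crosses the critical value; the dilution case lowers Da instead by lengthening the chemical timescale.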
55

An integrated multiprocessor for matrix algorithms / Warren Marwood.

Marwood, Warren January 1994 (has links)
Bibliography: leaves 237-251. / xxi, 251 leaves : ill. ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / The work in this thesis is devoted to the architecture, implementation and performance of a MATRISC processing node. Simulation results for the MATRISC processor are provided which give performance estimates for systems which can be implemented in current technologies. It is concluded that the extremely high performance of MATRISC processors makes possible the construction of parallel computers with processing capabilities in excess of one teraflops. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 1994
56

MatRISC : a RISC multiprocessor for matrix applications / Andrew James Beaumont-Smith.

Beaumont-Smith, Andrew James January 2001 (has links)
"November, 2001" / Errata on back page. / Includes bibliographical references (p. 179-183) / xxii, 193 p. : ill. (some col.), plates (col.) ; 30 cm. / Title page, contents and abstract only. The complete thesis in print form is available from the University Library. / This thesis proposes a highly integrated SOC (system on a chip) matrix-based parallel processor which can be used as a co-processor when integrated into the on-chip cache memory of a microprocessor in a workstation environment. / Thesis (Ph.D.)--University of Adelaide, Dept. of Electrical and Electronic Engineering, 2002
57

Reduced-Basis Output Bound Methods for Parametrized Partial Differential Equations

Prud'homme, C., Rovas, D.V., Veroy, K., Machiels, L., Maday, Y., Patera, Anthony T., Turinici, G. 01 1900 (has links)
We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced-basis approximations -- Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation -- relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures -- methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage -- in which, given a new parameter value, we calculate the output of interest and associated error bound -- depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control. / Singapore-MIT Alliance (SMA)
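The off-line/on-line decoupling in component (iii) hinges on the affine parameter dependence: each affine term of the operator is projected onto W_N once off-line, so the on-line stage only assembles and solves N×N systems, at a cost independent of the truth dimension. A sketch with an invented two-term affine operator A(μ) = A0 + μA1 and a compliant output s(μ) = f·u(μ); all dimensions and parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Truth model: A(mu) = A0 + mu * A1 (SPD, diagonal for simplicity), output s = f . u.
n = 300
A0 = np.diag(np.linspace(1.0, 2.0, n))
A1 = np.diag(np.linspace(0.0, 1.0, n))
f = rng.standard_normal(n)

# Off-line: truth solves at selected parameter points span the reduced space W_N.
S = np.column_stack([np.linalg.solve(A0 + mu * A1, f) for mu in (0.1, 0.6, 1.1, 1.6)])
W, _ = np.linalg.qr(S)

# Off-line: project each affine term once; on-line cost is then independent of n.
A0_N, A1_N = W.T @ A0 @ W, W.T @ A1 @ W
f_N = W.T @ f

def output_online(mu):
    """On-line stage: assemble the N x N operator from precomputed pieces and solve."""
    u_N = np.linalg.solve(A0_N + mu * A1_N, f_N)
    return f_N @ u_N, u_N

mu_test = 0.9
s_N, u_N = output_online(mu_test)
u_truth = np.linalg.solve(A0 + mu_test * A1, f)
s_truth = f @ u_truth
rel_sol_err = np.linalg.norm(u_truth - W @ u_N) / np.linalg.norm(u_truth)
rel_out_err = abs(s_N - s_truth) / abs(s_truth)
```

The paper's a posteriori error bounds (component (ii)) would certify `rel_out_err` on-line without ever touching the truth solve; that machinery is omitted from this sketch.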
58

Non-blocking synchronization and system design

Greenwald, Michael Barry. January 1900 (has links)
Thesis (Ph.D)--Stanford University, 1999. / Title from PDF t.p. (viewed May 9, 2002). "August 1999." "Adminitrivia V1/Prg/19990826"--Metadata.
59

A neural network face detector design using bit-width reduced FPU in FPGA

Lee, Yongsoon 05 February 2007
This thesis implemented a field programmable gate array (FPGA)-based face detector using a neural network (NN), as well as a bit-width reduced floating-point unit (FPU). An NN was used to easily separate face data and non-face data in the face detector. The NN performs time-consuming repetitive calculations; this problem was addressed in this thesis with an FPGA device and a bit-width reduced FPU. A floating-point bit-width reduction provided a significant saving of hardware resources, such as area and power.

The analytical error model, using the maximum relative representation error (MRRE) and the average relative representation error (ARRE), was developed to obtain the maximum and average output errors for the bit-width reduced FPUs. After the development of the analytical error model, the bit-width reduced FPUs and an NN were designed using MATLAB and VHDL. Finally, the analytical (MATLAB) results, along with the experimental (VHDL) results, were compared. The analytical results and the experimental results showed conformity of shape. It was also found that while maintaining 94.1% detection accuracy, a reduction in bit-width from 32 bits to 16 bits reduced the size of memory and arithmetic units by 50%, and the total power consumption by 14.7%.
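The representation-error analysis can be mimicked in software: round values to a reduced number of mantissa bits and compare the worst observed relative error with the analytical bound. The sketch below uses the `frexp` mantissa convention (|m| ∈ [0.5, 1)), for which keeping `mbits` fractional bits bounds the relative error by 2^-mbits; the exact constant shifts by one bit under the implicit-leading-one convention, and the specific FPU formats in the thesis may differ:

```python
import numpy as np

def quantize_mantissa(x, mbits):
    """Round values to 'mbits' mantissa bits (frexp convention), simulating the
    representation of a bit-width reduced FPU."""
    m, e = np.frexp(x)                        # x = m * 2**e with 0.5 <= |m| < 1
    m_q = np.round(m * 2.0**mbits) / 2.0**mbits
    return np.ldexp(m_q, e)

rng = np.random.default_rng(4)
x = rng.uniform(1.0, 2.0, 10000)

mbits = 11                                    # roughly half-precision-sized mantissa
x_q = quantize_mantissa(x, mbits)
max_rel_err = float(np.max(np.abs(x_q - x) / np.abs(x)))
bound = 2.0 ** -mbits                         # analytical worst-case (MRRE-style) bound
```

Sweeping `mbits` downward and propagating the quantized weights through a trained network is exactly the kind of experiment that lets one trade detection accuracy against the 50% area and 14.7% power savings reported above.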
60

A new RISC architecture for high speed data acquisition

Gribble, Donald L. 12 November 1991 (has links)
This thesis describes the design of a RISC architecture for high speed data acquisition. The structure of existing data acquisition systems is first examined. An instruction set is created to allow the data acquisition system to serve a wide variety of applications. The architecture is designed to allow the execution of an instruction each clock cycle. The utility of the RISC system is illustrated by implementing several representative applications. Performance of the system is analyzed and future enhancements discussed. / Graduation date: 1992
