21

An application of photogrammetry in the petrochemical industry

Singels, Wynand
Thesis (MScEng (Mathematical Sciences. Applied Mathematics))--Stellenbosch University, 2008. / When building or improving a petrochemical plant, drawings are used extensively in the design process. However, existing petrochemical plants seldom match their drawings, or the drawings have been lost, creating the need to generate a 3D model of the plant's structure. In this thesis photogrammetry is investigated as a method of generating a digital 3D model of an existing plant. Camera modelling, target extraction and 3D reconstruction are discussed in detail, and a real-world system is investigated.
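As a rough illustration of the 3D reconstruction step mentioned above, the sketch below triangulates a single point from two views using standard linear (DLT) triangulation. The camera matrices and image points are invented for the example and are not taken from the thesis; this is one generic ingredient of such a photogrammetry pipeline, not the system developed there.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D image coordinates (pixels) of the same point in each view.
    Returns the 3D point in Euclidean coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical cameras: identical intrinsics, second camera translated along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0, 1.0])    # a point 5 m in front of the cameras
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]   # project into each view
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_dlt(P1, P2, x1, x2))      # approximately [0.2, -0.1, 5.0]
```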
22

Spherical parameterisation methods for 3D surfaces

Brink, Willie
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2005. / The surface of a 3D model may be digitally represented as a collection of flat polygons in R^3. The collection is known as a polygonal mesh. This representation method has become standard in computer graphics.
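To make the mesh representation concrete, here is a minimal sketch: a toy octahedron stored as a vertex array plus triangle indices, mapped onto the unit sphere by naive central projection. The mesh and the projection are illustrative assumptions only; central projection works just for meshes that are star-shaped about the origin and is not one of the parameterisation methods studied in the thesis.

```python
import numpy as np

# A polygonal (here triangular) mesh: vertex coordinates in R^3 plus integer
# triangles indexing into the vertex array. The toy mesh is a stretched octahedron.
vertices = np.array([
    [ 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
    [ 0.0, 1.5, 0.0], [ 0.0, -1.5, 0.0],
    [ 0.0, 0.0, 2.0], [ 0.0, 0.0, -2.0],
])
triangles = np.array([
    [0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
    [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5],
])

def naive_spherical_parameterisation(verts):
    """Map each vertex onto the unit sphere by central projection.

    This only yields a valid (bijective) parameterisation for meshes that are
    star-shaped with respect to the origin; proper methods relax that restriction.
    """
    norms = np.linalg.norm(verts, axis=1, keepdims=True)
    return verts / norms

sphere_verts = naive_spherical_parameterisation(vertices)
print(np.allclose(np.linalg.norm(sphere_verts, axis=1), 1.0))  # True: all on the sphere
```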
23

The development of an integrated effectiveness model for aerial targets

Tome, Leo D.
Thesis (MScEng (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2007. / During the design or acquisition of missile systems the effectiveness of the system needs to be evaluated. Often actual testing is not possible, and therefore mathematical models need to be constructed and solved with the aid of software. The current simulation model is investigated and verified, and a mathematical model is developed to aid in the design of the detonic payload. The problem is confined to the end-game scenario, with the developed simulation model focusing on the last milliseconds before warhead detonation. The model, which makes use of a ray-tracing methodology, simulates the warhead explosion in the vicinity of a target and calculates the probability of kill for the specific warhead design against the target. Using the data generated by the simulation model, the warhead designer can make the necessary design changes to improve the design. A heuristic method that assists in this design process was developed and is discussed. There is, however, a large population of possible designs. Meta-heuristic methods may be employed to reduce this population and to confine the manual search to a considerably smaller search area. A fuze detection model, as well as the capability to generate truly random intercept scenarios, was developed so as to enable the employment of meta-heuristic search methods. The simulation model, as well as the design optimisation technology, has been successfully incorporated into a Windows-based software package known as EVA (the Effectiveness and Vulnerability Analyser).
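For orientation only, the sketch below shows the bare Monte Carlo bookkeeping behind a probability-of-kill estimate over randomly generated intercept scenarios. The miss-distance and fuzing rules are invented placeholders; the thesis's model instead raytraces warhead fragments against a detailed target description and is integrated into the EVA package.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_intercept(rng):
    """One randomly generated end-game scenario (placeholder physics).

    Returns True if the 'target' is killed. The miss-distance and fuze model
    below are invented for illustration only.
    """
    miss_distance = rng.rayleigh(scale=3.0)   # metres
    fuze_fires = rng.random() < 0.95          # crude fuze reliability
    lethal_radius = 5.0                       # metres
    return fuze_fires and miss_distance < lethal_radius

n = 100_000
kills = sum(simulate_intercept(rng) for _ in range(n))
p_kill = kills / n
stderr = np.sqrt(p_kill * (1 - p_kill) / n)
print(f"estimated Pk = {p_kill:.3f} +/- {1.96 * stderr:.3f}")
```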
24

Criticality of the lower domination parameters of graphs

Coetzer, Audrey
Thesis (MSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2007. / In this thesis we focus on the lower domination parameters of a graph G, denoted π(G) for π ∈ {i, ir, γ}. For each of these parameters, we are interested in characterizing the structure of graphs that are critical when faced with small changes such as vertex-removal, edge-addition and edge-removal. While criticality with respect to independence and domination has been well documented in the literature, many open questions still remain with regard to irredundance. In this thesis we answer some of these questions. First we describe the relationship between transitivity and criticality. We then use this knowledge to determine under which conditions certain classes of graphs are critical. Each of the chosen classes of graphs provides specific examples of different types of criticality. We also formulate necessary conditions for graphs to be ir-critical and ir-edge-critical.
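To make the parameters concrete, the sketch below computes the domination number γ(G) and the independent domination number i(G) of a small graph by brute force. The graph (a 5-cycle) and the exhaustive search are illustrative only and not taken from the thesis; the approach is practical only for very small graphs.

```python
from itertools import combinations

def is_dominating(adj, subset):
    """True if every vertex lies in `subset` or has a neighbour in it."""
    closed = set(subset)
    for v in subset:
        closed.update(adj[v])
    return closed == set(adj)

def is_independent(adj, subset):
    """True if no two vertices of `subset` are adjacent."""
    return all(u not in adj[v] for u, v in combinations(subset, 2))

def smallest_dominating_set_size(adj, require_independent=False):
    """Brute-force gamma(G), or i(G) when require_independent is True."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for subset in combinations(verts, k):
            if is_dominating(adj, subset) and (
                not require_independent or is_independent(adj, subset)
            ):
                return k

# The 5-cycle C5 as an adjacency dictionary.
C5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(smallest_dominating_set_size(C5))                            # gamma(C5) = 2
print(smallest_dominating_set_size(C5, require_independent=True))  # i(C5) = 2
```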
25

Numerical Laplace transformation methods for integrating linear parabolic partial differential equations

Ngounda, Edgard
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2009. / In recent years the Laplace inversion method has emerged as a viable alternative for the numerical solution of PDEs. Effective methods for the numerical inversion are based on the approximation of the Bromwich integral. In this thesis, a numerical study is undertaken to compare the efficiency of the Laplace inversion method with more conventional time-integration methods; in particular, we consider the method of lines based on MATLAB's ODE15s and the Crank-Nicolson method. Our study includes an introductory chapter on the Laplace inversion method. We then proceed with spectral methods for the space discretization, where we introduce the interpolation polynomial and the concept of a differentiation matrix for approximating derivatives of a function. Next, the numerical differentiation formulas (NDFs) implemented in ODE15s, as well as the well-known second-order Crank-Nicolson method, are derived. In the Laplace method, the Bromwich integral is computed with the trapezoidal rule over a hyperbolic contour. Enhancements to the computational efficiency of these methods include the LU and Hessenberg decompositions. In order to compare the three methods, we consider two criteria: the number of linear system solves per unit of accuracy, and the CPU time per unit of accuracy. The numerical results demonstrate that the new method, i.e. the Laplace inversion method, converges at an exponential rate, compared with the linear convergence rates of ODE15s and the Crank-Nicolson method. This exponential convergence leads to high accuracy with only a few linear system solves. Similarly, in terms of computational cost, the results show that the Laplace inversion method is more efficient than ODE15s and the Crank-Nicolson method. Finally, we apply the inversion method, with satisfactory results, to the axial dispersion model and to the heat equation in two dimensions.
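For reference, the quadrature at the heart of this approach can be summarised as follows: the Bromwich integral, a hyperbolic contour parametrisation, and the trapezoidal sum along it. The parameters μ, α, h and N are left generic here; the tuned values used in the thesis are not reproduced (amsmath/amssymb assumed).

```latex
% Bromwich integral and its trapezoidal approximation on a hyperbolic contour.
% \mu > 0, 0 < \alpha < \pi/2, step size h and truncation level N are method parameters.
\begin{align}
  u(t) &= \frac{1}{2\pi i}\int_{\Gamma} e^{st}\,\hat{u}(s)\,\mathrm{d}s,
          \qquad \hat{u}(s) = \int_{0}^{\infty} e^{-st}\,u(t)\,\mathrm{d}t, \\
  s(\ell) &= \mu\bigl(1 + \sin(i\ell - \alpha)\bigr), \qquad
  s'(\ell) = i\mu\cos(i\ell - \alpha), \qquad \ell \in \mathbb{R}, \\
  u(t) &\approx \frac{h}{2\pi i}\sum_{k=-N}^{N}
          e^{s(kh)\,t}\,\hat{u}\bigl(s(kh)\bigr)\,s'(kh).
\end{align}
```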
26

Multivariate refinable functions with emphasis on box splines

Van der Bijl, Rinske
Thesis (MComm (Mathematics))--Stellenbosch University, 2008. / The general purpose of this thesis is the analysis of multivariate refinement equations, with focus on the bivariate case. Since box splines are the main prototype of such equations (just like the cardinal B-splines in the univariate case), we make them our primary subject of discussion throughout. The first two chapters are indeed about the origin and definition of box splines, and try to elaborate on them in sufficient detail so as to build on them in all subsequent chapters, while providing many examples and graphical illustrations to make precise every aspect regarding box splines that will be mentioned. Multivariate refinement equations are ones that take on the form

φ(x) = Σ_{i ∈ Z^n} p_i φ(Mx − i),   (1)

where φ is a real-valued function on R^n, called a refinable function, p = {p_i}_{i ∈ Z^n} is a sequence of real numbers, called a refinement mask, and M is an n × n matrix with integer entries, called a dilation matrix. It is important to note that any such equation is thus simultaneously determined by all three of φ, p and M — and the thesis will try to explain what role each of these plays in a refinement equation. In Chapter 3 we discuss the definition of refinement equations in more detail and elaborate on box splines as our first examples of refinable functions, also showing that one can actually use them to create even more such functions. Also observing from Chapter 2 that box splines demand yet another parameter from us, namely an initial direction matrix D, we focus on the more general instances of these in Chapter 4, while keeping the dilation matrix M fixed. Chapter 5 then in turn deals with the matrix M and tries to generalize some of the results found in Chapter 3 accordingly, keeping the initial direction matrix fixed. Having dealt with the refinement equation itself, we subsequently focus our attention, in Chapter 6, on the support of a (bivariate) refinable function — that is, the part of the xy-grid on which such a function “lives” — and that of a refinement mask, and obtain a few results that are in a sense introductory to our work in the next chapter. Next, we move on to discuss one area in which refinable functions are especially applicable, namely subdivision, which is analyzed in Chapter 7. After giving the basic definitions of subdivision and subdivision convergence, and investigating the “sum rules” in Section 7.1, we prove our main subdivision convergence result in Section 7.2. The chapter is concluded with some examples in Section 7.3. The thesis is concluded, in Chapter 8, with a number of remarks on what has been done and issues that are left for future research.
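The refinement equation (1) is easiest to see in one dimension. The sketch below checks it numerically for the hat function (the cardinal B-spline of order 2), whose mask is {1/2, 1, 1/2} with dilation M = 2; this is a univariate analogue chosen for illustration, not one of the bivariate box splines treated in the thesis.

```python
import numpy as np

def hat(x):
    """Cardinal B-spline of order 2 (hat function) supported on [0, 2]."""
    return np.maximum(0.0, 1.0 - np.abs(x - 1.0))

x = np.linspace(-1.0, 3.0, 2001)

# Refinement equation phi(x) = sum_i p_i * phi(2x - i) with mask p = {1/2, 1, 1/2}.
lhs = hat(x)
rhs = 0.5 * hat(2 * x) + 1.0 * hat(2 * x - 1) + 0.5 * hat(2 * x - 2)

print(np.max(np.abs(lhs - rhs)))  # ~0, confirming the refinement relation
```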
27

Metric reconstruction of multiple rigid objects

De Vaal, Jan Hendrik
Thesis (MScEng (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009. / Engineers struggle to replicate the capabilities of the sophisticated human visual system. This thesis sets out to recover the motion and 3D structure of multiple rigid objects up to a similarity. The motion of these objects is either recorded in a single video sequence, or images of the objects are recorded by multiple, different cameras. We assume a perspective camera model with optional provision for calibration information. The Structure from Motion (SfM) problem is addressed from a matrix factorization point of view. This leads to a reconstruction correct up to a projectivity, which is of little use in itself. Using techniques from camera autocalibration, the projectivity is upgraded to a similarity. The reconstruction is also applied to multiple objects through motion segmentation. The SfM system developed in this thesis is a batch-processing algorithm that requires few frames for a solution and readily accepts images from very different viewpoints. Since a solution can be obtained with just a few frames, it can be used to initialize sequential methods with slower convergence rates, such as the Kalman filter. The SfM system is critically evaluated against an extensive set of motion sequences.
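As a sketch of the factorization idea in its simplest setting, the snippet below runs the affine (Tomasi-Kanade) rank-3 factorization on synthetic, noise-free data. The thesis works with a perspective camera and a projective factorization followed by autocalibration, so this shows only the conceptual core, with made-up cameras and points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: P random 3D points observed by F affine (orthographic) cameras.
F, P = 6, 40
X = rng.normal(size=(3, P))                       # 3D structure
W_rows = []
for _ in range(F):
    A, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthonormal frame
    M = A[:2]                                     # orthographic projection (2x3)
    t = rng.normal(size=(2, 1))                   # image translation
    W_rows.append(M @ X + t)
W = np.vstack(W_rows)                             # 2F x P measurement matrix

# Tomasi-Kanade: subtract per-image centroids, then take a rank-3 factorization.
W0 = W - W.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(W0, full_matrices=False)
motion = U[:, :3] * np.sqrt(s[:3])                # 2F x 3, cameras up to a 3x3 ambiguity
structure = np.sqrt(s[:3])[:, None] * Vt[:3]      # 3 x P, points up to the same ambiguity

# The rank-3 model reproduces the centred measurements to machine precision
# for noise-free data.
print(np.allclose(motion @ structure, W0))        # True
```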
28

Off-line signature verification using classifier ensembles and flexible grid features

Swanepoel, Jacques Philip
Thesis (MSc (Mathematical Sciences))--University of Stellenbosch, 2009. / Thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Applied Mathematics at Stellenbosch University. / In this study we investigate the feasibility of combining an ensemble of eight continuous base classifiers for the purpose of off-line signature verification. This work is mainly inspired by the process of cheque authentication within the banking environment. Each base classifier is constructed by utilising a specific local feature in conjunction with a specific writer-dependent signature modelling technique. The local features considered are pixel density, gravity centre distance, orientation and predominant slant. The modelling techniques considered are dynamic time warping and discrete observation hidden Markov models. In this work we focus on the detection of high-quality (skilled) forgeries. Feature extraction is achieved by superimposing a grid with predefined resolution onto a signature image, whereafter a single local feature is extracted from each signature sub-image corresponding to a specific grid cell. After encoding the signature image into a matrix of local features, each column within said matrix represents a feature vector (observation) within a feature set (observation sequence). In this work we propose a novel flexible grid-based feature extraction technique and show that it outperforms existing rigid grid-based techniques. The performance of each continuous classifier is depicted by a receiver operating characteristic (ROC) curve, where each point in ROC space represents the true positive rate and false positive rate of a threshold-specific discrete classifier. The objective is therefore to develop a combined classifier for which the area under the curve (AUC) is maximised, or for which the equal error rate (EER) is minimised. Two disjoint data sets, in conjunction with a cross-validation protocol, are used for model optimisation and model evaluation. This protocol avoids possible model overfitting, and also scrutinises the generalisation potential of each classifier. During the first optimisation stage, the grid configuration which maximises proficiency is determined for each base classifier. During the second optimisation stage, the most proficient ensemble of optimised base classifiers is determined for several classifier fusion strategies. During both optimisation stages only the optimisation data set is utilised. During evaluation, each optimal classifier ensemble is combined using a specific fusion strategy, then retrained and tested on the separate evaluation data set. We show that the performance of the optimal combined classifiers is significantly better than that of the optimal individual base classifiers. Both score-based and decision-based fusion strategies are investigated, including a novel extension to an existing decision-based fusion strategy. The existing strategy is based on ROC statistics of the base classifiers and maximum likelihood estimation. We show that the proposed elitist maximum attainable ROC-based strategy outperforms the existing one.
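The evaluation bookkeeping described above (ROC curve, AUC and EER) can be sketched as follows on synthetic genuine and forgery scores; the score distributions are invented, and none of the DTW/HMM classifiers or fusion strategies from the thesis appear here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic verification scores: higher means "more likely genuine".
genuine = rng.normal(loc=1.0, scale=0.5, size=500)   # positive class
forgery = rng.normal(loc=0.0, scale=0.5, size=500)   # negative class (skilled forgeries)

scores = np.concatenate([genuine, forgery])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(forgery)])

# Sweep a threshold over all observed scores to trace the ROC curve.
thresholds = np.sort(scores)[::-1]
tpr = np.array([(scores[labels == 1] >= th).mean() for th in thresholds])
fpr = np.array([(scores[labels == 0] >= th).mean() for th in thresholds])

# Trapezoidal area under the ROC curve, and the equal-error-rate point.
auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
fnr = 1.0 - tpr
eer_index = np.argmin(np.abs(fpr - fnr))     # threshold where FPR is approximately FNR
print(f"AUC ~ {auc:.3f}, EER ~ {0.5 * (fpr[eer_index] + fnr[eer_index]):.3f}")
```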
29

Powered addition as modelling technique for flow processes

De Wet, Pierre
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010. / The interpretation of collected data – and the compilation of predictive equations to represent its general trend – is aided immensely by its graphical representation. While predictive equations are, by and large, more accurate and convenient for use in applications than graphs, the latter are often preferable since they visually illustrate deviations in the data, thereby giving an indication of the reliability and range of validity of the equation. A combination of these two tools – a graph for demonstration and an equation for use – is desirable to ensure optimal understanding. Often, however, the functional dependencies of the dependent variable are only known for large and small values of the independent variable, the solutions for intermediate quantities being obscure for various reasons (e.g. the narrow band within which the transition from one regime to the other occurs, inadequate knowledge of the physics in this area, etc.). The limiting solutions may be regarded as asymptotic, and the powered addition, to a power s, of such asymptotes f0 and f∞ leads to a single correlating equation that is applicable over the entire domain of the independent variable. This procedure circumvents the introduction of ad hoc curve-fitting measures for the different regions and the subsequent, unwanted jumps in piecewise-fitted correlative equations for the dependent variable(s). Approaches to successfully implement the technique for different combinations of asymptotic conditions are discussed. The aforementioned method of powered addition is applied to experimental data, and the similarities and discrepancies with literature and analytical models are discussed, the underlying motivation being the aspiration towards establishing a sound modelling framework for analytical and computational predictive measures. The proposed procedure proves highly useful for summarising and interpreting experimental data in an elegant and simple manner.
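A minimal numerical sketch of powered addition follows, with assumed asymptotes f0(x) = 1 for small x and f∞(x) = √x for large x; the shifting exponent s controls how sharply the correlating equation moves from one asymptote to the other.

```python
import numpy as np

def powered_addition(f0, finf, s):
    """Combine two asymptotes into one correlating function,
    f(x) = (f0(x)**s + finf(x)**s)**(1/s)."""
    return lambda x: (f0(x)**s + finf(x)**s)**(1.0 / s)

f0 = lambda x: np.ones_like(x)     # hypothetical small-x asymptote
finf = lambda x: np.sqrt(x)        # hypothetical large-x asymptote
f = powered_addition(f0, finf, s=4.0)

x = np.array([1e-3, 1e-2, 1e-1, 1.0, 1e2, 1e4])
print(f(x))          # ~1 for small x, ~sqrt(x) for large x
print(np.sqrt(x))    # large-x asymptote, for comparison
```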
30

Modelling of single phase diffusive transport in porous environments

Du Plessis, Elsa
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010. / Macroscopic diffusion through porous media is considered in systems where this process does not occur along with, or induce, bulk convective flow of the diffusing species. The diffusion coefficient present in the governing equations of such macroscopic diffusion is unique to a pair of species in a binary system. This coefficient may be determined experimentally, but such experimentation must be carried out for every different pair of species. Taking this into consideration, a deterministic pore-scale model is proposed to predict the effective diffusivity of homogeneous and unconsolidated porous media, which ultimately depends solely on the porosity of the media. The approach taken is to model a porous medium as either a fibre bed or an array of granules through which the diffusive process is assumed to be homogeneous and transversely isotropic. The fibre-bed and granular models may be viewed as two-dimensional and three-dimensional models respectively, and may also be combined to form a weighted-average model which adjusts to differing diffusive behaviour at different porosities. The model is validated through comparison with published analytical and numerical models as well as experimental data available in the literature. A numerical program is implemented to generate further data for various arrangements of homogeneous, anisotropic and transversely isotropic porous media. The numerical results are validated against an analytical model from the literature, which proves to be inapplicable to a specific case; the weighted-average analytical model is proposed for this case instead. The results of this study indicate that the weighted-average analytical model is in good agreement with the numerical and experimental data and as such may be applied directly to a binary system of which the porosity is known in order to predict the effective diffusivity.
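Purely to make "effective diffusivity" concrete, the sketch below evaluates a common textbook porosity correlation (the Bruggeman relation); this is an assumed, generic estimate and not the fibre-bed or granular model developed in the thesis.

```python
def effective_diffusivity_bruggeman(porosity, d_bulk):
    """Textbook Bruggeman estimate D_eff = eps**1.5 * D for unconsolidated beds
    of roughly spherical particles; shown only to make the quantity concrete,
    not the model developed in the thesis."""
    return porosity**1.5 * d_bulk

# Example: oxygen diffusing in nitrogen, D ~ 2.0e-5 m^2/s, bed porosity 0.4.
print(effective_diffusivity_bruggeman(0.4, 2.0e-5))   # ~5.1e-6 m^2/s
```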
