About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Efficient computation of shifted linear systems of equations with application to PDEs

Eneyew, Eyaya Birara
Thesis (MSc)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: In several numerical approaches to PDEs, shifted linear systems of the form (zI - A)x = b need to be solved for several values of the complex scalar z. Often, these linear systems are large and sparse. This thesis investigates efficient numerical methods for these systems as they arise from a contour integral approximation to PDEs, and compares these methods with direct solvers.

In the first part, we present three model PDEs and discuss numerical approaches to solving them. We use the first problem to demonstrate computations with a dense matrix, the second to demonstrate computations with a sparse symmetric matrix, and the third for a sparse but nonsymmetric matrix. To solve the model PDEs numerically we apply two space discretization methods, namely the finite difference method and the Chebyshev collocation method. The contour integral method mentioned above is used to integrate with respect to the time variable.

In the second part, we study a Hessenberg reduction method for solving shifted linear systems with a dense matrix and present a numerical comparison with the built-in direct linear system solver in SciPy. Since both are direct methods, they give the same result in the absence of roundoff errors. However, we find that the Hessenberg reduction method is more efficient in CPU time than the direct solver. As an application we solve a one-dimensional version of the heat equation.

In the third part, we present efficient techniques for solving shifted systems with a sparse matrix by Krylov subspace methods. Because of their shift-invariance property, Krylov methods allow one to obtain approximate solutions for all values of the parameter by generating a single approximation space. Krylov methods applied to these linear systems are generally slowly convergent, so preconditioning is necessary to improve convergence. The use of shift-invert preconditioning is discussed and numerical comparisons with a direct sparse solver are presented. As an application we solve a two-dimensional version of the heat equation with and without a convection term. Our numerical experiments show that the preconditioned Krylov methods are efficient in both computational time and memory as compared to the direct sparse solver.
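The Hessenberg idea from the second part can be sketched in a few lines of NumPy/SciPy (an illustrative sketch, not the thesis code): reduce A = QHQ^T once, after which every shifted system (zI - A)x = b needs only a solve with the Hessenberg matrix zI - H.

```python
import numpy as np
from scipy.linalg import hessenberg, solve

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# One-time O(n^3) reduction A = Q H Q^T, with H upper Hessenberg, Q orthogonal.
H, Q = hessenberg(A, calc_q=True)

for z in (1.0 + 1.0j, 2.0 - 0.5j, 3.0 + 0.25j):
    # (zI - A)x = b  <=>  Q (zI - H) Q^T x = b: one Hessenberg solve per shift.
    # (scipy's generic solve is used for brevity; a structured solver would
    # exploit the Hessenberg form to make each shifted solve O(n^2).)
    y = solve(z * np.eye(n) - H, Q.T @ b)
    x = Q @ y
    assert np.allclose((z * np.eye(n) - A) @ x, b)
```

The reduction is paid for once; each additional shift z reuses H and Q, which is where the CPU-time advantage over repeated dense factorizations comes from.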
32

An explicit finite difference method for analyzing hazardous rock mass

Basson, Gysbert
Thesis (MSc)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: FLAC3D is a three-dimensional explicit finite difference program for solving a variety of solid mechanics problems, both linear and non-linear. The development of the algorithm and its initial implementation were performed by Itasca Consulting Group Inc. The main idea of the algorithm is to discretise the domain of interest into a Lagrangian grid where each cell represents an element of the material. Each cell can then deform according to a prescribed stress/strain law together with the equations of motion. An in-depth study of the algorithm was performed and implemented in Java. During the implementation, it was observed that the type of boundary conditions typically used has a major influence on the accuracy of the results, especially when boundaries are close to regions with large stress variations, such as in mining excavations. To improve the accuracy of the algorithm, a new type of boundary condition was developed in which the FLAC3D domain is embedded in a linear elastic material, named the Boundary Node Shell (BNS). Using the BNS shows a significant improvement in results close to excavations. The FLAC algorithm is also quite amenable to parallelization, and a multi-threaded version that makes use of multiple Central Processing Unit (CPU) cores was developed to optimize the speed of the algorithm. The final outcome is new non-commercial Java source code (JFLAC) which includes the Boundary Node Shell (BNS) and shared-memory parallelism over and above the basic FLAC3D algorithm.
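The explicit time-marching that FLAC3D and JFLAC perform per cell can be illustrated with the simplest explicit finite-difference scheme: a 1D wave equation marched with leapfrog. This is a toy stand-in for the actual stress/strain update loop, not JFLAC code; at Courant number 1 the scheme reproduces the exact standing-wave solution on the grid.

```python
import numpy as np

# Explicit leapfrog for the 1D wave equation u_tt = c^2 u_xx on [0, 1],
# fixed ends -- a minimal analogue of an explicit solid-mechanics update loop.
c, nx = 1.0, 101
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = dx / c                      # Courant number 1: the stability limit

u_prev = np.sin(np.pi * x)                       # u(x, 0)
u = np.sin(np.pi * x) * np.cos(np.pi * c * dt)   # u(x, dt), exact first step

steps = int(round(2.0 / dt))     # one full period of the standing wave
for _ in range(steps - 1):
    u_next = np.zeros_like(u)    # boundary values stay clamped at zero
    u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + (c * dt / dx) ** 2 * (
        u[2:] - 2 * u[1:-1] + u[:-2]
    )
    u_prev, u = u, u_next
# after a full period u should return to the initial profile sin(pi x)
```

The same pattern, generalized to a 3D Lagrangian grid with a constitutive law replacing the simple Laplacian, is the core of the explicit method described above.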
33

Reinforcement learning : theory, methods and application to decision support systems

Mouton, Hildegarde Suzanne
Thesis (MSc (Applied Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: In this dissertation we study the machine learning subfield of Reinforcement Learning (RL). After developing a coherent background, we apply a Monte Carlo (MC) control algorithm with exploring starts (MCES), as well as an off-policy Temporal-Difference (TD) learning control algorithm, Q-learning, to a simplified version of the Weapon Assignment (WA) problem. For the MCES control algorithm, a discount parameter of τ = 1 is used. This gives very promising results when applied to 7 × 7 grids, as well as 71 × 71 grids. The same discount parameter cannot be applied to the Q-learning algorithm, as it causes the Q-values to diverge. We take a greedy approach, setting ε = 0, and vary the learning rate (α) and the discount parameter (τ). Experimentation shows that the best results are found with α set to 0.1 and τ constrained to the region 0.4 ≤ τ ≤ 0.7. The MC control algorithm with exploring starts gives promising results when applied to the WA problem. It performs significantly better than the off-policy TD algorithm, Q-learning, even though it is almost twice as slow. The modern battlefield is a fast-paced, information-rich environment, where discovery of intent, situation awareness and the rapid evolution of concepts of operation and doctrine are critical success factors. Combining the techniques investigated and tested in this work with other techniques in Artificial Intelligence (AI) and modern computational techniques may hold the key to solving some of the problems we now face in warfare.
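The Q-learning update at the heart of the TD approach, Q(s, a) ← Q(s, a) + α(r + τ max Q(s', ·) − Q(s, a)), can be illustrated on a toy chain world. This is purely illustrative: the thesis works on 7 × 7 and 71 × 71 WA grids and sets ε = 0, whereas the toy below uses a small exploration rate so the agent can discover the goal at all.

```python
import numpy as np

# Tabular Q-learning on a 5-state chain: action 0 = left, 1 = right,
# reward 1 only on reaching the rightmost state. alpha is the learning
# rate and tau the discount parameter, following the thesis notation.
rng = np.random.default_rng(1)
n_states, n_actions = 5, 2
alpha, tau, eps = 0.1, 0.6, 0.2
Q = np.zeros((n_states, n_actions))

for _ in range(2000):
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # off-policy TD update: bootstrap on the greedy value of s_next
        Q[s, a] += alpha * (r + tau * np.max(Q[s_next]) - Q[s, a])
        s = s_next

greedy_policy = np.argmax(Q, axis=1)   # learns to go right in every state
```

With τ too close to 1 and unbounded episode returns the Q-values can grow without settling, which is the divergence issue the abstract notes for τ = 1.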
34

Off-line signature verification using ensembles of local Radon transform-based HMMs

Panton, Mark Stuart
Thesis (MSc)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: An off-line signature verification system attempts to authenticate the identity of an individual by examining his/her handwritten signature, after it has been successfully extracted from, for example, a cheque, a debit or credit card transaction slip, or any other legal document. The questioned signature is typically compared to a model trained from known positive samples, after which the system attempts to label said signature as genuine or fraudulent. Classifier fusion is the process of combining individual classifiers in order to construct a single classifier that is more accurate, albeit computationally more complex, than its constituent parts. A combined classifier therefore consists of an ensemble of base classifiers that are combined using a specific fusion strategy.

In this dissertation a novel off-line signature verification system, using a multi-hypothesis approach and classifier fusion, is proposed. Each base classifier is constructed from a hidden Markov model (HMM) that is trained on features extracted from local regions of the signature (local features), as well as from the signature as a whole (global features). To achieve this, each signature is zoned into a number of overlapping circular retinas, from which said features are extracted by implementing the discrete Radon transform. A global retina, which encompasses the entire signature, is also considered. Since the proposed system attempts to detect high-quality (skilled) forgeries, it is unreasonable to assume that samples of these forgeries will be available for each new writer (client) enrolled into the system. The system is therefore constrained in the sense that only positive training samples, obtained from each writer during enrolment, are available. It is however reasonable to assume that both positive and negative samples are available for a representative subset of so-called guinea-pig writers (for example, bank employees). These signatures constitute a convenient optimisation set that is used to select the most proficient ensemble. A signature that is claimed to belong to a legitimate client (member of the general public) is therefore rejected or accepted based on the majority-vote decision of the base classifiers within the most proficient ensemble.

When evaluated on a data set containing high-quality imitations, the inclusion of local features, together with classifier combination, significantly increases system performance. An equal error rate of 8.6% is achieved, which compares favourably to an equal error rate of 12.9% (an improvement of 33.3%) when only global features are considered. Since there is no standard international off-line signature verification data set available, most systems proposed in the literature are evaluated on data sets that differ from the one employed in this dissertation. A direct comparison of results is therefore not possible. However, since the proposed system utilises significantly different features and/or modelling techniques than those employed in the above-mentioned systems, it is very likely that a superior combined system can be obtained by combining the proposed system with any of them. Furthermore, when evaluated on the same data set, the proposed system is shown to be significantly superior to three other systems recently proposed in the literature.
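Two of the building blocks above — Radon-projection features and majority-vote fusion — can be sketched as follows. This is a rough illustration only: it approximates the discrete Radon transform by rotate-and-sum rather than reproducing the thesis's implementation, and the vote fuses hypothetical binary accept/reject decisions.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_features(img, angles):
    """Crude discrete Radon transform: for each angle, rotate the image and
    sum along columns to obtain a projection profile (one feature vector)."""
    return np.stack(
        [rotate(img, a, reshape=False, order=1).sum(axis=0) for a in angles]
    )

def majority_vote(votes):
    """Fuse base-classifier accept (1) / reject (0) decisions by majority."""
    return int(np.sum(votes) > len(votes) / 2)
```

In the system described above, projection profiles like these (extracted per retina) feed the HMM base classifiers, and the ensemble's individual accept/reject decisions are fused by exactly this kind of majority vote.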
35

Design of an automated decision support system for scheduling tasks in a generalized job-shop

Bester, Margarete Joan
Thesis (MSc)--University of Stellenbosch, 2006. / Please refer to the full text for the abstract.
36

Implementation and evaluation of two prediction techniques for the Lorenz time series

Huddlestone, Grant E
Thesis (MSc)--Stellenbosch University, 2003. / ENGLISH ABSTRACT: This thesis implements and evaluates two prediction techniques used to forecast deterministic chaotic time series. For a large number of such techniques, the reconstruction of the phase space attractor associated with the time series is required. Embedding is presented as the means of reconstructing the attractor from limited data. Methods for obtaining the minimal embedding dimension (via the false-neighbour heuristic) and the optimal time delay (via the average mutual information method) are discussed. The first prediction algorithm discussed is based on work by Sauer, and involves applying the singular value decomposition to data obtained from the embedding of the time series being predicted. The second prediction algorithm is based on neural networks. A specific architecture suited to the prediction of deterministic chaotic time series, namely the time-dependent neural network architecture, is discussed and implemented. Adaptations to the backpropagation training algorithm for use with time-dependent neural networks are also presented.

Both algorithms are evaluated by means of predictions made for the well-known Lorenz time series. Different embedding and algorithm-specific parameters are used to obtain predicted time series. Actual values corresponding to the predictions are obtained from the Lorenz time series, which aid in evaluating prediction accuracy. The predicted time series are evaluated in terms of two criteria: prediction accuracy and qualitative behavioural accuracy. Behavioural accuracy refers to the ability of the algorithm to simulate qualitative features of the time series being predicted. It is shown that, for both algorithms, choosing an embedding dimension greater than the minimum embedding dimension obtained from the false-neighbour heuristic produces greater prediction accuracy. For the neural network algorithm, values of the embedding dimension greater than the minimum embedding dimension satisfy the behavioural criterion adequately, as expected. Sauer's algorithm has the greatest behavioural accuracy for embedding dimensions smaller than the minimal embedding dimension. In terms of the time delay, it is shown that both algorithms have the greatest prediction accuracy for values of the time delay in a small interval around the optimal time delay. The neural network algorithm is shown to have the greatest behavioural accuracy for time delays close to the optimal time delay, and Sauer's algorithm has the best behavioural accuracy for small values of the time delay. Matlab code is presented for both algorithms.
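The delay-embedding step on which both algorithms rest is compact in code. The thesis presents Matlab code; the following is an equivalent Python sketch of the core reconstruction.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding: map a scalar series x into dim-dimensional
    delay vectors (x[t], x[t + tau], ..., x[t + (dim - 1) * tau]),
    one vector per row."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
```

For example, `delay_embed(np.arange(10), 3, 2)` yields the vectors [0, 2, 4], [1, 3, 5], and so on; applied to the Lorenz x-component with the dimension and delay chosen as above, the rows trace out the reconstructed attractor.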
37

Grain regression analysis

Sullwald, Wichard
Thesis (MSc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Grain regression analysis forms an essential part of solid rocket motor simulation. In this thesis a numerical grain regression analysis module is developed as an alternative to cumbersome and time-consuming analytical methods. The surface regression is performed by the level-set method, a numerical interface advancement scheme. A novel approach is proposed for the integration of the surface area and volume of a numerical interface, as defined implicitly in a level-set framework, by means of Monte Carlo integration. The grain regression module is directly coupled to a quasi-1D internal ballistics solver in an on-line fashion, in order to take into account the effects of spatially varying burn-rate distributions. A multi-timescale approach is proposed for the direct coupling of the two solvers.
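The Monte Carlo integration of an implicitly defined region can be sketched for a simple interface. Here a disc stands in for a real grain cross-section, and the only assumptions are a known bounding box and uniform sampling over it.

```python
import numpy as np

# Monte Carlo estimate of the volume (area, in 2D) enclosed by an implicit
# level-set interface {phi = 0}: sample the bounding box uniformly and count
# the fraction of points with phi < 0. Illustrative only -- actual grain
# geometries are far more complex than this disc.
rng = np.random.default_rng(0)
r = 0.8
phi = lambda p: p[:, 0] ** 2 + p[:, 1] ** 2 - r ** 2   # disc of radius r

n = 200_000
pts = rng.uniform(-1.0, 1.0, size=(n, 2))   # bounding box [-1, 1]^2, area 4
inside = phi(pts) < 0.0
area_mc = 4.0 * inside.mean()
# exact area is pi * r^2; the Monte Carlo error shrinks like 1 / sqrt(n)
```

The appeal for level-set grain regression is that the estimator needs only point evaluations of φ, so it works unchanged as the interface advances and changes topology.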
38

Long-term tracking of multiple interacting pedestrians using a single camera

Keaikitse, Advice Seiphemo
Thesis (MSc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Object detection and tracking are important components of many computer vision applications, including automated surveillance. Automated surveillance attempts to solve the challenges associated with closed-circuit camera systems. These include monitoring large numbers of cameras and the associated labour costs, and issues related to targeted surveillance. Object detection is an important step of a surveillance system and must overcome challenges such as changes in object appearance and illumination, dynamic background objects like flickering screens, and shadows. Our system uses Gaussian mixture models, a background subtraction method, to detect moving objects. Tracking is challenging because measurements from the object detection stage are not labelled and could come from false targets. We use multiple hypothesis tracking to solve this measurement-origin problem. Practical long-term tracking of objects requires re-identification capabilities to deal with challenges arising from tracking failure and occlusions. In our system each tracked object is assigned a one-class support vector machine (OCSVM), which learns the appearance of that object. The OCSVM is trained online using HSV colour features. Therefore, objects that were occluded or left the scene can be re-identified and their tracks extended. Standard, publicly available data sets are used for testing. The performance of the system is measured against ground truth using the Jaccard similarity index, the track length and the normalized mean square error. We find that the system performs well.
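The background-subtraction step can be illustrated with a single Gaussian per pixel — a deliberate simplification of the Gaussian *mixture* model the system actually uses, but with the same classify-then-update structure.

```python
import numpy as np

def background_subtract(mean, var, frame, alpha=0.05, k=2.5):
    """Single-Gaussian-per-pixel background model. A pixel is foreground
    when it lies more than k standard deviations from the background mean;
    the running mean and variance are updated only for background pixels."""
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    bg = ~fg
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * ((frame[bg] - mean[bg]) ** 2 - var[bg])
    return fg
```

A full mixture model keeps several (mean, variance, weight) triples per pixel, which is what lets it absorb dynamic background such as flickering screens; the update rule per matched component is the same exponential smoothing shown here.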
39

Thermal and colour data fusion for people detection and tracking

Joubert, Pierre
Thesis (MSc)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: In this thesiswe approach the problem of tracking multiple people individually in a video sequence. Automatic object detection and tracking is non-trivial as humans have complex and mostly unpredictable movements, and there are sensor noise and measurement uncertainties present. We consider traditional object detection methods and decide to use thermal data for the detection step. This choice is supported by the robustness of thermal data compared to colour data in unfavourable lighting conditions and in surveillance applications. A drawback of using thermal data is that we lose colour information, since the sensor interprets the heat emission of the body rather than visible light. We incorporate a colour sensor which is used to build features for each detected object. These features are used to help determine correspondences in detected objects over time. A problem with traditional blob detection algorithms, which typically consist of background subtraction followed by connected-component labelling, is that objects can appear to split or merge, or disappear in a few frames. We decide to add ‘dummy’ blobs in an effort to counteract these problems. We refrain from making any hard decisions with respect to the blob correspondences over time, and rather let the system decide which correspondences are more probable. Furthermore, we find that the traditional Markovian approach of determining correspondences between detected blobs in the current time step and only the previous time step can lead to unwanted behaviour. We rather consider a sequence of time steps and optimize the tracking across them. We build a composite correspondence model and weigh each correspondence according to similarity (correlation) in object features. All possible tracks are determined through this model and a likelihood is calculated for each. 
Using the best scoring tracks we then label all the detections and use this labelling as measurement input for a tracking filter. We find that the window tracking approach shows promise even though the data we use for testing is of poor quality and noisy. The system struggles with cluttered scenes and when many dummy nodes are present. Nonetheless our findings act as a proof of concept and we discuss a few future improvements that can be considered.
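The windowed correspondence idea can be illustrated with a toy scorer: enumerate every possible track through a short window of per-frame detections and rank tracks by the product of feature similarities along them. This sketch is illustrative only and is not the composite correspondence model of the thesis; exhaustive enumeration is feasible only for tiny windows, and cosine similarity here stands in for whatever feature correlation the system actually uses.

```python
import itertools
import math

def feature_similarity(f1, f2):
    """Normalised correlation (cosine similarity) between two feature vectors."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def best_track(window):
    """Exhaustively score every track through a window of detections.

    window: list of frames, each a list of feature vectors (one per blob).
    Returns (best likelihood, best track as one blob index per frame).
    """
    best_score, best_path = -1.0, None
    for track in itertools.product(*[range(len(frame)) for frame in window]):
        # A track's likelihood is the product of similarities between
        # consecutive detections it links.
        score = 1.0
        for t in range(len(window) - 1):
            score *= feature_similarity(window[t][track[t]],
                                        window[t + 1][track[t + 1]])
        if score > best_score:
            best_score, best_path = score, track
    return best_score, best_path
```

In the thesis's setting the best-scoring tracks would then provide the labelling used as measurement input for the tracking filter; a practical system would prune the combinatorial explosion rather than enumerate.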

Applying the MDCT to image compression

Muller, Rikus 03 1900 (has links)
Thesis (DSc (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009. / The replacement of the standard discrete cosine transform (DCT) of JPEG with the windowed modified DCT (MDCT) is investigated to determine whether improvements in numerical quality can be achieved. To this end, we employ an existing algorithm for optimal quantisation, for which we also propose improvements. This involves the modelling and prediction of quantisation tables to initialise the algorithm, a strategy that is also thoroughly tested. Furthermore, the effects of various window functions on the coding results are investigated, and we find that improved quality can indeed be achieved by modifying JPEG in this fashion.
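For reference, a windowed MDCT with 50% overlap can be sketched in a few lines. This is a generic textbook construction, not the thesis's implementation: it uses a sine window (which satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1), direct O(N^2) transforms rather than an FFT-based one, and an IMDCT scaling of 2/N, which is one of several common conventions.

```python
import math

def mdct(frame):
    """MDCT of a length-2N frame -> N coefficients."""
    N = len(frame) // 2
    return [sum(frame[n] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(coeffs):
    """Inverse MDCT: N coefficients -> length-2N block (scaled by 2/N)."""
    N = len(coeffs)
    return [2.0 / N * sum(coeffs[k] * math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                          for k in range(N))
            for n in range(2 * N)]

def sine_window(two_N):
    """Sine window; satisfies the Princen-Bradley condition."""
    return [math.sin(math.pi / two_N * (n + 0.5)) for n in range(two_N)]

def mdct_roundtrip(signal, N):
    """Analysis/synthesis with 50% overlap-add; time-domain aliasing
    cancellation (TDAC) between adjacent blocks recovers the signal."""
    w = sine_window(2 * N)
    padded = [0.0] * N + list(signal) + [0.0] * N
    out = [0.0] * len(padded)
    for start in range(0, len(padded) - N, N):
        frame = [padded[start + n] * w[n] for n in range(2 * N)]  # analysis window
        block = imdct(mdct(frame))
        for n in range(2 * N):
            out[start + n] += block[n] * w[n]  # synthesis window + overlap-add
    return out[N:N + len(signal)]
```

Each individual block is lossy (time-domain aliasing), but overlap-adding adjacent windowed blocks cancels the aliasing exactly, which is what makes the MDCT attractive as a drop-in replacement for JPEG's blockwise DCT: it avoids hard block boundaries without increasing the coefficient count.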
