221

Noise Aware Bayesian Parameter Estimation in Bioprocesses: Using Neural Network Surrogate Models with Non-Uniform Data Sampling

Weir, Lauren January 2024 (has links)
This thesis demonstrates a parameter estimation technique for bioprocesses that uses the measurement noise in experimental data to place credible intervals on parameter estimates, information that is of potential use in prediction, robust control, and optimization. To obtain these estimates, the work implements Bayesian inference using nested sampling and presents an approach for developing neural network (NN) based surrogate models. An NN structure is proposed to address the challenges associated with non-uniform sampling of experimental measurements. The resulting surrogate model is used within a nested sampling algorithm that draws candidate parameter values from the parameter space and uses the NN to compute the model output required by the likelihood function, which is based on the joint probability distribution of the noise of the output variables. The method is illustrated first on simulated data and then on experimental data from a Sartorius fed-batch bioprocess. The results demonstrate the feasibility of the proposed technique for rapid parameter estimation in bioprocesses. / Thesis / Master of Applied Science (MASc) / Bioprocesses require models that can be developed quickly for rapid production of desired pharmaceuticals. Parameter estimation is necessary for these models, especially first-principles models, and generating parameter estimates with confidence intervals is important for model-based control. Challenges that must be addressed during parameter estimation are the presence of non-uniform sampling and measurement noise in experimental data. This thesis demonstrates a parameter estimation method that generates estimates with credible intervals by incorporating the measurement noise in experimental data, while employing a dynamic neural network surrogate model that can process non-uniformly sampled data. The proposed technique implements Bayesian inference using nested sampling and was tested on both simulated and real experimental fed-batch data.
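The abstract gives only a high-level description of the likelihood construction; as a rough sketch (not the thesis's implementation), the snippet below shows the two callables a nested sampler typically consumes: a Gaussian log-likelihood built from the measurement-noise model and a prior transform. The surrogate here is a hypothetical stand-in for the trained NN, and all numbers are illustrative assumptions.

```python
import numpy as np

# Hypothetical stand-in for a trained NN surrogate: maps parameters to model
# outputs at the (possibly non-uniform) measurement times.
def surrogate(theta, t_meas):
    k, y0 = theta
    return y0 * np.exp(-k * t_meas)          # e.g. a simple first-order decay

t_meas = np.array([0.0, 0.5, 1.7, 2.1, 4.0])  # non-uniform sampling times
y_meas = np.array([1.00, 0.62, 0.18, 0.13, 0.02])
sigma = 0.05                                  # assumed measurement-noise std

def log_likelihood(theta):
    """Gaussian log-likelihood built from the measurement-noise model."""
    resid = y_meas - surrogate(theta, t_meas)
    return -0.5 * np.sum((resid / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

def prior_transform(u):
    """Map unit-cube samples to the parameter space (uniform priors assumed)."""
    k = 5.0 * u[0]              # k  in [0, 5]
    y0 = 0.5 + 1.0 * u[1]       # y0 in [0.5, 1.5]
    return np.array([k, y0])

# These two callables are what a nested sampler (e.g. dynesty.NestedSampler)
# would use to produce posterior samples and credible intervals on theta.
```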
222

Moment estimators involving the second and third sample moments for the negative binomial distribution

Mah, Valiant Wai-Yung January 1965 (has links)
This thesis takes two separate paths to solve the same problem, namely that of obtaining an estimator of a parameter of the negative binomial distribution whose bias and variance can be shown to be "better" than the corresponding properties of the simple moment estimator, the estimator most often used in practice. We first consider two moment estimators involving the third sample moment. For a restricted range of the parameters and of the sample size, neither of these estimators is an improvement over the simple moment estimator; in fact, over the range considered, the bias and variance of the simple moment estimator were always smaller. We then consider an estimator defined as the simple moment estimator over part of the sample space and as a constant elsewhere. This was done primarily to remove a "singularity" in the moment estimator, which was thought to cause the large bias and variance that appeared for certain values of the parameters. For n = 100, the bias and variance were approximated over a range of interest of the parameters, and the results indicate an improvement over the simple moment estimator. / Ph. D.
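The third-moment estimators studied in the thesis are not specified in the abstract, so only the simple moment estimator they are compared against is sketched below, assuming the parameterisation mean = k(1-p)/p and variance = k(1-p)/p^2. The denominator (sample variance minus sample mean) is the "singularity" referred to above: when the sample variance barely exceeds the sample mean, the estimate of k blows up.

```python
import numpy as np

def nb_simple_moment_estimator(x):
    """Simple (first two sample moments) estimator for the negative binomial,
    parameterised by size k and success probability p, where
    mean = k(1-p)/p and variance = k(1-p)/p**2."""
    m = x.mean()
    s2 = x.var(ddof=1)
    if s2 <= m:                 # the 'singularity' region: k_hat blows up
        return None             # a cut-off or constant would be used here
    p_hat = m / s2
    k_hat = m * m / (s2 - m)
    return k_hat, p_hat

rng = np.random.default_rng(0)
sample = rng.negative_binomial(n=5, p=0.4, size=100)   # n plays the role of k
print(nb_simple_moment_estimator(sample))
```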
223

Spectral Analysis of Nonuniformly Sampled Data and Applications

Babu, Prabhu January 2012 (has links)
Signal acquisition, signal reconstruction and spectral analysis are three of the most important steps in signal processing, and they are found in almost all modern signal processing hardware. In most such hardware, the signal of interest is sampled at uniform intervals satisfying conditions such as the Nyquist rate. In some cases, however, the privilege of uniformly sampled data is lost because of constraints on the hardware resources. This thesis addresses the important problem of signal reconstruction and spectral analysis from nonuniformly sampled data and presents a variety of methods, which are tested via numerical experiments on both artificial and real-life data sets. The thesis starts with a brief review of the methods available in the literature for signal reconstruction and spectral analysis from nonuniformly sampled data. The methods discussed are classified into two broad categories, dense and sparse, according to the kind of spectra for which they are applicable. Under dense spectral methods, the main contribution of the thesis is a non-parametric approach named LIMES, which recovers a smooth spectrum from nonuniformly sampled data and also gives an estimate of the covariance matrix. Under sparse methods, the two main contributions are SPICE and LIKES, both user-parameter-free sparse estimation methods applicable to line spectral estimation. Other important contributions are extensions of SPICE and LIKES to multivariate time series and array processing models, and a solution to the grid selection problem in sparse estimation of spectral-line parameters. The third and final part of the thesis applies the methods discussed to radial velocity data analysis for exoplanet detection; an application based on Sudoku, which is related to sparse parameter estimation, is also discussed.
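LIMES, SPICE and LIKES are not reconstructed here. Purely as a baseline illustration of spectral analysis on nonuniformly sampled data, the sketch below uses the classical Lomb-Scargle periodogram, a different, standard technique, to locate a spectral line from irregular samples; the signal and frequency grid are illustrative assumptions.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 10.0, 200))        # non-uniform sample times
y = np.sin(2 * np.pi * 1.3 * t) + 0.3 * rng.standard_normal(t.size)
y -= y.mean()                                    # remove the mean before the periodogram

ang_freqs = 2 * np.pi * np.linspace(0.1, 3.0, 500)   # angular frequency grid
pgram = lombscargle(t, y, ang_freqs)

f_peak = ang_freqs[np.argmax(pgram)] / (2 * np.pi)
print(f"dominant frequency ~ {f_peak:.2f} (true value 1.3)")
```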
224

Parameters affecting accuracy and reproducibility of sedimentary particle size analysis of clays

Van der Merwe, J. J. 04 1900 (has links)
Thesis (MSc)--Stellenbosch University, 2004 / ENGLISH ABSTRACT: The main aim of this study is to establish a standard procedure for all sedimentary particle size analysis methods, specifically for clay minerals and mixtures thereof. Not only will it improve accuracy and reproducibility during clay size analysis, it will also ensure comparability between different operators. As a start, all the apparatus-related parameters that can affect accuracy and reproducibility were determined for the apparatus used, viz. the Sedigraph 5000D. Thereafter, these parameters were kept constant, and the effects of potential material-related parameters were investigated one by one. First to be investigated were the parameters relating specifically to sample preparation: grinding intensity, chemical dissolution of cementing materials, duration of prior soaking, salt content, centrifugal washing with polar organic liquids, deflocculant type and concentration, the effect of pH, ultrasonic time, and stirring during ultrasonic treatment. Then, the influence on accuracy and reproducibility of the physical and chemical parameters related to the suspension was determined: the use of the viscosity and density of water, instead of those of the suspension liquid, to calibrate the apparatus; hydrolysis of the deflocculant with suspension ageing; and the effect of solid concentration on hindered settling. During this investigation a novel method was developed to enable faster and more accurate pycnometric density determinations. Next, the unique characteristics of clays that can influence the results of sedimentary particle size analyses were examined. Serious accuracy problems are encountered in the analyses of some clay types abundantly found in nature, viz. the smectites and mixed-layered clay minerals. Owing to their swelling in water and variations in the number of their crystal layers, they experience unpredictable changes in particle size, caused by the following external factors: clay type, humidity, type of exchange cation, electrolyte concentration, clay concentration, pH, deflocculant type and concentration, pressure history of the swell-clay suspension, and ageing of the suspension. The effect of each of these on the accuracy and reproducibility of the sedimentary particle size analysis of clays is investigated in detail. Another problem that influences the accuracy of the sedimentary methods is that, owing to swelling, the densities of smectites and mixed-layered clays change by varying degrees when suspended in water. It is, however, impossible to determine the density of a swell-clay pycnometrically, since it absorbs part of the water used for its volume determination. To solve this problem, a novel method was devised to calculate swell-clay density, making use of existing Monte Carlo simulations of the swelling mechanism of montmorillonite. In all sedimentary methods, an average clay density is normally used to calculate the particle size distribution of clay mixtures. However, if there is a large enough difference between the calculated average density and that of a component, inaccurate results will be recorded. The magnitude of this effect was investigated for a few self-made clay mixtures consisting of different proportions of kaolinite, illite, and montmorillonite.
Based on all the above results, a practical approach to, and a standard methodology for, all the sedimentary methods of particle size analysis of clay minerals are presented. In addition, a condensed summary is provided in table form, giving the magnitudes of the errors associated with each of the parameters examined.
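The Sedigraph's internal equations are not given in the abstract; as a hedged illustration of why the assumed clay density is so critical in any sedimentation-based size analysis, the sketch below evaluates the Stokes settling diameter for two assumed particle densities. All numerical values are illustrative assumptions, not data from the study.

```python
import numpy as np

def stokes_diameter(h, t, rho_s, rho_f=998.0, eta=1.0e-3, g=9.81):
    """Equivalent spherical (Stokes) diameter in metres for a particle that
    settles a height h (m) in time t (s) through a fluid of density rho_f
    (kg/m^3) and viscosity eta (Pa.s); rho_s is the particle density."""
    v = h / t                                   # settling velocity, m/s
    return np.sqrt(18.0 * eta * v / ((rho_s - rho_f) * g))

# Sensitivity to the assumed clay density (a central point of the abstract):
for rho_s in (2200.0, 2650.0):                  # e.g. swollen vs. non-swelling clay
    d = stokes_diameter(h=0.05, t=3600.0, rho_s=rho_s)
    print(f"rho_s = {rho_s:.0f} kg/m^3  ->  d = {d * 1e6:.2f} um")
```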
225

The Gossip Process in Wireless Networks

Κατσάνος, Κωνσταντίνος 06 December 2013 (has links)
In recent years, wireless networks have appeared in ever more aspects of daily life. This has led to strong research activity around various types of wireless networks, covering not only their design and the development of protocols but also other applications such as parameter estimation. In this thesis we study gossip algorithms, which provide a distributed approach to the problem of parameter estimation in a network. More specifically, in contrast with classical methods that rely on a central node with high computational power to solve the estimation problem for the parameter of interest, gossip algorithms remove the notion of a central node, and the estimation is based on the continuing exchange of information between the network nodes. Although gossip algorithms rely on suboptimal parameter estimation techniques based on recursive adaptive algorithms, the simulations carried out in this thesis show that they successfully solve the problems they are applied to. Finally, the gossip procedure is applied to the problem of estimating the position of a target moving within the region of a wireless network.
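The recursive adaptive estimators used in the thesis are not detailed in the abstract; the sketch below shows only the simplest member of the gossip family, randomized pairwise averaging, in which neighbouring nodes repeatedly average their noisy local measurements and every node converges to the network-wide mean without any central node. The ring topology and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true = 5.0
n_nodes = 20
x = theta_true + rng.standard_normal(n_nodes)      # local noisy measurements

# Ring topology: node i can talk to node i+1 (any connected graph works)
edges = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]

for _ in range(2000):                               # pairwise gossip rounds
    i, j = edges[rng.integers(len(edges))]
    avg = 0.5 * (x[i] + x[j])
    x[i] = x[j] = avg                               # both nodes keep the average

# Averaging preserves the sum, so every node converges to the sample mean.
print(f"node 0 estimate: {x[0]:.3f}   (sample mean of all measurements: {x.mean():.3f})")
```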
226

Efficient modelling of a wind turbine system for parameter estimation applications

Bekker, Johannes Cornelius 03 1900 (has links)
Thesis (MScEng)--Stellenbosch University, 2012 / ENGLISH ABSTRACT: Wind energy is a highly topical subject, both locally and internationally. It is one of the most rapidly growing renewable energy sources, with installed capacity doubling every three years. South Africa's installed wind energy currently accounts for only 10 MW of the 197 GW worldwide installed capacity. With a 10 TWh renewable energy production target set for 2013 by the South African government, renewable energy projects have gained momentum in recent years. This target, together with data from case studies and reports on resource planning and technical requirements, shows that South Africa is well positioned for the implementation of wind energy sources. All this development in the local wind generation market creates a need for local knowledge in the field of wind energy, as well as a need to efficiently model and analyse wind turbine systems and grid interactions for local operating conditions. Although the relevant model topologies are well established, obtaining or deriving appropriate parameter values from first principles remains problematic. Some parameters are also dependent on operating conditions and are best determined from site measurements using parameter estimation methodologies. One of the objectives of this project is to investigate whether the system parameter values can be obtained by performing parameter estimation on the model of a wind turbine system. The models used for parameter estimation require fast simulation times. Therefore, basic C-code S-function models of the wind turbine system components, i.e., the wind turbine blade, gearbox and generator, were developed and compiled as a Simulink library, and these library components were then used for the parameter estimation process. The developed models, as well as the complete wind turbine system model, were validated and their performance evaluated by comparing them to existing Simulink block models. These models all proved to be accurate and all showed reductions in simulation time. The principle of performing parameter estimation on C-code S-function models is proven by case studies performed on the individual models and the complete wind turbine system. The power coefficient matrix parameter values of the individual turbine blade model were estimated with 100% accuracy for the excited elements. The individual gearbox parameter values were all estimated accurately, with errors below 2.5%. The parameter values of the individual generator models were estimated accurately for the ABC model, with errors below 4%, and less accurately for the DQ model, with errors below 13%. The estimation results obtained for the complete wind turbine system model showed that the parameter values of the gearbox model and generator model were estimated accurately when the system model was excited through a step in angular velocity and steps in the amplitude of the stator voltages, respectively. A final estimation showed that a combination of gearbox and generator parameter values was accurately estimated when the model was excited through both a step in angular velocity and steps in the amplitude of the stator voltages.
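The C-code S-function models and the Simulink estimation workflow themselves are not reproduced here. As a hedged stand-alone analogue of estimating drivetrain parameters from a step excitation, the sketch below fits the inertia and damping of a one-mass model to simulated step-response data with a generic least-squares routine; the model and all values are assumptions, not the thesis's.

```python
import numpy as np
from scipy.optimize import least_squares

def omega_model(params, t, T=100.0):
    """Angular speed of a one-mass drivetrain J*domega/dt = T - b*omega,
    starting from rest, for a torque step of height T at t = 0."""
    J, b = params
    return (T / b) * (1.0 - np.exp(-b * t / J))

t = np.linspace(0.0, 20.0, 200)
true_params = (5.0, 0.8)                           # assumed "true" J and b
rng = np.random.default_rng(3)
omega_meas = omega_model(true_params, t) + 2.0 * rng.standard_normal(t.size)

res = least_squares(lambda p: omega_model(p, t) - omega_meas,
                    x0=[1.0, 0.1], bounds=([1e-3, 1e-3], [np.inf, np.inf]))
print("estimated J, b:", res.x)
```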
227

The Integrated Distributed Hydrological Model, ECOFLOW- a Tool for Catchment Management

Sokrut, Nikolay January 2005 (has links)
In order to find effective measures that meet the requirements for proper groundwater quality and quantity management, there is a need to develop a Decision Support System (DSS) and a suitable modelling tool. Central components of a DSS for groundwater management are thought to be models for surface- and groundwater flow and solute transport. The most feasible approach seems to be integration of available mathematical models, together with development of a strategy for evaluating the propagation of uncertainty through these models. The physically distributed hydrological model ECOMAG has been integrated with the groundwater model MODFLOW to form a new integrated watershed modelling system, ECOFLOW, which has been developed and embedded in ArcView. The multiple-scale modelling principle combines a more detailed representation of the groundwater flow conditions with lumped watershed modelling, characterised by simplicity in model use and a minimised number of model parameters. A Bayesian statistical downscaling procedure has also been developed and implemented in the model; it downscales the parameters used in the model and reduces the level of uncertainty in the modelling results. The integrated model ECOFLOW has been applied to the Vemmenhög catchment in southern Sweden and the Örsundaån catchment in central Sweden. The applications demonstrated that the model is capable of simulating, with reasonable accuracy, the hydrological processes within both the agriculturally dominated watershed (Vemmenhög) and the forest-dominated catchment area (Örsundaån). The results show that the ECOFLOW model adequately predicts the stream and groundwater flow distribution in these watersheds, and that it can be used as a tool for simulating surface- and groundwater processes on both local and regional scales. A chemical module, ECOMAG-N, has been created and tested on the Vemmenhög watershed, which has a highly dense drainage system and intensive fertilisation practices. The chemical module appeared to provide reliable estimates of spatial nitrate loads in the watershed, and the observed and simulated nitrogen concentration values were in close agreement at most of the reference points. Proposed future research includes further development of the model for contaminant transport in surface- and groundwater for point and non-point source contamination modelling, and integration of the ECOFLOW model system into a planned Decision Support System.
228

Novel Methods for T2 Estimation Using Highly Undersampled Radial MRI Data

Huang, Chuan January 2011 (has links)
The work presented in this dissertation involves the development of parametric magnetic resonance imaging (MRI) techniques that can be used in a clinical setting. In the first chapter, an introduction to basic magnetic resonance physics is given, covering the source of tissue magnetization, the origin of the detectable signal, the relaxation mechanisms, and the imaging principles. In the second chapter, T₂ estimation, the main parametric MRI technique addressed in this work, is introduced, and the problem associated with T₂ estimation from highly undersampled fast spin-echo (FSE) data is presented. In Chapter 3, a novel model-based algorithm with linearization by principal component analysis (REPCOM) is described. Based on simulations, physical phantom and in vivo data, the proposed algorithm is shown to produce accurate and stable T₂ estimates. In Chapter 4, the concept of indirect echoes associated with the acquisition of FSE data is introduced. Indirect echo correction using the extended phase graph approach is first studied for standard sampled data, and a novel reconstruction algorithm (SERENADE) is then presented for the reconstruction of decay curves with indirect echoes from highly undersampled data. The technique is evaluated using simulations, physical phantom and in vivo data; decay curves with indirect echoes are shown to be accurately recovered. Chapter 5 is dedicated to correcting the partial volume effect (PVE) in T₂ estimation. For small lesions within a background tissue, PVE affects T₂ estimation, which in turn affects lesion classification. A novel joint fitting algorithm is proposed and compared to conventional fitting algorithms using fully sampled spin-echo (SE) images. The proposed algorithm is shown to be more accurate, more robust, and less sensitive to how the region of interest is drawn than the conventional fitting algorithms. Because the acquisition of fully sampled SE images is long, the technique is combined with a thick refocusing slice approach so that undersampled FSE data can be used and the acquisition time reduced to a breath hold (~20 s). The final chapter summarizes the results presented in the dissertation and discusses areas for future work.
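REPCOM and SERENADE are not reconstructed here; as a hedged illustration of the underlying parametric model, the sketch below fits the mono-exponential decay S(TE) = S0*exp(-TE/T2) to a fully sampled multi-echo signal, ignoring indirect echoes, undersampling and partial volume effects. Echo times and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def t2_decay(te, s0, t2):
    """Mono-exponential spin-echo signal model S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

te = np.arange(10.0, 330.0, 10.0)                 # echo times in ms (assumed)
rng = np.random.default_rng(4)
signal = t2_decay(te, s0=1000.0, t2=80.0) + 10.0 * rng.standard_normal(te.size)

popt, pcov = curve_fit(t2_decay, te, signal, p0=[signal[0], 50.0])
print(f"estimated S0 = {popt[0]:.1f}, T2 = {popt[1]:.1f} ms "
      f"(+/- {np.sqrt(pcov[1, 1]):.1f} ms)")
```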
229

Parameter Estimation Techniques for Nonlinear Dynamic Models with Limited Data, Process Disturbances and Modeling Errors

Karimi, Hadiseh 23 December 2013 (has links)
In this thesis, appropriate statistical methods are studied for overcoming two types of problems that occur during parameter estimation in chemical engineering systems. The first problem is having too many parameters to estimate from the limited available data, assuming that the model structure is correct, while the second involves estimating unmeasured disturbances, assuming that enough data are available for parameter estimation. In the first part of this thesis, a model is developed to predict rates of undesirable reactions during the finishing stage of nylon 66 production. This model has too many parameters to estimate (56 unknown parameters), and there are not enough data to reliably estimate all of them. Statistical techniques are used to determine that 43 of the 56 parameters should be estimated, and the proposed model matches the data well. In the second part of this thesis, techniques are proposed for estimating parameters in Stochastic Differential Equations (SDEs). SDEs are fundamental dynamic models that take into account process disturbances and model mismatch. Three new approximate maximum likelihood methods are developed for estimating parameters in SDE models. First, an Approximate Expectation Maximization (AEM) algorithm is developed for estimating model parameters and process disturbance intensities when the measurement noise variance is known. Then, a Fully-Laplace Approximation Expectation Maximization (FLAEM) algorithm is proposed for simultaneous estimation of model parameters, process disturbance intensities and measurement noise variances in nonlinear SDEs. Finally, a Laplace Approximation Maximum Likelihood Estimation (LAMLE) algorithm is developed for estimating measurement noise variances along with model parameters and disturbance intensities in nonlinear SDEs. The effectiveness of the proposed algorithms is compared with a maximum-likelihood based method. For the CSTR examples studied, the proposed algorithms provide more accurate parameter estimates, and the performance of LAMLE is shown to be superior to that of FLAEM. SDE models and the associated parameter estimates obtained using the proposed techniques will help engineers who implement on-line state estimation and process monitoring schemes. / Thesis (Ph.D., Chemical Engineering) -- Queen's University, 2013-12-23
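The AEM, FLAEM and LAMLE algorithms are not reconstructed here. As a hedged baseline only, the sketch below evaluates the standard Euler (Gaussian transition-density) approximate likelihood for a discretely observed scalar SDE with no measurement noise, a common maximum-likelihood reference point for SDE parameter estimation; the Ornstein-Uhlenbeck drift and all values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Discretely observed Ornstein-Uhlenbeck-type SDE (assumed model):
#   dX = theta1 * (theta2 - X) dt + sigma dW
dt = 0.1
rng = np.random.default_rng(5)
theta_true, sigma_true = (1.5, 2.0), 0.5
x = np.empty(500)
x[0] = 0.0
for k in range(499):                               # Euler-Maruyama simulation
    drift = theta_true[0] * (theta_true[1] - x[k])
    x[k + 1] = x[k] + drift * dt + sigma_true * np.sqrt(dt) * rng.standard_normal()

def neg_log_lik(p):
    """Euler (Gaussian) transition-density approximation of the likelihood."""
    th1, th2, sig = p
    mean = x[:-1] + th1 * (th2 - x[:-1]) * dt
    var = sig ** 2 * dt
    r = x[1:] - mean
    return 0.5 * np.sum(r ** 2 / var + np.log(2 * np.pi * var))

res = minimize(neg_log_lik, x0=[1.0, 0.0, 1.0],
               bounds=[(1e-3, None), (None, None), (1e-3, None)])
print("estimated theta1, theta2, sigma:", res.x)
```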
230

Post-manoeuvre and online parameter estimation for manned and unmanned aircraft

Jameson, Pierre-Daniel January 2013 (has links)
Parameterised analytical models that describe the trimmed in-flight behaviour of classical aircraft have been studied and are widely accepted by the flight dynamics community. The primary role of aircraft parameter estimation is therefore to quantify the parameter values that make up these models and define the physical relationship of the air vehicle with respect to its local environment. A priori empirical predictions based on aircraft design parameters also exist, and these provide a useful means of generating preliminary values for the aircraft behaviour at the design stage. At present, however, the only feasible means of proving and validating these parameter values remains extracting them through physical experimentation, either in a wind tunnel or from flight test. With the advancement of UAVs, and in particular smaller UAVs (less than 1 m span), the ability to fly the full-scale vehicle and generate flight test data presents an exciting opportunity. Furthermore, UAV testing lends itself well to rapid prototyping with the use of COTS equipment. Real-time system identification was first used to monitor highly unstable aircraft behaviour in non-linear flight regimes while expanding the operational flight envelope. Recent development has focused on creating self-healing control systems, such as adaptive re-configurable control laws, to provide robustness against airframe damage, control surface failures or in-flight icing. In the case of UAVs, real-time identification would facilitate rapid prototyping, especially in low-cost projects with constrained development time. In a small-UAV scenario, flight trials could potentially be focused on dynamic model validation, with the prior verification step done in the simulation environment. Furthermore, the ability to check the estimated derivatives while the aircraft is flying would enable detection of poor data caused by deficient excitation manoeuvres or atmospheric turbulence, so that appropriate action could be taken while all the equipment and personnel are in place. This thesis describes the development of algorithms for performing online system identification for UAVs with minimal analyst intervention. Issues pertinent to UAV applications were the type of excitation manoeuvres needed and the instrumentation required to record air data. Throughout the research, algorithm development was undertaken using an in-house Simulink model of the Aerosonde UAV, which provided a rapid and flexible means of generating simulated data for analysis. In addition, the algorithms were tested with real flight test data acquired from the Cranfield University Jetstream-31 aircraft G-NFLA during its routine operation as a flying classroom. Two estimation methods were principally considered, the maximum likelihood and least squares estimators, with the aforementioned found to be best suited to the proposed requirements. In the time-domain analysis, reconstruction of the velocity state derivatives Ẇ and V̇, needed for the SPPO and DR modes respectively, provided more statistically reliable parameter estimates without the need for an α- or β-vane. By formulating the least squares method in the frequency domain, data issues regarding the removal of bias and trim offsets could be more easily addressed while obtaining timely and reliable parameter estimates. Finally, the importance of using an appropriate input to excite the UAV dynamics, allowing the vehicle to show its characteristics, must be stressed.
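The thesis's frequency-domain formulation and state-derivative reconstruction are not reproduced here. As a hedged time-domain illustration of the equation-error least-squares idea, the sketch below regresses a synthetic pitch-acceleration signal on short-period states and elevator input to recover assumed stability derivatives; the model, signals and values are all illustrative assumptions.

```python
import numpy as np

# Equation-error (least squares) estimation of pitch-moment derivatives for a
# linearised short-period model:  q_dot = M_alpha*alpha + M_q*q + M_de*delta_e
rng = np.random.default_rng(6)
N, dt = 1000, 0.02
time = np.arange(N) * dt
alpha = 0.05 * np.sin(2 * np.pi * 0.5 * time)               # synthetic AoA history
q = 0.10 * np.cos(2 * np.pi * 0.5 * time)                    # synthetic pitch rate
delta_e = 0.02 * np.sign(np.sin(2 * np.pi * 0.2 * time))     # square-wave elevator input

M_true = np.array([-8.0, -2.5, -12.0])               # assumed M_alpha, M_q, M_de
X = np.column_stack([alpha, q, delta_e])             # regressor matrix
q_dot = X @ M_true + 0.05 * rng.standard_normal(N)   # "measured" pitch acceleration

M_hat, *_ = np.linalg.lstsq(X, q_dot, rcond=None)
print("estimated M_alpha, M_q, M_de:", M_hat)
```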
