21

Specification testing of GARCH regression models

Shadat, Wasel Bin January 2011 (has links)
This thesis analyses, derives and evaluates specification tests of Generalized Auto-Regressive Conditional Heteroskedasticity (GARCH) regression models, both univariate and multivariate. Of particular interest, in the first half of the thesis, is the derivation of robust test procedures designed to assess the Constant Conditional Correlation (CCC) assumption often employed in multivariate GARCH (MGARCH) models. New asymptotically valid conditional moment tests are proposed which are simple to construct, easily implemented after full or partial Quasi-Maximum Likelihood (QML) estimation, and robust to non-normality. In doing so, a non-normality-robust version of Tse's (2000) LM test is provided. In addition, new and easily programmable expressions for the expected Hessian matrix associated with the QMLE are obtained. The finite-sample performances of these tests are investigated in an extensive Monte Carlo study, programmed in GAUSS. In the second half of the thesis, attention is devoted to nonparametric testing of GARCH regression models. First, simultaneous consistent nonparametric tests of the conditional mean and conditional variance structure of univariate GARCH models are considered. The approach is developed from the Integrated Generalized Spectral (IGS) and Projected Integrated Conditional Moment (PICM) procedures proposed recently by Escanciano (2008 and 2009, respectively) for time series models. Extending Escanciano (2008), a new and simple wild bootstrap procedure is proposed to implement these tests. A Monte Carlo study compares the performance of these nonparametric tests and four parametric tests of nonlinearity and/or asymmetry under a wide range of alternatives. Although the proposed bootstrap scheme does not strictly satisfy the asymptotic requirements, the simulation results demonstrate its ability to control the size extremely well, and the power comparison therefore seems justified.
Furthermore, this suggests there may exist weaker conditions under which the tests are implementable. The simulation exercise also presents new evidence of the effect of conditional mean misspecification on various parametric tests of conditional variance. The testing procedures are also illustrated using the S&P 500 data. Finally, the PICM and IGS approaches are extended to the MGARCH case. The procedure is illustrated with a bivariate CCC-GARCH model, but can be generalized to other MGARCH specifications. A simulation exercise shows that these tests have satisfactory size and are robust to non-normality. The marginal mean and variance tests have excellent power; however, the marginal covariance tests lack power for some alternatives.
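The univariate setting above can be illustrated with a minimal sketch (not the thesis's actual test statistics; all parameter values are made up): simulate a GARCH(1,1) process and check a simple conditional moment condition — the lag-1 autocorrelation of the squared standardized residuals — which should be close to zero when the variance equation is correctly specified.

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    """Simulate a GARCH(1,1) series with standard normal innovations."""
    rng = np.random.default_rng(seed)
    h = np.empty(n)                        # conditional variances
    y = np.empty(n)                        # observations
    h[0] = omega / (1.0 - alpha - beta)    # unconditional variance
    y[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
        y[t] = np.sqrt(h[t]) * rng.standard_normal()
    return y, h

def moment_statistic(y, h, lag=1):
    """Sample autocorrelation of the squared standardized residuals at `lag`;
    approximately N(0, 1/n) under a correctly specified variance equation."""
    z2 = (y / np.sqrt(h)) ** 2
    z2 = z2 - z2.mean()
    return np.sum(z2[lag:] * z2[:-lag]) / np.sum(z2 ** 2)

y, h = simulate_garch11(2000)
r1 = moment_statistic(y, h, lag=1)   # small under the true model
```

The conditional moment, IGS and PICM tests of the thesis are considerably more elaborate; this only shows the moment-condition idea they build on.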
22

Ανάλυση παλινδρόμησης με χρήση ποιοτικών ερμηνευτικών μεταβλητών : διερεύνηση της επίδρασης του φύλου στις επιδόσεις μαθητών του γυμνασίου / Regression analysis with qualitative explanatory variables: an investigation of the effect of gender on the performance of junior high school students

Μαλλή, Ουρανία 05 March 2014 (has links)
Σε πολλά προβλήματα υπάρχει η ανάγκη να ασχοληθούμε ταυτόχρονα με την μελέτη δύο μεταβλητών ώστε να δούμε αν υπάρχει αλληλεξάρτηση μεταξύ τους, καθώς και να εντοπίσουμε την σχέση που εκφράζει αυτήν την αλληλεξάρτηση. Η σχέση αυτή ονομάζεται εξίσωση παλινδρόμησης και περιγράφει τον τρόπο αλληλεξάρτησης των μεταβλητών, τον κανόνα δηλαδή που διαμορφώνει τις τιμές της μιας μεταβλητής από τις τιμές της άλλης. Η πρώτη θα ονομάζεται ανεξάρτητη (ερμηνευτική) και η δεύτερη που οι τιμές της θα καθορίζονται από αυτές της πρώτης εξαρτημένη (ερμηνευόμενη). Κάποιες φορές οι ερμηνευτικές μεταβλητές που χρησιμοποιούμε είναι ποιοτικές και υπάρχουν τρόποι ποσοτικού προσδιορισμού των κατηγοριών μιας ποιοτικής μεταβλητής. Η μελέτη αυτή έχει ως στόχο την διερεύνηση της σχέσης του φύλου του μαθητή με τις επιδόσεις του στα μαθηματικά, ώστε να αναλυθούν οι διαφορές που εμφανίζονται μεταξύ των δυο φύλων. Για τον σκοπό αυτό θα χρησιμοποιηθούν γραμμικά μοντέλα, όπου όμως η ερμηνευτική μεταβλητή (το φύλο) είναι ποιοτική. Θα γίνει ποσοτικός προσδιορισμός των κατηγοριών της με την χρήση δύο τιμών: 0 αν είναι κορίτσι, 1 αν είναι αγόρι. Ο πληθυσμός της έρευνας αποτελείται από μαθητές γυμνασίου της ορεινής Αχαΐας που άρχισαν και τελείωσαν το γυμνάσιο στο συγκεκριμένο σχολείο. Για κάθε μαθητή έχει καταγραφεί από την καρτέλα του για κάθε τάξη η επίδοση στα μαθηματικά, στη γλώσσα, η συνολική επίδοση και το φύλο. / In many problems there is a need to study two variables simultaneously, in order to see whether there is interdependence between them and to identify the equation that expresses this interdependence. This equation is called the regression equation and describes the way the variables are interdependent, that is, the rule that determines the values of one variable from those of the other. The first variable will be called independent (explanatory) and the second, whose values are determined by those of the first, will be called dependent (explained).
In some cases the explanatory variables we use are qualitative, and there are ways of quantifying the categories of a qualitative variable. This study aims to investigate the relationship between the sex of a student and his or her performance in mathematics, in order to analyze the differences between the two sexes. For this purpose linear models will be used, where the explanatory variable (sex) is qualitative. Its categories will be quantified using two values: 0 for a girl and 1 for a boy. The survey population consists of students of a junior high school in mountainous Achaia who started and finished their studies at this particular school. For each student and each grade, performance in mathematics, performance in language, overall performance and gender were recorded from the student's record card.
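The 0/1 coding described above can be sketched in a few lines (hypothetical scores, not the survey data): with an intercept and a single gender dummy, ordinary least squares reproduces the two group means exactly, which is what makes the dummy coefficient directly interpretable as the boy-girl difference.

```python
import numpy as np

# Hypothetical mathematics scores for 12 pupils; gender coded 0 = girl, 1 = boy.
gender = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1], dtype=float)
maths = np.array([16, 14, 18, 15, 17, 13, 12, 15, 11, 14, 13, 16], dtype=float)

# Design matrix: intercept column plus the gender dummy; fit by least squares.
X = np.column_stack([np.ones_like(gender), gender])
beta, *_ = np.linalg.lstsq(X, maths, rcond=None)

# beta[0] equals the girls' mean score; beta[1] equals boys' mean minus girls' mean.
girls_mean = maths[gender == 0].mean()
boys_mean = maths[gender == 1].mean()
```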
23

Modélisation d’un parc de machines pour la surveillance : application aux composants en centrale nucléaire / Modelling a fleet of machines for their diagnosis: application to nuclear power plant components

Ankoud, Farah 12 December 2011 (has links)
Cette thèse porte sur la conception de méthodes de surveillance de système à partir de données collectées sur des composants de conceptions identiques exploités par plusieurs processus. Nous nous sommes intéressés aux approches de diagnostic sans modèle a priori et plus particulièrement à l'élaboration des modèles de bon fonctionnement des composants à partir des données collectées sur le parc. Nous avons ainsi abordé ce problème comme un problème d'apprentissage multi-tâches qui consiste à élaborer conjointement les modèles de chaque composant, l'hypothèse sous-jacente étant que ces modèles partagent des parties communes. Dans le deuxième chapitre, on considère, dans un premier temps, des modèles linéaires de type multi-entrées/mono-sortie, ayant des structures a priori connues. Dans une première approche, après une phase d'analyse des modèles obtenus par régression linéaire pour les machines prises indépendamment les unes des autres, on identifie leurs parties communes, puis on procède à une nouvelle estimation des coefficients des modèles pour tenir compte des parties communes. Dans une seconde approche, on identifie simultanément les coefficients des modèles ainsi que leurs parties communes. Dans un deuxième temps, on cherche à obtenir directement les relations de redondance existant entre les variables mesurées par l'ACP. On s'affranchit alors des hypothèses sur la connaissance des structures des modèles et on prend en compte la présence d'erreurs sur l'ensemble des variables. Dans un troisième chapitre, une étude de la discernabilité des modèles est réalisée. Il s'agit de déterminer les domaines de variation des variables d'entrée garantissant la discernabilité des sorties des modèles. Ce problème d'inversion ensembliste est résolu soit en utilisant des pavés circonscrits aux différents domaines soit une approximation par pavage de ces domaines. 
Finalement, une application des approches proposées est réalisée sur des simulateurs d'échangeurs thermiques / This thesis deals with the design of diagnosis systems using data collected on identical machines working under different conditions. We are interested in fault diagnosis methods that require no a priori model, and in modelling a fleet of machines using the data collected on all the machines. The problem can thus be formulated as a multi-task learning problem in which models of the different machines are constructed simultaneously, these models being assumed to share some common parts. In the second chapter, we first consider multiple-input/single-output linear models with known structures. A first approach consists in analyzing the linear regression models obtained for each machine independently of the others in order to identify their common parts; using this knowledge, new models for the machines are then estimated. The second approach consists in identifying simultaneously the coefficients of the models and their common parts. Second, the redundancy relations existing between the measured variables are sought directly using PCA. In this way, no hypothesis about the structure of the models describing the normal behavior of each machine is needed, and errors on all the variables are taken into account, since the method does not distinguish between input and output variables. In the third chapter, a study of the discernibility of the model outputs is carried out. The problem consists in determining the ranges of variation of the input variables that guarantee discernible model outputs. This set-inversion problem is solved using either boxes circumscribing the different domains or a paving approximation of these domains. Finally, the multi-task modelling approaches are applied to heat exchanger simulators.
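The first multi-task idea — jointly estimating per-machine linear models that share some coefficients — can be sketched as one stacked least-squares problem (synthetic data; the shared/specific split is assumed known here, whereas the thesis also identifies it from the data).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Two "machines" assumed to share the coefficient on input x1 but not on x2.
X1 = rng.normal(size=(n, 2))
X2 = rng.normal(size=(n, 2))
y1 = 2.0 * X1[:, 0] + 1.0 * X1[:, 1] + 0.1 * rng.normal(size=n)
y2 = 2.0 * X2[:, 0] - 3.0 * X2[:, 1] + 0.1 * rng.normal(size=n)

# Joint design: one column for the shared x1 coefficient,
# one machine-specific column each for x2.
Z = np.zeros((2 * n, 3))
Z[:n, 0], Z[:n, 1] = X1[:, 0], X1[:, 1]
Z[n:, 0], Z[n:, 2] = X2[:, 0], X2[:, 1]
y = np.concatenate([y1, y2])

# theta = [shared coef on x1, machine-1 coef on x2, machine-2 coef on x2]
theta, *_ = np.linalg.lstsq(Z, y, rcond=None)
```

Stacking the data this way pools both machines' observations when estimating the common part, which is the statistical benefit of modelling the fleet jointly.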
24

Génération de grilles de type volumes finis : adaptation à un modèle structural, pétrophysique et dynamique / Generation of finite-volume grids: adaptation to a structural, petrophysical and dynamic model

Merland, Romain 18 April 2013 (has links)
Cet ouvrage aborde la génération de grilles de Voronoï sous contrainte pour réduire les erreurs liées à la géométrie des cellules lors de la simulation réservoir. Les points de Voronoï sont optimisés en minimisant des fonctions objectif correspondant à différentes contraintes géométriques. L'originalité de cette approche est de pouvoir combiner les contraintes simultanément : - la qualité des cellules, en plaçant les points de Voronoï aux barycentres des cellules ; - le raffinement local, en fonction d'un champ de densité ρ, correspondant à la perméabilité, la vitesse ou la vorticité ; - l'anisotropie des cellules, en fonction d'un champ de matrice M contenant les trois vecteurs principaux de l'anisotropie, dont l'un est défini par le vecteur vitesse ou par le gradient stratigraphique ; - l'orientation des faces des cellules, en fonction d'un champ de matrice M contenant les trois vecteurs orthogonaux aux faces, dont l'un est défini par le vecteur vitesse ; - la conformité aux surfaces du modèle structural, failles et horizons ; - l'alignement des points de Voronoï le long des puits. La qualité des grilles générées est appréciée à partir de critères géométriques et de résultats de simulation comparés à des grilles fines de référence. Les résultats indiquent une amélioration de la géométrie, qui n'est pas systématiquement suivie d'une amélioration des résultats de simulation / Voronoi grids are generated under constraints to reduce the errors due to cell geometry during flow simulation in reservoirs. The Voronoi points are optimized by minimizing objective functions corresponding to various geometrical constraints. An original feature of this approach is that the constraints can be combined simultaneously: - cell quality, by placing the Voronoi points at the cell barycenters;
- local refinement according to a density field ρ, corresponding to permeability, velocity or vorticity; - cell anisotropy according to a matrix field M built from the three principal vectors of the anisotropy, one of which is defined by the velocity vector or by the stratigraphic gradient; - orientation of the cell faces according to a matrix field M built from the three vectors orthogonal to the faces, one of which is defined by the velocity vector; - conformity to the surfaces of the structural model, faults and horizons; - alignment of the Voronoi points along well paths. The quality of the generated grids is assessed from geometrical criteria and from comparisons of flow simulation results with reference fine grids. Results show geometrical improvements, which are not necessarily followed by improvements in the flow simulation results.
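The barycenter constraint above is the classical centroidal Voronoi (Lloyd) iteration. A minimal sampled-cloud sketch under simplifying assumptions (uniform density on the unit square, not the reservoir setting):

```python
import numpy as np

def quantization_energy(seeds, samples):
    """Mean squared distance from each sample to its nearest seed."""
    d2 = ((samples[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).mean()

def lloyd_cvt(seeds, samples, iters=50):
    """Lloyd relaxation: assign each sample to its nearest seed, then move
    every seed to the barycenter of its assigned samples; repeat."""
    seeds = seeds.copy()
    for _ in range(iters):
        d2 = ((samples[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1)
        owner = d2.argmin(axis=1)
        for k in range(len(seeds)):
            mask = owner == k
            if mask.any():
                seeds[k] = samples[mask].mean(axis=0)
    return seeds

rng = np.random.default_rng(0)
samples = rng.uniform(size=(4000, 2))      # uniform density over the unit square
seeds0 = rng.uniform(size=(16, 2))
e0 = quantization_energy(seeds0, samples)
seeds = lloyd_cvt(seeds0, samples)
e1 = quantization_energy(seeds, samples)   # Lloyd never increases this energy
```

A density field ρ, as in the thesis, would enter as per-sample weights in the barycenter step; the anisotropy and face-orientation constraints require the richer objective functions described above.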
25

Statistical modelling of return on capital employed of individual units

Burombo, Emmanuel Chamunorwa 10 1900 (has links)
Return on Capital Employed (ROCE) is a popular financial instrument and communication tool for the appraisal of companies. Often, company management and other practitioners use untested rules and behavioural approaches when investigating the key determinants of ROCE, instead of the scientific statistical paradigm. The aim of this dissertation was to identify and quantify key determinants of ROCE of individual companies listed on the Johannesburg Stock Exchange (JSE), by comparing classical multiple linear regression, principal components regression, generalized least squares regression, and robust maximum likelihood regression approaches in order to improve companies' decision making. The performance indicators used to arrive at the best approach were the coefficient of determination (R²), adjusted R², and Mean Square Residual (MSE). Since the ROCE variable had positive and negative values, two separate analyses were done. The classical multiple linear regression models were constructed using a stepwise directed search with dependent variable log ROCE for the two data sets. Assumptions were satisfied and the problem of multicollinearity was addressed. For the positive ROCE data set, the classical multiple linear regression model had an R² of 0.928, an adjusted R² of 0.927 and an MSE of 0.013, and the lead key determinant was Return on Equity (ROE), with positive elasticity, followed by Debt to Equity (D/E) and Capital Employed (CE), both with negative elasticities. The model showed good validation performance. For the negative ROCE data set, the classical multiple linear regression model had an R² of 0.666, an adjusted R² of 0.652 and an MSE of 0.149, and the lead key determinant was Assets per Capital Employed (APCE), with positive effect, followed by Return on Assets (ROA) and Market Capitalization (MC), both with negative effects. The model showed poor validation performance. The results indicated both more and less precision than those found by previous studies.
This suggested that the key determinants are also important sources of variability in ROCE of individual companies that management need to work with. To handle the problem of multicollinearity in the data, principal components were selected using the Kaiser-Guttman criterion. The principal components regression model was constructed using dependent variable log ROCE for the two data sets. Assumptions were satisfied. For the positive ROCE data set, the principal components regression model had an R² of 0.929, an adjusted R² of 0.929 and an MSE of 0.069, and the lead key determinant was PC4 (log ROA, log ROE, log Operating Profit Margin (OPM)), followed by PC2 (log Earnings Yield (EY), log Price to Earnings (P/E)), both with positive effects. The model resulted in a satisfactory validation performance. For the negative ROCE data set, the principal components regression model had an R² of 0.544, an adjusted R² of 0.532 and an MSE of 0.167, and the lead key determinant was PC3 (ROA, EY, APCE), followed by PC1 (MC, CE), both with negative effects. The model indicated an accurate validation performance. The results showed that the use of principal components as independent variables did not improve classical multiple linear regression model prediction in our data. This implied that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with. Generalized least squares regression was used to address heteroscedasticity and dependence in the data. It was constructed using a stepwise directed search with dependent variable ROCE for the two data sets. For the positive ROCE data set, the weighted generalized least squares regression model had an R² of 0.920, an adjusted R² of 0.919 and an MSE of 0.044, and the lead key determinant was ROE with positive effect, followed by D/E with negative effect, Dividend Yield (DY) with positive effect and lastly CE with negative effect. The model indicated an accurate validation performance.
For the negative ROCE data set, the weighted generalized least squares regression model had an R² of 0.559, an adjusted R² of 0.548 and an MSE of 57.125, and the lead key determinant was APCE, followed by ROA, both with positive effects. The model showed a weak validation performance. The results suggested that the key determinants are less important sources of variability in ROCE of individual companies that management need to work with. Robust maximum likelihood regression was employed to handle the problem of contamination in the data. It was constructed using a stepwise directed search with dependent variable ROCE for the two data sets. For the positive ROCE data set, the robust maximum likelihood regression model had an R² of 0.998, an adjusted R² of 0.997 and an MSE of 6.739, and the lead key determinant was ROE with positive effect, followed by DY and lastly D/E, both with negative effects. The model showed a strong validation performance. For the negative ROCE data set, the robust maximum likelihood regression model had an R² of 0.990, an adjusted R² of 0.984 and an MSE of 98.883, and the lead key determinant was APCE with positive effect, followed by ROA with negative effect. The model also showed a strong validation performance. The results reflected that the key determinants are major sources of variability in ROCE of individual companies that management need to work with. Overall, the findings showed that the use of robust maximum likelihood regression provided more precise results than the three competing approaches, because it is more consistent, sufficient and efficient, has a higher breakdown point, and imposes weaker conditions. Company management can establish and control proper marketing strategies using the key determinants, and these strategies can lead to an improvement in ROCE. / Mathematical Sciences / M. Sc. (Statistics)
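The robust-regression idea that performs best above can be sketched with a generic Huber M-estimator fitted by iteratively reweighted least squares (synthetic contaminated data; this is a stand-in illustration, not the study's exact robust maximum likelihood procedure).

```python
import numpy as np

def huber_irls(X, y, delta=1.345, iters=50):
    """Huber M-estimation via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # start from OLS
    for _ in range(iters):
        r = y - X @ beta
        mad = np.median(np.abs(r - np.median(r)))
        s = mad / 0.6745 if mad > 0 else 1.0             # robust residual scale
        # Huber weights: 1 for small residuals, downweighted for large ones.
        w = np.minimum(1.0, delta * s / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + 0.2 * rng.normal(size=n)
y[:15] += 25.0                                           # 5% gross outliers
X = np.column_stack([np.ones(n), x])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]          # dragged by the outliers
beta_rob = huber_irls(X, y)                              # stays near (1, 2)
```

The downweighting of large residuals is what gives the robust fit its higher breakdown point relative to OLS, mirroring the contamination argument in the abstract.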
26

Logistic regression to determine significant factors associated with share price change

Muchabaiwa, Honest 19 February 2014 (has links)
This thesis investigates the factors that are associated with annual changes in the share price of Johannesburg Stock Exchange (JSE) listed companies. In this study, an increase in the value of a share occurs when the share price of a company is higher at the end of the financial year than in the previous year. Secondary data sourced from the McGregor BFA website were used, covering 2004 to 2011. Deciding which share to buy is the biggest challenge faced by both investment companies and individuals when investing on the stock exchange. This thesis uses binary logistic regression to identify the variables that are associated with a share price increase. The dependent variable was annual change in share price (ACSP) and the independent variables were the assets per capital employed ratio, debt per assets ratio, debt per equity ratio, dividend yield, earnings per share, earnings yield, operating profit margin, price earnings ratio, return on assets, return on equity and return on capital employed. Different variable selection methods were used and it was established that the backward elimination method produced the best model. It was established that the probability of success of a share is higher if shareholders are anticipating a higher return on capital employed and high earnings per share. It was, however, noted that the share price is negatively impacted by dividend yield and earnings yield. Since the odds of an increase in share price are higher if there is a higher return on capital employed and high earnings per share, investors and investment companies are encouraged to choose companies with high earnings per share and the best returns on capital employed. The final model had a classification rate of 68.3% and the validation sample produced a classification rate of 65.2%. / Mathematical Sciences / M.Sc. (Statistics)
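The kind of model described above can be sketched with a hand-rolled Newton-Raphson logistic fit on synthetic data (made-up variables standing in for the financial ratios; not the thesis data or coefficients).

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Binary logistic regression by Newton-Raphson (equivalently IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))     # fitted probabilities
        W = p * (1.0 - p)                        # observation weights
        H = X.T @ (X * W[:, None])               # observed information matrix
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(3)
n = 1000
roce = rng.normal(size=n)        # hypothetical standardized ROCE
eps_ratio = rng.normal(size=n)   # hypothetical standardized earnings per share
lin = 0.5 + 1.5 * roce + 1.0 * eps_ratio
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

X = np.column_stack([np.ones(n), roce, eps_ratio])
beta = fit_logistic(X, y)        # signs recover the data-generating direction
```

Exponentiating a coefficient gives the multiplicative change in the odds of a price increase per unit change in that ratio, which is the interpretation the abstract relies on.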
29

Prise en compte de la complexité géométrique des modèles structuraux dans des méthodes de maillage fondées sur le diagramme de Voronoï / Accounting for the geometrical complexity of geological structural models in Voronoi-based meshing methods

Pellerin, Jeanne 20 March 2014 (has links)
Selon la méthode utilisée pour construire un modèle structural en trois dimensions et selon l'application à laquelle il est destiné, son maillage, en d'autres termes sa représentation informatique, doit être adapté afin de respecter des critères de type, de nombre et de qualité de ses éléments. Les méthodes de maillage développées dans d'autres domaines que la géomodélisation ne permettent pas de modifier le modèle d'entrée. Ceci est souhaitable en géomodélisation afin de mieux contrôler le nombre d'éléments du maillage et leur qualité. L'objectif de cette thèse est de développer des méthodes de maillage permettant de remplir ces objectifs afin de gérer la complexité géométrique des modèles structuraux définis par frontières. Premièrement, une analyse des sources de complexité géométrique dans ces modèles est proposée. Les mesures développées constituent une première étape dans la définition d'outils permettant la comparaison objective de différents modèles et aident à caractériser précisément les zones plus compliquées à mailler dans un modèle. Ensuite, des méthodes originales de remaillage surfacique et de maillage volumique fondées sur l'utilisation des diagrammes de Voronoï sont proposées. Les fondements de ces deux méthodes sont identiques : (1) une optimisation de type Voronoï barycentrique est utilisée pour globalement obtenir un nombre contrôlé d’éléments de bonne qualité et (2) des considérations combinatoires permettant de construire localement le maillage final, éventuellement en modifiant le modèle initial. La méthode de remaillage surfacique est automatique et permet de simplifier un modèle à une résolution donnée. L'originalité de la méthode de maillage volumique est que les éléments générés sont de types différents. 
Des prismes et pyramides sont utilisés pour remplir les zones très fines du modèle, tandis que le reste du modèle est rempli avec des tétraèdres / Depending on the specific method used to build a 3D structural model, and on the exact purpose of this model, its mesh must be adapted so that it enforces criteria on element types, maximum number of elements, and mesh quality. Meshing methods developed for applications other than geomodeling forbid any modification of the input model, a modification that may be desirable in geomodeling to better control the number of elements in the final mesh and their quality. The objective of this thesis is to develop meshing methods that fulfill this requirement in order to better manage the geometrical complexity of B-Rep geological structural models. An analysis of the sources of geometrical complexity in those models is first proposed. The introduced measures are a first step toward the definition of tools allowing objective comparisons of structural models, and they make it possible to characterize the zones of a model that are more complicated to mesh. We then introduce two original meshing methods based on Voronoi diagrams: the first for surface remeshing, the second for hybrid gridding. The key ideas of these methods are identical: (1) the use of a centroidal Voronoi optimization to obtain a globally controlled number of elements of good quality, and (2) combinatorial considerations to locally build the final mesh, while sometimes modifying the initial model. The surface remeshing method is automatic and makes it possible to simplify a model at a given resolution. The gridding method generates a hybrid volumetric mesh: prisms and pyramids fill the very thin layers of the model, while the remaining regions are filled with tetrahedra.
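One concrete way to flag the "very thin layers" mentioned above is a minimal-separation measure between bounding surfaces: when the layer thickness drops below the target edge length, well-shaped isotropic tetrahedra cannot fit and a prism/pyramid filling becomes attractive. A toy 2D cross-section with made-up horizons (a crude proxy, not one of the thesis's complexity measures verbatim):

```python
import numpy as np

def min_separation(surf_a, surf_b):
    """Smallest pairwise distance between two sampled surfaces (here, 2D
    polylines) — a crude proxy for the minimal thickness of the layer
    bounded by them."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    return d.min()

# Hypothetical horizons: a wavy top surface over a flat bottom surface.
x = np.linspace(0.0, 1.0, 200)
horizon_top = np.column_stack([x, 0.03 + 0.02 * np.sin(8.0 * x)])
horizon_bot = np.column_stack([x, np.zeros_like(x)])

h = min_separation(horizon_top, horizon_bot)   # minimal layer thickness
target_edge = 0.05                             # desired mesh resolution
needs_prisms = h < target_edge                 # thin layer detected
```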
30

A simulation study of the effect of therapeutic horseback riding : a logistic regression approach

Pauw, Jeanette 11 1900 (has links)
Therapeutic horseback riding (THR) uses the horse as a therapeutic apparatus in physical and psychological therapy. This dissertation suggests a more appropriate technique for measuring the effect of THR. A research survey of the statistical methods used to determine the effect of THR was undertaken. Although researchers observed clinically meaningful change in several of the studies, this was not supported by statistical tests. A logistic regression approach is proposed as a solution to many of the problems experienced by researchers on THR. Since large THR-related data sets are not available, data were simulated. Logistic regression and t-tests were used to analyse the same simulated data sets, and the results were compared. The advantages of the logistic regression approach are discussed. This statistical technique can be applied in any field where the therapeutic value of an intervention has to be proven scientifically. / Mathematical Sciences / M. Sc. (Statistics)
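The core of the argument — that a binary "clinically meaningful improvement" outcome is naturally handled by logistic regression — can be sketched on simulated data (made-up improvement rates and sample sizes). With a single 0/1 group indicator, the logistic-regression slope equals the log odds ratio of the 2×2 table, so it can be read off directly:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60                                      # hypothetical riders per arm
improved_thr = rng.uniform(size=n) < 0.7    # therapy arm: 70% improve
improved_ctl = rng.uniform(size=n) < 0.3    # control arm: 30% improve

# 2x2 table counts: improved / not improved in each arm.
a, b = improved_thr.sum(), n - improved_thr.sum()
c, d = improved_ctl.sum(), n - improved_ctl.sum()

log_or = np.log((a * d) / (b * c))          # logistic slope for a 0/1 dummy
se = np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)   # Wald standard error
z = log_or / se                             # large |z| => significant effect
```

A t-test on the underlying (often skewed or bounded) raw scores can miss an effect that the odds-ratio analysis of the dichotomized outcome captures, which is the motivation given in the abstract.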
