241 |
Εκτίμηση των παραμέτρων στο μοντέλο της διπαραμετρικής εκθετικής κατανομής, υπό περιορισμό / Estimation of the parameters in the two-parameter exponential distribution model, under a restriction
Ραφτοπούλου, Χριστίνα 10 June 2014 (has links)
Η παρούσα μεταπτυχιακή διατριβή εντάσσεται ερευνητικά στην περιοχή της Στατιστικής Θεωρίας Αποφάσεων και ειδικότερα στην εκτίμηση των παραμέτρων στο μοντέλο της διπαραμετρικής εκθετικής κατανομής με παράμετρο θέσης μ και παράμετρο κλίμακος σ. Θεωρούμε το πρόβλημα εκτίμησης των παραμέτρων θέσης μ και κλίμακας σ, όταν μ≤c, όπου c είναι μία γνωστή σταθερά. Αποδεικνύουμε ότι σε σχέση με το κριτήριο του Μέσου Τετραγωνικού Σφάλματος (ΜΤΣ), οι βέλτιστοι αναλλοίωτοι εκτιμητές των μ και σ, είναι μη αποδεκτοί όταν μ≤c, και προτείνουμε βελτιωμένους. Επίσης συγκρίνουμε τους εκτιμητές αυτούς σε σχέση με το κριτήριο του Pitman. Επιπλέον, προτείνουμε εκτιμητές που είναι καλύτεροι από τους βέλτιστους αναλλοίωτους εκτιμητές, όταν μ≤c, ως προς την συνάρτηση ζημίας LINEX. Τέλος, η θεωρία που αναπτύσσεται εφαρμόζεται σε δύο ανεξάρτητα δείγματα προερχόμενα από εκθετική κατανομή. / The present master thesis deals with the estimation of the location parameter μ and the scale parameter σ of the two-parameter exponential distribution. We consider the problem of estimating the location parameter μ and the scale parameter σ when it is known a priori that μ≤c, where c is a known constant. We establish that, with respect to the mean squared error (MSE) criterion, the best affine estimators of μ and σ derived in the absence of the information μ≤c are inadmissible, and we propose estimators which improve upon them. We also compare these estimators with respect to the Pitman nearness criterion, and we propose estimators which are better than the standard unrestricted estimators with respect to a suitably chosen LINEX loss. Finally, the theory developed is applied to the problem of estimating the location and scale parameters of two exponential distributions when the location parameters are ordered.
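As a rough, self-contained illustration of why the usual estimator can be improved when the restriction μ ≤ c is known (a minimal sketch, not the thesis's improved estimators; the sample size, parameter values and constant c are arbitrary assumptions), the following Python simulation compares the mean squared error of the sample minimum with its truncation at c:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mse(mu, sigma, c, n=10, reps=100_000):
    """Compare the usual estimator of mu (the sample minimum) with a
    truncated estimator min(X_(1), c) that exploits the restriction mu <= c."""
    x = mu + rng.exponential(scale=sigma, size=(reps, n))
    x1 = x.min(axis=1)                 # usual estimator of the location parameter
    x1_trunc = np.minimum(x1, c)       # estimator using the prior information mu <= c
    mse_plain = np.mean((x1 - mu) ** 2)
    mse_trunc = np.mean((x1_trunc - mu) ** 2)
    return mse_plain, mse_trunc

# Example: mu exactly at the boundary c, where truncation helps most (assumed values).
print(simulate_mse(mu=1.0, sigma=2.0, c=1.0))
```

Because every observation exceeds μ, truncating the sample minimum at c can only move the estimate closer to μ whenever μ ≤ c, which is the intuition behind inadmissibility results of this kind.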
|
242 |
Multiple Outlier Detection: Hypothesis Tests versus Model Selection by Information Criteria
Lehmann, Rüdiger, Lösler, Michael 14 June 2017 (has links) (PDF)
The detection of multiple outliers can be interpreted as a model selection problem. The models that can be selected are the null model, which indicates an outlier-free set of observations, or a class of alternative models, which contain a set of additional bias parameters. A common way to select the right model is to use a statistical hypothesis test; in geodesy, data snooping is the most popular. Another approach arises from information theory: here, the Akaike information criterion (AIC) is used to select an appropriate model for a given set of observations. The AIC is based on the Kullback-Leibler divergence, which describes the discrepancy between the model candidates. Both approaches are discussed and applied to test problems: the fitting of a straight line and a geodetic network. Some relationships between data snooping and information criteria are discussed. When compared, it turns out that the information criteria approach is simpler and more elegant. Along with the AIC there are many alternative information criteria for selecting different outliers, and it is not clear which one is optimal.
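The following Python sketch illustrates the information-criterion side of this comparison on a toy straight-line fit (a hedged sketch, not the paper's implementation; the data, the single outlier and the noise level are made up): each alternative model adds one bias parameter for a suspected outlier, and the candidate with the smallest AIC is selected.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
x = np.linspace(0.0, 1.0, n)
y = 2.0 + 3.0 * x + rng.normal(scale=0.05, size=n)
y[7] += 0.5                          # introduce a single outlier (assumed)

def aic(X, y):
    """AIC for a Gaussian linear model, up to an additive constant."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1               # regression coefficients + noise variance
    return len(y) * np.log(rss / len(y)) + 2 * k

X0 = np.column_stack([np.ones(n), x])          # null model: no outlier
scores = {"null": aic(X0, y)}
for i in range(n):                             # alternative models: bias at observation i
    e = np.zeros(n); e[i] = 1.0
    scores[f"outlier@{i}"] = aic(np.column_stack([X0, e]), y)

best = min(scores, key=scores.get)
print(best, scores[best], scores["null"])
```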
|
243 |
Polymères en milieu aléatoire : influence d'un désordre corrélé sur le phénomène de localisation / Polymers in random environment : influence of correlated disorder on the localization phenomenon
Berger, Quentin 15 June 2012 (has links)
Cette thèse porte sur l'étude de modèles de polymère en milieu aléatoire: on se concentre sur le cas d'un polymère dirigé en dimension d+1 qui interagit avec un défaut unidimensionnel. Les interactions sont possiblement non-homogènes, et sont représentées par des variables aléatoires. Une question importante est celle de l'influence du désordre sur le phénomène de localisation: on veut déterminer si la présence d'inhomogénéités modifie les propriétés critiques du système, et notamment les caractéristiques de la transition de phase (auquel cas le désordre est dit pertinent). En particulier, nous prouvons que dans le cas où le défaut est une marche aléatoire, le désordre est pertinent en dimension d≥3. Ensuite, nous étudions le modèle d'accrochage sur une ligne de défauts possédant des inhomogénéités corrélées spatialement. Il existe un critère non rigoureux (dû à Weinrib et Halperin), que l'on applique à notre modèle, et qui prédit si le désordre est pertinent ou non en fonction de l'exposant critique du système homogène, noté νpur, et de l'exposant de décroissance des corrélations. Si le désordre est gaussien et les corrélations sommables, nous montrons la validité du critère de Weinrib-Halperin: nous le prouvons dans la version hiérarchique du modèle, et aussi, de manière partielle, dans le cadre (standard) non-hiérarchique. Nous avons de plus obtenu un résultat surprenant: lorsque les corrélations sont suffisamment fortes, et en particulier si elles sont non-sommables (dans le cadre gaussien), il apparaît un régime où le désordre devient toujours pertinent, l'ordre de la transition de phase étant toujours plus grand que νpur. La prédiction de Weinrib-Halperin ne s'applique alors pas à notre modèle. / This thesis studies models of polymers in random environment: we focus on the case of a directed polymer in dimension d+1 that interacts with a one-dimensional defect. The interactions are possibly inhomogeneous, and are represented by random variables. We deal with the question of the influence of disorder on the localization phenomenon: we want to determine whether the presence of inhomogeneities modifies the critical properties of the system, and especially the characteristics of the phase transition (in that case, disorder is said to be relevant). In particular, we prove that if the defect is a random walk, disorder is relevant in dimension d≥3. We then study the pinning model in a random correlated environment. There is a non-rigorous criterion (due to Weinrib and Halperin), which we can apply to our model, and which predicts disorder relevance or irrelevance according to the value of the critical exponent of the homogeneous system, denoted νpur, and of the correlation decay exponent. When disorder is Gaussian and correlations are summable, we show that the Weinrib-Halperin criterion is valid: we prove this in the hierarchical version of the model, and also, partially, in the non-hierarchical (standard) framework. Moreover, we obtained a surprising result: when correlations are sufficiently strong, and in particular when they are non-summable (in the Gaussian framework), a new regime appears in which disorder is always relevant, the order of the phase transition being always larger than νpur. The Weinrib-Halperin prediction therefore does not apply to our model.
|
244 |
Tamanho amostral para estimar a concentração de organismos em água de lastro: uma abordagem bayesiana / Sample size for estimating the organism concentration in ballast water: a Bayesian approach
Costa, Eliardo Guimarães da 05 June 2017 (has links)
Metodologias para obtenção do tamanho amostral para estimar a concentração de organismos em água de lastro e verificar normas internacionais são desenvolvidas sob uma abordagem bayesiana. Consideramos os critérios da cobertura média, do tamanho médio e da minimização do custo total sob os modelos Poisson com distribuição a priori gama e binomial negativo com distribuição a priori Pearson Tipo VI. Além disso, consideramos um processo Dirichlet como distribuição a priori no modelo Poisson com o propósito de obter maior flexibilidade e robustez. Para fins de aplicação, implementamos rotinas computacionais usando a linguagem R. / Sample size methodologies for estimating the organism concentration in ballast water and for verifying international standards are developed under a Bayesian approach. We consider the criteria of average coverage, of average length, and of total cost minimization under the Poisson model with a gamma prior distribution and the negative binomial model with a Pearson type VI prior distribution. Furthermore, we consider a Dirichlet process as a prior distribution in the Poisson model with the purpose of gaining more flexibility and robustness. For practical applications, we implemented computational routines using the R language.
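A minimal sketch of one such criterion, the average length criterion under the Poisson model with a gamma prior (the thesis's own routines are in R; this Python version, the hyperparameters, the credibility level and the target length are all assumptions): for each candidate sample size the expected posterior credible-interval length is approximated by simulating from the prior predictive, and the smallest n meeting the target is returned.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def expected_length(n, a, b, cred=0.95, sims=5_000):
    """Average posterior credible-interval length for the Poisson rate under a
    gamma(a, b) prior (b = rate), averaging over the prior predictive."""
    lam = rng.gamma(a, 1.0 / b, size=sims)
    s = rng.poisson(lam * n)                      # sufficient statistic: sum of n counts
    lo = stats.gamma.ppf((1 - cred) / 2, a + s, scale=1.0 / (b + n))
    hi = stats.gamma.ppf(1 - (1 - cred) / 2, a + s, scale=1.0 / (b + n))
    return np.mean(hi - lo)

def sample_size(a, b, target_len, n_max=500):
    """Smallest n whose expected credible-interval length meets the target."""
    for n in range(1, n_max + 1):
        if expected_length(n, a, b) <= target_len:
            return n
    return None

print(sample_size(a=2.0, b=1.0, target_len=0.5))   # assumed prior and target
```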
|
245 |
Development And Design Optimization Of Laminated Composite Structures Using Failure Mechanism Based Failure Criterion
Naik, G Narayana 12 1900 (has links)
In recent years, the use of composites has been increasing in most fields of engineering, such as aerospace, automotive, civil construction, marine and prosthetics, because of their light weight, very high specific strength and stiffness, corrosion resistance and high thermal resistance. The specific strength of fibers is many orders of magnitude higher than that of metals. Thus, laminated fiber reinforced plastics have emerged as attractive materials for many engineering applications. Although the uses of composites are enormous, there is always an element of fuzziness in their design. Composite structures are required to be designed to resist high stresses, and for this one requires a reliable failure criterion. The anisotropic behaviour of composites makes it very difficult to formulate failure criteria and to verify them experimentally, which requires one to perform the necessary bi-axial tests and plot the failure envelopes. Failure criteria are usually based on certain assumptions, which are sometimes questionable, because the failure process in composites is quite complex. Failure in a composite is normally governed by initiating failure mechanisms such as fiber breaks, fiber compressive failure, matrix cracks, matrix crushing, delamination, disbonds, or a combination of these. The initiating failure mechanism is the one responsible for initiating failure in a laminated composite. Initiating failure mechanisms generally depend on the type of loading, geometry, material properties, conditions of manufacture, boundary conditions, weather conditions, etc. Since composite materials exhibit directional properties, their applications and failure conditions should be properly examined, and in addition robust computational tools have to be exploited in the design of structural components for efficient utilisation of these materials.
The design of structural components requires reliable failure criteria for the safe design of the components. Several failure criteria are available for the design of composite laminates, but none of the available anisotropic strength criteria represents observed results sufficiently accurately to be employed confidently by itself in design. Most failure criteria are validated against available uniaxial test data, whereas in practical situations laminates are subjected to at least biaxial states of stress. Since biaxial test data are very difficult and time-consuming to generate, it is a necessity to develop computational tools for modelling the biaxial behaviour of composite laminates. Understanding the initiating failure mechanisms and developing reliable failure criteria are essential prerequisites for effective utilization of composite materials. Most failure criteria consider uniaxial test data with constant shear stress to develop failure envelopes, but in reality structures are subjected to biaxial normal stresses as well as shear stresses. Hence, one can develop different failure envelopes depending upon the percentage of shear stress content.
As mentioned earlier, safe design of composite structural components requires a reliable failure criterion. Currently two broad approaches, namely (1) Damage Tolerance Based Design and (2) Failure Criteria Based Design, are in use for the design of laminated structures in the aerospace industry. Both approaches have limitations. Damage tolerance based design suffers from a lack of proper definition of damage and the inability of analytical tools to handle realistic damage. Failure criteria based design, although relatively more attractive in view of its simplicity, forces the designer to use unverified design points in stress space, resulting in unpredictable failure conditions. Generally, failure envelopes are constructed using 4 or 5 experimental constants. In this approach, small experimental errors in these constants lead to large shifts in the failure boundaries, raising doubts about the reliability of the boundary in some segments. Further, the envelopes contain segments which have no experimental support and so can lead to either conservative or nonconservative designs. A conservative design leads to extra weight, a situation not acceptable in the aerospace industry, whereas a nonconservative design is obviously prohibitive, as it implies failure. Hence, both damage tolerance based design and failure criteria based design have limitations, and a new method which combines the advantages of both approaches is desirable. This issue has also been thoroughly debated at many international conferences on composites, and several pioneers in the composite industry have indicated the need for further research on the development of reliable failure criteria. This motivated the present research towards the development of a new failure criterion for the design of composite structures.
Several experts' meetings have been held worldwide to assess existing failure theories and computer codes for the design of composite structures. One such meeting, on 'Failure of Polymeric Composites and Structures: Mechanisms and Criteria for the Prediction of Performance', was held at St. Albans (UK) in 1991 by the UK Science & Engineering Council and the UK Institute of Mechanical Engineers. After thorough deliberations it was concluded that:
1. There is no universal definition of failure of composites.
2. There is little or no faith in the failure criteria that are in current use, and
3. There is a need to carry out a World Wide Failure Exercise (WWFE).
Based on the experts' suggestions, Hinton and Soden initiated the WWFE in consultation with Prof. Bryan Harris (Editor, Composites Science and Technology) as a programme for the comparative assessment of existing failure criteria and codes, with the following aims:
1. Establish the current level of maturity of theories for predicting the failure response of fiber reinforced plastic (FRP) laminates.
2. Closing the knowledge gap between theoreticians and design practitioners in this field.
3. Stimulating the composites’ community into providing design engineers with more robust and accurate failure prediction methods, and the confidence to use them.
The organisers invited pioneers in the composite industry to take part in the WWFE programme. Among them, Professor Hashin declined to participate and wrote a letter to the organisers saying: 'My only work in this subject relates to failure criteria of unidirectional fiber composites, not to laminates. I do not believe that even the most complete information about failure of single plies is sufficient to predict the failure of a laminate consisting of such plies. A laminate is a structure which undergoes a complex damage process (mostly of cracking) until it finally fails. The analysis of such a process is a prerequisite for failure analysis. While significant advances have been made in this direction we have not yet arrived at the practical goal of failure prediction.'
Another important conference, Composites for the Next Millennium (Proceedings of the Symposium in honor of S.W. Tsai on his 70th birthday, Tours, France, July 2-3, 1999, p. 19), held in France in 1999, reached conclusions similar to those of the 1991 UK meeting. Paul A. Lagace and S. Mark Spearing, referring to the article 'Predicting Failure in Composite Laminates: the background to the exercise' by M.J. Hinton & P.D. Soden (Composites Science and Technology, Vol. 58, No. 7 (1998), p. 1005), pointed out that 'after over thirty years of work, "the" composite failure criterion is still an elusive entity'. Numerous researchers have produced dozens of approaches; hundreds of papers, manuscripts and reports have been written and presentations made to address the latest thoughts, add data to accumulated knowledge bases and continue the scholarly debate.
Thus, the outcome of these experts' meetings is that there is a need to develop new failure theories and, due to the complexities associated with experimentation, especially obtaining bi-axial data, computational methods are the only viable alternative. Currently, biaxial data on composites are very limited, as biaxial testing of laminates is very difficult and standardization of biaxial data is yet to be done. All these experts' comments and suggestions motivated the present research towards the development of a new failure criterion, called the 'Failure Mechanism Based Failure Criterion', based on initiating failure mechanisms.
The objectives of the thesis are
1. Identification of failure mechanism based failure criteria for specific initiating failure mechanisms, and assignment of the appropriate criterion to each initiating failure mechanism,
2. Use of the 'failure mechanism based design' method for composite pressurant tanks, and its evaluation against some of the standard 'failure criteria' based designs in terms of the overall weight of the pressurant tank,
3. Development of a new failure criterion, called the 'Failure Mechanism Based Failure Criterion', for the case without shear stress, and the corresponding failure envelope,
4. Development of different failure envelopes including the effect of shear stress, depending upon the percentage of shear stress content, and
5. Design of composite laminates with the Failure Mechanism Based Failure Criterion using optimization techniques such as Genetic Algorithms (GA) and Vector Evaluated Particle Swarm Optimization (VEPSO), and comparison of the designs with those based on other failure criteria such as the Tsai-Wu and Maximum Stress failure criteria.
The following paragraphs describe the achievement of these objectives.
In chapter 2, a rectangular panel subjected to boundary displacements is used as an example to illustrate the concept of failure mechanism based design. Composite laminates are generally designed using a failure criterion based on a set of standard experimental strength values. Failure of composite laminates involves different failure mechanisms depending upon the stress state, so different failure mechanisms become dominant at different points on the failure envelope. Use of a single failure criterion, as is normally done in designing laminates, is unlikely to be satisfactory for all combinations of stresses. As an alternative, the use of a simple failure criterion to identify the dominant failure mechanism, followed by design of the laminate using an appropriate failure mechanism based criterion, is suggested in this thesis. A complete 3-D stress analysis has been carried out using the general purpose NISA finite element software. Comparison of results using standard failure criteria such as Maximum Stress, Maximum Strain, Tsai-Wu, Yamada-Sun, Maximum Fiber Strain, Grumman, O'Brien and Lagace indicates substantial differences in predicting the first ply failure. Results for failure load factors based on the failure mechanism based approach are included. Identification of the failure mechanism at highly stressed regions, and design of the component to withstand an artificial defect representative of this failure mechanism, provides a realistic approach to achieving the necessary strength without adding unnecessary weight to the structure.
It is indicated that the failure mechanism based design approach offers a reliable way of assessing critically stressed regions to eliminate the uncertainties associated with the failure criteria.
In chapter 3, the failure mechanism based design approach is applied to composite pressurant tanks of upper stages of launch vehicles and propulsion systems of spacecraft. The problem is studied by introducing an artificial matrix crack, representative of the initiating failure mechanism, in the highly stressed regions and calculating the strain energy release rate (SERR). The total SERR value is obtained as 3330.23 J/m2, which is very high compared to the critical value Gc (135 J/m2), meaning that the crack will grow further. The failure load fraction at which the crack has a tendency to grow is estimated to be 0.04054. Results indicate that there are significant differences in the failure load fraction for different failure criteria. Comparison with the Failure Mechanism Based Criterion (FMBC) clearly indicates that matrix cracks occur at loads much below the design load, yet the fibers are able to carry the design load.
In chapter 4, a Failure Mechanism Based Failure Criterion (FMBFC) is proposed for the development of failure envelopes for unidirectional composite plies. A representative volume element of the laminate under local loading is micromechanically modelled to predict the experimentally determined strengths, and this model is then used to predict points on the failure envelope in the neighborhood of the experimental points. The NISA finite element software has been used to determine the stresses in the representative volume element, and from these micro-stresses the strength of the lamina is predicted. A correction factor is used to match the prediction of the present model with the experimentally determined strength, so that the model can be expected to provide accurate predictions of strength in the neighborhood of the experimental points. A procedure for the construction of the failure envelope in stress space is outlined, and the results are compared with some of the standard failure criteria widely used in the composite industry. Comparison with the Tsai-Wu failure criterion shows significant differences, particularly in the third quadrant, when the ply is under bi-axial compressive loading; comparison with the maximum stress criterion indicates better correlation. The present failure mechanism based approach opens a new possibility of constructing reliable failure envelopes for bi-axial loading applications using standard uniaxial test data.
In chapter 5, the new failure criterion for laminated composites developed in chapter 4 for the case without shear stress is extended to obtain failure envelopes that include shear stress. The approach is based on micromechanical analysis of composites, wherein a representative volume consisting of a fiber surrounded by matrix in the appropriate volume fraction is modeled using 3-D finite elements to predict the strengths. In this chapter, different failure envelopes are developed by varying the shear stress from 0% to 100% of the shear strength in steps of 25%. Results obtained from this approach are compared with the Tsai-Wu and maximum stress failure criteria and show that the predicted strengths match more closely with the maximum stress criterion. Hence, it can be concluded that the influence of shear stress on the failure of the lamina is of little consequence as far as the prediction of laminate strength is concerned.
In chapter 6, the failure mechanism based failure criterion developed by the authors is used for the design optimization of laminates, and the percentage savings in total laminate weight is presented. The design optimization of composite laminates is performed using genetic algorithms, one of the robust tools available for the optimum design of composite laminates. Genetic algorithms employ techniques originating from biology and depend on the application of Darwin's principle of survival of the fittest. When a population of biological creatures is permitted to evolve over generations, individual characteristics that are beneficial for survival tend to be passed on to future generations, since individuals carrying them get more chances to breed. In biological populations, these characteristics are stored in chromosomal strings. The mechanics of natural genetics is derived from operations that result in an arranged yet randomized exchange of genetic information between the chromosomal strings of the reproducing parents, and consists of reproduction, crossover, mutation and inversion of the chromosomal strings. Here, minimization of the weight of composite laminates for given loading and material properties is considered. The genetic algorithm has the capability of selecting the ply orientations, the thickness of a single ply, the number of plies and the stacking sequence of the layers.
In this chapter, minimum weight design of composite laminates is presented using the Failure Mechanism Based (FMB), Maximum Stress and Tsai-Wu failure criteria. The objective is to demonstrate the effectiveness of the newly proposed FMB Failure Criterion (FMBFC) in composite design. The FMBFC considers different failure mechanisms, such as fiber breaks, matrix cracks, fiber compressive failure and matrix crushing, which are relevant for different loading conditions. The FMB and Maximum Stress failure criteria predict up to 43 percent savings in laminate weight compared to the Tsai-Wu failure criterion in some quadrants of the failure envelope. The Tsai-Wu failure criterion overpredicts the weight of the laminate by up to 86 percent in the third quadrant of the failure envelope compared to the FMB and Maximum Stress failure criteria, when the laminate is subjected to biaxial compressive loading. It is found that the FMB and Maximum Stress failure criteria give comparable weight estimates. The FMBFC can be considered for use in the strength design of composite structures.
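The following Python fragment is a deliberately simplified sketch of such a GA-based design loop (it is not the thesis implementation: the candidate ply angles, ply thickness, load level, and the crude cosine-squared "failure check" standing in for the FMBFC or maximum stress analysis are all assumptions); it only illustrates the selection, crossover and mutation mechanics applied to the number of plies and their orientations.

```python
import math
import random

random.seed(0)

ANGLES = [0, 45, -45, 90]        # candidate ply orientations (assumed)
PLY_T = 0.125                    # ply thickness in mm (assumed)
N_LOAD = 400.0                   # applied in-plane load, N/mm (assumed)
PLY_CAP = 60.0                   # load one 0-degree ply can carry, N/mm (toy value)

def survives(plies):
    """Toy stand-in for a real failure check (FMBFC / max stress in the thesis):
    each ply contributes capacity proportional to cos^2 of its angle."""
    cap = sum(PLY_CAP * math.cos(math.radians(a)) ** 2 for a in plies)
    return cap >= N_LOAD

def fitness(plies):
    # minimise thickness; heavy penalty if the laminate fails the check
    return len(plies) * PLY_T + (1000.0 if not survives(plies) else 0.0)

def random_laminate():
    return [random.choice(ANGLES) for _ in range(random.randint(4, 16))]

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(plies):
    plies = plies[:]
    if random.random() < 0.3:
        plies[random.randrange(len(plies))] = random.choice(ANGLES)
    if random.random() < 0.2 and len(plies) > 4:
        plies.pop(random.randrange(len(plies)))   # try removing a ply
    return plies

pop = [random_laminate() for _ in range(40)]
for gen in range(100):
    pop.sort(key=fitness)
    parents = pop[:20]                            # truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
best = min(pop, key=fitness)
print(len(best), best, fitness(best))
```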
In chapter 7, particle swarm optimization is used for the design optimization of composite laminates. Particle swarm optimization (PSO) is a novel meta-heuristic inspired by the flocking behaviour of birds, and its application to composite design optimization problems has not yet been extensively explored. Composite laminate optimization typically consists in determining the number of layers, the stacking sequence and the ply thickness that give the desired properties. This chapter details the use of the Vector Evaluated Particle Swarm Optimization (VEPSO) algorithm, a multi-objective variant of PSO, for composite laminate design optimization. VEPSO is a modern coevolutionary algorithm which employs multiple swarms to handle the multiple objectives, and the information migration between these swarms ensures that a global optimum solution is reached. The current problem is formulated as a classical multi-objective optimization problem, with the objectives of minimizing the weight of the component for a required strength and minimizing the total cost incurred, such that the component does not fail. In this chapter, an optimum configuration for a multi-layered unidirectional carbon/epoxy laminate is determined using VEPSO. Results are presented for various loading configurations of the composite structures. VEPSO predicts the same minimum weight and percentage savings in laminate weight as the GA for all loading conditions. There are small differences between the results predicted by VEPSO and the GA for some loading and stacking sequence configurations, mainly due to the random selection of swarm particles and the random generation of populations, respectively; the difference can be prevented by running the same programme repeatedly.
The Thesis is concluded by highlighting the future scope of several potential applications based on the developments reported in the thesis.
|
246 |
Automated construction of generalized additive neural networks for predictive data mining / Jan Valentine du Toit
Du Toit, Jan Valentine January 2006 (has links)
In this thesis Generalized Additive Neural Networks (GANNs) are studied in the context of predictive Data Mining. A GANN is a novel neural network implementation of a Generalized Additive Model. Originally GANNs were constructed interactively by considering partial residual plots. This methodology involves subjective human judgment, is time consuming, and can result in suboptimal results. The newly developed automated construction algorithm solves these difficulties by performing model selection based on an objective model selection criterion. Partial residual plots are only utilized after the best model is found to gain insight into the relationships between inputs and the target. Models are organized in a search tree with a greedy search procedure that identifies good models in a relatively short time. The automated construction algorithm, implemented in the powerful SAS® language, is nontrivial, effective, and comparable to other model selection methodologies found in the literature. This implementation, which is called AutoGANN, has a simple, intuitive, and user-friendly interface. The AutoGANN system is further extended with an approximation to Bayesian Model Averaging. This technique accounts for uncertainty about the variables that must be included in the model and uncertainty about the model structure. Model averaging utilizes in-sample model selection criteria and creates a combined model with better predictive ability than using any single model. In the field of Credit Scoring, the standard theory of scorecard building is not tampered with, but a pre-processing step is introduced to arrive at a more accurate scorecard that discriminates better between good and bad applicants. The pre-processing step exploits GANN models to achieve significant reductions in marginal and cumulative bad rates. The time it takes to develop a scorecard may be reduced by utilizing the automated construction algorithm. / Thesis (Ph.D. (Computer Science))--North-West University, Potchefstroom Campus, 2006.
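A heavily simplified Python sketch of the criterion-driven greedy search idea described above (not the AutoGANN system, which constructs additive neural networks in SAS; here each candidate model is an ordinary least-squares fit on a subset of synthetic inputs, scored by the Schwarz Bayesian criterion, and all data are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

def sbc(cols):
    """Schwarz (Bayesian) information criterion for an OLS fit on the given columns."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    k = Z.shape[1] + 1
    return n * np.log(rss / n) + k * np.log(n)

selected, best = [], sbc([])
improved = True
while improved:                      # greedy search: add the term that improves SBC most
    improved = False
    for j in set(range(p)) - set(selected):
        score = sbc(selected + [j])
        if score < best:
            best, best_j, improved = score, j, True
    if improved:
        selected.append(best_j)

print(selected, best)
```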
|
248 |
Logistic regression to determine significant factors associated with share price change
Muchabaiwa, Honest 19 February 2014 (has links)
This thesis investigates the factors that are associated with annual changes in the share price of Johannesburg Stock Exchange (JSE) listed companies. In this study, an increase in the value of a share occurs when the share price of a company is higher at the end of the financial year than in the previous year. Secondary data sourced from the McGregor BFA website, covering the years 2004 to 2011, was used.
Deciding which share to buy is the biggest challenge faced by both investment companies and individuals when investing on the stock exchange. This thesis uses binary logistic regression to identify the variables that are associated with share price increase.
The dependent variable was annual change in share price (ACSP) and the independent variables were assets per capital employed ratio, debt per assets ratio, debt per equity ratio, dividend yield, earnings per share, earnings yield, operating profit margin, price earnings ratio, return on assets, return on equity and return on capital employed.
Different variable selection methods were used and it was established that the backward elimination method produced the best model. It was established that the probability of success of a share is higher if the shareholders are anticipating a higher return on capital employed and high earnings per share. It was, however, noted that the share price is negatively impacted by dividend yield and earnings yield. Since the odds of an increase in share price are higher if there is a higher return on capital employed and high earnings per share, investors and investment companies are encouraged to choose companies with high earnings per share and the best returns on capital employed.
The final model had a classification rate of 68.3% and the validation sample produced a classification rate of 65.2%. / Mathematical Sciences / M.Sc. (Statistics)
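A small Python sketch of the backward elimination idea described above (not the thesis's data or final model; the synthetic predictors, the significance threshold and the use of statsmodels are assumptions): starting from the full logistic model, the least significant variable is repeatedly dropped until every remaining p-value falls below the threshold.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
X = pd.DataFrame(rng.normal(size=(n, 4)),
                 columns=["return_on_capital", "earnings_per_share",
                          "dividend_yield", "debt_per_equity"])
# synthetic response: share price increase driven by two of the ratios (assumed)
logit = 0.8 * X["return_on_capital"] + 0.6 * X["earnings_per_share"]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

def backward_elimination(X, y, alpha=0.05):
    cols = list(X.columns)
    while cols:
        model = sm.Logit(y, sm.add_constant(X[cols])).fit(disp=0)
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:          # every remaining variable is significant
            return model, cols
        cols.remove(worst)                 # drop the least significant variable
    return None, []

model, kept = backward_elimination(X, y)
print(kept)
```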
|
249 |
Model selection
Hildebrand, Annelize 11 1900 (has links)
In developing an understanding of real-world problems, researchers develop mathematical and statistical models. Various model selection methods exist which can be used to obtain a mathematical model that best describes the real-world situation in some or other sense. These methods aim to assess the merits of competing models by concentrating on a particular criterion. Each selection method is associated with its own criterion and is named accordingly. The better known ones include Akaike's Information Criterion, Mallows' Cp and cross-validation, to name a few. The value of the criterion is calculated for each model and the model corresponding to the minimum value of the criterion is then selected as the "best" model. / Mathematical Sciences / M. Sc. (Statistics)
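As a tiny illustration of the recipe described above (compute the criterion for every candidate model and keep the minimum), the sketch below selects among polynomial regression models by 5-fold cross-validated prediction error; the synthetic data and the candidate set are assumptions, and cross-validation stands in for any of the criteria named above.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120
x = rng.uniform(-1, 1, size=n)
y = 1.0 - 2.0 * x + 3.0 * x ** 2 + rng.normal(scale=0.3, size=n)
folds = 5
idx = rng.permutation(n)                         # one common fold assignment

def cv_error(degree):
    """5-fold cross-validated mean squared prediction error of a polynomial fit."""
    errors = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        coef = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coef, x[test])
        errors.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errors))

scores = {d: cv_error(d) for d in range(1, 7)}   # candidate models: degrees 1..6
best = min(scores, key=scores.get)               # model with the smallest criterion value
print(best, scores)
```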
|
250 |
Métodos sem malha: aplicações do Método de Galerkin sem elementos e do Método de Interpolação de Ponto em casos estruturais. / Meshless methods: applications of Galerkin method and point interpolation method in structural cases.
Franklin Delano Cavalcanti Leitão 19 February 2010 (has links)
Apesar de serem intensamente estudados em muitos países que caminham na vanguarda do conhecimento, os métodos sem malha ainda são pouco explorados pelas universidades brasileiras. De modo a gerar uma maior difusão ou, para a maioria, fazer sua introdução, esta dissertação objetiva efetuar o entendimento dos métodos sem malha baseando-se em aplicações atinentes à mecânica dos sólidos. Para tanto, são apresentados os conceitos primários dos métodos sem malha e o seu desenvolvimento histórico desde sua origem no método smooth particle hydrodynamic até o método da partição da unidade, sua forma mais abrangente. Dentro deste contexto, foi investigada detalhadamente a forma mais tradicional dos métodos sem malha: o método de Galerkin sem elementos, e também um método diferenciado: o método de interpolação de ponto. Assim, por meio de aplicações em análises de barras e chapas em estado plano de tensão, são apresentadas as características, virtudes e deficiências desses métodos em comparação aos métodos tradicionais, como o método dos elementos finitos. É realizado ainda um estudo em uma importante área de aplicação dos métodos sem malha, a mecânica da fratura, buscando compreender como é efetuada a representação computacional da trinca, com especialidade, por meio dos critérios de visibilidade e de difração. Utilizando-se esses critérios e os conceitos da mecânica da fratura, é calculado o fator de intensidade de tensão através do conceito da integral J. / Although meshless methods are intensively studied in many countries at the forefront of scientific knowledge, they are still little explored by Brazilian universities. To promote their wider diffusion, or to serve as an introduction for many readers, this work seeks to build an understanding of meshless methods through applications in solid mechanics. The basic concepts of meshless methods and their historical development are presented, from their origin in the smoothed particle hydrodynamics method up to the partition of unity method, their most general form. Within this context, the most traditional meshless method, the element-free Galerkin method, is investigated in detail, together with a different method, the point interpolation method. Through applications to the analysis of bars and plates in plane stress, the characteristics, strengths and weaknesses of these methods are presented in comparison with traditional methods such as the finite element method. An important application area of meshless methods, fracture mechanics, is also studied, in order to understand how a crack is represented computationally, in particular through the visibility and diffraction criteria. Using these criteria and fracture mechanics concepts, the stress intensity factor is computed via the J-integral.
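As a small illustration of one building block mentioned above, the sketch below computes one-dimensional polynomial point interpolation (PIM) shape functions (a textbook-style sketch, not code from the dissertation; the node coordinates and evaluation point are arbitrary): the monomial moment matrix assembled on the support nodes is inverted so that the shape functions interpolate nodal values exactly and sum to one.

```python
import numpy as np

def pim_shape_functions(nodes, x):
    """1-D polynomial point interpolation: phi(x) = p(x) @ inv(P),
    where row i of P holds the monomials evaluated at support node i."""
    m = len(nodes)
    P = np.vander(nodes, N=m, increasing=True)     # moment matrix, rows p(x_i)
    p = np.vander(np.atleast_1d(x), N=m, increasing=True)[0]
    return np.linalg.solve(P.T, p)                 # phi = p @ P^{-1}

nodes = np.array([0.0, 0.5, 1.0, 1.5])             # support nodes (assumed)
phi = pim_shape_functions(nodes, 0.7)
print(phi, phi.sum())                              # partition of unity: sum is ~1
# Kronecker delta property: each shape function is 1 at its own node, 0 at the others
print(pim_shape_functions(nodes, nodes[2]))
```

The Kronecker delta property checked in the last line is what makes essential boundary conditions easier to impose with PIM shape functions than with the moving least squares approximation used in the element-free Galerkin method.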
|