
Discriminant Analysis and Support Vector Regression in High Dimensions: Sharp Performance Analysis and Optimal Designs

Sifaou, Houssem 04 1900 (has links)
Machine learning is emerging as a powerful tool for data science and is being applied in almost all subjects. In many applications, the number of features is comparable to the number of samples, and both grow large. This setting is usually called the high-dimensional regime, and it raises new challenges for the application of machine learning. In this work, we conduct a high-dimensional performance analysis of some popular classification and regression techniques. In the first part, discriminant analysis classifiers are considered. A major obstacle to using these classifiers in practice is that they depend on the inverses of covariance matrices, which must be estimated from training data. Several estimators of the inverse covariance matrix can be used, the most common being those based on regularization. In this thesis, we propose new estimators that are shown to yield better performance. The main principle of our approach is the design of an optimized inverse covariance matrix estimator based on the assumption that the covariance matrix is a low-rank perturbation of a scaled identity matrix. We show that the proposed classifiers are not only easier to implement but also outperform the classical regularization-based discriminant analysis classifiers. In the second part, we carry out a high-dimensional statistical analysis of linear support vector regression. Under some plausible assumptions on the statistical distribution of the data, we characterize the feasibility condition for hard support vector regression and, when it is feasible, derive an asymptotic approximation of its risk. Similarly, we study the test risk of soft support vector regression as a function of its parameters. The analysis is then extended to kernel support vector regression under a generalized linear model assumption.
Based on our analysis, we illustrate that adding more samples may harm the test performance of these regression algorithms, whereas it is always beneficial when the parameters are optimally selected. Our results pave the way to understanding the effect of the underlying hyperparameters and provide insights into how to optimally choose the kernel function.
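The low-rank-perturbation idea from the first part can be sketched in a few lines. This is a minimal illustration under the abstract's stated assumption, not the thesis's actual estimator: it keeps the top r sample eigenvalues, replaces the rest by their average (the scaled-identity "noise floor"), inverts, and plugs the result into a standard linear discriminant rule.

```python
import numpy as np

def lowrank_inv_cov(X, r):
    """Inverse-covariance estimate assuming the covariance is a rank-r
    perturbation of a scaled identity: keep the top-r sample eigenvalues,
    replace the remaining ones by their average, then invert.
    (Illustrative sketch only, not the thesis's estimator.)"""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)            # eigenvalues ascending
    vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder: descending
    noise = vals[r:].mean()                   # shared "identity" level
    shrunk = np.concatenate([vals[:r], np.full(len(vals) - r, noise)])
    return (vecs / shrunk) @ vecs.T           # V diag(1/shrunk) V'

def lda_predict(x, mu0, mu1, Sinv):
    """Classify x into class 1 iff its linear discriminant score is positive."""
    w = Sinv @ (mu1 - mu0)
    c = 0.5 * w @ (mu1 + mu0)                 # midpoint threshold
    return int(x @ w - c > 0)
```

Because the estimator only needs r leading eigenpairs plus one scalar, it is cheaper and more stable than inverting a fully regularized sample covariance when features and samples are comparable in number.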

Assessing corporate financial distress in South Africa

Hlahla, Bothwell Farai 10 November 2011 (has links)
This study develops a bankruptcy prediction model for South African companies listed on the Johannesburg Stock Exchange. The model is of considerable efficiency, and the findings extend the bankruptcy literature to developing countries. Sixty-four financial ratios for 28 companies, grouped into failed and non-failed firms, were tested using multiple discriminant analysis after conducting normality tests. Three variables were found to be significant: times interest earned, cash to debt, and working capital to turnover. The model correctly classified about 75% of failed and non-failed companies in both the original and cross-validation procedures. The study then conducted an external validation of the model's superiority by introducing a separate sample of failed companies, which showed that the model's predictive accuracy exceeds chance. Beyond the topic's popularity among researchers, this study highlights its importance and relevance to corporate managers, policy makers, and investors, especially from a developing-market perspective, thereby contributing significantly to the understanding of the factors that lead to corporate bankruptcy.
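A two-group discriminant classifier of the kind the study describes can be sketched as follows. This is an illustrative Fisher-style sketch, not the study's fitted model; the ratio names in the comments are taken from the abstract and the data here would be the companies' ratio matrix.

```python
import numpy as np

def fisher_discriminant(failed, healthy):
    """Two-group linear discriminant on financial ratios (illustrative
    sketch, not the study's model). Rows are companies; columns are
    ratios such as times interest earned, cash/debt, WC/turnover."""
    m1, m0 = failed.mean(axis=0), healthy.mean(axis=0)
    # pooled within-group covariance
    Sp = (np.cov(failed, rowvar=False) * (len(failed) - 1)
          + np.cov(healthy, rowvar=False) * (len(healthy) - 1)) / (
          len(failed) + len(healthy) - 2)
    w = np.linalg.solve(Sp, m1 - m0)          # discriminant direction
    cut = 0.5 * w @ (m1 + m0)                 # midpoint cutoff
    return w, cut

def classify(X, w, cut):
    """1 = predicted failed, 0 = predicted non-failed."""
    return (X @ w > cut).astype(int)
```

Cross-validation and the external hold-out test described in the abstract would then be run by refitting `fisher_discriminant` on subsamples and scoring `classify` on the held-out companies.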

A study of the determinants of transfer pricing. The evaluation of the relationship between a number of company variables and transfer pricing methods used by UK companies in domestic and international markets

Mostafa, Azza Mostafa Mohamed January 1981 (has links)
The transfer pricing literature indicates that an investigation of some aspects of this subject could usefully be undertaken in order to contribute to the understanding of transfer pricing in both domestic and international markets. This study aims to explore the current state of transfer pricing practice and to establish the importance attached to the ranking of transfer pricing determinants (i.e. objectives and environmental variables) and the extent to which that ranking varies across markets, industries, and transfer pricing methods. It also seeks to discover interrelationships among the determinants in order to produce a reduced set of basic factors. Lastly, it aims to evaluate the relationship between transfer pricing determinants and transfer pricing methods and to find a means of predicting the latter from a company's perception of the relative importance of these determinants. To achieve these objectives, an empirical study covering both domestic and international markets was undertaken in UK companies. The conclusions concern transfer pricing policy, the methods currently used, and the problems apparent in practice. The overall ranking by survey respondents of the transfer pricing determinants is given, together with the results of tests of certain hypotheses relating to this ranking. The determinants used in the survey for domestic and international markets (twelve and twenty respectively) were reduced by factor analysis to four and six factors, and the study used these results to obtain measures of the ranking of the discovered factors. Finally, the relationship between the transfer pricing determinants and transfer pricing methods was quantitatively evaluated in the form of a set of classification functions using multiple discriminant analysis.
The classification functions are able to predict the transfer pricing method actually used in companies with an acceptable degree of success. The study's results have been reviewed with a small number of senior managers who are involved in establishing transfer pricing policy within their companies. / Egyptian Government and Al-Azher University

On Fractionally-Supervised Classification: Weight Selection and Extension to the Multivariate t-Distribution

Gallaugher, Michael P.B. January 2017 (has links)
Recent work on fractionally-supervised classification (FSC), an approach that allows classification to be carried out with a fractional amount of weight given to the unlabelled points, is extended in two important ways. First, and of fundamental importance, the question of how to choose the amount of weight given to the unlabelled points is addressed. Then, the FSC approach is extended to mixtures of multivariate t-distributions. The first extension is essential because it makes FSC more readily applicable to real problems. The second, although less fundamental, demonstrates the efficacy of FSC beyond Gaussian mixture models. / Thesis / Master of Science (MSc)
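The fractional-weighting idea can be illustrated with a toy weighted log-likelihood. This is a hedged one-dimensional sketch of the weighting principle only: the thesis works with multivariate Gaussian and t mixtures and its own weighting scheme, neither of which is reproduced here.

```python
from math import exp, log, pi, sqrt

def normal_pdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def fsc_loglik(labelled, labels, unlabelled, params, alpha):
    """Fractionally-weighted objective (sketch): labelled points enter
    with full weight; unlabelled points enter with fractional weight
    alpha in [0, 1]. alpha = 0 recovers purely supervised fitting and
    alpha = 1 the usual semi-supervised likelihood. One-dimensional
    two-component Gaussian mixture for illustration only."""
    pis, mus, vars_ = params
    ll = 0.0
    for x, g in zip(labelled, labels):          # component g is known
        ll += log(pis[g] * normal_pdf(x, mus[g], vars_[g]))
    for x in unlabelled:                        # marginal over components
        ll += alpha * log(sum(p * normal_pdf(x, m, v)
                              for p, m, v in zip(pis, mus, vars_)))
    return ll
```

The weight-selection question the thesis addresses is, in this notation, the question of how to choose `alpha` before maximizing this objective over `params`.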

THE PREDICTIVE ACCURACY OF BOOSTED CLASSIFICATION TREES RELATIVE TO DISCRIMINANT ANALYSIS AND LOGISTIC REGRESSION

CRISANTI, MARK 27 June 2007 (has links)
No description available.

A study of the generalized eigenvalue decomposition in discriminant analysis

Zhu, Manli 12 September 2006 (has links)
No description available.

A discriminant analysis of attitudes related to the nuclear power controversy in central and southwestern Ohio and northern Kentucky /

Girondi, Alfred Joseph January 1980 (has links)
No description available.

Leaders and Followers Among Security Analysts

Wang, Li 05 1900 (has links)
We developed and tested procedures to rank the performance of security analysts according to the timeliness of their earnings forecasts. We compared leaders and followers among analysts on various performance attributes, such as accuracy, boldness, experience, and brokerage size. We also used discriminant analysis and a logistic regression model to examine which attributes affect the classification. Further, we examined whether the timeliness of forecasts is related to their impact on stock prices, and found that the lead analysts identified by the timeliness measure have a greater impact on stock prices than follower analysts. Our initial sample includes all firms on the Institutional Brokers Estimate System (I/B/E/S) database and security return data on the daily CRSP file for the years 1994 through 2003. / Thesis / Master of Science (MSc)
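A logistic classifier over analyst attributes, as mentioned in the abstract, might look like the following plain-NumPy sketch. It is an illustrative stand-in, not the study's fitted model; the attribute names in the comments are assumptions taken from the abstract.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-ascent logistic regression (illustrative stand-in
    for the study's model). Rows of X hold analyst attributes such as
    accuracy, boldness, experience; y = 1 for leaders, 0 for followers."""
    Xb = np.c_[np.ones(len(X)), X]            # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))     # predicted P(leader)
        w += lr * Xb.T @ (y - p) / len(y)     # log-likelihood gradient
    return w

def predict_leader(x, w):
    """True if the fitted model assigns P(leader) > 0.5."""
    return 1.0 / (1.0 + np.exp(-(w[0] + x @ w[1:]))) > 0.5
```

The discriminant-analysis comparison in the study would fit the same leader/follower labels with a linear discriminant instead and compare classification rates.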

Linear discriminant analysis

Riffenburgh, Robert Harry January 1957 (has links)
Linear discriminant analysis is the classification of an individual as having arisen from one or the other of two populations on the basis of a scalar linear function of measurements of the individual. This paper is a population and large-sample study of linear discriminant analysis. The population study is carried out on three levels: (a) with loss functions and prior probabilities, (b) without loss functions but with prior probabilities, and (c) with neither. The first level leads to consideration of risks, which may be split into two components, one for each type of misclassification: classification of an individual into population I given it arose from population II, and classification into II given it arose from I. Similarly, the second level leads to consideration of expected errors, and the third to conditional probabilities of misclassification, both of which again divide into the same two components. At each level the "optimum" discriminator should jointly minimize the two components. These quantities are positive for all hyperplanes; either one may be made equal to zero by classifying all individuals into the appropriate population, but this maximizes the other. Consequently, joint minimization must be obtained by some compromise, e.g. by selecting a single criterion to be minimized. Two types of criteria for judging discriminators are considered at each level: (i) the total risk, total expected errors, or sum of conditional probabilities of misclassification at levels (a), (b), (c) respectively; and (ii) the larger risk, larger expected error, or larger conditional probability of misclassification. These criteria are not particularly new, but they have not previously been applied to linear discrimination, nor have they been used jointly.
If A is a k-dimensional row vector of direction numbers, X a k-dimensional row vector of variables, and c a constant, a linear discriminator is AX' = c, which also represents a hyperplane in k-space. An individual is classified as being from one or the other population on the basis of its position relative to the hyperplane. The parameters A and c were investigated to find the sets of values that minimize each of the two criteria at the various levels. Exact results were found for A under some circumstances and approximate results in others. At levels (b) and (c), where exact results were obtained, they were the same for both criteria and independent of c. Investigation of the c's showed them to be exact functions of A and the population parameters, yielding one c for each criterion. At level (c), the c's for criteria (i) and (ii), c(min) and c(σ) respectively, were compared with c(m), a population analog of the c suggested by other authors, to discover the conditions under which the latter is better (i.e. has a lesser criterion value) than c(min) or c(σ) on criteria (ii) and (i) respectively. In the large-sample study, variances and covariances were found (in many cases approximately) for all estimates of the parameters entering the conditional probabilities of misclassification (level (c)), with extensions to level (b) and to special cases of level (a). From these variances and covariances, the expectations of the misclassification probabilities were derived for both criteria at level (c), comparisons were made where feasible, and the results tabulated. / Ph. D.
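When the two populations are normal with a common covariance, the two conditional probabilities of misclassification for a given hyperplane AX' = c can be computed in closed form. The following is a sketch under that normality assumption, using the paper's notation but not reproducing its derivations:

```python
import numpy as np
from statistics import NormalDist

def misclassification_probs(A, c, mu1, mu2, Sigma):
    """Conditional misclassification probabilities for the linear
    discriminator A x' = c, classifying into population I when A x' > c,
    assuming X ~ N(mu_i, Sigma) in population i with common Sigma.
    Returns (P(classified II | from I), P(classified I | from II))."""
    s = float(np.sqrt(A @ Sigma @ A))                      # sd of A X'
    p_mis_1 = NormalDist(float(A @ mu1), s).cdf(c)         # I -> II
    p_mis_2 = 1.0 - NormalDist(float(A @ mu2), s).cdf(c)   # II -> I
    return p_mis_1, p_mis_2
```

With the Fisher direction A = Σ⁻¹(μ₁ − μ₂) and c at the midpoint, the two components are equal, which is the balance between components that the paper's criterion (ii) trades off against criterion (i).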

A Deterministic Approach to Partitioning Neural Network Training Data for the Classification Problem

Smith, Gregory Edward 28 September 2006 (has links)
The classification problem in discriminant analysis involves identifying a function that accurately classifies observations as originating from one of two or more mutually exclusive groups. Because no single classification technique works best for all problems, many different techniques have been developed. For business applications, neural networks have become the most commonly used classification technique, and though they often outperform traditional statistical classification methods, their performance may be hindered by failings in the use of training data, a problem exacerbated by small data sets. In this dissertation, we identify and discuss a number of potential problems with typical random partitioning of neural network training data for the classification problem and introduce deterministic partitioning methods that overcome these obstacles and improve classification accuracy on new validation data. A traditional statistical distance measure enables this deterministic partitioning. Heuristics for both the two-group and k-group classification problems are presented. We show that these heuristics result in generalizable neural network models that produce more accurate classification results, on average, than several commonly used classification techniques. In addition, we compare several two-group simulated and real-world data sets with respect to the interior and boundary positions of observations within their groups' convex polyhedrons. We show by example that projecting the interior points of simulated data to the boundary of their group polyhedrons generates convex shapes similar to real-world data group convex polyhedrons. Our two-group deterministic partitioning heuristic is then applied to the repositioned simulated data, producing results superior to several commonly used classification techniques. / Ph. D.
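The distance-based partitioning idea can be sketched as follows. This illustrates one way a traditional statistical distance (here Mahalanobis distance) can drive a deterministic split; it is not the dissertation's exact heuristic.

```python
import numpy as np

def distance_partition(X, train_frac=0.7):
    """Deterministic train/validation split (sketch of the idea, not the
    dissertation's heuristic): rank each observation by its Mahalanobis
    distance from the group mean and place the most extreme (boundary)
    observations in the training set, so the model is trained on the
    region where group polyhedrons meet. Returns index arrays."""
    mu = X.mean(axis=0)
    Sinv = np.linalg.inv(np.cov(X, rowvar=False))
    # squared Mahalanobis distance of each row from the group mean
    d = np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu)
    order = np.argsort(d)[::-1]               # boundary points first
    n_train = int(round(train_frac * len(X)))
    return order[:n_train], order[n_train:]
```

In a two-group setting this would be applied within each group separately, so that both groups contribute their boundary observations to the training set.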
