61

Solving a mixed-integer programming formulation of a classification model with misclassification limits

Brooks, J. Paul 25 August 2005 (has links)
Classification, the development of rules for the allocation of observations to one or more groups, is a fundamental problem in machine learning and has been applied to many problems in medicine and business. We consider aspects of a classification model developed by Gallagher, Lee, and Patterson that is based on a result by Anderson. The model seeks to maximize the probability of correct G-group classification, subject to limits on misclassification probabilities. The mixed-integer programming formulation of the model is an empirical method for estimating the parameters of an optimal classification rule, which are identified as coefficients of linear functions by Anderson. The model is shown to be a consistent method for estimating the parameters of the optimal solution to the problem of maximizing the probability of correct classification subject to limits on inter-group misclassification probabilities. A polynomial-time algorithm is described for two-group instances; the underlying problem is NP-hard for a general number of groups, and an approximation is formulated as a mixed-integer program (MIP). The MIP is difficult to solve because of constraints in which certain variables are equal to the maximum of a set of linear functions; these constraints are conducive to an ill-conditioned coefficient matrix. Methods for generating edges of the conflict graph and conflict hypergraphs are discussed, and the conflict graph is employed to find cuts in a branch-and-bound framework. This technique and others improve solution time over industry-standard software on instances generated from real-world data. The classification accuracy of the model relative to standard classification methods on real-world and simulated data is also assessed.
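The constrained objective described above, maximizing correct classification subject to a cap on a misclassification probability, can be illustrated with a toy two-group, one-dimensional search (a sketch only, not the thesis's MIP; the data and the 10% limit are invented):

```python
# Sketch: for two 1-D groups, pick a threshold t that maximizes overall
# correct classification while keeping the group-1 misclassification
# rate at or below a stated limit. Data and the limit are illustrative.

def best_threshold(group0, group1, limit=0.10):
    """Classify x < t as group 0, x >= t as group 1."""
    best = None
    for t in sorted(group0 + group1):
        miss1 = sum(x < t for x in group1) / len(group1)   # group-1 error
        if miss1 > limit:                                  # the constraint
            continue
        correct = sum(x < t for x in group0) + sum(x >= t for x in group1)
        acc = correct / (len(group0) + len(group1))
        if best is None or acc > best[0]:
            best = (acc, t)
    return best

g0 = [1.0, 1.5, 2.0, 2.2, 3.0]
g1 = [2.5, 3.1, 3.5, 4.0, 4.2]
acc, t = best_threshold(g0, g1)
```

For two groups the thesis describes a polynomial-time algorithm; this brute-force threshold scan is only meant to make the constraint structure concrete.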
62

Application of Multivariate Statistical and Time Series Methods to Evaluate the Effects of Constructed Wetland on Water Quality Improvement

Wu, Fang-Ling 30 August 2010 (has links)
In recent years, many constructed wetlands in Taiwan have been built for wastewater treatment, river water purification, and ecological conservation. Evaluating the effectiveness of constructed wetlands on water purification requires frequent water quality monitoring. In this study, multivariate statistical analysis was applied to evaluate contaminant removal efficiency in a constructed wetland, and time series methods were then used to predict the trend of indicative pollutant concentrations in the wetland. Multivariate statistical analysis reduces the original data to representative factors, or groups similar observations into clusters, and then interprets the clustering outcomes. A constructed wetland near an old bridge in the Kaoping River Basin was used as the study site, and the statistical software SPSS 12.0 was used to perform the multivariate statistical analysis and evaluate its water quality characteristics. Results show that the removal efficiency of Systems A and B was 98% for total coliforms (TC), 55% for biochemical oxygen demand (BOD), 53% for chemical oxygen demand (COD), 55% for ammonia nitrogen (NH3-N), and 39% for total nitrogen (TN). Suspended solids (SS), however, were not removed in either system. Box-and-whisker plots indicate that the inflow water quality was unstable and variable, whereas the water became more stable along the flow direction toward the outflow. The major pollutant indicators, except SS, all showed a decreasing trend. Paired t-tests show that the p value for each item was lower than 0.05, except for total phosphorus (TP) in System A and nitrate nitrogen (NO3-N) and chlorophyll a (Chl-a) in System B. The correlations among TN, nitrogen oxides (NOx), NO3-N, nitrite nitrogen (NO2-N), and related parameters were all higher than 0.7.
Factor analysis in SPSS shows that the 17 water quality items of the study site could be reduced to four to six principal components, including a nitrate nutrient factor, a phosphorus nutrient factor, a eutrophication factor, an organic factor, and an environmental background factor; the major components are the nutrient and eutrophication factors. The ponds of the study site were classified into two or three clusters depending on inflow and outflow locations. This study also established a forecasting model of wetland pollutant concentrations using time series (ARIMA) methods; results show that the model for the B7 pond performed better than the others. Results indicate that the ARIMA model can be used to simulate the trend of treatment efficiency in the wetland system. The experience and results obtained from this study can inform water quality control strategies.
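The ARIMA forecasting step can be illustrated with its simplest autoregressive special case, an AR(1) model fitted by least squares (a sketch only; the series below is invented, and the study's models were fitted to the actual wetland monitoring data):

```python
# Minimal autoregressive sketch (AR(1), i.e. ARIMA(1,0,0)), not the
# study's fitted model. Fit y[t] = a + b*y[t-1] by least squares and
# make a one-step-ahead forecast. The series below is illustrative.

def fit_ar1(series):
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

series = [10.0, 9.2, 8.9, 8.1, 7.8, 7.1, 6.9, 6.2]   # e.g. BOD over time
a, b = fit_ar1(series)
forecast = a + b * series[-1]    # one-step-ahead prediction
```

A full ARIMA fit would add differencing and moving-average terms, which the study obtained from dedicated statistical software.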
63

Critical Factors of KMS adoption: An Empirical Study

Lien, Bi-nien 11 September 2006 (has links)
As a result of tough competition in the marketplace and the shift from a resource-based economy to a knowledge-based economy, companies are increasingly looking to gain competitive advantage by managing and maximizing their most valuable asset: knowledge. In line with this need, knowledge management systems (KMS), which apply IT systems and other organizational resources to manage knowledge strategically, are growing in popularity. Although it is not difficult to find KMS applications in organizations, the topic has not been well explored by researchers and scholars. Moreover, even within the limited literature on KMS, there is a scarcity of empirical studies, especially in the area of adoption, which is an important managerial issue. This research addresses the gap by studying the adoption of KMS in Taiwanese organizations; specifically, we want to find the significant factors of KMS adoption. The study builds an integrated model from an innovation perspective combined with other important factors. Three dimensions are involved: (1) innovation characteristics of KMS, including relative advantage, complexity, compatibility, and cost; (2) organizational factors, including IT infrastructure, employees' IS knowledge, management support, slack resources, and business size; and (3) external factors, comprising only competitive pressure. An empirical survey methodology is applied to test the research model and hypotheses proposed in this study. Eight out of nine hypotheses are validated using discriminant analysis. The results reveal that a firm's management support has the strongest discriminating power; competitive pressure also strongly affects a firm's adoption of KMS. In conclusion, all the variables except relative advantage have discriminant power.
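The discriminant analysis used to separate adopters from non-adopters can be sketched, for the two-group case, as Fisher's linear discriminant (the "adopter"/"non-adopter" rows below are synthetic stand-ins for the survey responses, not the study's data):

```python
import numpy as np

# Sketch of two-group linear discriminant analysis: project onto
# w = S_pooled^-1 (mu1 - mu0) and threshold at the midpoint.
def fisher_direction(X0, X1):
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S = np.cov(X0, rowvar=False) * (len(X0) - 1) \
      + np.cov(X1, rowvar=False) * (len(X1) - 1)
    S /= len(X0) + len(X1) - 2                      # pooled covariance
    return np.linalg.solve(S, m1 - m0)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(30, 2))             # "non-adopters"
X1 = rng.normal(2.0, 1.0, size=(30, 2))             # "adopters"
w = fisher_direction(X0, X1)
mid = (X0.mean(axis=0) + X1.mean(axis=0)) @ w / 2   # midpoint cutoff
acc = np.concatenate([X0 @ w < mid, X1 @ w >= mid]).mean()
```

The projection weights play the role of the discriminant coefficients whose relative sizes indicate each factor's discriminating power.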
64

Key Success Factors of Website Charging Strategy: Influences of Website Attributes and Users' Willingness to Pay

Tung, Chia-ta 02 February 2010 (has links)
In the early days of internet development, advertising revenue was the most important income source for a website. After the internet bubble burst, some scholars suggested that information content could be priced and charged to users. Nowadays, thanks to broadband networks, users spend less money and more time connected to the internet, and website owners have developed more online business models, earning advertising fees, subscription fees, transaction fees, and license fees from customers and providers. This research examines how website attributes differ between chargeable and free websites, and what kinds of services and content make users willing to pay. Based on previous studies, this research identifies six website attributes: fit to purpose, ease of use, interaction, personalization, customization, and trust. The sample consists of the 85 most popular websites in Taiwan, for which experts scored the six website attributes along with website awareness and competitive situation. Discriminant analysis indicates that personalization, interaction, and trust discriminate between different charging models. In addition, an online questionnaire survey was used to capture users' experience and willingness to pay. Factor analysis yields three principal components: efficiency, design, and personalization. Website managers can shape their pricing strategies by measuring these attributes and factors.
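The factor-analysis step that reduces attribute scores to a few principal components can be sketched via the eigendecomposition of the correlation matrix (the three "attribute" columns below are invented; the study's inputs were expert scores on 85 websites):

```python
import numpy as np

# Sketch of principal-component extraction from a correlation matrix,
# the mechanism behind the factor analysis described above.
def principal_components(X):
    corr = np.corrcoef(X, rowvar=False)            # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)        # ascending order
    order = np.argsort(eigvals)[::-1]              # largest variance first
    return eigvals[order], eigvecs[:, order]

rng = np.random.default_rng(1)
base = rng.normal(size=(100, 1))
X = np.hstack([base + rng.normal(scale=0.3, size=(100, 1)),  # "efficiency"
               base + rng.normal(scale=0.3, size=(100, 1)),  # "design"
               rng.normal(size=(100, 1))])                   # "personalization"
eigvals, eigvecs = principal_components(X)
# The first component captures the two correlated columns.
```

Components with eigenvalues above 1 are the usual retention candidates, matching the study's four-to-six retained factors.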
65

Financial ratios, discriminant analysis and the prediction of corporate financial distress in Hong Kong

Chan, Ho-cheong. January 1900 (has links)
Thesis (M.B.A.)--University of Hong Kong, 1985.
66

An evaluation of financial performance of companies : the financial performance of companies is investigated using multiple discriminant analysis together with methods for the identification of potential high performance companies

Belhoul, Djamal January 1983 (has links)
The objective of this study is to establish whether companies that utilise their resources more efficiently present specific characteristics in their financial profile, and whether on the basis of these characteristics a classification model can be constructed that includes, alongside resource utilisation measures, predictors related to other financial dimensions calculated from published information. The research proceeds by examining the factors influencing companies' performance and the reliability of published accounts. Discriminant analysis is chosen as the most appropriate technique of analysis; its applications in the field of financial analysis are discussed, and an examination of the discriminant analysis technique is undertaken. For reasons of comparability and access to a large quantity of information, the analytical part of the study is based on data extracted from a computer-readable tape provided by Extel Statistical Services Ltd. It starts by describing the financial variables used later in the study and proposing a classification framework to assist in identifying the financial dimensions of importance to the problem under investigation. A discriminant model that correctly classifies 85 per cent of the companies is then constructed. It includes, besides measures of resource utilisation, measures of financial leverage, working capital management, cash position, and stability of past performance. The part of the analysis on the identification of potentially well performing companies indicates that, although specific characteristics can be noticed up to five years beforehand, a classification model with sufficient accuracy can only be constructed one year before a high level of performance is actually reached.
Finally, an index of financial performance based on normal approximations of the z-score distributions from the model used to identify well performing companies is suggested, and an assessment of the structural change experienced by companies rising from a less well performing to a well performing status is presented.
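An index based on a normal approximation of a z-score distribution can be sketched as a percentile transform (the mean and standard deviation below are placeholders, not values from the thesis):

```python
import math

# Sketch: turn a discriminant z-score into a 0-100 performance index
# via a normal approximation, in the spirit of the index described
# above. The distribution parameters are illustrative.
def performance_index(z, mean, std):
    """Percentile of z under N(mean, std^2), scaled to 0-100."""
    return 50 * (1 + math.erf((z - mean) / (std * math.sqrt(2))))

idx = performance_index(1.2, mean=0.0, std=1.0)
```

A company scoring 1.2 standard deviations above the reference mean would sit near the upper end of such an index.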
67

A Comparison of Two Modeling Techniques in Customer Targeting For Bank Telemarketing

Tang, Hong 17 December 2014 (has links)
Customer targeting is the key to the success of bank telemarketing. To compare flexible discriminant analysis and logistic regression for customer targeting, a survey dataset from a Portuguese bank was used. For the flexible discriminant analysis model, backward elimination of explanatory variables was used, with several rounds of manually redefining the dummy variables. For the logistic regression model, automatic stepwise selection was performed to decide which explanatory variables should remain in the final model. Ten-fold stratified cross-validation was performed to estimate the model parameters and accuracies. Although they employ different sets of explanatory variables, the flexible discriminant analysis model and the logistic regression model show equally satisfactory performance in customer classification based on the areas under the receiver operating characteristic curves. Focusing on the predicted "right" customers, the logistic regression model shows slightly better classification and a higher overall correct prediction rate.
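The ten-fold stratified cross-validation mentioned above keeps each fold's class mix equal to the full dataset's. A minimal splitting sketch (the labels are invented, not the bank data):

```python
# Sketch of stratified k-fold splitting: deal each class's indices
# round-robin across folds so every fold preserves the class ratio.
def stratified_folds(labels, k=10):
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):          # round-robin assignment
            folds[j % k].append(i)
    return folds

labels = ["no"] * 90 + ["yes"] * 10           # a 90/10 class mix
folds = stratified_folds(labels, k=10)
# Every fold gets 9 "no" and 1 "yes" observation.
```

Each fold then serves once as the held-out test set while the model is fitted on the other nine.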
68

A Study on Effects of Migration in MOGA with Island Model by Visualization

Furuhashi, Takeshi, Yoshikawa, Tomohiro, Yamamoto, Masafumi January 2008 (has links)
Session ID: SA-G4-2 / Joint 4th International Conference on Soft Computing and Intelligent Systems and 9th International Symposium on Advanced Intelligent Systems, September 17-21, 2008, Nagoya University, Nagoya, Japan
69

Predicting Insolvency : A comparison between discriminant analysis and logistic regression using principal components

Geroukis, Asterios, Brorson, Erik January 2014 (has links)
In this study, we compare two statistical techniques, logistic regression and discriminant analysis, to see how well they classify companies into clusters made from the solvency ratio, using principal components as independent variables. The principal components are constructed from different financial ratios. We use cluster analysis to find groups with low, medium, and high solvency ratios among 1200 companies listed on the NASDAQ stock market and use this as an a priori definition of risk. The results show that logistic regression outperforms discriminant analysis in classifying all of the groups except the middle one. We conclude that this is in line with previous studies.
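The logistic-regression side of the comparison can be sketched as gradient descent on the log-loss (the two-column inputs below are synthetic stand-ins for the principal-component scores, not the study's data):

```python
import numpy as np

# Sketch of two-class logistic regression fitted by gradient descent
# on the log-loss; inputs play the role of principal-component scores.
def fit_logistic(X, y, lr=0.1, steps=2000):
    Xb = np.hstack([np.ones((len(X), 1)), X])      # intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))              # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)          # log-loss gradient step
    return w

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (40, 2)), rng.normal(1, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)                  # e.g. low vs high solvency
w = fit_logistic(X, y)
p = 1 / (1 + np.exp(-(np.hstack([np.ones((80, 1)), X]) @ w)))
acc = ((p > 0.5) == y).mean()
```

For the three-group setting in the study, this binary model would be extended to a multinomial formulation or fitted one-group-versus-rest.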
70

[Credit] scoring : predicting, understanding and explaining consumer behaviour

Hamilton, Robert January 2005 (has links)
This thesis stems from my research into the broad area of (credit) scoring and the predicting, understanding, and explaining of consumer behaviour. This research started at the University of Edinburgh on an ESRC-funded project in 1988. This work, which is being submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy of Loughborough University, consists of an introductory chapter and a selection of papers published from 1991 to 2001 (inclusive). The papers address some of the key issues and areas of interest and concern arising from the rapidly evolving and expanding credit (card) market and the highly competitive nature of the credit industry; these features were particularly evident during the late 1980s and throughout the 1990s. Chapter One provides a general background to the research and outlines some of the key (practical) issues involved in building a (credit) scorecard. Additionally, it provides a brief summary of each of the research papers appearing in full in Chapters 2 to 9 (inclusive) and ends with some general limitations and conclusions. The research papers in Chapters 2 to 9 are all concerned with predicting, understanding, and explaining different types of consumer behaviour in relation to the use of credit cards: for example, discriminating between 'GOOD' and 'BAD' repayers of credit card debt on the basis of different definitions of good and bad; identifying 'slow payers' using different statistical methods; examining the characteristics of credit card users and non-users; and identifying the characteristics of credit card holders most likely to return their credit card.
