61

Redundant and Irrelevant Attribute Elimination using Autoencoders / Redundant och irrelevant attributeliminering med autoencoders

Granskog, Tim January 2017 (has links)
Real-world data can often be high-dimensional and contain redundant or irrelevant attributes. High-dimensional data are problematic for machine learning: the high dimensionality causes learning to take more time and, unless the dataset is large enough to provide an ample number of samples for each class, accuracy will suffer. Redundant and irrelevant attributes give the data a higher dimensionality than necessary and obscure the important attributes. It is therefore of interest to reduce the dimensionality of the data whilst preserving the important attributes. Several techniques have been presented in computer science for reducing the dimensionality of data. One of these is the autoencoder, an unsupervised neural network that uses its input as the target output; by limiting the number of neurons in the hidden layer, the autoencoder is forced to learn a lower-dimensional representation of the data. This study focuses on using the autoencoder to reduce the dimensionality, and to eliminate irrelevant or redundant attributes, of four datasets from different domains. The results show that the autoencoder can eliminate redundant attributes that are a linear combination of the other attributes and provide a better lower-dimensional representation of the data than the unreduced data. However, for data gathered under a controlled and carefully managed situation, the autoencoder cannot always provide a better lower-dimensional representation than the data with redundant attributes. Lastly, the results show that the autoencoder cannot eliminate irrelevant attributes that have no correlation to the class or other attributes.
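The bottleneck idea this abstract describes can be illustrated with a minimal sketch. The code below is not the author's implementation; it is a small NumPy example, with made-up data in which one attribute is a linear combination of two others, showing how an autoencoder with a narrow hidden layer learns a lower-dimensional reconstruction.

```python
import numpy as np

# Toy data: 3 informative attributes plus 1 redundant attribute that is
# a linear combination of the first two (the case the abstract discusses).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
X = np.hstack([X, 2.0 * X[:, [0]] - X[:, [1]]])      # 4th column is redundant
X = (X - X.mean(axis=0)) / X.std(axis=0)             # standardise

n, d, k = X.shape[0], X.shape[1], 3                  # bottleneck of 3 neurons
W1 = rng.normal(scale=0.1, size=(d, k)); b1 = np.zeros(k)   # encoder
W2 = rng.normal(scale=0.1, size=(k, d)); b2 = np.zeros(d)   # decoder

lr = 0.5
for epoch in range(5000):
    H = X @ W1 + b1                # hidden (encoded) representation
    X_hat = H @ W2 + b2            # reconstruction of the input
    err = X_hat - X
    loss = (err ** 2).mean()

    # Backpropagation for this linear autoencoder under mean squared error.
    g_out = 2.0 * err / (n * d)
    gW2, gb2 = H.T @ g_out, g_out.sum(axis=0)
    g_hid = g_out @ W2.T
    gW1, gb1 = X.T @ g_hid, g_hid.sum(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"final reconstruction MSE with a {k}-unit bottleneck: {loss:.6f}")
# A low MSE indicates the 4 attributes are representable in 3 dimensions,
# i.e. the redundant attribute carries no extra information.
```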
62

Optimal datalink selection for future aeronautical telecommunication networks

Alam, Atm S., Hu, Yim Fun, Pillai, Prashant, Xu, K., Baddoo, J. 08 May 2017 (has links)
Modern aeronautical telecommunication networks (ATN) make use of several simultaneous datalinks to deliver robust, secure and efficient ATN services. This paper proposes a Multiple Attribute Decision Making based optimal datalink selection algorithm that considers different attributes, including safety, QoS, costs and user/operator preferences. An intelligent TRigger-based aUtomatic Subjective weighTing (i-TRUST) method is also proposed for computing the subjective weights needed to provide user flexibility. Simulation results demonstrate that the proposed algorithm significantly improves the performance of the ATN system. / Innovate U.K. Project SINCBAC - Secure Integrated Network Communications for Broadband and ATM Connectivity: Application number 18650-134196.
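As a rough illustration of the kind of Multiple Attribute Decision Making step the abstract refers to, the sketch below scores candidate datalinks by a simple weighted sum over normalised attributes. The datalink names, attribute values and weights are invented for illustration; the paper's actual selection algorithm and its i-TRUST weighting method are not reproduced here.

```python
import numpy as np

# Hypothetical decision matrix: rows = candidate datalinks, columns = attributes.
links = ["SatCom", "LDACS", "AeroMACS"]
attrs = ["safety", "throughput", "latency", "cost"]          # example attributes
X = np.array([[0.95, 2.0, 250.0, 8.0],
              [0.90, 1.2,  60.0, 5.0],
              [0.85, 9.0,  20.0, 6.0]])
benefit = np.array([True, True, False, False])               # latency/cost: lower is better
weights = np.array([0.4, 0.3, 0.2, 0.1])                     # stand-in for subjective weights

# Min-max normalisation, flipping cost-type attributes so that higher is always better.
lo, hi = X.min(axis=0), X.max(axis=0)
norm = (X - lo) / (hi - lo)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]

scores = norm @ weights                                      # simple additive weighting
for name, s in zip(links, scores):
    print(f"{name:9s} score = {s:.3f}")
print("selected datalink:", links[int(np.argmax(scores))])
```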
63

The Influence of Evaluative Reactions to Attribute Frames and Accounting Data on Capital Budgeting Decisions

Allport, Christopher Douglas 14 July 2005 (has links)
The purpose of this dissertation was to analyze the susceptibility of capital budgeting decisions to bias. Based on the political nature of many of these decisions, attribute framing effects were analyzed in a capital budgeting decision context. Specifically, two independent variables were analyzed: accounting data and attribute frames. This research proposed that attribute framing effects would be conditional on the nature of the accounting data being considered. When the accounting data elicited a positive or negative evaluative reaction, attribute frames were expected to have little influence on capital budgeting decisions. However, when the accounting data was neutral, eliciting an ambiguous evaluative reaction, attribute frames were predicted to bias these decisions. An experiment was conducted that considered the issue across two types of capital budgeting decisions: accept/reject decisions (dichotomous decisions) and strategic alliance judgments (monetary allocations). The experimental findings strongly support the predicted relationships. These results suggest that persuasive descriptions are not effective in capital budgeting contexts when accounting data provides a clear picture of an investment's future success; however, such tactics may be vitally important when the accounting information is unclear about the investment's future success. / Ph. D.
64

Conditional, Structural and Unobserved Heterogeneity: three essays on preference heterogeneity in the design of financial incentives to increase weight loss program reach

Yuan, Yuan Clara 27 August 2015 (has links)
This dissertation consists of three essays on forms of preference heterogeneity in discrete choice models. The first essay uses a model of heterogeneity conditional on observed individual-specific characteristics to tailor financial incentives to enhance weight loss program participation among target demographics. Financial incentives in weight loss programs have received attention mostly with respect to effectiveness rather than participation and representativeness. This essay examines the impact of financial incentives on participation with respect to populations vulnerable to obesity and understudied in the weight loss literature. We found significant heterogeneity across target sub-populations and suggest a strategy of offering multiple incentive designs to counter the dispersive effects of preference heterogeneity. The second essay investigates the ability of a novel elicitation format to reveal decision strategy heterogeneity. Attribute non-attendance, the behaviour of ignoring some attributes when performing a choice task, violates fundamental assumptions of the random utility model. However, self-reported attendance behaviour on dichotomous attendance scales has been shown to be unreliable. In this essay, we assess the ability of a polytomous attendance scale to ameliorate self-report unreliability. We find that the lowest point on the attendance scale corresponds best to non-attendance, attendance scales need be no longer than two or three points, and that the polytomous attendance scale had limited success in producing theoretically consistent results. The third essay explores available approaches to model different features of unobserved heterogeneity. Unobserved heterogeneity is popularly modelled using the mixed logit model, so called because it is a mixture of standard conditional logit models. Although the mixed logit model can, in theory, approximate any random utility model with an appropriate mixing distribution, there is little guidance on how to select such a distribution. This essay contributes to suggestions on distribution selection by describing the heterogeneity features which can be captured by established parametric mixing distributions and more recently introduced nonparametric mixing distributions, both of a discrete and continuous nature. We provide empirical illustrations of each feature in turn using simple mixing distributions which focus on the feature at hand. / Ph. D.
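The mixed logit model mentioned in the third essay can be written as a conditional logit whose coefficients are drawn from a mixing distribution, with choice probabilities obtained by averaging over draws. The sketch below simulates those probabilities for a single choice task; the normal mixing distribution, its parameters and the alternative attributes are assumptions for illustration and are not the dissertation's estimation code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_attrs, n_draws = 2, 5000

# Hypothetical attributes of three alternatives in one choice task.
X = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.8, 0.8]])

# Assumed mixing distribution: beta ~ Normal(mu, diag(sigma^2)).
mu = np.array([1.0, -0.5])
sigma = np.array([0.8, 0.4])
betas = mu + sigma * rng.standard_normal((n_draws, n_attrs))

# Conditional logit probability for each draw, then averaged (simulated mixed logit).
util = betas @ X.T                                   # (n_draws, n_alternatives)
util -= util.max(axis=1, keepdims=True)              # numerical stability
p = np.exp(util)
p /= p.sum(axis=1, keepdims=True)
mixed_probs = p.mean(axis=0)

print("simulated mixed logit choice probabilities:", np.round(mixed_probs, 3))
```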
65

Analogy-based software project effort estimation : contributions to projects similarity measurement, attribute selection and attribute weighting algorithms for analogy-based effort estimation

Azzeh, Mohammad Y. A. January 2010 (has links)
Software effort estimation by analogy is a viable alternative to other estimation techniques, and in many cases researchers have found that it outperforms other estimation methods in terms of accuracy and practitioners' acceptance. However, the overall performance of analogy-based estimation depends on two major factors: the similarity measure and attribute selection and weighting. Current similarity measures, such as nearest-neighbourhood techniques, have been criticised for inadequacies related to attribute relevancy, noise and uncertainty, in addition to the problem of handling categorical attributes. This research focuses on improving the efficiency and flexibility of analogy-based estimation to overcome these inadequacies. In particular, this thesis proposes two new approaches to model and handle uncertainty in the similarity measurement and, most importantly, to reflect the structure of the dataset in the similarity measurement using fuzzy modelling based on the Fuzzy C-means algorithm. The first proposed approach, the Fuzzy Grey Relational Analysis method, combines Fuzzy set theory and Grey Relational Analysis to improve local and global similarity measurement and to tolerate the imprecision associated with using different data types (continuous and categorical). The second proposed approach uses Fuzzy numbers and their concepts to develop a practical yet efficient approach to support analogy-based systems, especially at the early phase of software development. Specifically, we propose a new similarity measure and adaptation technique based on Fuzzy numbers. We also propose a new attribute subset selection algorithm and attribute weighting technique based on the hypothesis of analogy-based estimation that projects that are similar in terms of attribute values are also similar in terms of effort values, using row-wise Kendall rank correlation between the similarity matrix based on project effort values and the similarity matrix based on project attribute values. A literature review of related software engineering studies revealed that existing attribute selection techniques (such as brute-force and heuristic algorithms) are restricted to the choice of performance indicators, such as the Mean Magnitude of Relative Error and the Prediction Performance Indicator, and are computationally far more intensive. The proposed algorithms provide a sound statistical basis and justification for their procedures. The performance of the proposed approaches has been evaluated using real industrial datasets. Results and conclusions from a series of comparative studies with conventional estimation by analogy using the available datasets are presented. Studies were also carried out to investigate statistically the significant differences between predictions generated by our approaches and those generated by the most popular techniques, such as conventional analogy estimation, neural networks and stepwise regression. The results and conclusions indicate that the two proposed approaches have the potential to deliver comparable, if not better, accuracy than the compared techniques. The results also show that Grey Relational Analysis tolerates the uncertainty associated with using different data types. As well as the original contributions within the thesis, a number of directions for further research are presented. Most chapters in this thesis have been disseminated in international journals and highly refereed conference proceedings.
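The attribute weighting hypothesis in this abstract — projects that are similar on an attribute should also be similar in effort — can be sketched as follows: for each attribute, a project similarity matrix is compared row-wise with the effort-based similarity matrix using Kendall's tau, and the correlations are combined into a weight. The data, the similarity definition, and the averaging and clipping steps below are assumptions for illustration; the thesis's exact procedure may differ.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
n_projects, n_attrs = 20, 4
X = rng.random((n_projects, n_attrs))                  # hypothetical project attributes
effort = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.random(n_projects)

def similarity_matrix(values):
    """Pairwise similarity as negative absolute difference (one attribute or effort)."""
    return -np.abs(values[:, None] - values[None, :])

S_effort = similarity_matrix(effort)

weights = []
for a in range(n_attrs):
    S_attr = similarity_matrix(X[:, a])
    taus = []
    for i in range(n_projects):
        mask = np.arange(n_projects) != i              # ignore self-similarity
        # Row-wise Kendall rank correlation between the two similarity matrices.
        tau, _ = kendalltau(S_attr[i, mask], S_effort[i, mask])
        taus.append(tau)
    weights.append(max(np.nanmean(taus), 0.0))          # clip negative correlations (assumed)

weights = np.array(weights) / np.sum(weights)
print("estimated attribute weights:", np.round(weights, 3))
```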
66

Combining data mining techniques with multicriteria decision aid in classification problems with composition probabilistic of preferences in trichotomic procedure (CPP-TRI)

Silva, Glauco Barbosa da 27 July 2017 (has links)
Problem: In modeling decision-maker preferences, Multicriteria Decision Aid (MCDA) is a field dedicated to the study of real-world decision-making problems that are usually too complex and not well-structured enough to be considered through the examination of a single point of view (criterion). This feature of MCDA implies that a comprehensive model of a decision situation cannot simply be "created"; instead, the model should be developed to meet the requirements of the Decision Maker (DM). In general, such a model can only be developed through an iterative and interactive process, until the preferences of the decision maker are consistently represented in the model. However, an interactive method is a procedure of alternating calculation and discussion stages, which presumes that the decision maker is willing to answer a large number of relatively difficult questions. For instance, one of the main difficulties faced when interacting with a decision maker to build a decision aid procedure is the elicitation of the preference model's various parameters. Methodology: In this thesis, as an alternative to the interactive process, a Preference Disaggregation Analysis method, one of the main streams of MCDA, is used to assess or infer preference models from given preferential structures and to address decision-aiding activities that elicit preferential information and construct decision models from decision examples. Combining the Composition of Probabilistic Preferences with data mining techniques, a three-step process is proposed: attribute selection, clustering and classification. The first two are data mining tasks and the last is a multicriteria task. Purpose: This thesis aims to present a new approach with a data mining layer (attribute selection and/or clustering) in the Composition of Probabilistic Preferences in Trichotomic procedure (CPP-TRI), which combines data mining techniques with a Multicriteria Decision Aid method in classification (sorting) problems. Findings: The decision maker's ability to comprehend the available data without powerful tools has been exceeded; therefore, important decisions are often made based not on the information-rich data stored in data repositories but on the intuition of the decision maker. Because they address similar problems, the connections between disaggregation methods and data mining (identifying patterns, extracting knowledge from data, eliciting preferential information and constructing decision models from decision examples) are explored to combine and improve the CPP-TRI method with attribute selection techniques.
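One possible form of the "data mining layer" described above — attribute selection before the multicriteria sorting step — is sketched below using mutual information as the selection criterion. CPP-TRI itself is not implemented here; the synthetic dataset and the cut-off of four criteria are assumptions for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

# Hypothetical decision matrix: alternatives evaluated on 10 criteria, with known classes.
X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           n_redundant=3, random_state=0)

# Data mining layer: rank criteria by mutual information with the class...
mi = mutual_info_classif(X, y, random_state=0)
keep = np.argsort(mi)[::-1][:4]        # keep the 4 most informative criteria (assumed cut-off)
print("selected criteria (column indices):", sorted(keep.tolist()))

# ...and pass only the selected criteria on to the multicriteria sorting step
# (CPP-TRI in the thesis; not reproduced here).
X_reduced = X[:, keep]
print("reduced decision matrix shape:", X_reduced.shape)
```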
67

An intelligent hybrid model for customer requirements interpretation and product design targets determination

Fung, Ying-Kit (Richard) January 1997 (has links)
The transition of emphasis in business competition from a technology-led age to a market-oriented era has led to a rapid shift from the conventional "economy of scale" towards the "economy of scope" in contemporary manufacturing. It is therefore essential to be able to respond to dynamic market and customer requirements systematically and consistently. The central theme of this research is to rationalise and improve the conventional means of analysing and interpreting linguistic and often imprecise customer requirements in order to identify the essential product features and determine their appropriate design targets dynamically and quantitatively through a series of well-proven methodologies and techniques. The major objectives of this research are: a) to put forward a hybrid approach for decoding and processing the Voice of Customer (VoC) in order to interpret specific customer requirements and market demands into definitive product design features, and b) to quantify the essential product design features with appropriate technical target values to facilitate the downstream planning and control activities in delivering the products or services. These objectives are accomplished through the following activities: investigating and understanding the fundamental nature and variability of customer attributes (requirements); surveying and evaluating contemporary approaches to handling customer attributes; proposing an original and generic hybrid model for categorising, prioritising and interpreting specific customer attributes into the relevant product attributes with tangible target values; developing a software system to facilitate the implementation of the proposed model; and demonstrating the functions of the hybrid model through a practical case study. The research programme begins with a thorough overview of the roles, the changing emphasis and the dynamic characteristics of contemporary customer demand, with a view to gaining a better understanding of the fundamental nature and variability of customer attributes. This is followed by a review of a number of well-proven tools and techniques, including QFD, the House of Quality (HoQ), the Affinity Diagram and AHP, regarding their applicability and effectiveness in organising, analysing and responding to dynamic customer requirements. Finally, an intelligent hybrid model amalgamating a variety of these techniques with a fuzzy inference sub-system is proposed to handle the diverse, ever-changing and often imprecise VoC. The proposed hybrid model is subsequently demonstrated in a practical case study.
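Of the techniques the abstract reviews, AHP is the most directly mechanisable: criteria weights are derived from a pairwise comparison matrix via its principal eigenvector, with a consistency check. The sketch below uses an invented comparison matrix for three customer attributes; it illustrates the AHP weighting step only, not the thesis's full hybrid model or its fuzzy inference sub-system.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three customer attributes
# (Saaty scale: A[i, j] = how much more important attribute i is than attribute j).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights = normalised principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check (Saaty): CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58                      # RI = 0.58 is Saaty's random index for n = 3
print("priority weights:", np.round(w, 3))
print(f"consistency ratio: {cr:.3f} (values below 0.1 are usually acceptable)")
```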
68

Improving Classification and Attribute Clustering: An Iterative Semi-supervised Approach

Seifi, Farid January 2015 (has links)
This thesis proposes a novel approach to attribute clustering. It exploits the strength of semi-supervised learning to improve the quality of attribute clustering, particularly when labeled data is limited. The significance of this work derives in part from the broad, and increasingly important, use of attribute clustering to address outstanding problems within the machine learning community. This form of clustering has also been shown to have strong practical applications, including heavyweight industrial ones. Although researchers have focused on supervised and unsupervised attribute clustering in recent years, semi-supervised attribute clustering has not received substantial attention. In this research, we propose an innovative two-step iterative semi-supervised attribute clustering framework. In each iteration, this framework uses the result of attribute clustering to improve a classifier, and then uses the classifier to augment the training data used by attribute clustering in the next iteration. The iterative framework outputs an improved classifier and attribute clustering at the same time, giving more accurate clusters of attributes that better fit the real relations between the attributes. In this study we also propose two new uses of attribute clustering to improve classification: solving the automatic view definition problem for multi-view learning, and improving missing attribute-value handling at induction and prediction time. The application of these two uses of attribute clustering within the proposed semi-supervised framework is evaluated using real-world datasets from different domains.
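The two-step loop described above can be sketched as follows: cluster the attributes, use the clustering to build a classifier, pseudo-label unlabelled instances with confident predictions, and repeat with the enlarged labelled pool. The concrete choices below (correlation-based hierarchical clustering of attributes, cluster-averaged features, a random forest, and a 0.9 confidence threshold) are illustrative assumptions, not the thesis's configuration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, n_features=12, n_informative=5, random_state=3)
labeled = rng.random(len(y)) < 0.1                  # only ~10% of instances start labelled
X_lab, y_lab = X[labeled], y[labeled]
X_unlab = X[~labeled]

for iteration in range(3):
    # Step 1: cluster the attributes of the current labelled pool,
    # using 1 - |correlation| as the distance between attributes.
    dist = np.clip(1.0 - np.abs(np.corrcoef(X_lab.T)), 0.0, None)
    clusters = AgglomerativeClustering(n_clusters=4, metric="precomputed",
                                       linkage="average").fit_predict(dist)

    # Step 2: one simple use of the clustering — average each attribute cluster
    # into a derived feature, then train a classifier on the labelled data.
    F_lab = np.column_stack([X_lab[:, clusters == c].mean(axis=1) for c in range(4)])
    F_unlab = np.column_stack([X_unlab[:, clusters == c].mean(axis=1) for c in range(4)])
    clf = RandomForestClassifier(random_state=0).fit(F_lab, y_lab)

    # Step 3: pseudo-label confident unlabelled instances to grow the training pool.
    proba = clf.predict_proba(F_unlab)
    confident = proba.max(axis=1) > 0.9
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
    X_unlab = X_unlab[~confident]
    print(f"iteration {iteration}: labelled pool size = {len(y_lab)}")
```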
69

Analogy-based software project effort estimation. Contributions to projects similarity measurement, attribute selection and attribute weighting algorithms for analogy-based effort estimation.

Azzeh, Mohammad Y.A. January 2010 (has links)
Software effort estimation by analogy is a viable alternative to other estimation techniques, and in many cases researchers have found that it outperforms other estimation methods in terms of accuracy and practitioners' acceptance. However, the overall performance of analogy-based estimation depends on two major factors: the similarity measure and attribute selection and weighting. Current similarity measures, such as nearest-neighbourhood techniques, have been criticised for inadequacies related to attribute relevancy, noise and uncertainty, in addition to the problem of handling categorical attributes. This research focuses on improving the efficiency and flexibility of analogy-based estimation to overcome these inadequacies. In particular, this thesis proposes two new approaches to model and handle uncertainty in the similarity measurement and, most importantly, to reflect the structure of the dataset in the similarity measurement using fuzzy modelling based on the Fuzzy C-means algorithm. The first proposed approach, the Fuzzy Grey Relational Analysis method, combines Fuzzy set theory and Grey Relational Analysis to improve local and global similarity measurement and to tolerate the imprecision associated with using different data types (continuous and categorical). The second proposed approach uses Fuzzy numbers and their concepts to develop a practical yet efficient approach to support analogy-based systems, especially at the early phase of software development. Specifically, we propose a new similarity measure and adaptation technique based on Fuzzy numbers. We also propose a new attribute subset selection algorithm and attribute weighting technique based on the hypothesis of analogy-based estimation that projects that are similar in terms of attribute values are also similar in terms of effort values, using row-wise Kendall rank correlation between the similarity matrix based on project effort values and the similarity matrix based on project attribute values. A literature review of related software engineering studies revealed that existing attribute selection techniques (such as brute-force and heuristic algorithms) are restricted to the choice of performance indicators, such as the Mean Magnitude of Relative Error and the Prediction Performance Indicator, and are computationally far more intensive. The proposed algorithms provide a sound statistical basis and justification for their procedures. The performance of the proposed approaches has been evaluated using real industrial datasets. Results and conclusions from a series of comparative studies with conventional estimation by analogy using the available datasets are presented. Studies were also carried out to investigate statistically the significant differences between predictions generated by our approaches and those generated by the most popular techniques, such as conventional analogy estimation, neural networks and stepwise regression. The results and conclusions indicate that the two proposed approaches have the potential to deliver comparable, if not better, accuracy than the compared techniques. The results also show that Grey Relational Analysis tolerates the uncertainty associated with using different data types. As well as the original contributions within the thesis, a number of directions for further research are presented. Most chapters in this thesis have been disseminated in international journals and highly refereed conference proceedings. / Applied Science University, Jordan.
70

The Effects of Workload Transitions in a Multitasking Environment

Bowers, Margaret Anna 30 August 2013 (has links)
No description available.
