About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1. Economic modelling using computational intelligence techniques

Khoza, Msizi Smiso, 09 December 2013
M.Ing. (Electrical & Electronic Engineering Science) / Economic modelling tools have gained popularity in recent years owing to the growing need for knowledge to support policy makers and economists. A number of computational intelligence approaches have been proposed for economic modelling. Most of these approaches focus on predictive accuracy, and little research has investigated the interpretability of the decisions such systems produce. This work proposes the use of computational intelligence techniques, namely rough set theory (RST) and the multi-layer perceptron (MLP), to model the South African economy. RST is a rule-based technique suited to analysing vague, uncertain and imprecise data: it extracts rules from the data to model the system, and these rules are used both for prediction and for interpreting the decision process. The fewer the rules, the easier the model is to interpret.

The performance of RST depends on the discretization technique employed. Four techniques were used to develop RST models: equal-frequency binning (EFB), Boolean reasoning (BR), entropy partitioning (EP) and the naïve algorithm (NA). The model trained on EFB-discretized data performed better than the models trained using BR and EP. When RST was used to model South Africa's financial sector, accuracies of 86.8%, 57.7%, 64.5% and 43% were achieved for EFB, BR, EP and NA respectively.

This work also proposes an ensemble of rough set theory and the multi-layer perceptron to model the South African economy, in which the direction of the gross domestic product is predicted, and it further proposes the use of an auto-associative neural network to impute missing economic data. The auto-associative neural network imputed the ten variables, or attributes, used in the prediction model: construction contractors rating lack of skilled labour as a constraint, tertiary economic sector contribution to GDP, income velocity of circulation of money, total manufacturing production volume, manufacturing firms rating lack of skilled labour as a constraint, total asset value of the banking industry, nominal unit labour cost, total mass of Platinum Group Metals (PGMs) mined, total revenue from the sale of PGMs, and gross domestic expenditure (GDE). The level of imputation accuracy varied with the attribute, ranging from 85.9% to 98.7%.
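Of the discretization techniques the abstract names, equal-frequency binning is the simplest to make concrete. The sketch below is an illustration of the general EFB idea, not the thesis's actual implementation: a continuous indicator is cut at quantile boundaries so that each bin holds roughly the same number of observations.

```python
import numpy as np

def equal_frequency_bins(values, n_bins):
    """Discretize a continuous series into n_bins bins holding
    roughly equal numbers of observations (quantile cut points)."""
    values = np.asarray(values, dtype=float)
    # Interior quantiles at 1/n, 2/n, ... give the bin boundaries.
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    edges = np.quantile(values, quantiles)
    # digitize assigns each value the index of the bin it falls into.
    return np.digitize(values, edges)

# Example: 12 observations fall into 3 bins of 4 each.
x = [1, 2, 3, 4, 10, 11, 12, 13, 100, 110, 120, 130]
labels = equal_frequency_bins(x, 3)  # -> bins 0, 1, 2 with 4 values apiece
```

Equal-width binning would instead put the eight small values into one bin here; cutting at quantiles is what keeps the bin populations balanced even for skewed economic series.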
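The auto-associative imputation idea can also be sketched. The abstract does not specify the network's architecture or training procedure, so everything below is an illustrative assumption: synthetic correlated data standing in for the economic indicators, a single linear hidden layer, and plain gradient descent. The mechanism it demonstrates is the one described: train a network to reconstruct its own input from complete records, then fill a record's missing entries by iterating the network's reconstruction while holding the observed entries fixed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the economic indicators: strongly correlated columns.
n, d, h = 200, 4, 3
z = rng.normal(size=(n, 1))
X = z @ rng.normal(size=(1, d)) + 0.1 * rng.normal(size=(n, d))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each column

# Auto-associative network, input -> hidden -> input (linear for brevity),
# trained by gradient descent to reconstruct its own input.
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, d))
lr = 0.01
for _ in range(2000):
    H = X @ W1                    # hidden activations
    R = H @ W2                    # reconstruction of the input
    G = 2.0 * (R - X) / n         # gradient of mean squared error w.r.t. R
    W1 -= lr * (X.T @ (G @ W2.T))
    W2 -= lr * (H.T @ G)

def impute(row, W1, W2, iters=30):
    """Fill NaN entries by iterating the network's reconstruction,
    keeping the observed entries fixed on every round."""
    row = row.copy()
    missing = np.isnan(row)
    row[missing] = 0.0            # start at the column mean (data is standardized)
    for _ in range(iters):
        recon = row @ W1 @ W2
        row[missing] = recon[missing]
    return row

# Example: hide one entry of a record and let the network fill it in.
true_row = X[0].copy()
observed = true_row.copy()
observed[2] = np.nan
filled = impute(observed, W1, W2)
```

Because the observed entries are clamped on every iteration, the network can only adjust the missing coordinates toward values consistent with the correlations it learned, which is what lets correlated indicators stand in for one another.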
2. Machine-Learning Fairness in Data Markets: Challenges and Opportunities

Maio, Roland, January 2025
Machine learning promises to unlock troves of economic value. As advanced machine-learning techniques proliferate, they raise acute fairness concerns. These concerns must be addressed in order for the economic surpluses and externalities generated by machine learning to benefit society equitably. In this thesis, we focus on the economic context of data markets and theoretically study the impacts of intervening to achieve machine-learning fairness. We find that to effectively and efficiently intervene requires taking the data market into account in the design and application of the fairness intervention, i.e., how the intervention impacts the data market, how the data market impacts the intervention, and how their impacts interact. We study this interaction in two data-market settings to understand what information is necessary. We find that without taking into account the incentive structure and economics of a data market, fairness interventions can induce greater losses to efficiency than are necessary to achieve fairness—even potentially inducing market collapse. Yet, we also find that these losses can be recovered or even amortized away by suitably designing the intervention with appropriate information or under favorable market conditions. Overall, this thesis elucidates how data markets present both novel challenges and opportunities for machine-learning fairness. It demonstrates that efficiently intervening for machine-learning fairness can be more complicated in data markets—even infeasible! Excitingly, however, it also demonstrates that under favorable market conditions, fairness can be achieved at lower relative cost to efficiency than has previously been understood to be possible. We hope that these initial theoretical findings ultimately contribute to the development of efficient and practical fairness interventions suitable for real-world application.
