  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Decision support for the production and distribution of electricity under load shedding

Rakotonirainy, Rosephine Georgina January 2016 (has links)
Every day national power system networks deliver thousands of MW of electric power from generating units to consumers. This process requires various operations and planning activities to ensure the security of the entire system. Part of daily or weekly system operation is the so-called Unit Commitment problem, which consists of scheduling the available resources in order to meet the system demand. But continuous growth in electricity demand can put pressure on the ability of the generation system to provide sufficient supply. In such a case, load shedding (a controlled, enforced reduction in electricity supply) is necessary to prevent the risk of system collapse. In South Africa at the present time, a systematic lack of supply has meant that regular load shedding has taken place, with substantial economic and social costs. In this research project we study two optimization problems related to load shedding. The first is how load shedding can be integrated into the unit commitment problem. The second is how load shedding can be fairly and efficiently allocated across areas. We develop deterministic and stochastic linear and goal programming models for these purposes. Several case studies are conducted to explore the possible solutions that the proposed models can offer.
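The interaction between dispatch and load shedding described in this abstract can be illustrated with a toy sketch. This is not the thesis's linear or goal programming model; it is a minimal merit-order dispatch in which unmet demand becomes load shedding that is penalised at an assumed cost (all unit names, capacities and costs below are hypothetical).

```python
def dispatch(units, demand_mw, shed_cost):
    """Toy merit-order dispatch: serve demand from the cheapest units first;
    any demand that cannot be met is load shed at a penalty cost per MW."""
    plan, remaining = {}, demand_mw
    for name, capacity, cost in sorted(units, key=lambda u: u[2]):
        served = min(capacity, remaining)   # cheapest units run first
        plan[name] = served
        remaining -= served
    shed = remaining                        # MW of demand that cannot be met
    total_cost = sum(plan[n] * c for n, _, c in units) + shed * shed_cost
    return plan, shed, total_cost

# hypothetical fleet: (name, capacity in MW, cost per MWh)
units = [("coal", 100, 20.0), ("gas", 50, 45.0), ("diesel", 10, 90.0)]
plan, shed, cost = dispatch(units, demand_mw=170, shed_cost=1000.0)
```

A real unit commitment formulation adds on/off decisions, ramping and reserve constraints; the point here is only that load shedding enters the objective as an expensive slack variable.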
42

Multivariate analysis of the immune response upon recent acquisition of Mycobacterium tuberculosis infection

Lloyd, Tessa 03 March 2022 (has links)
Tuberculosis (TB), caused by the pathogen Mycobacterium tuberculosis (M.tb), is the leading cause of mortality due to an infectious agent worldwide. Based on data from an adolescent cohort study carried out from May 2005 to February 2009, we studied and compared the immune responses of individuals from four cohorts defined based on their longitudinal QFT results: recent QFT converters, QFT reverters, and persistent QFT positives and negatives. Analysis was based on the integration of different arms of the immune response, including adaptive and “innaptive” responses, measured on the cohorts. COMPASS was used to filter the adaptive dataset and identify biologically meaningful subsets, while for the innaptive dataset we developed a novel filtering method. Once the datasets were integrated, they were standardized using variance stabilizing (vast) standardization, and missing values were imputed using a multiple factor analysis (MFA)-based approach. We first set out to define a set of immune features that changed during recent M.tb infection. This was achieved by applying the kmlShape clustering algorithm to the recent QFT converters. We identified 55 cell subsets that either increased or decreased post-infection. When we assessed how the associations between these subsets changed pre- and post-infection using correlation networks, we found no notable differences. By comparing the recent QFT converters and the persistent QFT positives, a blood-based biomarker to distinguish between recent and established infection, namely ESAT6/CFP10-specific expression of HLA-DR on total Th1 cells, was identified using elastic net (EN) models (average AUROC = 0.87). The discriminatory ability of this variable was confirmed using two tree-based models.
Lastly, to assess whether the QFT reverters are a biologically distinct group of individuals, we compared them to the persistent QFT positive and QFT negative individuals using a Projection to Latent Structures Discriminant Analysis (PLS-DA) model. The results indicated that reverters appeared more similar to QFT negative than to QFT positive individuals. Hence, QFT reversion may be associated with clearance of M.tb infection. Immune signatures associated with recent infection could be used to refine end-points of clinical trials testing vaccine efficacy against acquisition of M.tb infection, while immune signatures associated with QFT reversion could be tested as correlates of protection from M.tb infection.
43

Fault diagnosis in multivariate statistical process monitoring

Mostert, Andre George 04 March 2022 (has links)
The application of multivariate statistical process monitoring (MSPM) methods has gained considerable momentum over the last couple of decades, especially in the processing industry, for achieving higher throughput at sustainable rates, reducing safety-related events and minimizing potential environmental impacts. Multivariate process deviations occur when the relationships amongst many process characteristics differ from what is expected. The fault detection ability of methods such as principal component analysis (PCA) based process monitoring has been reported in the literature and demonstrated in selected practical applications. However, the methodologies employed to diagnose the cause of identified multivariate process faults have not gained the anticipated traction in practice. One explanation for this might be that current diagnostic approaches attempt to rank process variables according to their individual contribution to process faults. The failure of these approaches to correctly identify the variables responsible for a process deviation is well researched and communicated in the literature. Specifically, these approaches suffer from a phenomenon known as fault smearing. In this research it is argued, using several illustrations, that the objective of assigning individual importance rankings to process variables is not appropriate in a multivariate setting. A new methodology is introduced for performing fault diagnosis in multivariate process monitoring. More specifically, a multivariate diagnostic method is proposed that ranks variable pairs as opposed to individual variables. For PCA-based MSPM, a novel fault diagnosis method is developed that decomposes the fault identification statistics into a sum of parts, with each part representing the contribution of a specific variable pair. An approach is also developed to quantify the statistical significance of each pairwise contribution.
In addition, it is illustrated how the pairwise contributions can be analysed further to obtain an individual importance ranking of the process variables. Two methodologies are developed for calculating the individual ranking following the pairwise contribution analysis. However, it is advised that the individual rankings be interpreted together with the pairwise contributions. The application of this new approach to PCA-based MSPM and fault diagnosis is illustrated using a simulated data set.
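The core idea of decomposing a fault statistic into pairwise parts can be sketched as follows. This is a simplified illustration, not the thesis's actual method: any quadratic monitoring statistic of the form zᵀMz splits exactly into terms indexed by variable pairs, so the sum of all pairwise parts recovers the original statistic.

```python
import numpy as np

def pairwise_contributions(z, M):
    """Decompose a quadratic monitoring statistic z' M z into pairwise parts.
    C[i, j] + C[j, i] is the joint contribution of variable pair (i, j),
    C[i, i] the pure contribution of variable i; all entries sum exactly
    to the original statistic."""
    return np.outer(z, z) * M

rng = np.random.default_rng(0)
S = rng.normal(size=(4, 4))
M = S @ S.T                      # symmetric positive semi-definite weight matrix
z = rng.normal(size=4)           # deviation vector for a flagged sample
C = pairwise_contributions(z, M)
t2 = z @ M @ z                   # the original monitoring statistic
```

The thesis additionally quantifies the statistical significance of each pairwise part; the identity above is only the bookkeeping step that makes such an analysis possible.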
44

Performance analysis of text classification algorithms for PubMed articles

Savvi, Suzana 14 March 2022 (has links)
The Medical Subject Headings (MeSH) thesaurus is a controlled vocabulary developed by the US National Library of Medicine (NLM) for indexing articles in the PubMed Central (PMC) archive. The annotation process is a complex and time-consuming task relying on subjective manual assignment of MeSH concepts. Automating such tasks with machine learning may provide a more efficient and less ambiguous way of organizing biomedical literature. This research provides a case study comparing the performance of several machine learning algorithms (Topic Modelling, Random Forest, Logistic Regression, Support Vector Classifiers, Multinomial Naive Bayes, Convolutional Neural Network and Long Short-Term Memory (LSTM)) in reproducing manually assigned MeSH annotations. Records for this study were retrieved from PubMed using the E-utilities API to the Entrez system of databases at the NCBI (National Centre for Biotechnology Information). The MeSH vocabulary is organised in a hierarchical structure, and article abstracts labelled with a single MeSH term from the top second two layers were selected for training the machine learning models. Various strategies for multiclass text classification were considered. One was a Chi-square test for feature selection, which identified words relevant to each MeSH label. The second approach used Named Entity Recognition (NER) to extract entities from the unstructured text, and another approach relied on word embeddings able to capture latent knowledge from the literature. At the start of the study, text was vectorised using the Term Frequency-Inverse Document Frequency (Tf-idf) technique and topic modelling was performed with the objective of ascertaining the correlation between assigned topics (an unsupervised learning task) and MeSH terms in PubMed. Findings revealed that the degree of coupling was low, although significant. Of all the classifier models trained, logistic regression on Tf-idf vectorised entities achieved the highest accuracy.
Performance varied across the different MeSH categories. In conclusion, automated curation of articles by abstract may be possible for those target classes that can be classified reliably and reproducibly.
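The Tf-idf weighting at the centre of this pipeline can be sketched in a few lines. This is a minimal illustration of the weighting scheme only, not the study's actual vectoriser, and the example documents are hypothetical: a term's weight is its within-document frequency scaled by log(N / document frequency), so a term appearing in every document gets weight zero.

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal Tf-idf: term frequency times log(N / document frequency)."""
    n = len(docs)
    df = Counter()                      # in how many documents each term appears
    for doc in docs:
        df.update(set(doc))
    return [
        {t: count / len(doc) * math.log(n / df[t])
         for t, count in Counter(doc).items()}
        for doc in docs
    ]

# hypothetical tokenised abstracts; "tb" occurs in every document
docs = [["tb", "infection"], ["tb", "vaccine"], ["tb", "gene"]]
weights = tfidf(docs)
```

Ubiquitous terms such as "tb" above are zeroed out, which is exactly why Tf-idf vectors make discriminative features for the classifiers compared in the study.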
45

Systematic asset allocation using flexible views for South African markets

Sebastian, Ponni 15 March 2022 (has links)
We implement a systematic asset allocation model using the Historical Simulation with Flexible Probabilities (HS-FP) framework developed by Meucci [142, 144, 145]. The HS-FP framework is a flexible non-parametric estimation approach that considers future asset class behaviour to be conditional on time and market environments, and derives a forward-looking distribution that is consistent with this view while remaining as close as possible to the prior distribution. The framework derives the forward-looking distribution by applying unequal time- and state-conditioned probabilities to historical observations of asset class returns. This is achieved using relative entropy to find estimates with the least distortion to the prior distribution. Here, we use the HS-FP framework on South African financial market data for asset allocation purposes, estimating expected returns, correlations and volatilities that are better represented through the measured market cycle. We demonstrate a range of state variables that can be useful for understanding market environments. Concretely, we compare the out-of-sample performance of a specific configuration of the HS-FP model relative to classic Mean-Variance Optimization (MVO) and Equally Weighted (EW) benchmark models. The framework displays a low probability of backtest overfitting, and the out-of-sample net returns and Sharpe ratio point estimates of the HS-FP model outperform those of the benchmark models. However, the results are inconsistent when training windows are varied, the Sharpe ratio is seen to be inflated, and the method does not demonstrate statistically significant outperformance on a gross or net basis.
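The "flexible probabilities" idea can be sketched with the simplest time-conditioned example: exponential-decay weights that favour recent observations, with the effective number of scenarios measured via the exponential of entropy. This is only the time-conditioning building block under assumed parameters, not the full HS-FP framework with state conditioning and entropy pooling.

```python
import math

def exp_decay_probabilities(n_obs, half_life):
    """Time-conditioned flexible probabilities: an observation half_life
    periods older than the most recent one receives half its weight;
    the weights are normalised to sum to one."""
    raw = [0.5 ** ((n_obs - 1 - t) / half_life) for t in range(n_obs)]
    total = sum(raw)
    return [w / total for w in raw]

def effective_sample_size(p):
    """Exponential of entropy: roughly how many historical scenarios
    actually drive the resulting estimates."""
    return math.exp(-sum(q * math.log(q) for q in p if q > 0))

p = exp_decay_probabilities(n_obs=500, half_life=100)   # assumed sizes
ens = effective_sample_size(p)
```

Expected returns and covariances are then computed as probability-weighted moments under p instead of equal 1/n weights, which is what lets the estimates track the measured market cycle.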
46

Applications of Machine Learning in Apple Crop Yield Prediction

van den Heever, Deirdre 22 March 2022 (has links)
This study proposes the application of machine learning techniques to predict yield in the apple industry. Crop yield prediction is important because it impacts resource and capacity planning. It is, however, challenging because yield is affected by multiple interrelated factors such as climate conditions and orchard management practices. Machine learning methods have the ability to model complex relationships between input and output features. This study considers the following machine learning methods for apple yield prediction: multiple linear regression, artificial neural networks, random forests and gradient boosting. The models are trained, optimised, and evaluated using both random and chronological data splits, and the out-of-sample results are compared to find the best-suited model. The methodology is based on a literature analysis that aims to provide a holistic view of the field of study by including research in the following domains: smart farming, machine learning, apple crop management and crop yield prediction. The models are built using apple production data and environmental factors, with the modelled yield measured in metric tonnes per hectare. The results show that the random forest model is the best-performing model overall, with a Root Mean Square Error (RMSE) of 21.52 and 14.14 using the chronological and random data splits respectively. The final machine learning model outperforms simple estimator models, showing that a data-driven approach using machine learning methods has the potential to benefit apple growers.
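The evaluation set-up reported here, comparing a trained model's RMSE against a simple estimator, can be sketched as follows. The yield figures below are hypothetical and the "simple estimator" is assumed to be a historical-mean predictor; the study's actual data and baselines may differ.

```python
import math

def rmse(actual, predicted):
    """Root Mean Square Error, the metric reported in the study."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# hypothetical yields in tonnes per hectare
history = [40.0, 55.0, 48.0, 62.0]   # training years
actual = [50.0, 58.0]                # held-out years (chronological split)

# simple estimator: predict the historical mean yield for every held-out year
baseline = [sum(history) / len(history)] * len(actual)
baseline_rmse = rmse(actual, baseline)
```

A trained model "outperforms simple estimator models" precisely when its held-out RMSE falls below this kind of baseline figure.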
47

The IUCN red list for ecosystems: how does it compare to South Africa's approach to listing threatened ecosystems?

Monyeki, Maphale Stella 03 March 2022 (has links)
The publication of the International Union for Conservation of Nature (IUCN) Red List of Ecosystems (RLE) standards is an important development that has received broad acceptance globally. More than 100 countries across the globe, including South Africa and Myanmar, have adopted the IUCN RLE standards as their national framework for assessing the risk of ecosystem collapse. The strongest motivations for the alignment include: (i) eliminating confusion and reducing the administrative burden of maintaining multiple lists of threatened ecosystems, (ii) increasing the legitimacy of ecosystem threat status assessments by basing them on a body of sound scientific literature, (iii) making assessments comparable across different environments and countries, and (iv) recording threatened national ecosystems under the IUCN RLE registry. Furthermore, the IUCN Red List makes it easier for countries to secure funding from international donors to achieve national biodiversity conservation objectives, and to address knowledge and data gaps through focused research. The IUCN RLE standards only became available after many countries, including South Africa and Australia, had each independently developed tailored national indicators or standards for assessing threats to ecosystems. Ecosystem Threat Status (ETS) standards were developed to aid biodiversity monitoring efforts, and many have progressed into legislated national lists of threatened ecosystems. In South Africa, the gazetted list of threatened ecosystems is ratified to inform policy development and land-use planning tools that mainstream biodiversity considerations into economic development activities. Considering the strong links between the gazetted list of threatened ecosystems and many of the policy and spatial planning tools, a change and/or update to the IUCN RLE standards may disrupt conservation and land-use plans.
In addition, South Africa has limited data on ecosystem integrity with which to apply the full range of the IUCN RLE functional criteria, which may lead to the risk of ecosystem collapse being underestimated. However, the country has comprehensive data on threatened plant species, which in many cases contain detailed information on drivers of environmental degradation and biotic disruptions. In addition, extensive efforts have been made to link threatened species to the ecosystem types in which they occur. Such efforts enable the country to look at degradation through a species lens to better understand the degree of underestimation of ecosystem risk. Nonetheless, there is a need to interrogate and holistically understand the implications that may emanate from this shift, hence the importance of this study. This thesis focused on assessing the origins and history of the IUCN and South African approaches to assessing threats to ecosystems. In Chapter 1, I reviewed the key concepts, including the scientific basis and criteria, to understand the purpose and philosophy of the South Africa (SA) ETS and IUCN RLE frameworks. In Chapter 2, I compared and contrasted the SA ETS and IUCN RLE assessment outcomes for ecosystems susceptible to only spatial threats. Finally, in Chapter 3, I tested whether the IUCN RLE is a good proxy for the distribution of threatened species in South Africa. The results revealed that the IUCN RLE and SA ETS standards have overarching similarities (e.g. spatial and functional criteria), as they both share a common ancestor (the IUCN Red List of Threatened Species). Equally, there are key differences (e.g. decision thresholds) that explain the misalignments in ecosystem threat status between the two systems.
Meanwhile, the quantitative results suggested that the proportions of matching assessment outcomes are high when the risk categories (Critically Endangered: CR and Endangered: EN versus Vulnerable: VU and Least Concern: LC) are split in accordance with their policy uptake (i.e. National Environmental Management Act (NEMA) EIA regulations), but relatively low per individual risk category. Furthermore, the results suggest that not all ecosystem types undergoing spatial declines entirely reflect the status of the threatened plant species they contain. Many of these threatened plant species overlap with ecosystems at immediate risk of collapse (CR and EN). Such species will indirectly benefit from broad-scale conservation interventions that are informed by the list of threatened ecosystems. However, the majority of plant species threatened by habitat loss and/or land degradation occur within the least threatened ecosystems. These species will not benefit from conservation responses informed by the gazetted national list of threatened ecosystems, because spatial declines within these ecosystems are considered either too minimal or too stable to trigger a conservation response. Encouragingly, there are existing legal conservation tools, such as stewardship programmes, Key Biodiversity Areas, and Critical Biodiversity Areas, that allow threatened and unprotected ecological features, including species, to be strategically targeted for conservation responses. However, there is a need for South Africa to intensify efforts to ensure that these legal tools are implemented correctly and successfully to maximise conservation impacts and arrest biodiversity loss.
48

Portfolio optimisation with quantitative and qualitative views

Remsing, Razvan Alexandru January 2005 (has links)
Includes bibliographical references. / Portfolio construction with quantitative and qualitative forecasts is described through the exposition of two asset allocation models. The two models are the Black-Litterman Asset Allocation model and the Qualitative Forecasts Model developed by Ulf Herold. The models are developed theoretically and made intuitively accessible with real market data examples. A methodology is developed using the two models to transport alpha across benchmarks.
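The first of the two models rests on the standard Black-Litterman posterior formula, which can be sketched as follows. This is the textbook form of the model, not the thesis's specific exposition or Herold's qualitative extension, and all numbers below are hypothetical.

```python
import numpy as np

def black_litterman_posterior(pi, Sigma, P, q, Omega, tau=0.05):
    """Black-Litterman posterior expected returns: blend equilibrium
    returns pi (covariance Sigma, prior scale tau) with investor views q,
    expressed through pick matrix P with view uncertainty Omega."""
    prior_prec = np.linalg.inv(tau * Sigma)
    view_prec = P.T @ np.linalg.inv(Omega) @ P
    rhs = prior_prec @ pi + P.T @ np.linalg.inv(Omega) @ q
    return np.linalg.solve(prior_prec + view_prec, rhs)

# hypothetical two-asset market
pi = np.array([0.05, 0.07])                  # equilibrium expected returns
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])             # return covariance
P = np.array([[1.0, 0.0]])                   # one absolute view on asset 1
q = np.array([0.10])                         # the view: asset 1 returns 10%
Omega = np.array([[0.0001]])                 # view held with high confidence
mu = black_litterman_posterior(pi, Sigma, P, q, Omega)
```

With a confident view, the posterior for asset 1 is pulled toward 10%, and the positively correlated second asset is lifted too, which is the "intuitively accessible" behaviour the abstract refers to.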
49

Breeding production of Cape gannets Morus capensis at Malgas Island, 2002-03

Staverees, Linda January 2006 (has links)
Includes bibliographical references.
50

Agent-based model of the market penetration of a new product

Magadla, Thandulwazi January 2014 (has links)
Includes bibliographical references. / This dissertation presents an agent-based model that is used to investigate the market penetration of a new product within a competitive market. The market consists of consumers who belong to a social network that serves as a substrate over which consumers exchange positive and negative word-of-mouth communication about the products that they use. Market dynamics are influenced by factors such as product quality; the level of satisfaction that consumers derive from using the products in the market; switching constraints that make it difficult for consumers to switch between products; the word-of-mouth that consumers exchange; and the structure of the social network to which consumers belong. Various scenarios are simulated in order to investigate the effect of these factors on the market penetration of a new product. The simulation results suggest that: ■ A new product reaches fewer new consumers and acquires a lower market share when consumers switch less frequently between products. ■ A new product reaches more new consumers and acquires a higher market share when it is of better quality than the existing products, because more positive word-of-mouth is disseminated about it. ■ When there are products with switching constraints in the market, launching a new product with switching constraints results in a higher market share compared to launching it without switching constraints. However, it reaches fewer new consumers, because switching constraints result in negative word-of-mouth being disseminated about it, which deters other consumers from using it. Some factors, such as the fussiness of consumers, the shape and size of consumers' social networks, and the type of messages that consumers transmit and with whom and how often they communicate about a product, may be beyond the control of marketing managers. However, these factors can potentially be influenced through a marketing strategy that encourages consumers to exchange positive word-of-mouth both with consumers who are familiar with a product and with those who are not.
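The word-of-mouth substrate at the heart of this model can be sketched as a deterministic diffusion over a friendship network. This is only a skeleton of the positive word-of-mouth channel, not the dissertation's full agent-based model (which also includes product quality, satisfaction, negative word-of-mouth and switching constraints); the network below is hypothetical.

```python
def diffuse(network, seeds, steps):
    """Deterministic word-of-mouth diffusion: in each step, every adopter
    tells all of their neighbours about the product, who adopt in turn."""
    adopted = set(seeds)
    for _ in range(steps):
        adopted |= {friend
                    for consumer in adopted
                    for friend in network.get(consumer, ())}
    return adopted

# small hypothetical friendship network (adjacency lists)
network = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
reached = diffuse(network, seeds={"a"}, steps=2)
```

Changing the network's shape and size, or making adoption conditional on satisfaction and switching costs, turns this skeleton into the kind of scenario comparison the abstract describes.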
