Search results for subject:"bayesian classifier"
1 |
Performance analysis of active sonar classifiers
Haddad, Nicholas K. January 1990 (has links)
No description available.
|
2 |
Using Mining Techniques to Identify External Web Environment of Companies
Chen, Hsaio 17 January 2006 (has links)
With the rapid growth of the World Wide Web, many companies disseminate relevant information, such as product and service introductions, through their commercial Web sites. A company's Web site is thus a kind of business asset. Customers, suppliers, partners, associations, and other outsiders who access these assets through the Web constitute the company's external Web environment. From a strategic planning point of view, identifying a company's external environment helps to create business value.
Therefore, this research focuses on assisting a company in identifying its external Web environment using mining techniques. Several prior studies have pointed out that the hyperlink structure among Web pages can contribute to classifying the relationships within a company's external environment. We therefore propose CNB-HI, a classifier that combines Web content mining with hyperlink structure, for this purpose.
We apply the proposed approach to a real case to help identify the roles of customers, partners, media, and associations. Two experiments are conducted to examine its performance. In the first experiment, we compare CNB with other forms of Naïve Bayesian classifiers and find that CNB achieves better performance; however, even CNB's performance is unsatisfactory when based exclusively on content classification. The second experiment examines the benefit of incorporating hyperlink information (CNB-HI). The results show that CNB-HI's performance improves markedly, which supports the feasibility of applying the proposed approach to real applications.
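As a rough illustration of the kind of combination this abstract describes (the thesis's actual CNB-HI formulation is not given here), the sketch below trains a tiny multinomial Naïve Bayes classifier on page text and then nudges its decision with a simple vote over the labels of hyperlinked neighbor pages. All function names, data, and the vote weighting are hypothetical stand-ins, not the thesis's method.

```python
# Illustrative sketch (not the thesis's CNB-HI implementation): multinomial
# Naive Bayes over page tokens, adjusted by a vote over hyperlinked
# neighbors' labels. All data below is made up.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (tokens, label). Returns class counts, word counts, vocab."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def nb_log_posterior(tokens, class_counts, word_counts, vocab):
    """Log prior plus Laplace-smoothed log likelihoods per class."""
    total_docs = sum(class_counts.values())
    scores = {}
    for label, n in class_counts.items():
        s = math.log(n / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for t in tokens:
            s += math.log((word_counts[label][t] + 1) / denom)
        scores[label] = s
    return scores

def classify_with_links(tokens, neighbor_labels, model, link_weight=1.0):
    """Combine content scores with a vote over hyperlinked neighbors
    (a stand-in for the thesis's hyperlink-information step)."""
    class_counts, word_counts, vocab = model
    scores = nb_log_posterior(tokens, class_counts, word_counts, vocab)
    votes = Counter(neighbor_labels)
    return max(scores, key=lambda c: scores[c] + link_weight * votes[c])

# Toy training pages for "customer" vs "partner" roles.
docs = [
    (["order", "buy", "support"], "customer"),
    (["buy", "shipping"], "customer"),
    (["alliance", "joint", "venture"], "partner"),
    (["joint", "press", "alliance"], "partner"),
]
model = train_nb(docs)
print(classify_with_links(["buy", "order"], ["customer"], model))  # customer
```

The hyperlink term here is a crude additive vote; the general point it mirrors is that link evidence can tip pages whose text alone is ambiguous.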
|
3 |
Identifying Induced Bias in Machine Learning
Chowdhury Mohammad Rakin Haider (18414885) 22 April 2024 (has links)
<p dir="ltr">The last decade has witnessed an unprecedented rise in the application of machine learning in high-stakes automated decision-making systems such as hiring, policing, bail sentencing, and medical screening. The long-lasting impact of these intelligent systems on human life has drawn attention to their fairness implications. A majority of prior studies targeted historically unfair decision labels in the training data as the primary source of bias and strove either to remove them from the dataset (de-biasing) or to avoid learning discriminatory patterns from them during training. In this thesis, we show that label bias is not a necessary condition for unfair outcomes from a machine learning model. We develop theoretical and empirical evidence showing that biased model outcomes can be introduced by a range of data properties and components of the machine learning development pipeline.</p><p dir="ltr">We first prove, by construction, that machine learning models are expected to introduce bias even when the training data does not include label bias. We demonstrate that models trained to optimize joint accuracy introduce bias when the underlying training data is free from label bias but includes other forms of disparity. We identify two such data properties: group-wise disparity in feature predictivity and group-wise disparity in the rates of missing values. The experimental results suggest that a wide range of classifiers trained on synthetic or real-world datasets are prone to introducing bias under feature disparity and missing-value disparity, independently of or in conjunction with label bias.
We further analyze the trade-off between fairness and established techniques for improving the generalization of machine learning models, such as adversarial training and increased model complexity. We report that adversarial training sacrifices fairness to achieve robustness against noisy (typically adversarial) samples. We propose a fair re-weighted adversarial training method that improves the fairness of adversarially trained models while sacrificing minimal adversarial robustness. Finally, we observe that although increasing model complexity typically improves generalization accuracy, it does not yield a corresponding improvement in the disparities in prediction rates.</p><p dir="ltr">This thesis unveils a vital limitation of machine learning that has yet to receive significant attention in the FairML literature. Conventional FairML work reduces the fairness task to simply de-biasing or avoiding learning discriminatory patterns, but the reality is far from that: everything from deciding which features to collect to algorithmic choices such as optimizing for robustness can act as a source of bias in model predictions. This calls for detailed investigation of the fairness implications of machine learning development practices. In addition, identifying sources of bias can facilitate pre-deployment fairness audits of machine learning-driven automated decision-making systems.</p>
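To make the re-weighting idea concrete, here is a minimal sketch of the classic "reweighing" scheme, in which each (group, label) cell is weighted so that group membership and label appear statistically independent. This is a generic illustration, not the thesis's fair re-weighted adversarial training method, and all data below is synthetic.

```python
# Illustrative sketch (not the thesis's exact method): compute per-cell
# sample weights w(g, y) = P(g) * P(y) / P(g, y), i.e. the ratio of the
# expected to the observed (group, label) co-occurrence.
from collections import Counter

def reweighing_weights(groups, labels):
    n = len(labels)
    p_group = Counter(groups)
    p_label = Counter(labels)
    p_joint = Counter(zip(groups, labels))
    return {
        (g, y): (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for (g, y) in p_joint
    }

# Synthetic example: group "a" positives are over-represented relative to
# independence, so they get a weight below 1; group "b" positives are
# under-represented, so they get a weight above 1.
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0, 0, 0]
w = reweighing_weights(groups, labels)
print(w[("a", 1)], w[("b", 1)])  # 0.5625 1.875
```

Training with these weights makes the effective distribution one in which the protected attribute carries no information about the label, which is one way to counter label-correlated disparities.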
|
4 |
Machine Learning for Exploring State Space Structure in Genetic Regulatory Networks
Thomas, Rodney H. 01 January 2018 (has links)
Genetic regulatory networks (GRN) offer a useful model for clinical biology. Specifically, such networks capture interactions among genes, proteins, and other metabolic factors. Unfortunately, it is difficult to understand and predict the behavior of networks that are of realistic size and complexity. In this dissertation, behavior refers to the trajectory of a state, through a series of state transitions over time, to an attractor in the network. This project assumes asynchronous Boolean networks, implying that a state may transition to more than one attractor. The goal of this project is to efficiently identify a network's set of attractors and to predict the likelihood with which an arbitrary state leads to each of the network’s attractors. These probabilities will be represented using a fuzzy membership vector.
Predicting fuzzy membership vectors using machine learning techniques may address the intractability posed by networks of realistic size and complexity. Modeling and simulation can provide the training sets needed for machine learning methods to predict fuzzy membership vectors. The experiments comprise several GRNs, each represented by a set of output classes. These classes consist of thresholds τ and ¬τ, where τ = [τlow, τhigh]; a state s belongs to class τ if the probability of its transitioning to a given attractor lies in the range [τlow, τhigh], and otherwise it belongs to class ¬τ. Finally, each machine learning classifier was trained on the training sets that were previously collected. The objective is to explore methods for discovering patterns that support meaningful classification of states in realistically complex regulatory networks.
The research design took a GRN and a machine learning method as input and produced the output class ⟨Aτ⟩ and its negation ¬⟨Aτ⟩. For each GRN, attractors were identified, data was collected by sampling each state to create fuzzy membership vectors, and machine learning methods were trained to predict whether a state leads to a healthy attractor or not. For T-LGL, SVMs had the highest accuracy (between 93.6% and 96.9%) and precision (between 94.59% and 97.87%), while naive Bayesian classifiers had the highest recall (between 94.71% and 97.78%). This study showed that all experiments have extreme significance, with p-value < 0.0001. This research helps clinical biologists submit genetic states and obtain an initial prediction of their outcomes. Future work could use other machine learning classifiers, such as XGBoost or deep learning methods, and could develop methods that improve the performance of state-transition sampling so that larger training sets can be collected.
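To make the fuzzy membership vector and threshold classes concrete, here is a small sketch (not the dissertation's implementation) that estimates a state's membership vector by Monte Carlo sampling over a toy nondeterministic state-transition graph, then applies the [τlow, τhigh] rule. The network, state names, and thresholds are all invented for illustration.

```python
# Illustrative sketch: estimate P(state eventually reaches each attractor)
# by sampling random trajectories. Asynchronous updates make transitions
# nondeterministic, so each state lists its possible successors. "A1" and
# "A2" stand in for two attractors, modeled as fixed points for simplicity.
import random
from collections import Counter

transitions = {
    "s0": ["s1", "s2"],
    "s1": ["A1"],
    "s2": ["A1", "A2"],
    "A1": ["A1"],  # attractor (fixed point)
    "A2": ["A2"],  # attractor (fixed point)
}
attractors = {"A1", "A2"}

def membership_vector(state, n_samples=10_000, rng=random.Random(0)):
    """Fuzzy membership vector: estimated reach probability per attractor."""
    hits = Counter()
    for _ in range(n_samples):
        s = state
        while s not in attractors:
            s = rng.choice(transitions[s])
        hits[s] += 1
    return {a: hits[a] / n_samples for a in sorted(attractors)}

def threshold_class(prob, t_low, t_high):
    """Class tau if the membership probability lies in [t_low, t_high]."""
    return "tau" if t_low <= prob <= t_high else "not_tau"

mv = membership_vector("s0")
print(mv)  # roughly {'A1': 0.75, 'A2': 0.25} for this toy graph
print(threshold_class(mv["A1"], 0.5, 1.0))  # tau
```

In the dissertation's setting, vectors like `mv` serve as training targets, so a classifier can predict the τ / ¬τ class of a state without exhaustively sampling it.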
|