351 |
Mister Sandman. Lurry, Kennedi D. 01 April 2022 (has links)
After an expectant black couple moves to be near the best obstetrician in the Midwest, the husband’s prophetic sleep paralysis episodes make him question the true intentions of his wife’s all-white medical staff.
|
352 |
Feature Selection and Analysis for Standard Machine Learning Classification of Audio Beehive Samples. Gupta, Chelsi 01 August 2019 (has links)
Beekeepers need to inspect their hives regularly to protect them from various stressors. Manual inspection of hives requires a lot of time and effort, so many researchers have started using electronic beehive monitoring (EBM) systems to collect critical information from beehives and alert beekeepers to possible threats. An EBM system gathers this information by placing multiple sensors in the hive, which collect video, audio, or temperature data.
This thesis addresses the automatic classification of audio samples from a beehive into bee buzzing, cricket chirping, and ambient noise using machine learning models. Classifying samples into these three categories helps beekeepers assess the health of a beehive by analyzing the sound patterns in a typical audio sample. Abnormalities in the classification pattern over time can alert beekeepers to potential risks to the hive, such as attacks by foreign bodies (e.g., Varroa mites or wing viruses), climate changes, and other stressors.
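As a minimal illustration of this kind of pipeline (not the models or features used in the thesis), the sketch below extracts two simple hand-crafted features, zero-crossing rate and RMS energy, from an audio frame and assigns one of the three classes with a nearest-centroid rule; the synthetic waveforms stand in for real recordings:

```python
import math
import random

def zero_crossing_rate(samples):
    """Fraction of adjacent sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    return crossings / (len(samples) - 1)

def rms_energy(samples):
    """Root-mean-square amplitude of the frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def extract_features(samples):
    return (zero_crossing_rate(samples), rms_energy(samples))

def classify(samples, centroids):
    """Assign the label whose feature centroid is nearest (Euclidean)."""
    feats = extract_features(samples)
    return min(centroids, key=lambda label: math.dist(feats, centroids[label]))

# Synthetic stand-ins for real beehive recordings (16 kHz sample rate).
SR = 16_000
buzz = [math.sin(2 * math.pi * 250 * t / SR) for t in range(SR)]    # low-pitched hum
chirp = [math.sin(2 * math.pi * 4000 * t / SR) for t in range(SR)]  # high-pitched tone
rng = random.Random(0)
noise = [rng.uniform(-0.05, 0.05) for _ in range(SR)]               # quiet ambient noise

centroids = {
    "bee buzzing": extract_features(buzz),
    "cricket chirping": extract_features(chirp),
    "ambient noise": extract_features(noise),
}
print(classify(buzz, centroids))   # bee buzzing
```

A real system would compute richer features (e.g., spectral ones) over many short frames and train a proper classifier, but the structure — feature extraction followed by a decision rule over the three classes — is the same.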
|
353 |
The Impact of Cost on Feature Selection for Classifiers. McCrae, Richard Clyde 01 January 2018 (has links)
Supervised machine learning models are increasingly being used for medical diagnosis. The diagnostic problem is formulated as a binary classification task in which trained classifiers make predictions based on a set of input features. In diagnosis, these features are typically procedures or tests with associated costs. The cost of applying a trained classifier for diagnosis may be estimated as the total cost of obtaining values for the features that serve as inputs for the classifier. Obtaining classifiers based on a low cost set of input features with acceptable classification accuracy is of interest to practitioners and researchers. What makes this problem even more challenging is that costs associated with features vary with patients and service providers and change over time.
This dissertation aims to address this problem by proposing a method for obtaining low cost classifiers that meet specified accuracy requirements under dynamically changing costs. Given a set of relevant input features and accuracy requirements, the goal is to identify all qualifying classifiers based on subsets of the feature set. Then, for any arbitrary costs associated with the features, the cost of the classifiers may be computed and candidate classifiers selected based on cost-accuracy tradeoff. Since the number of relevant input features k tends to be large for typical diagnosis problems, training and testing classifiers based on all 2^k-1 possible non-empty subsets of features is computationally prohibitive. Under the reasonable assumption that the accuracy of a classifier is no lower than that of any classifier based on a subset of its input features, this dissertation aims to develop an efficient method to identify all qualifying classifiers.
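The monotonicity assumption above is what makes pruning possible: if a feature set fails the accuracy threshold, every subset of it must fail as well, so those subsets never need to be trained. A minimal top-down sketch, with a made-up monotone accuracy function standing in for actual training and testing:

```python
from itertools import combinations

def qualifying_subsets(features, accuracy_of, threshold):
    """Return all non-empty feature subsets meeting the accuracy threshold.

    Assumes monotonicity: a classifier's accuracy is no lower than that of
    any classifier built on a subset of its features. Subsets of a failing
    set are therefore marked failed without being evaluated.
    """
    qualifying, failed = [], []
    for size in range(len(features), 0, -1):   # largest sets first
        for subset in combinations(sorted(features), size):
            s = frozenset(subset)
            if any(s <= f for f in failed):    # a superset already failed
                failed.append(s)
                continue
            if accuracy_of(s) >= threshold:
                qualifying.append(s)
            else:
                failed.append(s)
    return qualifying

# Hypothetical monotone accuracy: only features "a" and "b" are informative.
def toy_accuracy(subset):
    return 0.70 + 0.10 * len(subset & {"a", "b"})

sets = qualifying_subsets({"a", "b", "c"}, toy_accuracy, threshold=0.85)
print(sorted(sorted(s) for s in sets))   # [['a', 'b'], ['a', 'b', 'c']]
```

The sketch still enumerates subsets but skips the expensive evaluation (model training) for any set whose superset already failed, which is where the savings come from when k is large.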
This study used two types of classifiers, artificial neural networks and classification trees, both of which have proved promising for numerous problems in the literature. The approach was first to measure the accuracy obtained when the classifiers used all features. Reduced accuracy thresholds were then established that could be satisfied by subsets of the complete feature set. Threshold values were considered for three measures: true positive rate, true negative rate, and overall classification accuracy. Two cost functions were used for the features, one with unit costs and the other with random costs; additional manipulations of costs were also performed.
The order in which features were removed had a material impact on the effort required: removing the most important features first was most efficient, and removing the least important features first was least efficient. The accuracy and cost measures were combined to produce a Pareto-optimal frontier, which consistently contained few elements: at most 15 subsets, even when there were hundreds of thousands of acceptable feature sets. Most of the computational time was spent training and testing the models. Given costs, models on the Pareto-optimal frontier can be efficiently identified and presented to decision makers. The neural networks and decision trees performed comparably, suggesting that either type of classifier could be employed.
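Once each qualifying classifier has a cost and an accuracy, the Pareto-optimal frontier (no other classifier is both cheaper and more accurate) can be extracted with a single sort and one pass, as in this sketch over hypothetical classifiers:

```python
def pareto_frontier(models):
    """models: list of (name, cost, accuracy) tuples.

    Keep a model only if no other model has lower-or-equal cost and
    strictly higher accuracy. Sorting by cost ascending, then accuracy
    descending, makes a single pass sufficient.
    """
    frontier = []
    for name, cost, acc in sorted(models, key=lambda m: (m[1], -m[2])):
        if not frontier or acc > frontier[-1][2]:
            frontier.append((name, cost, acc))
    return frontier

# Hypothetical classifiers built on different feature subsets.
models = [
    ("all features", 120.0, 0.94),
    ("no imaging",    40.0, 0.91),
    ("labs only",     15.0, 0.86),
    ("labs + exam",   40.0, 0.89),  # dominated by "no imaging"
    ("exam only",      5.0, 0.79),
]
print([name for name, _, _ in pareto_frontier(models)])
# ['exam only', 'labs only', 'no imaging', 'all features']
```

Because costs vary by patient and provider, the frontier can be recomputed cheaply for any new cost vector without retraining: only the (cost, accuracy) pairs change, not the set of qualifying classifiers.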
|
354 |
Multigraph visualization for feature classification of brain network data. Wang, Jiachen 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / A multigraph is a set of graphs with a common set of nodes but different sets of edges. Multigraph visualization has not received much attention so far. In this thesis, I introduce an interactive application in brain network data analysis that has a strong need for multigraph visualization. In this application, a multigraph is used to represent the brain connectome networks of multiple human subjects. A volumetric data set is constructed from the matrix representation of the multigraph, and a volume visualization tool was then developed to help the user interactively and iteratively detect network features that may contribute to certain neurological conditions. I applied this technique to a brain connectome dataset for feature detection in the classification of Alzheimer's disease (AD) patients. Preliminary results showed significant improvements when interactively selected features were used.
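A rough sketch of the matrix-stacking step, assuming each subject's connectome is an n×n adjacency matrix: stacking the matrices gives a subjects × n × n volume, and a simple per-edge statistic (here, variance across subjects, a hypothetical feature-selection cue rather than the tool's actual method) highlights edges that differ between subjects:

```python
from statistics import pvariance

def build_volume(adjacency_matrices):
    """Stack per-subject n x n adjacency matrices into a subjects x n x n volume."""
    n = len(adjacency_matrices[0])
    assert all(len(m) == n and all(len(row) == n for row in m)
               for m in adjacency_matrices), "all matrices must be n x n"
    return adjacency_matrices  # the list of matrices is the volume

def edge_variance_map(volume):
    """Per-edge variance across subjects: an n x n map of edge variability."""
    n = len(volume[0])
    return [[pvariance([subject[i][j] for subject in volume])
             for j in range(n)] for i in range(n)]

# Three hypothetical 3-node connectomes (edge weights in [0, 1]).
subjects = [
    [[0.0, 0.9, 0.1],
     [0.9, 0.0, 0.2],
     [0.1, 0.2, 0.0]],
    [[0.0, 0.8, 0.5],
     [0.8, 0.0, 0.2],
     [0.5, 0.2, 0.0]],
    [[0.0, 0.9, 0.9],
     [0.9, 0.0, 0.2],
     [0.9, 0.2, 0.0]],
]
volume = build_volume(subjects)
var_map = edge_variance_map(volume)
# Edge (0, 2) varies most across subjects and would be flagged for inspection.
```

In an interactive tool, a map like this could drive the rendering (for example, as opacity), so the user's attention is drawn to the edges most likely to discriminate between groups.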
|
355 |
Interactions between Visual Attention and Visual Working Memory / 視覚的注意と視覚性ワーキングメモリの相互作用に関する研究. Li, Qi 23 March 2015 (has links)
Kyoto University / 0048 / New-system doctorate by coursework / Doctor of Human and Environmental Studies / 甲第19079号 / 人博第732号 / 新制||人||176 (Main Library) / 26||人博||732 (Yoshida-South Library) / 32030 / Department of Human Coexistence, Graduate School of Human and Environmental Studies, Kyoto University / (Chief Examiner) Professor 齋木 潤, Professor 船橋 新太郎, Associate Professor 月浦 崇 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Human and Environmental Studies / Kyoto University / DGAM
|
356 |
Quad-Tree based Image Encoding Methods for Data-Adaptive Visual Feature Learning / データ適応型特徴学習のための四分木に基づく画像の構造的表現法. Zhang, Cuicui 23 March 2015 (has links)
Kyoto University / 0048 / New-system doctorate by coursework / Doctor of Informatics / 甲第19111号 / 情博第557号 / 新制||情||98 (Main Library) / 32062 / Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University / (Chief Examiner) Professor 松山 隆司, Professor 美濃 導彦, Associate Professor 梁 雪峰 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
357 |
Binary Classification With First Phase Feature Selection for Gene Expression Survival Data. Loveless, Ian 28 August 2019 (has links)
No description available.
|
358 |
Optimal Bayesian Feature Selection: A New Approach for Biomarker Discovery. Foroughi pour, Ali 25 September 2019 (has links)
No description available.
|
359 |
An Enhanced Approach using Time Series Segmentation for Fault Detection of Semiconductor Manufacturing Process. Tian, Runfeng 28 October 2019 (has links)
No description available.
|
360 |
Machine Learning Approaches in Kidney Transplantation Survival Analysis using Multiple Feature Representations of Donor and Recipient. Nemati, Mohammadreza January 2020 (has links)
No description available.
|