About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Supervisor behavior in counselor education: the relationship of goal orientation, time, and supervisee lead.

Congram, Carole A. January 1900 (has links)
Thesis (Ph. D.)--University of Wisconsin--Madison, 1969. / Typescript. Vita. eContent provider-neutral record in process. Description based on print version record. Includes bibliographical references.
12

Protein secondary structure prediction using conditional random fields and profiles

Shen, Rongkun. January 1900 (has links)
Thesis (M.S.)--Oregon State University, 2006. / Printout. Includes bibliographical references (leaves 42-46). Also available on the World Wide Web.
13

Learning From Snapshot Examples

Beal, Jacob 13 April 2005 (has links)
Examples are a powerful tool for teaching both humans and computers. In order to learn from examples, however, a student must first extract the examples from its stream of perception. Snapshot learning is a general approach to this problem, in which relevant samples of perception are used as examples. Learning from these examples can in turn improve the judgement of the snapshot mechanism, improving the quality of future examples. One way to implement snapshot learning is the Top-Cliff heuristic, which identifies relevant samples using a generalized notion of peaks. I apply snapshot learning with the Top-Cliff heuristic to solve a distributed learning problem and show that the resulting system learns rapidly and robustly, and can hallucinate useful examples in a perceptual stream from a teacherless system.
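The abstract does not spell out the Top-Cliff heuristic itself, so the sketch below substitutes a generic peak detector (SciPy's find_peaks) for it: samples of a perceptual stream are kept as examples wherever a relevance signal peaks. The function names and toy stream are illustrative assumptions, not the thesis's implementation.

```python
# Illustrative only: a generic peak detector stands in for the Top-Cliff
# heuristic, whose details are not given in the abstract.
import numpy as np
from scipy.signal import find_peaks

def snapshot_examples(relevance, stream, prominence=0.5):
    """Return samples of `stream` taken where the relevance signal peaks."""
    peaks, _ = find_peaks(relevance, prominence=prominence)
    return [stream[i] for i in peaks]

# Toy perceptual stream: two relevance spikes mark moments worth learning from.
t = np.linspace(0, 10, 200)
relevance = np.exp(-((t - 3) ** 2)) + 0.8 * np.exp(-((t - 7) ** 2))
stream = list(zip(t, np.sin(t)))
print(snapshot_examples(relevance, stream))  # two snapshots, near t=3 and t=7
```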
14

Canopy Change Assessment and Water Resources Utilization in the Civano Community, Arizona

Pan, Yajuan 12 1900 (has links)
The Civano community of Tucson, Arizona, is built for sustainability. Trees and plants are precious resources in the community, and balancing human needs with natural resources is a central concern. The design of rainwater harvesting systems and the use of reclaimed water inside the community irrigate plants effectively and save drinking water. This project estimates canopy change over time and explores the effect of water resources on plant growth for developed areas and natural areas, respectively. It generates land cover classifications for 2007, 2010, and 2015 using a supervised classification method and measures canopy cover change over time. Based on the City of Tucson Water “harvesting rainwater guide to water-efficient landscaping”, the project discusses whether the water supply meets plant water demand in the developed areas of the community. Additionally, normalized difference vegetation index (NDVI) data for the developed area and the natural area over ten years are compared and correlated with water sources. The results show that canopy cover across the entire community decreased from 2007 to 2010, then increased from 2010 to 2015. Water supply in the developed areas is sufficient for plant water demand. In natural areas, plant growth changes dramatically as a result of precipitation fluctuation. In addition, the 2011 National Land Cover Database (NLCD) tree canopy layer is shown to underestimate canopy cover in the Civano community. The final products not only provide fundamental canopy cover data for other studies but also serve as a reference for water-efficient landscaping within a community.
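For reference, NDVI is a simple band ratio, (NIR - Red) / (NIR + Red); a minimal sketch with made-up reflectance values standing in for the project's imagery:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps guards against zero division

# Toy reflectance bands; real inputs would come from the project's imagery.
nir = np.array([[0.55, 0.60], [0.20, 0.25]])
red = np.array([[0.10, 0.12], [0.15, 0.18]])
print(ndvi(nir, red))  # dense canopy approaches 1; sparse cover stays low
```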
15

SUPERVISED CLASSIFICATION OF FRESH LEAFY GREENS AND PREDICTION OF THEIR PHYTOCHEMICAL CONTENTS USING NEAR INFRARED REFLECTANCE

Joshi, Prabesh 01 May 2018 (has links)
There is an increasing need for automation of routine tasks like sorting agricultural produce in large-scale post-harvest processing. Among the different kinds of sensors used for such automation tasks, near-infrared (NIR) technology provides a rapid and effective solution for quantitative analysis of quality indices in food products. As industries and farms adopt modern data-driven technologies, there is a need to evaluate modelling tools to find optimal solutions for problem solving. This study aims to understand the process of evaluating modelling tools, in view of near-infrared data obtained from green leafy vegetables. The first part of this study deals with predicting the type of leafy green vegetable from near-infrared reflectance spectra taken non-destructively from the leaf surface. The supervised classification methods used for the classification task were k-nearest neighbors (KNN), support vector machines (SVM), the linear discriminant analysis (LDA) classifier, the regularized discriminant analysis (RDA) classifier, the naïve Bayes classifier, bagged trees, random forests, and an ensemble discriminant subspace classifier. The second part deals with predicting total glucosinolate and total polyphenol contents in leaves using partial least squares regression (PLSR) and principal component regression (PCR). Optimal combinations of predictors were chosen using recursive feature elimination. NIR spectra taken from 283 different samples were used for the classification task. Accuracy rates of the tuned classifiers were compared on a standard test set. The ensemble discriminant subspace classifier was found to yield the highest accuracy rate (89.41%) on the standard test set. Classifiers were also compared in terms of accuracy rates and F1 scores. Learning rates of the classifiers were compared using cross-validation accuracy rates for different proportions of the dataset. Ensemble subspace discriminants, SVM, LDA, and KNN were found to be similar in their cross-validation accuracy rates across different proportions of the data. NIR spectra as well as reference values for total polyphenol and total glucosinolate contents were taken from 40 samples for each analysis. The PLSR model for total glucosinolate prediction, built with spectra treated with a Savitzky-Golay second derivative, yielded an RMSECV of 0.67 μmol/g of fresh weight and a cross-validation R² value of 0.63. Similarly, the PLSR model for total polyphenol prediction, built with spectra treated with a Savitzky-Golay first derivative, yielded an RMSECV of 6.56 gallic acid equivalent (GAE) mg/100 g of fresh weight and a cross-validation R² value of 0.74. Feature selection for total polyphenol prediction suggested that the NIR region between 1300 and 1600 nm might contain important information about total polyphenol content in green leaves.
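A minimal sketch of the regression pipeline described above, assuming scikit-learn and SciPy; the spectra here are synthetic stand-ins, and the Savitzky-Golay window length and PLS component count are assumptions rather than the study's settings:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins: 40 spectra (as in the study) over 256 wavelengths.
rng = np.random.default_rng(0)
X = rng.random((40, 256))   # NIR absorbance spectra
y = rng.random(40)          # reference values, e.g. total glucosinolate

# Savitzky-Golay second derivative, as used for the glucosinolate model.
X_d2 = savgol_filter(X, window_length=11, polyorder=2, deriv=2, axis=1)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X_d2, y, cv=5).ravel()
rmsecv = float(np.sqrt(np.mean((y - y_cv) ** 2)))
print(rmsecv)
```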
16

Learning with unlabeled data. / CUHK electronic theses & dissertations collection

January 2009 (has links)
In the first part, we deal with unlabeled data that are of good quality and follow the conditions of semi-supervised learning. Firstly, we present a novel method for the Transductive Support Vector Machine (TSVM) that relaxes the unknown labels to continuous variables and reduces the non-convex optimization problem to a convex semi-definite programming problem. In contrast to the previous relaxation method, which involves O(n²) free parameters in the semi-definite matrix, our method reduces the number of free parameters to O(n), so that we can solve the optimization problem more efficiently. In addition, the proposed approach provides a tighter convex relaxation for the optimization problem in TSVM. Empirical studies on benchmark data sets demonstrate that the proposed method is more efficient than the previous semi-definite relaxation method and achieves promising classification results compared with state-of-the-art methods. Our second contribution is an extended level method proposed to efficiently solve multiple kernel learning (MKL) problems. In particular, the level method overcomes the drawbacks of both the Semi-Infinite Linear Programming (SILP) method and the Subgradient Descent (SD) method for multiple kernel learning. Our experimental results show that the level method is able to greatly reduce the computational time of MKL compared with both the SD method and the SILP method. Thirdly, we discuss the connection between two fundamental assumptions in semi-supervised learning. More specifically, we show that the loss on the unlabeled data used by TSVM can essentially be viewed as an additional regularizer for the decision boundary. We further show that this additional regularizer induced by TSVM is closely related to the regularizer introduced by manifold regularization. Both can be viewed as a unified regularization framework for semi-supervised learning. / In the second part, we discuss how to employ unlabeled data to build reliable classification systems in three scenarios: (1) only poorly related unlabeled data are available; (2) good-quality unlabeled data are mixed with irrelevant data, and there is no prior knowledge of their composition; and (3) no unlabeled data are available, but they can be obtained from the Internet for text categorization. We build several frameworks to deal with the above cases. Firstly, we present a study on how to deal with weakly related unlabeled data, called the Supervised Self-taught Learning framework, which can actively transfer knowledge from the unlabeled data. The proposed model is able to select those discriminative features or representations that are more appropriate for classification. Secondly, we propose a novel framework that can learn from a mixture of unlabeled data, where good-quality unlabeled data are mixed with irrelevant unlabeled samples. Moreover, we do not need prior knowledge of which data samples are relevant or irrelevant. Consequently, it is significantly different from the recent framework of semi-supervised learning with Universum and the framework of the Universum Support Vector Machine. As an important contribution, we have successfully formulated this new learning approach as a semi-definite programming problem, which can be solved in polynomial time. A series of experiments demonstrate that this novel framework has advantages over semi-supervised learning on both synthetic and real data in many facets.
Finally, for the third scenario, we present a general framework for semi-supervised text categorization that collects unlabeled documents via Web search engines and utilizes them to improve the accuracy of supervised text categorization. Extensive experiments have demonstrated that the proposed semi-supervised text categorization framework can significantly improve classification accuracy. Specifically, the classification error is reduced by 30% on average over the nine data sets when using Google as the search engine. / We consider the problem of learning from both labeled and unlabeled data through an analysis of the quality of the unlabeled data. Usually, learning from both labeled and unlabeled data is regarded as semi-supervised learning, where the unlabeled data and the labeled data are assumed to be generated from the same distribution. When this assumption is not satisfied, new learning paradigms are needed in order to effectively explore the information underneath the unlabeled data. This thesis consists of two parts: the first part analyzes the fundamental assumptions of semi-supervised learning and proposes a few efficient semi-supervised learning models; the second part discusses three learning frameworks in order to deal with the case where unlabeled data do not satisfy the conditions of semi-supervised learning. / Xu, Zenglin. / Advisers: Irwin King; Michael R. Lyu. / Source: Dissertation Abstracts International, Volume: 70-09, Section: B, page: . / Thesis (Ph.D.)--Chinese University of Hong Kong, 2009. / Includes bibliographical references (leaves 158-179). / Electronic reproduction. Hong Kong : Chinese University of Hong Kong, [2012] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Electronic reproduction. [Ann Arbor, MI] : ProQuest Information and Learning, [200-] System requirements: Adobe Acrobat Reader. Available via World Wide Web. / Abstracts in English and Chinese. / School code: 1307.
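The thesis's own formulations (semi-definite programming, the level method) are beyond a snippet, but the third scenario's core idea, folding unlabeled documents into a supervised text classifier, can be illustrated with generic self-training. scikit-learn's SelfTrainingClassifier stands in for the thesis's framework here, and the corpus is invented:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Tiny invented corpus: labeled documents plus unlabeled ones (label -1)
# standing in for documents retrieved via a Web search engine.
docs = ["cheap flights and hotels", "book a holiday package",
        "train a neural network", "gradient descent converges",
        "discount airline tickets", "stochastic optimization methods"]
labels = np.array([0, 0, 1, 1, -1, -1])  # -1 marks unlabeled documents

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.6)
clf.fit(X, labels)  # confident predictions on unlabeled docs become labels
print(clf.predict(vec.transform(["budget airfare deals"])))
```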
17

Object Detection Using Multiple Level Annotations

Xu, Mengmeng 04 1900 (has links)
Object detection is a fundamental problem in computer vision. Impressive results have been achieved on large-scale detection benchmarks by fully-supervised object detection (FSOD) methods. However, FSOD approaches require tremendous instance-level annotations, which are time-consuming to collect. In contrast, weakly supervised object detection (WSOD) exploits easily-collected image-level labels while it suffers from relatively inferior detection performance. This thesis studies hybrid learning methods for the object detection problem. We intend to train an object detector from a dataset where both instance-level and image-level labels are employed. Extensive experiments on the challenging PASCAL VOC 2007 and 2012 benchmarks strongly demonstrate the effectiveness of our method, which gives a trade-off between collecting fewer annotations and building a more accurate object detector. Our method is also a strong baseline bridging the wide gap between FSOD and WSOD performances. Based on the hybrid learning framework, we further study the problem of object detection from a novel perspective in which the annotation budget constraints are taken into consideration. When provided with a fixed budget, we propose a strategy for building a diverse and informative dataset that can be used to optimally train a robust detector. We investigate both optimization and learning-based methods to sample which images to annotate and which level of annotations (strongly or weakly supervised) to annotate them with. By combining an optimal image/annotation selection scheme with the hybrid supervised learning, we show that one can achieve the performance of a strongly supervised detector on PASCAL VOC 2007 while saving 12.8% of its original annotation budget. Furthermore, when 100% of the budget is used, it surpasses this performance by 2.0 mAP percentage points.
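A toy sketch of the budget constraint only; the per-image costs below are invented, and the thesis optimizes this split rather than sweeping it by hand:

```python
# Hypothetical annotation costs (arbitrary units); the thesis learns the
# image/annotation selection, whereas this sketch only enumerates the split.
STRONG_COST = 10.0  # instance-level boxes for one image
WEAK_COST = 1.0     # one image-level label

def allocate(budget, strong_fraction):
    """Split a fixed budget between strongly and weakly annotated images."""
    n_strong = int(budget * strong_fraction // STRONG_COST)
    n_weak = int((budget - n_strong * STRONG_COST) // WEAK_COST)
    return n_strong, n_weak

for frac in (0.0, 0.5, 1.0):
    print(frac, allocate(1000.0, frac))  # e.g. 0.5 -> 50 strong + 500 weak
```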
18

Hierarchical Mixtures of Experts and the EM Algorithm

Jordan, Michael I., Jacobs, Robert A. 01 August 1993 (has links)
We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
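A minimal, non-hierarchical sketch of the EM structure described here: two linear experts and a softmax gate fit to piecewise-linear toy data. The paper's hierarchical model and its IRLS inner loops are richer; the single gradient step on the gate below is a simplification.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (200, 1))
y = np.where(x[:, 0] < 0, 2 * x[:, 0] + 1, -x[:, 0]) + 0.05 * rng.standard_normal(200)
X = np.hstack([x, np.ones_like(x)])   # inputs with a bias column
K, sigma2 = 2, 0.05                   # number of experts, fixed noise variance
W = rng.standard_normal((K, 2))       # expert (GLIM) parameters
V = rng.standard_normal((K, 2))       # gating-network parameters

for _ in range(100):
    # E-step: posterior responsibility r[n, k] of expert k for point n.
    g = np.exp(X @ V.T); g /= g.sum(1, keepdims=True)
    logp = np.log(g) - (y[:, None] - X @ W.T) ** 2 / (2 * sigma2)
    logp -= logp.max(1, keepdims=True)            # stabilize before exp
    r = np.exp(logp); r /= r.sum(1, keepdims=True)
    # M-step: weighted least squares per expert (each expert is a GLIM) ...
    for k in range(K):
        A = X.T @ (r[:, k:k + 1] * X) + 1e-8 * np.eye(2)
        W[k] = np.linalg.solve(A, X.T @ (r[:, k] * y))
    # ... and a gradient step on the gate (the paper uses IRLS here).
    V += 0.5 * (r - g).T @ X / len(X)
```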
19

Semi-supervised and active training of conditional random fields for activity recognition

Mahdaviani, Maryam 05 1900 (has links)
Automated human activity recognition has attracted increasing attention in the past decade. However, the application of machine learning and probabilistic methods to activity recognition problems has been studied only in the past couple of years. For the first time, this thesis explores the application of semi-supervised and active learning to activity recognition. We present a new and efficient semi-supervised training method for parameter estimation and feature selection in conditional random fields (CRFs), a probabilistic graphical model. In real-world applications such as activity recognition, unlabeled sensor traces are relatively easy to obtain, whereas labeled examples are expensive and tedious to collect. Furthermore, the ability to automatically select a small subset of discriminatory features from a large pool can be advantageous in terms of computational speed as well as accuracy. We introduce the semi-supervised virtual evidence boosting (sVEB) algorithm for training CRFs — a semi-supervised extension to the recently developed virtual evidence boosting (VEB) method for feature selection and parameter learning. sVEB takes advantage of the unlabeled data via minimum entropy regularization. The objective function combines the unlabeled conditional entropy with the labeled conditional pseudo-likelihood. The sVEB algorithm reduces the overall system cost as well as the human labeling cost required during training, which are both important considerations in building real-world inference systems. Moreover, we propose an active learning algorithm for training CRFs that is based on virtual evidence boosting and uses entropy measures. Active virtual evidence boosting (aVEB) queries the user for the most informative examples, efficiently builds up labeled training examples, and incorporates unlabeled data as in sVEB. aVEB not only reduces the computational complexity of training CRFs, as sVEB does, but also outputs more accurate classification results for the same fraction of labeled data. In a set of experiments we illustrate that our algorithms, sVEB and aVEB, benefit from both the use of unlabeled data and automatic feature selection, and outperform other semi-supervised and active training approaches. The proposed methods could also be extended and employed for other classification problems in relational data.
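The objective sketched below mirrors the combination described above, labeled likelihood minus a weighted conditional entropy on unlabeled data, but for a plain logistic model rather than a CRF; the function name and the alpha trade-off weight are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sveb_style_objective(w, X_lab, y_lab, X_unl, alpha=0.5):
    """Labeled log-likelihood minus alpha * entropy of unlabeled predictions.

    Minimum entropy regularization pushes the model to be confident on the
    unlabeled data while still fitting the labeled data.
    """
    p = sigmoid(X_lab @ w)
    loglik = np.sum(y_lab * np.log(p + 1e-12) + (1 - y_lab) * np.log(1 - p + 1e-12))
    q = sigmoid(X_unl @ w)
    entropy = -np.sum(q * np.log(q + 1e-12) + (1 - q) * np.log(1 - q + 1e-12))
    return loglik - alpha * entropy   # maximize over w

# Toy usage with random data, just to show the call shape.
rng = np.random.default_rng(0)
w = rng.standard_normal(3)
print(sveb_style_objective(w, rng.standard_normal((5, 3)),
                           rng.integers(0, 2, 5), rng.standard_normal((8, 3))))
```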
20

Supervised Methods for Fault Detection in Vehicle

Xiang, Gao, Nan, Jiang January 2010 (has links)
Uptime and maintenance planning are important issues for vehicle operators (e.g. operators of bus fleets). Unplanned downtime can cause a bus operator to be fined if the vehicle is not on time. Supervised classification methods for detecting faults in vehicles are compared in this thesis. Data have been collected by a vehicle manufacturer, including three kinds of faulty states in vehicles (i.e. charge air cooler leakage, radiator clogging, and air filter clogging). The problem consists of differentiating between the normal data and the three different categories of faulty data. The evaluated methods include a linear model, a neural network model, 1-nearest neighbor, and a random forest model. For every kind of model, a variable selection method is used. In this thesis we try to find the best model for the problem and also select the most important input signals. After comparing the four models, we found that the best accuracy (96.9% correct classifications) was achieved with the random forest model.
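A hedged sketch of the four-model comparison using scikit-learn; the data below are random stand-ins for the manufacturer's signals, and the thesis's variable selection step is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 20))    # stand-in for selected vehicle signals
y = rng.integers(0, 4, 400)           # 0 = normal, 1-3 = the three fault types

models = {
    "linear": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(max_iter=2000),
    "1-nearest neighbor": KNeighborsClassifier(n_neighbors=1),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")   # random data, so expect chance-level accuracy
```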
