121

Feature-based validation reasoning for intent-driven engineering design

Hounsell, Marcelo da Silva January 1998 (has links)
Feature-based modelling represents the future of CAD systems. However, operations such as modelling and editing can corrupt the validity of a feature-based model representation. Feature interactions are a consequence of feature operations and of the coexistence of several features in the same model. Feature interaction affects not only the solid representation of the part but also the functional intentions embedded within features. A technique is thus required to assess the integrity of a feature-based model from various perspectives, including the functional-intent one, and this technique must account for the problems brought about by feature interactions and operations. Understanding, reasoning about, and resolving invalid feature-based models require both an understanding of feature interaction phenomena and a characterisation of these functional intentions. A system capable of such assessment is called a feature-based representation validation system. This research studies feature interaction phenomena and designers' intents embedded in features as the means to achieve a feature-based representation validation system.
122

Automatic Mapping of Off-road Trails and Paths at Fort Riley Installation, Kansas

Oller, Adam 01 May 2012 (has links)
The U.S. Army manages thousands of sites covering millions of acres of land for various military training purposes and activities, and often faces a great challenge in optimizing the use of its resources. A typical example is that training activities often create off-road vehicle trails and paths, and deciding how to use those trails and paths so as to minimize maintenance cost becomes a problem. Being able to accurately extract and map the trails and paths is critical to advancing the U.S. Army's sustainability practices. The primary objective of this study is to develop a method geared specifically toward the military's need to identify and update off-road vehicle trails and paths for both environmental and economic purposes. The approach uses a well-known template matching program, Feature Analyst, to analyze and extract the relevant trails and paths from Fort Riley's designated training areas. A 0.5-meter-resolution false color infrared orthophoto with various spectral transformations/enhancements was used to extract the trails and paths. The optimal feature parameters yielding the highest detection accuracy were also investigated. A modified Heidke skill score was used to assess the accuracy of the outputs against observed data. The results showed the method to be very promising compared with traditional visual interpretation and hand digitizing. Moreover, recommendations for extracting trails and paths from remotely sensed images were obtained, covering image spatial and spectral resolution, image transformations and enhancements, and kernel size. Finally, the complexity of the trails and paths is discussed, along with how their extraction could be improved in the future.
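The modified skill score itself is not reproduced in the abstract. As a hedged illustration, the sketch below computes the standard 2x2 Heidke skill score; the pixel counts are invented stand-ins for an extracted-versus-reference trail comparison.

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 Heidke skill score: agreement relative to random chance.
    1.0 is perfect, 0.0 is no better than chance, negative is worse."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    chance_term = (a + c) * (c + d) + (a + b) * (b + d)
    return 2.0 * (a * d - b * c) / chance_term

# Hypothetical pixel counts comparing an extracted trail map against a
# reference map; the numbers are illustrative, not from the study.
print(heidke_skill_score(hits=850, false_alarms=120, misses=200,
                         correct_negatives=9000))  # ~0.82
```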
123

Less is More, Until it Isn't: Feature-Richness in Experiential Purchases

January 2015 (has links)
When consumers make experiential purchases, they often have to decide between experiences that contain many or few features. Contrary to prior research demonstrating that consumers prefer feature-rich products before consumption but feature-poor products after consumption, the author reveals a reversal of this effect for experiences. Specifically, the author hypothesizes and finds that consumers prefer feature-poor experiences before consumption (a phenomenon denoted 'feature apprehension') but prefer feature-rich experiences after consumption. This feature apprehension occurs before consumption because consumers are concerned about the uncertainty of attaining a satisfying outcome from the experience. Manipulating the temporal distance from which consumers view the experience can attenuate this effect. Additionally, locus of control and social signaling moderate consumers' post-consumption preference for feature-rich experiences. The author proposes several recommendations for consumers and providers of experiences. / Dissertation/Thesis / Doctoral Dissertation Business Administration 2015
124

Visual feature space exploration: an approach to image analysis via multidimensional data projection

Bruno Brandoli Machado 13 December 2010 (has links)
Image analysis systems rely on the premise that the dataset under investigation is correctly represented by features. However, defining a set of features that properly represents a dataset remains a challenging and, in most cases, exhausting task. Most available description techniques, especially when a large number of features is involved, are based on purely quantitative statistical measures or on artificial-intelligence approaches, and are normally black boxes to the user. The approach proposed here seeks to open this black box by means of visual representations created with the Multidimensional Classical Scaling projection technique, enabling users to gain insight into the meaning and representativeness of the features computed by different feature extraction algorithms and parameter settings. The approach is evaluated on six image datasets containing textures, medical images, and outdoor scenes. The results show that, as combinations of feature sets and changes in parameters improve the quality of the visual representation, classification accuracy on the computed features also improves. To avoid the subjectivity of conclusions based purely on visual analysis, the quality of the representations is measured with the silhouette index, a measure originally proposed to evaluate clustering results. Moreover, visual exploration of the datasets under analysis enables users to investigate one of the greatest challenges in data classification: the presence of intra-class variation. The results strongly suggest that this approach can be successfully employed as guidance to help specialists explore, refine, and define the features that properly represent an image dataset.
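As a hedged illustration of this pipeline (not the dissertation's own code), the sketch below projects a stock feature matrix to 2D and scores class separation with the silhouette index. Note that scikit-learn's MDS minimises stress via SMACOF rather than classical (Torgerson) scaling, so it is only a stand-in for the projection technique named above.

```python
from sklearn.datasets import load_digits
from sklearn.manifold import MDS
from sklearn.metrics import silhouette_score

# Stand-in data: any (n_samples, n_features) matrix of image descriptors works.
X, y = load_digits(return_X_y=True)
X, y = X[:300], y[:300]  # subsample to keep the O(n^2) projection quick

# 2D projection of the feature space (SMACOF stand-in for classical scaling).
embedding = MDS(n_components=2, random_state=0).fit_transform(X)

# Silhouette index on the projection: higher values mean classes separate
# better, serving as an objective stand-in for subjective visual judgments.
print("silhouette:", silhouette_score(embedding, y))
```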
125

Feature selection via joint likelihood

Pocock, Adam Craig January 2012 (has links)
We study the nature of filter methods for feature selection. In particular, we examine information theoretic approaches to this problem, surveying the literature of the past 20 years. We consider this literature from a different perspective, viewing feature selection as a process which minimises a loss function. We choose the model likelihood as the loss function, and thus seek to maximise the likelihood. The first contribution of this thesis is to show that the problem of information theoretic filter feature selection can be rephrased as maximising the likelihood of a discriminative model. From this novel result we can unify the literature, revealing that many of these selection criteria are approximate maximisers of the joint likelihood. Many of these heuristic criteria were hand-designed to optimise various definitions of feature "relevancy" and "redundancy", but our probabilistic interpretation naturally includes these concepts, plus the "conditional redundancy", a measure of positive interactions between features. This perspective allows us to derive the different criteria from the joint likelihood by making different independence assumptions on the underlying probability distributions. We provide an empirical study which reinforces our theoretical conclusions, whilst revealing implementation considerations due to the varying magnitudes of the relevancy and redundancy terms. We then investigate the benefits our probabilistic perspective provides for the application of these feature selection criteria in new areas. The joint likelihood automatically includes a prior distribution over the selected feature sets, so we investigate how including prior knowledge affects the feature selection process. We can now incorporate domain knowledge into feature selection, allowing the imposition of sparsity on the selected feature set without heuristic stopping criteria. We investigate the use of priors mainly in the context of Markov Blanket discovery algorithms, showing in the process that a family of algorithms based upon IAMB are iterative maximisers of our joint likelihood with respect to a particular sparsity prior. We thus extend the IAMB family to include a prior for domain knowledge in addition to the sparsity prior. Next we investigate what the choice of likelihood function implies about the resulting filter criterion. We do this by applying our derivation to a cost-weighted likelihood, showing that it implies a particular cost-sensitive filter criterion. This criterion is based on a weighted branch of information theory, and we prove several novel results justifying its use as a feature selection criterion, namely the positivity of the measure and the chain rule of mutual information. We show that the feature set produced by this cost-sensitive filter criterion can be used to convert a cost-insensitive classifier into a cost-sensitive one by adjusting the features the classifier sees. This is analogous to adjusting the data via over- or undersampling to create a cost-sensitive classifier, with the crucial difference that it does not artificially alter the data distribution. Finally we conclude with a summary of the benefits this loss-function view of feature selection has provided. The perspective can be used to analyse feature selection techniques beyond those based upon information theory, and new groups of selection criteria can be derived by considering novel loss functions.
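As one concrete instance of such a criterion, the sketch below greedily maximises the Joint Mutual Information (JMI) score, one of the hand-designed criteria that this probabilistic view interprets as an approximate maximiser of the joint likelihood. The `jmi_forward_selection` helper and its discretised toy data are illustrative assumptions, not code from the thesis.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def jmi_forward_selection(X, y, k):
    """Greedy forward selection with the Joint Mutual Information (JMI)
    criterion. Assumes the columns of X are discretised, non-negative
    integer features."""
    n_features = X.shape[1]
    # Relevancy I(Xi; Y): start with the single most relevant feature.
    relevancy = [mutual_info_score(X[:, i], y) for i in range(n_features)]
    selected = [int(np.argmax(relevancy))]
    while len(selected) < k:
        def jmi_score(i):
            # Sum over already-selected j of I(Xi, Xj; Y); the pair (Xi, Xj)
            # is encoded as one discrete variable so plain MI applies.
            return sum(
                mutual_info_score(X[:, i] * (X[:, j].max() + 1) + X[:, j], y)
                for j in selected
            )
        candidates = [i for i in range(n_features) if i not in selected]
        selected.append(max(candidates, key=jmi_score))
    return selected

# Toy usage: the class is the OR of features 0 and 1, so those two should be
# picked first despite each being only partially informative on its own.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 6))
y = (X[:, 0] | X[:, 1]).astype(int)
print(jmi_forward_selection(X, y, k=3))
```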
126

Segmentation and Line Filling of 2D Shapes

Pérez Rocha, Ana Laura January 2013 (has links)
The evolution of technology in the textile industry has reached the design of patterns for machine embroidery. To create quality designs, the shapes to be embroidered need to be segmented into regions that define their different parts. One objective of our research is to develop a method to segment the shapes automatically, thereby making the process faster and easier. Shape analysis is necessary to find a suitable method for this purpose, and it includes the study of different ways of representing shapes. In this thesis we focus on representing a shape through its skeleton. We use a shape's skeleton together with its boundary, related through the so-called feature transform, to decide how to segment the shape and where to place the segment boundaries. The direction of stitches is another important specification in an embroidery design. We develop a technique for selecting stitch orientation by defining direction lines using the skeleton curves and information from the boundary. We compute the intersections of segment boundaries and direction lines with the shape boundary for the final definition of the direction line segments. We demonstrate that our shape segmentation technique and the automatic placement of direction lines produce sufficient constraints for automated embroidery designs. We show examples for lettering and basic shapes, as well as simple and complex logos.
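A minimal sketch of the two ingredients named above, using standard scientific-Python tools rather than the thesis's own implementation: `skimage.morphology.skeletonize` computes the skeleton, and `scipy.ndimage.distance_transform_edt` with `return_indices=True` yields nearest-boundary indices, one common realisation of the feature transform. The toy rectangle shape is an assumption.

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

# Toy binary shape: a filled rectangle stands in for a letter or logo region.
shape = np.zeros((64, 64), dtype=bool)
shape[16:48, 8:56] = True

# Skeleton: the thin medial representation along which segment boundaries
# and direction lines can be placed.
skeleton = skeletonize(shape)

# Feature transform: for each pixel, the distance to and coordinates of its
# nearest boundary (background) pixel; pairing skeleton points with their
# nearest boundary points links skeleton and boundary information.
distance, nearest = ndimage.distance_transform_edt(shape, return_indices=True)

print(skeleton.sum(), "skeleton pixels; max inscribed radius:", distance.max())
```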
127

Improving a Smartphone Wearable Mobility Monitoring System with Feature Selection and Transition Recognition

Capela, Nicole Alexandra January 2015 (has links)
Modern smartphones contain multiple sensors and long-lasting batteries, making them ideal platforms for mobility monitoring. Mobility monitoring can provide rehabilitation professionals with an objective portrait of a patient's daily mobility habits outside a clinical setting. The objective of this thesis was to improve human activity recognition within a custom Wearable Mobility Monitoring System (WMMS). Performance of a current WMMS was evaluated on able-bodied and stroke participants to identify areas needing improvement and differences between populations. Signal features for the waist-worn smartphone WMMS were selected using classifier-independent methods, to identify features that are useful across populations. The newly selected features and a transition-state recognition method were then implemented, and the improved system's activity recognition performance was evaluated. This thesis demonstrated that: 1) diverse population data are important for WMMS design; 2) certain signal features are useful for human activity recognition across diverse populations; and 3) carefully selected features combined with transition-state identification can provide accurate human activity recognition without computationally complex methods.
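The abstract does not enumerate the selected signal features. As a hedged sketch, the snippet below computes typical time-domain candidates (per-axis mean and standard deviation, signal magnitude area) over fixed windows of tri-axial accelerometer data; the `window_features` helper and the 50 Hz sampling rate are illustrative assumptions.

```python
import numpy as np

def window_features(acc, fs=50, window_s=1.0):
    """Per-window time-domain features from tri-axial accelerometer data
    (shape: n_samples x 3): per-axis mean, per-axis standard deviation,
    and signal magnitude area (SMA)."""
    step = int(fs * window_s)
    feats = []
    for start in range(0, len(acc) - step + 1, step):
        w = acc[start:start + step]
        sma = np.mean(np.sum(np.abs(w), axis=1))  # signal magnitude area
        feats.append(np.hstack([w.mean(axis=0), w.std(axis=0), sma]))
    return np.asarray(feats)

# 10 s of simulated 50 Hz data -> 10 windows x 7 features, ready for a
# classifier or for classifier-independent feature ranking.
rng = np.random.default_rng(0)
print(window_features(rng.normal(size=(500, 3))).shape)
```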
128

Cloud environment selection and configuration: a software product lines-based approach

Quinton, Clément 22 October 2014 (has links)
To benefit from the promise of the cloud computing paradigm, applications must be deployed on well-suited, properly configured cloud environments that fulfil the application's requirements. We consider that the selection and configuration of such environments can leverage Software Product Line (SPL) principles. SPLs were defined to take advantage of software commonalities through the definition of reusable artifacts. This thesis therefore proposes an SPL-based approach to select and configure cloud environments according to the requirements of the application to deploy. In particular, we introduce a variability model that describes the commonalities and variabilities between clouds as feature models, and we extend this variability model with attributes and cardinalities, together with constraints over them. We then propose an approach to check the consistency of cardinality-based feature models as those models evolve; when an inconsistency arises, our approach automatically detects it and explains its origin and cause. Finally, we propose an automated platform for selecting and configuring cloud environments, which generates configuration scripts matching the requirements of the application to deploy. This work was carried out as part of the European PaaSage project. The experiments we conducted show that the approach is well suited to handling the configuration of cloud environments, being both scalable and practical while improving the reliability of deployments.
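As a loose illustration of what a cardinality check involves, the sketch below uses a hand-rolled dictionary representation (not the thesis's tooling) and accepts a configuration only when every feature's clone count respects its declared cardinality; the feature names and bounds are invented.

```python
# Invented cardinality-based feature model: each feature may be cloned
# between "min" and "max" times in a valid cloud configuration.
FEATURES = {
    "cloud":    {"min": 1, "max": 1},
    "vm":       {"min": 1, "max": 4},
    "database": {"min": 0, "max": 2},
}

def consistent(config):
    """A configuration maps feature names to clone counts; it is consistent
    when every count respects the declared cardinality."""
    return all(
        FEATURES[name]["min"] <= count <= FEATURES[name]["max"]
        for name, count in config.items()
    )

print(consistent({"cloud": 1, "vm": 3, "database": 0}))  # True
print(consistent({"cloud": 1, "vm": 5, "database": 0}))  # False: too many vm clones
```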
129

Feature network methods for machine learning

Mu, Xinying 17 February 2021 (has links)
We develop a graph structure for feature vectors in machine learning, which we denote a feature network (FN); this differs from sample-based networks, in which nodes simply represent samples. FNs reveal the underlying relationships among feature vector components and re-represent features as functions on a network. Our study focuses on using FN structures to extract underlying information and thus improve machine learning performance. Once feature vectors are represented as such functions, so-called graph signal processing, or graph functional analytic, techniques can be applied, including analytic operations such as differentiation and integration of feature vectors. Our motivation originated from a study using infrared spectroscopy data, where domain experts prefer the second-derivative information to the original data; this illustrates the potential power of understanding the underlying feature structure. We begin by developing a classification method based on the premise that data from different classes (e.g., different cancer subtypes) have distinct underlying graph structures, for graphs with genes as nodes and gene covariances as edges. That is, a feature vector from one class will tend to be "smooth" on its related FN and to "fluctuate" on the other FNs. This method, built on an entirely new set of features distinct from standard ones, on its own somewhat outperforms SVM and KNN in classifying cancer subtypes in infrared spectroscopy data and gene expression data. We are effectively also projecting high-dimensional data into a low-dimensional representation of graph smoothness, providing a unique means of data visualization. Additionally, FNs represent a new way of thinking about data. With a graph structure for feature vectors, graph functional analysis can extract various types of information not apparent in the original feature vectors. Specifically, operations such as calculus, Fourier transforms, and convolutions can be performed in the graph vertex domain. We introduce a family of calculus-like operators in reproducing kernel Hilbert spaces for feature vector regularization to deal with two types of data deficiency, which we designate as noise and blurring. Such operations generalize ones widely used in computer vision. The derivative operations on feature vectors provide additional information by amplifying differences between highly correlated features, while integrating feature vectors smooths and denoises them. Applications show that these denoising and deblurring operators can improve classification algorithms. The feature network combined with deep learning extends naturally to graph convolutional networks. We propose a deep multiscale clustering structure with small learning complexity on general graph distance structures. This framework substantially reduces the number of parameters, and it allows general machine learning algorithms such as SVM to feed forward in the deep structure.
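A minimal sketch of the smoothness premise, under invented toy graphs: a vector's smoothness on a network is scored with the Laplacian quadratic form x^T L x, and a sample is assigned to the class whose feature network makes it smoothest. The `classify` helper, the adjacency matrices, and the data are illustrative, not from the dissertation.

```python
import numpy as np

def laplacian(adjacency):
    """Combinatorial graph Laplacian L = D - A."""
    return np.diag(adjacency.sum(axis=1)) - adjacency

def smoothness(x, L):
    """Quadratic form x^T L x: small when x varies little across edges,
    i.e. when the feature vector is 'smooth' on the network."""
    return float(x @ L @ x)

def classify(x, class_graphs):
    """Assign x to the class on whose feature network it is smoothest."""
    scores = {c: smoothness(x, laplacian(A)) for c, A in class_graphs.items()}
    return min(scores, key=scores.get)

# Two toy 3-feature networks (e.g., gene covariance graphs for two subtypes).
A1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)  # features 0,1 linked
A2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=float)  # features 0,2 linked
x = np.array([1.0, 1.1, 3.0])  # nearly constant on the 0-1 edge: smooth on A1
print(classify(x, {"subtype_A": A1, "subtype_B": A2}))  # subtype_A
```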
130

SurfKE: A Graph-Based Feature Learning Framework for Keyphrase Extraction

Florescu, Corina Andreea 08 1900 (has links)
Current unsupervised approaches to keyphrase extraction compute a single importance score for each candidate word by considering the number and quality of its associated words in the graph, and they are not flexible enough to incorporate multiple types of information. For instance, nodes in a network may exhibit diverse connectivity patterns that are not captured by graph-based ranking methods. To address this, we present a new approach to keyphrase extraction that represents the document as a word graph and exploits its structure to reveal underlying explanatory factors hidden in the data that may distinguish keyphrases from non-keyphrases. Experimental results show that our model, which uses phrase graph representations in a supervised probabilistic framework, obtains remarkable improvements in performance over previous supervised and unsupervised keyphrase extraction systems.
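For contrast, the sketch below implements the kind of single-score graph ranking the abstract critiques: a co-occurrence word graph ranked by PageRank, rather than SurfKE's learned node representations. The `baseline_keywords` helper and its window size are illustrative assumptions.

```python
import networkx as nx

def baseline_keywords(words, window=2, top_k=3):
    """Single-score graph ranking: build a co-occurrence word graph and
    rank nodes by PageRank. SurfKE instead learns node features from the
    same graph's structure."""
    graph = nx.Graph()
    for i, w in enumerate(words):
        for u in words[i + 1:i + 1 + window]:
            if u != w:
                graph.add_edge(w, u)  # undirected co-occurrence edge
    scores = nx.pagerank(graph)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

tokens = ("graph based feature learning framework for keyphrase extraction "
          "learns feature representations from the word graph").split()
print(baseline_keywords(tokens))
```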
