  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Limitations of Principal Component Analysis for Dimensionality-Reduction for Classification of Hyperspectral Data

Cheriyadat, Anil Meerasa 13 December 2003 (has links)
It is popular practice in the remote-sensing community to apply principal component analysis (PCA) to a higher-dimensional feature space to achieve dimensionality reduction. Several factors have led to the popularity of PCA, including its simplicity, ease of use, availability in popular remote-sensing packages, and optimality in the mean-square-error sense. These advantages have prompted the remote-sensing research community to overlook many limitations of PCA when it is used as a dimensionality-reduction tool for classification and target-detection applications. This thesis addresses the limitations of PCA as a dimensionality-reduction technique for extracting discriminating features from hyperspectral data. Theoretical and experimental analyses are presented to demonstrate that PCA is not necessarily an appropriate feature-extraction method for high-dimensional data when the objective is classification or target recognition. The influence of data-distribution characteristics such as within-class covariance, between-class covariance, and correlation on the PCA transformation is analyzed in this thesis. The classification accuracies obtained using PCA features are compared with those obtained using other feature-extraction methods, such as variants of the Karhunen-Loève transform and greedy search algorithms in the spectral and wavelet domains. Experimental analyses are conducted for both two-class and multi-class cases. The classification accuracies obtained from higher-order PCA components are compared with the classification accuracies of features extracted from different regions of the spectrum. This comparative study of the classification accuracies obtained using the above feature-extraction methods confirms that PCA may not be an appropriate tool for dimensionality reduction of certain hyperspectral data distributions when the objective is classification or target recognition.
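The abstract's central claim — that the variance-maximizing direction need not be the discriminating one — can be illustrated with a small synthetic sketch. This is my own construction, not from the thesis, and Fisher's linear discriminant stands in here for the supervised alternatives the thesis evaluates: when within-class variance along an uninformative axis dominates, PCA's first component ignores the axis that separates the classes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes separated along axis 0, with large shared variance along axis 1.
n = 500
X0 = rng.normal([0.0, 0.0], [0.5, 5.0], size=(n, 2))
X1 = rng.normal([2.0, 0.0], [0.5, 5.0], size=(n, 2))
X = np.vstack([X0, X1])

# PCA: top eigenvector of the pooled covariance (variance-maximizing,
# label-blind) -- it locks onto the noisy but uninformative axis 1.
Xc = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(Xc.T))
pc1 = evecs[:, np.argmax(evals)]

# Fisher (LDA) direction: maximizes between-class relative to within-class
# scatter, so it concentrates on the discriminating axis 0.
Sw = np.cov(X0.T) + np.cov(X1.T)
w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))
w /= np.linalg.norm(w)

print(abs(pc1[1]) > abs(pc1[0]))  # PC1 aligns with the uninformative axis
print(abs(w[0]) > abs(w[1]))      # LDA aligns with the discriminating axis
```

Projecting onto PC1 here would discard nearly all class information, which is the failure mode the thesis analyzes in terms of within-class and between-class covariance.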
42

Characterizing Dimensionality Reduction Algorithm Performance in terms of Data Set Aspects

Sulecki, Nathan 08 May 2017 (has links)
No description available.
43

DIMENSIONALITY REDUCTION FOR DATA DRIVEN PROCESS MODELING

DWIVEDI, SAURABH January 2003 (has links)
No description available.
44

High Level Design Methodology for Reconfigurable Systems

Ding, Mingwei January 2005 (has links)
No description available.
45

AN EVALUATION OF DIMENSIONALITY REDUCTION ON CELL FORMATION EFFICACY

Sharma, Vikas Manesh 28 August 2007 (has links)
No description available.
46

Multi-Platform Genomic Data Fusion with Integrative Deep Learning

Oni, Olatunji January 2019 (has links)
The abundance of next-generation sequencing (NGS) data has encouraged the adoption of machine learning methods to aid in the diagnosis and treatment of human disease. In particular, the last decade has shown the extensive use of predictive analytics in cancer research due to the prevalence of rich cellular descriptions of genetic and transcriptomic profiles of cancer cells. Despite the availability of wide-ranging forms of genomic data, few predictive models are designed to leverage multidimensional data sources. In this paper, we introduce a deep learning approach using neural network based information fusion to facilitate the integration of multi-platform genomic data, and the prediction of cancer cell sub-class. We propose the dGMU (deep gated multimodal unit), a series of multiplicative gates that can learn intermediate representations between multi-platform genomic data and improve cancer cell stratification. We also provide a framework for interpretable dimensionality reduction and assess several methods that visualize and explain the decisions of the underlying model. Experimental results on nine cancer types and four forms of NGS data (copy number variation, simple nucleotide variation, RNA expression, and miRNA expression) showed that the dGMU model improved the classification agreement of unimodal approaches and outperformed other fusion strategies in class accuracy. The results indicate that deep learning architectures based on multiplicative gates have the potential to expedite representation learning and knowledge integration in the study of cancer pathogenesis. / Thesis / Master of Science (MSc)
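The abstract does not give the dGMU's equations, so the following is only a minimal sketch of a two-modality gated unit in the spirit of the published gated-multimodal-unit pattern it builds on: a sigmoid gate learns, per hidden dimension, how much each modality's transformed representation contributes to the fused output. All dimensions, weight initializations, and modality names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GatedMultimodalUnit:
    """Two-modality gated fusion sketch (forward pass only, no training)."""

    def __init__(self, d1, d2, d_hidden, rng):
        self.W1 = rng.normal(0, 0.1, (d_hidden, d1))        # e.g. CNV features
        self.W2 = rng.normal(0, 0.1, (d_hidden, d2))        # e.g. RNA expression
        self.Wz = rng.normal(0, 0.1, (d_hidden, d1 + d2))   # gate weights

    def forward(self, x1, x2):
        h1 = np.tanh(self.W1 @ x1)                  # modality 1 representation
        h2 = np.tanh(self.W2 @ x2)                  # modality 2 representation
        z = sigmoid(self.Wz @ np.concatenate([x1, x2]))
        return z * h1 + (1.0 - z) * h2              # gated convex combination

gmu = GatedMultimodalUnit(d1=20, d2=30, d_hidden=8, rng=rng)
h = gmu.forward(rng.normal(size=20), rng.normal(size=30))
print(h.shape)   # (8,)
```

Stacking such units and training the gates end-to-end is what lets the model learn which data platform to trust per sample, rather than fixing the fusion rule in advance.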
47

Andromeda in Education: Studies on Student Collaboration and Insight Generation with Interactive Dimensionality Reduction

Taylor, Mia Rachel 04 October 2022 (has links)
Andromeda is an interactive visualization tool that projects high-dimensional data into a scatterplot-like visualization using Weighted Multidimensional Scaling (WMDS). The visualization can be explored through surface-level interaction (viewing data values), parametric interaction (altering underlying parameterizations), and observation-level interaction (directly interacting with projected points). This thesis presents analyses of the collaborative utility of Andromeda in a middle school class and of the insights college-level students generate when using Andromeda. The first study discusses how a middle school class collaboratively used Andromeda to explore and compare their engineering designs. The students analyzed their designs, represented as high-dimensional data, as a class. This study shows promise for introducing collaborative data analysis to middle school students in conjunction with other technical concepts such as the engineering design process. Participants in the study on college-level students were given a version of Andromeda with access to different interactions and were asked to generate insights on a dataset. By applying a novel visualization-evaluation methodology to students' natural-language insights, the results of this study indicate that students use different vocabulary supported by the interactions available to them, but not equally. The implications, as well as limitations, of these two studies are further discussed. / Master of Science / Data is often high-dimensional. A good example of this is a spreadsheet with many columns. Visualizing high-dimensional data is a difficult task because it must capture all information in 2 or 3 dimensions. Andromeda is a tool that can project high-dimensional data into a scatterplot-like visualization. Data points that are considered similar are plotted near each other, and dissimilar points far apart. Users can alter how important certain parts of the data are to the plotting algorithm, as well as move points directly, to update the display based on the user-specified layout. These interactions within Andromeda allow data analysts to explore high-dimensional data based on their personal sensemaking processes. As high-dimensional thinking and exploratory data analysis are introduced into more classrooms, it is important to understand the ways in which students analyze high-dimensional data. To address this, this thesis presents two studies. The first study discusses how a middle school class used Andromeda for their engineering design assignments. The results indicate that using Andromeda in a collaborative way enriched the students' learning experience. The second study analyzes how college-level students, when given access to different interaction types in Andromeda, generate insights into a dataset. Students use different vocabulary supported by the interactions available to them, but not equally. The implications, as well as limitations, of these two studies are further discussed.
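The WMDS idea behind Andromeda can be sketched briefly: 2-D positions are optimized so that their pairwise distances match attribute-weighted high-dimensional distances, and the weight vector is what parametric interaction manipulates. This plain gradient-descent solver is my assumption for illustration, not Andromeda's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_dists(X, w):
    """Attribute-weighted high-dimensional pairwise distances d_w(i, j)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((w * diff**2).sum(-1))

def wmds(X, w, n_iter=500, lr=0.01):
    """Place points in 2-D so low-D distances match the weighted high-D ones.

    Minimizes the squared stress by gradient descent on the 2-D positions P.
    """
    n = X.shape[0]
    D = weighted_dists(X, w)
    P = rng.normal(scale=0.1, size=(n, 2))
    for _ in range(n_iter):
        diff = P[:, None, :] - P[None, :, :]
        d = np.sqrt((diff**2).sum(-1)) + np.eye(n)   # eye avoids divide-by-zero
        g = ((d - D - np.eye(n)) / d)[:, :, None] * diff
        P -= lr * g.sum(axis=1)                      # stress gradient step
    return P

X = rng.normal(size=(15, 6))            # 15 points, 6 attributes
w = np.array([1, 1, 1, 0.1, 0.1, 0.1])  # user up-weights the first three
P = wmds(X, w / w.sum())
print(P.shape)   # (15, 2)
```

Raising a weight makes differences in that attribute count more toward "similarity," so the layout reorganizes around it; observation-level interaction corresponds to the inverse problem of recovering `w` from user-moved points.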
48

Psychology in the Field of Being: Merleau-Ponty, Ontology and Social Constructionism.

Burkitt, Ian January 2003 (has links)
In this paper I take up the various ontological positions forwarded in social constructionism. While acknowledging its advances over other approaches to psychology, I nevertheless argue that the various ontological positions create confusion over the nature of human perception and the sensible realization of a world that does not rest wholly in language. Using the phenomenology of Merleau-Ponty, I argue for a more fundamental ontology that grasps the relation of the whole human being to the world. Essential to this are the metaphors of 'field of Being', 'dimensionality' and 'transformation'. The field of Being is realized in bodily perception of the sensible world, which is then articulated and transformed in linguistic expression. This has to be understood as a naturally embodied topography as well as a culturally and historically articulated and transformed space. I therefore present these metaphors as an extension of constructionism, seeing psychological phenomena as existing more broadly in a field of Being.
49

Measuring Approach-Avoidance Motivation: Expanding the Dimensionality and the Implied Outcomes Problem

Scott, Mark David 16 January 2012 (has links)
The current study sought to examine how best to fully represent and measure approach-avoidance motivational orientation using self-reports. Participants responded to a variety of existing, revised, and new scales across the theoretical spectrum of approach-avoidance motivation. Exploratory factor analyses were conducted to identify the items to be retained for evaluating the adequacy of competing confirmatory measurement structures. Overall results supported the validity of the second-order approach-avoidance overarching framework and indicated that the use of items with clear specification of reward/punishment context improves the psychometric properties of approach-avoidance scales. Moreover, the newly developed scales reflecting constructs that represent increasing non-gains via approach and increasing non-losses via avoidance meaningfully expanded the approach-avoidance construct space. It also appeared that the proposed four-dimensional model of approach-avoidance is a viable alternative measurement structure. Finally, the current results suggested that contamination by implied outcomes does not invalidate approach-avoidance scales where reward/punishment context is specified. Implications and recommendations for future research are discussed. / Ph. D.
50

Novel computationally intelligent machine learning algorithms for data mining and knowledge discovery

Gheyas, Iffat A. January 2009 (has links)
This thesis addresses three major issues in data mining: feature subset selection in high-dimensionality domains, plausible reconstruction of incomplete data in cross-sectional applications, and forecasting of univariate time series. For the automated selection of an optimal subset of features in real time, we present an improved hybrid algorithm, SAGA. SAGA combines simulated annealing's ability to avoid being trapped in local minima with the very high convergence rate of the genetic-algorithm crossover operator, the strong local-search ability of greedy algorithms, and the high computational efficiency of generalized regression neural networks (GRNNs). For imputing missing values and forecasting univariate time series, we propose a homogeneous neural-network ensemble: a committee of GRNNs trained on different subsets of features generated by SAGA, whose base-classifier predictions are combined by a fusion rule. This approach makes it possible to discover all important interrelations between the values of the target variable and the input features. The proposed ensemble scheme has two innovative features that set it apart from other ensemble learning algorithms: (1) the ensemble makeup is optimized automatically by SAGA; and (2) GRNNs serve as both the base classifiers and the top-level combiner. Because the combiner is a GRNN, the proposed ensemble is a dynamic weighting scheme, in contrast to existing ensemble approaches that rely on simple voting or static weighting. The basic idea of the dynamic weighting procedure is to give a higher reliability weight to those training scenarios that are similar to the new one. The simulation results demonstrate the validity of the proposed ensemble model.
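A GRNN is essentially Nadaraya-Watson kernel regression, which makes the abstract's "dynamic weighting" point concrete: every training pattern contributes to the prediction with a Gaussian weight on its distance to the query, so similar scenarios count more. A minimal sketch (the smoothing parameter and synthetic data are illustrative assumptions, not from the thesis):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.2):
    """Generalized Regression Neural Network (Nadaraya-Watson form).

    Prediction is a similarity-weighted average of training targets:
    closer patterns get exponentially larger weights -- the dynamic
    weighting the abstract contrasts with static voting schemes.
    """
    # Squared distances: (n_query, n_train)
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma**2))      # pattern-layer activations
    return (K @ y_train) / K.sum(axis=1)    # weighted average of targets

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
y_hat = grnn_predict(X, y, X[:5])
print(y_hat.shape)   # (5,)
```

There is no iterative training: the "network" memorizes the patterns, and `sigma` is the only free parameter, which is what gives the GRNN the computational efficiency the abstract cites.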
