41 |
Limitations of Principal Component Analysis for Dimensionality-Reduction for Classification of Hyperspectral Data. Cheriyadat, Anil Meerasa, 13 December 2003 (has links)
It is a popular practice in the remote-sensing community to apply principal component analysis (PCA) to a higher-dimensional feature space to achieve dimensionality-reduction. Several factors have led to the popularity of PCA, including its simplicity, ease of use, availability in popular remote-sensing packages, and optimality in the mean-square-error sense. These advantages have prompted the remote-sensing research community to overlook many limitations of PCA when it is used as a dimensionality-reduction tool for classification and target-detection applications. This thesis addresses the limitations of PCA when used as a dimensionality-reduction technique for extracting discriminating features from hyperspectral data. Theoretical and experimental analyses are presented to demonstrate that PCA is not necessarily an appropriate feature-extraction method for high-dimensional data when the objective is classification or target-recognition. The influence of certain data-distribution characteristics, such as within-class covariance, between-class covariance, and correlation, on the PCA transformation is analyzed in this thesis. The classification accuracies obtained using PCA features are compared to accuracies obtained using other feature-extraction methods, such as variants of the Karhunen-Loève transform and greedy search algorithms in the spectral and wavelet domains. Experimental analyses are conducted for both two-class and multi-class cases. The classification accuracies obtained from higher-order PCA components are compared to the classification accuracies of features extracted from different regions of the spectrum. The comparative study of the classification accuracies obtained using the above feature-extraction methods ascertains that PCA may not be an appropriate tool for dimensionality-reduction of certain hyperspectral data-distributions when the objective is classification or target-recognition.
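To make the contrast concrete, here is a minimal sketch, not drawn from the thesis, of the kind of comparison it describes: a synthetic two-class dataset whose class separation lies along a low-variance band is reduced to a single feature by PCA and by Fisher's linear discriminant analysis before a simple classifier. The dataset, dimensions, and choice of classifier are all illustrative assumptions, and LDA stands in here for the supervised alternatives the thesis evaluates.

```python
# Minimal sketch (not the thesis code): PCA keeps high-variance directions,
# which can discard the band that actually separates the classes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 500, 50                                        # hypothetical "spectral" dimensionality
# Large within-class variance in the first bands, tiny class shift in the last band.
X = rng.normal(scale=5.0, size=(2 * n, d)) * np.linspace(3.0, 0.1, d)
y = np.repeat([0, 1], n)
X[y == 1, -1] += 1.0                                  # between-class separation hides in a low-variance band

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy(reducer):
    Z_tr = reducer.fit_transform(X_tr, y_tr)          # PCA ignores y; LDA uses it
    Z_te = reducer.transform(X_te)
    return KNeighborsClassifier().fit(Z_tr, y_tr).score(Z_te, y_te)

print("PCA (1 component):", accuracy(PCA(n_components=1)))                       # near chance
print("LDA (1 component):", accuracy(LinearDiscriminantAnalysis(n_components=1)))  # much higher
```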
42 |
Characterizing Dimensionality Reduction Algorithm Performance in terms of Data Set Aspects. Sulecki, Nathan, 08 May 2017 (has links)
No description available.
43 |
DIMENSIONALITY REDUCTION FOR DATA DRIVEN PROCESS MODELING. DWIVEDI, SAURABH, January 2003 (has links)
No description available.
44 |
High Level Design Methodology for Reconfigurable Systems. Ding, Mingwei, January 2005 (has links)
No description available.
45 |
AN EVALUATION OF DIMENSIONALITY REDUCTION ON CELL FORMATION EFFICACY. Sharma, Vikas Manesh, 28 August 2007 (has links)
No description available.
46 |
Multi-Platform Genomic Data Fusion with Integrative Deep Learning. Oni, Olatunji, January 2019 (has links)
The abundance of next-generation sequencing (NGS) data has encouraged the adoption of machine learning methods to aid in the diagnosis and treatment of human disease. In particular, the last decade has shown the extensive use of predictive analytics in cancer research due to the prevalence of rich cellular descriptions of genetic and transcriptomic profiles of cancer cells. Despite the availability of wide-ranging forms of genomic data, few predictive models are designed to leverage multidimensional data sources. In this paper, we introduce a deep learning approach using neural-network-based information fusion to facilitate the integration of multi-platform genomic data, and the prediction of cancer cell sub-classes. We propose the dGMU (deep gated multimodal unit), a series of multiplicative gates that can learn intermediate representations between multi-platform genomic data and improve cancer cell stratification. We also provide a framework for interpretable dimensionality reduction and assess several methods that visualize and explain the decisions of the underlying model. Experimental results on nine cancer types and four forms of NGS data (copy number variation, simple nucleotide variation, RNA expression, and miRNA expression) showed that the dGMU model improved the classification agreement of unimodal approaches and outperformed other fusion strategies in class accuracy. The results indicate that deep learning architectures based on multiplicative gates have the potential to expedite representation learning and knowledge integration in the study of cancer pathogenesis. / Thesis / Master of Science (MSc)
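The abstract does not spell out the dGMU's internals; as a rough sketch of the multiplicative-gating idea it builds on, the following PyTorch module fuses two genomic modalities with a learned sigmoid gate. The layer sizes, modality names, and restriction to two modalities are assumptions for illustration, not the thesis architecture.

```python
# Assumed sketch (not the thesis implementation) of a two-modality gated fusion
# unit: a sigmoid gate decides, per feature, how much each modality contributes.
import torch
import torch.nn as nn

class GatedFusionUnit(nn.Module):
    def __init__(self, dim_rna, dim_cnv, dim_hidden, n_classes):
        super().__init__()
        self.h_rna = nn.Linear(dim_rna, dim_hidden)            # modality-specific encoders
        self.h_cnv = nn.Linear(dim_cnv, dim_hidden)
        self.gate = nn.Linear(dim_rna + dim_cnv, dim_hidden)   # learns the mixing weights
        self.classifier = nn.Linear(dim_hidden, n_classes)

    def forward(self, x_rna, x_cnv):
        h1 = torch.tanh(self.h_rna(x_rna))
        h2 = torch.tanh(self.h_cnv(x_cnv))
        z = torch.sigmoid(self.gate(torch.cat([x_rna, x_cnv], dim=-1)))
        fused = z * h1 + (1.0 - z) * h2                        # multiplicative gating
        return self.classifier(fused)

# Hypothetical usage: RNA expression (2000 genes) and copy-number features (500 segments).
model = GatedFusionUnit(dim_rna=2000, dim_cnv=500, dim_hidden=128, n_classes=9)
logits = model(torch.randn(16, 2000), torch.randn(16, 500))
print(logits.shape)  # torch.Size([16, 9])
```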
47 |
Psychology in the Field of Being: Merleau-Ponty, Ontology and Social Constructionism. Burkitt, Ian, January 2003 (has links)
In this paper I take up the various ontological positions forwarded in social constructionism. While acknowledging its advances over other approaches to psychology, I nevertheless argue that the various ontological positions create confusion over the nature of human perception and the sensible realization of a world that does not rest wholly in language. Using the phenomenology of Merleau-Ponty, I argue for a more fundamental ontology that grasps the relation of the whole human being to the world. Essential to this are the metaphors of 'field of Being', 'dimensionality' and 'transformation'. The field of Being is realized in bodily perception of the sensible world, which is then articulated and transformed in linguistic expression. This has to be understood as a naturally embodied topography as well as a culturally and historically articulated and transformed space. I therefore present these metaphors as an extension of constructionism, seeing psychological phenomena as existing more broadly in a field of Being.
48 |
Measuring Approach-Avoidance Motivation: Expanding the Dimensionality and the Implied Outcomes Problem. Scott, Mark David, 16 January 2012 (has links)
The current study sought to examine how best to fully represent and measure approach-avoidance motivational orientation using self-reports. Participants responded to a variety of existing, revised, and new scales across the theoretical spectrum of approach-avoidance motivation. Exploratory factor analyses were conducted to identify the items to be retained for evaluating the adequacy of competing confirmatory measurement structures. Overall results supported the validity of the second-order approach-avoidance overarching framework and indicated that the use of items with clear specification of reward/punishment context improves the psychometric properties of approach-avoidance scales. Moreover, the newly developed scales reflecting constructs that represent increasing non-gains via approach and increasing non-losses via avoidance meaningfully expanded the approach-avoidance construct space. It also appeared that the proposed four-dimensional model of approach-avoidance is a viable alternative measurement structure. Finally, the current results suggested that contamination by implied outcomes does not invalidate approach-avoidance scales where reward/punishment context is specified. Implications and recommendations for future research are discussed. / Ph. D.
49 |
Andromeda in Education: Studies on Student Collaboration and Insight Generation with Interactive Dimensionality Reduction. Taylor, Mia Rachel, 04 October 2022 (has links)
Andromeda is an interactive visualization tool that projects high-dimensional data into a scatterplot-like visualization using Weighted Multidimensional Scaling (WMDS). The visualization can be explored through surface-level interaction (viewing data values), parametric interaction (altering underlying parameterizations), and observation-level interaction (directly interacting with projected points). This thesis presents analyses of the collaborative utility of Andromeda in a middle school class and the insights college-level students generate when using Andromeda. The first study discusses how a middle school class collaboratively used Andromeda to explore and compare their engineering designs. The students analyzed their designs, represented as high-dimensional data, as a class. This study shows promise for introducing collaborative data analysis to middle school students in conjunction with other technical concepts such as the engineering design process. Participants in the study on college-level students were given a version of Andromeda, with access to different interactions, and were asked to generate insights on a dataset. By applying a novel visualization evaluation methodology to students' natural-language insights, the results of this study indicate that students use different vocabulary supported by the interactions available to them, but not equally across interactions. The implications, as well as limitations, of these two studies are further discussed. / Master of Science / Data is often high-dimensional. A good example of this is a spreadsheet with many columns. Visualizing high-dimensional data is a difficult task because the visualization must capture all of the information in 2 or 3 dimensions. Andromeda is a tool that can project high-dimensional data into a scatterplot-like visualization. Data points that are considered similar are plotted near each other and vice versa. Users can alter how important certain parts of the data are to the plotting algorithm as well as move points directly to update the display based on the user-specified layout. These interactions within Andromeda allow data analysts to explore high-dimensional data based on their personal sensemaking processes. As high-dimensional thinking and exploratory data analysis are introduced into more classrooms, it is important to understand the ways in which students analyze high-dimensional data. To address this, this thesis presents two studies. The first study discusses how a middle school class used Andromeda for their engineering design assignments. The results indicate that using Andromeda in a collaborative way enriched the students' learning experience. The second study analyzes how college-level students, when given access to different interaction types in Andromeda, generate insights into a dataset. Students use different vocabulary supported by the interactions available to them, but not equally across interactions. The implications, as well as limitations, of these two studies are further discussed.
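Andromeda's exact WMDS formulation is not reproduced here; the sketch below shows one plausible form of the parametric step, in which per-attribute weights reshape the high-dimensional distances before a standard MDS solver lays the points out in 2D. The data, weight values, and use of scikit-learn's MDS are assumptions, not the tool's implementation.

```python
# Minimal sketch (an assumption, not Andromeda's code) of weighted MDS:
# per-attribute weights change the high-dimensional distances, and MDS lays
# the points out in 2D to respect those weighted distances.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 6))                          # 30 hypothetical items, 6 attributes
weights = np.array([3.0, 1.0, 1.0, 0.2, 0.2, 0.2])    # "parametric interaction": emphasize attribute 0
weights = weights / weights.sum()

# Weighted Euclidean distances between all pairs of points.
diff = X[:, None, :] - X[None, :, :]
D = np.sqrt((weights * diff ** 2).sum(axis=-1))

embedding = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = embedding.fit_transform(D)                   # 2D layout driven by the weighted distances
print(coords.shape)                                   # (30, 2)
```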
50 |
Image reconstruction through multiple 1D approximations. Wang, Bohan, 10 January 2025 (links)
Function approximation is a fundamental aspect of computational models and machine learning, often relying on neural networks due to their ability to effectively model complex functions and relationships. However, neural networks can be computationally intensive and lack interpretability. In this thesis, we explore an alternative approach to approximating two-dimensional (2D) functions by decomposing them into multiple one-dimensional (1D) approximations. Our method aims to enhance computational efficiency and interpretability while maintaining high approximation quality. We propose a framework that approximates 2D functions through a series of 1D interpolations over projections, combined with greedy sampling. By generating uniformly distributed projections and projecting pixel coordinates onto them, we form 1D curves and use interpolation to predict the values of the original function. Linear interpolation is employed for its simplicity and speed in estimating values between sampled points. A greedy algorithm is used to select sampling points that significantly reduce approximation error, optimizing the sampling strategy. We conducted extensive experiments on a set of images to evaluate the performance of our method. Metrics such as Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) were used to assess reconstruction quality. Additionally, we ran a neural-network model and several other traditional models for comparison. Our results demonstrate that the proposed method offers a different focus from other methods, excelling in particular at restoring high-contrast details in images. The findings suggest that multiple 1D approximations can reconstruct 2D functions efficiently. Contrary to our initial intuition, the results reveal that increasing the number of sample points has a more significant impact on reconstruction quality than increasing the number of projections. Specifically, we observed that under the same parameter count, using as many sample points as possible led to better reconstruction results. Increasing the number of projections, while beneficial for reducing artifacts, has a less pronounced effect than increasing the number of sample points. However, adding more projections can improve edge clarity and enhance the accuracy of each step in the greedy selection process, which helps in achieving better sample-point locations during reconstruction. Additionally, we tested various sampling methods, such as uniform sampling and greedy MSE selection, and found that greedy selection of sample points based on MSE yielded significantly improved clarity, particularly around key features of the image. The experiments also showed that incorporating spatial diversity and edge information into the selection process did not always yield better results, highlighting the importance of selecting sample points that balance both edge and surrounding details. This work contributes to the field by providing an alternative method for function approximation that addresses some limitations of neural networks, particularly in terms of computational efficiency. Future work includes extending the approach to higher-dimensional data, exploring advanced interpolation techniques, and integrating the method with machine learning models to balance performance and transparency. Additionally, further research is needed to optimize the balance between projections and sample points to achieve the best reconstruction quality under different parameter constraints.
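As an illustration of the reconstruction loop described above, the sketch below projects pixel coordinates onto a few directions, linearly interpolates sampled pixel values along each projection, and averages the per-direction predictions. It uses uniform rather than greedy sampling and a synthetic image, so it is an assumed simplification of the thesis method, not its implementation.

```python
# Rough sketch (assumed details): reconstruct a 2D image from several 1D
# approximations by projecting pixel coordinates, interpolating along each
# projection from sampled pixels, and averaging the predictions.
import numpy as np

rng = np.random.default_rng(2)
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
image = np.sin(xx / 8.0) + np.cos(yy / 11.0)           # stand-in for a real image
coords = np.stack([xx.ravel(), yy.ravel()], axis=1).astype(float)

n_projections, n_samples = 8, 400                       # hypothetical budget
angles = np.linspace(0, np.pi, n_projections, endpoint=False)
sample_idx = rng.choice(coords.shape[0], n_samples, replace=False)  # uniform stand-in for greedy MSE selection

prediction = np.zeros(coords.shape[0])
for theta in angles:
    direction = np.array([np.cos(theta), np.sin(theta)])
    t_all = coords @ direction                           # 1D position of every pixel on this projection
    t_s = t_all[sample_idx]
    order = np.argsort(t_s)
    # Linear interpolation along this projection from the sampled pixel values.
    prediction += np.interp(t_all, t_s[order], image.ravel()[sample_idx][order])
prediction /= n_projections

mse = np.mean((prediction - image.ravel()) ** 2)
print(f"MSE with {n_projections} projections and {n_samples} samples: {mse:.4f}")
```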