  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
141

SqueezeFit Linear Program: Fast and Robust Label-aware Dimensionality Reduction

Lu, Tien-hsin 01 October 2020 (has links)
No description available.
142

Advances on Dimension Reduction for Univariate and Multivariate Time Series

Mahappu Kankanamge, Tharindu Priyan De Alwis 01 August 2022 (has links) (PDF)
Advances in modern technologies have led to an abundance of high-dimensional time series data in many fields, including finance, economics, health, engineering, and meteorology. This causes the “curse of dimensionality” problem in both univariate and multivariate time series data. The main objective of time series analysis is to make inferences about the conditional distributions. There are some methods in the literature to estimate the conditional mean and conditional variance functions in time series, but most are inefficient, computationally intensive, or suffer from overparameterization. We propose dimension reduction techniques to address the curse of dimensionality in high-dimensional time series data.
For high-dimensional matrix-valued time series data, there are a limited number of methods in the literature that can preserve the matrix structure and significantly reduce the number of parameters (Samadi, 2014; Chen et al., 2021). However, those models cannot distinguish between relevant and irrelevant information, and still suffer from overparameterization. We propose a novel dimension reduction technique for matrix-variate time series data called the "envelope matrix autoregressive model" (EMAR), which offers substantial dimension reduction and links the mean function and the covariance matrix of the model through the minimal reducing subspace of the covariance matrix. The proposed model can identify and remove irrelevant information, and can achieve substantial efficiency gains by significantly reducing the total number of parameters. We derive the asymptotic properties of the proposed maximum likelihood estimators of the EMAR model. Extensive simulation studies and a real data analysis are conducted to corroborate our theoretical results and to illustrate the finite sample performance of the proposed EMAR model.
For univariate time series, we propose sufficient dimension reduction (SDR) methods based on integral transformation approaches that preserve sufficient information about the response. In particular, we use the Fourier and Convolution transformation methods (FM and CM) to perform sufficient dimension reduction in univariate time series and estimate the time series central subspace (TS-CS), the time series mean subspace (TS-CMS), and the time series variance subspace (TS-CVS). Using the FM and CM procedures and some distributional assumptions, we derive candidate matrices that can fully recover the TS-CS, TS-CMS, and TS-CVS, and propose explicit estimates of the candidate matrices. The asymptotic properties of the proposed estimators are established under both normality and non-normality assumptions. Moreover, we develop data-driven methods to estimate the dimension of the time series central subspaces as well as the lag order. Our simulation results and real data analyses reveal that the proposed methods are not only significantly more efficient and accurate but also offer substantial computational savings compared to existing methods. Finally, we develop an R package entitled “sdrt” that implements the FM and CM procedures for estimating sufficient dimension reduction subspaces in univariate time series.
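The general recipe behind subspace estimators like those above — build a candidate matrix whose column space sits inside the central subspace, then take its leading eigenvectors — can be illustrated with a classic sliced-inverse-regression candidate matrix on lagged predictors. This is a simplified sketch of the family of methods, not the thesis's Fourier/Convolution estimators; the function name and defaults are illustrative assumptions.

```python
import numpy as np

def sir_candidate_matrix(y, p=3, n_slices=5):
    """Illustrative sliced-inverse-regression (SIR) candidate matrix for a
    time series central subspace, using the p lagged values as predictors.
    (A simplification for illustration -- not the FM/CM estimators.)"""
    n = len(y)
    # Lag matrix: row t holds (y[t-1], ..., y[t-p]); response is y[t]
    X = np.column_stack([y[p - j - 1:n - j - 1] for j in range(p)])
    r = y[p:]
    # Standardize the predictors so the candidate matrix is scale-free
    mu, S = X.mean(0), np.cov(X, rowvar=False)
    Si = np.linalg.inv(np.linalg.cholesky(S))
    Z = (X - mu) @ Si.T
    # Slice the response and average the standardized predictors per slice
    order = np.argsort(r)
    slices = np.array_split(order, n_slices)
    M = sum(len(s) / len(r) * np.outer(Z[s].mean(0), Z[s].mean(0))
            for s in slices)
    # Leading eigenvectors of M estimate (a basis of) the central subspace
    vals, vecs = np.linalg.eigh(M)
    return vals[::-1], Si.T @ vecs[:, ::-1]
```

The dimension of the subspace can then be chosen by inspecting the eigenvalue decay, which is the role the data-driven dimension estimators play in the abstract above.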
143

Dimensionality Reduction of Hyperspectral Imagery Using Random Projections

Menon, Vineetha 09 December 2016 (has links)
Hyperspectral imagery is often associated with high storage and transmission costs. Dimensionality reduction aims to reduce the time and space complexity of hyperspectral imagery by projecting data into a low-dimensional space such that all the important information in the data is preserved. Dimensionality-reduction methods based on transforms are widely used and give a data-dependent representation that is unfortunately costly to compute. Recently, there has been a growing interest in data-independent representations for dimensionality reduction; of particular prominence are random projections which are attractive due to their computational efficiency and simplicity of implementation. This dissertation concentrates on exploring the realm of computationally fast and efficient random projections by considering projections based on a random Hadamard matrix. These Hadamard-based projections are offered as an alternative to more widely used random projections based on dense Gaussian matrices. Such Hadamard matrices are then coupled with a fast singular value decomposition in order to implement a two-stage dimensionality reduction that marries the computational benefits of the data-independent random projection to the structure-capturing capability of the data-dependent singular value transform. Finally, random projections are applied in conjunction with nonnegative least squares to provide a computationally lightweight methodology for the well-known spectral-unmixing problem. Overall, it is seen that random projections offer a computationally efficient framework for dimensionality reduction that permits hyperspectral-analysis tasks such as unmixing and classification to be conducted in a lower-dimensional space without sacrificing analysis performance while reducing computational costs significantly.
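The Hadamard-based random projection described above can be sketched as a subsampled randomized Hadamard transform: flip signs randomly, apply the (orthonormal) Hadamard matrix, and keep a random subset of columns. This is a textbook form of the idea under the assumption that the data dimension is a power of two; it is not the dissertation's exact construction.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def srht_project(X, k, rng):
    """Project rows of X (n x d, d a power of 2) to k dimensions with a
    subsampled randomized Hadamard transform: X -> X D H S, where D flips
    signs at random, H is the orthonormal Hadamard matrix, and S keeps a
    random subset of k columns."""
    n, d = X.shape
    D = rng.choice([-1.0, 1.0], size=d)          # random sign flips
    H = hadamard(d) / np.sqrt(d)                 # orthonormal Hadamard
    cols = rng.choice(d, size=k, replace=False)  # subsample k columns
    return np.sqrt(d / k) * (X * D) @ H[:, cols]
```

Because the Hadamard transform needs no explicit dense random matrix and admits a fast recursive implementation, this projection is cheaper to apply than a dense Gaussian projection, which is the computational advantage the abstract highlights.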
144

Identifying cell type-specific proliferation signatures in spatial transcriptomics data and inferring interactions driving tumour growth

Wærn, Felix January 2023 (has links)
Cancer is a dangerous disease caused by mutations in the host's genome that make cells proliferate uncontrollably and disrupt bodily functions. The immune system tries to prevent this, but tumours have methods of disrupting the immune system's ability to combat the cancer. These immunosuppression events can, for example, happen when the immune system interacts with the tumour to recognise it or try to destroy it. Tumours can avoid detection by changing the proteins displayed on the cell surface, or neutralise dangerous immune cells by excreting proteins. This happens within the tumour microenvironment (TME), the immediate surroundings of a tumour, where there is a plethora of different cells both aiding and suppressing the tumour. Some of these cells are not cancer cells but can still aid the tumour because of how the tumour has influenced them, for example through angiogenesis, where new blood vessels are formed that feed the tumour.
The interactions in the TME can be used as a target for immunotherapy, a field of treatments that improves the immune system's own ability to defend against cancer. Immunotherapy can, for example, help the immune system by guiding immune cells towards the tumour. It is therefore essential to understand the complex system of interactions within the TME in order to create new methods of immunotherapy and thus treat cancers more efficiently. Concurrently, new methods of mapping what happens in a tissue have been developed in recent years, namely spatial transcriptomics (ST). ST allows the retrieval of transcriptomic information from cells through sequencing while still retaining spatial information. However, the ST methods that capture the whole transcriptome of the cells and reveal cell-to-cell interactions do not yet have single-cell resolution: they capture multiple cells in each spot, creating a mix of cells in the sequencing.
This mix of cells can be disentangled, and the proportions of each cell type revealed, through the process of deconvolution. Deconvolution works by mapping the single-cell expression profiles of different cell types onto the ST data and determining what proportions of expression from each cell type produce the expression of the mix. This reveals the cellular composition of the microenvironment. But since the interactions in the TME depend on the cells' current expression, we need to deconvolute according to phenotype and not just cell type.
In this project we created a tool that automatically finds phenotypes in the single-cell data and uses those phenotypes to deconvolute ST data. Phenotypes are found using dimensionality reduction methods to differentiate cells according to their contribution to the variability in the data. The resulting deconvoluted data was then used as the foundation for describing the growth of a cancer as a system of phenotype proportions in the tumour microenvironment. From this system a mathematical model was created that predicts the growth and could provide insight into how the phenotypes interact. The tool worked as intended, and the model explains the growth of a tumour in the TME with not just cancer-cell phenotypes but other cell phenotypes as well. However, no new interaction could be discovered by the final model, and no phenotype found could provide us with new insights into the structure of the TME. Our analysis was nevertheless able to identify structures we expect to see in a tumour, even though they might not be so obvious, so an improved version of our tools might be able to find even more details and perhaps new, more subtle interactions.
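The deconvolution step described above amounts to a nonnegative least-squares problem per spot: find proportions p ≥ 0 with Σp = 1 such that the reference signatures mixed by p reproduce the spot's expression. The sketch below solves it with simple projected gradient descent; it is an illustration of the idea under those assumptions, not the tool built in the thesis.

```python
import numpy as np

def deconvolve_spot(spot_expr, signatures, n_iter=2000, lr=None):
    """Estimate cell-type (or phenotype) proportions for one ST spot by
    nonnegative least squares: spot ~ signatures @ p, p >= 0, sum(p) = 1.
    Solved with projected gradient descent (an illustrative sketch)."""
    S = np.asarray(signatures, float)          # genes x types reference matrix
    x = np.asarray(spot_expr, float)           # observed mixed expression
    p = np.full(S.shape[1], 1.0 / S.shape[1])  # start from uniform proportions
    if lr is None:
        lr = 1.0 / np.linalg.norm(S.T @ S, 2)  # safe step from spectral norm
    for _ in range(n_iter):
        grad = S.T @ (S @ p - x)               # gradient of 0.5*||S p - x||^2
        p = np.clip(p - lr * grad, 0.0, None)  # step, then project onto p >= 0
    return p / p.sum()                          # renormalize to proportions
```

Deconvoluting by phenotype rather than cell type, as the project does, simply means the columns of `signatures` are phenotype expression profiles instead of cell-type profiles.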
145

Development of Test Batteries for Diagnostics of Motor Laterality Manifestation - Link between Cerebellar Dominance and Hand Performance

Musálek, Martin January 2012 (has links)
The aim of this study is to contribute to the standardization of new diagnostic tools assessing the motor manifestations of laterality in adults and in children aged 8 to 10 years, both in terms of determining the theoretical concept and selecting appropriate items, and in verifying structural hypotheses concerning the design of acceptable models, including the diagnostic quality of individual parts of the test battery. Moreover, in this study we suggest a new approach to assessing motor laterality manifestation by means of the relationship between cerebellar dominance and hand performance. The first part of this thesis deals with the concept of laterality, its manifestations, and its meaning in non-living systems and living organisms. As a human characteristic, laterality is manifested in a variety of functional and structural asymmetries. This part also discusses ways of diagnosing motor manifestations of laterality and the issue of cerebellar dominance, including its reflection in the form of asymmetry of the physiological extinction syndrome of the upper limbs. The second part focuses on the process of the standardization study, the statistical method of structural equation modelling, and the actual design of the test battery construction. The last part of this thesis presents the results...
146

Essays on the impact of aid types

Fakutiju, Michael Ade 06 August 2021 (has links) (PDF)
The literature has shown that aggregate aid is mostly ineffective (Doucouliagos and Paldam, 2011). However, newer studies of foreign aid show that its effect depends on both the aid type and the donor type (Clemens et al., 2012; Isaksson and Kotsadam, 2018). Thus, the first essay investigates the impact of education aid on educational outcomes. The study uses panel data for 83 developing countries from 2000-2014 to examine World Bank education aid. The results suggest there is no robust evidence that education aid is effective in improving educational outcomes. The paper finds some evidence that aid improves enrollment rates in primary and secondary, but not tertiary, education. The results show that aid's effectiveness is determined, to a large extent, by the type of aid and the economic outcomes the aid targets. Likewise, the second essay examines whether specific types of aid are more effective across different donors. The study uses factor analysis to separate aid flows into interpretable categories: economic purposes, social purposes, and infrastructure. In addition, the study compares three donors: the World Bank, the U.S., and China. Examining the growth effect of each aid type for each donor shows that the impacts depend on the aid type. All the aid types have positive effects irrespective of the donor, though only the U.S. aid types show some improvement in economic growth. Chinese economic aid complements World Bank economic aid, whereas Chinese social aid and World Bank social aid are substitutes. Both studies show that most foreign aid to developing countries is not effective, but disaggregating aid by type can lead to moderate improvements in developing countries.
147

The Strength of Multidimensional Item Response Theory in Exploring Construct Space that is Multidimensional and Correlated

Spencer, Steven Gerry 08 December 2004 (has links) (PDF)
This dissertation compares the parameter estimates obtained from two item response theory (IRT) models: the 1-PL IRT model and the MC1-PL IRT model. Several scenarios were explored in which both unidimensional and multidimensional item-level and personal-level data were used to generate the item responses. The Monte Carlo simulations mirrored the real-life application of the two correlated dimensions of Necessary Operations and Calculations in the basic mathematics domain. In all scenarios, the MC1-PL IRT model showed greater precision in the recovery of the true underlying item difficulty values and person theta values along each primary dimension as well as along a second general order factor. The fit statistics that are generally applied to the 1-PL IRT model were not sensitive to the multidimensional item-level structure, reinforcing the requisite assumption of unidimensionality when applying the 1-PL IRT model.
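The two models being compared have simple closed-form response probabilities. The 1-PL (Rasch) model depends only on the gap between person ability and item difficulty; a compensatory multidimensional 1-PL extends this by letting ability be a vector, with a loading vector selecting which dimensions an item measures. The second function is one common parameterization, shown as an assumption; MIRT parameterizations vary across the literature.

```python
import numpy as np

def rasch_prob(theta, b):
    """1-PL (Rasch) probability of a correct response: a logistic function
    of the difference between person ability theta and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def mc1pl_prob(theta, a, b):
    """Compensatory multidimensional 1-PL: theta is an ability vector and a
    is a fixed 0/1 loading vector selecting the dimensions the item measures
    (one common parameterization, used here for illustration)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(a, theta) - b)))
```

Under the unidimensional model, an item tapping both Necessary Operations and Calculations is forced onto a single theta; the multidimensional form lets the two correlated abilities contribute jointly, which is why the MC1-PL recovered the true parameters more precisely in the simulations above.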
148

Advancing the Effectiveness of Non-Linear Dimensionality Reduction Techniques

Gashler, Michael S. 18 May 2012 (has links) (PDF)
Data that is represented with high dimensionality presents a computational complexity challenge for many existing algorithms. Limiting dimensionality by discarding attributes is sometimes a poor solution to this problem because significant high-level concepts may be encoded in the data across many or all of the attributes. Non-linear dimensionality reduction (NLDR) techniques have been successful with many problems at minimizing dimensionality while preserving intrinsic high-level concepts that are encoded with varying combinations of attributes. Unfortunately, many challenges remain with existing NLDR techniques, including excessive computational requirements, an inability to benefit from prior knowledge, and an inability to handle certain difficult conditions that occur in data with many real-world problems. Further, certain practical factors have limited advancement in NLDR, such as a lack of clarity regarding suitable applications for NLDR, and a general unavailability of efficient implementations of complex algorithms. This dissertation presents a collection of papers that advance the state of NLDR in each of these areas. Contributions of this dissertation include:
• An NLDR algorithm, called Manifold Sculpting, that optimizes its solution using graduated optimization. This approach enables it to obtain better results than methods that only optimize an approximate problem. Additionally, Manifold Sculpting can benefit from prior knowledge about the problem.
• An intelligent neighbor-finding technique called SAFFRON that improves the breadth of problems that existing NLDR techniques can handle.
• A neighborhood refinement technique called CycleCut that further increases the robustness of existing NLDR techniques, and that can work in conjunction with SAFFRON to solve difficult problems.
• Demonstrations of specific applications for NLDR techniques, including the estimation of state within dynamical systems, training of recurrent neural networks, and imputing missing values in data.
• An open-source toolkit containing each of the techniques described in this dissertation, as well as several existing NLDR algorithms and other useful machine learning methods.
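The neighbor-graph family of NLDR methods that Manifold Sculpting, SAFFRON, and CycleCut build on can be illustrated with a textbook Isomap: build a k-nearest-neighbor graph, approximate geodesic distances by shortest paths, and embed with classical MDS. This is a compact illustrative sketch (dense O(n^3) shortest paths), not the dissertation's code.

```python
import numpy as np

def isomap(X, k=6, d=2):
    """Textbook Isomap: k-NN graph -> geodesic distances via Floyd-Warshall
    -> classical MDS embedding into d dimensions. Assumes the k-NN graph
    is connected; illustrative only."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)   # pairwise distances
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):                                    # keep k nearest edges
        for j in np.argsort(D[i])[1:k + 1]:
            G[i, j] = G[j, i] = D[i, j]
    for m in range(n):                                    # shortest-path geodesics
        G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
    J = np.eye(n) - np.ones((n, n)) / n                   # centering matrix
    B = -0.5 * J @ (G ** 2) @ J                           # double-centered Gram
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:d]                      # top-d eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

The failure modes the dissertation targets are visible in this sketch: a bad neighbor choice (the problem SAFFRON addresses) or a short-circuit edge across the manifold (the problem CycleCut addresses) corrupts the geodesic estimates and therefore the embedding.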
149

A Geometric Framework for Transfer Learning Using Manifold Alignment

Wang, Chang 01 September 2010 (has links)
Many machine learning problems involve dealing with a large amount of high-dimensional data across diverse domains. In addition, annotating or labeling the data is expensive, as it involves significant human effort. This dissertation explores a joint solution to both of these problems by exploiting the property that high-dimensional data in real-world application domains often lies on a lower-dimensional structure, whose geometry can be modeled as a graph or manifold. In particular, we propose a set of novel manifold-alignment based approaches for transfer learning. The proposed approaches transfer knowledge across different domains by finding low-dimensional embeddings of the datasets in a common latent space, which simultaneously match corresponding instances while preserving the local or global geometry of each input dataset. We develop a novel two-step transfer learning method called Procrustes alignment. Procrustes alignment first maps the datasets to low-dimensional latent spaces reflecting their intrinsic geometries, and then removes the translational, rotational, and scaling components from one set so that the optimal alignment between the two sets can be achieved. This approach can preserve either global or local geometry depending on the dimensionality reduction approach used in the first step. We also propose a general one-step manifold alignment framework called manifold projections that can find alignments across instances as well as across features, while preserving local domain geometry. We develop and mathematically analyze several extensions of this framework to more challenging situations, including (1) when no correspondences across domains are given; (2) when the global geometry of each input domain needs to be respected; (3) when label information rather than correspondence information is available. A final contribution of this thesis is the study of multiscale methods for manifold alignment. Multiscale alignment automatically generates alignment results at different levels by discovering the shared intrinsic multilevel structures of the given datasets, providing a common representation across all input datasets.
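The second step of the two-step Procrustes alignment above — removing translation, rotation, and scale from one embedding so it matches the other — is the classical orthogonal Procrustes problem, solvable in closed form via an SVD. The sketch below shows that step in isolation, assuming the two embeddings' rows are already in correspondence (as the first, dimensionality-reduction step is meant to arrange).

```python
import numpy as np

def procrustes_align(A, B):
    """Align embedding B to embedding A (rows in correspondence): find the
    scale s and rotation R minimizing ||A - s * B @ R||_F after centering
    both sets, i.e. removing translation, rotation, and scaling."""
    Ac, Bc = A - A.mean(0), B - B.mean(0)     # remove translation
    U, S, Vt = np.linalg.svd(Bc.T @ Ac)       # optimal rotation via SVD
    R = U @ Vt
    s = S.sum() / (Bc ** 2).sum()             # optimal scale in closed form
    return s, R, Bc @ (s * R) + A.mean(0)     # B mapped into A's frame
```

Because the first step may use either a global or a local dimensionality reduction method, this same alignment step preserves whichever kind of geometry the embeddings themselves preserve, as the abstract notes.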
150

Variational Autoencoder and Sensor Fusion for Robust Myoelectric Controls

Currier, Keith A 01 January 2023 (has links) (PDF)
Myoelectric control schemes aim to utilize surface electromyography (EMG) signals, the electric potentials measured directly from skeletal muscles, to control wearable robots such as exoskeletons and prostheses. The main challenge of myoelectric control is to increase and preserve signal quality by minimizing the effect of confounding factors such as muscle fatigue or electrode shift. Current myoelectric control schemes are developed to work in ideal laboratory conditions, but there is a persistent need for control schemes that are more robust and work in real-world environments. Following the manifold hypothesis, complexity in the world can be broken down from a high-dimensional space to a lower-dimensional form or representation that can explain how the higher-dimensional real world operates. From this premise, biological actions and their relevant multimodal signals can be compressed and remain optimally pertinent in both laboratory and non-laboratory settings once the learned representation or manifold is discovered. This thesis outlines a method that incorporates a contrastive variational autoencoder with an integrated classifier on multimodal sensor data to create a compressed latent-space representation that can be used in future myoelectric control schemes.
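The core mechanism of a variational autoencoder — compressing an input into a low-dimensional latent code via the reparameterization trick, then reconstructing it under a KL penalty toward a standard normal prior — can be shown in a minimal forward pass. Linear encoder/decoder and a unit-variance decoder are deliberate simplifications; this is not the contrastive VAE or classifier of the thesis.

```python
import numpy as np

def vae_forward(x, enc_W, dec_W, rng):
    """Minimal VAE forward pass: compress sensor vector x into latent z via
    the reparameterization trick, reconstruct, and return the negative ELBO
    (up to constants). Linear layers are a simplification for illustration."""
    h = x @ enc_W                              # encoder output: Gaussian params
    mu, log_var = np.split(h, 2)               # latent mean and log-variance
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps       # reparameterization trick
    x_hat = z @ dec_W                          # decoder: reconstruct the input
    recon = np.sum((x - x_hat) ** 2)           # reconstruction error
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))  # KL to N(0, I)
    return z, x_hat, recon + kl
```

In a myoelectric pipeline, `x` would concatenate the multimodal sensor channels, and it is the latent `z` — rather than the raw signals — that a downstream classifier or control scheme would consume.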
