191

Design of a Broadband Doherty Power Amplifier with a Graphical User Interface Tool

Gong, Pingzhu 27 October 2022 (has links)
No description available.
192

Systematic Assessment of Structural Features-Based Graph Embedding Methods with Application to Biomedical Networks

Zhu, Xiaoting 04 November 2020 (has links)
No description available.
193

Eigen-analysis of kernel operators for nonlinear dimension reduction and discrimination

Liang, Zhiyu 02 June 2014 (has links)
No description available.
194

An Invariant Embedding Approach to Domain Decomposition

Volzer, Joseph R. 12 June 2014 (has links)
No description available.
195

Linguistic Knowledge Transfer for Enriching Vector Representations

Kim, Joo-Kyung 12 December 2017 (has links)
No description available.
196

Human-in-the-loop Machine Learning: Algorithms and Applications

Liang, Jiongqian 25 September 2018 (has links)
No description available.
197

Novel Frameworks for Mining Heterogeneous and Dynamic Networks

Fang, Chunsheng January 2011 (has links)
No description available.
198

Optimization of a paraffin cooling system for an automated tissue embedding center

Landis, Adam January 2004 (has links)
No description available.
199

LSTM Feature Engineering Through Time Series Similarity Embedding / Aspektkonstruktion för LSTM-nätverk genom inbäddning av tidsserielikheter

Bångerius, Sebastian January 2022 (has links)
Time series prediction has many applications. In settings with simultaneous series (such as weather measurements from multiple stations, or multiple stocks on the stock market), it is not unlikely that series from different measurement origins behave similarly, or respond to the same contextual signals. Training input to a prediction model could be constructed from all simultaneous measurements to try to capture the relations between the measurement origins. A generalized approach is instead to train a prediction model on samples from any individual measurement origin. The data mass is the same in both cases, but the first case uses fewer samples of larger width, while the second uses a higher number of smaller samples. The first, high-width option risks over-fitting as a result of fewer training samples per input variable. The second, general option has no way to learn relations between the measurement origins. Amending the general model with contextual information would allow keeping a high samples-per-variable ratio without losing the ability to take the origin of the measurements into account. This thesis presents a vector embedding method for measurement origins in an environment with a shared response to contextual signals. The embeddings are based on multivariate time series from the origins. The embedding method is inspired by the co-occurrence matrices commonly used in Natural Language Processing. The similarity measures used between the series are Dynamic Time Warping (DTW), step-wise Euclidean distance, and Pearson correlation. The dimensionality of the resulting embeddings is reduced by Principal Component Analysis (PCA) to increase information density and effectively preserve variance in the similarity space.
The created embedding system allows contextualization of samples, akin to the human intuition that comes from knowing where measurements were taken, like knowing what sort of company a stock ticker represents, or what environment a weather station is located in. In the embedded space, embeddings of series from fundamentally similar measurement origins lie close together, so that information about the behavior of one can be generalized to its neighbors. The resulting embeddings agree well with existing clustering methods on a weather dataset, and partially on a financial dataset, and provide a performance improvement for an LSTM network acting on that financial dataset. The similarity embeddings also outperform an embedding layer trained together with the LSTM.
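The abstract above describes the pipeline only in prose. As an illustration of the general idea (pairwise similarity between origins, then PCA over the similarity space), a minimal sketch using one of the named measures, Pearson correlation, might look like the following. All names here (`similarity_embeddings`, the toy data) are hypothetical and not taken from the thesis, which also uses DTW and step-wise Euclidean distance:

```python
import numpy as np

def similarity_embeddings(series, n_components=2):
    """Embed each measurement origin from pairwise Pearson similarity + PCA.

    series: (n_origins, n_timesteps) array, one time series per origin.
    Returns an (n_origins, n_components) embedding matrix.
    """
    sim = np.corrcoef(series)  # pairwise Pearson similarity matrix
    centered = sim - sim.mean(axis=0, keepdims=True)
    # PCA via SVD: project onto the directions of highest variance
    # in similarity space, compressing the embedding dimensionality.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy data: two in-phase sine origins and one phase-shifted (cosine) origin.
t = np.linspace(0, 4 * np.pi, 200)
rng = np.random.default_rng(0)
series = np.stack([
    np.sin(t),
    np.sin(t) + 0.1 * rng.standard_normal(t.size),
    np.cos(t),
])
emb = similarity_embeddings(series)
# The two in-phase origins should land closer together than the shifted one.
```

The point of the sketch is the intuition from the abstract: origins whose rows of the similarity matrix look alike receive nearby embeddings, so behavior can be generalized between neighbors.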
200

Semi-Supervised Half-Quadratic Nonnegative Matrix Factorization for Face Recognition

Alghamdi, Masheal M. 05 1900 (has links)
Face recognition is a challenging problem in computer vision. Difficulties such as slight differences between similar faces of different people, changes in facial expression, lighting and illumination conditions, and pose variations add extra complications to face recognition research. Many algorithms have been devoted to solving the face recognition problem, among which the family of nonnegative matrix factorization (NMF) algorithms has been widely used as a compact data representation method. Different versions of NMF have been proposed. Wang et al. proposed the graph-based semi-supervised nonnegative learning (S2N2L) algorithm, which uses labeled data to construct intrinsic and penalty graphs that enforce separability of the labeled data, leading to greater discriminating power. Moreover, the geometrical structure of labeled and unlabeled data is preserved under the smoothness assumption by creating a similarity graph that conserves the neighboring information for all labeled and unlabeled data. However, S2N2L is sensitive to lighting changes, illumination, and partial occlusion. In this thesis, we propose a Semi-Supervised Half-Quadratic NMF (SSHQNMF) algorithm that combines the benefits of S2N2L and robust NMF by half-quadratic minimization (HQNMF). Our algorithm improves upon S2N2L by replacing the Frobenius norm with a robust M-estimator loss function. A multiplicative update solution for SSHQNMF is derived using half-quadratic (HQ) theory. Extensive experiments on the ORL, Yale-A, and a subset of the PIE data sets, covering nine M-estimator loss functions for both the SSHQNMF and HQNMF algorithms, compare the proposed method with several state-of-the-art supervised and unsupervised algorithms, along with the original S2N2L algorithm, in the context of classification, clustering, and robustness against partial occlusion. The proposed algorithm outperformed the other algorithms.
Furthermore, SSHQNMF with the Maximum Correntropy (MC) loss function obtained the best results for most test cases.
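The abstract builds on the multiplicative-update family of NMF algorithms. As a hedged illustration of that foundation only, the standard Lee-Seung updates for the plain Frobenius objective can be sketched as below; the SSHQNMF-specific M-estimator reweighting and graph regularization terms are not reproduced here, and all names are hypothetical:

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=300, eps=1e-9, seed=0):
    """Plain Frobenius-norm NMF via Lee-Seung multiplicative updates.

    Factorizes V ~ W @ H with W, H nonnegative. The thesis's half-quadratic
    variant additionally reweights residuals with an M-estimator loss; this
    sketch keeps the standard Frobenius objective for brevity.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

# Multiplicative updates keep both factors nonnegative and
# monotonically reduce the Frobenius reconstruction error.
V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_multiplicative(V, rank=5)
err = np.linalg.norm(V - W @ H)
```

Because the updates are element-wise multiplications by nonnegative ratios, nonnegativity is preserved automatically, which is what makes this update style a natural host for the half-quadratic reweighting the thesis describes.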
