  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Estimating and testing of functional data with restrictions

Lee, Sang Han 15 May 2009 (has links)
The objective of this dissertation is to develop a suitable statistical methodology for functional data analysis. Modern advanced technology allows researchers to collect samples as functional data, meaning that the ideal sampling unit is a curve. We consider each functional observation as the result of a digitized recording, or as a realization of a stochastic process. Traditional statistical methodologies often cannot be applied to such functional data because of its high dimensionality. Functional hypothesis testing is the main focus of this dissertation. We propose a testing procedure to determine the significance of the difference between two curves under an order restriction. This work was motivated by a case study involving high-dimensional, high-frequency tidal volume traces from the New York State Psychiatric Institute at Columbia University. The overall goal of the study was to create a model of the clinical panic attack, as it occurs in panic disorder (PD), in normal human subjects. We propose a new dimension reduction technique based on non-negative basis matrix factorization (NBMF) and adapt a one-degree-of-freedom test in the context of multivariate analysis. This is important because other dimension reduction techniques, such as principal component analysis (PCA), cannot be applied in this context due to the order restriction. Another area we investigated is the estimation of functions under shape restrictions such as convexity and/or monotonicity, together with the development of computationally efficient algorithms for solving the constrained least squares problem. This study, too, has potential applications in various fields. For example, in economics the cost function of a perfectly competitive firm must be increasing and convex, and the utility function of an economic agent must be increasing and concave.
We propose an estimation method for a monotone convex function that consists of two sequential shape modification stages: (i) monotone regression via solving a constrained least square problem and (ii) convexification of the monotone regression estimate via solving an associated constrained uniform approximation problem.
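The two sequential stages above can be sketched numerically. The following is a minimal illustration, not the dissertation's actual algorithm: stage (i) uses the classical pool-adjacent-violators algorithm for monotone regression, and stage (ii) approximates convexification by the greatest convex minorant of the monotone fit (the test signal and noise level are invented for the demo):

```python
import numpy as np

def pava(y):
    # Stage (i): pool-adjacent-violators for the least-squares
    # non-decreasing fit (monotone regression)
    blocks = []  # [mean, count] per pooled block
    for v in np.asarray(y, dtype=float):
        blocks.append([v, 1.0])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([np.full(int(w), v) for v, w in blocks])

def convexify(y):
    # Stage (ii): greatest convex minorant -- lower convex hull of the
    # points (i, y_i), interpolated back onto the integer grid
    y = np.asarray(y, dtype=float)
    n = len(y)
    hull = []
    for i in range(n):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            # drop hull point i2 if it lies on or above the chord i1 -> i
            if (y[i2] - y[i1]) * (i - i1) >= (y[i] - y[i1]) * (i2 - i1):
                hull.pop()
            else:
                break
        hull.append(i)
    return np.interp(np.arange(n), hull, y[hull])

rng = np.random.default_rng(0)
x = np.arange(30)
noisy = 0.02 * x ** 2 + rng.normal(scale=0.5, size=30)  # noisy convex increasing signal
fit_mono = pava(noisy)          # monotone, but possibly non-convex
fit_conv = convexify(fit_mono)  # monotone and convex, lies below fit_mono
```

Because the GCM of a non-decreasing sequence is itself non-decreasing, the second stage preserves the monotonicity enforced by the first.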
2

Models of music signals informed by the physics of instruments: application to automatic piano music analysis by non-negative matrix factorization

Rigaud, François 02 December 2013 (has links)
This thesis introduces new models of music signals informed by the physics of the instruments.
While instrumental acoustics and audio signal processing target the modeling of musical tones from different perspectives (modeling of the sound production mechanism vs. modeling of the generic "morphological" features of the sound), this thesis aims at mixing both approaches by constraining generic signal models with acoustics-based information. Thus, it is here intended to design instrument-specific models for applications both in acoustics (learning of parameters related to the design and the tuning) and in signal processing (transcription). In particular, we focus on piano music analysis, for which the tones have the well-known property of inharmonicity. The inclusion of such a property in signal models however makes the optimization harder, and may even damage the performance in tasks such as music transcription when compared to a simpler harmonic model. A major goal of this thesis is thus to have a better understanding of the issues arising from the explicit inclusion of inharmonicity in signal models, and to investigate whether it is really valuable when targeting tasks such as polyphonic music transcription.
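As a rough illustration of the kind of acoustics-based constraint involved, the sketch below builds spectral templates whose partials follow the stiff-string inharmonicity law f_k = k·f0·sqrt(1 + B·k²), then estimates note activations with fixed-dictionary multiplicative NMF updates. The note frequencies, B values, lobe width, and amplitude decay are all illustrative assumptions, not values from the thesis:

```python
import numpy as np

def inharmonic_template(f0, B, n_partials=10, n_bins=512, sr=16000.0):
    # Partials of a stiff string: f_k = k * f0 * sqrt(1 + B * k^2),
    # with an illustrative 1/k amplitude decay and Gaussian spectral lobes
    freqs = np.linspace(0.0, sr / 2.0, n_bins)
    template = np.zeros(n_bins)
    for k in range(1, n_partials + 1):
        fk = k * f0 * np.sqrt(1.0 + B * k * k)
        template += (1.0 / k) * np.exp(-0.5 * ((freqs - fk) / 20.0) ** 2)
    return template / template.sum()

# Dictionary of two hypothetical notes (f0 and B values are illustrative)
W = np.stack([inharmonic_template(110.0, 1e-4),
              inharmonic_template(146.8, 3e-4)], axis=1)     # (bins, notes)

# Synthetic spectrogram: note 0 active in frames 0-1, note 1 in frames 2-3
H_true = np.array([[1.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 1.0]])
V = W @ H_true

# Estimate activations H with the dictionary W held fixed
# (multiplicative updates for Euclidean NMF)
H = np.full_like(H_true, 0.5)
for _ in range(500):
    H *= (W.T @ V) / (W.T @ (W @ H) + 1e-12)
```

The recovered activation matrix H indicates which note is active in each frame, which is the basic mechanism behind NMF-based transcription.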
3

Time Series Forecasting using Temporal Regularized Matrix Factorization and Its Application to Traffic Speed Datasets

Zeng, Jianfeng 30 September 2021 (has links)
No description available.
4

Kernel Methods for Collaborative Filtering

Sun, Xinyuan 25 January 2016 (has links)
The goal of the thesis is to extend kernel methods to matrix factorization (MF) for collaborative filtering (CF). In the current literature, MF methods usually assume that the correlated data lie on a linear hyperplane, which is not always the case. The best-known kernel method is the support vector machine (SVM), which handles linearly non-separable data. In this thesis, we apply kernel methods to MF, embedding the data into a possibly higher-dimensional space and conducting the factorization in that space. To improve kernelized matrix factorization, we apply multi-kernel learning methods to select optimal kernel functions from the candidates and introduce L2-norm regularization in the weight learning process. In our empirical study, we conduct experiments on three real-world datasets. The results suggest that the proposed method can improve prediction accuracy, surpassing state-of-the-art CF methods.
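A minimal sketch of the underlying idea, using a single fixed RBF kernel rather than the thesis's multi-kernel learning: item factors are written as kernel expansions over item side features, V = K·A, so the factorization becomes nonlinear in the features, and A is recovered by ridge-regularized least squares. All sizes and parameters here are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy setup: users, items with 1-D side features (all sizes illustrative)
n_users, n_items, rank = 20, 15, 3
item_feats = rng.uniform(-1, 1, size=(n_items, 1))
U = rng.normal(size=(n_users, rank))                  # user factors, kept fixed

# Kernelized item factors V = K @ A: each latent dimension is a kernel
# expansion over item features instead of a free linear parameter
K = rbf_kernel(item_feats, item_feats, gamma=2.0)
A_true = rng.normal(size=(n_items, rank))
R = U @ (K @ A_true).T                                # fully observed ratings

# Recover A by ridge-regularized least squares (L2 regularization):
# first solve for V in ||R - U V^T||^2, then A from (K + lam I) A = V
lam = 1e-4
V_fit = np.linalg.lstsq(U, R, rcond=None)[0].T        # (n_items, rank)
A_hat = np.linalg.solve(K + lam * np.eye(n_items), V_fit)
R_hat = U @ (K @ A_hat).T
```

Multi-kernel learning would replace the single K by a weighted sum of candidate kernels, with the weights learned under the L2 penalty mentioned in the abstract.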
5

SOURCE APPORTIONMENT OF PM2.5 SHIP EMISSIONS IN HALIFAX, NOVA SCOTIA, CANADA

Toganassova, Dilyara 21 March 2013 (has links)
This study investigated the source attribution of ship emissions to atmospheric particulate matter with a median aerodynamic diameter less than or equal to 2.5 microns (PM2.5) in the port city of Halifax, Nova Scotia, Canada. The USEPA PMF model successfully determined the following sources, with average mass (percentage) contributions: sea salt 0.147 µg m-3 (5.3%), surface dust 0.23 µg m-3 (8.3%), LRT secondary (ammonium sulfate) 0.085 µg m-3 (3.1%), LRT secondary (nitrate and sulfate) 0.107 µg m-3 (3.9%), ship emissions 0.182 µg m-3 (6.6%), and vehicles and re-suspended gypsum 2.015 µg m-3 (72.8%). A good correlation was achieved between predicted and observed PM2.5 total mass, with R2 = 0.83, bias = -0.23, and RMSE = 0.09 µg m-3. In addition, a 2.5-fold (60%) reduction in sulfate was estimated when compared to 2006-2008 government data for Halifax.
6

Recommendation Approaches Using Context-Aware Coupled Matrix Factorization

Agagu, Tosin January 2017 (has links)
In general, recommender systems attempt to estimate user preference based on historical data. A context-aware recommender system attempts to generate better recommendations using contextual information. However, generating recommendations for specific contexts has been challenging because of the difficulty of using contextual information to enhance the capabilities of recommender systems. Several methods have been used to incorporate contextual information into traditional recommendation algorithms. These methods focus on using contextual information to improve general recommendations for users, rather than identifying the different contexts applicable to a user and providing recommendations geared towards those specific contexts. In this thesis, we explore different context-aware recommendation techniques and present our context-aware coupled matrix factorization methods, which use matrix factorization to estimate user preferences and item features under a specific contextual condition. We develop two methods: the first ties user preference across multiple contextual conditions, assuming that user preference remains the same but that the suitability of items differs across contextual conditions, i.e., an item might not be suitable for certain conditions. The second assumes that item suitability remains the same across contextual conditions but that user preference changes. We perform a number of experiments on the last.fm dataset to evaluate our methods, and compare our work to other context-aware recommendation approaches. Our results show that grouping ratings by context and jointly factorizing with common factors improves prediction accuracy.
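The first method's assumption (shared user preference, context-specific item suitability) can be sketched as a joint alternating-least-squares factorization with a common user matrix. This is a generic illustration of coupled factorization, not the thesis's exact model; the context names, sizes, and noiseless ratings are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_items, rank = 30, 20, 4

# Ground truth under the first method's assumption: one shared user
# preference matrix U, but context-specific item suitability matrices V_c
U = rng.normal(size=(n_users, rank))
V_ctx = {c: rng.normal(size=(n_items, rank)) for c in ("home", "work")}
R = {c: U @ V_ctx[c].T for c in V_ctx}   # one rating matrix per context

# Joint alternating least squares: the user factors are shared, so the
# user step stacks the ratings of all contexts side by side
U_hat = rng.normal(size=(n_users, rank))
V_hat = {c: rng.normal(size=(n_items, rank)) for c in V_ctx}
for _ in range(200):
    for c in V_ctx:  # context-specific item step
        V_hat[c] = np.linalg.lstsq(U_hat, R[c], rcond=None)[0].T
    V_stack = np.vstack([V_hat[c] for c in V_ctx])
    R_stack = np.hstack([R[c] for c in V_ctx])
    U_hat = np.linalg.lstsq(V_stack, R_stack.T, rcond=None)[0].T
```

The coupling happens in the user step: because ratings from all contexts are stacked, a single U_hat must explain every context, while each V_hat[c] is free to differ.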
7

Algoritmos de detección de estrellas variables en imágenes astronómicas basados en factorización no negativa de matrices

Berrocal Zapata, Emanuel Antonio January 2015 (has links)
Ingeniero Civil Matemático / This work presents a methodology for detecting variable stars in 21 x 21 pixel stamps taken from astronomical images produced by an image-processing pipeline for the Dark Energy Camera (DECam) telescope. Learning is performed on a sample of images labeled as stars and non-stars, with the goal of detecting new variable stars in unseen images. The astronomical objects observed in the images can be grouped into the categories of variable stars, fixed stars, cosmic rays, false point sources, bad subtractions, supernovae, and unknown objects. For the star-detection task, two algorithms based on NMF (non-negative matrix factorization) and a third based on principal component analysis (PCA) were used. The first NMF algorithm emphasizes speed, while the second emphasizes sparsity of the representation. A common methodology, built on nonlinear optimization and matrix decomposition techniques, was developed for applying these algorithms, and the best of them for detecting stars is identified by means of the ROC curve. Once the best method is chosen, a parameter sensitivity study is carried out to improve star detection and to obtain a more generic representation through the matrix decomposition. The final result is that one of the NMF-based algorithms yields the best classification performance, as concluded from the ROC curve analysis.
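One simple way an NMF basis can drive star detection — a sketch of the general idea, not the thesis's algorithms — is to learn the basis from star stamps only and then score new stamps by their reconstruction error: stamps that look like stars project well onto the basis, while other objects do not. The stamp shapes, sizes, and noise model below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 5x5 "stamps": stars are centered Gaussian blobs plus noise,
# non-stars are pure noise (shapes and noise levels are illustrative)
def make_star():
    g = np.exp(-0.5 * (np.arange(5) - 2.0) ** 2)
    return np.outer(g, g).ravel() + 0.05 * rng.random(25)

stars = np.stack([make_star() for _ in range(40)])   # (40, 25)
noise = rng.random((40, 25))

# Learn an NMF basis H from star stamps only, via multiplicative updates
r = 3
W = rng.random((40, r)) + 0.1
H = rng.random((r, 25)) + 0.1
for _ in range(300):
    H *= (W.T @ stars) / (W.T @ W @ H + 1e-12)
    W *= (stars @ H.T) / (W @ H @ H.T + 1e-12)

def recon_error(x):
    # Residual after (unconstrained) projection onto the learned basis
    h = np.linalg.lstsq(H.T, x, rcond=None)[0]
    return np.linalg.norm(x - H.T @ h)

err_star = np.array([recon_error(s) for s in stars])
err_noise = np.array([recon_error(s) for s in noise])
```

Thresholding the reconstruction error then yields a detector, and sweeping that threshold traces out the ROC curve used in the thesis to compare methods.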
8

Learning to Rank with Contextual Information

Han, Peng 15 November 2021 (has links)
Learning to rank is used in many scenarios, such as disease-gene association, information retrieval, and recommender systems. Improving the prediction accuracy of the ranking model is the main target of existing work. Contextual information has a significant influence on the ranking problem and has been proven effective in increasing the prediction performance of ranking models. We therefore construct similarities for different types of entities that can exploit contextual information uniformly and in an extensible way. Once we have similarities constructed from contextual information, the task becomes how to utilize them in different types of ranking models. In this thesis, we propose four algorithms for learning to rank with contextual information. First, to refine the matrix factorization framework, we propose an area under the ROC curve (AUC) loss to overcome the sparsity problem; clustering and sampling methods exploit the contextual information from a global perspective, and an objective function with an optimal solution exploits it from a local perspective. Second, within the deep learning framework, we apply a graph convolutional network (GCN) to the ranking problem in combination with matrix factorization, using contextual information to generate the input embeddings and graph kernels for the GCN. The third method exploits the contextual information for ranking directly: a Laplacian loss optimizes the ranking matrix itself, so that entities with similar contextual information obtain similar ranking results. Finally, we propose a two-step method for ranking sequential data. The first step generates embeddings for all entities with a new sampling strategy. A graph neural network (GNN) and long short-term memory (LSTM) are combined to generate the representation of the sequential data, after which the ranking problem is solved with a pair-wise loss and a sampling strategy.
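A minimal sketch of a pairwise (BPR-style) surrogate for the AUC loss on a toy linear ranking model — the features, weights, and learning rate are illustrative assumptions, not the thesis's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def auc(scores, labels):
    # AUC = fraction of (positive, negative) pairs ranked in the right order
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Toy ranking problem: items with 2-D features scored by a linear model
X = rng.normal(size=(50, 2))
w_true = np.array([2.0, -1.0])                       # illustrative only
labels = (X @ w_true + 0.1 * rng.normal(size=50) > 0).astype(int)

# Pairwise logistic (BPR-style) surrogate of the AUC loss:
# sum over all (pos, neg) pairs of log(1 + exp(-(s_pos - s_neg)))
w = np.zeros(2)
pos, neg = X[labels == 1], X[labels == 0]
diff = pos[:, None, :] - neg[None, :, :]             # all pair differences
for _ in range(200):
    margins = np.clip((diff * w).sum(-1), -30.0, 30.0)
    grad = -(diff * (1.0 / (1.0 + np.exp(margins)))[..., None]).mean((0, 1))
    w -= 0.5 * grad
```

Because every gradient step involves a positive item, the pairwise surrogate gets a useful signal even when positives are rare, which is why an AUC-style loss helps with the sparsity problem.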
9

Comparison of recommender systems for stock inspiration

Broman, Nils January 2021 (has links)
Recommender systems are apparent in our lives in multiple different ways, such as recommending what items to purchase when shopping online, recommending movies to watch, and recommending restaurants in your area. This thesis aims to apply the same recommender-system techniques to a new area, namely stock recommendations based on your current portfolio. The data, containing multiple users' portfolios, was collected from Shareville, a social media platform for investments. This implicit data was then used to train matrix factorization models and the state-of-the-art LightGCN model. Experiments with different data splits were also conducted. Results indicate that recommender-system techniques can be applied successfully to generate stock recommendations, and that the relative performance of the models on this dataset is in line with previous research: LightGCN greatly outperforms the matrix factorization models on the proposed dataset. The results also show that the choice of data split greatly impacts the results, which is discussed in further detail in this thesis.
10

Source Apportionment Analysis of Measured Volatile Organic Compounds in Corpus Christi, Texas

Abood, Ahmed T. 05 1900 (has links)
Corpus Christi is among the largest industrialized coastal urban areas in Texas. The strategic location of the city along the Gulf of Mexico allows many important industries and international businesses to be located there. The cluster of industries and businesses in the region contributes to air pollution through emissions that are harmful to the environment and to the people living in and visiting the area. Volatile organic compounds (VOCs) constitute an important class of pollutants measured in the area. Automated gas chromatography (Auto GC) data was collected from the Texas Commission on Environmental Quality (TCEQ), and source apportionment analysis was conducted on this data to identify key sources of VOCs affecting the study region. EPA PMF 3.0 was employed in this source apportionment study of VOC concentrations measured during 2005-2012 in Corpus Christi, Texas. The study identified nine optimal factors (sources) that could explain the VOC concentrations at two urban monitoring sites in the study region. Natural gas was found to be the largest contributor of VOCs in the area, followed by gasoline and vehicular exhaust. Diesel was the third-highest contributor, with emissions from manufacturing and combustion processes. Refinery gases and evaporative fugitive emissions were other major contributors in the area; flaring operations, solvents, and petrochemicals also impacted the measured VOCs in the urban area. It was noted that the measured VOC concentrations were significantly influenced by the economic downturn in the region, which was highlighted in the annual trends of the apportioned VOCs.
