About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Metric reconstruction of multiple rigid objects

De Vaal, Jan Hendrik 03 1900 (has links)
Thesis (MScEng (Mathematical Sciences. Applied Mathematics))--University of Stellenbosch, 2009. / Engineers struggle to replicate the capabilities of the sophisticated human visual system. This thesis sets out to recover the motion and 3D structure of multiple rigid objects up to a similarity. The motion of these objects is either recorded in a single video sequence, or images of the objects are recorded with multiple, different cameras. We assume a perspective camera model with optional provision for calibration information. The Structure from Motion (SfM) problem is addressed from a matrix factorization point of view. This leads to a reconstruction that is correct only up to a projectivity, which is of little use in itself. Using techniques from camera autocalibration, the projectivity is upgraded to a similarity. This reconstruction is also applied to multiple objects through motion segmentation. The SfM system developed in this thesis is a batch-processing algorithm: it requires only a few frames for a solution and readily accepts images from very different viewpoints. Since a solution can be obtained with just a few frames, it can be used to initialize sequential methods with slower convergence rates, such as the Kalman filter. The SfM system is critically evaluated against an extensive set of motion sequences.
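For intuition, the following is a minimal sketch of the factorization idea behind such an SfM pipeline, under an affine-camera simplification; the thesis itself treats the perspective case and the upgrade to a similarity, and the function and variable names here are illustrative rather than the author's implementation.

```python
# Hypothetical sketch of the factorization step at the heart of SfM
# (affine-camera simplification; not the perspective pipeline of the thesis).
import numpy as np

def factorize_measurements(W):
    """Split a 2F x P measurement matrix of tracked image points into
    camera (motion) and structure factors via a rank-3 SVD truncation."""
    W_centered = W - W.mean(axis=1, keepdims=True)   # remove per-frame centroids
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])                    # 2F x 3 motion factor
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]          # 3 x P structure factor
    return M, S

# Toy example: 4 frames (8 measurement rows) observing 6 points
rng = np.random.default_rng(0)
true_S = rng.normal(size=(3, 6))
true_M = rng.normal(size=(8, 3))
W = true_M @ true_S
M, S = factorize_measurements(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))   # exact for rank-3 data
```

The recovered factors are only defined up to an invertible transform, which is why a subsequent autocalibration step is needed to upgrade the reconstruction.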
32

Matrix factorization in recommender systems : How sensitive are matrix factorization models to sparsity?

Strömqvist, Zakris January 2018 (has links)
Matrix factorization (MF) models are among the most popular methods in recommender systems. In this paper, the sensitivity of these models to sparsity is investigated using a simulation study. Using the MovieLens dataset as a base, several dense matrices are created. These dense matrices are then made sparse in two different ways to simulate different kinds of data. The accuracy of MF is then measured on each of the simulated sparse matrices. The results show that matrix factorization models are sensitive to the degree of information available: for high levels of sparsity MF performs poorly, but as the information level increases, the accuracy of the models improves for both kinds of simulated data.
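As a loose illustration of this kind of experiment, assuming synthetic low-rank ratings and a plain alternating-least-squares model rather than the MovieLens-derived matrices used in the thesis, one might vary the fraction of observed entries and track the held-out error:

```python
# Illustrative sketch (not the thesis code) of measuring how a basic
# matrix-factorization model degrades as a rating matrix is made sparser.
import numpy as np

rng = np.random.default_rng(42)
n_users, n_items, k = 100, 80, 5

# Dense "ground truth" ratings from a low-rank model plus noise
R = rng.normal(size=(n_users, k)) @ rng.normal(size=(k, n_items)) \
    + 0.1 * rng.normal(size=(n_users, n_items))

def fit_mf(R, mask, k=5, lam=0.1, n_iter=30):
    """Plain alternating least squares on the observed entries only."""
    U = rng.normal(scale=0.1, size=(R.shape[0], k))
    V = rng.normal(scale=0.1, size=(R.shape[1], k))
    for _ in range(n_iter):
        for u in range(R.shape[0]):
            obs = mask[u]
            if obs.any():
                A = V[obs].T @ V[obs] + lam * np.eye(k)
                U[u] = np.linalg.solve(A, V[obs].T @ R[u, obs])
        for i in range(R.shape[1]):
            obs = mask[:, i]
            if obs.any():
                A = U[obs].T @ U[obs] + lam * np.eye(k)
                V[i] = np.linalg.solve(A, U[obs].T @ R[obs, i])
    return U, V

for keep in (0.05, 0.2, 0.5):          # fraction of entries kept observed
    mask = rng.random(R.shape) < keep
    U, V = fit_mf(R, mask, k)
    rmse = np.sqrt(np.mean((R[~mask] - (U @ V.T)[~mask]) ** 2))
    print(f"observed {keep:.0%}: held-out RMSE = {rmse:.3f}")
```

Higher sparsity (lower `keep`) leaves fewer observations per latent factor, so the held-out error rises, which is the qualitative effect the study measures.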
33

Classification of Twitter disaster data using a hybrid feature-instance adaptation approach

Mazloom, Reza January 1900 (has links)
Master of Science / Department of Computer Science / Doina Caragea / Huge amounts of data generated on social media during emergency situations are regarded as troves of critical information. The use of supervised machine learning techniques in the early stages of a disaster is challenged by the lack of labeled data for that particular disaster. Furthermore, supervised models trained on labeled data from a prior disaster may not produce accurate results. To address these challenges, domain adaptation approaches can be used; they learn models for predicting the target by using unlabeled data from the target disaster in addition to labeled data from prior source disasters. However, the resulting models can still be affected by the variance between the target domain and the source domain. In this context, we propose a hybrid feature-instance adaptation approach based on matrix factorization and the k-nearest neighbors algorithm, respectively. The proposed hybrid adaptation approach is used to select a subset of the source disaster data that is representative of the target disaster. The selected subset is subsequently used to learn accurate supervised or domain adaptation Naïve Bayes classifiers for the target disaster. In other words, this study focuses on transforming the existing source data to bring it closer to the target data, thus overcoming the domain variance that may prevent effective transfer of information from source to target. A combination of selective and transformative methods is used on instances and features, respectively. We show experimentally that the proposed approaches are effective in transferring information from source to target. Furthermore, we provide insights with respect to which types and combinations of selections/transformations result in more accurate models for the target.
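A minimal sketch of the general feature-instance idea (a shared nonnegative latent space plus nearest-neighbor instance selection) is shown below; the function name, parameter values, and data are hypothetical and do not reproduce the thesis pipeline or its Naïve Bayes classifiers.

```python
# Sketch: learn a shared latent feature space with NMF, then keep only source
# rows whose latent representation is a k-NN of some target row.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.neighbors import NearestNeighbors

def select_source_subset(X_source, X_target, n_topics=20, k=5):
    # Feature adaptation: factorize the stacked term matrix so source and
    # target share one latent (topic-like) space.
    X_all = np.vstack([X_source, X_target])
    W = NMF(n_components=n_topics, init="nndsvda", max_iter=400,
            random_state=0).fit_transform(X_all)
    W_src, W_tgt = W[: len(X_source)], W[len(X_source):]
    # Instance adaptation: retain source rows close to the target data.
    nn = NearestNeighbors(n_neighbors=k).fit(W_src)
    _, idx = nn.kneighbors(W_tgt)
    return np.unique(idx.ravel())      # indices of retained source instances

# Toy usage with random nonnegative "bag-of-words" counts
rng = np.random.default_rng(0)
X_src = rng.poisson(1.0, size=(200, 50)).astype(float)
X_tgt = rng.poisson(1.0, size=(60, 50)).astype(float)
print(len(select_source_subset(X_src, X_tgt)), "source instances kept")
```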
34

Détection de changements en imagerie hyperspectrale : une approche directionnelle / Change detection in hyperspectral imagery : a directional approach

Brisebarre, Godefroy 24 November 2014 (has links)
Hyperspectral imagery is an emerging imaging technology that has attracted growing interest since the early 2000s. Thanks to a very fine spectral structure that produces a large volume of data, it provides, compared to classical RGB imagery, additional information that can be exploited in many application domains: although the spatial resolution is significantly lower, the spectral resolution is very fine and the covered spectral range is very wide. We focus on the detection and analysis of changes between two images of the same scene, for defence-oriented applications. In this manuscript, we start by introducing hyperspectral imagery and the constraints associated with its use for defence purposes. We then present a change detection and classification method based on the search for specific directions in the space generated by the image pair, followed by the merging of nearby directions. We then exploit the information obtained about the changes by studying the unmixing capabilities of temporal series of images of the same scene. Finally, we present a number of extensions that could be carried out to generalise or improve the presented work, and we conclude.
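Purely as a hypothetical illustration of the general mechanism, and not the thesis's actual direction search or fusion rule, one could represent each pixel of the image pair as a vector in the joint space, cluster the normalised directions, and merge clusters whose mean directions are nearly collinear:

```python
# Loose sketch of direction clustering and merging on a hyperspectral image pair.
import numpy as np
from sklearn.cluster import KMeans

def directional_change_map(img_t1, img_t2, n_dirs=8, merge_cos=0.99):
    # img_t1, img_t2: (rows, cols, bands) cubes of the same scene
    pairs = np.concatenate([img_t1, img_t2], axis=-1)
    pairs = pairs.reshape(-1, img_t1.shape[-1] * 2)
    dirs = pairs / (np.linalg.norm(pairs, axis=1, keepdims=True) + 1e-12)
    labels = KMeans(n_clusters=n_dirs, n_init=10, random_state=0).fit_predict(dirs)
    # Merge clusters whose mean directions are nearly identical (cosine ~ 1)
    centers = np.array([dirs[labels == c].mean(axis=0) for c in range(n_dirs)])
    centers /= np.linalg.norm(centers, axis=1, keepdims=True)
    merged = np.arange(n_dirs)
    for a in range(n_dirs):
        for b in range(a + 1, n_dirs):
            if centers[a] @ centers[b] > merge_cos:
                merged[merged == merged[b]] = merged[a]
    return merged[labels].reshape(img_t1.shape[:2])

# Toy usage on random data
rng = np.random.default_rng(1)
cube1 = rng.random((32, 32, 10))
cube2 = cube1 + 0.3 * rng.random((32, 32, 10))
print(np.unique(directional_change_map(cube1, cube2)))
```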
35

Collaborative filtering approaches for single-domain and cross-domain recommender systems

Parimi, Rohit January 1900 (has links)
Doctor of Philosophy / Computing and Information Sciences / Doina Caragea / Increasing amounts of content on the Web mean that users can select from a wide variety of items (i.e., items that concur with their tastes and requirements). The generation of personalized item suggestions to users has become a crucial functionality for many web applications, as users benefit from being shown only items of potential interest to them. One popular solution for creating personalized item suggestions is recommender systems. Recommender systems can address the item recommendation task by utilizing past user preferences for items, captured as either explicit or implicit user feedback. Numerous collaborative filtering (CF) approaches have been proposed in the literature to address the recommendation problem in the single-domain setting (user preferences from only one domain are used to recommend items). However, increasingly large datasets often prevent experimentation with every approach in order to choose the one that best fits an application domain. The work in this dissertation on the single-domain setting studies two CF algorithms, Adsorption and Matrix Factorization (MF), considered to be state-of-the-art approaches for implicit feedback, and suggests that characteristics of a domain (e.g., close connections versus loose connections among users) or characteristics of the available data (e.g., density of the feedback matrix) can be useful in selecting the most suitable CF approach for a particular recommendation problem. Furthermore, for Adsorption, a neighborhood-based approach, this work studies several ways to construct user neighborhoods based on similarity functions and on community detection approaches, and suggests that domain and data characteristics can also be useful in selecting the neighborhood approach to use for Adsorption. Finally, motivated by the need to decrease the computational costs of recommendation algorithms, this work studies the effectiveness of using short user histories and suggests that they can successfully replace long user histories for recommendation tasks. Although most approaches for recommender systems use user preferences from only one domain, in many applications user interests span items of various types (e.g., artists and tags). Each recommendation problem (e.g., recommending artists to users or recommending tags to users) can be considered a unique domain, and user preferences from several domains can be used to improve accuracy in one domain, an area of research known as cross-domain recommender systems. The work in this dissertation on cross-domain recommender systems investigates several limitations of existing approaches and proposes three novel approaches (two Adsorption-based and one MF-based) to improve recommendation accuracy in one domain by leveraging knowledge from multiple domains with implicit feedback. The first approach performs aggregation of neighborhoods (WAN) from the source and target domains, and the neighborhoods are used with Adsorption to recommend target items. The second approach performs aggregation of target recommendations (WAR) from Adsorption computed using neighborhoods from the source and target domains. The third approach integrates latent user factors from source domains into the target through a regularized latent factor model (CIMF).
Experimental results on six target recommendation tasks from two real-world applications suggest that the proposed approaches effectively improve target recommendation accuracy as compared to single-domain CF approaches and successfully utilize varying amounts of user overlap between source and target domains. Furthermore, under the assumption that tuning may not be possible for large recommendation problems, this work proposes an approach to calculate knowledge aggregation weights based on network alignment for WAN and WAR approaches, and results show the usefulness of the proposed solution. The results also suggest that the WAN and WAR approaches effectively address the cold-start user problem in the target domain.
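To make the Adsorption-style propagation concrete, here is a toy, hypothetical sketch of neighborhood-based label propagation over implicit feedback; it does not reproduce the dissertation's WAN/WAR/CIMF variants or its neighborhood-construction methods.

```python
# Toy Adsorption-like propagation: each user's item-preference vector is
# repeatedly blended with the vectors of its graph neighbors.
import numpy as np

def adsorption_propagate(R, neighbors, alpha=0.5, n_iter=20):
    """R: (users x items) 0/1 implicit feedback; neighbors: dict user -> list of users."""
    scores = R.astype(float).copy()
    for _ in range(n_iter):
        new_scores = scores.copy()
        for u, nbrs in neighbors.items():
            if nbrs:
                new_scores[u] = alpha * R[u] + (1 - alpha) * scores[nbrs].mean(axis=0)
        scores = new_scores
    return scores

R = np.array([[1, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 1, 1, 1]])
neighbors = {0: [1], 1: [0, 2], 2: [1]}
print(np.round(adsorption_propagate(R, neighbors), 2))
# Unseen items of a user pick up weight from that user's neighbors,
# which is the intuition behind ranking them for recommendation.
```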
36

Démixage d’images hyperspectrales en présence d’objets de petite taille / Spectral unmixing of hyperspectral images in the presence of small targets

Ravel, Sylvain 08 December 2017 (has links)
This thesis is devoted to the unmixing issue in hyperspectral images, especially in the presence of small-sized objects. Hyperspectral images contain a large amount of both spectral and spatial information, and each pixel of the image can be assimilated to the reflection spectrum of the imaged area. Due to the sensors' low spatial resolution, the spectrum observed at each pixel is a mixture of the reflection spectra of the different materials present in the pixel. The unmixing issue consists in estimating those materials' spectra, called endmembers, and their corresponding abundances in each pixel. Numerous unmixing methods have been proposed, but they perform poorly when an endmember is rare, that is, present in only a few pixels and often at a subpixel level; we call the pixels containing such endmembers rare pixels. These rare endmembers correspond to components present in small quantities in the scene and can be seen as anomalies whose detection is often crucial for certain applications. We first present two methods to detect rare pixels in an image: the first uses a thresholding criterion on the reconstruction error obtained after estimating the abundant endmembers, and the second is based on the detail coefficients of a wavelet decomposition. We then propose an unmixing method adapted to the case where some of the endmembers are known a priori, and we show that this method, used together with the proposed detection methods, allows the endmembers of the rare pixels to be unmixed. Finally, we study a resampling method based on the bootstrap to amplify the role of these rare pixels and propose unmixing methods in the presence of small-sized objects.
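The following is a small, assumed sketch of the first detection idea, with the abundant-endmember estimation done here with off-the-shelf NMF followed by a reconstruction-error threshold; the data, parameter values, and thresholding rule are illustrative only.

```python
# Sketch: estimate abundant endmembers, reconstruct every pixel from them,
# and flag pixels whose reconstruction error is unusually large.
import numpy as np
from sklearn.decomposition import NMF

def detect_rare_pixels(cube, n_endmembers=4, z_thresh=3.0):
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands)
    nmf = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500, random_state=0)
    A = nmf.fit_transform(X)                              # abundances
    err = np.linalg.norm(X - A @ nmf.components_, axis=1)  # per-pixel residual
    flags = err > err.mean() + z_thresh * err.std()
    return flags.reshape(rows, cols)

# Toy cube: background mixes 4 abundant endmembers; one pixel also contains
# a fifth, rare spectrum at a subpixel fraction.
rng = np.random.default_rng(3)
n_rows, n_cols, n_bands = 20, 20, 16
E = np.abs(rng.normal(1.0, 0.5, size=(4, n_bands)))       # abundant endmember spectra
A = rng.dirichlet(np.ones(4), size=n_rows * n_cols)       # per-pixel abundances
cube = (A @ E).reshape(n_rows, n_cols, n_bands) + 0.01 * rng.random((n_rows, n_cols, n_bands))
rare = np.abs(rng.normal(1.0, 0.5, size=n_bands))          # rare endmember
cube[5, 7] = 0.5 * cube[5, 7] + 0.5 * rare                 # subpixel mixture at one pixel
print(np.argwhere(detect_rare_pixels(cube)))               # expected to flag (5, 7)
```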
37

Nonnegative matrix factorization for transfer learning / Factorisation matricielle non-négative pour l'apprentissage par transfert

Redko, Ievgen 26 November 2015 (has links)
The ability of a human being to extrapolate previously gained knowledge to other domains inspired a new family of methods in machine learning called transfer learning. Transfer learning is often based on the assumption that objects in both target and source domains share some common features and/or data space. If this assumption is false, most transfer learning algorithms are likely to fail. A particular case of this kind of learning is domain adaptation, a situation where the source and target tasks are the same but lie in different domains. In this thesis we investigate the problem of transfer learning from both theoretical and applied points of view. First, we present two different methods to solve the problem of unsupervised transfer learning based on non-negative matrix factorization techniques. The first proceeds through an iterative optimization procedure that aims at aligning the kernel matrices calculated on the data from the two tasks. The second is a linear approach that seeks an embedding for the two tasks that decreases the distance between the corresponding probability distributions while preserving the non-negativity property. We also introduce a theoretical framework based on Hilbert-Schmidt embeddings that allows us to improve the current state-of-the-art theoretical results on domain adaptation by introducing a natural and intuitive distance measure with strong computational guarantees for its estimation. The proposed results combine the tightness of data-dependent bounds derived from Rademacher learning theory while ensuring the efficient estimation of its key factors. Both the theoretical contributions and the proposed methods were evaluated on a benchmark computer vision data set with promising results. Finally, we believe that the research direction chosen in this thesis may have fruitful implications in the near future.
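As a rough, hypothetical illustration of the second method's starting point, a single non-negative basis can be learned for source and target data so that both tasks live in the same latent space; the thesis's actual objective (aligning kernel matrices, minimising a distance between distributions) is more involved than this sketch.

```python
# Sketch: embed source and target data with one shared non-negative dictionary.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
X_source = np.abs(rng.normal(size=(150, 40)))   # stand-in source features (non-negative)
X_target = np.abs(rng.normal(size=(50, 40)))    # stand-in target features

model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
H_all = model.fit_transform(np.vstack([X_source, X_target]))   # shared embedding
H_source, H_target = H_all[:150], H_all[150:]
basis = model.components_                                      # common dictionary

print(H_source.shape, H_target.shape, basis.shape)
```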
38

Ant Clustering with Consensus

Gu, Yuhua 01 April 2009 (has links)
Clustering is actively used in several research fields, such as pattern recognition, machine learning and data mining. This dissertation focuses on clustering algorithms in the data mining area. Clustering algorithms can be applied to solve the unsupervised learning problem, which deals with finding clusters in unlabeled data. Most clustering algorithms require the number of cluster centers to be known in advance. However, this is often not suitable for real-world applications, since in most cases we do not know this information. Another question then arises: once clusters are found by the algorithms, do we believe they are exactly the right ones, or do better ones exist? In this dissertation, we present two new Swarm Intelligence based approaches to data clustering that address these issues. Swarm-based approaches to clustering have been shown to be able to skip local extrema by performing a form of global search, and our two newly proposed ant clustering algorithms take advantage of this. The first algorithm is a kernel-based fuzzy ant clustering algorithm using the Xie-Beni partition validity metric. It is a two-stage algorithm: in the first stage, ants move the cluster centers in feature space, and the cluster centers found by the ants are evaluated using a reformulated kernel-based Xie-Beni cluster validity metric. We found that, when provided with more clusters than exist in the data, our new ant-based approach produces a partition with empty and/or very lightly populated clusters. The second stage of the algorithm is then applied to automatically detect the number of clusters in a data set by using threshold solutions. The second ant clustering algorithm, based on chemical recognition of nestmates, is a combination of an ant-based algorithm and a consensus clustering algorithm. It is a two-stage algorithm that requires no initial knowledge of the number of clusters. The main contributions of this work are to use the ability of an ant-based clustering algorithm to determine the number of cluster centers and refine those centers, and then to apply a consensus clustering algorithm to obtain a better-quality final solution. We also introduce an ensemble ant clustering algorithm that is able to find a consistent number of clusters with appropriate parameters, and we propose a modified online ant clustering algorithm to handle clustering of large data sets. To our knowledge, we are the first to use consensus to combine multiple ant partitions to obtain robust clustering solutions. Experiments were done with twelve data sets: several benchmark data sets, two artificially generated data sets and two magnetic resonance image brain volumes. The results show how the ant clustering algorithms play an important role in finding the number of clusters and providing useful information for consensus clustering to locate the optimal clustering solutions. We conducted a wide range of comparative experiments that demonstrate the effectiveness of the new approaches.
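For reference, a small sketch of the kind of fitness the ants might evaluate is shown below: the Xie-Beni validity index of a candidate set of cluster centers, with fuzzy memberships derived from distances. This is an assumed, non-kernelised version, not the reformulated kernel-based metric of the dissertation; lower values indicate compact, well-separated partitions.

```python
# Xie-Beni validity index for candidate cluster centers (fuzzy memberships
# computed from distances, FCM style).
import numpy as np

def xie_beni(X, centers, m=2.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    u = d2 ** (-1.0 / (m - 1))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships
    compactness = ((u ** m) * d2).sum()
    sep = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    np.fill_diagonal(sep, np.inf)
    return compactness / (len(X) * sep.min())

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
good = np.array([[0.0, 0.0], [3.0, 3.0]])          # near the true cluster centers
bad = np.array([[1.5, 1.5], [1.6, 1.6]])           # nearly coincident centers
print(xie_beni(X, good) < xie_beni(X, bad))         # expect True
```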
39

Peak identification and quantification in proteomic mass spectrograms using non-negative matrix factorization / プロテオミクスにおける非負値行列因子分解法によるマススペクトログラムピークの同定および定量

TAECHAWATTANANANT, PASRAWIN 25 May 2020 (has links)
Kyoto University / 0048 / New-system doctoral program / Doctor of Pharmaceutical Sciences / 甲第22651号 / 薬科博第123号 / 新制||薬科||13 (University Library) / Kyoto University, Graduate School of Pharmaceutical Sciences, Department of Pharmaceutical Sciences / (Chief examiner) Prof. 石濱 泰; examiners Prof. 緒方 博之, Prof. 馬見塚 拓, Prof. 山下 富義 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Pharmaceutical Sciences / Kyoto University / DFAM
40

Optimal Transport Dictionary Learning and Non-negative Matrix Factorization / 最適輸送辞書学習と非負値行列因子分解

Rolet, Antoine 23 March 2021 (has links)
Kyoto University / New-system doctoral program / Doctor of Informatics / 甲第23314号 / 情博第750号 / 新制||情||128 (University Library) / Kyoto University, Graduate School of Informatics, Department of Intelligence Science and Technology / (Chief examiner) Prof. 山本 章博; examiners Prof. 鹿島 久嗣, Prof. 河原 達也 / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
