  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
  Our metadata is collected from universities around the world.
131

Tracking non-rigid objects in video

Buchanan, Aeron Morgan January 2008 (has links)
Video is a sequence of 2D images of the 3D world generated by a camera. As the camera moves relative to the real scene and elements of that scene themselves move, correlated frame-to-frame changes in the video images are induced. Humans easily identify such changes as scene motion and can readily assess attempts to quantify it. For a machine, the identification of the 2D frame-to-frame motion is difficult. This problem is addressed by the computer vision process of tracking. Tracking underpins the solution to the problem of augmenting general video sequences with artificial imagery, a staple task in the visual effects industry. The problem is difficult because tracking in general video sequences is complicated by the presence of non-rigid motion, repeated texture and arbitrary occlusions. Existing methods provide solutions that rely on imposing limitations on the scenes that can be processed or that rely on human artistry and hard work. I introduce new paradigms, frameworks and algorithms for overcoming the challenges of processing general video and thus provide solutions that fill the gap between the `automated' and `manual' approaches. The work is easily sectioned into three parts, which can be considered separately or taken together for dealing with video without limitations. The initial focus is on directly addressing practical issues of human interaction in the tracking process: a new solution is developed by explicitly incorporating the user into an interactive algorithm. It is a novel tracking system based on fast full-frame patch searching and high-speed optimal track determination. This approach makes only minimal assumptions about motion and appearance, making it suitable for the widest variety of input video. I detail an implementation of the new system using k-d trees and dynamic programming. The second distinct contribution is an important extension to tracking algorithms in general. 
It can be noted that existing tracking algorithms occupy a spectrum in their use of global motion information. Local methods are easily confused by occlusions, repeated texture and image noise. Global motion models offer strong predictions to see through these difficulties and have been used in restricted circumstances, but are defeated by scenes containing independently moving objects or modest levels of non-rigid motion. I present a well principled way of combining local and global models to improve tracking, especially in these highly problematic cases. By viewing rank-constrained tracking as a probabilistic model of 2D tracks instead of 3D motion, I show how one can obtain a robust motion prior that can be easily incorporated in any existing tracking algorithm. The development of the global motion prior is based on rank-constrained factorization of measurement matrices. A common difficulty comes from the frequent occurrence of occlusions in video, which means that the relevant matrices are often not complete due to missing data. This defeats standard factorization algorithms. To fully explain and understand the algorithmic complexities of factorization in this practical context, I present a common notation for the direct comparison of existing algorithms and propose a new family of hybrid approaches that combine the superb initial performance of alternation methods with the convergence power of the Newton algorithm. Together, these investigations provide a wide-ranging, yet coherent exploration of tracking non-rigid objects in video.
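The missing-data factorization problem at the heart of this second contribution can be illustrated with a minimal alternation sketch in Python. This is only the alternation baseline the thesis starts from, not the proposed hybrid alternation/Newton scheme, and the matrix sizes and missing-data rate are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic rank-3 measurement matrix (2F x P: x/y coordinates of P tracked
# points over F frames), with entries hidden by an occlusion mask W.
F2, P, r = 20, 30, 3
M = rng.normal(size=(F2, r)) @ rng.normal(size=(r, P))
W = rng.random((F2, P)) < 0.7          # True = observed (~30% missing)

# Alternation: with B fixed, each row of A is a small least-squares problem
# over that row's observed entries only; then swap the roles of A and B.
A = rng.normal(size=(F2, r))
B = rng.normal(size=(r, P))
for _ in range(200):
    for i in range(F2):
        A[i] = np.linalg.lstsq(B[:, W[i]].T, M[i, W[i]], rcond=None)[0]
    for j in range(P):
        B[:, j] = np.linalg.lstsq(A[W[:, j]], M[W[:, j], j], rcond=None)[0]

# Error over ALL entries, including the ones the algorithm never saw.
rmse = float(np.sqrt(np.mean((A @ B - M) ** 2)))
```

Alternation of this kind converges quickly at first but can stall, which is precisely the behavior the hybrid methods address by switching to Newton-type steps.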
132

Efficient end-to-end monitoring for fault management in distributed systems

Feng, Dawei 27 March 2014 (has links)
In this dissertation, we present our work on fault management in distributed systems, motivated by the monitoring of faults and abrupt changes in large computing systems such as the grid and the cloud.
Instead of building complete a priori knowledge of the software and hardware infrastructures, as conventional detection or diagnosis methods do, we propose to use appropriate techniques to perform end-to-end monitoring for such large-scale systems, leaving the inaccessible details of the involved components in a black box. For the fault monitoring of a distributed system, we first model this probe-based application as a static collaborative prediction (CP) task, and experimentally demonstrate the effectiveness of CP methods using the max-margin matrix factorization method. We further introduce active learning into the CP framework and exhibit its critical advantage in dealing with highly imbalanced data, which is especially useful for identifying the minority fault class. We then extend static fault monitoring to the sequential case by proposing the sequential matrix factorization (SMF) method. SMF takes a sequence of partially observed matrices as input, and produces predictions using information from both the current and past time windows. Active learning is also employed in SMF, so that highly imbalanced data can be handled properly. In addition to the sequential methods, smoothing the estimation sequence has proven to be a practically useful trick for enhancing sequential prediction performance. Since the stationarity assumption employed in static and sequential fault monitoring becomes unrealistic in the presence of abrupt changes, we propose a semi-supervised online change detection (SSOCD) framework to detect intended changes in time series data. In this way, the static model of the system can be recomputed once an abrupt change is detected. In SSOCD, an unsupervised offline method is proposed to analyze a sample data series. The change points thus detected are used to train a supervised online model, which gives an online decision about whether a change is present in the arriving data sequence. State-of-the-art change detection methods are employed to demonstrate the usefulness of the framework. All presented work is verified on real-world datasets. Specifically, the fault monitoring experiments are conducted on a dataset collected from the Biomed grid infrastructure within the European Grid Initiative, and the abrupt change detection framework is verified on a dataset concerning the performance change of a high-traffic online site.
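A toy version of the probe-based monitoring setup can be sketched as masked low-rank prediction. A squared-error gradient fit stands in for max-margin matrix factorization here, and the last line shows the kind of uncertainty-driven query an active learner could issue next; all dimensions and rates are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary probe matrix: rows = monitoring hosts, cols = services,
# +1 = probe succeeded, -1 = fault; only a fraction of pairs are probed.
n_hosts, n_srv = 40, 25
U_t = rng.normal(size=(n_hosts, 2))
V_t = rng.normal(size=(n_srv, 2))
Y = np.where(U_t @ V_t.T + 0.5 > 0, 1.0, -1.0)   # faults are the minority class
obs = rng.random((n_hosts, n_srv)) < 0.4

# Fit a low-rank score matrix to the observed entries by gradient descent
# (a squared-error surrogate; max-margin MF would use a hinge loss instead).
U = 0.1 * rng.normal(size=(n_hosts, 3))
V = 0.1 * rng.normal(size=(n_srv, 3))
for _ in range(1500):
    E = obs * (U @ V.T - Y)            # residuals on probed pairs only
    U, V = U - 0.02 * E @ V, V - 0.02 * E.T @ U

# Accuracy on the pairs that were never probed (collaborative prediction).
acc = float(np.mean(np.sign(U @ V.T)[~obs] == Y[~obs]))

# Active-learning step: probe the unobserved pair with the least certain score.
next_probe = np.unravel_index(np.argmin(np.abs(U @ V.T) + 1e9 * obs),
                              (n_hosts, n_srv))
```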
133

Single-pixel imaging: development and applications of adaptive methods

Rousset, Florian 27 October 2017 (has links)
Single-pixel imaging is a recent paradigm that allows the acquisition of images at a reasonably low cost by exploiting hardware compression of the data. The architecture of a single-pixel camera consists of only two elements: a spatial light modulator and a single-point detector. The key idea is to measure, at the detector, the projection (i.e., inner product) of the scene under view (the image) with some patterns. Post-processing of a sequence of measurements obtained with different patterns makes it possible to restore the desired image. Single-pixel imaging has several advantages that are of interest for different applications, especially in the biomedical field. In particular, a low-cost time-resolved single-pixel imaging system is beneficial for fluorescence lifetime sensing. Such a setup can also be coupled to a spectrometer to supplement lifetime with spectral information. However, the main limitation of single-pixel imaging is the speed of the acquisition and/or image restoration, which is, as of today, not compatible with real-time applications. This thesis investigates fast acquisition and restoration schemes for single-pixel cameras targeting biomedical applications. First, a new acquisition strategy based on wavelet compression algorithms is reported; it significantly accelerates image recovery compared to conventional schemes belonging to the compressive sensing framework. Second, a novel technique is proposed to alleviate an experimental positivity constraint on the modulation patterns. Compared to the classical approaches, the proposed technique, based on non-negative matrix factorization, halves the number of patterns sent to the spatial light modulator, hence halving the overall acquisition time.
Finally, the applicability of these techniques is demonstrated for multispectral and/or time-resolved imaging, which are common modalities in biomedical imaging.
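The positivity constraint mentioned above is easy to state numerically: a spatial light modulator can only display non-negative patterns, so a real-valued pattern is classically split into its positive and negative parts, doubling the number of acquisitions. The sketch below (with made-up pattern and image sizes) shows only that classical splitting and the factor of two it costs; the NMF-based pattern design in the thesis is what removes this factor:

```python
import numpy as np

rng = np.random.default_rng(2)

# K real-valued acquisition patterns applied to a non-negative scene x.
K, N = 16, 64
A = rng.normal(size=(K, N))            # wavelet-like patterns, with negatives
x = rng.random(N)                      # scene intensities

# Classical workaround: display max(A, 0) and max(-A, 0) separately,
# i.e. 2K non-negative patterns, and recombine the measurements afterwards.
A_pos = np.clip(A, 0.0, None)
A_neg = np.clip(-A, 0.0, None)
y_split = A_pos @ x - A_neg @ x        # equals the ideal measurements A @ x
n_patterns_displayed = A_pos.shape[0] + A_neg.shape[0]   # 2K, hence 2x time
```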
134

Evaluation of the use of different receptor models with PM2.5 data: chemical mass balance (CMB) and positive matrix factorization (PMF)

Trindade, Camila Carnielli 13 March 2009 (has links)
The identification of sources of particulate matter has been a topic of growing interest throughout the world to assist air quality management. This class of studies is conventionally based on the use of receptor models, which identify and quantify the responsible sources from the concentration of the contaminant at the receptor. Among the variety of receptor models available in the literature, this study compares the results of the chemical mass balance (CMB) and positive matrix factorization (PMF) models on a PM2.5 database for the region of Brighton, Colorado, in order to investigate the difficulties in using each model, as well as their advantages and disadvantages. The CMB model has the known disadvantage of requiring experimentally determined source profiles, and it also has limitations when the sources involved are similar. The PMF model, on the other hand, does not require source profiles, but has the disadvantage of needing a large number of contaminant-concentration samples at the receptor. The results showed, based on the performance measures, that both models were able to reproduce the receptor data with acceptable fits. However, the models produced different results that satisfied those performance measures: the CMB model used 9 source types while the PMF model found only 6, which indicates that the PMF model has difficulty modeling sources that appear only occasionally. The ammonium sulfate, soil, diesel vehicle and ammonium nitrate sources showed good correlation between the source-contribution results of the two models. The source profiles used in the CMB model and the profiles resolved by the PMF model were most similar for the ammonium nitrate, soil, ammonium sulfate, and wood combustion and/or smoking vehicle sources. It was also verified that, in the PMF model, species not characteristic of a given source appear in the resolved source profiles, which makes source identification even more complex and requires considerable knowledge about the composition of many sources. For a database with similar sources, results based only on receptor models do not provide enough confidence for a final decision on source apportionment.
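The core computation of the CMB model described above is a constrained least-squares fit: measured species concentrations are explained as a non-negative combination of known source profiles. A minimal sketch, with invented profiles and contributions:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

# Columns of F are source profiles: the fraction of each chemical species
# emitted by each source (normalized so every profile sums to 1).
n_species, n_sources = 12, 4
F = rng.random((n_species, n_sources))
F /= F.sum(axis=0)

s_true = np.array([5.0, 2.0, 0.0, 1.5])   # true contributions (e.g. ug/m3)
c = F @ s_true                             # measured concentrations at receptor

# CMB fit: non-negative least squares recovers the source contributions.
s_est, resid = nnls(F, c)
```

PMF, by contrast, factorizes a whole matrix of such measurements without knowing F in advance, which is why it needs many more samples at the receptor.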
135

On recommendation systems in a sequential context

Guillou, Frédéric 02 December 2016 (has links)
This thesis is dedicated to the study of Recommendation Systems under a sequential setting, where the feedback given by users on items arrives in the system one piece after another. After each feedback, the system has to integrate it and try to improve future recommendations. Many recommendation techniques and evaluation methodologies have already been proposed, yet the sequential setting, which is more realistic and closer to how a real Recommendation System is evaluated, has surprisingly been left aside. Under a sequential context, recommendation techniques need to take into consideration several aspects that are not visible in a fixed setting. The first is the exploration-exploitation dilemma: the model making recommendations needs to find a good balance between gathering information about users' tastes through exploratory recommendation steps, and exploiting its current knowledge of the users and items to maximize the feedback received. We highlight the importance of this point through a first evaluation study and propose a simple yet efficient approach to make effective recommendations, based on Matrix Factorization and Multi-Armed Bandit algorithms. The second aspect emphasized by the sequential context appears when an ordered list of items is recommended to the user instead of a single item. In such a case, the feedback given by the user has two parts: the explicit feedback, i.e. the rating given on the chosen item, and the implicit feedback given by clicking (or not clicking) on the other items of the list.
By integrating both kinds of feedback into a Matrix Factorization model, we propose an approach that can suggest better-ranked lists of items, and we evaluate it in a particular sequential setting to show its efficiency.
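The exploration-exploitation idea can be sketched in a few lines: an epsilon-greedy rule on top of noisy matrix-factorization estimates. This is a simplification of the bandit strategies studied in the thesis, and the users, items and noise levels below are all synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# True latent factors generate the (unknown) ratings; the system only has
# noisy estimates of them, as a fitted MF model would.
n_users, n_items, r = 50, 100, 5
U = rng.normal(size=(n_users, r))
V = rng.normal(size=(n_items, r))
true_rating = U @ V.T
U_hat = U + 0.3 * rng.normal(size=U.shape)
V_hat = V + 0.3 * rng.normal(size=V.shape)

def recommend(user, eps=0.1):
    """Epsilon-greedy: explore a random item with prob. eps, else exploit."""
    if rng.random() < eps:
        return int(rng.integers(n_items))          # exploration step
    return int(np.argmax(U_hat[user] @ V_hat.T))   # exploitation step

# Average reward of the epsilon-greedy policy vs. purely random suggestions.
greedy = float(np.mean([true_rating[u, recommend(u)] for u in range(n_users)]))
random_only = float(np.mean([true_rating[u, rng.integers(n_items)]
                             for u in range(n_users)]))
```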
136

Unsupervised Models for White Matter Fiber-Bundles Analysis in Multiple Sclerosis

Stamile, Claudio 11 September 2017 (has links)
Diffusion Magnetic Resonance Imaging (dMRI) is a powerful technique for white matter (WM) fiber-tracking and for the microstructural characterization of axonal/neuronal integrity and connectivity. By measuring the motion of water molecules in the three directions of space, numerous parametric maps can be reconstructed.
Among these, fractional anisotropy (FA), mean diffusivity (MD), and the axial (λa) and radial (λr) diffusivities have been used extensively to investigate brain diseases. Overall, such studies have demonstrated that WM and grey matter (GM) tissues are subject to numerous microstructural alterations in multiple sclerosis (MS). However, it remains unclear whether these tissue alterations result from global processes, such as inflammatory cascades and/or neurodegenerative mechanisms, or from local inflammatory and/or demyelinating lesions. Furthermore, these pathological events may occur along afferent or efferent WM fiber pathways, leading to antero- or retrograde degeneration. Thus, for a better understanding of MS pathological processes and of their spatial and temporal progression, an accurate and sensitive characterization of WM fibers along their pathways is needed. By merging the spatial information of fiber tracking with the diffusion metrics obtained from longitudinal acquisitions, WM fiber-bundles can be modeled and analyzed along their profile. Such signal analysis of WM fibers can be performed by several methods providing either semi- or fully unsupervised solutions. In the first part of this work, we give an overview of the studies already present in the literature, focusing on those showing the interest of dMRI for WM characterization in MS. In the second part, we introduce two new string-based methods, one semi-supervised and one unsupervised, to extract specific WM fiber-bundles. We show how these algorithms improve the extraction of specific fiber-bundles compared to the approaches already present in the literature. Moreover, we present an extension of the proposed method that couples the string-based formalism with the spatial information of the fiber-tracks. In the third and last part, we describe, in order of complexity, three fully automated algorithms to analyze the longitudinal changes visible along WM fiber-bundles in MS patients. These methods are based on a Gaussian mixture model, non-negative matrix factorization and non-negative tensor factorization, respectively. In order to validate our methods, we also introduce a new model to simulate realistic longitudinal changes based on a generalized Gaussian probability density function. These algorithms achieved high performance in detecting small longitudinal changes along the WM fiber-bundles of MS patients. In conclusion, we propose in this work a new set of unsupervised algorithms for the sensitive analysis of WM fiber-bundles, enabling a better characterization of the pathological alterations occurring in MS patients.
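The simulation idea mentioned at the end can be illustrated directly: a localized drop in a diffusion-metric profile, shaped by a generalized Gaussian density whose shape parameter controls how flat or peaked the affected segment is. The profile length, location and amplitude below are arbitrary choices for the demo, not values from the thesis:

```python
import numpy as np
from scipy.stats import gennorm

# FA profile sampled at 100 points along a fiber bundle.
pos = np.linspace(0.0, 1.0, 100)
fa_baseline = np.full_like(pos, 0.55)

# Simulated longitudinal change: a generalized-Gaussian-shaped FA decrease.
beta, loc, scale, depth = 4.0, 0.6, 0.08, 0.12   # beta=2 would be Gaussian
bump = gennorm.pdf(pos, beta, loc=loc, scale=scale)
fa_followup = fa_baseline - depth * bump / bump.max()
```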
137

Statistical analysis of online rating data

張孫浩 Unknown Date (has links)
With the growth of the internet, websites are full of a wide variety of information and products. When users look for information or shop online, some websites provide a recommender system to help them find what is relevant. If a recommender system can match the information or products that consumers search for to their preferences, it increases their trust in the system; hence, whether the recommender system can accurately predict users' preferences is an important topic. This study uses two datasets, "Mondo" and "MovieLens", and analyzes and compares three related approaches from the literature: the IRT model-based method, the correlation-coefficient method, and matrix factorization. After the data analysis, we reach the following conclusions: 1. The IRT model-based method is less accurate in prediction than the other two methods, but it can explain the relationships between variables and display graphs of the predicted probabilities, so the method still has its value. 2. The correlation-coefficient method often fails to produce predictions because of rating sparsity; we suggest combining it with a content-based filtering approach. 3. Although matrix factorization predicts better than the IRT model-based method, the factor vectors are merely the result of an optimization, and in practice the meaning of these vectors and their values cannot be interpreted.
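The correlation-coefficient method compared above can be sketched as neighbour-weighted prediction, and the sketch also shows where sparsity bites: a neighbour with too few co-rated items contributes nothing, and with no usable neighbours no prediction is possible at all. The tiny rating matrix is invented for the demo:

```python
import numpy as np

# Tiny user-item rating matrix; 0 marks an unrated item.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 0, 4],
              [0, 1, 5, 4]], dtype=float)

def predict(user, item):
    """Correlation-coefficient method: weight co-raters by Pearson similarity."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        both = (R[user] > 0) & (R[other] > 0)
        if both.sum() < 2:
            continue                    # sparsity: too few co-rated items
        w = np.corrcoef(R[user, both], R[other, both])[0, 1]
        if np.isnan(w):
            continue                    # constant ratings: similarity undefined
        num += w * R[other, item]
        den += abs(w)
    return num / den if den else None   # None: sparsity defeats the method

p = predict(1, 1)                       # predict user 1's rating of item 1
```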
138

Chinese popular music structure analysis based on non-negative matrix factorization

黃柏堯, Huang, Po Yao Unknown Date (has links)
Music structure analysis is helpful for music information retrieval, music education, and the alignment of lyrics with music. This thesis investigates techniques for the structure analysis of Chinese popular music. We analyze musical form automatically in three steps: main-melody finding, sentence discovery, and musical form discovery. First, we extract the main melody from symbolic music by learning from user-labeled samples with a support vector machine. Then, the boundaries of musical sentences are detected by two-way classification, also using a support vector machine. To discover the musical form, a sentence-based self-similarity matrix is constructed for each musical feature. Non-negative matrix factorization is employed to extract new features and to construct a second-level self-similarity matrix, and checkerboard-kernel correlation is used to find form boundaries on this second-level matrix. Experiments on eighty Chinese popular songs are performed to evaluate the proposed approaches. For main-melody finding, our learning-based approach outperforms existing methods. The proposed approaches achieve an F-score of 82% for sentence discovery and 71% for musical form discovery.
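The sentence-level self-similarity matrix and its NMF can be sketched on toy data: two synthetic "section" feature vectors stand in for the chroma-like features of verse and chorus sentences (the real system extracts these from symbolic music, and uses a second-level matrix plus checkerboard correlation on top). Sizes and the verse/chorus sequence are invented:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)

# 24 "sentences": each is a 12-dim feature vector from one of two sections.
verse = np.r_[rng.random(6) + 0.5, np.zeros(6)]
chorus = np.r_[np.zeros(6), rng.random(6) + 0.5]
labels = np.array([0, 0, 1, 1, 0, 0, 1, 1] * 3)
X = np.array([(verse if l == 0 else chorus) + 0.05 * rng.random(12)
              for l in labels])

# Sentence-based self-similarity matrix (cosine similarity, non-negative).
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = Xn @ Xn.T

# NMF of the SSM: each sentence gets activations over 2 latent sections,
# and the dominant activation recovers the verse/chorus grouping.
H = NMF(n_components=2, init="nndsvd", random_state=0).fit_transform(S)
pred = H.argmax(axis=1)
```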
139

Doppler Radar Data Processing And Classification

Aygar, Alper 01 September 2008 (has links) (PDF)
In this thesis, improving the performance of automatic recognition of Doppler radar targets is studied. The radar used in this study is a ground-surveillance Doppler radar. The target types are car, truck, bus, tank, helicopter, moving man and running man. The input to this thesis is the output of real Doppler radar signals, normalized and preprocessed (TRP vectors: Target Recognition Pattern vectors) in the doctoral thesis by Erdogan (2002). TRP vectors are Doppler radar target signals normalized and homogenized with respect to target speed, target aspect angle and target range. Some target classes have repetitions in time in their TRPs, and the use of these repetitions to improve target-type classification performance is studied. K-Nearest Neighbor (KNN) and Support Vector Machine (SVM) algorithms are used for Doppler radar target classification and the results are evaluated. Before classification, PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), NMF (Nonnegative Matrix Factorization) and ICA (Independent Component Analysis) are implemented and applied to the normalized Doppler radar signals for efficient feature extraction and dimension reduction; these techniques transform the input vectors, i.e. the normalized Doppler radar signals, into another space. The effects of these feature-extraction algorithms, and of the use of the repetitions in Doppler radar target signals, on Doppler radar target classification performance are studied.
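The dimension-reduction-then-classification pipeline this abstract describes can be sketched with a numpy-only PCA followed by a KNN vote. This is a hedged illustration of the general technique, not the thesis's code: the 10-dimensional synthetic vectors stand in for TRP vectors, and the class shapes, noise level and neighbor count are all assumptions.

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD on mean-centred data; returns the mean and top components."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_transform(X, mean, components):
    """Project data onto the retained principal components."""
    return (X - mean) @ components.T

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Synthetic stand-in for two target classes of preprocessed 10-D signals.
rng = np.random.default_rng(0)
class_a = rng.normal(0.0, 0.1, (20, 10)) + np.linspace(0.0, 1.0, 10)
class_b = rng.normal(0.0, 0.1, (20, 10)) + np.linspace(1.0, 0.0, 10)
X = np.vstack([class_a, class_b])
y = np.array([0] * 20 + [1] * 20)

mean, comps = pca_fit(X, n_components=2)   # reduce 10-D signals to 2-D
Z = pca_transform(X, mean, comps)
query = pca_transform(class_a[0] + 0.01, mean, comps)
print(knn_predict(Z, y, query, k=3))  # a perturbed class-0 sample is recovered as 0
```

LDA, NMF or ICA would slot in at the same point in the pipeline as alternative transforms before the classifier.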
140

Investigation Of Short And Long Term Trends In The Eastern Mediterranean Aerosol Composition

Ozturk, Fatma 01 January 2009 (has links) (PDF)
Approximately 2000 daily aerosol samples were collected at Antalya (30°34′30.54″ E, 36°47′30.54″ N) on the Mediterranean coast of Turkey between 1993 and 2001. A high-volume PM10 sampler was used to collect the samples on Whatman-41 filters. The collected samples were analyzed by a combination of analytical techniques. Energy Dispersive X-Ray Fluorescence (EDXRF) and Inductively Coupled Plasma Mass Spectrometry (ICPMS) were used to measure the trace-element content of the samples from Li to U. The major ions, namely SO₄²⁻ and NO₃⁻, were determined by Ion Chromatography (IC), and the samples' NH₄⁺ content was measured by colorimetry. Evaluation of short-term trends in the measured parameters showed that elements of marine and crustal origin are more episodic than anthropogenic ones. Most parameters showed well-defined seasonal cycles: for example, concentrations of crustal elements increased in summer, while winter concentrations of marine elements were considerably higher than the corresponding summer values. The Seasonal Kendall statistic indicated a decreasing trend for crustal elements such as Be, Co, Al, Na, Mg, K, Dy, Ho, Tm, Cs and Eu. Lead, As, Se and Ge were the anthropogenic elements for which a decreasing trend was detected over the study period. Cluster and residence-time analyses were performed to find the origin of air masses arriving in the Eastern Mediterranean Basin; air masses reaching the station were found to reside mostly over the Balkans and Eastern Europe. Positive Matrix Factorization (PMF) resolved eight factors influencing the chemical composition of Eastern Mediterranean aerosols: local dust, Saharan dust, oil combustion, coal combustion, a mixed crustal-anthropogenic factor, sea salt, motor vehicle emissions, and a local Sb factor.
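The Seasonal Kendall test used above handles series with seasonal cycles by computing a Mann-Kendall S statistic within each season and summing across seasons, so the cycle itself does not masquerade as a trend. The following is a minimal numpy sketch of that idea under stated assumptions (months as seasons, a synthetic series with a sinusoidal cycle plus a steady decline); it omits the variance estimate and significance test of the full procedure.

```python
import numpy as np

def mann_kendall_s(x):
    """Mann-Kendall S statistic: the sum of signs of all pairwise differences."""
    s = 0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return s

def seasonal_kendall_s(values, seasons):
    """Sum per-season S statistics so the seasonal cycle cancels out."""
    return sum(mann_kendall_s(values[seasons == m]) for m in np.unique(seasons))

# Synthetic 8-year monthly concentration series: seasonal cycle + decline.
months = np.tile(np.arange(12), 8)
years = np.repeat(np.arange(8), 12)
conc = 10.0 + 3.0 * np.sin(2 * np.pi * months / 12) - 0.5 * years

s = seasonal_kendall_s(conc, months)
print(s < 0)  # a negative S indicates a decreasing trend, as found for Pb, As, Se, Ge
```

A plain Mann-Kendall test on the same series would mix the cycle into the pairwise comparisons; grouping by season isolates the year-over-year decline.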
