651

Functional data analysis with applications in finance

Benko, Michal 26 January 2007 (has links)
An vielen verschiedenen Stellen der angewandten Statistik sind die zu untersuchenden Objekte abhängig von stetigen Parametern. Typische Beispiele in Finanzmarktapplikationen sind implizierte Volatilitäten, risikoneutrale Dichten oder Zinskurven. Aufgrund der Marktkonventionen sowie weiterer technisch bedingter Gründe sind diese Objekte nur an diskreten Punkten beobachtbar, zum Beispiel an Ausübungspreisen und Maturitäten, für die ein Geschäft in einem bestimmten Zeitraum abgeschlossen wurde. Ein funktionaler Datensatz ist dann vorhanden, wenn diese Funktionen für verschiedene Zeitpunkte (z.B. Tage) oder verschiedene zugrundeliegende Aktiva gesammelt werden. Das erste Thema, das in dieser Dissertation betrachtet wird, behandelt die nichtparametrischen Methoden der Schätzung dieser Objekte (wie z.B. implizierter Volatilitäten) aus den beobachteten Daten. Neben den bekannten Glättungsmethoden wird eine Prozedur für die Glättung der implizierten Volatilitäten vorgeschlagen, die auf einer Kombination von nichtparametrischer Glättung und den Ergebnissen der arbitragefreien Theorie basiert. Der zweite Teil der Dissertation ist der funktionalen Datenanalyse (FDA) gewidmet, speziell im Zusammenhang mit den Problemen der empirischen Finanzmarktanalyse. Der theoretische Teil der Arbeit konzentriert sich auf die funktionale Hauptkomponentenanalyse -- das funktionale Ebenbild der bekannten Dimensionsreduktionstechnik. Ein umfangreicher Überblick über die existierenden Methoden wird gegeben; eine Schätzmethode, die von der Lösung des dualen Problems motiviert ist, sowie die Zwei-Stichproben-Inferenz basierend auf der funktionalen Hauptkomponentenanalyse werden behandelt. Die FDA-Techniken sind auf die Analyse der Dynamik implizierter Volatilitäten und Zinskurven angewandt worden. Darüber hinaus wird die Implementation der FDA-Techniken zusammen mit einer FDA-Bibliothek für die statistische Software XploRe behandelt. / In many different fields of applied statistics an object of interest depends on some continuous parameter. Typical examples in finance are implied volatility functions, yield curves or risk-neutral densities. Due to the different market conventions and further technical reasons, these objects are observable only on a discrete grid, e.g. on the grid of strikes and maturities for which trades have been settled at a given time point. By collecting these functions for several time points (e.g. days) or for different underlyings, a sample of functions -- a functional data set -- is obtained. The first topic considered in this thesis concerns strategies for recovering the functional objects (e.g. the implied volatility function) from the observed data by means of nonparametric smoothing methods. Besides the standard smoothing methods, a procedure based on a combination of nonparametric smoothing and no-arbitrage-theory results is proposed for implied volatility smoothing. The second part of the thesis is devoted to functional data analysis (FDA) and its connection to problems arising in the empirical analysis of financial markets. The theoretical part of the thesis focuses on functional principal component analysis -- the functional counterpart of the well-known multivariate dimension-reduction technique. A comprehensive overview of the existing methods is given, and an estimation method motivated by the dual problem as well as two-sample inference based on functional principal component analysis are discussed.
The FDA techniques are applied to the analysis of implied volatility and yield curve dynamics. In addition, the implementation of the FDA techniques together with an FDA library for the statistical environment XploRe is presented.
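The dual-problem route to functional principal components mentioned in this abstract can be illustrated with a short sketch. The code below is not the thesis's XploRe implementation; it is a minimal numpy example, assuming the curves (for instance daily implied volatility smiles) have already been smoothed onto a common grid, that solves the small n-by-n Gram-matrix eigenproblem instead of the grid-sized covariance eigenproblem and maps the dual eigenvectors back to discretized eigenfunctions. The grid, sample sizes, and simulated "smiles" are illustrative assumptions.

```python
import numpy as np

def functional_pca_dual(curves, n_components=3):
    """Estimate functional principal components of curves observed on a common grid
    via the dual (n x n Gram-matrix) eigenproblem.

    curves : array of shape (n_curves, n_grid), e.g. daily implied volatility
             curves evaluated on a fixed moneyness grid.
    """
    n, m = curves.shape
    centered = curves - curves.mean(axis=0)          # remove the mean function
    gram = centered @ centered.T / n                 # dual problem: n x n instead of m x m
    eigval, eigvec = np.linalg.eigh(gram)
    order = np.argsort(eigval)[::-1][:n_components]  # largest eigenvalues first
    eigval, eigvec = eigval[order], eigvec[:, order]
    # map dual eigenvectors back to (discretized) eigenfunctions and normalize them
    components = centered.T @ eigvec / np.sqrt(n * eigval)
    scores = centered @ components                   # principal component scores per curve
    return eigval, components, scores

# Illustrative use on simulated "volatility smiles" on an assumed moneyness grid
rng = np.random.default_rng(0)
grid = np.linspace(0.8, 1.2, 50)
curves = 0.2 + 0.5 * (grid - 1.0) ** 2 + 0.01 * rng.normal(size=(100, 50))
values, eigenfunctions, scores = functional_pca_dual(curves)
```

The dual formulation is attractive in exactly this FDA setting, where the number of observed curves is usually much smaller than the number of grid points.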
652

Non-linear dimensionality reduction and sparse representation models for facial analysis / Réduction de la dimension non-linéaire et modèles de représentations parcimonieuses pour l’analyse du visage

Zhang, Yuyao 20 February 2014 (has links)
Les techniques d'analyse du visage nécessitent généralement une représentation pertinente des images, notamment en passant par des techniques de réduction de la dimension, intégrées dans des schémas plus globaux, et qui visent à capturer les caractéristiques discriminantes des signaux. Dans cette thèse, nous fournissons d'abord une vue générale sur l'état de l'art de ces modèles, puis nous appliquons une nouvelle méthode intégrant une approche non-linéaire, Kernel Similarity Principal Component Analysis (KS-PCA), aux Modèles Actifs d'Apparence (AAMs), pour modéliser l'apparence d'un visage dans des conditions d'illumination variables. L'algorithme proposé améliore notablement les résultats obtenus par l'utilisation d'une transformation PCA linéaire traditionnelle, que ce soit pour la capture des caractéristiques saillantes, produites par les variations d'illumination, ou pour la reconstruction des visages. Nous considérons aussi le problème de la classification automatique des poses des visages pour différentes vues et différentes illuminations, avec occlusion et bruit. Basé sur les méthodes des représentations parcimonieuses, nous proposons deux cadres d'apprentissage de dictionnaire pour ce problème. Une première méthode vise la classification de poses à l'aide d'une représentation parcimonieuse active (Active Sparse Representation ASRC). En fait, un dictionnaire est construit grâce à un modèle linéaire, l'Incremental Principal Component Analysis (Incremental PCA), qui a tendance à diminuer la redondance intra-classe qui peut affecter la performance de la classification, tout en gardant la redondance inter-classes, qui, elle, est critique pour les représentations parcimonieuses. La seconde approche proposée est un modèle des représentations parcimonieuses basé sur le Dictionary-Learning Sparse Representation (DLSR), qui cherche à intégrer la prise en compte du critère de la classification dans le processus d'apprentissage du dictionnaire. Nous faisons appel dans cette partie à l'algorithme K-SVD. Nos résultats expérimentaux montrent la performance de ces deux méthodes d'apprentissage de dictionnaire. Enfin, nous proposons un nouveau schéma pour l'apprentissage de dictionnaire adapté à la normalisation de l'illumination (Dictionary Learning for Illumination Normalization: DLIN). L'approche ici consiste à construire une paire de dictionnaires avec une représentation parcimonieuse. Ces dictionnaires sont construits respectivement à partir de visages illuminés normalement et irrégulièrement, puis optimisés de manière conjointe. Nous utilisons un modèle de mixture de Gaussiennes (GMM) pour augmenter la capacité à modéliser des données avec des distributions plus complexes. Les résultats expérimentaux démontrent l'efficacité de notre approche pour la normalisation d'illumination. / Face analysis techniques commonly require a proper representation of images by means of dimensionality reduction leading to embedded manifolds, which aims at capturing relevant characteristics of the signals. In this thesis, we first provide a comprehensive survey on the state of the art of embedded manifold models. Then, we introduce a novel non-linear embedding method, the Kernel Similarity Principal Component Analysis (KS-PCA), into Active Appearance Models, in order to model face appearances under variable illumination.
The proposed algorithm clearly outperforms the traditional linear PCA transform in capturing the salient features generated by different illuminations and in reconstructing the illuminated faces with high accuracy. We also consider the problem of automatically classifying human face poses from face views with varying illumination, as well as occlusion and noise. Based on sparse representation methods, we propose two dictionary-learning frameworks for this pose classification problem. The first framework is the Adaptive Sparse Representation pose Classification (ASRC). It trains the dictionary via a linear model called Incremental Principal Component Analysis (Incremental PCA), which tends to decrease the intra-class redundancy that may affect the classification performance, while keeping the extra-class redundancy that is critical for sparse representation. The other proposed framework is the Dictionary-Learning Sparse Representation model (DLSR), which learns the dictionary so as to incorporate the classification criterion. This training goal is achieved by the K-SVD algorithm. In a series of experiments, we show the performance of the two dictionary-learning methods, which are respectively based on a linear transform and a sparse representation model. Besides, we propose a novel Dictionary Learning framework for Illumination Normalization (DL-IN). DL-IN is based on sparse representation in terms of coupled dictionaries. The dictionary pairs are jointly optimized from normally illuminated and irregularly illuminated face image pairs. We further utilize a Gaussian Mixture Model (GMM) to enhance the framework's capability of modeling data under complex distributions. The GMM adapts each model to a part of the samples and then fuses them together. Experimental results demonstrate the effectiveness of sparsity as a prior for patch-based illumination normalization of face images.
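As a point of reference for the sparse-representation side of this abstract, the sketch below shows plain sparse-representation classification (SRC) with orthogonal matching pursuit: a test face is coded over a dictionary of training faces and assigned to the class with the smallest class-restricted reconstruction residual. It is a simplified stand-in, not the ASRC or DLSR models themselves; the Incremental PCA and K-SVD dictionary-learning stages are omitted, and all data, dimensions, and class labels here are synthetic.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(dictionary, labels, sample, n_nonzero=10):
    """Sparse-representation classification: code `sample` over a dictionary of
    training faces and return the class with the smallest reconstruction residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(dictionary, sample)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(labels):
        coef_c = np.where(labels == c, coef, 0.0)      # keep only atoms of class c
        residuals[c] = np.linalg.norm(sample - dictionary @ coef_c)
    return min(residuals, key=residuals.get)

# Illustrative use with random data (columns = vectorized training faces)
rng = np.random.default_rng(0)
D = rng.normal(size=(1024, 60))                        # 60 atoms of dimension 1024
D /= np.linalg.norm(D, axis=0)                         # unit-norm atoms
y = np.repeat(np.arange(6), 10)                        # 6 hypothetical pose classes
x = D[:, 3] + 0.05 * rng.normal(size=1024)             # noisy sample from class 0
print(src_classify(D, y, x))
```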
653

類典型相關分析及其在免試入學上採計成績之研究 / A canonical correlation analysis type approach to model a criterion for enrolling high school students

卓惠敏, Cho, Hui Min Unknown Date (has links)
實施十二年國民基本教育，目的是為促進學生五育均衡發展，兼顧國中學習品質及日常生活表現。由於各校對成績的評分標準與評分方式皆不相同，因此如何使在校成績採計達到公平性將成為一項重要的問題。 戴岑熹(2011) 考慮了國中在校綜合學科分數與基測總分間的相關性，以決定在校各學科的權重。而本研究延伸其概念與方法，將基測各科量尺分數考慮進來，於在校綜合學科分數與基測綜合量尺分數的關聯性最密切的情況下，分析各學科權重的取決方式，希望能找出較理想的模式來代表學生在校三年的整體學習表現與成果，以做為免試升學採計在校成績的參考與依據。 本文的研究方法是運用典型相關分析的理論，但因權重的限制條件與傳統典型相關分析的要求不同，因此，便將其命名為「類典型相關分析」。在類典型相關分析中，我們證明了在校各學科分數及基測各科量尺分數的最佳權重，可先透過典型相關分析求得典型相關向量，若有必要的話，使用Rao-Ghangurad 方法加以修正，最後，再將所獲得的非負典型相關向量正規化，即可獲得所要的結果，這是一個求最佳權重向量極便捷的途徑。在實例分析方面，我們發現了一個有趣的現象，即在校學科分數與基測考科量尺分數的最佳權重向量相當接近，即名稱相同的學科與考科幾乎有相同的權重。在比較了幾個權重分配方式不同的在校綜合學科分數後，我們也發現一般學校常用的等加權模式，其表現結果也頗優異。 / The purpose of implementing the twelve-year compulsory education is to promote students' balanced development, taking into account both their learning quality in junior high school and their daily performance. As grading standards and methods vary among schools, achieving fairness in how in-school grades are counted has become an important issue. Dai (2011) considered the correlation between the composite in-school academic score and the total score of the BCTEST for junior high school students in order to decide the weightings of the individual subjects. This study extends his concept and method and takes the BCTEST scale scores of all subjects into account: we analyse how the subject weightings should be chosen so that the correlation between the composite in-school academic score and the composite BCTEST scale score is as strong as possible. We hope the study can find a better model that not only reflects students' overall learning performance and achievements over their three years in school, but also provides a reference for counting in-school grades in examination-free senior high school admission. The research method in this paper employs the theory of canonical correlation analysis. However, because the restrictions on the weights differ from the requirements of traditional canonical correlation analysis, the method is named the canonical correlation analysis type approach. In this approach, we prove that the optimal weights for the school subject scores and the test scale scores can be obtained by first finding the canonical correlation vectors through canonical correlation analysis, correcting them with the Rao-Ghangurad method if necessary, and finally normalizing the resulting nonnegative canonical correlation vectors. This is an extremely convenient way to obtain the optimal weight vectors. In the case study, we found an interesting phenomenon: the optimal weight vectors for the school subject scores and the test scale scores are very close, i.e. subjects and tests of the same name receive almost the same weight. After comparing several composite in-school scores with different weight distributions, we also found that the equal weighting model commonly used in schools performs quite well.
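The weighting recipe described above (canonical vectors, followed by normalization to nonnegative weights that sum to one) can be sketched with standard tools. The example below uses scikit-learn's CCA on simulated score matrices; the Rao-Ghangurad correction step is omitted, and the subject counts and data are purely illustrative assumptions, not the thesis's data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# X: in-school subject scores, Y: BCTEST scale scores; rows = students, columns = subjects.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                      # 5 hypothetical school subjects
Y = 0.8 * X + 0.6 * rng.normal(size=(300, 5))      # 5 corresponding test subjects

cca = CCA(n_components=1)
cca.fit(X, Y)
wx = cca.x_weights_[:, 0]                          # first canonical vector for X
wy = cca.y_weights_[:, 0]                          # first canonical vector for Y

# If the vectors are nonnegative (otherwise the Rao-Ghangurad correction would be
# applied first, omitted here), rescale them so the weights sum to one, giving
# interpretable subject weightings.
wx_weights = wx / wx.sum()
wy_weights = wy / wy.sum()
print(np.round(wx_weights, 3), np.round(wy_weights, 3))
```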
654

Μέτρηση γεωμετρικών χαρακτηριστικών και αναλογίας μεγεθών ερυθρών αιμοσφαιρίων με ψηφιακή επεξεργασία της σκεδαζόμενης ηλεκτρομαγνητικής ακτινοβολίας / Estimation of geometrical properties of human red blood cells using light scattering images

Αποστολόπουλος, Γεώργιος 19 January 2011 (has links)
Σκοπός της διδακτορικής διατριβής είναι η ανάπτυξη κατάλληλων μεθόδων ψηφιακής επεξεργασίας εικόνας και αναγνώρισης προτύπων με τις οποίες θα προσδιορίζονται βιομετρικές και διαγνωστικές παράμετροι μέσω της αλληλεπίδρασης φωτονίων στο ορατό και υπέρυθρο φάσμα. Πιο συγκεκριμένα επιλύεται ένα αντίστροφο πρόβλημα σκέδασης ΗΜ ακτινοβολίας από ένα ανθρώπινο, υγιές και απαραμόρφωτο ερυθρό αιμοσφαίριο. Παρουσιάζονται μέθοδοι εκτίμησης και αναγνώρισης των γεωμετρικών χαρακτηριστικών απαραμόρφωτων υγιών ερυθρών αιμοσφαιρίων με χρήση εικόνων που προσομοιώνουν φαινόμενα σκέδασης ηλεκτρομαγνητικής ακτινοβολίας που διέρχεται από προσανατολισμένα ερυθρά αιμοσφαίρια. Η διαδικασία της ανάκτησης της πληροφορίας περιλαμβάνει εξαγωγή χαρακτηριστικών με χρήση δισδιάστατων μετασχηματισμών, κανονικοποίηση των χαρακτηριστικών και την χρήση νευρωνικών δικτύων για την εκτίμηση των γεωμετρικών ιδιοτήτων του ερυθροκυττάρου. Παράλληλα σχεδιάστηκε και αξιολογήθηκε σύστημα αναγνώρισης των γεωμετρικών χαρακτηριστικών των ερυθρών αιμοσφαιρίων. Οι εικόνες σκέδασης δημιουργήθηκαν προσομοιώνοντας το πρόβλημα εμπρόσθιας σκέδασης ενός επίπεδου ηλεκτρομαγνητικού (ΗΜ) κύματος, χρησιμοποιώντας την μέθοδο των συνοριακών στοιχείων, λαμβάνοντας υπόψη τόσο την αξονοσυμμετρική γεωμετρία του ερυθροκυττάρου όσο και τις μη αξονοσυμμετρικές οριακές συνθήκες του προβλήματος. Η επίλυση του εν λόγω προβλήματος πραγματοποιήθηκε στα 632.8 nm και εν συνεχεία επεκτάθηκε σε 12 διακριτά ίσου βήματος μήκη κύματος από 432.8 nm έως 1032.8 nm. Επίσης, προτάθηκε μία νέα πειραματική διάταξη για την απόκτηση πολλαπλών εικόνων σκέδασης και την εκτίμηση των γεωμετρικών χαρακτηριστικών των ερυθρών αιμοσφαιρίων, αποτελούμενη από μία πολυχρωματική πηγή φωτός (Led) και πολλαπλά χρωματικά φίλτρα. Επίσης κατασκευάστηκε μέθοδος επίλυσης του σημαντικού προβλήματος εύρεσης της περιεκτικότητας του διαλύματος σε ερυθρά αιμοσφαίρια διαφορετικών μεγεθών στην περίπτωση απόκτησης πολλαπλών εικόνων σκέδασης από διαφορετικές φωτοδιόδους και πολλαπλά χρωματικά φίλτρα. Στα πειράματα αξιολόγησης της μεθόδου που προτείνεται με εικόνες προσομοίωσης δείχνεται ότι είναι ικανή η εύρεση της αναλογίας των ερυθρών αιμοσφαιρίων με πολύ μεγάλη ακρίβεια ακόμα και στη περίπτωση όπου στις εικόνες έχει προστεθεί λευκός κανονικός θόρυβος. Η βασική μεθοδολογία που παρουσιάζεται στην παρούσα διατριβή μπορεί να χρησιμοποιηθεί για την αναγνώριση παθολογικών αιμοσφαιρίων ή να χρησιμοποιηθεί στην αναγνώριση μικροσωματιδίων σε υγρά ή αέρια. / The aim of this PhD thesis is the development of digital image processing and pattern recognition methods to estimate biometric and diagnostic parameters using scattering phenomena in the visible and infrared spectrum. More concretely, an inverse scattering problem of EM radiation from a human, healthy and undistorted Red Blood Cell (RBC) is solved. Methods for the estimation and recognition of geometrical characteristics of healthy and undistorted RBCs using simulated light-scattering images are presented. The information retrieval process includes feature extraction using two-dimensional integral transforms, feature normalization, and neural networks for the estimation of three major RBC geometrical properties. Using the same feature set, a recognition system for the geometric characteristics of RBCs was developed and evaluated.
The scattering images were created by simulating the forward scattering problem of a plane electromagnetic wave using the Boundary Element Method, taking into account both the axisymmetric geometry of the scatterer and the non-axisymmetric boundary conditions of the problem. Initially, the problem was solved at 632.8 nm and subsequently at 12 equally spaced wavelengths from 432.8 to 1032.8 nm. Also, a new device for the acquisition of scattering images from an RBC flow, consisting of a multi-color light source (LED) and multiple color filters, was proposed for RBC size estimation and recognition. Finally, a system was developed for estimating the concentrations of RBCs of different sizes in the case where multiple scattering images are acquired from multiple LEDs and color filters. The system was evaluated using additive white Gaussian noise.
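The retrieval pipeline described in this abstract (two-dimensional transform features extracted from a scattering image, normalization, and a neural network that regresses the geometric parameters) can be caricatured as below. This is only a structural sketch: the images are random stand-ins, the diameter/thickness targets are made up, and nothing here reproduces the boundary-element simulations or the trained networks of the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def scattering_features(image, n_coeffs=64):
    """Low-order 2-D Fourier magnitudes as a compact descriptor of a scattering image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    c = image.shape[0] // 2
    k = int(np.sqrt(n_coeffs)) // 2
    return spectrum[c - k:c + k, c - k:c + k].ravel()

# Simulated stand-in data: 64x64 "scattering patterns" and two geometric targets
rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))
geometry = rng.uniform([6.0, 1.0], [9.0, 3.0], size=(200, 2))   # diameter, thickness in µm

X = np.array([scattering_features(im) for im in images])
X = StandardScaler().fit_transform(X)                            # feature normalization step
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, geometry)                                           # regress geometry from features
print(model.predict(X[:1]))
```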
655

Nedourčená slepá separace zvukových signálů / Underdetermined Blind Audio Signal Separation

Čermák, Jan January 2008 (has links)
We often have to face the fact that several signals are mixed together in an unknown environment. The signals must first be extracted from the mixture in order to be interpreted correctly. In the signal processing community, this problem is called blind source separation. This dissertation deals with multi-channel separation of audio signals in a real environment when the source signals outnumber the sensors. An introduction to blind source separation is presented in the first part of the thesis. The present state of separation methods is then analyzed. Based on this knowledge, separation systems implementing a fuzzy time-frequency mask are introduced. However, these methods still introduce nonlinear changes in the signal spectra, which can result in musical noise. In order to reduce musical noise, novel methods combining time-frequency binary masking and beamforming are introduced. The new separation system performs linear spatial filtering even if the source signals outnumber the sensors. Finally, the separation systems are evaluated by objective and subjective tests in the last part of the thesis.
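For readers unfamiliar with underdetermined masking approaches, the sketch below shows a DUET-style binary time-frequency mask for a two-microphone, three-source mixture: time-frequency bins are clustered by inter-channel level and phase differences, and each cluster's mask is applied to one channel. It illustrates only the binary-masking ingredient under assumed parameters (sampling rate, FFT length, source count); the thesis's fuzzy masks and its combination of masking with beamforming are not reproduced here.

```python
import numpy as np
from scipy.signal import stft, istft
from sklearn.cluster import KMeans

def duet_style_separation(x1, x2, n_sources=3, fs=16000, nperseg=1024):
    """Cluster time-frequency bins of a stereo mixture by inter-channel level and
    phase differences, then apply binary masks to recover more sources than sensors."""
    f, t, X1 = stft(x1, fs=fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)
    eps = 1e-12
    level = np.clip(np.abs(X2) / (np.abs(X1) + eps), 0.0, 10.0)   # inter-channel level ratio
    phase = np.angle(X2 * np.conj(X1))                            # inter-channel phase difference
    feats = np.stack([level.ravel(), phase.ravel()], axis=1)
    labels = KMeans(n_clusters=n_sources, n_init=10).fit_predict(feats)
    labels = labels.reshape(X1.shape)
    estimates = []
    for k in range(n_sources):
        mask = (labels == k).astype(float)                        # binary time-frequency mask
        _, s = istft(mask * X1, fs=fs, nperseg=nperseg)
        estimates.append(s)
    return estimates
```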
656

Assessment of blind source separation techniques for video-based cardiac pulse extraction

Wedekind, Daniel, Trumpp, Alexander, Gaetjen, Frederik, Rasche, Stefan, Matschke, Klaus, Malberg, Hagen, Zaunseder, Sebastian 09 September 2019 (has links)
Blind source separation (BSS) aims at separating useful signal content from distortions. In the contactless acquisition of vital signs by means of the camera-based photoplethysmogram (cbPPG), BSS has evolved into the most widely used approach to extract the cardiac pulse. Despite its frequent application, there is no consensus about the optimal usage of BSS and its general benefit. This contribution investigates the performance of BSS in enhancing the cardiac pulse from cbPPGs in dependence on varying input data characteristics. The BSS input conditions are controlled by an automated spatial preselection routine for regions of interest. Input data of different characteristics (wavelength, dominant frequency, and signal quality) from 18 postoperative cardiovascular patients are processed with standard BSS techniques, namely principal component analysis (PCA) and independent component analysis (ICA). The effect of BSS is assessed by the spectral signal-to-noise ratio (SNR) of the cardiac pulse. The preselection of cbPPGs appears beneficial, providing a higher SNR compared to standard cbPPGs. Both PCA and ICA yielded better outcomes when using monochrome inputs (green wavelength) instead of inputs of different wavelengths. PCA outperforms ICA for more homogeneous input signals. Moreover, for a high input SNR, the application of ICA using standard contrast is likely to decrease the SNR.
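A minimal version of the assessed pipeline, stripped of the study's preselection routine and its exact SNR definition, could look as follows: PCA or FastICA is applied to a matrix of region-of-interest traces, and the component with the highest spectral power ratio inside an assumed cardiac band (0.7 to 3 Hz here) is kept as the pulse estimate. The band limits, frame rate, and the SNR formula below are illustrative assumptions, not the paper's metric.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def spectral_snr(sig, fs, band=(0.7, 3.0)):
    """Ratio of spectral power inside the assumed cardiac band to power outside it."""
    spec = np.abs(np.fft.rfft(sig - sig.mean())) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    inside = (freqs >= band[0]) & (freqs <= band[1])
    return spec[inside].sum() / (spec[~inside].sum() + 1e-12)

def best_pulse_component(roi_signals, fs=30.0, method="ica"):
    """Apply PCA or ICA to preselected ROI traces (rows = samples, columns = ROIs)
    and return the component with the highest spectral SNR as the pulse estimate."""
    model = FastICA(random_state=0) if method == "ica" else PCA()
    comps = model.fit_transform(roi_signals)          # shape (n_samples, n_components)
    snrs = [spectral_snr(comps[:, i], fs) for i in range(comps.shape[1])]
    best = int(np.argmax(snrs))
    return comps[:, best], snrs[best]
```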
657

Outlier detection with ensembled LSTM auto-encoders on PCA transformed financial data / Avvikelse-detektering med ensemble LSTM auto-encoders på PCA-transformerad finansiell data

Stark, Love January 2021 (has links)
Financial institutions today generate large amounts of data, data that can contain interesting information worth investigating to further the economic growth of the institution. There is an interest in analyzing these points of information, especially if they are anomalous with respect to the normal day-to-day work. However, finding these outliers is not an easy task and is not possible to do manually, due to the massive amounts of data being generated daily. Previous work on this problem has explored the use of machine learning to find outliers in such financial datasets. Previous studies have also shown that the pre-processing of the data usually accounts for a large part of the information loss. This work aims to study whether there is a proper balance in how the pre-processing is carried out, retaining as much information as possible while not leaving the data too complex for the machine learning models. The dataset used consisted of foreign exchange transactions supplied by the host company and was pre-processed through the use of Principal Component Analysis (PCA). The main purpose of this work is to test whether an ensemble of Long Short-Term Memory Recurrent Neural Networks (LSTM), configured as autoencoders, can be used to detect outliers in the data and whether the ensemble is more accurate than a single LSTM autoencoder. Previous studies have shown that ensembles of autoencoders can prove more accurate than a single autoencoder, especially when SkipCells have been implemented (a configuration that skips over LSTM cells to make the model perform with more variation). A data point is considered an outlier if the LSTM model has trouble recreating it properly, i.e. a pattern that is hard to reconstruct, making it available for further manual investigation. The results show that the ensembled LSTM model proved more accurate than a single LSTM model in reconstructing the dataset and, by our definition of an outlier, more accurate in outlier detection. The results from the pre-processing experiments reveal different methods for obtaining an optimal number of components for the data; one of them is to study the retained variance and accuracy of the PCA transformation compared to model performance for a given number of components. One of the conclusions from the work is that ensembled LSTM networks can prove very powerful, but that alternatives to the pre-processing, such as categorical embeddings instead of PCA, should be explored. / Finansinstitut genererar idag en stor mängd data, data som kan innehålla intressant information värd att undersöka för att främja den ekonomiska tillväxten för nämnda institution. Det finns ett intresse för att analysera dessa informationspunkter, särskilt om de är avvikande från det normala dagliga arbetet. Att upptäcka dessa avvikelser är dock inte en lätt uppgift och ej möjligt att göra manuellt på grund av de stora mängderna data som genereras dagligen. Tidigare arbete för att lösa detta har undersökt användningen av maskininlärning för att upptäcka avvikelser i finansiell data. Tidigare studier har visat att förbehandlingen av datan vanligtvis står för en stor del av förlusten av information från datan. Detta arbete syftar till att studera om det finns en korrekt balans i hur förbehandlingen utförs för att behålla den högsta mängden information samtidigt som datan inte förblir för komplex för maskininlärningsmodellerna.
Det dataset som användes bestod av valutatransaktioner som tillhandahölls av värdföretaget och förbehandlades genom användning av Principal Component Analysis (PCA). Huvudsyftet med detta arbete är att undersöka om en ensemble av Long Short-Term Memory Recurrent Neural Networks (LSTM), konfigurerade som autoenkodare, kan användas för att upptäcka avvikelser i data och om ensemblen är mer precis i sina predikteringar än en ensam LSTM-autoenkodare. Tidigare studier har visat att en ensemble av autoenkodare kan visa sig vara mer precis än en enskild autoenkodare, särskilt när SkipCells har implementerats (en konfiguration som hoppar över vissa av LSTM-cellerna för att göra modellerna mer varierade). En datapunkt betraktas som en avvikelse om LSTM-modellen har problem med att återskapa den väl, dvs. ett mönster som nätverket har svårt att återskapa, vilket gör datapunkten tillgänglig för vidare undersökningar. Resultaten visar att en ensemble av LSTM-modeller predikterade mer precist än en enskild LSTM-modell när det gäller att återskapa datasetet, och därmed, enligt vår definition av avvikelser, gav en mer precis avvikelsedetektering. Resultaten från förbehandlingen visar olika metoder för att uppnå ett optimalt antal komponenter för datan, bland annat genom att studera bibehållen varians och precision för PCA-transformationen jämfört med modellprestanda. En av slutsatserna från arbetet är att en ensemble av LSTM-nätverk kan visa sig vara mycket kraftfull, men att alternativ till förbehandlingen bör undersökas, såsom categorical embedding istället för PCA.
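A compressed sketch of the setup described above, assuming TensorFlow/Keras and synthetic stand-in data: transactions are PCA-reduced, windowed into sequences, reconstructed by a small ensemble of LSTM autoencoders, and the windows with the largest mean reconstruction error are flagged as outliers. The SkipCell variation, the real foreign-exchange data, and all tuning from the thesis are omitted; window length, component count, and the flagging quantile are assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA
from tensorflow.keras import layers, models

def lstm_autoencoder(timesteps, n_features, units=32):
    """A single LSTM autoencoder: encode a sequence into a vector, then decode it back."""
    inp = layers.Input(shape=(timesteps, n_features))
    z = layers.LSTM(units)(inp)                           # encoder
    z = layers.RepeatVector(timesteps)(z)
    z = layers.LSTM(units, return_sequences=True)(z)      # decoder
    out = layers.TimeDistributed(layers.Dense(n_features))(z)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Hypothetical pipeline on stand-in transaction features
rng = np.random.default_rng(0)
raw = rng.normal(size=(5000, 40))
reduced = PCA(n_components=8).fit_transform(raw)          # PCA pre-processing step
timesteps = 10
seqs = np.stack([reduced[i:i + timesteps] for i in range(len(reduced) - timesteps)])

errors = np.zeros(len(seqs))
for seed in range(3):                                     # small ensemble of 3 autoencoders
    tf.random.set_seed(seed)                              # vary each member's initialization
    ae = lstm_autoencoder(timesteps, seqs.shape[2])
    ae.fit(seqs, seqs, epochs=2, batch_size=128, verbose=0)
    recon = ae.predict(seqs, verbose=0)
    errors += np.mean((seqs - recon) ** 2, axis=(1, 2))
errors /= 3
outliers = np.where(errors > np.quantile(errors, 0.99))[0]   # flag the top 1% as outliers
```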
658

Why Does Cash Still Exist? / Varför Existerar Fortfarande Kontanter?

Asplund, Oscar, Tzobras, Othon January 2018 (has links)
With non-cash transactions on the rise and the debate about the future cashless society raging, cash is still being used around the world to varying degrees. This thesis studies the behavioral determinants of consumers with regard to cash usage. Current research has found several determinants of consumer behavior, and this study aims at combining the existing knowledge into one model that might explain people's payment-medium behavior. The method chosen in this thesis was to perform a factor analysis in order to validate the hypothesized model. The obtained data set was analyzed using IBM SPSS Statistics, version 24.0.0.0. To start with, the data was deemed suitable after checking the adequacy of our data set through the KMO and Bartlett's tests, and further by looking at the MSA table, where the variables score quite high, indicating that the data is suitable. Secondly, the results from the factor analysis indicated that eight components were extracted, with some components being uncorrelated but the majority of the extracted components being correlated. Finally, we found that our theoretical model holds, but we recommend that further research be conducted on how locations determine cash usage. Moreover, we noted that some components within the socio-demographic groups are uncorrelated, and we would therefore like to recommend further research into the statistical validity of the model. / Trots den stadigt ökande andelen icke-kontanta transaktioner, kombinerat med den pågående debatten om det kontantlösa samhället, så används fortfarande kontanter (i varierande utsträckning) runt om i världen. Denna uppsats studerar beteendedeterminanter med hänsyn till konsumenters val av olika betalningsmedel. Tidigare studier har hittat bevis för flera olika determinanter som påverkar konsumenters beteenden och denna uppsats syftar till att kombinera existerande determinanter till en modell som kan förklara konsumenters beteendemönster kring valet av olika betalningsmedier. Faktoranalys har varit den valda metoden för denna studie för att kunna validera den hypotiserade beteendemodellen. Det erhållna datasetet analyserades med hjälp av mjukvaran IBM SPSS Statistics, version 24.0.0.0. Till att börja med så ansågs datan vara passande efter lämplighetstest av det tillgängliggjorda datasetet, genom kontroll av resultaten från KMO- och Bartlett-testen samt granskning av MSA-tabellen, där flera variabler innehar höga värden, vilket indikerade att datan var lämplig för vidare analys. Resultaten från faktoranalysen indikerar att vi erhöll totalt åtta komponenter, där ett fåtal korrelerade men majoriteten av komponenterna inte var korrelerade. Till att börja med fann vi alltså att datan var lämplig för vidare analys. Därefter fick vi åtta extraherade komponenter vilka teoretiskt kunde härledas till vår modells hypotiserade determinanter. Till slut fann vi att vår teoretiska modell håller, men vill rekommendera vidare forskning på hur specifika platser determinerar kontantanvändning. Dessutom noterade vi att vissa komponenter inte korrelerar hos vissa sociodemografiska grupper, och vi vill därför rekommendera vidare forskning för att bättre validera modellen statistiskt.
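The adequacy checks and extraction reported above (KMO, Bartlett's test of sphericity, eight extracted components) were run in SPSS; an open-source analogue could be sketched as below, with Bartlett's test and the KMO measure computed directly from the correlation matrix and the factors extracted with scikit-learn's FactorAnalysis (varimax rotation assumes a recent scikit-learn). The survey data here are simulated placeholders, not the study's questionnaire responses.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FactorAnalysis

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: is the correlation matrix far enough from identity?"""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2
    return chi2, stats.chi2.sf(chi2, dof)

def kmo(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy from partial correlations."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                                   # anti-image (partial) correlations
    np.fill_diagonal(corr, 0.0)
    np.fill_diagonal(partial, 0.0)
    return (corr ** 2).sum() / ((corr ** 2).sum() + (partial ** 2).sum())

# Stand-in survey data: 300 respondents, 20 Likert-style items from 8 latent factors
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 8))
items = latent @ rng.normal(size=(8, 20)) + 0.5 * rng.normal(size=(300, 20))

print(bartlett_sphericity(items), kmo(items))
fa = FactorAnalysis(n_components=8, rotation="varimax").fit(items)
loadings = fa.components_.T                              # item-by-factor loading matrix
```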
659

Unsupervised Anomaly Detection and Root Cause Analysis in HFC Networks: A Clustering Approach

Forsare Källman, Povel January 2021 (has links)
Following the significant transition from the traditional production industry to an information-based economy, the telecommunications industry was faced with an explosion of innovation, resulting in a continuous change in user behaviour. The industry has made efforts to adapt to a more data-driven future, which has given rise to larger and more complex systems. Therefore, troubleshooting systems such as anomaly detection and root cause analysis are essential features for maintaining service quality and facilitating daily operations. This study aims to explore the possibilities, benefits, and drawbacks of implementing cluster analysis for anomaly detection in hybrid fiber-coaxial networks. Based on the literature review on unsupervised anomaly detection and an assumption regarding the anomalous behaviour in hybrid fiber-coaxial network data, the k-means, Self-Organizing Map, and Gaussian Mixture Model algorithms were implemented both with and without Principal Component Analysis. Analysis of the results demonstrated an increase in performance for all models when Principal Component Analysis was applied, with k-means outperforming both the Self-Organizing Map and the Gaussian Mixture Model. On this basis, it is recommended to apply Principal Component Analysis for clustering-based anomaly detection. Further research is necessary to identify whether cluster analysis is the most appropriate unsupervised anomaly detection approach. / Efter övergången från den traditionella tillverkningsindustrin till en informationsbaserad ekonomi stod telekommunikationsbranschen inför en explosion av innovation. Detta skifte resulterade i en kontinuerlig förändring av användarbeteende och branschen tvingades genomgå stora ansträngningar för att lyckas anpassa sig till den mer datadrivna framtiden. Större och mer komplexa system utvecklades och således blev felsökningsfunktioner såsom anomalidetektering och rotfelsanalys centrala för att upprätthålla servicekvalitet samt underlätta den dagliga driftverksamheten. Syftet med studien är att utforska möjligheterna samt för- och nackdelarna med att använda klusteranalys för anomalidetektering inom HFC-nätverk. Baserat på litteraturstudien om oövervakad anomalidetektering samt antaganden om anomalibeteenden i HFC-data valdes algoritmerna k-means, Self-Organizing Map och Gaussian Mixture Model för implementering, både med och utan Principal Component Analysis. Analys av resultaten påvisade en uppenbar ökning av prestanda för samtliga modeller vid användning av PCA. Vidare överträffade k-means både Self-Organizing Map och Gaussian Mixture Model. Utifrån resultatanalysen rekommenderas det således att PCA tillämpas vid klustringsbaserad anomalidetektering. Vidare är ytterligare forskning nödvändig för att avgöra huruvida klusteranalys är den mest lämpliga metoden för oövervakad anomalidetektering.
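A bare-bones version of the recommended combination (PCA followed by clustering-based anomaly scoring) is sketched below: telemetry rows are standardized, PCA-reduced, clustered with k-means, and scored by their distance to the nearest centroid. Feature counts, cluster counts, and the flagging quantile are illustrative assumptions, not values from the thesis, and the data are random stand-ins for HFC measurements.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def kmeans_anomaly_scores(X, n_clusters=8, n_components=5):
    """PCA-reduce HFC telemetry, fit k-means, and use the distance to the nearest
    centroid as an anomaly score (a large distance means the point fits no cluster well)."""
    Z = PCA(n_components=n_components).fit_transform(StandardScaler().fit_transform(X))
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(Z)
    return np.linalg.norm(Z - km.cluster_centers_[km.labels_], axis=1)

# Stand-in data: rows = modem/node measurements, columns = RF metrics (SNR, power, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
scores = kmeans_anomaly_scores(X)
anomalies = np.where(scores > np.quantile(scores, 0.995))[0]   # flag the most distant points
```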
660

Modelling Credit Spread Risk with a Focus on Systematic and Idiosyncratic Risk / Modellering av Kredit Spreads Risk med Fokus på Systematisk och Idiosynkratisk Risk

Korac Dalenmark, Maximilian January 2023 (has links)
This thesis presents an application of Principal Component Analysis (PCA) and Hierarchical PCA to credit spreads. The aim is to identify the underlying factors that drive the behavior of credit spreads as well as the leftover idiosyncratic risk, which is crucial for the risk management and pricing of credit derivatives. The study employs a dataset of credit spreads from the Swedish market for different maturities and ratings, split into covered bonds and corporate bonds, and performs PCA to extract the dominant factors that explain the variation in the data of the former set. The results show that most of the systematic movements in Swedish covered bonds can be extracted using a mean which coincides with the first principal component. The report further explores the idiosyncratic risk of the credit spreads, in order to deepen the knowledge of credit spread dynamics and to improve risk management in credit portfolios, specifically with regard to new regulation in the form of the Fundamental Review of the Trading Book (FRTB). The thesis also explores a more general model for corporate bonds using HPCA and K-means clustering. Due to data issues this model is less thoroughly explored, but there are useful findings, specifically regarding the feasibility of using clustering in combination with HPCA. / I detta arbete presenteras en tillämpning av principalkomponentanalys (PCA) och hierarkisk PCA på kreditspreadar. Syftet är att identifiera de underliggande faktorer som styr kreditspreadarnas beteende samt den kvarvarande idiosynkratiska risken, vilket är avgörande för riskhantering och prissättning av diverse kreditderivat. I studien används en datamängd från den svenska marknaden med kreditspreadar för olika löptider och kreditbetyg, uppdelad på säkerställda obligationer och företagsobligationer, och PCA används för att ta fram de mest signifikanta faktorerna som förklarar variationen i data för de förstnämnda obligationerna. Resultaten visar att de flesta av de systematiska rörelserna i svenska säkerställda obligationer kan extraheras med hjälp av ett medelvärde som sammanfaller med den första principalkomponenten. I rapporten undersöks vidare den idiosynkratiska risken i kreditspreadarna för att öka kunskapen om dynamiken i kreditspreadarna och förbättra riskhanteringen i kreditportföljer, särskilt med tanke på regelverket ”Fundamental Review of the Trading Book” (FRTB). I rapporten undersöktes vidare en mer allmän modell för företagsobligationer med hjälp av HPCA och K-means-klustring. På grund av dataproblem är den mindre utforskad, men det finns användbara resultat, särskilt när det gäller möjligheten att använda klustring i kombination med HPCA.
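The reported finding that the first principal component of covered-bond spreads essentially coincides with a cross-sectional mean (a level factor) is easy to check on any spread panel; the sketch below does so on simulated data and also shows how an idiosyncratic residual can be obtained by removing the first k components. The panel itself is synthetic, and the bucket count and parameters are assumptions, not the thesis's dataset.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in panel: rows = dates, columns = covered-bond spreads per maturity/rating bucket
rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(scale=0.5, size=500))             # common level factor
spreads = 80 + level[:, None] + rng.normal(scale=1.0, size=(500, 10))

pca = PCA(n_components=3).fit(spreads)
pc1_scores = pca.transform(spreads)[:, 0]
mean_level = spreads.mean(axis=1)                              # simple cross-sectional mean

# If PC1 is essentially a level factor, its scores track the cross-sectional mean almost exactly
print(np.abs(np.corrcoef(pc1_scores, mean_level)[0, 1]))
print(pca.explained_variance_ratio_)                           # systematic vs. idiosyncratic share

# Idiosyncratic residual after removing the first k components
k = 1
reconstructed = pca.mean_ + pca.transform(spreads)[:, :k] @ pca.components_[:k]
idiosyncratic = spreads - reconstructed
```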
