261

Comparison of heuristic and machine learning algorithms for a multi-objective vehicle routing problem

Arneson, Sebastian, Borgenstierna, Mattias January 2024 (has links)
The vehicle routing problem is an optimisation problem of high computational complexity that can be solved with heuristic methods to obtain near-optimal solutions in a reasonable amount of time. This study compares the execution time and route distance of different routing engines used with VROOM, and evaluates different implementations of the k-means algorithm using the Rand index and the adjusted Rand index. The results show a difference in distance and execution time depending on which routing engine is used, while it remains unclear whether the k-means implementations differ. Investigating the cause of the observed results would be interesting future work.
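The adjusted Rand index used above to compare k-means implementations can be computed from a pair-counting contingency table. A minimal pure-Python sketch (not from the thesis; the toy labellings are invented for illustration):

```python
from collections import Counter
from math import comb

def rand_indices(labels_a, labels_b):
    """Rand index and adjusted Rand index between two clusterings."""
    n = len(labels_a)
    pairs = comb(n, 2)
    contingency = Counter(zip(labels_a, labels_b))
    rows = Counter(labels_a)
    cols = Counter(labels_b)
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in rows.values())
    sum_b = sum(comb(c, 2) for c in cols.values())
    # Rand index: fraction of point pairs the two clusterings agree on
    ri = (pairs + 2 * sum_ij - sum_a - sum_b) / pairs
    # Adjusted Rand index: corrected for chance agreement
    expected = sum_a * sum_b / pairs
    ari = (sum_ij - expected) / (0.5 * (sum_a + sum_b) - expected)
    return ri, ari

# Identical clusterings up to a relabelling score a perfect 1.0
ri, ari = rand_indices([0, 0, 1, 1], [1, 1, 0, 0])  # ri == 1.0, ari == 1.0
```

Because the index is invariant to how cluster labels are numbered, it is a natural measure for comparing two independent k-means runs.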
262

Indexation et recherche de similarités avec des descripteurs structurés par coupes d'images sur des graphes / Indexing and Searching for Similarities of Images with Structural Descriptors via Graph-cuttings Methods

Ren, Yi 20 November 2014 (has links)
Image representation is a fundamental question for several computer vision tasks. The contributions discussed in this thesis extend the basic bag-of-words representation for the tasks of object recognition and image retrieval. We are interested in image description by structural graph descriptors and propose a model, named bag-of-bags of words (BBoW), to address object recognition (for object search by similarity) and especially Content-Based Image Retrieval (CBIR) from image databases. The proposed BBoW model is an approach based on irregular pyramid partitions over the image. An image is first represented as a connected graph of local features on a regular grid of pixels, with edge weights that depend on local colour and texture features. Irregular partitions (subgraphs) of the image are then built using graph-partitioning methods, and each subgraph in the partition is represented by its own signature. With the aid of graphs, the BBoW model extends the classical bag-of-words (BoW) model by embedding colour homogeneity and limited spatial information through irregular partitions of an image. Unlike existing methods for image retrieval, such as Spatial Pyramid Matching (SPM), the BBoW model does not assume that similar parts of a scene always appear at the same location in images of the same category. Extending the model to a pyramid gives rise to a method we named Irregular Pyramid Matching (IPM).
The experiments demonstrate the strength of our approach for image retrieval when the partitions are stable across an image category, and a statistical analysis of subgraphs is carried out to define this notion of a stable partition concretely. To validate our contributions, we report results on three computer vision datasets for object recognition, (localised) content-based image retrieval, and image indexing. The experimental results on a database of 13,044 general-purpose images demonstrate the efficiency and effectiveness of the proposed BBoW framework.
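The signature step of the BBoW model — one bag-of-words histogram per irregular region — can be sketched as follows. This is an illustrative reconstruction, not the thesis code; the toy region contents and vocabulary size are invented:

```python
def bow_signature(word_ids, vocab_size):
    """L1-normalised bag-of-words histogram over a visual vocabulary."""
    hist = [0.0] * vocab_size
    for w in word_ids:
        hist[w] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def bbow_signatures(regions, vocab_size):
    """Bag-of-bags of words: one BoW signature per irregular region."""
    return [bow_signature(region, vocab_size) for region in regions]

# Two regions of one image, each a list of quantised local-feature ids
regions = [[0, 0, 2], [1, 2, 2, 2]]
sigs = bbow_signatures(regions, vocab_size=3)
```

In the full model the per-region signatures would then be combined when comparing two images, rather than pooling all features into a single global histogram as plain BoW does.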
263

Técnicas de computação natural para segmentação de imagens médicas / Natural Computing Techniques for Medical Image Segmentation

Souza, Jackson Gomes de 28 September 2009 (has links)
Image segmentation is one of the image-processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern-recognition methods applicable to medical image segmentation. Methods based on natural computing have proven very attractive for such tasks and are studied here as a way to verify their applicability to medical image segmentation. The following methods are implemented: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (a clustering algorithm based on PSO (Particle Swarm Optimization) and K-means), and PSOFCM (a clustering algorithm based on PSO and FCM (Fuzzy C-Means)).
In addition, clustering validity indexes are used as a quantitative measure to evaluate the results given by the algorithms. Visual and qualitative evaluations are also performed, mainly using data from the BrainWeb brain simulator as ground truth.
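As a hedged illustration of the clustering step underlying all four algorithms, here is a plain 1-D k-means on pixel intensities; the genetic and PSO variants replace its update rule with population-based search. The toy intensity values are invented:

```python
def kmeans_1d(values, k, iters=20):
    """Plain k-means (k >= 2) on scalar intensities; returns centroids and labels."""
    lo, hi = min(values), max(values)
    # Deterministic init: centroids spread evenly over the intensity range
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest centroid
        labels = [min(range(k), key=lambda c: abs(v - centroids[c])) for v in values]
        # Update step: recompute each centroid as the mean of its members
        for c in range(k):
            members = [v for v, lab in zip(values, labels) if lab == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Toy "image": dark background pixels vs. a bright structure
pixels = [10, 12, 11, 200, 205, 198]
centroids, labels = kmeans_1d(pixels, k=2)  # labels -> [0, 0, 0, 1, 1, 1]
```

Segmenting a real medical image would apply the same loop to every voxel intensity (or a feature vector per voxel), with a validity index comparing candidate values of k.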
264

Fúze simultánních EEG-FMRI dat za pomoci zobecněných spektrálních vzorců / Simultaneous EEG-FMRI Data Fusion with Generalized Spectral Patterns

Labounek, René January 2018 (has links)
Many different fusion strategies have been developed over the past 15 years of simultaneous EEG-fMRI research. This dissertation summarizes the current state of the art in simultaneous EEG-fMRI data fusion and aims to improve the visualization of task-evoked brain networks through blind analysis directly from the acquired data. Two models designed to achieve this are proposed (a generalized spectral heuristic model and a generalized spatial-frequency heuristic model). The generalized frequency heuristic model uses fluctuations of relative EEG power in certain frequency bands, averaged over electrodes of interest, and compares them with delayed fluctuations of BOLD signals through a general linear model. The results show that the model visualizes several distinct, frequency-dependent task-evoked EEG-fMRI networks, and that it outperforms both the absolute EEG power approach and the classical (original) heuristic approach. Absolute power visualized a task-unrelated broadband EEG-fMRI component, and the classical heuristic approach was not sensitive enough to visualize the task-related visual network observed in the relative band for the visual oddball data. For the semantic-decision EEG-fMRI data, the frequency dependence was less evident in the final results, as all bands visualized the visual network and none showed activations in the speech centres; these results were probably corrupted by the eye-blink artifact in the EEG data. Mutual information coefficients between different EEG-fMRI statistical parametric maps showed that similarities across frequency bands are comparable across tasks (visual oddball and semantic decision). Moreover, the coefficients demonstrated that averaging across different electrodes of interest brings no new information into the joint analysis, i.e. the signal at a single electrode is a heavily blurred signal from the whole scalp.
For these reasons it became necessary to incorporate electrode-space information into the EEG-fMRI analysis more effectively, so we proposed a more general spatial-frequency heuristic model, together with a way to estimate it using spatial-frequency group independent component analysis of the relative EEG power spectrum. The results show that the spatial-frequency heuristic model visualizes the statistically most significant task-related brain networks (compared with the results of absolute-power spatial-frequency patterns and of the generalized frequency heuristic model). The spatial-frequency heuristic model was the only one that detected task-related activations in the speech centres on the semantic-decision data. Beyond fusing the spatial-frequency patterns with fMRI data, we tested the stability of the spatial-frequency pattern estimates across different paradigms (visual oddball, semantic decision, and resting state) using the k-means clustering algorithm. We obtained 14 stable patterns for absolute EEG power and 12 stable patterns for relative EEG power. Although 10 of these patterns look similar across power types, the relative-power spatial-frequency patterns (i.e. the patterns of the spatial-frequency heuristic model) show stronger task evidence.
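The core of the heuristic fusion step — regressing delayed BOLD fluctuations on band-limited EEG power through a general linear model — can be sketched with a single lagged regressor fitted by ordinary least squares. This is an invented toy, not the dissertation's pipeline; the integer lag is a crude stand-in for HRF convolution and all series are made up:

```python
def lagged(series, lag):
    """Delay a regressor by `lag` samples (crude stand-in for HRF convolution)."""
    return [series[0]] * lag + series[:-lag]

def ols_fit(x, y):
    """Intercept and slope of y ~ a + b*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Invented relative alpha-band power, and a BOLD series that is a scaled,
# shifted copy of it two samples later
alpha = [0.2, 0.8, 0.3, 0.9, 0.4, 1.0, 0.5, 0.6]
regressor = lagged(alpha, 2)
bold = [2.0 * v + 1.0 for v in regressor]
a, b = ols_fit(regressor, bold)  # recovers a == 1.0, b == 2.0
```

A full GLM would stack several such band-power regressors per voxel and test the fitted coefficients for significance.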
265

Segmentace obrazu pomocí neuronové sítě / Neural Network Based Image Segmentation

Jamborová, Soňa January 2011 (has links)
This work proposes software for neural network based image segmentation. It defines the basic terms of the topic, focuses mainly on preparing image information for segmentation using a neural network, and describes and compares different approaches to image segmentation.
266

Essais sur la prévision de la défaillance bancaire : validation empirique des modèles non-paramétriques et étude des déterminants des prêts non performants / Essays on the prediction of bank failure : empirical validation of non-parametric models and study of the determinants of non-performing loans

Affes, Zeineb 05 March 2019 (has links)
The recent financial crisis that began in the United States in 2007 revealed the weaknesses of the international banking system, resulting in the collapse of many financial institutions in the United States and an increase in the share of non-performing loans on the balance sheets of European banks. In this framework, we first propose to estimate and test the effectiveness of bank-failure forecasting models. The objective is to establish an early warning system (EWS) of banking difficulties based on financial variables following the CAMEL typology (Capital adequacy, Asset quality, Management quality, Earnings ability, Liquidity). In the first study, we compared the classification and prediction of canonical discriminant analysis (CDA) and logistic regression (LR), with and without classification costs, by combining these two parametric models with the descriptive model of principal component analysis (PCA). The results show that LR and CDA can predict bank failure accurately. In addition, the PCA results show the importance of asset quality, capital adequacy, and liquidity as indicators of a bank's financial condition.
We also compared the performance of two non-parametric methods, classification and regression trees (CART) and the newer multivariate adaptive regression splines (MARS) model, in predicting failure. A hybrid model combining k-means clustering and MARS is also tested. We seek to model the relationship between ten financial variables (the CAMEL ratios) and the default of a US bank. The comparative approach highlighted the supremacy of the hybrid model in terms of classification, and the results showed that capital-adequacy variables are the most important for predicting a bank's failure. Finally, we studied the determinants of non-performing loans of European Union banks over the period 2012-2015 by estimating a fixed-effects model on panel data. Depending on data availability, we chose a set of variables referring to the macroeconomic situation of the bank's country together with bank-specific variables. The results showed that public debt, loan-loss provisions, net interest margin, and return on equity positively affect non-performing loans, while bank size and capital adequacy (EQTA and CAR) have a negative impact on bad debts.
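A hedged sketch of the logistic-regression (LR) step on CAMEL-style ratios: per-sample gradient descent on the log-loss, with the bias folded into the weight vector. The banks, ratios, learning rate, and feature choice are all invented for illustration:

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Logistic regression by per-sample gradient descent; w[0] is the bias."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))) - yi
            w[0] -= lr * err
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * err * xj
    return w

def predict(w, xi):
    """Estimated probability of failure for one bank."""
    return sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))

# Invented banks: [capital adequacy, share of bad assets]; y = 1 means failed
X = [[0.02, 0.9], [0.03, 0.8], [0.12, 0.2], [0.15, 0.1]]
y = [1, 1, 0, 0]
w = fit_logistic(X, y)
```

The fitted weights separate the thinly capitalised banks with poor asset quality from the healthy ones, mirroring the role of capital-adequacy and asset-quality variables in the study.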
267

Unsupervised Anomaly Detection and Root Cause Analysis in HFC Networks : A Clustering Approach

Forsare Källman, Povel January 2021 (has links)
Following the significant transition from the traditional production industry to an information-based economy, the telecommunications industry was faced with an explosion of innovation, resulting in a continuous change in user behaviour. The industry has made efforts to adapt to a more data-driven future, which has given rise to larger and more complex systems. Troubleshooting systems such as anomaly detection and root cause analysis are therefore essential features for maintaining service quality and facilitating daily operations. This study aims to explore the possibilities, benefits, and drawbacks of implementing cluster analysis for anomaly detection in hybrid fibre-coaxial networks. Based on the literature review on unsupervised anomaly detection and an assumption regarding the anomalous behaviour in hybrid fibre-coaxial network data, k-means, the Self-Organizing Map, and the Gaussian Mixture Model were implemented both with and without Principal Component Analysis. Analysis of the results demonstrated an increase in performance for all models when Principal Component Analysis was applied, with k-means outperforming both the Self-Organizing Map and the Gaussian Mixture Model. On this basis, applying Principal Component Analysis for clustering-based anomaly detection is recommended. Further research is necessary to identify whether cluster analysis is the most appropriate unsupervised anomaly detection approach.
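A minimal sketch of clustering-based anomaly flagging as described above (PCA preprocessing omitted): a point is anomalous when it lies far from every cluster centroid. Centroids, points, and the threshold are invented:

```python
def euclid(p, q):
    """Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def flag_anomalies(points, centroids, threshold):
    """A point is anomalous if it is far from every cluster centroid."""
    return [min(euclid(p, c) for c in centroids) > threshold for p in points]

# Two dense operating modes of the network plus one stray measurement
centroids = [(0.0, 0.0), (10.0, 10.0)]
points = [(0.1, -0.2), (9.8, 10.1), (5.0, 5.0)]
flags = flag_anomalies(points, centroids, threshold=2.0)  # [False, False, True]
```

In a pipeline such as the one studied, the centroids would come from k-means fitted on PCA-reduced telemetry, and the threshold from the distribution of training distances.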
268

Modelling Credit Spread Risk with a Focus on Systematic and Idiosyncratic Risk / Modellering av Kredit Spreads Risk med Fokus på Systematisk och Idiosynkratisk Risk

Korac Dalenmark, Maximilian January 2023 (has links)
This thesis presents an application of Principal Component Analysis (PCA) and hierarchical PCA (HPCA) to credit spreads. The aim is to identify the underlying factors that drive the behaviour of credit spreads as well as the leftover idiosyncratic risk, which is crucial for risk management and for pricing credit derivatives. The study employs a dataset of credit spreads from the Swedish market for different maturities and ratings, split into covered bonds and corporate bonds, and performs PCA to extract the dominant factors that explain the variation in the former set. The results show that most of the systematic movements in Swedish covered bonds can be extracted using a mean which coincides with the first principal component. The report further explores the idiosyncratic risk of the credit spreads to deepen the understanding of credit-spread dynamics and to improve risk management in credit portfolios, specifically with regard to new regulation in the form of the Fundamental Review of the Trading Book (FRTB). The thesis also explores a more general model of corporate bonds using HPCA and k-means clustering. Owing to data issues this model is less thoroughly explored, but there are useful findings, specifically regarding the feasibility of using clustering in combination with HPCA.
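The observation that the first principal component of covered-bond spreads coincides with a cross-sectional mean can be illustrated with a small power-iteration PCA: when series co-move strongly, PC1 loads near-equally on all of them. The toy spread matrix below is invented:

```python
def pc1_loadings(X, iters=200):
    """First principal component (loadings) of data matrix X via power iteration."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    # Sample covariance matrix of the centred columns
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Three invented spread series (columns) that co-move perfectly: PC1 loads
# equally on all of them, so PC1 scores track the cross-sectional mean
X = [[1.0, 1.1, 0.9], [2.0, 2.1, 1.9], [3.0, 3.1, 2.9], [1.5, 1.6, 1.4]]
v = pc1_loadings(X)  # each loading is close to 1/sqrt(3)
```

Equal loadings mean the PC1 score of each observation is proportional to the average of its spreads, which is the equivalence the thesis reports for Swedish covered bonds.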
269

Classification of Carpiodes Using Fourier Descriptors: A Content Based Image Retrieval Approach

Trahan, Patrick 06 August 2009 (has links)
Taxonomic classification has always been important to the study of any biological system. At the current rate of classification, many biological species will go unclassified and be lost forever. The current state of computer technology makes image storage and retrieval possible on a global level; as a result, computer-aided taxonomy is now possible. Content-based image retrieval techniques utilize visual features of the image for classification. By utilizing image content and computer technology, the gap between taxonomic classification and species destruction is shrinking. This content-based study utilizes the Fourier descriptors of fifteen known landmark features on three Carpiodes species: C. carpio, C. velifer, and C. cyprinus. The classification analysis involves both unsupervised and supervised machine learning algorithms. The Fourier descriptors of the fifteen known landmarks provide strong classification power on image data, and feature-reduction analysis indicates that feature reduction is possible, which proves useful for increasing the generalization power of classification.
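A hedged sketch of the Fourier-descriptor computation on a closed landmark contour (illustrative only; the square contour is invented and the thesis's fifteen landmarks are not reproduced):

```python
import cmath

def fourier_descriptors(contour, n_desc):
    """First n_desc magnitude Fourier descriptors of a closed contour.

    Landmark points (x, y) are treated as complex numbers; taking magnitudes
    and skipping the DC term makes the descriptor invariant to the starting
    phase and to translation.
    """
    z = [complex(x, y) for x, y in contour]
    N = len(z)
    coeffs = [sum(z[k] * cmath.exp(-2j * cmath.pi * u * k / N) for k in range(N)) / N
              for u in range(N)]
    return [abs(c) for c in coeffs[1:n_desc + 1]]

# A square traced counter-clockwise stands in for a fish-outline contour
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
desc = fourier_descriptors(square, n_desc=3)  # [sqrt(2), 0, 0] up to rounding
```

Dividing all magnitudes by the first one would additionally give scale invariance, which is why truncated descriptor vectors work well as shape features for classifiers.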
270

Shluková analýza rozsáhlých souborů dat: nové postupy založené na metodě k-průměrů / Cluster analysis of large data sets: new procedures based on the method k-means

Žambochová, Marta January 2005 (has links)
Cluster analysis has become one of the main tools used in extracting knowledge from data, which is known as data mining. In this area of data analysis, data of large dimensions are often processed, both in the number of objects and in the number of variables that characterize the objects. Many methods for data clustering have been developed. One of the most widely used is the k-means method, which is suitable for clustering data sets containing a large number of objects. It is based on finding the best clustering relative to the initial distribution of objects into clusters, followed by step-by-step redistribution of objects among the clusters according to the optimization function. The aim of this Ph.D. thesis was a comparison of selected variants of existing k-means methods, a detailed characterization of their positive and negative characteristics, new alternatives of this method, and experimental comparisons with existing approaches. These objectives were met. I focused on modifications of the k-means method for clustering large numbers of objects, specifically on the BIRCH k-means, filtering, k-means++, and two-phase algorithms. I examined the time complexity of the algorithms, the effect of the initial distribution and of outliers, and the validity of the resulting clusters. Two real data files and several generated data sets were used. The common and distinct features of the investigated methods are summarized at the end of the work. The main aim and benefit of the work is to devise my own modifications that resolve the bottlenecks of the basic procedure and of the existing variants, and to program and verify them. Some modifications accelerated the processing, and applying the main ideas of the k-means++ algorithm to other variants of the k-means method improved the clustering results.
The most significant of the proposed changes is a modification of the filtering algorithm that brings an entirely new feature to the algorithm, namely the detection of outliers. The accompanying CD is enclosed. It includes the source code of programs written in the MATLAB development environment. The programs were created specifically for the purpose of this work and are intended for experimental use. The CD also contains the data files used for the various experiments.
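The k-means++ idea the thesis builds on — drawing each new centre with probability proportional to its squared distance from the nearest existing centre — can be sketched as follows. The 1-D points and seed are invented for illustration:

```python
import random

def kmeanspp_init(points, k, rng):
    """k-means++ seeding: each new centre is drawn with probability
    proportional to its squared distance from the nearest centre so far."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((p - c) ** 2 for c in centers) for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

# Two well-separated 1-D groups: the second seed almost surely lands
# in whichever group the first seed missed
rng = random.Random(0)
pts = [0.0, 0.1, 0.2, 100.0, 100.1]
centers = kmeanspp_init(pts, k=2, rng=rng)
```

Compared with uniform random initialization, this seeding makes it far less likely that two initial centroids fall into the same dense group, which is the property that improved the other k-means variants in the thesis.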
