  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
111

Modelling Proxy Credit Curves Using Recurrent Neural Networks / Modellering av Proxykreditkurvor med Rekursiva Neurala Nätverk

Fageräng, Lucas, Thoursie, Hugo January 2023 (has links)
Since the global financial crisis of 2008, regulatory bodies worldwide have implemented increasingly stringent requirements for measuring and pricing default risk in financial derivatives. Counterparty Credit Risk (CCR) serves as the measure of default risk in financial derivatives, and Credit Valuation Adjustment (CVA) is the pricing method used to incorporate this default risk into derivatives prices. Calculating the CVA requires the risk-neutral Probability of Default (PD) of the counterparty, which is central to this type of derivative. The traditional method for calculating risk-neutral probabilities of default involves constructing credit curves, calibrated using the credit derivative Credit Default Swap (CDS). However, liquidity issues in CDS trading present a major challenge, as the majority of counterparties lack liquid CDS spreads. This poses the difficult question of how to model risk-neutral PD without liquid CDS spreads. The current method for generating proxy credit curves, introduced by the Japanese bank Nomura in 2013, uses a cross-sectional linear regression model. Although this model is sufficient in most cases, it often generates credit curves unsuitable for larger counterparties in more volatile times. In this thesis, we introduce two Long Short-Term Memory (LSTM) models trained on similar entities, which use CDS spreads as input. Our models show some improvement in generating proxy credit curves compared to the Nomura model, especially during times of higher volatility. While the results were more in line with the traded CDS market, there remains room for improvement in the model structure by using a more extensive dataset. / (Translated from the Swedish abstract:) Ever since the 2008 financial crisis, financial regulators have increased the requirements for measuring and pricing default risk in derivatives. An area of particular interest for this work is Counterparty Credit Risk (CCR). Here, Credit Valuation Adjustment (CVA) is the main method for pricing default risk in financial derivatives, and obtaining a CVA value requires a risk-neutral probability of default (PD). One traditional method for computing this probability is to construct credit curves calibrated from CDSs. This traded derivative exists only for a smaller number of companies worldwide, so a majority of the market lacks a sufficiently traded CDS. The solution is to derive a proxy CDS from a comparable company. Today this is done primarily with the cross-sectional regression model introduced in 2013 by the Japanese bank Nomura. It produces reasonable curves in many cases, but one of its problems is that it tends to make the proxy lower than it should be. In this thesis we instead introduce an LSTM model trained on similar companies. The result is a model that in many cases produces a better proxy curve, but which partly shares the shortcomings of the Nomura model. With continued research in the area and more data, this could yield a more exact and reliable proxy model.
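The link between CDS spreads and the risk-neutral PD that the abstract builds on is often illustrated with the standard "credit triangle" shorthand, which backs a constant hazard rate out of a flat spread. The sketch below shows that generic relationship only; it is not the calibration used in the thesis, and the 40% recovery rate and flat-spread simplification are illustrative assumptions.

```python
import math

def hazard_from_spread(spread_bps: float, recovery: float = 0.4) -> float:
    """Credit-triangle approximation: constant hazard rate implied by a
    flat CDS spread (in basis points) and an assumed recovery rate."""
    return (spread_bps / 1e4) / (1.0 - recovery)

def survival_prob(spread_bps: float, t_years: float, recovery: float = 0.4) -> float:
    """Risk-neutral survival probability under a constant hazard rate."""
    return math.exp(-hazard_from_spread(spread_bps, recovery) * t_years)

def default_prob(spread_bps: float, t_years: float, recovery: float = 0.4) -> float:
    """Risk-neutral PD over the first t_years, the quantity a CVA calculation needs."""
    return 1.0 - survival_prob(spread_bps, t_years, recovery)

# e.g. a flat 100 bp spread with 40% recovery implies roughly an 8% 5-year PD
pd_5y = default_prob(100.0, 5.0)
```

A proxy credit curve, whether from the Nomura regression or an LSTM, supplies the spread input for counterparties that lack a liquid CDS of their own.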
112

Indoor scene verification : Evaluation of indoor scene representations for the purpose of location verification / Verifiering av inomhusbilder : Bedömning av en inomhusbilder framställda i syfte att genomföra platsverifiering

Finfando, Filip January 2020 (has links)
When the human visual system looks at two pictures taken in some indoor location, it is fairly easy to tell whether they were taken in exactly the same place, even when the location has never been visited in reality. This is possible because we can attend to multiple factors such as spatial properties (window shape, room shape), common patterns (floor, walls) or the presence of specific objects (furniture, lighting). Changes in camera pose, illumination, furniture location or digital alteration of the image (e.g. watermarks) have little influence on this ability. Traditional approaches to measuring the perceptual similarity of images struggle to reproduce this skill. This thesis defines the Indoor Scene Verification (ISV) problem as distinguishing whether two indoor scene images were taken in the same indoor space or not. It explores the capabilities of state-of-the-art perceptual similarity metrics by introducing two new datasets designed specifically for this problem. Perceptual hashing, ORB, FaceNet and NetVLAD are evaluated as the baseline candidates. The results show that NetVLAD provides the best results on both datasets and is therefore chosen as the baseline for the experiments aiming to improve it. Three such experiments are carried out, testing the impact of using a different training dataset, changing the deep neural network architecture and introducing a new loss function. Quantitative analysis of the AUC score shows that switching from VGG16 to MobileNetV2 allows for improvement over the baseline. / (Translated from the Swedish abstract:) With human vision it is fairly easy to judge whether two pictures taken in the same indoor space really were taken in exactly the same place, even if one has never been there. This is possible thanks to many factors, such as spatial properties (window shapes, room shapes), common patterns (floors, walls) or the presence of particular objects (furniture, lighting). Changes in camera placement, lighting, furniture placement or digital alteration of the image (e.g. watermarks) affect this ability minimally. Traditional methods for measuring the perceptual similarity of images have had difficulty reproducing this skill. This thesis defines Indoor Scene Verification (ISV) as the task of determining whether two indoor images were taken in the same space or not. The study examines the leading perceptual similarity functions by introducing two new datasets designed specifically for this purpose. Perceptual hashing, ORB, FaceNet and NetVLAD were identified as potential baselines. The results show that NetVLAD delivers the best results on both datasets, whereupon it was chosen as the baseline for the experiments aiming to improve it. Three experiments examine the impact of using different datasets, changing the neural network architecture and introducing a new loss function. Quantitative analysis of the AUC score shows that switching from VGG16 to MobileNetV2 allows improvement over the baseline.
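The AUC comparison the abstract relies on reduces to ranking embedding similarities of same-scene pairs against different-scene pairs. Below is a minimal sketch of that evaluation, with cosine similarity standing in for whichever embedding distance the thesis actually uses; the pairwise AUC formula is the standard rank statistic, not code from the thesis.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two embedding vectors (e.g. NetVLAD descriptors)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def auc_score(scores_pos, scores_neg) -> float:
    """Probability that a random same-scene pair scores above a random
    different-scene pair; ties count half (the rank-sum form of AUC)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation of same-scene from different-scene similarity scores yields an AUC of 1.0; a metric no better than chance yields 0.5.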
113

Supervised Speech Separation And Processing

Han, Kun January 2014 (has links)
No description available.
114

Supervised Speech Separation Using Deep Neural Networks

Wang, Yuxuan 21 May 2015 (has links)
No description available.
115

On Generalization of Supervised Speech Separation

Chen, Jitong 30 August 2017 (has links)
No description available.
116

Efficient Continual Learning in Deep Neural Networks

Gobinda Saha (18512919) 07 May 2024 (has links)
<p dir="ltr">Humans exhibit a remarkable ability to continually adapt and learn new tasks throughout their lifetime while maintaining the knowledge gained from past experiences. In stark contrast, artificial neural networks (ANNs) under such a continual learning (CL) paradigm forget the information learned in past tasks upon learning new ones. This phenomenon is known as ‘Catastrophic Forgetting’ or ‘Catastrophic Interference’. The objective of this thesis is to enable efficient continual learning in deep neural networks while mitigating this forgetting. Towards this, first, a continual learning algorithm (SPACE) is proposed in which a subset of network filters or neurons is allocated to each task using Principal Component Analysis (PCA). Such task-specific network isolation not only ensures zero forgetting but also creates structured sparsity in the network, which enables energy-efficient inference. Second, a faster and more efficient training algorithm for CL is proposed by introducing Gradient Projection Memory (GPM). Here, the most important gradient spaces (the GPM) for each task are computed using Singular Value Decomposition (SVD), and new tasks are learned in directions orthogonal to the GPM to minimize forgetting. Third, to improve new learning while minimizing forgetting, a Scaled Gradient Projection (SGP) method is proposed that, in addition to orthogonal gradient updates, allows scaled updates along the important gradient spaces of past tasks. Next, for continual learning on an online stream of tasks, a memory-efficient experience replay method is proposed. This method uses saliency maps explaining the network’s decisions to select the memories that are replayed during new tasks to prevent forgetting. Finally, a meta-learning-based continual learner, Amphibian, is proposed that achieves fast online continual learning without any experience replay. All the algorithms are evaluated on short and long sequences of tasks from standard image-classification datasets. Overall, the methods proposed in this thesis address critical limitations of DNNs for continual learning and advance the state of the art in this domain.</p>
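The Gradient Projection Memory idea summarized above, keeping an orthonormal basis of important gradient directions per task and updating new tasks orthogonally to it, can be sketched in a few lines. This is a generic reconstruction from the abstract's description (SVD, energy-based truncation, orthogonal projection), not the author's implementation; the 0.95 energy threshold is an assumed value.

```python
import numpy as np

def top_subspace(grad_matrix: np.ndarray, energy: float = 0.95) -> np.ndarray:
    """Orthonormal basis (columns) of the most important gradient directions,
    keeping enough left singular vectors to capture `energy` of the squared
    singular-value spectrum."""
    U, S, _ = np.linalg.svd(grad_matrix, full_matrices=False)
    cum = np.cumsum(S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(cum, energy)) + 1
    return U[:, :k]

def project_orthogonal(grad: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Gradient update for a new task: subtract the component lying in the
    memory of past tasks, g - B (B^T g), so past solutions are preserved."""
    return grad - basis @ (basis.T @ grad)
```

SGP would additionally allow a scaled (rather than zero) step along the retained basis directions; that extension is omitted here.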
117

Characterization and Optimization of Perception Deep Neural Networks on the Edge for Connected Autonomous Vehicles

Tang, Sihai 05 1900 (has links)
This dissertation presents novel approaches to optimizing convolutional neural network (CNN) architectures for connected autonomous vehicle (CAV) workloads on the edge, tailored to surmount the challenges inherent in cooperative perception under the stringent resource constraints of edge devices (an endpoint on the network, the interface between the data center and the real world). Employing a modular methodology, this research uses the insights from a granular examination of CAV perception workloads on edge platforms, identifying and analyzing critical bottlenecks. Through memory contention-aware neural architecture search (NAS), coupled with multi-objective optimization (MOO) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II), this work dynamically optimizes CNN architectures, focusing on reducing memory cost and on layer configuration and parameter optimization to meet set hardware constraints while maintaining a target precision. The results of this exploration are significant, achieving a 63% reduction in memory usage while maintaining a precision rate above 80% for CAV-relevant object classes. This dissertation makes novel contributions to the field of edge computing in CAVs, offering a scalable and automated pipeline framework for dynamically obtaining an optimized model for given constraints, thus enabling CAV workloads on the edge. It also opens multiple avenues for future integration: the modular aspect of the pipeline allows security, privacy, scalability and energy constraints to be added natively. Through detailed layer-by-layer analysis and refinement, the framework can ensure that CAVs fully utilize any suitable edge device for a requested workload, helping realize autonomous driving for everyone.
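The NSGA-II-based search described above rests on Pareto dominance between candidate architectures, for instance trading memory cost against classification error. Below is a minimal sketch of the dominance test and Pareto-front filter at the heart of any such multi-objective search; the candidate tuples are illustrative, not results from the dissertation.

```python
def dominates(a, b):
    """True when candidate a is no worse than b in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Non-dominated set, e.g. over (memory_mb, error) tuples."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Hypothetical (memory_mb, error) pairs: the last is dominated by (80, 0.15).
archs = [(100, 0.10), (80, 0.15), (120, 0.08), (90, 0.20)]
front = pareto_front(archs)
```

NSGA-II layers non-dominated sorting and crowding-distance selection on top of this dominance relation to evolve the candidate population toward the front.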
118

From Historical Newspapers to Machine-Readable Data: The Origami OCR Pipeline

Liebl, Bernhard, Burghardt, Manuel 20 June 2024 (has links)
While historical newspapers have recently gained a lot of attention in the digital humanities, transforming them into machine-readable data by means of OCR poses some major challenges. To address these challenges, we have developed an end-to-end OCR pipeline named Origami. This pipeline is part of a current project on the digitization and quantitative analysis of the German newspaper “Berliner Börsen-Zeitung” (BBZ) from 1872 to 1931. The Origami pipeline reuses existing open-source OCR components and, on top of these, offers a new configurable architecture for layout detection, simple table recognition, a two-stage X-Y cut for reading-order detection, and a new robust implementation of document dewarping. In this paper we describe the different stages of the workflow and discuss how they meet the above-mentioned challenges posed by historical newspapers.
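The X-Y cut mentioned for reading-order detection recursively splits the page at the widest whitespace gap between block bounding boxes, alternating between vertical and horizontal cuts. The following is a generic single-pass sketch of the idea, not Origami's two-stage implementation; the `min_gap` threshold is an assumed parameter.

```python
def xy_cut(boxes, axis=1, min_gap=10):
    """Boxes are (x0, y0, x1, y1). Recursively split at the widest whitespace
    gap, alternating between vertical (axis=1, cut on y) and horizontal
    (axis=0, cut on x) cuts; returns the boxes in reading order."""
    if len(boxes) <= 1:
        return list(boxes)
    lo, hi = axis, axis + 2
    spans = sorted((b[lo], b[hi]) for b in boxes)
    gaps, end = [], spans[0][1]
    for s, e in spans[1:]:          # merge intervals, record gaps between them
        if s > end:
            gaps.append((s - end, end))
        end = max(end, e)
    widest = max(gaps, default=None)
    if widest is None or widest[0] < min_gap:
        if axis == 1:               # no usable cut on this axis: try the other
            return xy_cut(boxes, axis=0, min_gap=min_gap)
        return sorted(boxes, key=lambda b: (b[1], b[0]))
    cut = widest[1]
    first = [b for b in boxes if b[hi] <= cut]
    rest = [b for b in boxes if b[hi] > cut]
    return xy_cut(first, 1 - axis, min_gap) + xy_cut(rest, 1 - axis, min_gap)
```

On a two-column, two-row page layout this yields the expected row-major reading order within each recursive region.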
119

Multimodal Deep Learning for Multi-Label Classification and Ranking Problems

Dubey, Abhishek January 2015 (has links) (PDF)
In recent years, deep neural network models have been shown to outperform many state-of-the-art algorithms. The reason is that unsupervised pretraining with multi-layered deep neural networks has been shown to learn better features, which further improves many supervised tasks. These models not only automate the feature extraction process but also provide robust features for various machine learning tasks. But unsupervised pretraining and feature extraction using multi-layered networks are restricted to the input features and not applied to the output. The performance of many supervised learning algorithms (or models) depends on how well the output dependencies are handled by these algorithms [Dembczyński et al., 2012]. Adapting standard neural networks to handle these output dependencies for any specific type of problem has been an active area of research [Zhang and Zhou, 2006, Ribeiro et al., 2012]. On the other hand, inference on multimodal data is considered a difficult problem in machine learning, and recently ‘deep multimodal neural networks’ have shown significant results [Ngiam et al., 2011, Srivastava and Salakhutdinov, 2012]. Several problems, such as classification with complete or missing modality data and generating the missing modality, have been shown to perform very well with these models. In this work, we consider three nontrivial supervised learning tasks: (i) multi-class classification (MCC), (ii) multi-label classification (MLC) and (iii) label ranking (LR), listed in order of increasing output complexity. While multi-class classification deals with predicting one class for every instance, multi-label classification deals with predicting more than one class for every instance, and label ranking deals with assigning a rank to each label for every instance. Most work in this field revolves around formulating new error functions that force the network to identify the output dependencies. The aim of our work is to adapt neural networks to implicitly handle the feature extraction (dependencies) for the output within the network structure, removing the need for hand-crafted error functions. We show that multimodal deep architectures can be adapted for these types of problems (or data) by considering labels as one of the modalities. This also brings unsupervised pretraining to the output along with the input. We show that these models not only outperform standard deep neural networks, but also outperform standard adaptations of neural networks for individual domains under various metrics over the several datasets we considered. We observe that the advantage of our models over other models grows as the complexity of the output/problem increases.
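The central idea, treating the label vector as just another modality with its own encoder fused in a joint layer, can be sketched as a forward pass. This is an illustrative reconstruction with made-up layer sizes and random untrained weights, not the thesis's architecture; at inference time the label modality is simply zeroed out, as in common multimodal reconstruction setups.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_out):
    return rng.normal(0.0, 0.1, size=(n_in, n_out))

# Hypothetical sizes: 20 input features, 5 labels, modality encoders of
# width 8 each, joint layer of width 6 (all chosen for illustration only).
W_x, W_y, W_joint = init(20, 8), init(5, 8), init(16, 6)

def relu(z):
    return np.maximum(0.0, z)

def joint_representation(x, y):
    """Encode each modality separately, then fuse: the label vector y is
    treated exactly like a second input modality."""
    hx, hy = relu(x @ W_x), relu(y @ W_y)
    return relu(np.concatenate([hx, hy]) @ W_joint)

def infer_missing_labels(x):
    """At test time the label modality is missing: encode with zeros and
    let the joint representation stand in for it."""
    return joint_representation(x, np.zeros(5))
```

Unsupervised pretraining of both encoders and the joint layer is what extends feature learning to the output side in this setup.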
120

Sequential modeling, generative recurrent neural networks, and their applications to audio

Mehri, Soroush 12 1900 (has links)
No description available.
