  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
181

Odstraňování šumu v obraze pomocí metod hlubokého učení / Removing noise in images using deep learning methods

Strejček, Jakub January 2021 (has links)
This thesis compares deep-learning denoising methods and their implementations. In recent years it has become clear that training convolutional neural networks for denoising does not always require paired data (noisy and clean images); in certain cases noisy images alone are sufficient. The methods described in this thesis can effectively remove, for example, additive Gaussian noise, and can achieve better results than the statistical methods commonly used for denoising today.
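The statistical baselines the thesis compares against can be as simple as local averaging. A minimal NumPy sketch (synthetic image and hypothetical noise level, not from the thesis) shows the basic setup: additive Gaussian noise and the PSNR metric usually used to judge denoisers:

```python
import numpy as np

def psnr(clean, noisy, peak=1.0):
    # Peak signal-to-noise ratio in dB, a standard denoising metric.
    mse = np.mean((clean - noisy) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth synthetic "image"
noisy = clean + rng.normal(0, 0.1, clean.shape)   # additive Gaussian noise

def box_filter3(img):
    # A simple statistical baseline: 3x3 mean filter with edge padding.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + img.shape[0],
                          1 + dx:1 + dx + img.shape[1]]
    return out / 9.0

denoised = box_filter3(noisy)
print(round(psnr(clean, noisy), 1), round(psnr(clean, denoised), 1))
```

Learned denoisers are evaluated the same way; the thesis's point is that they can beat such filters even when trained on noisy images only.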
182

Numerické metody zpracování obrazů z mikroskopu s rotujícím kondenzorem. / Numerical image processing methods for rotating condenser microscope images processing

Týč, Matěj January 2008 (has links)
This thesis describes image formation in a transmitted-light microscope with a variable condenser aperture. This imaging method is advantageous when observing thick specimens, i.e. objects whose height is significant at the microscopic scale. Classical transmission microscopes are not suitable for this research because the image they produce contains noticeable information from a large volume of the specimen outside the region under examination. This problem was solved by the invention of the confocal microscope, which is, however, far more expensive and has some additional disadvantages. The aim is to process images acquired with a rotating condenser aperture so that the share of unwanted information is reduced and the image becomes "cleaner". This processing method could not be used in the past because computers lacked the necessary memory and performance. The method can be viewed as converting a set of outputs from a microscope with an enhanced illumination system into the corresponding set of confocal-microscope outputs.
183

Doporučení optimálního mířicího bodu při střelbě na terč / Looking for an optimal aiming point in playing darts

Mareček, Petr January 2017 (has links)
The thesis deals with recommending the optimal aiming point on a dartboard. The first, theoretical part covers probability, random variables and their distributions, including the most important probability distributions, and closes with a description of the Fourier transform. The second part deals with the design and implementation of a web application that applies the theoretical part to determine the optimal aiming point on the dartboard.
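The idea such an application implements — convolve the board's score map with the player's Gaussian throw distribution via the Fourier transform, then aim where the expected score peaks — can be sketched in one dimension with a hypothetical score layout:

```python
import numpy as np

# Toy 1-D "board": a narrow high-value band (think treble) next to empty
# space, plus a wide moderate band. Values and positions are hypothetical.
x = np.linspace(-1, 1, 401)
score = np.zeros_like(x)
score[(x > 0.55) & (x < 0.65)] = 60.0   # narrow, high-scoring band
score[(x > -0.35) & (x < 0.05)] = 25.0  # wide, safer band

def expected_score(sigma):
    # Expected score at every aiming point: the score map convolved with
    # the Gaussian throw distribution. The convolution is done via FFT
    # (circular, i.e. periodic boundary -- acceptable for this toy example).
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    return np.real(np.fft.ifft(np.fft.fft(score) * np.fft.fft(np.fft.ifftshift(g))))

precise = expected_score(0.02)  # accurate player
sloppy = expected_score(0.30)   # inaccurate player
print(x[np.argmax(precise)], x[np.argmax(sloppy)])
```

For the accurate player the optimum sits on the narrow high-value band; for the inaccurate player it shifts to the wide moderate band, which is exactly the kind of recommendation the web application produces.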
184

RMNv2: Reduced Mobilenet V2 An Efficient Lightweight Model for Hardware Deployment

Ayi, Maneesh (8735112) 22 April 2020 (has links)
Humans can see and differentiate objects easily, but for computers it is not that easy. Computer vision is an interdisciplinary field that allows computers to comprehend digital videos and images and to differentiate objects. With the introduction of CNNs/DNNs, computer vision is used extensively in applications such as ADAS, robotics, and autonomous systems. This thesis proposes an architecture, RMNv2, that is well suited to such computer-vision applications.

RMNv2 is a modified version of its parent architecture, MobileNetV2. The changes include disabled downsampling layers, heterogeneous kernel-based convolutions, the mish activation, and auto-augmentation. The proposed model, trained from scratch on the CIFAR-10 dataset, reaches an accuracy of 92.4% with 1.06M parameters in total. Its model size of 4.3 MB is a 52.2% decrease from the original implementation. Thanks to its small size and competitive accuracy, the proposed model can easily be deployed on resource-constrained devices such as mobile and embedded hardware for applications like ADAS. The model was also implemented on real-time embedded devices, the NXP BlueBox 2.0 and the NXP i.MX RT1060, for image classification tasks.
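Of the RMNv2 modifications listed, the mish activation has a closed form that is easy to sketch (the sample inputs below are just illustrative):

```python
import numpy as np

def mish(x):
    # Mish activation: x * tanh(softplus(x)).
    # softplus written via logaddexp for numerical stability at large |x|.
    return x * np.tanh(np.logaddexp(0.0, x))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(mish(x))
```

Unlike ReLU, mish is smooth and lets small negative values through, which is one reason it is a popular drop-in replacement in compact CNNs.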
185

Regularization schemes for transfer learning with convolutional networks / Stratégies de régularisation pour l'apprentissage par transfert des réseaux de neurones à convolution

Li, Xuhong 10 September 2019 (has links)
L’apprentissage par transfert de réseaux profonds réduit considérablement les coûts en temps de calcul et en données du processus d’entraînement des réseaux et améliore largement les performances de la tâche cible par rapport à l’apprentissage à partir de zéro. Cependant, l’apprentissage par transfert d’un réseau profond peut provoquer un oubli des connaissances acquises lors de l’apprentissage de la tâche source. Puisque l’efficacité de l’apprentissage par transfert vient des connaissances acquises sur la tâche source, ces connaissances doivent être préservées pendant le transfert. Cette thèse résout ce problème d’oubli en proposant deux schémas de régularisation préservant les connaissances pendant l’apprentissage par transfert. Nous examinons d’abord plusieurs formes de régularisation des paramètres qui favorisent toutes explicitement la similarité de la solution finale avec le modèle initial, par exemple, L1, L2, et Group-Lasso. Nous proposons également les variantes qui utilisent l’information de Fisher comme métrique pour mesurer l’importance des paramètres. Nous validons ces approches de régularisation des paramètres sur différentes tâches de segmentation sémantique d’image ou de calcul de flot optique. Le second schéma de régularisation est basé sur la théorie du transport optimal qui permet d’estimer la dissimilarité entre deux distributions. Nous nous appuyons sur la théorie du transport optimal pour pénaliser les déviations des représentations de haut niveau entre la tâche source et la tâche cible, avec le même objectif de préserver les connaissances pendant l’apprentissage par transfert. Au prix d’une légère augmentation du temps de calcul pendant l’apprentissage, cette nouvelle approche de régularisation améliore les performances des tâches cibles et offre une plus grande précision dans les tâches de classification d’images par rapport aux approches de régularisation des paramètres. 
/ Transfer learning with deep convolutional neural networks significantly reduces the computation and data overhead of the training process and boosts performance on the target task, compared to training from scratch. However, transfer learning with a deep network may cause the model to forget the knowledge acquired when learning the source task, leading to so-called catastrophic forgetting. Since the efficiency of transfer learning derives from the knowledge acquired on the source task, this knowledge should be preserved during transfer. This thesis addresses the forgetting problem by proposing two regularization schemes that preserve knowledge during transfer. First, we investigate several forms of parameter regularization, all of which explicitly promote the similarity of the final solution to the initial model, based on the L1, L2, and Group-Lasso penalties. We also propose variants that use Fisher information as a metric for measuring the importance of parameters. We validate these parameter-regularization approaches on various tasks. The second regularization scheme is based on the theory of optimal transport, which makes it possible to estimate the dissimilarity between two distributions. We rely on optimal transport to penalize deviations of the high-level representations between the source and target tasks, with the same objective of preserving knowledge during transfer learning. At the cost of a mild increase in computation time during training, this novel regularization approach improves the performance of the target tasks and yields higher accuracy on image-classification tasks than the parameter-regularization approaches.
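The parameter-regularization idea — penalize distance to the pre-trained starting point rather than to zero, as plain weight decay does — can be sketched as follows (the weights and the `alpha` strength are hypothetical, not values from the thesis):

```python
import numpy as np

def l2_sp_penalty(weights, start_point, alpha=0.1):
    # L2-style "starting point" regularizer: penalize the squared distance
    # of the current weights from the pre-trained weights, so fine-tuning
    # stays close to the source-task solution instead of drifting to zero.
    return alpha * sum(np.sum((w - w0) ** 2)
                       for w, w0 in zip(weights, start_point))

w0 = [np.ones((2, 2))]        # hypothetical pre-trained weights
w = [np.ones((2, 2)) * 1.5]   # weights after some fine-tuning steps
print(l2_sp_penalty(w, w0))   # 0.1 * (4 * 0.25) = 0.1
```

The Fisher-weighted variants mentioned above replace the uniform squared distance with one scaled per parameter by its estimated importance on the source task.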
186

Stochastic Invariance and Stochastic Volterra Equations / Invariance stochastique et équations stochastiques de Volterra

Abi Jaber, Eduardo 18 October 2018 (has links)
La présente thèse traite de la théorie des équations stochastiques en dimension finie. Dans la première partie, nous dérivons des conditions géométriques nécessaires et suffisantes sur les coefficients d’une équation différentielle stochastique pour l’existence d’une solution contrainte à rester dans un domaine fermé, sous de faibles conditions de régularité sur les coefficients.Dans la seconde partie, nous abordons des problèmes d’existence et d’unicité d’équations de Volterra stochastiques de type convolutif. Ces équations sont en général non-Markoviennes. Nous établissons leur correspondance avec des équations en dimension infinie ce qui nous permet de les approximer par des équations différentielles stochastiques Markoviennes en dimension finie.Enfin, nous illustrons nos résultats par une application en finance mathématique, à savoir la modélisation de la volatilité rugueuse. En particulier, nous proposons un modèle à volatilité stochastique assurant un bon compromis entre flexibilité et tractabilité. / The present thesis deals with the theory of finite dimensional stochastic equations.In the first part, we derive necessary and sufficient geometric conditions on the coefficients of a stochastic differential equation for the existence of a constrained solution, under weak regularity on the coefficients. In the second part, we tackle existence and uniqueness problems of stochastic Volterra equations of convolution type. These equations are in general non-Markovian. We establish their correspondence with infinite dimensional equations which allows us to approximate them by finite dimensional stochastic differential equations of Markovian type. Finally, we illustrate our findings with an application to mathematical finance, namely rough volatility modeling. We design a stochastic volatility model with an appealing trade-off between flexibility and tractability.
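Stochastic Volterra equations of convolution type take, in the scalar case, the following standard form (a textbook formulation, not quoted from the thesis):

```latex
X_t = X_0 + \int_0^t K(t-s)\, b(X_s)\, \mathrm{d}s
          + \int_0^t K(t-s)\, \sigma(X_s)\, \mathrm{d}W_s ,
```

where $K$ is the convolution kernel, $b$ and $\sigma$ are the drift and diffusion coefficients, and $W$ is a Brownian motion. The fractional kernel $K(t) = t^{H-1/2}/\Gamma(H+1/2)$ with small $H$ produces the "rough" sample paths behind the rough-volatility models mentioned at the end of the abstract; for such kernels $X$ is in general non-Markovian, which is what motivates the infinite-dimensional lift.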
187

Convolution type operators on cones and asymptotic spectral theory

Mascarenhas, Helena 23 January 2004 (has links)
This thesis deals with convolution operators on cones acting on Lebesgue spaces L^p(R^2) (1&lt;p&lt;∞) of functions on the plane. Asymptotic spectral properties of the corresponding finite sections are studied. In the case p=2 (Hilbert space), the invertibility problem for convolution-type operators on cones is investigated using the method of standard model algebras.
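The finite-section method behind these asymptotic spectral questions can be illustrated in the simplest discrete setting: truncate a convolution (Toeplitz) operator to n x n matrices and watch the spectra of the truncations. A NumPy sketch with a hypothetical symbol a(θ) = 2 + 2 cos θ:

```python
import numpy as np

def finite_section(coeffs, n):
    # n x n truncation A_n of the discrete convolution operator with
    # kernel coefficients a_k: entries (A_n)[i, j] = a_{i-j}.
    A = np.zeros((n, n))
    for k, a in coeffs.items():
        if abs(k) < n:
            A += a * np.eye(n, k=-k)
    return A

# Symbol a(theta) = 2 + 2 cos(theta): coefficients a_0 = 2, a_{+-1} = 1.
A = finite_section({0: 2.0, 1: 1.0, -1: 1.0}, 50)
eigs = np.linalg.eigvalsh(A)
# The eigenvalues of the finite sections fill out the symbol's range
# [0, 4] as n grows -- the kind of asymptotic behaviour studied here.
print(eigs.min(), eigs.max())
```

The thesis works in the much harder two-dimensional setting of cones in L^p(R^2), but the basic question is the same: how do spectral quantities of the truncations behave as the truncation grows.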
188

HBONEXT: AN EFFICIENT DNN FOR LIGHT EDGE EMBEDDED DEVICES

Joshi, Sanket Ramesh (10716561) 10 May 2021 (has links)
Every year the most effective deep-learning models, CNN architectures, are showcased based on their compatibility and performance on embedded edge hardware, especially for applications such as image classification. These deep-learning models demand a significant amount of computation and memory, so they are typically run on high-performance computing systems such as CPUs or GPUs; however, they often fail to meet portable requirements because of resource, energy, and real-time constraints. Hardware accelerators have recently been designed to provide the computational resources that AI and machine-learning tools need; these edge accelerators offer high-performance hardware that helps maintain the required precision. The classification problem of investigating channel interdependencies, using either depthwise or group-wise convolutional features, has also benefited from the inclusion of bottleneck modules. Because of its growing use in portable applications, the classic inverted residual block, a well-known architectural technique, has gained further recognition. This work takes a step forward by introducing a design method for porting CNNs to low-resource embedded systems, essentially bridging the gap between deep-learning models and embedded edge systems. To achieve these goals, we use computing strategies that reduce computational load and memory usage while retaining good deployment efficiency. This thesis introduces HBONext, a mutated version of the Harmonious Bottleneck (DHbneck) combined with a flipped version of the inverted residual (FIR), which outperforms the current HBONet architecture in both accuracy and model-size miniaturization. Unlike the standard inverted residual, the FIR block performs identity mapping and spatial transformation at its higher dimensions.

The HBO structure, on the other hand, focuses on two orthogonal dimensions, spatial (H/W) contraction-expansion and subsequent channel (C) expansion-contraction, both organized in a bilaterally symmetric manner. HBONext is a variant designed specifically for embedded and mobile applications. This work also shows how to build a real-time HBONext image classifier on the NXP BlueBox 2.0; integrating the model into this hardware worked well owing to its small size of 3 MB. The model was trained and validated on the CIFAR-10 dataset and performed very well thanks to its smaller size and higher accuracy. The baseline HBONet architecture achieves a validation accuracy of 80.97% with a model size of 22 MB, whereas the proposed HBONext variants achieve a higher validation accuracy of 89.70% with a model size of 3.00 MB, measured by the number of parameters. The performance metrics of the HBONext architecture and its variants are compared in the following chapters.
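The order-of-magnitude model-size reductions reported above (22 MB baseline vs 3 MB) come largely from factorized convolutions such as the depthwise/group-wise features mentioned. A back-of-the-envelope parameter count with hypothetical layer widths shows why:

```python
def conv_params(c_in, c_out, k=3):
    # Parameters of a standard k x k convolution (bias/BN ignored).
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    # Depthwise k x k convolution followed by a 1x1 pointwise convolution,
    # the factorization used in MobileNet-style bottleneck blocks.
    return c_in * k * k + c_in * c_out

std = conv_params(128, 256)
sep = depthwise_separable_params(128, 256)
print(std, sep, round(std / sep, 1))  # 294912 33920 8.7
```

An 8-9x parameter reduction per layer, compounded across a network, is what makes the 3 MB deployment on the BlueBox 2.0 feasible.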
189

Convolution and Localization Operators in Ultradistribution Spaces / Konvolucija i lokalizacijski operatori u ultradistribucionim prostorima

Prangoski, Bojan 30 September 2012 (has links)
We investigate the Laplace transform in Komatsu ultradistributions and give conditions under which an analytic function is the Laplace transform of an ultradistribution. We prove the equivalence of several definitions of the convolution of two Roumieu ultradistributions. For that purpose, we consider the ε tensor product of the space ~B'{Mp} and a locally convex space. We define specific global symbol classes of Shubin type and study the corresponding pseudodifferential operators of infinite order that act continuously on the spaces of tempered ultradistributions of Beurling and Roumieu type. For these classes we develop a symbolic calculus. We investigate the connection between the Anti-Wick and Weyl quantizations when the symbols belong to these classes. We find the largest subspace of ultradistributions for which the convolution with the Gaussian kernel exists. This gives a way to extend the definition of the Anti-Wick quantization to symbols that are not necessarily tempered ultradistributions.
190

Semantic Segmentation of RGB images for feature extraction in Real Time

Elavarthi, Pradyumna January 2019 (has links)
No description available.
