  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Hardware Consolidation Of Systolic Algorithms On A Coarse Grained Runtime Reconfigurable Architecture

Biswas, Prasenjit 07 1900 (has links) (PDF)
Application domains such as bio-informatics, DSP, structural biology, fluid dynamics, high-resolution direction finding, state estimation, and adaptive noise cancellation demand high-performance computing solutions for their simulation environments. The core computations of these applications are Numerical Linear Algebra (NLA) kernels. Direct solvers are predominantly required in domains like DSP and estimation algorithms such as the Kalman filter, where the matrices to be operated on are small or medium sized but dense. Faddeev's algorithm is often used for solving dense linear systems of equations. Modified Faddeev's algorithm (MFA) is a general algorithm on which LU decomposition, QR factorization or SVD of matrices can be realized. MFA has the attractive property of realizing a host of matrix operations by computing Schur complements on four blocked matrices, thereby reducing the overall computation requirements. We use MFA as a representative direct solver in this work. We further discuss the Givens-rotation-based QR algorithm for decomposing an arbitrary matrix, often used to solve the linear least squares problem. Systolic array architectures are widely accepted ASIC solutions for NLA algorithms, but the "can of worms" associated with this traditional solution spawns the need for alternatives: while custom hardware in the form of systolic arrays can deliver high performance, its rigid structure is neither scalable nor reconfigurable, and hence not commercially viable. We show how a reconfigurable computing platform can serve to contain the "can of worms". REDEFINE, a coarse-grained runtime reconfigurable architecture, has been used for systolic actualization of NLA kernels. We elaborate upon streaming NLA-specific enhancements to REDEFINE in order to meet the expected performance goals, and explore the need for an algorithm-aware custom compilation framework.
We propose a realization of Faddeev's algorithm on REDEFINE and show that REDEFINE performs several times faster than traditional general-purpose processors (GPPs). We then direct our interest to QR decomposition as the next NLA kernel, since it ensures better numerical stability than LU and other decompositions. We use QR decomposition as a case study to explore the design space of the proposed solution on REDEFINE, and also investigate the architectural details of the Custom Functional Units (CFUs) for these NLA kernels. We determine the right size of the sub-array in accordance with the optimal pipeline depth of the core execution units and the number of such units to be used per sub-array. The framework used to realize QR decomposition can be generalized to other decomposition algorithms such as LU, Faddeev's algorithm and Gauss-Jordan elimination with different CFU definitions.
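The Givens-rotation QR scheme referred to above admits a compact software sketch. The following Python illustration is hypothetical: it mirrors the textbook algorithm, not the REDEFINE CFU implementation, and all names are ours.

```python
import math

def givens(a, b):
    """Return (c, s) of the plane rotation that zeroes b in the pair (a, b)."""
    r = math.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, b / r

def qr_givens(A):
    """QR decomposition of a square matrix A (list of lists) by Givens rotations.

    Returns (Q, R) with R upper triangular and A = Q^T R, where Q accumulates
    the product of all rotations applied to the identity.
    """
    n = len(A)
    R = [row[:] for row in A]
    Q = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):                  # annihilate the subdiagonal, column by column
        for i in range(n - 1, j, -1):   # rotate rows (i-1, i) to zero out R[i][j]
            c, s = givens(R[i - 1][j], R[i][j])
            for k in range(n):
                R[i - 1][k], R[i][k] = c * R[i - 1][k] + s * R[i][k], -s * R[i - 1][k] + c * R[i][k]
                Q[i - 1][k], Q[i][k] = c * Q[i - 1][k] + s * Q[i][k], -s * Q[i - 1][k] + c * Q[i][k]
    return Q, R
```

In a systolic realization, each rotation (c, s) would be generated by a boundary cell and propagated across a row of internal cells; here the two inner loops play both roles sequentially.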
32

Núcleos isotrópicos e positivos definidos sobre espaços 2-homogêneos / Positive definite and isotropic kernels on compact two-point homogeneous spaces

Bonfim, Rafaela Neves 25 July 2017 (has links)
This work consists of two distinct parts within a single theme: positive definite kernels on manifolds. In the first part we present a characterization of the continuous, isotropic and positive definite matrix-valued kernels on a compact two-point homogeneous space. Using it, we investigate the strict positive definiteness of such kernels, first describing some independent sufficient conditions for that property to hold. In the case where the two-point homogeneous space is not a sphere, one of the conditions becomes necessary and sufficient for the strict positive definiteness of the kernel. In the same case, for 2×2 matrix-valued kernels, we present an alternative characterization of strict positive definiteness via the two main diagonal elements of the matrix representation of the kernel. In the second part, we restrict ourselves to scalar positive definite kernels on the same spaces and determine necessary and sufficient conditions in order that the product of two continuous, isotropic and positive definite kernels on a compact two-point homogeneous space be strictly positive definite. We also present an extension of this result to positive definite kernels defined on the Cartesian product of a locally compact group and a high-dimensional sphere, isotropy being kept in the spherical component.
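For orientation, the scalar ancestor of the characterizations mentioned above is Schoenberg's classical expansion on the sphere. The following sketch states only that well-known scalar case; the matrix-valued theory on general compact two-point homogeneous spaces studied in the thesis replaces Gegenbauer polynomials with Jacobi polynomials and scalar coefficients with positive semidefinite matrices.

```latex
% Schoenberg (1942): a continuous isotropic kernel K(x,y) = f(\cos d(x,y))
% on the sphere S^d, d >= 2, is positive definite if and only if
f(t) \;=\; \sum_{n=0}^{\infty} a_n \, C_n^{\lambda}(t),
\qquad a_n \ge 0,
\qquad \sum_{n=0}^{\infty} a_n \, C_n^{\lambda}(1) < \infty,
\qquad \lambda = \tfrac{d-1}{2}.
```

Here the \(C_n^{\lambda}\) are the Gegenbauer polynomials; strict positive definiteness then hinges on which coefficients \(a_n\) are strictly positive.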
33

Stable Bases for Kernel Based Methods

Pazouki, Maryam 13 June 2012 (has links)
No description available.
35

Modélisation du transport des électrons de basse énergie avec des modèles physiques alternatifs dans Geant4-DNA et application à la radioimmunothérapie / Low-energy electron transport with alternative physics models within Geant4-DNA code and radioimmunotherapy applications

Bordes, Julien 11 December 2017 (has links)
During this PhD thesis, new developments were brought to the Geant4-DNA track-structure Monte Carlo code and used to study the interactions of low-energy electrons in liquid water, the main constituent of living organisms. The accuracy of results obtained with Monte Carlo codes depends on the realism of their physics models: the interaction cross sections. CPA100 is another track-structure Monte Carlo code, equipped with ionization, electronic excitation and elastic scattering cross sections computed by methods independent of those underlying the Geant4-DNA cross sections (the original physics models "option 2" and its improvement "option 4"). Moreover, in some cases the CPA100 cross sections are in better agreement with experimental data. 
We therefore implemented the CPA100 cross sections in Geant4-DNA, giving users the choice of alternative physics models designated Geant4-DNA-CPA100; they have been freely available in the Geant4 platform since July 2017. To verify the correct implementation of these physics models within Geant4-DNA, several basic quantities simulated with Geant4-DNA-CPA100 were compared with CPA100, and very similar results were obtained; for instance, excellent agreement was found between track lengths and numbers of interactions. The impact of the cross sections was then assessed by using the original Geant4-DNA physics models ("option 2" and "option 4"), Geant4-DNA-CPA100 and the PENELOPE code to compute quantities of interest for dosimetry: dose-point kernels (DPKs, for monoenergetic electrons) and S values (for monoenergetic electrons and Auger electron emitters). DPKs computed with Geant4-DNA "option 2" and "option 4" were similar, with a systematic difference relative to Geant4-DNA-CPA100, whose DPKs were in good agreement with PENELOPE. S values obtained with Geant4-DNA "option 2" were overall close to those of Geant4-DNA-CPA100. Finally, energy deposits were mapped in a radioimmunotherapy context. Such simulations are usually performed assuming spherical tumors and uniform biodistributions of monoclonal antibodies; here, more realistic data were extracted from an innovative 3D model of follicular lymphoma incubated with antibodies. Energy deposits were computed for Auger electron emitters (111In and 125I) and beta-minus particle emitters (90Y, 131I and 177Lu). These calculations showed that beta-minus emitters deliver more energy and irradiate a larger fraction of the volume than Auger electron emitters; the most effective beta-minus emitter depends on the size of the model used.
36

Asymptotic properties of Non-parametric Regression with Beta Kernels

Natarajan, Balasubramaniam January 1900 (has links)
Doctor of Philosophy / Department of Statistics / Weixing Song / Kernel-based non-parametric regression is a popular statistical tool for identifying the relationship between response and predictor variables when standard parametric regression models are not appropriate. The efficacy of kernel-based methods depends on both the kernel choice and the smoothing parameter. With insufficient smoothing the resulting regression estimate is too rough, and with excessive smoothing important features of the underlying relationship are lost. While the choice of kernel has been shown to have less of an effect on the quality of the regression estimate, it is important to choose kernels that best match the support set of the underlying predictor variables. In the past few decades there have been multiple efforts to quantify the properties of asymmetric kernel density and regression estimators. Unlike classic symmetric kernel-based estimators, asymmetric kernels do not suffer from boundary problems; for example, Beta kernel estimates are especially suitable for investigating the distribution structure of predictor variables with compact support. In this dissertation, two types of Beta-kernel-based non-parametric regression estimators are proposed and analyzed. First, a Nadaraya-Watson-type Beta kernel estimator is introduced within the regression setup, followed by a local linear regression estimator based on Beta kernels. For both regression estimators a comprehensive analysis of large-sample properties is presented; specifically, the asymptotic normality and the uniform almost sure convergence of the new estimators are established for the first time. Additionally, general guidelines for bandwidth selection are provided. The finite-sample performance of the proposed estimators is evaluated via both a simulation study and a real data application.
The results presented and validated in this dissertation help advance the understanding and use of Beta kernel based methods in other non-parametric regression applications.
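A minimal sketch of a Nadaraya-Watson-type Beta kernel estimator of the kind described above, using the Beta kernel of Chen (1999), in which the evaluation point x and bandwidth b set the Beta(x/b + 1, (1-x)/b + 1) shape parameters. The function names and toy usage are our own illustrative assumptions, not the dissertation's code.

```python
import math

def beta_pdf(u, p, q):
    """Density of the Beta(p, q) distribution at u in (0, 1)."""
    if u <= 0.0 or u >= 1.0:
        return 0.0
    log_b = math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)  # log Beta function
    return math.exp((p - 1.0) * math.log(u) + (q - 1.0) * math.log(1.0 - u) - log_b)

def nw_beta(x, xs, ys, b):
    """Nadaraya-Watson regression estimate at x in [0, 1] with the Beta kernel:
    each design point xi is weighted by the Beta(x/b + 1, (1-x)/b + 1) density
    evaluated at xi, so the kernel support matches the unit interval exactly
    (no boundary bias)."""
    w = [beta_pdf(xi, x / b + 1.0, (1.0 - x) / b + 1.0) for xi in xs]
    s = sum(w)
    if s == 0.0:
        return float("nan")
    return sum(wi * yi for wi, yi in zip(w, ys)) / s
```

Because the kernel's support is [0, 1] by construction, no weight ever falls outside the predictor's support, which is the boundary-correction property the abstract highlights.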
37

Bayesian Optimization for Neural Architecture Search using Graph Kernels

Krishnaswami Sreedhar, Bharathwaj January 2020 (has links)
Neural architecture search is a popular method for automating architecture design. Bayesian optimization is a widely used approach for hyper-parameter optimization and can estimate a function from a limited number of samples. However, Bayesian optimization methods are not usually preferred for architecture search because they expect vector inputs, whereas architectures are high-dimensional graph data. This thesis presents a Bayesian approach with Gaussian priors that uses graph kernels specifically targeted at the higher-dimensional graph space. We implemented three different graph kernels and show that, on the NAS-Bench-101 dataset, an untrained graph convolutional network kernel significantly outperforms previous methods, both in the best network found and in the number of samples required to find it. We follow the AutoML guidelines to make this work reproducible.
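Graph kernels of the kind used above compare graphs through feature counts rather than raw adjacency, which is what lets a Gaussian process operate on architecture graphs. As a hedged illustration only, here is the classical Weisfeiler-Lehman subtree kernel; it is not necessarily one of the three kernels implemented in the thesis.

```python
from collections import Counter

def wl_features(adj, labels, iters=2):
    """Weisfeiler-Lehman subtree features of a graph given as an adjacency
    list plus initial node labels: repeatedly relabel each node by the pair
    (own label, sorted multiset of neighbour labels), counting every label
    seen at every refinement round."""
    feats = Counter(labels)
    cur = list(labels)
    for _ in range(iters):
        cur = [(cur[v], tuple(sorted(cur[u] for u in adj[v]))) for v in range(len(adj))]
        feats.update(cur)
    return feats

def wl_kernel(g1, g2, iters=2):
    """WL graph kernel: dot product of the two sparse feature vectors."""
    f1 = wl_features(*g1, iters)
    f2 = wl_features(*g2, iters)
    return sum(f1[k] * f2[k] for k in f1)
```

In a Bayesian-optimization loop, such a kernel would supply the covariance between candidate architectures, replacing the usual vector-space RBF kernel.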
38

Text Mining Infrastructure in R

Meyer, David, Hornik, Kurt, Feinerer, Ingo 31 March 2008 (has links) (PDF)
During the last decade text mining has become a widely used discipline utilizing statistical and machine learning methods. We present the tm package, which provides a framework for text mining applications within R. We give a survey of text mining facilities in R and explain how typical application tasks can be carried out using our framework. We present techniques for count-based analysis methods, text clustering, text classification and string kernels. (authors' abstract)
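The count-based analyses mentioned above start from a term-document matrix, which the tm package builds in R. As a language-neutral sketch of the same idea in Python (whitespace tokenization and lower-casing are simplifying assumptions; tm's own preprocessing is richer):

```python
from collections import Counter

def term_document_matrix(docs):
    """Build a term-document count matrix from a list of raw text documents.
    Returns the sorted vocabulary and one row of term counts per document."""
    counts = [Counter(doc.lower().split()) for doc in docs]
    vocab = sorted({term for c in counts for term in c})
    return vocab, [[c[term] for term in vocab] for c in counts]
```

Clustering and classification then operate on the rows (documents) or columns (terms) of this matrix, possibly after a TF-IDF reweighting.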
39

Compression guidée par automate et noyaux rationnels / Compression guided by automata and rational kernels

Amarni, Ahmed 11 May 2015 (has links)
Due to the expansion of data, compression algorithms are now crucial. We address the problem of finding compression algorithms that are optimal with respect to a given Markovian source; to this end, we extend the classical Huffman algorithm. We first apply Huffman coding locally to each state of the Markovian source and give the resulting efficiency. To further improve this efficiency, we give a second algorithm, again applied locally to each state of the source, which encodes the factors leaving each state in such a way that each factor's probability is a power of 1/2 (recalling that the Huffman algorithm is optimal if and only if all symbols to be encoded have probabilities that are powers of 1/2). As a perspective, we give another algorithm (restricted to compression of the star) for encoding a weighted expression, with the longer-term goal of encoding a complete expression. Kernels are popular methods to measure the similarity between words for classification and learning. We generalize the definition of rational kernels in order to apply kernels to the comparison of languages. We study this generalization for the factor and subsequence kernels and prove that these kernels are defined for parameters chosen in an appropriate interval. We give different methods to build weighted transducers which compute these kernels.
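The first step described above — applying Huffman coding locally at each state of the Markov source — can be sketched directly. The state/transition encoding below is an illustrative assumption; the thesis's construction over factors is more refined.

```python
import heapq

def huffman_code(probs):
    """Standard Huffman code for a dict mapping symbol -> probability.
    Returns a dict mapping each symbol to its binary codeword."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique counter so heapq never compares the dicts
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees...
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))  # ...merged into one
        tiebreak += 1
    return heap[0][2]

def markov_huffman(transitions):
    """One Huffman code per state of the Markov source:
    transitions maps state -> {next_symbol: probability}."""
    return {state: huffman_code(p) for state, p in transitions.items()}
```

The encoder then switches code tables as the source moves between states, which is exactly what makes the scheme sensitive to the Markov structure rather than only to the stationary symbol frequencies.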
40

Learning via Query Synthesis

Alabdulmohsin, Ibrahim Mansour 07 May 2017 (has links)
Active learning is a subfield of machine learning that has been successfully used in many applications. One of its main branches is query synthesis, where the learning agent constructs artificial queries from scratch in order to reveal sensitive information about the underlying decision boundary. It has found applications in areas such as adversarial reverse engineering, automated science, and computational chemistry. Nevertheless, the existing literature on membership query synthesis has generally focused on finite concept classes or toy problems, with limited extension to real-world applications. In this thesis, I develop two spectral algorithms for learning halfspaces via query synthesis. The first is a maximum-determinant convex optimization method, while the second is a Markovian method that relies on Khachiyan's classical update formulas for solving linear programs. The common theme of these methods is to construct an ellipsoidal approximation of the version space and then to synthesize queries via spectral decomposition. I also describe how these algorithms can be extended to other settings, such as pool-based active learning. Having demonstrated that halfspaces can be learned quite efficiently via query synthesis, the second part of this thesis proposes strategies for mitigating the risk of reverse engineering in adversarial environments. One approach that can be used to render query synthesis algorithms ineffective is to implement a randomized response. In this thesis, I propose a semidefinite program (SDP) for learning a distribution of classifiers, subject to the constraint that any individual classifier picked at random from this distribution provides reliable predictions with high probability. This algorithm is then justified both theoretically and empirically. A second approach is to use a non-parametric classification method, such as similarity-based classification.
In this thesis, I argue that learning via the empirical kernel maps, also commonly referred to as 1-norm Support Vector Machine (SVM) or Linear Programming (LP) SVM, is the best method for handling indefinite similarities. The advantages of this method are established both theoretically and empirically.
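The 1-norm SVM referred to above replaces the quadratic penalty of the standard SVM with an L1 penalty on the expansion coefficients, which turns training into a linear program. A hedged sketch of the usual formulation follows; the notation is assumed for illustration, not quoted from the thesis.

```latex
\min_{\alpha,\; b,\; \xi}\;\; \sum_{j=1}^{n} |\alpha_j| \;+\; C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad
y_i \Big( \sum_{j=1}^{n} \alpha_j \, K(x_i, x_j) + b \Big) \;\ge\; 1 - \xi_i,
\qquad \xi_i \ge 0, \quad i = 1, \dots, n.
```

Because the constraints use \(K\) only through its entries, \(K\) may be an arbitrary (possibly indefinite) similarity matrix — the LP needs no positive semidefiniteness assumption, which is precisely why this formulation suits indefinite similarities.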
