About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Modelagem estocástica de sequências de disparos de um conjunto de neurônios / Stochastic modeling of spike trains of a set of neurons

Arias Rodriguez, Azrielex Andres 13 August 2013 (has links)
This work describes an initial effort to model neuronal spike trains using Variable Length Markov Chains (VLMC), models introduced by Rissanen (1983). Their central idea is that the probability of each symbol depends only on a finite portion of the past, and the length of this relevant portion is itself a function of the past. This portion is called a "context", and the set of contexts can be represented as a rooted labeled tree.
Several estimation methods have been proposed in the past, all of which require the user to fix tuning constants; for this reason Galves et al. (2012) introduced the "smallest maximizer criterion" (SMC), a consistent and constant-free model selection procedure. On the other hand, the idea that the brain processes information probabilistically has been gaining strength in neuroscience. Motivated by this, the data collected by Sidarta Ribeiro and his team on neuronal activity in rats were used to estimate the context trees describing the spike trains of four hippocampal neurons and to identify possible associations between them; comparisons were also made according to the behavioural state of the rat (wake/sleep). In all cases the SMC algorithm was used to estimate the context trees. Finally, a discussion is opened on the sample size required for this kind of analysis.
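To make the notion of a variable-length context concrete, here is a minimal sketch of a VLMC over the binary alphabet. The context tree and its probabilities are hypothetical, chosen for illustration; they are not the estimator from the thesis nor values fitted to the spike-train data.

```python
# Minimal sketch of a variable-length Markov chain (VLMC) over {0, 1}.
# The context tree and probabilities are hypothetical, for illustration
# only -- they are not estimated from the thesis's spike-train data.
CONTEXT_TREE = {
    # context -> distribution of the next symbol
    "1":  {"0": 0.5, "1": 0.5},   # past ends in 1
    "10": {"0": 0.8, "1": 0.2},   # past ends in 0, preceded by 1
    "00": {"0": 0.3, "1": 0.7},   # past ends in 0, preceded by 0
}

def find_context(past, tree):
    """Return the unique suffix of `past` that is a context of the tree.

    In a VLMC the length of the relevant portion of the past varies:
    we scan suffixes of increasing length until one is a context.
    """
    for k in range(1, len(past) + 1):
        if past[-k:] in tree:
            return past[-k:]
    raise ValueError("past too short for this context tree")

def next_symbol_probs(past, tree=CONTEXT_TREE):
    """Distribution of the next symbol given the past."""
    return tree[find_context(past, tree)]
```

Note how different pasts use contexts of different lengths: a past ending in 1 needs only one symbol of memory, while a past ending in 0 needs two. The SMC of Galves et al. (2012) addresses the harder problem of selecting such a tree from data without tuning constants.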
12

Etude de mécanismes d’hybridation pour les détecteurs d’imagerie Infrarouge / Study of hybridization mechanisms for two dimensional infrared detectors

Bria, Toufiq 07 December 2012 (has links)
The evolution of microelectronics follows several major directions, in particular the miniaturization of active elements (smaller transistors) and the increase of interconnect density, as summarized by Gordon Moore's prediction that integration density on silicon would double roughly every two years. This evolution drives down both the price and the weight of components. Flip-chip hybridization is part of this trend: it consists in assembling heterogeneous materials, here a silicon readout circuit and an InP or GaAs detection circuit joined through a matrix of indium bumps. The flip-chip connection relies on small metal pads, which reduces electrical losses (low inductance and low noise), improves heat dissipation, provides good mechanical strength, and favours miniaturization through higher compactness and interconnect density.
This thesis work concentrates on two main directions. The first concerns hybridization by soldering, using indium-bump technology with a reflow process; the second concerns pressure-induced hybridization at room temperature ("nano-scratch") through nanostructures (gold nanowires, ZnO nanowires). This work led to the realization of an InGaAs detector with visible extension in TV format, 640 × 512 pixels at a 15 µm pitch, and to the mechanical validation of the assembly, by the same reflow method, of a double-TV component of 1280 × 1024 pixels at a 10 µm pitch. For the cold-hybridization direction, a method of growing ZnO nanowires by a low-temperature (<90 °C) hydrothermal route was validated.
14

Covering systems

Klein, Jonah 12 1900 (has links)
A covering system is a finite set of arithmetic progressions with the property that every integer belongs to at least one of them. The study of covering systems was started by Erdős in the 1950s, and he asked many questions about them in the following years.
One of his most famous questions is whether the minimum modulus of a covering system with distinct moduli is uniformly bounded. In 2015, Hough showed that it is, and that 10^16 is an admissible bound. Building on his work, but simplifying the method, Balister, Bollobás, Morris, Sahasrabudhe and Tiba reduced this bound to 616,000. Their method led to many further applications; notably, they counted the number of covering systems with a fixed number of moduli. The first part of this thesis studies a related question: counting the number of covering systems with a given set of moduli. The technique developed to do this for certain sets leads to the study of symmetries of covering systems. The second part looks at variants of the minimum modulus problem. We give bounds on the minimum modulus of a covering system of multiplicity s, that is, one in which each modulus appears at most s times; we then use this result to show that the minimum modulus of a covering system of multiplicity 1 of an arithmetic progression is bounded, and that the n-th smallest modulus in a covering system of multiplicity 1 is bounded.
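As a small illustration of the definition (not of the thesis's counting machinery), whether a finite set of progressions covers all integers can be checked on a single full period, namely the lcm of the moduli. The sketch below verifies Erdős's classic covering system with distinct moduli 2, 3, 4, 6, 12.

```python
from math import lcm  # variadic math.lcm requires Python 3.9+

def is_covering(progressions):
    """Check whether the arithmetic progressions (r, m), i.e. the sets
    { n : n ≡ r (mod m) }, cover every integer.  Coverage is periodic
    with period lcm of the moduli, so one period suffices."""
    period = lcm(*(m for _, m in progressions))
    return all(
        any(n % m == r for r, m in progressions)
        for n in range(period)
    )

# Erdős's classic covering system with distinct moduli.
erdos = [(0, 2), (0, 3), (1, 4), (5, 6), (7, 12)]
```

Here `is_covering(erdos)` holds, while dropping any progression breaks coverage; the minimum modulus of this system is 2, and the minimum-modulus problem asks how large that smallest modulus can be forced to be while keeping the moduli distinct.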
15

World-wide body size patterns in freshwater fish by geography, size class, trophic level, and taxonomy

Adhikari, Shishir 01 September 2015 (has links)
No description available.
16

Distributed Support Vector Machine With Graphics Processing Units

Zhang, Hang 06 August 2009 (has links)
Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. Sequential Minimal Optimization (SMO) is a decomposition-based algorithm which breaks this large QP problem into a series of smallest possible QP problems. However, it still costs O(n²) computation time. In our SVM implementation, we can do training with huge data sets in a distributed manner (by breaking the dataset into chunks, then using Message Passing Interface (MPI) to distribute each chunk to a different machine and processing SVM training within each chunk). In addition, we moved the kernel calculation part in SVM classification to a graphics processing unit (GPU) which has zero scheduling overhead to create concurrent threads. In this thesis, we will take advantage of this GPU architecture to improve the classification performance of SVM.
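The kernel-evaluation step that the thesis offloads to the GPU can be sketched in plain Python. The RBF kernel and all numeric values below are assumptions for illustration (the abstract does not specify the kernel); the point is that each kernel evaluation in the decision function is independent, which is what makes the loop map naturally onto concurrent GPU threads.

```python
from math import exp

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel K(x, y) = exp(-gamma * ||x - y||^2).
    The kernel choice here is an assumption for illustration."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return exp(-gamma * sq_dist)

def decision_value(x, support_vectors, alphas, labels, bias, gamma=0.5):
    """SVM decision function f(x) = sum_i alpha_i * y_i * K(sv_i, x) + b.
    Each term of this sum is independent of the others, so on a GPU
    one thread can compute one kernel evaluation with no scheduling
    dependency between threads."""
    return sum(
        a * y * rbf_kernel(sv, x, gamma)
        for sv, a, y in zip(support_vectors, alphas, labels)
    ) + bias
```

A classification is then `sign(decision_value(...))`; the hypothetical support vectors, multipliers `alphas`, and `bias` would come from SMO training, which is the part the thesis distributes over MPI by chunking the dataset.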
