1 |
Detekce a analýza polychronních skupin neuronů v spikujících sítích. / Detection and analysis of polychronous groups emerging in spiking neural network models. Šťastný, Bořek, January 2018 (has links)
How is information represented in real neural networks? Experimental results continue to provide evidence for the presence of spiking patterns in network activity. The concept of polychronous groups attempts to explain these results by proposing that neurons group together to fire in non-synchronous but precisely time-locked chains. Several methods for the detection of such groups have been proposed; however, they all rely on extensive searches of the network structure, which limits their usefulness. We present a new method that observes spiking dependencies in network activity to detect polychronous groups directly. Our method trades some detection selectivity for considerably more efficient computation, and it allows the analysis of polychronous groups emerging in noisy networks. Our results support the existence of structure-forming properties of spontaneous activity in neural networks.
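The detection idea can be illustrated with a short, hypothetical sketch (not the thesis's actual algorithm; all names and parameters are illustrative assumptions): rather than searching the connectivity graph, scan the spike raster itself for firing chains that recur with the same precise lags.

```python
from collections import defaultdict

def detect_timelocked_chains(spikes, window=20, min_repeats=3, jitter=1):
    """Find candidate polychronous groups directly from network activity.

    spikes      : list of (time_ms, neuron_id) pairs
    window      : how far ahead (ms) to look for follower spikes
    min_repeats : how often a lag pattern must recur to be reported
    jitter      : temporal tolerance (ms) used when binning spike lags
    """
    spikes = sorted(spikes)
    patterns = defaultdict(int)
    for i, (t0, anchor) in enumerate(spikes):
        chain = []
        for t, n in spikes[i + 1:]:
            if t - t0 > window:
                break
            # quantise the lag so nearly identical timings map to one pattern
            chain.append((n, round((t - t0) / jitter)))
        if chain:
            patterns[(anchor, tuple(chain))] += 1
    return {p: c for p, c in patterns.items() if c >= min_repeats}

if __name__ == "__main__":
    # toy raster: neurons 1 -> 2 -> 3 fire with fixed 5 ms lags, four times
    raster = []
    for start in (0, 100, 200, 300):
        raster += [(start, 1), (start + 5, 2), (start + 10, 3)]
    for group, count in detect_timelocked_chains(raster).items():
        print(group, "repeats", count)
```

On the toy raster the recurring chains anchored at neurons 1 and 2 are reported; a realistic detector would additionally need to tolerate jitter, missing spikes and background activity, which is where the selectivity trade-off mentioned above comes in.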
|
2 |
Formal Approaches to Globally Asynchronous and Locally Synchronous Design. Xue, Bin, 30 September 2011 (has links)
The research reported in this dissertation is motivated by two trends in the system-on-chip (SoC) design industry. First, with continued technology scaling, interconnect delays are growing relative to gate delays, leading to multi-cycle communication delays between functional blocks on the chip; this makes a synchronous global clock difficult to implement and power-hungry. As a result, globally asynchronous and locally synchronous (GALS) designs have been proposed for future SoCs. Second, driven by time-to-market pressure and productivity gains, intellectual property (IP) block reuse is a rising trend in SoC design. Predesigned IPs may already be optimized and timing-verified for a certain clock frequency, so when they are used in an SoC, GALS offers a good solution that avoids re-optimizing or redesigning the existing IPs. A special case of GALS, known as the Latency-Insensitive Protocol (LIP), lets designers keep the well-understood and well-developed synchronous design flow while handling multi-cycle latency on the interconnects. The communication fabrics for LIP are synchronous pipelines with handshaking. However, handshake-based protocols require complex control logic, and unnecessary handshakes reduce the system's throughput. Scheduling-based LIP was therefore proposed: it avoids handshakes by pre-computing a clock-gating sequence for each block, and it has been shown to achieve better throughput and to be easier to implement. Unfortunately, a static schedule exists only for bounded systems, so work in the literature restricts its discussion to systems whose graph representation consists of a single strongly connected component (SCC), which is known from the theory to be bounded.
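As a rough illustration of the single-SCC restriction (an assumption-laden toy, not the dissertation's tooling), the sketch below decomposes a block-level connection graph into strongly connected components; a system with more than one SCC falls outside the classical scheduled-LIP setting.

```python
def sccs(graph):
    """Kosaraju's algorithm; graph maps a block to a list of successor blocks."""
    order, seen = [], set()

    def dfs(node, g, out):
        seen.add(node)
        for succ in g.get(node, ()):
            if succ not in seen:
                dfs(succ, g, out)
        out.append(node)

    for node in graph:
        if node not in seen:
            dfs(node, graph, order)

    reverse = {}
    for node, succs in graph.items():
        for succ in succs:
            reverse.setdefault(succ, []).append(node)

    seen.clear()
    components = []
    for node in reversed(order):
        if node not in seen:
            component = []
            dfs(node, reverse, component)
            components.append(component)
    return components

if __name__ == "__main__":
    # block B feeds a feedback loop formed by A and C: two SCCs in total
    system = {"A": ["C"], "B": ["C"], "C": ["A"]}
    print(sccs(system))
```

Because this example graph has two SCCs, a static schedule is not guaranteed to exist for it, which is exactly the gap the extensions described below address.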
This dissertation provides an optimizing design flow for LIP synthesis with respect to back pressure, throughput, and buffer sizes. It extends scheduled LIP with minimal modifications to make it general enough to apply to most systems, especially those with multiple SCCs. To guarantee design correctness, a formal framework is required that can analyze concurrency and rule out erroneous behaviors such as overflow and deadlock. Among the many formal models of concurrency previously used in asynchronous system design, marked graphs, the periodic clock calculus, and polychrony are chosen for modeling, analysis, and verification in this work.
Polychrony, originally developed for embedded software modeling and synthesis, can specify multi-rate interfaces; a synchronous composition can then be analyzed to avoid incompatibilities and combinational loops, which would cause an incorrect GALS distribution. The marked graph model is a good candidate for representing the interconnection network and is well suited to modeling the communication and synchronization in LIP. The periodic clock calculus is useful for analyzing clock-gating sequences because it easily captures data dependencies, throughput constraints, and the buffer sizes required for synchronization. Together, these formal methods establish a formally based design flow for creating a synchronous design and then transforming it into a GALS implementation, either using LIP or a more general GALS mechanism. / Ph. D.
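To give a flavour of what the periodic clock calculus captures (a simplified encoding invented for illustration, not the dissertation's formalism), the toy below represents each block's clock-gating sequence as a periodic binary word and reads off its throughput together with a buffer bound for a single producer-consumer channel.

```python
def throughput(word):
    """Fraction of cycles in which a block is enabled, e.g. '1101' -> 0.75."""
    return word.count("1") / len(word)

def buffer_bound(producer, consumer, periods=4):
    """Peak backlog between two blocks gated by periodic binary words.

    A '1' means the block fires in that cycle; simulating a few repetitions
    of the hyper-period gives the largest number of tokens ever waiting in
    the channel, i.e. the buffer size needed.
    """
    length = len(producer) * len(consumer) * periods
    backlog = peak = 0
    for i in range(length):
        if producer[i % len(producer)] == "1":
            backlog += 1
        if consumer[i % len(consumer)] == "1" and backlog > 0:
            backlog -= 1
        peak = max(peak, backlog)
    return peak

if __name__ == "__main__":
    prod, cons = "1101", "0111"            # hypothetical gating sequences
    print("producer throughput:", throughput(prod))
    print("consumer throughput:", throughput(cons))
    print("buffer slots needed:", buffer_bound(prod, cons))
```

With these two gating words the channel never holds more than two tokens, so a two-slot buffer suffices; the calculus itself derives such bounds symbolically rather than by simulation.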
|
3 |
Formal Model Driven Software Synthesis for Embedded Systems. Jose, Bijoy Antony, 31 August 2011 (has links)
Due to the ever-increasing complexity of safety-critical applications, handwritten code is being replaced by automatically generated code derived from a high-level specification. Code generation from a high-level specification requires a model of computation with an underlying formalism and correctness-preserving refinement steps to generate the lower-level application code; such software synthesis techniques are said to be 'correct-by-construction'. Synchronous programming languages such as Esterel and LUSTRE, which are based on a synchronous model of computation, are used for sequential code generation. They rely on a synchrony assumption (zero-time intra-process computation and zero-time inter-process communication) at the specification level. Early versions of synchronous languages followed an execution pattern in which one iteration of the software was mapped to the interval between ticks of an external reference clock. Since this external reference tick was unrelated to the variables (or signals) within the software, redundant operations such as reading ports and computing guards were performed on every tick. In this dissertation, we highlight some of these performance issues and missed optimization opportunities, and we show how a multi-clock (or polychronous) formalism, in which each variable has its own rate of execution, can avoid these problems.
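A back-of-envelope illustration of that redundancy (purely hypothetical signal names and rates): when everything is sampled on one reference clock, every port is read on every tick even though most signals are absent most of the time, whereas per-signal clocks trigger work only when a value is actually present.

```python
import random

random.seed(1)
TICKS = 1000
# each input signal arrives only on a fraction of reference-clock ticks
arrival_rate = {"sensor": 0.10, "command": 0.02, "heartbeat": 0.50}

ops_single_clock = 0
ops_polychronous = 0
for _ in range(TICKS):
    present = {s: random.random() < r for s, r in arrival_rate.items()}
    # single-clock loop: read every port and evaluate every guard each tick
    ops_single_clock += len(arrival_rate)
    # polychronous view: only signals whose own clock ticks cause any work
    ops_polychronous += sum(present.values())

print("port reads with a single reference clock:", ops_single_clock)
print("port reads with per-signal clocks       :", ops_polychronous)
```

The gap between the two counters stands for the wasted work that a polychronous treatment avoids.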
An existing polychronous language named SIGNAL creates a hierarchy of clocks based on the rate of execution of individual variables, forming a root clock that acts as the reference tick. We seek to replace this clock analysis with a technique that forms a unique order of events without a reference timeline. For this purpose, we present a new polychronous formalism termed Multi-rate Instantaneous Channel connected Data Flow (MRICDF). Our new synthesis technique inspects the specification at the level of Boolean equations to identify a master trigger that acts as the reference tick. Furthermore, we attempt to make polychronous-specification-based software synthesis more accessible to practicing engineers by constructing a software tool, EmCodeSyn, with a visual environment for specification and a more intuitive analysis technique. Our Boolean approach to sequential synthesis of embedded software has multiple implementations, each of which utilizes existing academic software tools; optimizations are proposed to minimize synthesis time by simplifying the input to these external tools. Weaknesses in the causal loop analysis techniques applied by existing synthesis tools are highlighted, and solutions for performing time-efficient loop analysis are integrated into EmCodeSyn. We have also determined that a part of the non-synthesizable polychronous specifications can be used to generate correct multi-threaded code. Additionally, we investigate the composition of polychronous modules and propose properties that are necessary to guarantee agreement on shared signals. / Ph. D.
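A miniature of the Boolean view of clocks (a simplified encoding assumed here for illustration, not EmCodeSyn's implementation): each signal's clock is a Boolean variable, clock relations become Boolean constraints, and a master trigger is a clock that is present in every admissible instant in which anything at all is present.

```python
from itertools import product

def find_master_triggers(clocks, constraints):
    """Return the clocks that tick whenever any clock ticks.

    clocks      : list of clock names (Boolean variables, 1 = signal present)
    constraints : list of functions env -> bool encoding the clock equations
    """
    candidates = set(clocks)
    for values in product((0, 1), repeat=len(clocks)):
        env = dict(zip(clocks, values))
        if not all(check(env) for check in constraints):
            continue                      # not an admissible instant
        if not any(values):
            continue                      # the silent instant constrains nothing
        candidates -= {name for name in clocks if not env[name]}
    return candidates

if __name__ == "__main__":
    clocks = ["x", "y", "z"]
    constraints = [
        lambda e: e["y"] <= e["x"],       # y is a downsampling of x
        lambda e: e["z"] == e["x"],       # z is synchronous with x
    ]
    print(find_master_triggers(clocks, constraints))   # {'x', 'z'}
```

In this example y is a downsampling of x and z is synchronous with x, so x and z qualify as master-trigger candidates while y does not; a real specification would yield the equations from the MRICDF network rather than from hand-written lambdas.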
|
4 |
Computação por assembleias neurais em redes neurais pulsadas. / Computing with neural assemblies in spiking neural networks. Ribeiro, João Henrique Ranhel, 05 December 2011 (has links)
Um dos grandes mistérios da ciência é compreender como sistemas nervosos são capazes de realizar as extraordinárias operações computacionais que realizam. Provavelmente, encéfalos são as estruturas nas quais energia e matéria estão organizadas da forma mais complexa no universo. Central na computação cerebral está o conceito de neurônio. A forma como neurônios computam é motivo de intensa investigação científica. Um consenso atual é que neurônios formam grupos transientes (assembleias) a fim de representar coisas, de realizar operações computacionais, e de executar processos cognitivos; embora os mecanismos que fundamentam a computação por assembleias ainda não sejam bem compreendidos. Aqui é proposta uma forma pela qual se explica como computação por assembleias pode acontecer. Dois componentes são fundamentais para formação de coalizões neurais: a relação temporal entre grupos de neurônios e o fator de acoplamento entre eles. Assembleias pressupõem neurônios pulsantes; portanto, simulamos computação por assembleias em redes neurais pulsantes. A abordagem usada nesta tese é funcional; apresentamos um arcabouço teórico sobre propriedades, princípios, e dinâmicas que permitem operações computacionais por coalizões neurais. É apresentado na tese que: (i) quando neurônios formam assembleias está implícito que um tipo de função lógica estocástica ocorre, (ii) assembleias podem formar grupos com feedback, criando grupos biestáveis, (iii) grupos biestáveis criam representações internas dos eventos que os criaram, (iv) assembleias podem se ramificar e também dissolver outras assembleias, o que dá origem a algoritmos complexos. Esta é uma investigação inicial sobre computação em assembleias neurais, e há muito a ser feito. Nesta tese apresentamos os conceitos basais para esta nova abordagem. Há um conjunto de programas nos apêndices que permitem ao leitor simular formações de assembleias, ramificações, inibições, reverberações, entre outras propriedades e componentes de nossa proposta. / One of the greatest mysteries in science is to comprehend how brains are capable of performing the extraordinary computational operations they do. Brains are probably the structures in which matter and energy are organized in the most complex way in the universe. Central to brain computation is the concept of the neuron, and how neurons compute is the subject of intensive scientific investigation. A prevailing consensus is that neurons form transient groups (assemblies) in order to represent things, perform computational operations, and execute cognitive processes, although the mechanisms that underlie such computation by neural assemblies are not yet well understood. In this thesis we propose a way of explaining how neural assembly computation may occur. It is shown that two components are fundamental for neural coalition formation: the temporal relation among neural groups and the coupling factor among them. Neural assemblies presuppose spiking neurons; therefore, we simulate assembly computing using spiking neural networks. The thesis takes an essentially functional approach, presenting a theoretical framework for the properties, principles, characteristics, and components that allow computational operations in neural coalitions.
It is shown in the thesis that: (i) when neurons form assemblies, a kind of stochastic logic function is implicitly computed; (ii) assemblies may form groups that feed back on each other, creating bistable groups; (iii) bistable groups internally represent the event that created them; (iv) assemblies may branch and dissolve other assemblies, which gives rise to complex algorithms. This is an initial investigation of neural assembly computing and much remains to be done; in this thesis we present the foundational concepts for this new approach. Programs in the appendices allow the reader to simulate assembly formation, branching, inhibition, and reverberation, among other properties and components of our proposal.
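The feedback mechanism behind items (ii) and (iii) can be caricatured in a few lines (a deliberately crude toy, far simpler than the simulation programs in the appendices): two mutually exciting assemblies, once triggered, keep re-igniting each other and thereby retain the fact that the triggering event occurred.

```python
def simulate(steps=30, coupling=1.2, threshold=1.0, trigger_at=5):
    """Two assemblies, A and B, with reciprocal excitation.

    Neither assembly fires before the trigger.  A single external pulse into
    A at `trigger_at` starts a reverberating A -> B -> A loop that persists,
    so the pair behaves as a bistable memory of the event.
    """
    a_fired, b_fired = False, False
    trace = []
    for t in range(steps):
        external = threshold + 0.5 if t == trigger_at else 0.0
        a_next = (coupling * b_fired + external) >= threshold
        b_next = (coupling * a_fired) >= threshold
        a_fired, b_fired = a_next, b_next
        trace.append((t, a_fired, b_fired))
    return trace

if __name__ == "__main__":
    for t, a, b in simulate():
        print(f"t={t:2d}  A={'*' if a else '.'}  B={'*' if b else '.'}")
```

After the single pulse at t = 5, activity alternates between A and B indefinitely, which is the reverberating, bistable behaviour described above; branching and dissolving assemblies would add further excitatory and inhibitory couplings on top of this loop.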
|
5 |
Formal verification of a synchronous data-flow compiler : from Signal to C / Vérification formelle d'un compilateur synchrone : de Signal vers C. Ngô, Van Chan, 01 July 2014 (has links)
Les langages synchrones tels que Signal, Lustre et Esterel sont dédiés à la conception de systèmes critiques. Leurs compilateurs, qui sont de très gros programmes complexes, peuvent a priori se révéler incorrects dans certaines situations, ce qui donnerait lieu alors à des résultats de compilation erronés non détectés. Ces codes fautifs peuvent invalider des propriétés de sûreté qui ont été prouvées en appliquant des méthodes formelles sur les programmes sources. En adoptant une approche de validation de la traduction, cette thèse vise à prouver formellement la correction d'un compilateur optimisé et industriel de Signal. La preuve de correction représente dans un cadre sémantique commun le programme source et le code compilé, et formalise une relation entre eux pour exprimer la préservation des sémantiques du programme source dans le code compilé. / Synchronous languages such as Signal, Lustre, and Esterel are dedicated to designing safety-critical systems. Their compilers are large, complicated programs that may be incorrect in some contexts and may silently produce bad compiled code when compiling source programs. Such faulty compiled code can invalidate safety properties that were guaranteed on the source programs by applying formal methods. Adopting the translation validation approach, this thesis aims at formally proving the correctness of the highly optimizing, industrial Signal compiler. The correctness proof represents both the source program and the compiled code in a common semantic framework, and then formalizes a relation between them expressing that the semantics of the source program are preserved in the compiled code.
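A highly reduced picture of translation validation (illustrative only; the thesis relates the Signal compiler's actual intermediate representations, and does so symbolically rather than by testing): the source semantics and the generated code are run side by side, and a preservation relation between their traces is checked after each compilation.

```python
def source_semantics(inputs):
    """Reference semantics: a running sum that is reset whenever the input is zero."""
    acc, outputs = 0, []
    for x in inputs:
        acc = 0 if x == 0 else acc + x
        outputs.append(acc)
    return outputs

def compiled_step(state, x):
    """Stand-in for the generated C code: one transition per reaction."""
    state = 0 if x == 0 else state + x
    return state, state

def validate(traces):
    """Check, trace by trace, that the compiled code preserves the source semantics."""
    for inputs in traces:
        state, produced = 0, []
        for x in inputs:
            state, out = compiled_step(state, x)
            produced.append(out)
        if produced != source_semantics(inputs):
            return False, inputs          # a counterexample trace
    return True, None

if __name__ == "__main__":
    ok, witness = validate([[1, 2, 3], [5, 0, 7, 7], []])
    print("semantics preserved on sample traces:", ok)
```

A real translation validator discharges this relation for all inputs by building a symbolic model of both programs and proving the refinement, instead of checking sample traces.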
|