About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

51

Techniques for FPGA neural modeling

Weinstein, Randall Kenneth 21 November 2006 (has links)
Neural simulations and general dynamical system modeling consistently push the limits of available computational horsepower. This is occurring for a number of reasons: 1) models are growing in complexity as our biological understanding increases, 2) high-level analysis tools, including parameter searches and sensitivity analyses, are becoming more prevalent, and 3) computational models are increasingly used alongside biological preparations in a dynamic clamp configuration. General-purpose computers, the primary target for modeling problems, are the simplest platform on which to implement models due to the rich variety of available tools. However, computers, limited by their generality, perform sub-optimally relative to custom hardware solutions. The goal of this thesis is to develop a new cost-effective and easy-to-use platform delivering orders-of-magnitude improvement in throughput over personal computers. We suggest that FPGAs, or field programmable gate arrays, provide an outlet for dramatically enhanced performance. FPGAs are high-speed, reconfigurable devices that can implement any digital logic operation using an array of parallel computing elements. Already common in fields such as signal processing, radar, medical imaging, and consumer electronics, FPGAs have yet to gain traction in neural modeling, despite their high-performance capability, due to their steep learning curve and a lack of sufficient tools. The overall objective of this work has been to overcome these shortfalls and enable adoption of FPGAs within the neural modeling community. We embarked on an incremental process to develop an FPGA-based modeling environment. We first developed a prototype multi-compartment motoneuron model using a standard digital-design methodology; at this stage, FPGAs were shown to exceed software simulations by 10x to 100x. Next, we developed canonical modeling methodologies for the manual generation of typical neural model topologies. We then developed a series of tools and techniques for analog interfacing, digital protocol processing, and real-time model tuning. This thesis culminates in the development of Dynamo, a fully automated model compiler for the direct conversion of a model description into an FPGA implementation.
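
The kind of arithmetic such an FPGA datapath performs can be given a flavor in a few lines. The sketch below is purely illustrative and is not taken from the thesis or the Dynamo compiler: it assumes a single leaky membrane compartment updated by forward Euler in 16.16 fixed-point integer math, the representation FPGA fabrics handle most cheaply, with made-up conductance and time-step constants.

```python
# Minimal sketch (not Dynamo): forward-Euler update of a leaky membrane
# equation in 16.16 fixed point, the style of integer datapath an FPGA favors.

FRAC_BITS = 16
SCALE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * SCALE))

def fixed_mul(a, b):
    # Python's >> floors, matching a two's-complement arithmetic shift.
    return (a * b) >> FRAC_BITS

def membrane_step(v, i_in, g_leak, e_leak, dt_over_c):
    # dv = (dt/C) * (g_leak * (E_leak - v) + I_in), all in 16.16 fixed point.
    dv = fixed_mul(dt_over_c, fixed_mul(g_leak, e_leak - v) + i_in)
    return v + dv

v = to_fixed(-50.0)                                   # initial potential, mV
g_leak, e_leak, dt_over_c = to_fixed(0.1), to_fixed(-65.0), to_fixed(0.5)
for _ in range(200):
    v = membrane_step(v, to_fixed(0.0), g_leak, e_leak, dt_over_c)
print(v / SCALE)  # relaxes toward the leak reversal potential, -65.0
```

Instantiating many such update units side by side, or unrolling them into pipeline stages, is the kind of parallelism behind the throughput gains the abstract reports.
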
52

A constraint-based approach to verification of programs with floating-point numbers

Acosta Zapién, Carlos Eduardo, January 2007 (has links)
Thesis (M.S.)--University of Texas at El Paso, 2007. / Title from title screen. Vita. CD-ROM. Includes bibliographical references. Also available online.
53

Techniques for FPGA neural modeling

Weinstein, Randall Kenneth. January 2006 (has links)
Thesis (Ph.D.)--Bioengineering, Georgia Institute of Technology, 2007. / Committee Chair: Lee, Robert; Committee Member: Butera, Robert; Committee Member: DeWeerth, Steve; Committee Member: Madisetti, Vijay; Committee Member: Voit, Eberhard. Part of the SMARTech Electronic Thesis and Dissertation Collection.
54

From Machine Arithmetic to Approximations and back again: Improved SMT Methods for Numeric Data Types

Zeljić, Aleksandar January 2017 (has links)
Safety-critical systems, especially those found in the avionics and automotive industries, rely on machine arithmetic to perform their tasks: integer arithmetic, fixed-point arithmetic, or floating-point arithmetic (FPA). Machine arithmetic exhibits subtle differences in behavior compared to ideal mathematical arithmetic, due to its fixed-size representation in memory. Failure of safety-critical systems is unacceptable, because the stakes are high: human lives or huge amounts of money, time, and effort. By formally proving properties of systems, we can be assured that they meet safety requirements. However, to prove such properties it is necessary to reason about machine arithmetic, and existing SMT techniques for machine arithmetic lack scalability. This thesis presents approaches that augment or complement existing SMT techniques for machine arithmetic. We explore approximations as a means of augmenting existing decision procedures. A general approximation refinement framework is presented, along with its implementation, called UppSAT. The framework solves a sequence of approximations. Initially very crude, these approximations are fairly easy to solve. The results of solving the approximate constraints are used either to reconstruct a solution of the original constraints, to obtain a proof of unsatisfiability, or to refine the approximation. The framework preserves the soundness, completeness, and termination of the underlying decision procedure, guaranteeing that eventually either a solution is found or a proof that no solution exists is obtained. We evaluate the impact of approximations implemented in the UppSAT framework on the state of the art in SMT for floating-point arithmetic. A novel method for reasoning about the theory of fixed-width bit-vectors, called mcBV, is also presented. It is an instantiation of the model-constructing satisfiability calculus, mcSAT, and uses a new lazy representation of bit-vectors that allows both bit- and word-level reasoning. It employs a greedy explanation generalization mechanism capable of more general learning than traditional approaches. Evaluation shows that mcBV can outperform bit-blasting on several classes of problems.
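
The refinement loop the abstract describes can be summarized in a short control skeleton. The sketch below is a hedged reconstruction of the general scheme, not UppSAT's actual API: `encode`, `solve`, `reconstruct`, `lift_proof`, and `refine` are hypothetical placeholders for the components a concrete instantiation would supply.

```python
def approximate_and_refine(phi, encode, solve, reconstruct, lift_proof, refine,
                           max_level):
    """Generic approximation-refinement skeleton (hypothetical interfaces)."""
    level = 0  # the crudest, easiest-to-solve approximation first
    while level <= max_level:
        result = solve(encode(phi, level))          # solve the approximation
        if result.sat:
            model = reconstruct(phi, result.model)  # try to lift the model
            if model is not None:
                return ("sat", model)               # genuine solution of phi
            level = refine(level, result.model)     # model-guided refinement
        else:
            if lift_proof(phi, result.proof):
                return ("unsat", None)              # proof carries over to phi
            level = refine(level, result.proof)     # proof-guided refinement
    return solve(phi)                               # fall back to exact solving
```

Because the loop either terminates with a lifted model, a lifted proof, or falls back to the exact problem, it inherits the soundness, completeness, and termination of the underlying decision procedure, as the abstract states.
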
55

Search strategies for solving constraint systems over floats for program verification

Zitoun, Heytem 26 October 2018 (has links)
Program verification is a major issue for critical applications such as aviation, aerospace, or embedded systems. Bounded model checking (e.g., CBMC) and constraint programming (e.g., CPBPV) approaches are based on the search for counter-examples that violate a property of the program to be verified. The search for such counter-examples can be very time-consuming and costly when the programs to be verified contain floating-point computations. This is largely because existing search strategies were designed for finite domains and, to a lesser extent, for continuous domains. In this thesis, we propose a set of search strategies dedicated to the verification of programs with floating-point computations. The proposed variable-selection and value-selection strategies are based on properties specific to floats. These properties draw on characteristics of the variable domains or on the structure of the constraints. Some of the domain properties are classic, such as size and cardinality, while others are much more specific, such as density. The notions of size and cardinality are equivalent over the integers but not over the floats; density captures a variability that is very specific to floats, half of which lie in [-1, 1]. Similarly, some of the properties concerning the structure of the constraints, such as the degree or the number of occurrences, come from finite domains, while others are much more specific, such as absorption and cancellation; these two properties capture phenomena that are generally the cause of strong deviations of a floating-point program from its interpretation over the reals, and hence of the very existence of many counter-examples. For each property, two variable-selection strategies are proposed: the first chooses the variable that minimizes the property, while the second chooses the variable that maximizes it. The value-selection strategies, for their part, try to take advantage of the absorption and cancellation phenomena. The evaluation of these strategies on a set of realistic programs is very encouraging: they are more efficient than the standard strategies.
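
Absorption and cancellation are easy to reproduce with ordinary IEEE 754 doubles; the snippet below is a small illustrative demonstration of the two phenomena these strategies target, not code from the thesis.

```python
# Absorption: adding a small float to a much larger one leaves it unchanged,
# because the small operand falls below the large one's unit in the last place.
big, small = 1e16, 1.0
print(big + small == big)   # True: `small` is absorbed

# Cancellation: subtracting nearly equal floats destroys significant digits.
a, b = 1.0 + 1e-15, 1.0
print((a - b) / 1e-15)      # ~1.11, not 1.0: the stored gap was already rounded
```
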
56

Synaptic noise based on the Poisson distribution: system design and FPGA integration

Azanzar, Youssef 22 January 2018 (has links)
Noise is generally understood as an unwanted disturbance. In the transmission of communication signals, noise is usually the largest source of interference. Noise sources occur throughout the entire transmission system: in the transmitter, in the receiver, and along the transmission path. A distinction is made between noise power generated by external and by internal noise sources. Noise can have a significant effect on the response dynamics of nonlinear systems. For neurons, the primary source of noise is synaptic background activity, referred to as synaptic noise.
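
A minimal software model of such a noise source might look as follows. This is a hedged sketch, not the hardware architecture from the thesis: it uses Knuth's sequential-multiplication Poisson sampler to drive an exponentially decaying synaptic conductance, with rate and weight constants invented for illustration.

```python
import math
import random

def poisson_sample(lam):
    """Knuth's method: number of synaptic events in one time step, mean lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Background rate 2000 events/s sampled at dt = 0.1 ms -> lam = 0.2 per step.
g_syn, weight, decay = 0.0, 0.05, 0.98
for _ in range(10):
    g_syn = decay * g_syn + weight * poisson_sample(0.2)
    print(round(g_syn, 4))  # a noisy, exponentially filtered conductance
```
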
57

Pipelining Of Double Precision Floating Point Division And Square Root Operations On Field-programmable Gate Arrays

Thakkar, Anuja 01 January 2006 (has links)
Many space applications, such as vision-based systems, synthetic aperture radar, and radar altimetry, rely increasingly on high-data-rate DSP algorithms. These algorithms use double precision floating point arithmetic operations. While most DSP applications can be executed on DSP processors, the numerical requirements of these new space applications far surpass the numerical capabilities of many current DSP processors. Since the tradition in DSP processing has been to use fixed point number representation, only recently have DSP processors begun to incorporate floating point arithmetic units, and most of these units handle only single precision floating point addition/subtraction, multiplication, and occasionally division. While DSP processors are slowly evolving to meet the numerical requirements of newer space applications, FPGA densities have rapidly increased to match and even surpass the gate densities of many DSP processors and commodity CPUs. This makes them attractive platforms on which to implement compute-intensive DSP computations. Despite this clear advantage on the side of FPGAs, few attempts have been made to examine how wide-precision floating point arithmetic, particularly division and square root operations, can perform on FPGAs to support these compute-intensive DSP applications. In this context, this thesis presents sequential and pipelined designs of IEEE-754 compliant double precision floating point division and square root operations based on low-radix digit recurrence algorithms. FPGA implementations of these algorithms have the advantage of being easily testable. In particular, the pipelined designs are synthesized based on careful partial and full unrolling of the iterations in the digit recurrence algorithms. Overall, the implementations of the sequential and pipelined designs are common-denominator implementations that do not use any performance-enhancing embedded components such as multipliers and block memory. As these implementations exploit exclusively the fine-grain reconfigurable resources of Virtex FPGAs, they are easily portable to other FPGAs with similar reconfigurable fabrics without major modifications. The pipelined designs of the two operations are evaluated in terms of area, throughput, and dynamic power consumption as a function of pipeline depth. Pipelining experiments reveal that the area overhead tends to remain constant regardless of the degree of pipelining, while throughput increases with pipeline depth. In addition, these experiments reveal that pipelining reduces power considerably in shallow pipelines; pipelining the designs further does not necessarily lead to significant power reduction. By partitioning these designs into deeper pipelines, the designs can reach throughputs close to the 100 MFLOPS mark while consuming a modest 1% to 8% of the reconfigurable fabric of a Virtex-II XC2VX000 (e.g., XC2V1000 or XC2V6000) FPGA.
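
The digit recurrence at the heart of such a divider is compact enough to sketch. The function below is a hedged, illustrative radix-2 restoring division on integer significands; it shows the per-iteration step that partial or full unrolling turns into pipeline stages, and omits the IEEE 754 exponent handling, normalization, and rounding a full operator needs.

```python
def restoring_divide(n, d, bits):
    """Radix-2 restoring recurrence: `bits` quotient bits of n/d (0 <= n < d).

    Returns (q, r) with n / d ~= q / 2**bits and remainder r.
    """
    q, r = 0, n
    for _ in range(bits):     # one iteration per quotient bit = one stage
        r <<= 1               # shift the partial remainder
        q <<= 1
        if r >= d:            # trial subtraction selects the quotient digit
            r -= d
            q |= 1
    return q, r

# 3/5 = 0.6 to 20 fractional bits:
q, _ = restoring_divide(3, 5, 20)
print(q / (1 << 20))  # 0.5999994..., i.e. 0.6 truncated to 20 bits
```

Because each iteration uses only a shift, a compare, and a subtract, the recurrence maps directly onto the fine-grain fabric the thesis targets, with no embedded multipliers or block memory required.
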
58

Episode 3.07 – Introduction to Floating Point Binary and IEEE 754 Notation

Tarnoff, David 01 January 2020 (has links)
Regardless of the numeric base, scientific notation breaks numbers into three parts: sign, mantissa, and exponent. In this episode, we discuss how the computer stores those three parts in memory, and why IEEE 754 puts them together the way it does.
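
As a companion to the episode's description, here is a short, hedged sketch of how those three fields sit inside a 64-bit IEEE 754 double; Python floats use this format, so the standard library can expose the raw bits.

```python
import struct

def ieee754_fields(x):
    """Split a Python float (an IEEE 754 double) into its three stored parts."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                       # 1 sign bit
    biased_exponent = (bits >> 52) & 0x7FF  # 11 exponent bits, bias 1023
    fraction = bits & ((1 << 52) - 1)       # 52 bits; normals add a hidden 1
    return sign, biased_exponent - 1023, fraction

print(ieee754_fields(-6.5))  # (1, 2, ...): -6.5 == -1.625 * 2**2
```
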
59

Pipelined IEEE-754 Double Precision Floating Point Arithmetic Operators on Virtex FPGA’s

Pathanjali, Nandini 22 May 2002 (has links)
No description available.
60

AN FPGA IMPLEMENTATION OF FDTD CODES FOR RECONFIGURABLE HIGH PERFORMANCE COMPUTING

GANDHI, SACHIN January 2004 (has links)
No description available.
