1

Evaluating the Robustness of Resource Allocations Obtained through Performance Modeling with Stochastic Process Algebra

Srivastava, Srishti 09 May 2015
Recent developments in the field of parallel and distributed computing have led to a proliferation of efforts to solve large and computationally intensive mathematical, scientific, and engineering problems that consist of several parallelizable parts and several non-parallelizable (sequential) parts. In a parallel and distributed computing environment, the performance goal is to optimize the execution of the parallelizable parts of an application on concurrent processors. This requires efficient application scheduling and resource allocation for mapping applications to a set of suitable parallel processors such that the overall performance goal is achieved. However, such computational environments are often prone to unpredictable variations in application (problem and algorithm) and system characteristics. Therefore, a robustness study is required to guarantee a desired level of performance. Given an initial workload, a mapping of applications to resources is considered robust if it optimizes execution performance and guarantees a desired level of performance in the presence of unpredictable perturbations at runtime. In this research, a stochastic process algebra, Performance Evaluation Process Algebra (PEPA), is used to obtain resource allocations via a numerical analysis of performance models of the parallel execution of applications on parallel computing resources. The PEPA performance model is translated into an underlying Markov chain model from which performance measures are obtained. Further, a robustness analysis of the allocation techniques is performed to find a robust mapping from a set of initial mapping schemes. The numerical analysis of the performance models has confirmed agreement with the simulation results of earlier research available in the existing literature. Compared to direct experiments and simulations, numerical models and the corresponding analyses are easier to reproduce, do not incur setup or installation costs, do not impose any prerequisites for learning a simulation framework, and are not limited by the complexity of the underlying infrastructure or simulation libraries.
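
The numerical analysis described above ultimately reduces to solving the underlying continuous-time Markov chain (CTMC) for its steady-state distribution. As a minimal sketch of that step (using a hypothetical three-state generator matrix, not any model from the thesis), the following Python fragment computes the stationary distribution and a derived performance measure:

    import numpy as np

    # Hypothetical 3-state CTMC for a task that is queued, running, or
    # recycled. Q[i, j] (i != j) is the transition rate i -> j; diagonal
    # entries make each row sum to zero, as a CTMC generator requires.
    Q = np.array([
        [-2.0,  2.0,  0.0],   # queued  -> running at rate 2.0
        [ 0.0, -1.5,  1.5],   # running -> done    at rate 1.5
        [ 3.0,  0.0, -3.0],   # done    -> queued  at rate 3.0
    ])

    # The steady state pi satisfies pi @ Q = 0 with sum(pi) == 1, so one
    # (redundant) balance equation is swapped for the normalization row.
    A = np.vstack([Q.T[:-1], np.ones(len(Q))])
    b = np.zeros(len(Q)); b[-1] = 1.0
    pi = np.linalg.solve(A, b)

    # Performance measures follow from pi, e.g. processor utilization is
    # the steady-state probability of the "running" state.
    print("steady state:", pi, "utilization:", pi[1])

Robustness of a candidate mapping can then be probed by perturbing the rates in Q and re-solving for pi.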
2

A Comparative Study on Methods for Stochastic Number Generation

Shenoi, Sangeetha Chandra January 2017
No description available.
3

Energy-Efficient Detection of Atrial Fibrillation in the Context of Resource-Restrained Devices

Kheffache, Mansour January 2019
eHealth is an emerging practice at the intersection of ICT and healthcare, in which computing and communication technology is used to improve traditional healthcare processes or to create new opportunities for better health services; it can be considered to fall under the umbrella of the Internet of Things. A common practice in eHealth is the use of machine learning for computer-aided diagnosis, where an algorithm is fed a biomedical signal and provides a diagnosis in the same way a trained radiologist would. This work considers the task of atrial fibrillation detection and proposes a range of novel algorithms designed for energy efficiency. Based on our working hypothesis that computationally simple operations and low-precision data types are the key to energy efficiency, we evaluate various algorithms in the context of resource-constrained health-monitoring wearable devices. Finally, we assess the sustainability dimension of the proposed solution.
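
The abstract does not spell out the proposed algorithms, but its working hypothesis, that simple operations on low-precision data types enable energy efficiency, can be illustrated with a deliberately crude sketch: an integer-only detector that flags the irregular RR intervals characteristic of atrial fibrillation. The function name, window format, and threshold are hypothetical:

    # Illustrative only: this is not an algorithm from the thesis, just a
    # demonstration of the hypothesis that integer arithmetic and simple
    # operations suffice for a useful screening signal.

    def af_suspected(rr_ms, threshold_pct=15):
        """Flag a window of RR intervals (milliseconds, ints) as AF-like
        when the mean absolute successive difference exceeds a fixed
        percentage of the mean interval. Integer arithmetic throughout."""
        diffs = [abs(a - b) for a, b in zip(rr_ms, rr_ms[1:])]
        mean_diff = sum(diffs) // len(diffs)      # integer mean
        mean_rr = sum(rr_ms) // len(rr_ms)
        return 100 * mean_diff > threshold_pct * mean_rr

    # Regular rhythm vs. the erratic intervals typical of AF.
    print(af_suspected([800, 810, 795, 805, 800, 790]))   # False
    print(af_suspected([620, 910, 540, 1010, 700, 480]))  # True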
4

Modèles cellulaires de champs neuronaux dynamiques / Cellular model of dynamic neural fields

Chappet de Vangel, Benoît 14 November 2016
In the constant search for designs that go beyond the limits of the von Neumann architecture, non-conventional computing offers a variety of alternatives, such as neuromorphic engineering and cellular computing. Like von Neumann, who originally drew on the brain to design computer architecture, neuromorphic engineering takes its inspiration directly from neurons and synapses, using an analog substrate closer to both. Cellular computing draws on natural computing substrates (chemical, physical, or biological), which impose a locality of interaction from which organization and computation emerge. Research on neural mechanisms has demonstrated several emergent computational properties of neurons and synapses. One of them is the attractor dynamics first described by Amari (dynamic neural fields, or DNFs) and by Amit and Zhang (continuous attractor neural networks). These neural fields have varied computational properties and are particularly well suited to spatial representations and to the functions of the early stages of the visual cortex. They have been used, among other things, in autonomous robotics and in classification and clustering tasks. Like many neural computing models, they are robust to noise and faults, and are thus good candidates for noisy hardware computation models that could keep up with, or go beyond, Moore's law: shrinking transistors introduce more and more noise, and relaxing the requirement of approximately 0% faults during the production and operation of integrated circuits would yield enormous savings. Furthermore, the current evolution towards increasingly distributed many-core circuits raises difficulties tied to the still-centralized computation mode of most parallel algorithmic models, as well as to their communication bottleneck. Cellular computing is a natural answer to these challenges. Based on these observations, the goal of this thesis is to enable the rich computations and applications of dynamic neural fields on hardware substrates through neuro-cellular models that ensure true locality, decentralization, and scalability of the computations. This thesis is thus a reasoned proposal for going beyond von Neumann architectures by using neural and cellular computing principles. We nevertheless remain in the digital domain, exploring the performance of the proposed architectures on FPGAs; analog (VLSI) circuits would be equally interesting but are not studied here. The main contributions are: 1) DNF computation in a neuromorphic environment; 2) DNF computation with purely local communication: the RSDNF model (randomly spiking DNF); 3) DNF computation with purely local and asynchronous communication: the CASAS-DNF model (cellular array of stochastic asynchronous spiking DNFs).
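
The RSDNF and CASAS-DNF models are hardware realizations, with local spiking communication, of Amari's field equation; the sketch below shows only that underlying equation in discretized 1-D form, with illustrative parameter values, so that what the cellular models approximate is visible:

    import numpy as np

    # Discretized 1-D Amari dynamic neural field:
    #   tau * du/dt = -u + w * f(u) + input
    # with a difference-of-Gaussians kernel w (local excitation, broader
    # inhibition). All parameter values here are illustrative.

    n, tau, dt = 100, 10.0, 1.0
    x = np.arange(n)
    d = np.minimum(np.abs(x[:, None] - x[None, :]),
                   n - np.abs(x[:, None] - x[None, :]))   # circular distance
    w = 2.0 * np.exp(-d**2 / (2 * 3.0**2)) - 0.9 * np.exp(-d**2 / (2 * 10.0**2))

    u = np.zeros(n)
    stimulus = np.exp(-(x - 50)**2 / (2 * 4.0**2))        # localized input bump

    f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))  # firing-rate nonlinearity
    for _ in range(200):
        u += (dt / tau) * (-u + w @ f(u) + stimulus)

    # A self-sustained bump of activity forms around the stimulus.
    print("peak at index", int(np.argmax(u)))

The cellular models replace the dense matrix-vector product w @ f(u) with purely local, stochastic spike propagation between neighbors, which is what makes them suitable for FPGA substrates.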
5

Implementación en hardware de sistemas de alta fiabilidad basados en metodologías estocásticas / Hardware implementation of high-reliability systems based on stochastic methodologies

Canals Guinand, Vicente José 27 July 2012
Today's society increasingly demands computationally intensive applications implemented in an energy-efficient way, which forces the semiconductor industry to maintain the steady progression of CMOS technology. However, experts predict that the era of CMOS scaling is nearing its end: around 2020, CMOS technology is expected to reach the point known as the "Red Brick Wall", at which physical, technological, and economic limitations will make it unviable to continue down this path. Over the last decade this has motivated both public and private institutions to invest in alternative technological solutions, such as nanotechnology (nanotubes, nanowires, graphene-based technologies, etc.). In this thesis we propose an alternative way to address some computationally demanding problems: using current CMOS technology, but replacing the classical von Neumann model of computation with unconventional forms of computing. This is the case of computing based on pulsed logic and, in particular, stochastic computing, which increases both the reliability and the parallelism of digital systems. This thesis presents the development and evaluation of a full set of stochastic computing blocks implemented with classical digital elements. Building on these blocks, we propose several computationally efficient methodologies that allow some massive computing problems to be tackled far more efficiently, with particular focus on problems in the field of pattern recognition.
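
The thesis's specific stochastic blocks are not reproduced in the abstract, but the foundational primitive of unipolar stochastic computing is standard: encode each value in [0, 1] as the probability of a 1 in a bitstream, and a single AND gate becomes a multiplier:

    import random

    # Unipolar stochastic computing: a value p in [0, 1] is encoded as a
    # bit stream with P(bit = 1) = p. A single AND gate then multiplies
    # two independent streams -- the kind of ultra-cheap, fault-tolerant
    # block this line of work builds on.

    def encode(p, n):
        return [random.random() < p for _ in range(n)]

    n = 100_000
    a, b = encode(0.8, n), encode(0.5, n)
    product = [x and y for x, y in zip(a, b)]      # AND gate = multiplier
    estimate = sum(product) / n
    print(f"0.8 * 0.5 ~= {estimate:.3f}")          # close to 0.40

Because the value lives in the statistics of the stream rather than in any single bit, flipping a few bits barely perturbs the result, which is the source of the reliability gains the abstract mentions.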
6

Analyse de fiabilité de circuits logiques et de mémoire basés sur dispositif spintronique / Reliability analysis of spintronic device based logic and memory circuits

Wang, You 13 February 2017
The spin-transfer-torque magnetic tunnel junction (STT-MTJ) is considered a promising candidate for the next generation of non-volatile memories and logic circuits, because it offers a way to overcome the bottleneck of increasing static power caused by CMOS technology scaling. However, its commercialization is limited by poor reliability, which deteriorates severely as the device scales down. This thesis focuses on the reliability of MTJ-based non-volatile circuits. First, a compact model of the MTJ incorporating the main reliability issues is proposed and validated against experimental data. Based on this accurate model, the reliability of typical circuits is analyzed and a reliability-optimization methodology is proposed. Finally, the stochastic switching behavior is exploited in several new designs for conventional applications.
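
The stochastic switching behavior mentioned above is commonly described, in the thermally activated regime, by a Neel-Brown-type model; the sketch below uses that standard formulation with illustrative parameter values, not the thesis's validated compact model:

    import math

    # Standard Neel-Brown model for STT-MTJ switching in the thermally
    # activated regime (drive current below the critical current): the
    # mean switching time grows exponentially with the thermal stability
    # factor, and switching within a pulse is a Poisson event. Parameter
    # values here are illustrative, not fitted to experimental data.

    TAU0 = 1e-9   # attempt time, ~1 ns
    DELTA = 60.0  # thermal stability factor E/kT

    def p_switch(t_pulse, i_ratio):
        """Probability that the MTJ switches within t_pulse seconds at a
        drive current of i_ratio = I / Ic0 (valid for i_ratio < 1)."""
        tau = TAU0 * math.exp(DELTA * (1.0 - i_ratio))
        return 1.0 - math.exp(-t_pulse / tau)

    for i_ratio in (0.7, 0.8, 0.9):
        print(f"I/Ic0 = {i_ratio}: P(switch in 10 ns) = "
              f"{p_switch(10e-9, i_ratio):.3e}")

The same randomness that makes write-error rates a reliability problem is what the new designs at the end of the thesis turn into a feature, for example as a source of stochastic bits.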
