1

IC design for reliability

Zhang, Bin, 23 October 2009
As the feature size of integrated circuits shrinks to the nanometer scale, transient and permanent reliability issues are becoming a significant concern for circuit designers. Traditionally, reliability issues were mostly handled at the device level as a device engineering problem. However, the increasing severity of reliability challenges and higher error rates due to transient upsets favor higher-level design for reliability (DFR). In this work, we develop several methods for DFR at the circuit level.

A major source of transient errors is the single event upset (SEU). SEUs are caused by high-energy particles present in cosmic rays or emitted by radioactive contaminants in the chip packaging materials. When these particles strike an N+/P+ depletion region of a MOS transistor, they may generate a temporary logic fault. Depending on where the MOS transistor is located and what state the circuit is in, an SEU may result in a circuit-level error. We analyze SEUs both in combinational logic and in memories (SRAM). For combinational logic circuits, we propose FASER, a Fast Analysis tool of Soft ERror susceptibility for cell-based designs. The efficiency of FASER comes from its static, vector-less nature. To evaluate the impact of SEUs on SRAM, we develop an analytical theory for estimating dynamic noise margins. The results allow the transient error susceptibility of an SRAM cell to be predicted using a closed-form expression.

Among the many permanent failure mechanisms, which include time-dependent dielectric breakdown (TDDB), electromigration (EM), the hot carrier effect (HCE), and negative bias temperature instability (NBTI), NBTI has recently become important. Therefore, the main focus of our work is NBTI. NBTI occurs when the gate of a PMOS transistor is negatively biased. The voltage stress across the gate generates interface traps, which degrade the threshold voltage of the PMOS transistor.
The degraded PMOS transistor may eventually fail to meet timing requirements and cause functional errors. NBTI becomes severe at elevated temperatures. In this dissertation, we propose an NBTI degradation model that takes into account temperature variation across the chip and gives an accurate estimate of the degraded threshold voltage.

To account for device degradation, traditional design methods add guard-bands to ensure that the circuit will function properly over its lifetime. However, worst-case guard-bands lead to a significant performance penalty. In this dissertation, we propose an effective macromodel-based reliability tracking and management framework, built on a hybrid network of on-chip sensors consisting of temperature sensors and ring oscillators. The model is concerned specifically with NBTI-induced transistor aging. The key feature of our work, in contrast to traditional tracking techniques that rely solely on direct measurement of the increase in threshold voltage or circuit delay, is an explicit macromodel that maps operating temperature to circuit degradation (the increase in circuit delay). The macromodel allows cost-effective tracking of reliability using temperature sensors and is also essential for enabling the control loop of the reliability management system. The developed methods reduce the over-conservatism of device-level, worst-case reliability estimation techniques. As the severity of reliability challenges continues to grow with technology scaling, it will become more important for circuit designers and CAD tools to be equipped with the developed methods.
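The temperature-to-degradation macromodel described above can be illustrated with a minimal sketch. NBTI threshold-voltage shift is commonly modeled as a power law in stress time with Arrhenius temperature acceleration; the constants, function names, and the linear delay-sensitivity assumption below are illustrative, not the dissertation's fitted model.

```python
import math

K_BOLTZMANN = 8.617e-5   # Boltzmann constant, eV/K
A = 0.006                # V   (illustrative prefactor)
EA = 0.12                # eV  (illustrative activation energy)
N = 1.0 / 6.0            # typical reaction-diffusion time exponent

def delta_vth(t_seconds: float, temp_kelvin: float) -> float:
    """NBTI threshold-voltage shift (V) after t seconds of stress at a
    given temperature: delta_Vth = A * exp(-Ea/kT) * t**n."""
    return A * math.exp(-EA / (K_BOLTZMANN * temp_kelvin)) * t_seconds ** N

def delay_degradation(t_seconds: float, temp_kelvin: float,
                      sensitivity: float = 0.5) -> float:
    """Macromodel-style mapping from on-chip temperature to fractional
    path-delay increase, assuming delay grows linearly with delta_Vth."""
    return sensitivity * delta_vth(t_seconds, temp_kelvin)

YEAR = 365 * 24 * 3600
# Hotter chips age faster: 3 years at 380 K degrades more than at 300 K.
assert delay_degradation(3 * YEAR, 380.0) > delay_degradation(3 * YEAR, 300.0)
```

Feeding temperature-sensor readings through such a mapping is what lets the framework track aging without directly measuring threshold voltage or delay.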
2

Improving message logging protocols towards extreme-scale HPC systems

Martsinkevich, Tatiana V., 22 September 2015
Existing petascale machines have a Mean Time Between Failures (MTBF) on the order of several hours, and the MTBF is predicted to decrease in future systems. Applications that run on these systems therefore need to tolerate frequent failures. Currently, the most common way to do this is the global checkpoint/restart scheme: if any process fails, the whole application rolls back to its last checkpointed state and re-executes from that point. This solution will become infeasible at large scale due to its energy costs and inefficient resource usage, so fine-grained failure containment is a strongly required feature of fault tolerance techniques targeting large-scale executions.

In the context of message-passing MPI applications, message logging fault tolerance protocols provide good failure containment because they require the restart of only one process or, in some cases, a bounded number of processes. However, existing logging protocols suffer from several issues that prevent their use at large scale. In particular, they tend to have high failure-free overhead, because they usually need to reliably store every nondeterministic event occurring during the execution of a process in order to correctly restore its state during recovery. Moreover, since message logs are usually kept in volatile memory, logging can incur a large memory footprint, especially in communication-intensive applications. This is particularly important because future exascale systems are expected to have less memory available per core. Another important trend in HPC is the switch from MPI-only applications to hybrid programming models such as MPI+threads and MPI+tasks, in response to the increasing number of cores per node. This creates opportunities for fault tolerance solutions that handle faults at the level of threads or tasks.
Such an approach offers even better failure containment than message logging protocols that handle failures at the process level. The work in this dissertation thus consists of three parts. First, we present a hierarchical log-based fault tolerance solution, called Scalable Pattern-Based Checkpointing (SPBC), for mitigating process fail-stop failures. The protocol leverages a new deterministic model called channel-determinism and a new always-happens-before relation for partially ordering events in the application. The protocol is scalable, has low failure-free overhead, does not require logging any events, provides perfect failure containment, and has a fully distributed recovery. Second, to address the memory limitation problem on compute nodes, we propose using additional dedicated resources, called logger nodes. All logs that do not fit in the memory of compute nodes are sent to the logger nodes and kept in their memory. In a series of experiments we show that this approach is not only feasible but, combined with a hierarchical logging scheme like SPBC, logger nodes can be an ultimate solution to the memory limitation problem of logging protocols. Third, we present a log-based fault tolerance protocol for hybrid applications adopting the MPI+tasks programming model. The protocol is used to tolerate detected uncorrected errors (DUEs) that occur during the execution of a task. Normally, a DUE causes the system to raise an exception, which leads to an application crash; the application then has to restart from a checkpoint. In the proposed solution, we combine task checkpointing with message logging in order to support task re-execution. Such task-level failure containment can be beneficial in large-scale executions because it avoids the more expensive process-level restart. We demonstrate the advantages of this protocol using the example of hybrid MPI+OmpSs applications.
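The logger-node idea above can be sketched in a few lines: each sender keeps copies of the payloads it sent so that a restarted receiver can be replayed without rolling back the sender, and entries that exceed a local memory budget are spilled to a (simulated) logger node. The class name, budget policy, and API below are illustrative assumptions, not the dissertation's actual implementation.

```python
from collections import deque

class SenderLog:
    """Toy sender-based message log with spill-over to a logger node.

    Entries are kept in local memory up to `local_budget`; older entries
    are shipped to the logger node's memory, mirroring the dedicated
    logger-node approach for the compute-node memory limitation problem.
    """

    def __init__(self, local_budget: int):
        self.local_budget = local_budget
        self.local = deque()        # (dest, seq, payload) kept in RAM
        self.logger_node = []       # entries spilled to the remote logger

    def record(self, dest: int, seq: int, payload: bytes) -> None:
        self.local.append((dest, seq, payload))
        while len(self.local) > self.local_budget:
            # Spill the oldest entry to the logger node's memory.
            self.logger_node.append(self.local.popleft())

    def replay_for(self, dest: int):
        """Messages to re-deliver to a restarted process, in send order."""
        merged = self.logger_node + list(self.local)
        return [(seq, p) for (d, seq, p) in merged if d == dest]

log = SenderLog(local_budget=2)
for seq in range(4):
    log.record(dest=7, seq=seq, payload=f"msg{seq}".encode())
# Two oldest entries were spilled, but replay still sees all four in order.
assert [seq for seq, _ in log.replay_for(7)] == [0, 1, 2, 3]
```

Only the failed process is re-driven from the replayed log, which is precisely the failure-containment advantage over global rollback that the abstract emphasizes.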
