11 |
Error Detection in Number-Theoretic and Algebraic Algorithms. Vasiga, Troy Michael John, January 2008.
CPUs are unreliable: at any point in a computation, a bit may be altered with some small probability. This probability may seem negligible, but for large computations (e.g., months of CPU time) the likelihood of an error being introduced becomes increasingly significant. Motivated by this fact, this thesis defines a statistical measure called robustness and measures the robustness of several number-theoretic and algebraic algorithms.
Consider an algorithm A that implements a function f, such that f has range O and A has range O' where O ⊆ O'. That is, the algorithm may produce results outside the possible range of the function. Specifically, given an algorithm A and a function f, this thesis classifies the output of A into one of three categories (a toy sketch follows the list):
1. Correct and feasible -- the algorithm computes the correct result,
2. Incorrect and feasible -- the algorithm computes an incorrect result and this output is in O,
3. Incorrect and infeasible -- the algorithm computes an incorrect result and the output is in O'\O.
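As a toy illustration of this classification (our sketch, not code from the thesis), assuming the correct value f(x) and the feasible set O are known:

    def classify(output, correct, feasible):
        """Place an algorithm's output into one of the three categories above."""
        if output == correct:
            return "correct and feasible"      # f(x) always lies in O
        if output in feasible:
            return "incorrect and feasible"    # wrong, but still inside O
        return "incorrect and infeasible"      # outside O, so detectably wrong

    # Hypothetical example: a group-order algorithm returns 7 for a group of
    # order 6, where the feasible outputs are the divisors of a known multiple
    # of the order, say 12.
    print(classify(7, 6, {1, 2, 3, 4, 6, 12}))   # incorrect and infeasible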
Using probabilistic measures, we apply this classification scheme to quantify the robustness of algorithms for primality testing (namely, the Lucas-Lehmer and Pépin tests), group-order computation, and quadratic residues.
Moreover, we show that there will typically be an "error threshold" above which the algorithm is unreliable (that is, it will rarely give the correct result).
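For concreteness, here is a minimal Python sketch (ours, not the thesis's implementation) of the Lucas-Lehmer test for the Mersenne number M_p = 2^p - 1, one of the primality algorithms analyzed; a bit flip in the running residue s during the loop is exactly the kind of fault the robustness measure quantifies:

    def lucas_lehmer(p):
        """Return True iff M_p = 2**p - 1 is prime, for an odd prime p."""
        m = (1 << p) - 1              # the Mersenne number M_p
        s = 4                         # standard Lucas-Lehmer seed
        for _ in range(p - 2):        # p - 2 modular squaring steps
            s = (s * s - 2) % m
        return s == 0                 # M_p is prime iff the final residue is 0

    # M_7 = 127 is prime; M_11 = 2047 = 23 * 89 is not.
    print(lucas_lehmer(7), lucas_lehmer(11))   # True False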
|
13 |
A Low-power Convolutional Decoder with Error Detection Ability. Yeh, Wei-ting, 03 August 2010.
In wireless communication systems we encounter many problems, one of the main ones being noise interference. To overcome it, the sender can encode the data with a convolutional code, and the receiver can use the Viterbi algorithm for decoding and error correction. Because of the high complexity of the Viterbi algorithm, the VLSI implementation of a Viterbi decoder consumes a large amount of power, giving portable devices short standby times and high operating temperatures. To address these problems, a low-power decoder is needed.
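For background, a short Python sketch of a textbook rate-1/2 convolutional encoder (constraint length 3, generator polynomials 7 and 5 in octal); the abstract does not state the code parameters used in the thesis, so these values are illustrative assumptions:

    def conv_encode(bits, g1=0b111, g2=0b101, k=3):
        """Rate-1/2 convolutional encoder with generator taps g1, g2 (k bits)."""
        state = 0                                     # last k-1 input bits
        out = []
        for b in bits:
            reg = (b << (k - 1)) | state              # newest bit joins the register
            out.append(bin(reg & g1).count("1") % 2)  # parity of taps for output 1
            out.append(bin(reg & g2).count("1") % 2)  # parity of taps for output 2
            state = reg >> 1                          # shift, dropping the oldest bit
        return out

    print(conv_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]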
In fact, the Viterbi decoder can be shut down when no noise interference is present. We therefore use a detection circuit to determine whether the signal has been affected by noise. If the signal is interfered with, the Viterbi decoder performs the decoding; otherwise, a low-cost decoder is used to reduce the power consumed at the receiver.
In addition, dynamic adjustment of the SMU module is developed and implemented in the proposed decoder. The SMU consumes the most power in a Viterbi decoder, so our goal is to reduce its usage. If the noise distribution is sparse, high decoding ability is not needed for a given section of data, and the number of registers used in the SMU can be decreased. A clock gating technique is adopted in this thesis to shut down these idle registers and reduce the SMU's power consumption.
The proposed decoder has been implemented and synthesized using the Artisan TSMC 0.13 µm standard cell library. Compared with a traditional Viterbi decoder, the proposed decoder achieves 25% and nearly 60% power savings at SNRs of 1 dB and 8 dB respectively, together with a 6% area reduction. These experimental results show that the proposed decoder effectively reduces power consumption.
|
14 |
Low-Power Adaptive Viterbi Decoder with Section Error Identification. Li, Shih-Jie, 28 July 2011.
In wireless communication systems, convolutional coding is often used to encode data. For decoding a convolutional code (CC), the Viterbi algorithm is considered the best mechanism, and the Viterbi decoder (VD) was developed to execute the algorithm efficiently on mobile devices. This decoder is widely used in 2G and 3G mobile phones; on 2G phones, however, the VD accounts for about one third of the total power consumption of the signal receiver. It is therefore important to reduce the power consumption of the VD on 2G and 3G phones.
The VD uses a large number of registers in the survivor metric unit (SMU) so that the decoder can receive enough of the CC and converge automatically. The goal of this thesis is to decrease the power consumption of the SMU by using a path metric compare unit (PMCU) to find the best state of the path metric unit (PMU). This halves the number of registers and multiplexers required in the SMU, leading to a significant area reduction in the decoder. During signal transmission in wireless communication, sources such as atmospheric effects, cosmic radiation, and man-made noise interfere with the signal to varying degrees; the stronger the noise, the more interference the CC suffers.
An error detection circuit marks the sections with noise interference before the CC enters the VD. If a section is interfered with, it is decoded by the full VD; otherwise it is decoded by the low-power decoder, where the controller starts a clock gating mechanism on the SMU to shut down the blocks that would otherwise consume power unnecessarily. A sketch of this routing policy follows.
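A hedged Python sketch of the section-routing policy just described; only the selection logic is shown, and viterbi_decode and low_power_decode are hypothetical stand-ins for the hardware blocks, which the abstract does not specify at this level:

    def decode_stream(sections, noise_flags, viterbi_decode, low_power_decode):
        """Route each flagged section to the full Viterbi decoder and every
        clean section to the low-cost decoder, mirroring the policy above."""
        decoded = []
        for section, noisy in zip(sections, noise_flags):
            decoder = viterbi_decode if noisy else low_power_decode
            decoded.extend(decoder(section))
        return decoded

    # Example with stub decoders standing in for the hardware blocks:
    vd = lambda s: [f"VD({x})" for x in s]
    lp = lambda s: [f"LP({x})" for x in s]
    print(decode_stream([[1, 0], [1, 1]], [True, False], vd, lp))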
The power consumption of the proposed adaptive Viterbi decoder varies with the degree of interference: when interference is high, power consumption is 21% lower than that of a conventional VD; when interference is low, it is 44% lower. The results show that the proposed method effectively reduces the power consumption of the VD.
|
15 |
Error Detection and Correction for H.264/AVC Using Hybrid Watermarking. You, Yuan-syun, 19 July 2007.
No abstract available.
|
16 |
Are There Age Differences in Shallow Processing of Text? Burton, Christine Millicent, 06 December 2012.
There is growing evidence that young adult readers frequently fail to create exhaustive textbase representations as they read. Although a significant amount of research has been devoted to age-related effects on text processing, there has been little research concerning this so-called shallow processing by older readers. This dissertation uses eye tracking to explore age-related effects in shallow processing across different levels of text representation. Experiment 1 investigated shallow processing by older readers at the textbase level by inserting semantic anomalies into passages read by participants. Older readers frequently failed to report the anomalies, but no more frequently than did younger readers. The eye-fixation behaviour revealed that older readers detected some of the anomalies sooner than did younger readers, but had to allocate disproportionately more processing resources to looking back at the anomalies to achieve detection success comparable to that of their younger counterparts. Experiment 2 examined age-related effects of shallow processing at the surface form by inserting syntactic anomalies into passages read by older and younger adults. Older readers were less likely than younger readers to detect syntactic anomalies when first encountering them and engaged in more regressive fixations to the anomalies. However, older readers with high reading comprehension skill were able to use their familiarity with the text content to increase their likelihood of detecting syntactic anomalies. Experiment 3 investigated the role of aging in shallow processing of the temporal dimension of the situation model. No age-related differences in reporting the anomalies were found. The eye-fixation behaviour revealed that older readers with high working memory capacity detected some anomalies sooner than did younger readers; however, they had to allocate increased processing resources to looking back at the anomalies to achieve detection levels comparable to those of younger readers. Together, the results demonstrate that older readers are susceptible to shallow processing, but no more so than younger readers when they can rely on their linguistic skill or existing knowledge to reduce processing demands. However, older readers appear to require additional processing time to achieve anomaly-detection levels comparable to those of younger readers.
|
18 |
Fine-grained error detection techniques for fast repair of FPGAs. Nazar, Gabriel Luca, January 2013.
Field Programmable Gate Arrays (FPGAs) are reconfigurable hardware components that have found great commercial success over the past years in a wide variety of application niches. High processing throughput, flexibility, and reduced design time are among the main assets of such devices and are essential to their commercial success. These features are also valuable for critical systems, which often face stringent performance constraints. Furthermore, the possibility of post-deployment reprogramming is relevant, as it allows adding new functionality or correcting design mistakes, extending the system's lifetime. Such devices, however, rely on large memories to store the configuration bitstream that defines the current FPGA function. Faults affecting this configuration can therefore cause functional failures, posing a major dependability threat. The most traditional means of removing such errors, configuration scrubbing, consists of periodically overwriting the memory with its desired contents. However, due to the memory's significant size and limited access bandwidth, scrubbing suffers from a long mean time to repair, which increases as FPGAs get larger and more complex with each generation. Reconfigurable partitions are useful for reducing this time, as they allow a local repair procedure to be performed on the affected partition. For that purpose, fast error detection mechanisms are required to quickly trigger this localized scrubbing and reduce error latency. Moreover, precise diagnosis is necessary to identify the error location within the configuration addressing space. Fine-grained redundancy techniques have the potential to provide both, but they usually introduce significant costs due to the need for numerous redundancy checkers. In this work we propose a fine-grained error detection technique that makes use of abundant and underused resources found in state-of-the-art FPGAs, namely the carry propagation chains. The technique thereby provides the main benefits of fine-grained redundancy while minimizing its main drawback, and very significant reductions in error latency are attainable with the proposed approach. A heuristic mechanism for exploiting the diagnosis provided by techniques of this nature is also proposed. This mechanism aims to identify the most likely error locations in the configuration memory, based on the fine-grained diagnosis, and to use this information to minimize the repair time of scrubbing.
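As a rough illustration of the repair model (ours, not the thesis's code, under simplifying assumptions: the configuration memory is modeled as a flat list of frames and a golden copy is available), contrasting full scrubbing with the localized scrubbing that fine-grained detection enables:

    def full_scrub(config, golden):
        """Blind scrubbing: rewrite every frame; repair time scales with the
        whole configuration memory."""
        for addr in range(len(golden)):
            config[addr] = golden[addr]

    def localized_scrub(config, golden, flagged):
        """Localized scrubbing: rewrite only the frames of the partition
        flagged by fine-grained detection; repair time scales with the
        partition size."""
        for addr in flagged:
            config[addr] = golden[addr]

    # A single upset in frame 7 is repaired by touching one frame.
    golden = [0xAA] * 16
    config = list(golden)
    config[7] ^= 0x10                  # injected configuration bit flip
    localized_scrub(config, golden, flagged=[7])
    assert config == golden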
|