21. Approximate Diagonalization of Homomorphisms (Ro, Min; 18 August 2015)
In this dissertation, we explore the approximate diagonalization of unital homomorphisms between C*-algebras. In particular, we prove that unital homomorphisms from commutative C*-algebras into simple separable unital C*-algebras with tracial rank at most one are approximately diagonalizable. This is equivalent to the approximate diagonalization of commuting sets of normal matrices.
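In the matrix-algebra case, the statement can be rendered as follows; this is an informal sketch of the standard formulation, with the precise hypotheses as in the abstract above:

```latex
% Informal sketch: a unital homomorphism
% \varphi\colon C(X) \to M_n(\mathbb{C}) is approximately
% diagonalizable if for every finite F \subset C(X) and every
% \varepsilon > 0 there exist a unitary u \in M_n(\mathbb{C})
% and points x_1,\dots,x_n \in X such that
\[
  \bigl\| \, u^{*}\varphi(f)\,u \;-\;
  \mathrm{diag}\bigl(f(x_1),\dots,f(x_n)\bigr) \bigr\|
  \;<\; \varepsilon
  \qquad \text{for all } f \in F .
\]
% Since C(X) is generated by finitely many commuting normal
% elements for compact X \subset \mathbb{C}^k, this amounts to
% the simultaneous approximate diagonalization of a commuting
% set of normal matrices by a single unitary.
```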
We also prove limited generalizations of this theorem. Namely, certain injective unital homomorphisms from commutative C*-algebras into simple separable unital C*-algebras with rational tracial rank at most one are shown to be approximately diagonalizable, as are unital injective homomorphisms from AH-algebras with a unique tracial state into separable simple unital C*-algebras of tracial rank at most one. Counterexamples are provided showing that these results cannot be extended in general.
Finally, we prove that for unital homomorphisms between AF-algebras, approximate diagonalization is equivalent to a combinatorial problem involving sections of lattice points in cones.
22. Image Processing using Approximate Data-path Units (January 2013)
In this work, we present approximate adders and multipliers that reduce the data-path complexity of specialized hardware for various image processing systems. These approximate circuits have lower area, latency, and power consumption than their accurate counterparts while producing fairly accurate results. We build upon the approximate adders and multipliers presented in [23] and [24].

First, we show how the choice of algorithm and parallel adder design can be used to implement the 2D Discrete Cosine Transform (DCT) with good performance at low area. Our 2D DCT implementation has PSNR comparable to the algorithm presented in [23] with roughly 35-50% less area.

Next, we use the approximate 2x2 multiplier presented in [24] to build parallel approximate multipliers. We demonstrate that making some of the 2x2 multipliers in the parallel design accurate improves the multiplier's accuracy significantly, especially when two large numbers are multiplied. We choose the Gaussian FIR filter and Fast Fourier Transform (FFT) algorithms to illustrate the efficacy of the proposed approximate multiplier, and show that it improves the PSNR of a 32x32 FFT implementation by 4.7 dB over an implementation using the approximate multiplier of [24].

We also implement a state-of-the-art image enlargement algorithm, Segment Adaptive Gradient Angle (SAGA) [29], in hardware. The algorithm is mapped to pipelined hardware blocks, and the design is synthesized in 90 nm technology. A 64x64 image can be processed in 496.48 µs at 100 MHz. Evaluated against the original image, the average PSNR of our implementation is 31.33 dB with accurate parallel adders and multipliers and 30.86 dB with approximate ones; both are comparable to a double-precision floating-point MATLAB implementation of the algorithm. (M.S. Computer Science, 2013)
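The 2x2 building-block approach can be illustrated with a short sketch. The approximate 2x2 multiplier below follows a well-known design in which 3 x 3 is approximated as 7 (saving one output bit); whether this matches the exact circuit of [24] is an assumption, so treat it as illustrative only.

```python
# Sketch: composing a 4x4-bit multiplier from 2x2-bit blocks.
# The approximate 2x2 block returns 7 for 3*3 (instead of 9);
# all other products are exact. Illustrative assumption, not
# necessarily the exact circuit of [24].

def mul2x2_exact(a, b):
    return a * b  # a, b in 0..3

def mul2x2_approx(a, b):
    return 7 if (a == 3 and b == 3) else a * b

def mul4x4(a, b, blocks):
    """Multiply 4-bit a, b by splitting each into high/low 2-bit
    halves and summing four partial products, each computed by
    one of the supplied 2x2 blocks (LL, LH, HL, HH)."""
    al, ah = a & 0b11, (a >> 2) & 0b11
    bl, bh = b & 0b11, (b >> 2) & 0b11
    ll, lh, hl, hh = blocks
    return (ll(al, bl)
            + (lh(al, bh) << 2)
            + (hl(ah, bl) << 2)
            + (hh(ah, bh) << 4))

# Keep the high-order block accurate, as the abstract suggests,
# to limit error when two large numbers are multiplied.
blocks = (mul2x2_approx, mul2x2_approx, mul2x2_approx, mul2x2_exact)
errs = [abs(mul4x4(a, b, blocks) - a * b)
        for a in range(16) for b in range(16)]
print(max(errs), sum(errs) / len(errs))
```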
23. Intelligent Energy-Efficient Storage System for Big-Data Applications (Gong, Yifu; January 2020)
Static Random Access Memory (SRAM) is a critical component in mobile video processing systems. Because video data are large, the memory is accessed frequently, which dominates power consumption and limits battery life. A substantial body of research on energy-efficient SRAM design discusses mechanisms of approximate storage, but adaptation to content and viewing environment has not been part of memory design. This dissertation develops optimization methods for the SRAM system, addressing three areas of intelligent energy-efficient storage design. First, SRAM stability is analyzed: the relationships among supply voltage, SRAM transistor sizes, and SRAM failure rate are derived, and the result is applied throughout the later work. Second, intelligent voltage-scaling techniques are detailed, extending conventional voltage scaling with self-correction and sizing techniques. Third, intelligent bit-truncation techniques are developed that take viewing environment and video content characteristics into account in the memory design. The designed SRAMs are compared against the published literature and shown to improve on it.
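Bit truncation as used here can be sketched in a few lines; the fixed truncation width below is a simplifying assumption, whereas the dissertation selects it adaptively from content and viewing-environment characteristics.

```python
# Sketch: bit-truncation for approximate video storage.
# Truncated low-order bits are simply never written to (or read
# from) SRAM, saving access energy at a small quality cost.
import numpy as np

def truncate_store(frame, n_truncate):
    """Zero the n_truncate least-significant bits of 8-bit pixels,
    emulating bits that are not stored."""
    mask = 0xFF & ~((1 << n_truncate) - 1)
    return frame & mask

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stored = truncate_store(frame, n_truncate=2)
mse = np.mean((frame.astype(float) - stored.astype(float)) ** 2)
psnr = 10 * np.log10(255**2 / mse)
print(f"PSNR after 2-bit truncation: {psnr:.1f} dB")
```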
24. Von-Neumann and Beyond: Memristor Architectures (Naous, Rawan)
An extensive reliance on technology, an abundance of data, and increasing processing requirements have imposed severe challenges on computing and data processing. Moreover, the roadmap for scaling electronic components faces physical and reliability limits that hinder the use of transistors in conventional systems and promote the need for faster, energy-efficient, and compact nano-devices. This work therefore capitalizes on emerging non-volatile memory technologies, particularly the memristor, to steer novel design directives. Beyond their conventional deterministic operation, these devices exhibit temporal variability in their functioning. This inherent stochasticity is treated as an enabler for the field of stochastic electronics: we propose and verify a statistical approach to modelling the behaviour of stochastic memristors. This mode of operation allows for innovative computing designs in the approximate computing and beyond-Von-Neumann domains.
In the context of approximate computing, we propose sacrificing functional accuracy for energy savings on the basis of inherently stochastic electronic components. We introduce a mathematical formulation and probabilistic analysis for Boolean logic operators and incorporate them into arithmetic blocks. Gate- and system-level accuracy results convey the configurability of the approach and the effects that the unreliability of the underlying memristive components has on intermediate and overall outputs. An image compression application reflects the efficiency attained, along with the impact that the chosen relative precision has on the output.
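The probabilistic treatment of logic built from unreliable devices can be sketched as follows; the per-device Bernoulli switching model and the NAND-based full adder are illustrative assumptions, not the dissertation's exact formulation.

```python
# Sketch: Boolean logic realized with stochastic devices.
# Each device switches correctly with probability p_switch, so a
# gate output is correct only with some probability < 1; composing
# gates propagates error probabilities through a circuit.
import random

def stochastic_nand(a, b, p_switch):
    correct = not (a and b)
    # With probability 1 - p_switch the device fails to switch
    # and the output is flipped.
    return correct if random.random() < p_switch else not correct

def full_adder_sum_error_rate(p_switch, trials=100_000):
    """Monte Carlo estimate of the error rate of the SUM output of
    a NAND-based full adder built from stochastic NAND gates."""
    def xor(x, y):
        # XOR from four NANDs: NAND(NAND(x,n1), NAND(y,n1)).
        n1 = stochastic_nand(x, y, p_switch)
        return stochastic_nand(stochastic_nand(x, n1, p_switch),
                               stochastic_nand(y, n1, p_switch),
                               p_switch)
    errors = 0
    for _ in range(trials):
        a, b, cin = (random.getrandbits(1) for _ in range(3))
        s = xor(xor(a, b), cin)
        errors += (s != (a ^ b ^ cin))
    return errors / trials

for p in (1.0, 0.99, 0.95):
    print(p, full_adder_sum_error_rate(p))
```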
In contrast, in neuromorphic structures the memristors' variability is mapped onto abstract models of the noisy, unreliable components of the brain. In one approach, we propose using the stochastic memristor as an inherent source of variability in the neuron, allowing it to produce spikes stochastically. Alternatively, stochastic memristors are mapped onto bi-stable stochastic synapses, with the intrinsic variation modelled as added noise that aids the underlying computational tasks. Both aspects are tested within a probabilistic neural network on a handwritten-digit recognition application (MNIST). Synaptic adaptation and neuronal selectivity are achieved with both approaches, demonstrating the savings, interchangeability, robustness, and relaxed design space of brain-inspired unconventional computing systems.
25. Exploring Per-Input Filter Selection and Approximation Techniques for Deep Neural Networks (Gaur, Yamini; 21 June 2019)
We propose a dynamic, input-dependent filter approximation and selection technique to improve the computational efficiency of deep neural networks. The approximation converts the 32-bit floating-point filter weights into lower-precision values by reducing the number of bits used to represent them. To quantify the per-input error between the trained full-precision filter weights and the approximated weights, we use a metric called Multiplication Error (ME). For convolutional layers, ME is calculated by subtracting the approximated filter weights from the original weights, convolving the difference with the input, and taking the grand sum of the resulting matrix; for fully connected layers, the difference is instead matrix-multiplied with the input before taking the grand sum. ME identifies approximated filters in a layer that would degrade inference accuracy; to maintain the accuracy of the network, those filters' weights are replaced with the original full-precision weights.
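The ME computation for a convolutional layer can be sketched in NumPy; the layer shapes, the truncation helper, and the selection threshold below are illustrative assumptions, not the thesis's exact configuration.

```python
# Sketch: Multiplication Error (ME) for convolutional filters.
# ME = grand sum of (original - approximated filter) convolved
# with the input; filters whose |ME| is large for this input are
# restored to full precision.
import numpy as np
from scipy.signal import convolve2d

def truncate_to_bits(w, n_bits, w_max):
    """Quantize weights to n_bits by dropping precision
    (hypothetical uniform scheme, for illustration)."""
    scale = (2 ** (n_bits - 1) - 1) / w_max
    return np.round(w * scale) / scale

def multiplication_error(w_full, w_approx, x):
    diff = w_full - w_approx
    return convolve2d(x, diff, mode="valid").sum()

rng = np.random.default_rng(0)
x = rng.random((28, 28))                       # input patch
filters = [rng.normal(0, 0.3, (5, 5)) for _ in range(8)]

selected = []
for w in filters:
    w_q = truncate_to_bits(w, n_bits=3, w_max=np.abs(w).max())
    me = multiplication_error(w, w_q, x)
    # Keep the approximation only if its per-input ME is small;
    # the threshold is a hypothetical tuning parameter.
    selected.append(w_q if abs(me) < 1.0 else w)
```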
Prior work has primarily focused on input-independent (static) replacement of filters with low-precision weights, in which all filter weights in the network are approximated. This decreases inference accuracy, and the decrease grows with more aggressive approximation. Our proposed technique achieves higher inference accuracy by not approximating filters that generate high ME. Using the proposed per-input filter selection, LeNet achieves 95.6% accuracy on the MNIST dataset when truncating to 3 bits, a 3.34% drop from the original accuracy of 98.9%; with static filter approximation, by contrast, LeNet achieves 90.5%, an 8.5% drop from the original accuracy.
The aim of our research is to use low-precision weights in deep learning algorithms to achieve high classification accuracy with less computational overhead. We explore various filter approximation techniques and implement a per-input filter selection and approximation technique that selects the filters to approximate at run time.

General audience abstract: Deep neural networks, much like the human brain, can learn important information about the data provided to them and classify a new input based on the labels of the provided dataset. Deep learning is heavily employed in computer vision, image and video processing, and voice detection, but the computational overhead of DNN classification prohibits its use in smaller devices. This research aims to improve network efficiency by replacing 32-bit weights in neural networks with lower-precision weights in an input-dependent manner. Trained neural networks are numerically robust: different layers develop tolerance to minor variations in network parameters, so differences induced by low-precision calculations usually fall well within the network's tolerance. For aggressive approximation techniques such as truncating to 3 or 2 bits, however, inference accuracy drops severely. We propose a dynamic technique that, at run time, identifies the approximated filters resulting in low inference accuracy for a given input and replaces those filters with the original filters. The technique is tested for image classification with convolutional neural networks (a 4-layer CNN, LeNet-5, and AlexNet) on the MNIST and CIFAR-10 datasets. (Master of Science)
26. Enabling Approximate Storage through Lossy Media Data Compression (Worek, Brian David; 08 February 2019)
Memory capacity, bandwidth, and energy all continue to present hurdles in the quest for efficient, high-speed computing. Recognition, mining, and synthesis (RMS) applications in particular are limited by the efficiency of the memory subsystem due to their large datasets and frequent memory accesses. RMS applications, such as those in machine learning, deliver intelligent analysis and decision making through their ability to learn, identify, and create complex data models. To meet the growing demand for deploying RMS applications on battery-constrained devices, such as mobile and Internet-of-Things hardware, designers need novel techniques to improve system energy consumption and performance. Fortunately, many RMS applications demonstrate inherent error resilience: they produce acceptable outputs even when the data used in computation contain errors. Approximate storage techniques across circuits, architectures, and algorithms exploit this property to improve the energy consumption and performance of the memory subsystem through quality-energy scaling. This thesis reviews state-of-the-art techniques in approximate storage and presents our own contribution, which uses lossy compression to reduce the storage cost of media data.

General audience abstract: Computer memory systems present challenges in the quest for more powerful computing. Applications that learn from large datasets are limited by how frequently they must access memory. To meet the growing demand for intelligent applications in smartphones and other Internet-connected devices, designers need techniques that improve energy consumption and performance. Many intelligent applications are naturally resistant to errors: they produce acceptable outputs even when inputs or computations contain errors. Approximate storage techniques across computer hardware and software exploit this resistance by purposefully reducing data precision. This thesis reviews state-of-the-art approximate storage techniques and presents our own contribution, which uses lossy compression to reduce the storage cost of media data. (M.S. thesis)
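As a toy illustration of quality-energy scaling through lossy storage (the thesis's actual compressor is not reproduced here), media data can be stored with coarsely quantized transform coefficients:

```python
# Toy sketch: lossy storage of media data by quantizing 2-D DCT
# coefficients, trading reconstruction quality for fewer stored
# bits. Illustrative only; not the thesis's compressor.
import numpy as np
from scipy.fft import dctn, idctn

def lossy_store(block, q_step):
    coeffs = dctn(block, norm="ortho")
    return np.round(coeffs / q_step).astype(np.int16)  # what we "store"

def lossy_load(stored, q_step):
    return idctn(stored * q_step, norm="ortho")

rng = np.random.default_rng(1)
block = rng.random((8, 8)) * 255
stored = lossy_store(block, q_step=16.0)
recon = lossy_load(stored, q_step=16.0)
mse = np.mean((block - recon) ** 2)
print(f"PSNR: {10 * np.log10(255**2 / mse):.1f} dB")
```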
27. Adding Threshold Concepts to the Description Logic EL (Fernández Gil, Oliver; 14 June 2016)
We introduce a family of logics extending the lightweight Description Logic EL that allow us to define concepts in an approximate way. The main idea is to use a graded membership function m which, for each individual and concept, yields a number in the interval [0,1] expressing the degree to which the individual belongs to the concept. Threshold concepts C~t, for ~ in {<, <=, >, >=}, then collect all the individuals that belong to C with degree ~ t. We study this framework in two particular directions. First, we define a specific graded membership function deg and investigate the complexity of reasoning in the resulting Description Logic tEL(deg) w.r.t. both the empty terminology and acyclic TBoxes. Second, we show how to turn concept similarity measures into membership degree functions; under certain conditions such functions are well defined and therefore induce a wide range of threshold logics. Finally, we present preliminary results on the computational complexity landscape of reasoning in this large family of threshold logics.
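The threshold construction can be written out as follows, a direct transcription of the description above into standard model-theoretic notation:

```latex
% Graded membership: for an interpretation \mathcal{I}, an
% individual d \in \Delta^{\mathcal{I}}, and an EL concept C,
%   m^{\mathcal{I}}(d, C) \in [0,1].
% Threshold concepts, for \bowtie \in \{<, \leq, >, \geq\} and
% t \in [0,1], are then interpreted as
\[
(C_{\bowtie t})^{\mathcal{I}} \;=\;
\{\, d \in \Delta^{\mathcal{I}} \mid m^{\mathcal{I}}(d, C) \bowtie t \,\}.
\]
```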
28. Population Divergence and Demographic Expansion of Dendrocolaptes platyrostris (Aves: Dendrocolaptidae) in the Late Quaternary (Campos Junior, Ricardo Fernandes; 29 October 2012)
Dendrocolaptes platyrostris is a forest bird associated with the gallery forests of the open vegetation corridor of South America (D. p. intermedius) and with the Atlantic Forest (D. p. platyrostris). A previous study showed population genetic structure associated with the subspecies, two clades within the Atlantic Forest, and evidence of population expansion in the south, which is compatible with the Carnaval-Moritz model. The present study evaluated the genetic diversity of two nuclear markers and one mitochondrial marker of this species using approximate Bayesian computation (ABC), in order to compare the earlier results with those obtained from a multi-locus strategy that accounts for coalescent variation. The results suggest a polytomy among populations that split during the last interglacial period and expanded after the last glacial maximum. This is consistent with the Carnaval-Moritz model, which suggests that populations underwent demographic changes due to the climatic shifts of these periods. Future studies including other markers, and models that allow stability in some populations and expansion in others, are needed to evaluate the present result.
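The ABC machinery referred to here follows the standard rejection scheme, sketched below with a heavily simplified expansion model; the toy simulator, summary statistic, prior, and tolerance are all illustrative assumptions.

```python
# Sketch: rejection-ABC for a demographic-expansion parameter.
# Draw parameters from the prior, simulate data, and keep the
# draws whose summary statistics fall close to the observed ones.
# A toy model stands in for the coalescent simulator.
import numpy as np

rng = np.random.default_rng(42)

def simulate_diversity(expansion_factor, n_loci=3):
    """Toy stand-in for a multi-locus coalescent simulation:
    expected per-locus diversity shrinks with stronger expansion."""
    base = 0.01 / expansion_factor
    return rng.gamma(shape=2.0, scale=base / 2.0, size=n_loci)

observed = np.array([0.004, 0.005, 0.0045])  # per-locus diversity

accepted = []
for _ in range(100_000):
    theta = rng.uniform(0.5, 10.0)            # prior on expansion factor
    sim = simulate_diversity(theta)
    if np.linalg.norm(sim - observed) < 0.003:  # tolerance epsilon
        accepted.append(theta)

print(f"posterior mean ~ {np.mean(accepted):.2f} "
      f"({len(accepted)} accepted draws)")
```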
29. Applications of Approximate Bayesian Computation in Quality Control (Campos, Thiago Feitosa; 11 June 2015)
In this work we present two problems from the context of statistical quality control, analyzed from the Bayesian perspective: on-line quality monitoring and environmental stress screening. We present the difficulties that the associated Bayesian models pose in application, and we reanalyze the problems with the aid of ABC, which yields results faster and thereby enables richer analyses and the prediction of new observations.