81 |
Dual sequential approximation methods in structural optimisation. Wood, Derren Wesley 03 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2012 / ENGLISH ABSTRACT: This dissertation addresses a number of topics that arise from the use of a dual method of sequential
approximate optimisation (SAO) to solve structural optimisation problems. Said approach is
widely used because it allows relatively large problems to be solved efficiently by minimising the
number of expensive structural analyses required. Some extensions to traditional implementations
are suggested that can serve to increase the efficacy of such algorithms. The work presented herein
is concerned primarily with three topics: the use of nonconvex functions in the definition of SAO
subproblems, the global convergence of the method, and the application of the dual SAO approach
to large-scale problems. Additionally, a chapter is presented that focuses on the interpretation of
Sigmund’s mesh independence sensitivity filter in topology optimisation.
It is standard practice to formulate the approximate subproblems as strictly convex, since strict
convexity is a sufficient condition to ensure that the solution of the dual problem corresponds
with the unique stationary point of the primal. The incorporation of nonconvex functions in the
definition of the subproblems is rarely attempted. However, many problems exhibit nonconvex
behaviour that is easily represented by simple nonconvex functions. It is demonstrated herein that,
under certain conditions, such functions can be fruitfully incorporated into the definition of the
approximate subproblems without destroying the correspondence or uniqueness of the primal and
dual solutions.
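As a minimal illustration of why strict convexity makes the dual approach work, the following sketch (the problem data are invented for illustration and are not taken from the dissertation) maximises the concave dual of a small separable quadratic subproblem in closed form and recovers the unique primal minimiser:

```python
# Sketch (assumed problem data): dual solution of a separable, strictly
# convex SAO-style subproblem
#   minimise  sum_i (x_i - a_i)^2   subject to  sum_i x_i <= b.
# For a fixed dual multiplier lam >= 0 the inner minimisation separates,
#   x_i(lam) = a_i - lam / 2,
# and the dual function gamma(lam) = lam*(sum(a) - b) - n*lam^2/4 is
# concave, so maximising it over lam >= 0 recovers the primal solution.

def solve_dual(a, b):
    n = len(a)
    A = sum(a)
    # Stationary point of the concave dual, clipped to lam >= 0.
    lam = max(0.0, 2.0 * (A - b) / n)
    x = [ai - lam / 2.0 for ai in a]
    return x, lam

x, lam = solve_dual([3.0, 1.0, 2.0], 3.0)
print(x, lam)   # the constraint sum(x) <= 3 is active at this solution
```

Because the subproblem is strictly convex and separable, the dual here is concave and one-dimensional, and its maximiser reproduces the unique primal solution exactly; this correspondence is what nonconvex subproblem functions can destroy unless the conditions studied in the dissertation hold.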
Global convergence of dual SAO algorithms is examined within the context of the CCSA method,
which relies on the use and manipulation of conservative convex and separable approximations.
This method currently requires that a given problem and each of its subproblems be relaxed to
ensure that the sequence of iterates that is produced remains feasible. A novel method, called the
bounded dual, is presented as an alternative to relaxation. Infeasibility is catered for in the solution
of the dual, and no relaxation-like modification is required. It is shown that when infeasibility is
encountered, maximising the dual subproblem is equivalent to minimising a penalised linear combination
of its constraint infeasibilities. Upon iteration, a restorative series of iterates is produced
that gains feasibility, after which convergence to a feasible local minimum is assured.
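The conservatism safeguard that CCSA-type methods rely on can be sketched in one dimension. This is a hedged toy example (the objective and parameters are invented, and it shows only the conservatism test, not the bounded dual proposed above): an approximation is accepted only if it overestimates the true objective at the candidate point, otherwise its curvature parameter is increased and the subproblem is re-solved.

```python
# A minimal 1-D sketch of a CCSA-style conservatism test (illustrative,
# not the dissertation's algorithm).

def f(x):
    return x**4

def df(x):
    return 4 * x**3

def ccsa_step(xk, rho0=0.1, growth=2.0, max_inner=50):
    rho = rho0
    for _ in range(max_inner):
        # Convex approximation: f(xk) + f'(xk)(x - xk) + rho (x - xk)^2,
        # minimised in closed form at x = xk - f'(xk) / (2 rho).
        x = xk - df(xk) / (2.0 * rho)
        approx = f(xk) + df(xk) * (x - xk) + rho * (x - xk) ** 2
        if approx >= f(x):          # conservative: accept the candidate
            return x, rho
        rho *= growth               # not conservative: increase curvature
    return x, rho

x, rho = ccsa_step(1.0)
# The accepted step decreases the true objective from f(1.0) = 1.0.
assert f(x) < f(1.0)
```

Each rejected candidate doubles `rho`, so the approximation eventually dominates the true function at its own minimiser and the step is accepted.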
Two instances of the dual SAO solution of large-scale problems are addressed herein. The first
is a discrete problem regarding the selection of the point-wise optimal fibre orientation in the
two-dimensional minimum compliance design for fibre-reinforced composite plates. It is solved
by means of the discrete dual approach, and the formulation employed gives rise to a partially
separable dual problem. The second instance involves the solution of planar material distribution
problems subject to local stress constraints. These are solved in a continuous sense using a sparse
solver. The complexity and dimensionality of the dual is controlled by employing a constraint
selection strategy in tandem with a mechanism by which inconsequential elements of the Jacobian of the active constraints are omitted. In this way, both the size of the dual and the amount of
information that needs to be stored in order to define the dual are reduced.
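The constraint-selection and Jacobian-sparsification idea from the stress-constrained problems can be sketched as follows. The thresholds and data structures are assumptions chosen for illustration, not the dissertation's implementation:

```python
# Illustrative sketch: only near-active constraints enter the dual
# subproblem, and negligible Jacobian entries of those constraints are
# dropped, reducing both the size of the dual and the stored information.

def select_constraints(g, jac, g_tol=-0.1, j_tol=1e-6):
    """g[j]: constraint values (feasible when g[j] <= 0);
    jac[j][i]: derivative of constraint j w.r.t. variable i."""
    active = [j for j, gj in enumerate(g) if gj > g_tol]
    sparse_jac = {
        j: {i: d for i, d in enumerate(jac[j]) if abs(d) > j_tol}
        for j in active
    }
    return active, sparse_jac

g = [-0.5, -0.01, 0.2]                      # only the last two are near-active
jac = [[1.0, 0.0], [1e-9, 2.0], [3.0, 1e-8]]
active, sj = select_constraints(g, jac)
print(active, sj)
```

In a real solver the retained sparse rows would feed a sparse dual solver; here the dictionaries simply make the reduced storage explicit.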
82 |
Accurate and Robust Preconditioning Techniques for Solving General Sparse Linear Systems. Lee, Eun-Joo 01 January 2008 (has links)
Please download this dissertation to see the abstract.
83 |
Retrospective inference of gene networks from temporal genomic data. Rau, Andrea 01 June 2010 (has links) (PDF)
Gene regulatory networks represent a set of genes that interact, directly or indirectly, with one another as well as with other cellular products. Because these interactions regulate the rate of gene transcription and the subsequent production of functional proteins, identifying these networks can lead to a better understanding of complex biological systems. Technologies such as DNA microarrays and ultra-high-throughput sequencing (RNA sequencing) allow the simultaneous study of the expression of thousands of genes in an organism, that is, the transcriptome. By measuring gene expression over time, it is possible to infer ("reverse-engineer") the structure of the biological networks at play during a particular cellular process. However, these networks are in general very complicated and difficult to elucidate, especially given the large number of genes considered and the small number of biological replicates available in most experimental data.
In this work, we propose two methods for the identification of gene regulatory networks that make use of dynamic Bayesian networks and linear models. In the first method, we develop an algorithm in a Bayesian framework for linear state-space models. The hyperparameters are estimated with an empirical Bayes procedure and an adaptation of the expectation-maximisation algorithm. In the second approach, we develop an extension of an Approximate Bayesian Computation method, based on a Markov chain Monte Carlo procedure, for the inference of biological networks. This method samples approximate posterior distributions of the gene-to-gene interactions and provides information on the identifiability and robustness of sub-network structures. The performance of the two approaches is studied via a set of simulations, and both are applied to transcriptomic data.
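The reverse-engineering idea can be illustrated with a deliberately simple stand-in (a toy linear model fitted by least squares, not the thesis's Bayesian state-space or ABC-MCMC machinery): simulate a two-gene network x_{t+1} = A x_t + noise and recover the interaction matrix A from the time course.

```python
# Toy illustration (invented network, not from the thesis): recover the
# interaction matrix of a linear gene network from simulated time-course
# data by ordinary least squares on pairs (x_t, x_{t+1}).
import random

random.seed(0)
A = [[0.9, 0.0], [0.5, 0.8]]          # gene 1 activates gene 2

def step(x):
    return [sum(A[i][j] * x[j] for j in range(2)) + random.gauss(0, 0.01)
            for i in range(2)]

xs = [[1.0, 0.0]]
for _ in range(200):
    xs.append(step(xs[-1]))

def estimate_row(i):
    # Solve min over (a, b) of sum_t (x_{t+1,i} - a x_{t,0} - b x_{t,1})^2
    # via the 2x2 normal equations.
    sxx = sxy = syy = sxz = syz = 0.0
    for t in range(len(xs) - 1):
        u, v = xs[t]
        z = xs[t + 1][i]
        sxx += u * u; sxy += u * v; syy += v * v
        sxz += u * z; syz += v * z
    det = sxx * syy - sxy * sxy
    return ((syy * sxz - sxy * syz) / det, (sxx * syz - sxy * sxz) / det)

A_hat = [estimate_row(0), estimate_row(1)]
```

With many genes and few replicates this naive regression breaks down, which is exactly the regime that motivates the Bayesian priors and approximate posteriors described above.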
84 |
Solution to boundary-contact problems of elasticity in mathematical models of the printing-plate contact system for flexographic printing. Kotik, Nikolai January 2007
Boundary-contact problems (BCPs) are studied within the frames of classical mathematical theory of elasticity and plasticity elaborated by Landau, Kupradze, Timoshenko, Goodier, Fichera and many others on the basis of analysis of two- and three-dimensional boundary value problems for linear partial differential equations. Great attention is traditionally paid both to theoretical investigations using variational methods and boundary singular integral equations (Muskhelishvili) and to the construction of solutions in a form that admits efficient numerical evaluation (Kupradze). A special family of BCPs considered by Shtaerman, Vorovich, Alblas, Nowell, and others arises within the frames of models of squeezing thin multilayer elastic sheets. We show that mathematical models based on the analysis of BCPs can also be applied to modeling of the cliché-surface printing contacts and paper surface compressibility in flexographic printing.
The main result of this work is the formulation and complete investigation of BCPs in layered structures, which includes both the theoretical part (statement of the problems, solvability and uniqueness) and the applied part (approximate and numerical solutions, codes, simulation).
We elaborate a mathematical model of squeezing a thin elastic sheet placed on a stiff base without friction by weak loads through several openings on one of its boundary surfaces. We formulate and consider the corresponding BCPs in two- and three-dimensional bands, prove the existence and uniqueness of solutions, and investigate their smoothness, including the behavior at infinity and in the vicinity of critical points. The BCP in a two-dimensional band is reduced to a Fredholm integral equation (IE) with a logarithmic singularity of the kernel. The theory of logarithmic IEs developed in the study includes the analysis of solvability and the development of solution techniques when the set of integration consists of several intervals. The IE associated with the BCP is solved by three methods based on the use of Fourier-Chebyshev series, matrix-algebraic determination of the entries in the resulting infinite system matrix, and semi-inversion. An asymptotic theory for the BCP is developed, and the solutions are obtained as asymptotic series in powers of the characteristic small parameter.
We propose and justify a technique for the solution of BCPs and boundary value problems with boundary conditions of mixed type, called the approximate decomposition method (ADM). The main idea of ADM is to simplify general BCPs by reducing them to a chain of auxiliary problems for the 'shifted' Laplacian in long rectangles or parallelepipeds, and then to a sequence of iterative problems such that each of them can be solved (explicitly) by the Fourier method. The solution to the initial BCP is then obtained as a limit using a contraction operator, which in particular constitutes an independent proof of the BCP's unique solvability.
We elaborate a numerical method and algorithms based on the approximate decomposition, develop computer codes, and perform comprehensive numerical analysis of the BCPs, including simulation for problems of practical interest. A variety of computational results are presented and discussed, which form the basis for further applications to the modeling and simulation of printing-plate contact systems and other structures of flexographic printing. A comparison with a finite-element solution is performed.
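The Nyström-type discretisation behind such integral-equation solvers can be sketched on a simplified problem. Note the substitution: the thesis treats kernels with a logarithmic singularity, which need special quadrature such as the Fourier-Chebyshev expansions mentioned above; here a smooth separable kernel is used so plain trapezoidal quadrature suffices, and the manufactured right-hand side makes the exact solution u(x) = 1.

```python
# Minimal Nystrom sketch (illustrative only) for the second-kind equation
#   u(x) + int_0^1 x*t*u(t) dt = 1 + x/2   on [0, 1],
# whose exact solution is u(x) = 1. Discretise with the trapezoidal rule
# and solve the resulting dense linear system by Gaussian elimination.
n = 41
h = 1.0 / (n - 1)
xs = [i * h for i in range(n)]
w = [h * (0.5 if i in (0, n - 1) else 1.0) for i in range(n)]  # trapezoid

# Assemble (I + K W) u = f.
Amat = [[(1.0 if i == j else 0.0) + xs[i] * xs[j] * w[j] for j in range(n)]
        for i in range(n)]
f = [1.0 + x / 2.0 for x in xs]

# Naive elimination (the matrix is strictly diagonally dominant, so no
# pivoting is needed here), then back-substitution.
for col in range(n):
    piv = Amat[col][col]
    for row in range(col + 1, n):
        m = Amat[row][col] / piv
        for k in range(col, n):
            Amat[row][k] -= m * Amat[col][k]
        f[row] -= m * f[col]
u = [0.0] * n
for row in range(n - 1, -1, -1):
    s = sum(Amat[row][k] * u[k] for k in range(row + 1, n))
    u[row] = (f[row] - s) / Amat[row][row]
```

Because the trapezoidal rule is exact for linear integrands, the discrete solution reproduces u = 1 to rounding error; a logarithmic kernel would instead require the singularity-adapted quadratures developed in the thesis.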
85 |
Population genetic patterns in continuous environments in relation to conservation management. Wennerström, Lovisa January 2016
Genetic variation is a prerequisite for the viability and evolution of species. Information on population genetic patterns on spatial and temporal scales is therefore important for effective management and for protection of biodiversity. However, incorporation of genetics into management has been difficult, even though the need has been stressed for decades. In this thesis, population genetic patterns in continuous environments are described, compared among species, and related to conservation management. The model systems are moose (Alces alces) in Sweden and multiple species in the Baltic Sea, with particular focus on the Northern pike (Esox lucius). The spatial scope of the studies is large, and much focus is dedicated towards comprehensive sampling over large geographic areas. The moose population in Sweden is shown to be divided into two major subpopulations, a northern and a southern one. Both subpopulations show genetic signals of major population bottlenecks, which coincide with known population reductions due to high hunting pressure (Paper I). The Northern pike in the Baltic Sea shows relatively weak, but temporally stable, population genetic structure. An isolation-by-distance pattern suggests that gene flow primarily takes place among neighboring populations, either over the shortest waterway distance or along the mainland coast, with island populations acting as stepping stones (Paper III). In a comparative study of seven Baltic Sea species, no shared genetic patterns were found, either in terms of genetic divergence among or genetic diversity within geographic regions. These results complicate the incorporation of genetic data into management, because they suggest that no generalization can be made among species in the Baltic Sea and that species-specific management is needed (Paper II). Over the last 50 years, 61 species in the Baltic Sea have been studied with respect to spatial genetic patterns.
For over 20 of these species, information of direct relevance for management is available. Relevant information is synthesized into management recommendations (Paper IV). This thesis provides vital information on spatial and temporal genetic structure for a number of ecologically and socio-economically important species. It shows that such information is important to consider species by species, and that both local and metapopulation approaches are needed to effectively manage genetic diversity in e.g. moose and pike. Further, it identifies for which organisms in the Baltic Sea genetic information exists, how it can be used, and where important information is lacking. In order to successfully make use of genetic data in management, effective communication channels between academia and policy-makers are needed.
At the time of the doctoral defense, the following papers were unpublished and had a status as follows: Paper 3: Manuscript. Paper 4: Manuscript.
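An isolation-by-distance signal of the kind described is typically quantified by correlating pairwise genetic and geographic distances across populations. The sketch below uses invented distances for four hypothetical populations and a plain Pearson correlation; the thesis's analyses are more involved, and in practice significance would be assessed with a Mantel permutation test.

```python
# Hedged illustration: correlate pairwise geographic and genetic
# distances (upper-triangle entries for four hypothetical populations).
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

geo = [10, 40, 80, 30, 70, 45]                 # e.g. waterway distance, km
gen = [0.01, 0.05, 0.09, 0.03, 0.08, 0.05]     # e.g. an F_ST-like measure

r = pearson(geo, gen)
# A strongly positive r is consistent with isolation by distance.
```

Under isolation by distance, genetic distance grows with geographic distance, so r is close to 1 for these illustrative data.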
86 |
Implementing incomplete inverse decomposition on graphical processing units. Dědeček, Jan January 2013
The goal of this Thesis was to evaluate the possibility of solving systems of linear algebraic equations with the help of graphical processing units (GPUs). While such solvers for generally dense systems seem to be more or less a part of standard production libraries, the Thesis concentrates on the low-level parallelization for sparse systems, which still presents a challenge. In particular, the Thesis considers a specific algorithm of approximate inverse decomposition of symmetric and positive definite systems combined with the conjugate gradient method. An important part of this work is an innovative parallel implementation. The presented experimental results for systems of various sizes and sparsity structures point out that the approach is rather promising and should be further developed. Summarizing our results, efficient preconditioning of sparse systems by approximate inverses on GPUs seems to be worth consideration.
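The attraction of approximate-inverse preconditioning is that applying the preconditioner is a matrix-vector product, which parallelises well. The sketch below shows preconditioned conjugate gradients with the crudest possible approximate inverse (the inverse diagonal); the thesis's factored sparse approximate inverse is more elaborate, and no GPU parallelism is shown here.

```python
# Minimal sketch: preconditioned conjugate gradients (PCG) where the
# preconditioner application is a simple product with an approximate
# inverse -- here just 1/diag(A), standing in for a richer sparse
# approximate inverse.

def pcg(A, b, apply_Minv, tol=1e-10, maxit=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = apply_Minv(r)
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:
            break
        z = apply_Minv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]   # SPD test matrix
b = [1.0, 2.0, 3.0]
x = pcg(A, b, lambda r: [ri / A[i][i] for i, ri in enumerate(r)])
```

On a GPU, both the matrix-vector products and the preconditioner application map onto data-parallel kernels, which is the direction the thesis pursues.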
87 |
Exploration of Energy Efficient Hardware and Algorithms for Deep Learning. Syed Sarwar (6634835) 14 May 2019
Deep Neural Networks (DNNs) have emerged as the state-of-the-art technique in a wide range of machine learning tasks for analytics and computer vision in the next generation of embedded (mobile, IoT, wearable) devices. Despite their success, they suffer from high energy requirements both in inference and training. In recent years, the inherent error resiliency of DNNs has been exploited by introducing approximations at either the algorithmic or the hardware level (individually) to obtain energy savings while incurring tolerable accuracy degradation. We perform a comprehensive analysis to determine the effectiveness of cross-layer approximations for the energy-efficient realization of large-scale DNNs. Our experiments on recognition benchmarks show that cross-layer approximation provides substantial improvements in energy efficiency for different accuracy/quality requirements. Furthermore, we propose a synergistic framework for combining the approximation techniques.
To reduce the training complexity of Deep Convolutional Neural Networks (DCNNs), we replace certain weight kernels of convolutional layers with Gabor filters. The convolutional layers use the Gabor filters as fixed weight kernels, which extract intrinsic features, alongside regular trainable weight kernels. This combination creates a balanced system that gives better training performance in terms of energy and time, compared to a standalone deep CNN (without any Gabor kernels), in exchange for tolerable accuracy degradation. We also explore an efficient training methodology for incrementally growing a DCNN to allow new classes to be learned while sharing part of the base network. Our approach is an end-to-end learning framework in which we focus on reducing the incremental training complexity while achieving accuracy close to the upper bound without using any of the old training samples. We have also explored spiking neural networks for energy efficiency. Training deep spiking neural networks from direct spike inputs is difficult, since their temporal dynamics are not well suited to the standard supervision-based training algorithms used to train DNNs. We propose a spike-based backpropagation training methodology for state-of-the-art deep Spiking Neural Network (SNN) architectures. This methodology enables real-time training in deep SNNs while achieving comparable inference accuracies on standard image recognition tasks.
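The fixed-Gabor-kernel idea can be sketched as follows. This is a hedged illustration only (the kernel size, Gabor parameters, and toy image are assumptions, and the dissertation combines such fixed kernels with trainable ones inside a full DCNN training loop): a Gabor kernel is generated analytically rather than learned, then applied as an ordinary convolution filter.

```python
# Sketch: a fixed (non-trainable) Gabor kernel used as a convolution
# filter, standing in for a learned weight kernel.
import math

def gabor_kernel(size=5, theta=0.0, sigma=1.5, lam=3.0, gamma=0.5):
    half = size // 2
    k = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            row.append(math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                / (2 * sigma * sigma))
                       * math.cos(2 * math.pi * xr / lam))
        k.append(row)
    return k

def conv2d_valid(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        out.append([sum(k[u][v] * img[i + u][j + v]
                        for u in range(kh) for v in range(kw))
                    for j in range(len(img[0]) - kw + 1)])
    return out

kernel = gabor_kernel(theta=0.0)     # fixed, never updated by training
image = [[1.0 if c >= 4 else 0.0 for c in range(8)] for _ in range(8)]
response = conv2d_valid(image, kernel)   # 4x4 feature map
```

Because such kernels are computed, not learned, their gradients never need to be evaluated or stored, which is the source of the training energy and time savings claimed above.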
88 |
Probabilistic machine learning for circular statistics: models and inference using the multivariate Generalised von Mises distribution. Wu Navarro, Alexandre Khae January 2018
Probabilistic machine learning and circular statistics (the branch of statistics concerned with data as angles and directions) are two research communities that have grown mostly in isolation from one another. On the one hand, the probabilistic machine learning community has developed powerful frameworks, such as Gaussian Processes, for problems whose data live on Euclidean spaces, but has generally neglected the other topologies studied by circular statistics. On the other hand, the approximate inference frameworks from probabilistic machine learning have only recently started to reach the circular statistics landscape. This thesis intends to bridge the gap between these two fields by contributing models and approximate inference algorithms to both. In particular, we introduce the multivariate Generalised von Mises distribution (mGvM), which allows the use of kernels in circular statistics akin to Gaussian Processes, and an augmented representation. These models account for a vast number of applications comprising both latent variable modelling and regression of circular data. We then propose methods to conduct approximate inference on these models. In particular, we investigate the use of Variational Inference, Expectation Propagation and Markov chain Monte Carlo methods. The variational inference route taken was a mean-field approach that efficiently leverages the mGvM's tractable conditionals and creates a baseline for comparison with other methods. An Expectation Propagation approach is then presented, drawing on the Expectation Consistent framework for Ising models and connecting the approximations used to the augmented model presented. In the final MCMC chapter, efficient Gibbs and Hamiltonian Monte Carlo samplers are derived for the mGvM and the augmented model.
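A small sketch of MCMC on the circle gives a feel for the sampling problems involved. This is a hedged, univariate stand-in (a plain Metropolis sampler for a single von Mises variable, far simpler than the mGvM Gibbs and Hamiltonian samplers derived in the thesis): the target density needs only its unnormalised log form, and the samples are summarised by the circular mean.

```python
# Illustrative Metropolis sampler for a univariate von Mises target
# (unnormalised log density kappa * cos(theta - mu)).
import math, random

random.seed(1)

def vm_logpdf_unnorm(theta, mu, kappa):
    return kappa * math.cos(theta - mu)

def metropolis_circle(logp, n=5000, step=0.5):
    theta = 0.0
    out = []
    for _ in range(n):
        # Wrapped-Gaussian proposal; symmetric, so the plain Metropolis
        # acceptance ratio applies.
        prop = (theta + random.gauss(0.0, step)) % (2 * math.pi)
        if math.log(random.random()) < logp(prop) - logp(theta):
            theta = prop
        out.append(theta)
    return out

mu, kappa = 1.0, 4.0
samples = metropolis_circle(lambda t: vm_logpdf_unnorm(t, mu, kappa))

# Summarise on the circle: the circular mean should be close to mu.
c = sum(math.cos(t) for t in samples) / len(samples)
s = sum(math.sin(t) for t in samples) / len(samples)
circ_mean = math.atan2(s, c)
```

Note that the summary statistic must itself respect the topology (the circular mean via atan2, not the arithmetic mean), which is the recurring theme of inference on circular domains.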
89 |
Data clustering techniques for approximate computing. Malfatti, Guilherme Meneguzzi January 2017
Two of the major drivers of increased performance in single-thread applications - increases in operating frequency and the exploitation of instruction-level parallelism - have seen little advance in recent years due to power constraints. In this context, considering the intrinsic imprecision-tolerance of many modern applications (i.e., outputs may present an acceptable level of noise without compromising the result), such as image processing and machine learning, approximate computation becomes a promising approach. This technique is based on computing approximate instead of accurate results, which can increase performance and reduce energy consumption at the cost of quality. In the current state of the art, the most common way of exploiting the technique is through neural networks (more specifically, the Multilayer Perceptron model), due to the ability of these structures to learn arbitrary functions and to approximate them. Such networks are usually implemented in a dedicated neural accelerator. However, this implementation requires a large amount of chip area and usually does not offer enough improvement to justify the additional cost. The goal of this work is to propose a new mechanism for approximate computation, based on approximate reuse of functions and code fragments. This technique automatically groups input and output data by similarity and stores this information in a software-controlled memory. Based on these data, the quantized values can be reused through a lookup in this table, in which the most appropriate output is selected and execution of the original code fragment is replaced. Applying this technique is effective, achieving an average 97.1% reduction in Energy-Delay-Product (EDP) when compared to neural accelerators.
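The reuse mechanism can be sketched in simplified form (the quantisation step and cache layout here are invented for illustration and are much cruder than the similarity grouping the dissertation describes): inputs are quantised into buckets, and the output of an expensive function is cached per bucket, so nearby inputs reuse a previously computed, approximate result.

```python
# Sketch of approximate function reuse via input quantisation and a
# software-controlled lookup table.
import math

table = {}
calls = {"expensive": 0}

def expensive(x):
    calls["expensive"] += 1
    return math.sin(x) * math.exp(-x * x)

def approx_reuse(x, step=0.05):
    key = round(x / step)              # quantised input bucket
    if key not in table:
        table[key] = expensive(key * step)
    return table[key]                  # approximate, reused output

ys = [approx_reuse(0.5 + 0.001 * i) for i in range(50)]
# The 50 nearby inputs fall into only a couple of buckets, so the
# expensive function runs only a few times.
```

The coarser the quantisation step, the more reuse (and energy saving) at the cost of output quality, mirroring the accuracy/energy trade-off the abstract describes.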
90 |
Energy-efficient Memory System Design with Spintronics. Ashish Ranjan (5930180) 03 January 2019
Modern computing platforms, from servers to mobile devices, demand ever-increasing amounts of memory to keep up with the growing amounts of data they process, and to bridge the widening processor-memory gap. A large and growing fraction of chip area and energy is expended in memories, which face challenges with technology scaling due to increased leakage, process variations, and unreliability. On the other hand, data-intensive workloads such as machine learning and data analytics pose increasing demands on memory systems. Consequently, improving the energy-efficiency and performance of memory systems is an important challenge for computing system designers.
Spintronic memories, which offer several desirable characteristics - near-zero leakage, high density, non-volatility and high endurance - are of great interest for designing future memory systems. However, these memories are not drop-in replacements for current memory technologies, viz. Static Random Access Memory (SRAM) and Dynamic Random Access Memory (DRAM). They pose unique challenges such as variable access times, and require higher write latency and write energy. This dissertation explores new approaches to improving the energy efficiency of spintronic memory systems.
The dissertation first explores the design of approximate memories, in which the need to store and access data precisely is foregone in return for improvements in energy efficiency. This is of particular interest, since many emerging workloads exhibit an inherent ability to tolerate approximations to their underlying computations and data while still producing outputs of acceptable quality. The dissertation proposes that approximate spintronic memories can be realized either by reducing the amount of data that is written to/read from them, or by reducing the energy consumed per access. To reduce memory traffic, the dissertation proposes approximate memory compression, wherein a quality-aware memory controller transparently compresses/decompresses data written to or read from memory. For broader applicability, the quality-aware memory controller can be programmed to specify memory regions that can tolerate approximations, and conforms to a specified error constraint for each such region. To reduce the per-access energy, various mechanisms are identified at the circuit and architecture levels that yield substantial energy benefits at the cost of small probabilities of read, write or retention failures. Based on these mechanisms, a quality-configurable Spin Transfer Torque Magnetic RAM (STT-MRAM) array is designed in which read/write operations can be performed at varying levels of accuracy and energy at runtime, depending on the needs of applications. To illustrate the utility of the proposed quality-configurable memory array, it is evaluated as an L2 cache in the context of a general-purpose processor, and as a scratchpad memory for a domain-specific vector processor.
The dissertation also explores the design of caches with Domain Wall Memory (DWM), a more advanced spintronic memory technology that offers unparalleled density arising from a unique tape-like structure. However, this structure also leads to serialized access to the bits in each bit-cell, resulting in increased access latency, thereby degrading overall performance. To mitigate the performance overheads, the dissertation proposes a reconfigurable DWM-based cache architecture that modulates the active bits per tape with minimal overheads, depending on the application's memory access characteristics. The proposed cache is evaluated in a general-purpose processor, and improvements in performance are demonstrated over both CMOS and previously proposed spintronic caches.
In summary, the dissertation suggests directions to improve the energy efficiency of spintronic memories and re-affirms their potential for the design of future memory systems.
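The accuracy/energy knob of a quality-configurable read can be modelled abstractly. The error rates below are invented placeholders, not STT-MRAM measurements, and the model ignores the circuit-level mechanisms entirely; it only shows the behavioural contract: a lower-energy mode raises the probability that each stored bit is read incorrectly.

```python
# Behavioural sketch of a quality-configurable memory read: each mode
# trades read energy against a per-bit error probability.
import random

random.seed(42)

ERROR_RATE = {"precise": 0.0, "low_energy": 0.02}   # assumed per-bit rates

def read_byte(value, mode):
    out = 0
    for bit in range(8):
        b = (value >> bit) & 1
        if random.random() < ERROR_RATE[mode]:
            b ^= 1                  # bit error from the low-energy read
        out |= b << bit
    return out

data = [200] * 1000
exact = [read_byte(v, "precise") for v in data]      # error-free mode
noisy = [read_byte(v, "low_energy") for v in data]   # approximate mode
errors = sum(1 for a, b in zip(data, noisy) if a != b)
```

An application (or the quality-aware controller described above) would place error-tolerant data in the low-energy mode and critical data, such as pointers, in the precise mode.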